\section{Introduction}
\label{sec:intro}
\emph{Gender diversity}, or more often the lack thereof, among participants in
software development activities has been thoroughly studied in recent years. In
particular, the presence of, effects of, and countermeasures for \emph{gender
bias} in Free/Open Source Software (FOSS) have received a lot of attention
over the past decade~\cite{david2008fossdevs, qiu2010kdewomen,
nafus2012patches, kuechler2012genderfoss, vasilescu2014gender,
oneil2016debiansurvey, robles2016womeninfoss, terrell2017gender,
zacchiroli2021gender}. \emph{Geographic diversity}, on the other hand, is the
kind of diversity that stems from participants in a global activity coming
from different world regions and cultures.
Geographic diversity in FOSS has received relatively little attention in scholarly
works. In particular, while seminal survey-based and
point-in-time medium-scale studies of the geographic origins of FOSS
contributors exist~\cite{ghosh2005understanding, david2008fossdevs,
barahona2008geodiversity, takhteyev2010ossgeography, robles2014surveydataset,
wachs2021ossgeography}, large-scale longitudinal studies of the geographic
origin of FOSS contributors are still lacking. Such a quantitative
characterization would be useful to inform decisions related to global
development teams~\cite{herbsleb2007globalsweng} and hiring strategies in the
information technology (IT) market, as well as contribute factual information
to the debates on the economic impact and sociology of FOSS around the world.
\paragraph{Contributions}
With this work we contribute to closing this gap by conducting \textbf{the first
longitudinal study of the geographic origin of contributors to public code
over 50 years.} Specifically, we provide a preliminary answer to the
following research question:
\begin{researchquestion}
From which world regions do authors of publicly available commits come,
and how has this changed over the past 50 years?
\label{rq:geodiversity}
\end{researchquestion}
We use as dataset the \SWH/ archive~\cite{swhipres2017} and analyze from it
2.2 billion\xspace commits archived from 160 million\xspace projects and authored by
43 million\xspace authors during the 1971--2021 time period.
We geolocate developers to
\DATAWorldRegions/ world regions, using as signals email country code top-level domains (ccTLDs) and
author (first/last) names compared with name distributions around the world, and UTC offsets
mined from commit metadata.
We find evidence of the early dominance of North America in open source
software, later joined by Europe. After that period, the geographic diversity
in public code has been constantly increasing.
We also identify relevant historical shifts
related to the end of the UNIX wars and the increase of coding literacy in
Central and South Asia, as well as to broader phenomena like colonialism and
people movement across countries (immigration/emigration).
\paragraph{Data availability}
A replication package for this paper is available from Zenodo at
\url{https://doi.org/10.5281/zenodo.6390355}~\cite{replication-package}.
\section{Related Work}
\label{sec:related}
Both early and recent works~\cite{ghosh2005understanding, david2008fossdevs,
robles2014surveydataset, oneil2016debiansurvey} have characterized the
geography of Free/Open Source Software (FOSS) using \emph{developer surveys},
which provide high-quality answers but are limited in size (2-5\,K developers)
and can be biased by participant sampling.
In 2008 Barahona et al.~\cite{barahona2008geodiversity} conducted a seminal
large-scale (for the time) study on FOSS \emph{geography using mining software
repositories (MSR) techniques}. They analyzed the origin of 1\,M contributors
using the SourceForge user database and mailing list archives over the
1999--2005 period, using as signals information similar to ours: email domains
and UTC offsets.
The studied period (7 years) in~\cite{barahona2008geodiversity} is shorter than
what is studied in the present paper (50 years) and the data sources are
largely different; with that in mind, our results show a slightly larger share of
European v.~North American contributions.
Another empirical work from 2010 by Takhteyev and
Hilts~\cite{takhteyev2010ossgeography} harvested self-declared geographic
locations of GitHub accounts recursively following their connections,
collecting information for $\approx$\,70\,K GitHub users. A very recent
work~\cite{wachs2021ossgeography} by Wachs et al.~has geolocated half a million
GitHub users, who have contributed at least 100 commits each and
self-declare a location on their GitHub profiles. While the study is
point-in-time as of 2021, the authors compare their findings
against~\cite{barahona2008geodiversity, takhteyev2010ossgeography} to
characterize the evolution of FOSS geography over the time snapshots taken by
the three studies.
Compared with previous empirical works, our study is much larger scale---having
analyzed 43 million\xspace authors of 2.2 billion\xspace commits from 160 million\xspace
projects---longitudinal over 50 years of public code contributions rather than
point in time, and also more fine-grained (with year-by-year granularity over
the observed period). Methodologically, our study relies on Version Control
System (VCS) commit data rather than platform-declared location information.
Other works---in particular the work by Daniel~\cite{daniel2013ossdiversity}
and, more recently, Rastogi et al.~\cite{rastogi2016geobias,
rastogi2018geobias, prana2021geogenderdiversity}---have studied geographic
\emph{diversity and bias}, i.e., the extent to which the origin of FOSS
developers affects their collaborative coding activities.
In this work we characterize geographic diversity in public code for the first
time at this scale, both in terms of contributors and observation period. We do
not tackle the bias angle, but provide empirical data and findings that can be
leveraged to that end as future work.
\emph{Global software engineering}~\cite{herbsleb2007globalsweng} is the
sub-field of software engineering that has analyzed the challenges of scaling
developer collaboration globally, including the specific concern of how to deal
with geographic diversity~\cite{holmstrom2006globaldev, fraser2014eastwest}.
Decades later, the present study provides evidence that can be used, in the
specific case of public code and at a very large scale, to verify which
promises of global software engineering have borne fruit.
\section{Methodology}
\label{sec:method}
\newif\ifgrowthfig \growthfigtrue
\ifgrowthfig
\begin{figure}
\includegraphics[width=\columnwidth]{yearly-commits}
\caption{Yearly public commits over time (log scale).
}
\label{fig:growth}
\end{figure}
\fi
\paragraph{Dataset}
We retrieved from \SWH/~\cite{swh-msr2019-dataset} all commits archived until \DATALastCommitDate/.
They amount to \DATACommitsRaw/ commits, unique by SHA1 identifier, harvested from \DATATotalCommitsInSH/ public projects coming from major development forges (GitHub, GitLab, etc.) and package repositories (Debian, PyPI, NPM, etc.).
Commits in the dataset are by \DATAAuthorsRaw/ authors, unique by $\langle$name, email$\rangle$ pairs.
The dataset came as two relational tables, one for commits and one for authors, with the former referencing the latter via a foreign key.
\iflong
Each row in the commit table contains the following fields: commit SHA1 identifier, author and committer timestamps, author and committer identifiers (referencing the author table).
The distinction between commit authors and committers comes from Git, which allows committing a change authored by someone else.
For this study we focused on authors and ignored committers, as the difference between the two is not relevant for our research questions and the number of commits with a committer other than the author is negligible.
\fi
For each entry in the author table we have author full name and email as two separate strings of raw bytes.
We removed implausible or unusable names that: are not decodable as UTF-8 (\DATAAuthorsRmNondecodable/ author names removed), are email addresses instead of names (\DATAAuthorsRmEmail/ ``names''), consist of only blank characters (\DATAAuthorsRmBlank/), contain more than 10\% non-letters (\DATAAuthorsRmNonletter/), are longer than 100 characters (\DATAAuthorsRmToolong/).
After filtering, about \DATAAuthorsPlausibleApprox/ authors (\DATAAuthorsPlausiblePct/ of the initial dataset) remained for further analysis.
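For illustration, these filters can be implemented along the following lines; this is a sketch rather than the code from our replication package, and the choice of not counting blanks towards the 10\% threshold is our own:
\begin{verbatim}
def is_plausible_name(raw: bytes) -> bool:
    """Heuristic filters on author names (sketch)."""
    try:
        name = raw.decode("utf-8")          # drop names not decodable as UTF-8
    except UnicodeDecodeError:
        return False
    if "@" in name or not name.strip():     # email-like or blank-only "names"
        return False
    # >10% non-letters (blanks not counted -- an assumption of this sketch)
    non_letters = sum(not (c.isalpha() or c.isspace()) for c in name)
    if non_letters > 0.10 * len(name):
        return False
    return len(name) <= 100                 # drop overly long strings
\end{verbatim}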
Note that the amount of public code commits (and authors) contained in the
initial dataset grows exponentially over
time~\cite{swh-provenance-emse}\ifgrowthfig, as shown for commits in
\Cref{fig:growth}\else: from $10^4$ commits in 1971, to $10^6$ in 1998, to
almost $10^9$ in 2020\fi. As a consequence the observed trends tend to be more
stable in recent decades than in 40+ year-old ones, as statistics are taken
over exponentially larger populations.
\paragraph{Geolocation}
\begin{figure}
\centering
\includegraphics[clip,trim=6cm 6cm 0 0,width=\linewidth]{subregions-ours}
\caption{The \DATAWorldRegions/ world regions used as geolocation targets.}
\label{fig:worldmap}
\end{figure}
As geolocation targets we use macro world regions derived from the United Nations geoscheme~\cite{un1999geoscheme}.
To avoid domination by large countries (e.g., China or Russia) within macro regions, we merged and split some regions based on geographic proximity and the sharing of preeminent cultural identification features, such as spoken language.
\Cref{fig:worldmap} shows the final list of \DATAWorldRegions/ world regions used as geolocation targets in this study.
Geolocation of commit authors to world regions uses the two complementary techniques introduced in~\cite{icse-seis-2022-gender}, briefly recalled below.
The first one relies on the country code top-level domain (ccTLD) of email addresses extracted from commit metadata, e.g., \texttt{.fr}, \texttt{.ru}, \texttt{.cn}, etc.
We started from the IANA list of Latin character ccTLDs~\cite{wikipedia-cctld} and manually mapped each corresponding territory to a target world region.
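A minimal sketch of this technique follows; the region mapping shown is a tiny illustrative excerpt of the full, manually curated table:
\begin{verbatim}
import re

# Illustrative excerpt; the full ccTLD-to-region table was built manually
# from the IANA list.
CCTLD_TO_REGION = {"fr": "Europe", "ru": "Russia", "cn": "East Asia"}

def region_from_email(email: str):
    """Return the world region hinted at by the email ccTLD, or None."""
    m = re.search(r"\.([a-z]{2})$", email.strip().lower())
    return CCTLD_TO_REGION.get(m.group(1)) if m else None
\end{verbatim}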
The second geolocation technique uses the UTC offset of commit timestamps (e.g., UTC-05:00) and author names to determine the most likely world region of the commit author.
For each UTC offset we determine a list of compatible places (country, state, or dependent territory) in the world that, at the time of that commit, had that UTC offset; commit time is key here, as country UTC offsets vary over time due to timezone changes.
To make this determination we use the IANA time zone database~\cite{tzdata}.
Then we assign to each place a score that captures the likelihood that a given author name is characteristic of it.
To this end we use the Forebears dataset of the frequencies of the most common first and family names which, quoting from~\cite{forebear-names}: {\itshape ``provides the approximate incidence of forenames and surnames produced from a database of \num{4 044 546 938} people (55.5\% of living people in 2014). As of September 2019 it covers \num{27 662 801} forenames and \num{27 206 821} surnames in 236 jurisdictions.''}
As in our dataset authors are full name strings (rather than split by first/family name), we first tokenize names (by blanks and case changes) and then look up individual tokens in both first and family name frequency lists.
For each element found in name lists we multiply the place population\footnotemark{} by the name frequency to obtain a measure that is proportional to the number of persons bearing that name (token) in the specific place.
\footnotetext{To obtain population totals---as the notion of ``place'' is heterogeneous: full countries v.~slices of large countries spanning multiple timezones---we use a mixture of primary sources (e.g., government websites), and non-primary ones (e.g., Wikipedia articles).}
We sum this figure for all elements to obtain a place score, ending up with a list of $\langle$place, score$\rangle$ pairs.
We then partition this list by the world region that a place belongs to and sum the score for all the places in each region to obtain an overall score, corresponding to the likelihood that the commit belongs to a given world region.
Finally, we assign the commit under consideration to the world region with the highest score.
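The whole scoring pipeline can be sketched as follows; all identifiers are hypothetical, and the frequency and population lookups stand in for the Forebears and primary-source data described above:
\begin{verbatim}
import re
from collections import defaultdict

def tokenize_name(full_name):
    """Split a full name on blanks and on case changes (sketch)."""
    return [t for chunk in full_name.split()
              for t in re.findall(r"[A-Z][^A-Z]*|[^A-Z]+", chunk)]

def region_from_offset_and_name(name, places, fname_freq, sname_freq):
    """`places`: (place, population, region) triples compatible with the
    commit's UTC offset at commit time (from the IANA tz database).
    `fname_freq`/`sname_freq`: map (token, place) -> frequency."""
    score = defaultdict(float)
    for place, population, region in places:
        for tok in tokenize_name(name):
            for freq in (fname_freq, sname_freq):
                # population * frequency ~ persons bearing this name token
                score[region] += population * freq.get((tok, place), 0.0)
    return max(score, key=score.get) if score else None
\end{verbatim}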
The email-based technique suffers from the limited and unbalanced use of ccTLDs: most developers use generic TLDs such as \texttt{.com}, \texttt{.org}, or \texttt{.net}.
Moreover, this does not happen uniformly across zones: US-based developers, for example, use the \texttt{.us} ccTLD far more rarely than their European counterparts.
On the other hand the offset/name-based technique relies on the UTC offset of the commit timestamps.
Due to tool configurations on developer setups, a large number of commits in the dataset have a UTC offset equal to zero.
This affects recent commits less (\DATACommitsTZZTwoThousandTwenty/ of 2020s commits have a zero offset) than older ones (\DATACommitsTZZTwoThousand/ in 2000).
As a result the offset/name-based technique could end up detecting a large share of older commits as authored by African developers and, to a lesser extent, Europeans.
To counter these issues we combine the two geolocation techniques, applying the offset/name-based technique to all commits with a non-zero UTC offset and the email-based one to all other commits.
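Reusing the sketches above (with hypothetical commit fields and frequency tables), the combination amounts to a simple dispatch:
\begin{verbatim}
def geolocate(commit):
    """Offset/name-based for non-zero UTC offsets, email-based otherwise."""
    if commit.utc_offset != 0:
        return region_from_offset_and_name(commit.author_name,
                                           commit.candidate_places,
                                           FNAME_FREQ, SNAME_FREQ)
    return region_from_email(commit.author_email)
\end{verbatim}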
\section{Results and Discussion}
\label{sec:results}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{stacked.pdf}
\caption{Ratio of commits (above) and active authors (below) by world zone over the 1971--2020 period.}
\Description[Chart]{Stacked bar chart showing the world zone ratios for commits and authors over the 1971--2020 period.}
\label{fig:results}
\end{figure*}
To answer \cref{rq:geodiversity} we gathered the number of commits and distinct authors per year and per world zone.
We present the obtained results in \Cref{fig:results} as two stacked bar charts, showing yearly breakdowns for commits and authors respectively.
Every bar represents a year and is partitioned in slices showing the commit/author ratio for each of the world regions of \Cref{fig:worldmap} in that year.
To avoid outliers due to sporadic contributors, in the author chart we only consider authors having contributed at least 5 commits in a given year.
When observing trends in the charts, remember that the total numbers of commits and authors grow exponentially over time.
Hence for the first years in the charts, the number of data points in some world regions can be extremely small, with negative consequences on the stability of trends.
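For illustration, the yearly breakdowns underlying the charts can be computed along these lines (a sketch assuming a table with one row per commit, not the actual replication-package code):
\begin{verbatim}
import pandas as pd

def yearly_author_ratios(df, min_commits=5):
    """Share of active authors per region and year (sketch).
    `df` has one row per commit with year/region/author columns."""
    per_author = df.groupby(["year", "region", "author"]).size()
    active = per_author[per_author >= min_commits].reset_index(name="n")
    counts = (active.groupby(["year", "region"])["author"]
                    .nunique().unstack(fill_value=0))
    return counts.div(counts.sum(axis=1), axis=0)  # each row sums to 1
\end{verbatim}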
\paragraph{Geographic diversity over time}
Overall, the general trend appears to be that the \textbf{geographic diversity in public code is increasing}: North America and Europe alternated their ``dominance'' until the middle of the 90s; from that moment on most other world regions show a slow but steady increment.
This trend of increased participation in public code development includes Central and South Asia (comprising India), Russia, Africa, and Central and South America.
Notice that even zones that do not seem to follow this trend, such as Australia and New Zealand, are also increasing their participation, but more slowly than other zones.
For example, Australia and New Zealand increased the absolute number of their commits by about 3 orders of magnitude from 2000 to the present day.
Another interesting phenomenon that can be appreciated in both charts is the sudden contraction of contributions from North America in 1995; since the charts depict ratios, this corresponds to other zones, Europe in particular, increasing their share.
An analysis of the top contributors in the years right before the contraction shows that nine out of ten have \texttt{ucbvax.Berkeley.EDU} as author email domain, and the tenth is Keith Bostic, one of the leading Unix BSD developers, appearing with email \texttt{bostic}.
No developer with the same email domain appears anymore within the first hundred contributors in 1996.
This shows the relevance that BSD Unix and the Computer Systems Research Group at the University of California at Berkeley had in the history of open source software.
The group was disbanded in 1995, partially as a consequence of the so-called UNIX wars~\cite{kernighan2019unixhistory}, and this contributes significantly---also because of the relatively low amount of public code circulating at the time---to the sudden drop of contributions from North America in subsequent years.
Descendant UNIX operating systems based on BSD, such as OpenBSD, FreeBSD, and NetBSD, had less impact on world trends due to (i) the increasing amount of open source code coming from elsewhere and (ii) their more geographically diverse developer communities.
Another time frame in which the ratios for Europe and North America are subject to large, sudden changes is 1975--79.
A preliminary analysis shows that these ratios are erratic due to the very limited number of commits in that time period, but we were unable to detect a specific root cause.
Trends for those years should be subject to further studies, in collaboration with software historians.
\paragraph{Colonialism}
Another trend that stands out from the charts is that Africa appears to be well represented.
To assess if this results from a methodological bias, we double-checked the commits detected as originating from Africa for timezones in the $[0, 3]$ range using both the email- and the offset/name-based methods.
The results show that the offset/name-based approach assigns 22.7\% of the commits to Africa whereas the email-based one only assigns 2.7\% of them.
While a deeper investigation is in order, it is our opinion that the phenomenon we are witnessing here is a consequence of colonialism, specifically the adoption of European names in African countries.
For example, the name Eric, derived from Old Norse, is more popular in Ghana than in France or the UK.
This challenges the ability of the offset/name-based method to correctly differentiate between candidate places.
Together with the fact that several African countries have large populations, this means the offset/name-based method could detect European names as originating from Africa.
While this cuts both ways, the likelihood of a random person contributing to public code is very different between European countries, which all have a well-developed software industry, and African countries, which do not all share this trait.
\paragraph{Immigration/emigration}
Another area where a similar phenomenon could be at play is the evolution of Central and South America.
Contribution from this macro region appears to be growing steadily.
To assess if this is the result of a bias introduced by the name-based detection we analyzed the evolution of offset/name-based assignment over time for authors whose email domain is among the top-ten US-based entities in terms of overall contributions (estimated in turn by analyzing the most frequent email domains and manually selecting those belonging to US-based entities).
In 1971 no author with an email from top US-based entities is detected as belonging to Central and South America, whereas in 2019 the ratio is 12\%.
Nowadays more than one tenth of the people email-associated to top US-based entities have popular Central and South American names, which we posit as a likely consequence of immigration into the US (emigration from Central and South America).
Since immigration has a much longer history than the period studied here, what we are witnessing probably includes its long-term consequences, such as second- and third-generation immigrants employed in white-collar jobs like software development.
\section{Limitations and Future Work}
\label{sec:conclusion}
We have performed an exploratory, yet very large scale, empirical study of the geographic diversity in public code commits over time.
We have analyzed 2.2 billion\xspace public commits covering the \DATAYearRange/ time period.
We have geolocated developers to \DATAWorldRegions/ world regions using as signals email domains, timezone offsets, and author names.
Our findings show that the geographic diversity in public code is increasing over time, and markedly so over the past 20--25 years.
Observed trends also co-occur with historical events and macro phenomena like the end of the UNIX wars, the increase of coding literacy around the world, colonialism, and immigration.
\medskip
\emph{Limitations.}
This study relies on a combination of two geolocation methods: one based on email domains, another based on commit UTC offsets and author names.
We discussed some of the limitations of either method in \Cref{sec:method}, motivating our decision of restricting the use of the email-based method to commits with a zero UTC offset.
As a consequence, for most commits in the dataset the offset/name-based method is used.
With such method, the frequencies of forenames and surnames are used to rank candidate zones that have a compatible UTC offset at commit time.
A practical consequence of this is that for commits with, say, offset UTC+09:00 the candidate places can be Russia, Japan and Australia, depending on the specific date due to daylight saving time.
Popular forenames and surnames in these regions tend to be quite different, so the likelihood that the method provides a reliable detection is high.
For other offsets the set of popular forenames and surnames from candidate zones can exhibit more substantial overlaps, negatively impacting detection accuracy.
We have discussed some of these cases in \Cref{sec:results}, but others might be lingering in the results, impacting observed trends.
The choice of using the email-based method for commits with zero UTC offset, and the offset/name-based method elsewhere, has allowed us to study all developers not having a country-specific email domain (ccTLD), but comes with the risk of under-representing the world zones that have (in part and in some times of the year) an actual UTC offset of zero.
A potential bias in this study could be introduced by the fact that the name database used for offset/name-based geolocation only contains names formed using Latin alphabet characters.
We looked for names containing Chinese, Japanese, and Korean characters in the original dataset, finding only a negligible amount of authors who use non-Latin characters in their VCS names, which leads us to believe that the impact of this issue is minimal.
We did not apply identity merging (e.g., using state-of-the-art tools like SortingHat~\cite{moreno2019sortinghat}), but we do not expect this to be a significant issue because: (a) to introduce bias in author trends the distribution of identity merges around the world should be uneven, which seems unlikely; and (b) the observed commit trends (which would be unaffected by identity merging) are very similar to observed author trends.
We did not systematically remove known bot accounts~\cite{lebeuf2018swbots} from the author dataset, but we did check for the presence of software bots among the top committers of each year. We only found limited traces of continuous integration (CI) bots, used primarily to automate merge commits. After removing CI bots from the dataset the observed global trends were unchanged, therefore this paper presents unfiltered data.
\medskip
\emph{Future work.}
To some extent the above limitations are the price to pay to study such a large dataset: there exists a trade-off between large-scale analysis and accuracy.
We plan nonetheless to further investigate and mitigate them in future work.
Multi-method approaches, merging data mining with social science methods, could be applied to address some of the questions raised in this exploratory study.
While they do not scale to the whole dataset, multi-methods can be adopted to dig deeper into specific aspects, specifically those related to social phenomena.
Software is a social artifact; it is no wonder that aspects related to sociocultural evolution emerge when analyzing its evolution at this scale.
\clearpage
\section{Introduction}
One of the fundamental ingredients in the theory of non-commutative or
quantum geometry is the notion of a differential calculus.
In the framework of quantum groups the natural notion
is that of a
bicovariant differential calculus as introduced by Woronowicz
\cite{Wor_calculi}. Since non-commutativity is allowed,
the uniqueness of a canonical calculus is lost.
It is therefore desirable to classify the possible choices.
The most important piece is the space of one-forms or ``first
order differential calculus'' to which we will restrict our attention
in the following. (From this point on we will use the term
``differential calculus'' to denote a
bicovariant first order differential calculus).
Much attention has been devoted to the investigation of differential
calculi on quantum groups $C_q(G)$ of function algebra type for
$G$ a simple Lie group.
Natural differential calculi on matrix quantum groups were obtained by
Jurco \cite{Jur} and
Carow-Watamura et al.\
\cite{CaScWaWe}. A partial classification of calculi of the same
dimension as the natural ones
was obtained by
Schm\"udgen and Sch\"uler \cite{ScSc2}.
More recently, a classification theorem for factorisable
cosemisimple quantum groups was obtained by Majid \cite{Majid_calculi},
covering the general $C_q(G)$ case. A similar result was
obtained later by Baumann and Schmitt \cite{BaSc}.
Also, Heckenberger and Schm\"udgen \cite{HeSc} gave a
complete classification on $C_q(SL(N))$ and $C_q(Sp(N))$.
In contrast, for $G$ not simple or semisimple the differential calculi
on $C_q(G)$
are largely unknown. A particularly basic case is the Lie group $B_+$
associated with the Lie algebra $\lalg{b_+}$ generated by two elements
$X,H$ with the relation $[H,X]=X$. The quantum enveloping algebra
\ensuremath{U_q(\lalg{b_+})}{}
is self-dual, i.e.\ is non-degenerately paired with itself \cite{Drinfeld}.
This has an interesting consequence: \ensuremath{U_q(\lalg{b_+})}{} may be identified with (a
certain algebraic model of) \ensuremath{C_q(B_+)}. The differential calculi on this
quantum group and on its ``classical limits'' \ensuremath{C(B_+)}{} and \ensuremath{U(\lalg{b_+})}{}
will be the main concern of this paper. We pay hereby equal attention
to the dual notion of ``quantum tangent space''.
In section \ref{sec:q} we obtain the complete classification of differential
calculi on \ensuremath{C_q(B_+)}{}. It turns out that (finite
dimensional) differential
calculi are characterised by finite subsets $I\subset\mathbb{N}$.
These
sets determine the decomposition into coirreducible (i.e.\ not
admitting quotients) differential calculi
characterised by single integers. For the coirreducible calculi the
explicit formulas for the commutation relations and braided
derivations are given.
In section \ref{sec:class} we give the complete classification for the
classical function algebra \ensuremath{C(B_+)}{}. It is essentially the same as in the
$q$-deformed setting and we stress this by giving an almost
one-to-one correspondence of differential calculi to those obtained in
the previous section. In contrast, however, the decomposition and
coirreducibility properties do not hold at all. (One may even say that
they are maximally violated). We give the explicit formulas for those
calculi corresponding to coirreducible ones.
More interesting perhaps is the ``dual'' classical limit. I.e.\ we
view \ensuremath{U(\lalg{b_+})}{} as a quantum function algebra with quantum enveloping
algebra \ensuremath{C(B_+)}{}. This is investigated in section \ref{sec:dual}. It
turns out that in this setting we have considerably more freedom in
choosing a
differential calculus since the bicovariance condition becomes much
weaker. This shows that this dual classical limit is in a sense
``unnatural'' as compared to the ordinary classical limit of section
\ref{sec:class}.
However, we can still establish a correspondence of certain
differential calculi to those of section \ref{sec:q}. The
decomposition properties are conserved while the coirreducibility
properties are not.
We give the
formulas for the calculi corresponding to coirreducible ones.
Another interesting aspect of viewing \ensuremath{U(\lalg{b_+})}{} as a quantum function
algebra is the connection to quantum deformed models of space-time and
its symmetries. In particular, the $\kappa$-deformed Minkowski space
coming from the $\kappa$-deformed Poincar\'e algebra
\cite{LuNoRu}\cite{MaRu} is just a simple generalisation of \ensuremath{U(\lalg{b_+})}.
We use this in section \ref{sec:kappa} to give
a natural $4$-dimensional differential calculus. Then we show (in a
formal context) that integration is given by
the usual Lesbegue integral on $\mathbb{R}^n$ after normal ordering.
This is obtained in an intrinsic context different from the standard
$\kappa$-Poincar\'e approach.
A further important motivation for the investigation of differential
calculi on
\ensuremath{U(\lalg{b_+})}{} and \ensuremath{C(B_+)}{} is the relation of those objects to the Planck-scale
Hopf algebra \cite{Majid_Planck}\cite{Majid_book}. This shall be
developed elsewhere.
In the remaining parts of this introduction we will specify our
conventions and provide preliminaries on the quantum group \ensuremath{U_q(\lalg{b_+})}, its
deformations, and differential calculi.
\subsection{Conventions}
Throughout, $\k$ denotes a field of characteristic 0 and
$\k(q)$ denotes the field of rational
functions in one parameter $q$ over $\k$.
$\k(q)$ is our ground field in
the $q$-deformed setting, while $\k$ is the
ground field in the ``classical'' settings.
Within section \ref{sec:q} one could equally well view $\k$ as the ground
field with $q\in\k^*$ not a root of unity. This point of view is
problematic, however, when obtaining ``classical limits'' as
in sections \ref{sec:class} and \ref{sec:dual}.
The positive integers are denoted by $\mathbb{N}$ while the non-negative
integers are denoted by $\mathbb{N}_0$.
We define $q$-integers, $q$-factorials and
$q$-binomials as follows:
\begin{gather*}
[n]_q=\sum_{i=0}^{n-1} q^i\qquad
[n]_q!=[1]_q [2]_q\cdots [n]_q\qquad
\binomq{n}{m}=\frac{[n]_q!}{[m]_q! [n-m]_q!}
\end{gather*}
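For instance, $[3]_q=1+q+q^2$ and
\begin{gather*}
\binomq{4}{2}=\frac{[4]_q!}{[2]_q!\,[2]_q!}=\frac{[3]_q\,[4]_q}{[2]_q}
=(1+q+q^2)(1+q^2)=1+q+2q^2+q^3+q^4
\end{gather*}
which recovers $\binom{4}{2}=6$ at $q=1$.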
For a function of several variables (among
them $x$) over $\k$ we define
\begin{gather*}
(T_{a,x} f)(x) = f(x+a)\\
(\fdiff_{a,x} f)(x) = \frac{f(x+a)-f(x)}{a}
\end{gather*}
with $a\in\k$ and similarly over $\k(q)$
\begin{gather*}
(Q_{m,x} f)(x) = f(q^m x)\\
(\partial_{q,x} f)(x) = \frac{f(x)-f(qx)}{x(1-q)}
\end{gather*}
with $m\in\mathbb{Z}$.
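On monomials these operators act as
\begin{gather*}
(Q_{m,x}\, x^n) = q^{mn} x^n \qquad
(\partial_{q,x}\, x^n) = \frac{x^n - q^n x^n}{x(1-q)} = [n]_q\, x^{n-1}
\end{gather*}
so that $\partial_{q,x}$ recovers the ordinary derivative $n x^{n-1}$ as
$q\to 1$, just as $\fdiff_{a,x}$ does as $a\to 0$.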
We frequently use the notion of a polynomial in an extended
sense. Namely, if we have an algebra with an element $g$ and its
inverse $g^{-1}$ (as
in \ensuremath{U_q(\lalg{b_+})}{}) we will mean by a polynomial in $g,g^{-1}$ a finite power
series in $g$ with exponents in $\mathbb{Z}$. The length of such a polynomial
is the difference between highest and lowest degree.
If $H$ is a Hopf algebra, then $H^{op}$ will denote the Hopf algebra
with the opposite product.
\subsection{\ensuremath{U_q(\lalg{b_+})}{} and its Classical Limits}
\label{sec:intro_limits}
We recall that,
in the framework of quantum groups, the duality between enveloping algebra
$U(\lalg{g})$ of the Lie algebra and algebra of functions $C(G)$ on the Lie
group carries over to $q$-deformations.
In the case of
$\lalg{b_+}$, the
$q$-deformed enveloping algebra \ensuremath{U_q(\lalg{b_+})}{} defined over $\k(q)$ as
\begin{gather*}
U_q(\lalg{b_+})=\k(q)\langle X,g,g^{-1}\rangle \qquad
\text{with relations} \\
g g^{-1}=1 \qquad Xg=qgX \\
\cop X=X\otimes 1 + g\otimes X \qquad
\cop g=g\otimes g \\
\cou (X)=0 \qquad \cou (g)=1 \qquad
\antip X=-g^{-1}X \qquad \antip g=g^{-1}
\end{gather*}
is self-dual. Consequently, it
may alternatively be viewed as the quantum algebra \ensuremath{C_q(B_+)}{} of
functions on the Lie group $B_+$ associated with $\lalg{b_+}$.
It has two classical limits, the enveloping algebra \ensuremath{U(\lalg{b_+})}{}
and the function algebra $C(B_+)$.
The transition to the classical enveloping algebra is achieved by
replacing $q$
by $e^{-t}$ and $g$ by $e^{tH}$ in a formal power series setting in
$t$, introducing a new generator $H$. Now, all expressions are written in
the form $\sum_j a_j t^j$ and only the lowest order in $t$ is kept.
The transition to the classical function algebra on the other hand is
achieved by setting $q=1$.
This may be depicted as follows:
\[\begin{array}{c @{} c @{} c @{} c}
& \ensuremath{U_q(\lalg{b_+})} \cong \ensuremath{C_q(B_+)} && \\
& \diagup \hspace{\stretch{1}} \diagdown && \\
\begin{array}{l} q=e^{-t} \\ g=e^{tH} \end{array} \Big| _{t\to 0}
&& q=1 &\\
\swarrow &&& \searrow \\
\ensuremath{U(\lalg{b_+})} & <\cdots\textrm{dual}\cdots> && \ensuremath{C(B_+)}
\end{array}\]
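For example, along the left path we substitute $q=e^{-t}$ and $g=e^{tH}$ in
the defining relations and keep the lowest order in $t$:
\begin{gather*}
Xe^{tH}=e^{-t}e^{tH}X \;\Rightarrow\;
X(1+tH)=(1-t)(1+tH)X+O(t^2) \;\Rightarrow\; [H,X]=X\\
\cop X = X\otimes 1 + e^{tH}\otimes X \;\Rightarrow\;
\cop X = X\otimes 1 + 1\otimes X
\end{gather*}
recovering the Lie algebra $\lalg{b_+}$ and the primitive coproduct of \ensuremath{U(\lalg{b_+})}{}.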
The self-duality of \ensuremath{U_q(\lalg{b_+})}{} is expressed as a pairing
$\ensuremath{U_q(\lalg{b_+})}\times\ensuremath{U_q(\lalg{b_+})}\to\k$
with
itself:
\[\langle X^n g^m, X^r g^s\rangle =
\delta_{n,r} [n]_q!\, q^{-n(n-1)/2} q^{-ms}
\qquad\forall n,r\in\mathbb{N}_0\: m,s\in\mathbb{Z}\]
In the classical limit this becomes the pairing $\ensuremath{U(\lalg{b_+})}\times\ensuremath{C(B_+)}\to\k$
\begin{equation}
\langle X^n H^m, X^r g^s\rangle =
\delta_{n,r} n!\, s^m\qquad \forall n,m,r\in\mathbb{N}_0\: s\in\mathbb{Z}
\label{eq:pair_class}
\end{equation}
\subsection{Differential Calculi and Quantum Tangent Spaces}
In this section we recall some facts about differential calculi
along the lines of Majid's treatment in \cite{Majid_calculi}.
Following Woronowicz \cite{Wor_calculi}, first order bicovariant differential
calculi on a quantum group $A$ (of
function algebra type) are in one-to-one correspondence to submodules
$M$ of $\ker\cou\subset A$ in the category $^A_A\cal{M}$ of (say) left
crossed modules of $A$ via left multiplication and left adjoint
coaction:
\[
a\triangleright v = av \qquad \mathrm{Ad_L}(v)
=v_{(1)}\antip v_{(3)}\otimes v_{(2)}
\qquad \forall a\in A, v\in A
\]
More precisely, given a crossed submodule $M$, the corresponding
calculus is given by $\Gamma=\ker\cou/M\otimes A$ with $\diff a =
\pi(\cop a - 1\otimes a)$ ($\pi$ the canonical projection).
The right action and coaction on $\Gamma$ are given by
the right multiplication and coproduct on $A$, the left action and
coaction by the tensor product ones with $\ker\cou/M$ as a left
crossed module. In all of what follows, ``differential calculus'' will
mean ``bicovariant first order differential calculus''.
Alternatively \cite{Majid_calculi}, given in addition a quantum group $H$
dually paired with $A$
(which we might think of as being of enveloping algebra type), we can
express the coaction of $A$ on
itself as an action of $H^{op}$ using the pairing:
\[
h\triangleright v = \langle h, v_{(1)} \antip v_{(3)}\rangle v_{(2)}
\qquad \forall h\in H^{op}, v\in A
\]
Thereby we change from the category of (left) crossed $A$-modules to
the category of left modules of the quantum double $A\!\bowtie\! H^{op}$.
In this picture the pairing between $A$ and $H$ descends to a pairing
between $A/\k 1$ (which we may identify with $\ker\cou\subset A$) and
$\ker\cou\subset H$. Further quotienting $A/\k 1$ by $M$ (viewed in
$A/\k 1$) leads to a pairing with the subspace $L\subset\ker\cou\subset H$
that annihilates $M$. $L$ is called a ``quantum tangent space''
and is dual to the differential calculus $\Gamma$ generated by $M$ in
the sense that $\Gamma\cong \Lin(L,A)$ via
\begin{equation}
A/(\k 1+M)\otimes A \to \Lin(L,A)\qquad
v\otimes a \mapsto \langle \cdot, v\rangle a
\label{eq:eval}
\end{equation}
if the pairing between $A/(\k 1+M)$ and $L$ is non-degenerate.
The quantum tangent spaces are obtained directly by dualising the
(left) action of the quantum double on $A$ to a (right) action on
$H$. Explicitly, this is the adjoint action and the coregular action
\[
h \triangleright x = h_{(1)} x \antip h_{(2)} \qquad
a \triangleright x = \langle x_{(1)}, a \rangle x_{(2)}\qquad
\forall h\in H, a\in A^{op},x\in A
\]
where we have converted the right action to a left action by going
from \mbox{$A\!\bowtie\! H^{op}$}-modules to \mbox{$H\!\bowtie\! A^{op}$}-modules.
Quantum tangent spaces are subspaces of $\ker\cou\subset H$ invariant
under the projection of this action to $\ker\cou$ via \mbox{$x\mapsto
x-\cou(x) 1$}. Alternatively, the left action of $A^{op}$ can be
converted to a left coaction of $H$ being the comultiplication (with
subsequent projection onto $H\otimes\ker\cou$).
We can use the evaluation map (\ref{eq:eval})
to define a ``braided derivation'' on elements of the quantum tangent
space via
\[\partial_x:A\to A\qquad \partial_x(a)={\diff a}(x)=\langle
x,a_{(1)}\rangle a_{(2)}\qquad\forall x\in L, a\in A\]
This obeys the braided derivation rule
\[\partial_x(a b)=(\partial_x a) b
+ a_{(2)} \partial_{a_{(1)}\triangleright x}b\qquad\forall x\in L, a\in A\]
Given a right invariant basis $\{\eta_i\}_{i\in I}$ of $\Gamma$ with a
dual basis $\{\phi_i\}_{i\in I}$ of $L$ we have
\[{\diff a}=\sum_{i\in I} \eta_i\cdot \partial_i(a)\qquad\forall a\in A\]
where we denote $\partial_i=\partial_{\phi_i}$. (This can be easily
seen to hold by evaluation against $\phi_i\ \forall i$.)
\section{Classification on \ensuremath{C_q(B_+)}{} and \ensuremath{U_q(\lalg{b_+})}{}}
\label{sec:q}
In this section we completely classify differential calculi on \ensuremath{C_q(B_+)}{}
and, dually, quantum tangent spaces on \ensuremath{U_q(\lalg{b_+})}{}. We start by
classifying the relevant crossed modules and then proceed to a
detailed description of the calculi.
\begin{lem}
\label{lem:cqbp_class}
(a) Left crossed \ensuremath{C_q(B_+)}-submodules $M\subseteq\ensuremath{C_q(B_+)}$ by left
multiplication and left
adjoint coaction are in one-to-one correspondence to
pairs $(P,I)$
where $P\in\k(q)[g]$ is a polynomial with $P(0)=1$ and $I\subset\mathbb{N}$ is
finite.
$\codim M<\infty$ iff $P=1$. In particular $\codim M=\sum_{n\in I}n$
if $P=1$.
(b) The finite codimensional maximal $M$
correspond to the pairs $(1,\{n\})$ with $n$ the
codimension. The infinite codimensional maximal $M$ are characterised by
$(P,\emptyset)$ with $P$ irreducible and $P(g)\neq 1-q^{-k}g$ for any
$k\in\mathbb{N}_0$.
(c) Crossed submodules $M$ of finite
codimension are intersections of maximal ones.
In particular $M=\bigcap_{n\in I} M^n$, with $M^n$ corresponding to
$(1,\{n\})$.
\end{lem}
\begin{proof}
(a) Let $M\subseteq\ensuremath{C_q(B_+)}$ be a crossed \ensuremath{C_q(B_+)}-submodule by left
multiplication and left adjoint coaction and let
$\sum_n X^n P_n(g) \in M$, where $P_n$ are polynomials in $g,g^{-1}$
(every element of \ensuremath{C_q(B_+)}{} can be expressed in
this form). From the formula for the coaction ((\ref{eq:adl}), see appendix)
we observe that for all $n$ and for all $t\le n$ the element
\[X^t P_n(g) \prod_{s=1}^{n-t} (1-q^{s-n}g)\]
lies in $M$.
In particular
this is true for $t=n$, meaning that elements of constant degree in $X$
lie separately in $M$. It is therefore enough to consider such
elements.
Let now $X^n P(g) \in M$.
By left multiplication $X^n P(g)$ generates any element of the form
$X^k P(g) Q(g)$, where $k\ge n$ and $Q$ is any polynomial in
$g,g^{-1}$. (Note that $Q(q^kg) X^k=X^k Q(g)$.)
We see that $M$ contains the following elements:
\[\begin{array}{ll}
\vdots & \\
X^{n+2} & P(g) \\
X^{n+1} & P(g) \\
X^n & P(g) \\
X^{n-1} & P(g) (1-q^{1-n}g) \\
X^{n-2} & P(g) (1-q^{1-n}g) (1-q^{2-n}g) \\
\vdots & \\
X & P(g) (1-q^{1-n}g) (1-q^{2-n}g) \ldots (1-q^{-1}g) \\
& P(g) (1-q^{1-n}g) (1-q^{2-n}g) \ldots (1-q^{-1}g)(1-g)
\end{array}
\]
Moreover, if $M$ is generated by $X^n P(g)$ as a module
then these elements generate a basis for $M$ as a vector
space by left
multiplication with polynomials in $g,g^{-1}$. (Observe that the
application of the coaction to any of the elements shown does not
generate elements of new type.)
Now, let $M$ be a given crossed submodule. We pick, among the
elements in $M$ of the form $X^n P(g)$ with $P$ of minimal
length,
one
with lowest degree in $X$. Then certainly the elements listed above are
in $M$. Furthermore for any element of the form $X^k Q(g)$, $Q$ must
contain $P$ as a factor and for $k<n$, $Q$ must contain $P(g) (1-q^{1-n}g)$
as a factor. We continue by picking the smallest $n_2$, so that
$X^{n_2} P(g) (1-q^{1-n}g) \in M$. Certainly $n_2<n$. Again, for any
element of $X^l Q(g)$ in $M$ with $l<n_2$, we have that
$P(g) (1-q^{1-n}g) (1-q^{1-n_2}g)$ divides $Q(g)$. We proceed by
induction, until we arrive at degree zero in $X$.
We obtain the following elements generating a basis for $M$ by left
multiplication with polynomials in $g,g^{-1}$ (rename $n_1=n$):
\[ \begin{array}{ll}
\vdots & \\
X^{n_1+1} & P(g) \\
X^{n_1} & P(g) \\
X^{n_1-1} & P(g) (1-q^{1-{n_1}}g) \\
\vdots & \\
X^{n_2} & P(g) (1-q^{1-{n_1}}g) \\
X^{n_2-1} & P(g) (1-q^{1-{n_1}}g) (1-q^{1-n_2}g)\\
\vdots & \\
X^{n_3} & P(g) (1-q^{1-{n_1}}g) (1-q^{1-{n_2}}g) \\
X^{n_3-1} & P(g) (1-q^{1-{n_1}}g) (1-q^{1-{n_2}}g) (1-q^{1-n_3}g)\\
\vdots & \\
& P(g) (1-q^{1-{n_1}}g) (1-q^{1-n_2}g) (1-q^{1-n_3}g) \ldots (1-q^{1-n_m}g)
\end{array}
\]
We see that the integers $n_1,\ldots,n_m$ uniquely determine the shape
of this picture. The polynomial $P(g)$ on the other hand can be
shifted (by $g$ and $g^{-1}$) or renormalised. To determine $M$
uniquely we shift and normalise $P$ in such a way that it contains no
negative powers
and has unit constant coefficient. $P$ can then be viewed as a
polynomial $\in\k(q)[g]$.
We see that the codimension of $M$ is the sum of the lengths of the
polynomials in $g$ over all degrees in $X$ in the above
picture. Finite codimension corresponds to $P=1$. In this
case the codimension is the sum
$n_1+\ldots +n_m$.
(b) We observe that polynomials of the form $1-q^{j}g$
have no common divisors for distinct $j$. Therefore,
finite codimensional crossed
submodules are maximal if and only if
there is just one integer ($m=1$). Thus, the maximal left
crossed submodule of
codimension $k$ is generated by $X^k$ and $1-q^{1-k}g$.
For an infinite codimensional crossed submodule we certainly need
$m=0$. Then, the maximality corresponds to irreducibility of
$P$.
(c) This is again due to the distinctness of factors $1-q^j g$.
\end{proof}
\begin{cor}
\label{cor:cqbp_eclass}
(a) Left crossed \ensuremath{C_q(B_+)}-submodules $M\subseteq\ker\cou\subset\ensuremath{C_q(B_+)}$
are in one-to-one correspondence to pairs
$(P,I)$ as in lemma \ref{lem:cqbp_class}
with the additional constraint $(1-g)$ divides $P(g)$ or $1\in I$.
$\codim M<\infty$ iff $P=1$. In particular $\codim M=(\sum_{n\in I}n)-1$
if $P=1$.
(b) The finite codimensional maximal $M$
correspond to the pairs
$(1,\{1,n\})$ with $n\ge 2$ the
codimension. The infinite codimensional maximal $M$ correspond to pairs
$(P,\{1\})$ with $P$ irreducible and $P(g)\neq 1-q^{-k}g$ for any
$k\in\mathbb{N}_0$.
(c) Crossed submodules $M$ of finite
codimension are intersections of maximal ones.
In particular $M=\bigcap_{n\in I} M^n$, with $M^n$ corresponding to
$(1,\{1,n\})$.
\end{cor}
\begin{proof}
First observe that $\sum_n X^n P_n(g)\in \ker\cou$ if and only if
$(1-g)$ divides $P_0(g)$. This is to say that $\ker\cou$
is the crossed submodule corresponding to the pair $(1,\{1\})$ in
lemma \ref{lem:cqbp_class}. We obtain the classification
from that of lemma \ref{lem:cqbp_class} by intersecting
everything with this crossed submodule. In particular, this reduces
the codimension by one in the finite codimensional case.
\end{proof}
\begin{lem}
\label{lem:uqbp_class}
(a) Left crossed \ensuremath{U_q(\lalg{b_+})}-submodules $L\subseteq\ensuremath{U_q(\lalg{b_+})}$ via the left adjoint
action and left
regular coaction are in one-to-one correspondence to the set
$3^{\mathbb{N}_0}\times2^{\mathbb{N}}$.
Finite dimensional $L$ are in one-to-one correspondence to
finite sets $I\subset\mathbb{N}$ and $\dim L=\sum_{n\in I}n$.
(b) Finite dimensional irreducible $L$ correspond to $\{n\}$
with $n$ the dimension.
(c) Finite dimensional $L$ are direct sums of irreducible ones. In
particular $L=\oplus_{n\in I} L^n$ with $L^n$ corresponding to $\{n\}$.
\end{lem}
\begin{proof}
(a) The action takes the explicit form
\[g\triangleright X^n g^k = q^{-n} X^n g^k\qquad
X\triangleright X^n g^k = X^{n+1}g^k(1-q^{-(n+k)})\]
while the coproduct is
\[\cop(X^n g^k)=\sum_{r=0}^{n} \binomq{n}{r}
q^{-r(n-r)} X^{n-r} g^{k+r}\otimes X^r g^k\]
which we view as a left coaction here.
Let now $L\subseteq\ensuremath{U_q(\lalg{b_+})}$ be a crossed \ensuremath{U_q(\lalg{b_+})}-submodule via this action
and coaction. For $\sum_n X^n P_n(g)\in L$ invariance under
the action by
$g$ clearly means that \mbox{$X^n P_n(g)\in L\ \forall n$}. Then from
invariance under the coaction we can conclude that
if $X^n \sum_j a_j g^j\in L$ we must have
$X^n g^j\in L\ \forall j$.
I.e.\ elements of the form $X^n g^j$ lie separately in $L$ and it is
sufficient to consider such elements. From the coaction we learn that
if $X^n g^j\in L$ we have $X^m g^j\in L\ \forall m\le n$.
The action
by $X$ leads to $X^n g^j\in L \Rightarrow X^{n+1} g^j\in
L$ except if
$n+j=0$. The classification is given by the possible choices we have
for each power in $g$. For every positive integer $j$ we can
choose whether or not to include the span of
$\{ X^n g^j|\forall n\}$ in $L$ and for
every non-positive
integer we can choose to include either the span of $\{ X^n
g^j|\forall n\}$
or just
$\{ X^n g^j|\forall n\le -j\}$ or neither. I.e.\ for positive
integers ($\mathbb{N}$) we have two choices while for non-positive (identified
with $\mathbb{N}_0$) ones we have three choices.
Clearly, the finite dimensional $L$ are those where we choose only to
include finitely many powers of $g$ and also only finitely many powers
of $X$. The latter is only possible for the non-positive powers
of $g$.
By identifying positive integers $n$ with powers $1-n$ of $g$, we
obtain a classification by finite subsets of $\mathbb{N}$.
(b) Irreducibility clearly corresponds to just including one power of $g$
in the finite dimensional case.
(c) The decomposition property is obvious from the discussion.
\end{proof}
\begin{cor}
\label{cor:uqbp_eclass}
(a) Left crossed \ensuremath{U_q(\lalg{b_+})}-submodules $L\subseteq\ker\cou\subset\ensuremath{U_q(\lalg{b_+})}$ via
the left adjoint
action and left regular coaction (with subsequent projection to
$\ker\cou$ via $x\mapsto x-\cou(x)1$) are in one-to-one correspondence to
the set $3^{\mathbb{N}}\times2^{\mathbb{N}_0}$.
Finite dimensional $L$ are in one-to-one correspondence to
finite sets
$I\subset\mathbb{N}\setminus\{1\}$ and $\dim L=\sum_{n\in I}n$.
(b) Finite dimensional irreducible $L$ correspond to $\{n\}$
with $n\ge 2$ the dimension.
(c) Finite dimensional $L$ are direct sums of irreducible ones. In
particular $L=\oplus_{n\in I} L^n$ with $L^n$ corresponding to $\{n\}$.
\end{cor}
\begin{proof}
Only a small modification of lemma \ref{lem:uqbp_class} is
necessary. Elements of
the form $P(g)$ are replaced by elements of the form
$P(g)-P(1)$. Monomials with non-vanishing degree in $X$ are unchanged.
The choices for elements of degree $0$ in $g$ are reduced to either
including the span of
$\{ X^k |\forall k>0 \}$ in the crossed submodule or not. In
particular, the crossed submodule characterised by \{1\} in lemma
\ref{lem:uqbp_class} is projected out.
\end{proof}
Differential calculi in the original sense of Woronowicz are
classified by corollary \ref{cor:cqbp_eclass} while from the quantum
tangent space
point of view the
classification is given by corollary \ref{cor:uqbp_eclass}.
In the finite dimensional case the duality is strict in the sense of a
one-to-one correspondence.
The infinite dimensional case on the other hand depends strongly on
the algebraic models we use for the function or enveloping
algebras. It is therefore not surprising that in the present purely
algebraic context the classifications are quite different in this
case. We will restrict ourselves to the finite dimensional
case in the following description of the differential calculi.
\begin{thm}
\label{thm:q_calc}
(a) Finite dimensional differential calculi $\Gamma$ on \ensuremath{C_q(B_+)}{} and
corresponding quantum tangent spaces $L$ on \ensuremath{U_q(\lalg{b_+})}{} are
in one-to-one correspondence to
finite sets $I\subset\mathbb{N}\setminus\{1\}$. In particular
$\dim\Gamma=\dim L=\sum_{n\in I}n$.
(b) Coirreducible $\Gamma$ and irreducible $L$ correspond to
$\{n\}$ with $n\ge 2$ the dimension.
Such a $\Gamma$ has a
right invariant basis $\eta_0,\dots,\eta_{n-1}$ so that the relations
\begin{gather*}
\diff X=\eta_1+(q^{n-1}-1)\eta_0 X \qquad
\diff g=(q^{n-1}-1)\eta_0 g\\
[a,\eta_0]=\diff a\quad \forall a\in\ensuremath{C_q(B_+)}\\
[g,\eta_i]_{q^{n-1-i}}=0\quad \forall i\qquad
[X,\eta_i]_{q^{n-1-i}}=\begin{cases}
\eta_{i+1} & \text{if}\ i<n-1 \\
0 & \text{if}\ i=n-1
\end{cases}
\end{gather*}
hold, where $[a,b]_p := a b - p b a$. By choosing the dual basis on
the corresponding irreducible $L$ we obtain
the braided derivations
\begin{gather*}
\partial_i\no{f}=
\no{Q_{n-1-i,g} Q_{n-1-i,X} \frac{1}{[i]_q!} (\partial_{q,X})^i f}
\qquad\forall i\ge 1\\
\partial_0\no{f}=
\no{Q_{n-1,g} Q_{n-1,X} f - f}
\end{gather*}
for $f\in \k(q)[X,g,g^{-1}]$ with normal ordering
$\k(q)[X,g,g^{-1}]\to \ensuremath{C_q(B_+)}$ given by \mbox{$g^n X^m\mapsto g^n X^m$}.
(c) Finite dimensional $\Gamma$ and $L$ decompose into direct sums of
coirreducible respectively irreducible ones.
In particular $\Gamma=\oplus_{n\in I}\Gamma^n$ and
$L=\oplus_{n\in I}L^n$ with $\Gamma^n$ and $L^n$ corresponding to $\{n\}$.
\end{thm}
\begin{proof}
(a) We observe that the classifications of lemma
\ref{lem:cqbp_class} and lemma \ref{lem:uqbp_class} or
corollary \ref{cor:cqbp_eclass} and corollary \ref{cor:uqbp_eclass}
are dual to each other in the finite (co){}dimensional case. More
precisely, for $I\subset\mathbb{N}$ finite the crossed submodule $M$
corresponding to $(1,I)$ in lemma \ref{lem:cqbp_class} is the
annihilator of the crossed
submodule $L$ corresponding to $I$ in lemma \ref{lem:uqbp_class}
and vice versa.
$\ensuremath{C_q(B_+)}/M$ and $L$ are dual spaces with the induced pairing.
For $I\subset\mathbb{N}\setminus\{1\}$ finite this descends to
$M$ corresponding to $(1,I\cup\{1\})$ in corollary
\ref{cor:cqbp_eclass} and $L$ corresponding to $I$ in corollary
\ref{cor:uqbp_eclass}.
For the dimension of $\Gamma$ observe
$\dim\Gamma=\dim{\ker\cou/M}=\codim M$.
(b) Coirreducibility (having no proper quotient) of $\Gamma$
clearly corresponds to maximality of $M$. The statement then follows
from parts (b) of corollaries
\ref{cor:cqbp_eclass} and \ref{cor:uqbp_eclass}. The formulas are
obtained by choosing the basis $\eta_0,\dots,\eta_{n-1}$ of
$\ker\cou/M$ as the equivalence classes of
\[(g-1)/(q^{n-1}-1),X,\dots,X^{n-1}\]
The dual basis of $L$ is then given by
\[g^{1-n}-1, X g^{1-n},\dots, q^{k(k-1)} \frac{1}{[k]_q!} X^k g^{1-n},
\dots,q^{(n-1)(n-2)} \frac{1}{[n-1]_q!} X^{n-1} g^{1-n}\]
(c) The statement follows from corollaries \ref{cor:cqbp_eclass} and
\ref{cor:uqbp_eclass} parts (c) with the observation
\[\ker\cou/M=\ker\cou/{\bigcap_{n\in I}}M^n
=\oplus_{n\in I}\ker\cou/M^n\]
\end{proof}
\begin{cor}
There is precisely one differential calculus on \ensuremath{C_q(B_+)}{} which is
natural in the sense that it
has dimension $2$.
It is coirreducible and obeys the relations
\begin{gather*}
[g,\diff X]=0\qquad [g,\diff g]_q=0\qquad
[X,\diff X]_q=0\qquad [X,\diff g]_q=(q-1)({\diff X}) g
\end{gather*}
with $[a,b]_q:=ab-qba$. In particular we have
\begin{gather*}
\diff\no{f} = {\diff g} \no{\partial_{q,g} f} + {\diff X}
\no{\partial_{q,X} f}\qquad\forall f\in \k(q)[X,g,g^{-1}]
\end{gather*}
\end{cor}
\begin{proof}
This is a special case of theorem \ref{thm:q_calc}.
The formulas follow from (b) with $n=2$.
\end{proof}
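As a quick consistency check, take $f=X^2$: the Leibniz rule together with
the relation $[X,\diff X]_q=0$ gives
\begin{gather*}
\diff(X^2)=({\diff X})X+X\,{\diff X}=(1+q)({\diff X})X
={\diff X}\,\no{\partial_{q,X} X^2}
\end{gather*}
in agreement with $\partial_{q,X}X^2=[2]_q X$.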
\section{Classification in the Classical Limit}
\label{sec:class}
In this section we give the complete classification of differential
calculi and quantum tangent spaces in the classical case of \ensuremath{C(B_+)}{}
along the lines of the previous section.
We pay particular
attention to the relation to the $q$-deformed setting.
The classical limit \ensuremath{C(B_+)}{} of the quantum group \ensuremath{C_q(B_+)}{} is
simply obtained by substituting the parameter $q$ with $1$.
The
classification of left crossed submodules in part (a) of lemma
\ref{lem:cqbp_class} remains
unchanged, as one may check by going through the proof.
In particular, we get a correspondence of crossed modules in the
$q$-deformed setting with crossed modules in the
classical setting
as a map of
pairs $(P,I)\mapsto (P,I)$
that converts polynomials $\k(q)[g]$ to polynomials $\k[g]$ (if
defined) and leaves
sets $I$ unchanged. This is one-to-one in the finite
dimensional case.
However, we did use the distinctness of powers of $q$ in parts (b) and
(c) of lemma
\ref{lem:cqbp_class} and have to account for this change. The
only place where we used it, was in observing that
factors $1-q^j g $ have no common divisors for distinct $j$. This was
crucial to conclude the maximality (b) of certain finite codimensional
crossed submodules and the intersection property (c).
Now, all those factors become $1-g$.
\begin{cor}
\label{cor:cbp_class}
(a) Left crossed \ensuremath{C(B_+)}-submodules $M\subseteq\ensuremath{C(B_+)}$ by left
multiplication and left
adjoint coaction are in one-to-one correspondence to
pairs $(P,I)$
where $P\in\k[g]$ is a polynomial with $P(0)=1$ and $I\subset\mathbb{N}$ is
finite.
$\codim M<\infty$ iff $P=1$. In particular $\codim M=\sum_{n\in I}n$
if $P=1$.
(b) The infinite codimensional maximal $M$ are characterised by
$(P,\emptyset)$ with $P$ irreducible and $P(g)\neq 1-g$.
\end{cor}
In the restriction to $\ker\cou\subset\ensuremath{C(B_+)}$ corresponding to corollary
\ref{cor:cqbp_eclass} we observe another difference to the
$q$-deformed setting.
Since the condition for a crossed submodule to lie in $\ker\cou$ is exactly
to have factors $1-g$ in the $X$-free monomials, this condition may now
be satisfied more easily. If the characterising polynomial does not
contain this factor it is now sufficient to have just any non-empty
characterising integer set $I$ and it need not contain $1$. Consequently,
the map $(P,I)\mapsto (P,I)$ does not reach all crossed submodules now.
\begin{cor}
\label{cor:cbp_eclass}
(a) Left crossed \ensuremath{C(B_+)}-submodules $M\subseteq\ker\cou\subset\ensuremath{C(B_+)}$
are in one-to-one correspondence to pairs
$(P,I)$ as in corollary \ref{cor:cbp_class}
with the additional constraint $(1-g)$ divides $P(g)$ or $I$ non-empty.
$\codim M<\infty$ iff $P=1$. In particular $\codim M=(\sum_{n\in I}n)-1$
if $P=1$.
(b) The infinite codimensional maximal $M$ correspond to pairs
$(P,\{1\})$ with $P$ irreducible and $P(g)\neq 1-g$.
\end{cor}
Let us now turn to quantum tangent spaces on \ensuremath{U(\lalg{b_+})}{}. Here, the process
to go from the $q$-deformed setting to the classical one is not quite
so straightforward.
\begin{lem}
\label{lem:ubp_class}
Proper left crossed \ensuremath{U(\lalg{b_+})}-submodules $L\subset\ensuremath{U(\lalg{b_+})}$ via the left
adjoint action
and left regular coaction are
in one-to-one correspondence to pairs $(l,I)$ with $l\in\mathbb{N}_0$ and
$I\subset\mathbb{N}$ finite. $\dim L<\infty$ iff $l=0$. In particular $\dim
L=\sum_{n\in I}n$ if $l=0$.
\end{lem}
\begin{proof}
The left adjoint action takes the form
\[
X\triangleright X^n H^m = X^{n+1}(H^m-(H+1)^m) \qquad
H\triangleright X^n H^m = n X^n H^m
\]
while the coaction is
\[
\cop(X^n H^m) = \sum_{i=0}^n \sum_{j=0}^m \binom{n}{i} \binom{m}{j}
X^i H^j\otimes X^{n-i} H^{m-j}
\]
Let $L$ be a crossed submodule invariant under the action and coaction.
The (repeated) action of $H$ separates elements by degree in $X$. It is
therefore sufficient to consider elements of the form $X^n P(H)$, where
$P$ is a polynomial.
By acting with $X$ on an element $X^n P(H)$ we obtain
$X^{n+1}(P(H)-P(H+1))$. Subsequently applying the coaction and
projecting on the left hand side of the tensor product onto $X$ (in
the basis $X^i H^j$ of \ensuremath{U(\lalg{b_+})})
leads to the element $X^n (P(H)-P(H+1))$. Now the degree of
$P(H)-P(H+1)$ is exactly the degree of $P(H)$ minus $1$. Thus we have
polynomials $X^n P_i(H)$ of any degree $i=\deg(P_i)\le \deg(P)$ in $L$
by induction. In particular, $X^n H^m\in L$ for all
$m\le\deg(P)$. It is thus sufficient to consider elements of
the form $X^n H^m$. Given such an element, the coaction generates all
elements of the form $X^i H^j$ with $i\le n, j\le m$.
For given $n$, the characterising datum is the maximal $m$ so
that $X^n H^m\in L$. Due to the coaction this cannot decrease
with decreasing $n$ and due to the action of $X$ this can decrease at
most by $1$ when increasing $n$ by $1$. This leads to the
classification given. For $l\in\mathbb{N}_0$ and a finite set $I=\{n_1,\dots,n_m\}\subset\mathbb{N}$, the
corresponding crossed submodule
is generated by
\begin{gather*}
X^{n_m-1} H^{l+m-1}, X^{n_m+n_{m-1}-1} H^{l+m-2},\dots,
X^{(\sum_i n_i)-1} H^{l}\\
\text{and}\qquad
X^{(\sum_i n_i)+k} H^{l-1}\quad \forall k\ge 0\quad\text{if}\quad l>0
\end{gather*}
as a crossed module.
\end{proof}
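As a small sanity check (ours, for illustration), consider the pair
$(l,I)=(0,\{2\})$. The prescription above yields the single generator
$X^{n_1-1}H^{0}=X$, and closing under the action and coaction gives
\[ L=\operatorname{span}\{1,X\}, \]
since $H\triangleright X=X$, $X\triangleright X=X^2(H^0-(H+1)^0)=0$, and
the coaction $\cop X=X\otimes 1+1\otimes X$ forces $1\in L$. Indeed
$\dim L=2=\sum_{n\in I}n$, as stated.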
For the transition from the $q$-deformed (lemma
\ref{lem:uqbp_class}) to the classical case we
observe that the space spanned by $g^{s_1},\dots,g^{s_m}$ with $m$
different integers $s_i\in\mathbb{Z}$ maps to the space spanned by
$1, H, \dots, H^{m-1}$ in the
prescription of the classical limit (as described in section
\ref{sec:intro_limits}). I.e.\ the classical crossed submodule
characterised by an integer $l$ and a finite set $I\subset\mathbb{N}$ comes
from a crossed submodule characterised by this same $I$ and additionally $l$
other integers $j\in\mathbb{Z}$ for which $X^k g^{1-j}$ is included. In
particular, we have a one-to-one correspondence in the finite
dimensional case.
Formulating the analogue of corollary \ref{cor:uqbp_eclass} for the
classical case is now essentially straightforward. However, as for
\ensuremath{C(B_+)}{}, we obtain more crossed submodules than those from the $q$-deformed
setting. This is due to the degeneracy introduced by forgetting the
powers of $g$ and just retaining the number of different powers.
\begin{cor}
\label{cor:ubp_eclass}
(a) Proper left crossed \ensuremath{U(\lalg{b_+})}-submodules
$L\subset\ker\cou\subset\ensuremath{U(\lalg{b_+})}$ via the
left adjoint
action and left regular coaction (with subsequent projection to
$\ker\cou$ via $x\mapsto x-\cou(x)1$) are in one-to-one correspondence to
pairs $(l,I)$ with $l\in\mathbb{N}_0$ and $I\subset\mathbb{N}$ finite where $l\neq 0$
or $I\neq\emptyset$.
$\dim L<\infty$ iff $l=0$. In particular $\dim
L=(\sum_{n\in I}n)-1$ if $l=0$.
\end{cor}
As in the $q$-deformed setting, we give a description of the finite
dimensional differential calculi where we have a strict duality to
quantum tangent spaces.
\begin{prop}
(a) Finite dimensional differential calculi $\Gamma$ on \ensuremath{C(B_+)}{} and
finite dimensional quantum tangent spaces $L$ on \ensuremath{U(\lalg{b_+})}{} are
in one-to-one correspondence to non-empty finite sets $I\subset\mathbb{N}$.
In particular $\dim\Gamma=\dim L=(\sum_{n\in I} n)-1$.
The $\Gamma$ with $1\in I$ are in
one-to-one correspondence to the finite dimensional
calculi and quantum tangent spaces of the $q$-deformed setting
(theorem \ref{thm:q_calc}(a)).
(b) The differential calculus $\Gamma$ of dimension $n\ge 2$
corresponding to the
coirreducible one of \ensuremath{C_q(B_+)}{} (theorem \ref{thm:q_calc}(b)) has a right
invariant
basis $\eta_0,\dots,\eta_{n-1}$ so that
\begin{gather*}
\diff X=\eta_1+\eta_0 X \qquad
\diff g=\eta_0 g\\
[g, \eta_i]=0\ \forall i \qquad
[X, \eta_i]=\begin{cases}
0 & \text{if}\ i=0\ \text{or}\ i=n-1\\
\eta_{i+1} & \text{if}\ 0<i<n-1
\end{cases}
\end{gather*}
hold. The braided derivations obtained from the dual basis of the
corresponding $L$ are
given by
\begin{gather*}
\partial_i f=\frac{1}{i!}
\left(\frac{\partial}{\partial X}\right)^i f\qquad
\forall i\ge 1\\
\partial_0 f=\left(X \frac{\partial}{\partial X}+
g \frac{\partial}{\partial g}\right) f
\end{gather*}
for $f\in\ensuremath{C(B_+)}$.
(c) The differential calculus of dimension $n-1$
corresponding to the
one in (b) with $1$ removed from the characterising set is
the same as the one above, except that we set $\eta_0=0$ and
$\partial_0=0$.
\end{prop}
\begin{proof}
(a) We observe that the classifications of corollary
\ref{cor:cbp_class} and lemma \ref{lem:ubp_class} or
corollary \ref{cor:cbp_eclass} and corollary \ref{cor:ubp_eclass}
are dual to each other in the finite (co)dimensional case.
More
precisely, for $I\subset\mathbb{N}$ finite the crossed submodule $M$
corresponding to $(1,I)$ in corollary \ref{cor:cbp_class} is the
annihilator of the crossed
submodule $L$ corresponding to $(0,I)$ in lemma \ref{lem:ubp_class}
and vice versa.
$\ensuremath{C(B_+)}/M$ and $L$ are dual spaces with the induced pairing.
For non-empty $I$ this descends to
$M$ corresponding to $(1,I)$ in corollary
\ref{cor:cbp_eclass} and $L$ corresponding to $(0,I)$ in corollary
\ref{cor:ubp_eclass}.
For the dimension of $\Gamma$ note
$\dim\Gamma=\dim{\ker\cou/M}=\codim M$.
(b) For $I=\{1,n\}$ we choose in
$\ker\cou\subset\ensuremath{C(B_+)}$ the basis $\eta_0,\dots,\eta_{n-1}$ as the
equivalence classes of
$g-1,X,\dots,X^{n-1}$. The dual basis in $L$
is then $H,X,\dots,\frac{1}{k!}X^k,\dots,\frac{1}{(n-1)!}X^{n-1}$.
This leads to the
formulas given.
(c) For $I=\{n\}$ we get the same as in (b) except that $\eta_0$ and
$\partial_0$ disappear.
\end{proof}
The classical commutative calculus is the special case of (b) with
$n=2$. It is the only calculus of dimension $2$ with
$\diff g\neq 0$. Note that it is not coirreducible.
\section{The Dual Classical Limit}
\label{sec:dual}
We proceed in this section to the more interesting point of view where
we consider the classical algebras, but with their roles
interchanged. I.e.\ we view \ensuremath{U(\lalg{b_+})}{} as the ``function algebra''
and \ensuremath{C(B_+)}{} as the ``enveloping algebra''. Due to the self-duality of
\ensuremath{U_q(\lalg{b_+})}{}, we can again view the differential calculi and quantum tangent
spaces as classical limits of the $q$-deformed setting investigated in
section \ref{sec:q}.
In this dual setting the bicovariance constraint for differential
calculi becomes much
weaker. In particular, the adjoint action on a classical function
algebra is trivial due to commutativity and the adjoint coaction on a
classical enveloping algebra is trivial due to cocommutativity.
In effect, the correspondence with the
$q$-deformed setting is much weaker than in the ordinary case of
section \ref{sec:class}.
There are many more differential
calculi and quantum tangent spaces than in the $q$-deformed setting.
We will not attempt to classify all of them in the following but
essentially
content ourselves with those objects coming from the $q$-deformed setting.
\begin{lem}
\label{lem:cbp_dual}
Left \ensuremath{C(B_+)}-subcomodules $\subseteq\ensuremath{C(B_+)}$ via the left regular coaction are
$\mathbb{Z}$-graded subspaces of \ensuremath{C(B_+)}{} with $|X^n g^m|=n+m$,
stable under formal derivation in $X$.
By choosing any ordering in \ensuremath{C_q(B_+)}{}, left crossed submodules via left
regular action and adjoint coaction are in one-to-one correspondence
to certain subcomodules of \ensuremath{C(B_+)}{} by setting $q=1$. Direct sums
correspond to direct sums.
This descends to $\ker\cou\subset\ensuremath{C(B_+)}$ by the projection $x\mapsto
x-\cou(x) 1$.
\end{lem}
\begin{proof}
The coproduct on \ensuremath{C(B_+)}{} is
\[\cop(X^n g^k)=\sum_{r=0}^{n} \binom{n}{r}
X^{n-r} g^{k+r}\otimes X^r g^k\]
which we view as a left coaction.
Projecting on the left hand side of the tensor product onto $g^l$ in a
basis $X^n g^k$, we
observe that coacting on an element
$\sum_{n,k} a_{n,k} X^n g^k$ we obtain elements
$\sum_n a_{n,l-n} X^n g^{l-n}$ for all $l$.
I.e.\ elements of the form
$\sum_n b_n X^n g^{l-n}$ lie
separately in a subcomodule and it is
sufficient to consider such elements. Writing the coaction
on such an element as
\[\sum_t \frac{1}{t!} X^t g^{l-t}\otimes \sum_n b_n
\frac{n!}{(n-t)!} X^{n-t} g^{l-n}\]
we see that the coaction generates all formal derivatives in $X$
of this element. This gives us the classification: \ensuremath{C(B_+)}-subcomodules
$\subseteq\ensuremath{C(B_+)}$ under the left regular coaction are $\mathbb{Z}$-graded
subspaces with $|X^n g^m|=n+m$, stable under formal derivation in
$X$ given by $X^n
g^m \mapsto n X^{n-1} g^m$.
The correspondence with the \ensuremath{C_q(B_+)} case follows from
the trivial observation
that the coproduct of \ensuremath{C(B_+)}{} is the same as that of \ensuremath{C_q(B_+)}{} with $q=1$.
The restriction to $\ker\cou$ is straightforward.
\end{proof}
\begin{lem}
\label{lem:ubp_dual}
The process of obtaining the classical limit \ensuremath{U(\lalg{b_+})}{} from \ensuremath{U_q(\lalg{b_+})}{} is
well defined for subspaces and sends crossed \ensuremath{U_q(\lalg{b_+})}-submodules
$\subset\ensuremath{U_q(\lalg{b_+})}$ by
regular action and adjoint coaction to \ensuremath{U(\lalg{b_+})}-submodules $\subset\ensuremath{U(\lalg{b_+})}$
by regular
action. This map is injective in the finite codimensional
case. Intersections and codimensions are preserved in this case.
This descends to $\ker\cou$.
\end{lem}
\begin{proof}
To obtain the classical limit of a left ideal it is enough to
apply the limiting process (as described in section
\ref{sec:intro_limits}) to the
module generators (we can forget the additional comodule
structure). On the one hand,
any element generated by left multiplication with polynomials in
$g$ corresponds to some element generated by left multiplication with a
polynomial in $H$, that is, there will be no more generators in the
classical setting. On the other hand, left multiplication by a
polynomial in $H$ comes
from left multiplication by the same polynomial in $g-1$, that is,
there will be no fewer generators.
The maximal left crossed \ensuremath{U_q(\lalg{b_+})}-submodule $\subseteq\ensuremath{U_q(\lalg{b_+})}$
by left multiplication and adjoint coaction of
codimension $n$ ($n\ge 1$) is generated as a left ideal by
$\{1-q^{1-n}g,X^n\}$ (see lemma
\ref{lem:cqbp_class}). Applying the limiting process to this
leads to the
left ideal of \ensuremath{U(\lalg{b_+})}{} (which is not maximal for $n\neq 1$) generated by
$\{H+n-1,X^n\}$ having also codimension $n$.
More generally, the picture given for arbitrary finite codimensional left
crossed modules of \ensuremath{U_q(\lalg{b_+})}{} in terms of generators with respect to
polynomials in $g,g^{-1}$ in lemma \ref{lem:cqbp_class} carries over
by replacing factors
$1-q^{1-n}g$ with factors $H+n-1$ leading to generators with
respect to polynomials in $H$. In particular,
intersections go to intersections since the distinctness of
the factors for different $n$ is conserved.
The restriction to $\ker\cou$ is straightforward.
\end{proof}
We are now in a position to give a detailed description of the
differential calculi induced from the $q$-deformed setting by the
limiting process.
\begin{prop}
(a) Certain finite dimensional
differential calculi $\Gamma$ on \ensuremath{U(\lalg{b_+})}{} and quantum tangent spaces $L$
on \ensuremath{C(B_+)}{}
are in one-to-one correspondence to finite dimensional differential
calculi on \ensuremath{U_q(\lalg{b_+})}{} and quantum
tangent spaces on \ensuremath{C_q(B_+)}{}. Intersections correspond to intersections.
(b) In particular,
$\Gamma$ and $L$ corresponding to coirreducible differential calculi
on \ensuremath{U_q(\lalg{b_+})}{} and
irreducible quantum tangent spaces on \ensuremath{C_q(B_+)}{} via the limiting process
are given as follows:
$\Gamma$ has a right invariant basis
$\eta_0,\dots,\eta_{n-1}$ so that
\begin{gather*}
\diff X=\eta_1 \qquad \diff H=(1-n)\eta_0 \\
[H, \eta_i]=(1-n+i)\eta_i\quad\forall i\qquad
[X, \eta_i]=\begin{cases}
\eta_{i+1} & \text{if}\ \ i<n-1\\
0 & \text{if}\ \ i=n-1
\end{cases}
\end{gather*}
holds. The braided derivations corresponding to the dual basis of
$L$ are given by
\begin{gather*}
\partial_i\no{f}=\no{T_{1-n+i,H}
\frac{1}{i!}\left(\frac{\partial}{\partial X}\right)^i f}
\qquad\forall i\ge 1\\
\partial_0\no{f}=\no{T_{1-n,H} f - f}
\end{gather*}
for $f\in\k[X,H]$
with the normal ordering $\k[X,H]\to \ensuremath{U(\lalg{b_+})}$ via $H^n X^m\mapsto H^n X^m$.
\end{prop}
\begin{proof}
(a) The strict duality between \ensuremath{C(B_+)}-subcomodules $L\subseteq\ker\cou$
given by lemma \ref{lem:cbp_dual} and corollary \ref{cor:uqbp_eclass}
and \ensuremath{U(\lalg{b_+})}-modules $\ensuremath{U(\lalg{b_+})}/(\k 1+M)$ with $M$ given by lemma
\ref{lem:ubp_dual} and
corollary \ref{cor:cqbp_eclass} can be checked explicitly.
It is essentially due to mutual annihilation of factors $H+k$ in
\ensuremath{U(\lalg{b_+})}{} with elements $g^k$ in \ensuremath{C(B_+)}{}.
(b) $L$ is generated by
$\{g^{1-n}-1,Xg^{1-n},\dots,
X^{n-1}g^{1-n}\}$ and
$M$ is generated by $\{H(H+n-1),X(H+n-1),X^n \}$.
The formulas are obtained by denoting with
$\eta_0,\dots,\eta_{n-1}$ the equivalence classes of
$H/(1-n),X,\dots,X^{n-1}$ in $\ensuremath{U(\lalg{b_+})}/(\k 1+M)$.
The dual basis of $L$ is then
\[g^{1-n}-1,X g^{1-n},
\dots,\frac{1}{(n-1)!}X^{n-1}
g^{1-n}\]
\end{proof}
In contrast to the $q$-deformed setting and to the usual classical
setting the many freedoms in choosing a calculus leave us with many
$2$-dimensional calculi. It is not obvious which one we should
consider to be the ``natural'' one. Let us first look at the
$2$-dimensional calculus coming from the $q$-deformed
setting as described in (b). The relations become
\begin{gather*}
[\diff H, a]=\diff a\qquad [\diff X, a]=0\qquad\forall a\in\ensuremath{U(\lalg{b_+})}\\
\diff\no{f} =\diff H \no{\fdiff_{1,H} f}
+ \diff X \no{\frac{\partial}{\partial X} f}
\end{gather*}
for $f\in\k[X,H]$.
We might want to consider calculi which are closer to the classical
theory in the sense that derivatives are not finite differences but
usual derivatives. Let us therefore demand
\[\diff P(H)=\diff H \frac{\partial}{\partial H} P(H)\qquad
\text{and}\qquad
\diff P(X)=\diff X \frac{\partial}{\partial X} P(X)\]
for polynomials $P$ and ${\diff X}\neq 0$ and ${\diff H}\neq 0$.
\begin{prop}
\label{prop:nat_bp}
There is precisely one differential calculus of dimension $2$ meeting
these conditions. It obeys the relations
\begin{gather*}
[a,\diff H]=0\qquad [X,\diff X]=0\qquad [H,\diff X]=\diff X\\
\diff \no{f} =\diff H \no{\frac{\partial}{\partial H} f}
+\diff X \no{\frac{\partial}{\partial X} f}
\end{gather*}
where the normal ordering $\k[X,H]\to \ensuremath{U(\lalg{b_+})}$ is given by
$X^n H^m\mapsto X^n H^m$.
\end{prop}
\begin{proof}
Let $M$ be the left ideal corresponding to the calculus. It is easy to
see that for a primitive element $a$ the classical derivation condition
corresponds to $a^2\in M$ and $a\notin M$. In our case $X^2,H^2\in
M$. If we take the
ideal generated from these two elements we obtain an ideal of
$\ker\cou$ of codimension $3$. Now, it is sufficient without loss of
generality to add a generator of the form $\alpha H+\beta X+\gamma
XH$. $\alpha$ and $\beta$ must then be zero in order not
to generate $X$ or $H$ in $M$.
I.e.\ $M$ is generated by $H^2,
XH, X^2$. The relations stated follow.
\end{proof}
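As a quick consistency check (ours), the stated relations reproduce the
Leibniz rule on the normal-ordered monomial $XH$: using $[a,\diff H]=0$
we find
\[ \diff(XH)=(\diff X)H+X\,\diff H=\diff H\,\no{X}+\diff X\,\no{H}, \]
in agreement with
$\diff\no{f}=\diff H\no{\frac{\partial}{\partial H}f}
+\diff X\no{\frac{\partial}{\partial X}f}$ for $f=XH$.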
\section{Remarks on $\kappa$-Minkowski Space and Integration}
\label{sec:kappa}
There is a straightforward generalisation of \ensuremath{U(\lalg{b_+})}.
Let us define the Lie algebra $\lalg b_{n+}$ as generated by
$x_0,\dots, x_{n-1}$ with relations
\[ [x_0,x_i]=x_i\qquad [x_i,x_j]=0\qquad\forall i,j\ge 1\]
Its enveloping algebra \ensuremath{U(\lalg{b}_{n+})}{} is nothing but (rescaled) $\kappa$-Minkowski
space as introduced in \cite{MaRu}. In this section we make some
remarks about its intrinsic geometry.
We have a surjective Lie algebra
homomorphism $\lalg b_{n+}\to \lalg b_+$ given by
$x_0\mapsto H$ and $x_i\mapsto X$.
This is an isomorphism for $n=2$. The surjective Lie algebra
homomorphism extends to a surjective homomorphism of enveloping
algebras $\ensuremath{U(\lalg{b}_{n+})}\to \ensuremath{U(\lalg{b_+})}$ in the obvious way. This gives rise
to an injective map from the set of submodules of \ensuremath{U(\lalg{b_+})}{} to the set of
submodules of \ensuremath{U(\lalg{b}_{n+})}{} by taking the pre-image. In
particular this induces an injective
map from the set of differential calculi on \ensuremath{U(\lalg{b_+})}{} to the set of
differential calculi on \ensuremath{U(\lalg{b}_{n+})}{} which are invariant under permutations
of the $x_i$, $i\ge 1$.
\begin{cor}
\label{cor:nat_bnp}
There is a natural $n$-dimensional differential calculus on \ensuremath{U(\lalg{b}_{n+})}{}
induced from the one considered in proposition
\ref{prop:nat_bp}.
It obeys the relations
\begin{gather*}
[a,\diff x_0]=0\quad\forall a\in \ensuremath{U(\lalg{b}_{n+})}\qquad [x_i,\diff x_j]=0
\quad [x_0,\diff x_i]=\diff x_i\qquad\forall i,j\ge 1\\
\diff \no{f} =\sum_{\mu=0}^{n-1}\diff x_{\mu}
\no{\frac{\partial}{\partial x_{\mu}} f}
\end{gather*}
where the normal ordering is given by
\[\k[x_0,\dots,x_{n-1}]\to \ensuremath{U(\lalg{b}_{n+})}\quad\text{via}\quad
x_{n-1}^{m_{n-1}}\cdots
x_0^{m_0}\mapsto x_{n-1}^{m_{n-1}}\cdots x_0^{m_0}\]
\end{cor}
\begin{proof}
The calculus is obtained from the ideal generated by
\[x_0^2,x_i x_j, x_i x_0\qquad\forall i,j\ge 1\]
being the pre-image of
$H^2,XH,X^2$ in \ensuremath{U(\lalg{b_+})}{}.
\end{proof}
Let us try to push the analogy with the commutative case further and
take a look at the notion of integration. The natural way to encode
the condition of translation invariance from the classical context
in the quantum group context
is given by the condition
\[(\int\otimes\id)\circ\cop a=1 \int a\qquad\forall a\in A\]
which defines a right integral on a quantum group $A$
\cite{Sweedler}.
(Correspondingly, we have the notion of a left integral.)
Let us
formulate a slightly
weaker version of this equation
in the context of a Hopf algebra $H$ dually paired with
$A$. We write
\[\int (h-\cou(h))\triangleright a = 0\qquad \forall h\in H, a\in A\]
where the action of $H$ on $A$ is the coregular action
$h\triangleright a = a_{(1)}\langle a_{(2)}, h\rangle$
given by the pairing.
In the present context we set $A=\ensuremath{U(\lalg{b}_{n+})}$ and $H=\ensuremath{C(B_{n+})}$. We define the
latter as a generalisation of \ensuremath{C(B_+)}{} with commuting
generators $g,p_1,\dots,p_{n-1}$ and coproducts
\[\cop p_i=p_i\otimes 1+g\otimes p_i\qquad \cop g=g\otimes g\]
This can be identified (upon rescaling) as the momentum sector of the
full $\kappa$-Poincar\'e algebra (with $g=e^{p_0}$).
The pairing is the natural extension of (\ref{eq:pair_class}):
\[\langle x_{n-1}^{m_{n-1}}\cdots x_1^{m_1} x_0^{k},
p_{n-1}^{r_{n-1}}\cdots p_1^{r_1} g^s\rangle
= \delta_{m_{n-1},r_{n-1}}\cdots\delta_{m_1,r_1} m_{n-1}!\cdots m_1!
s^k\]
The resulting coregular
action is conveniently expressed as (see also \cite{MaRu})
\[p_i\triangleright\no{f}=\no{\frac{\partial}{\partial x_i} f}\qquad
g\triangleright\no{f}=\no{T_{1,x_0} f}\]
with $f\in\k[x_0,\dots,x_{n-1}]$.
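For instance (our illustration, reading $T_{1,x_0}$ as the shift
operator $x_0\mapsto x_0+1$, as the notation suggests):
\[ p_i\triangleright\no{x_i^2}=\no{2x_i}\qquad
g\triangleright\no{x_0^2}=\no{(x_0+1)^2}=\no{x_0^2+2x_0+1}. \]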
Due to cocommutativity, the notions of left and right integral
coincide. The invariance conditions for integration become
\[\int \no{\frac{\partial}{\partial x_i} f}=0\quad
\forall i\in\{1,\dots,n-1\}
\qquad\text{and}\qquad \int \no{\fdiff_{1,x_0} f}=0\]
The condition on the left is familiar and states the invariance under
infinitesimal translations in the $x_i$. The condition on the right states the
invariance under integer translations in $x_0$. However, we should
remember that we use a certain algebraic model of \ensuremath{C(B_{n+})}{}. We might add,
for example, a generator $p_0$
to \ensuremath{C(B_{n+})}{}
that is dual to $x_0$ and behaves
as the ``logarithm'' of $g$, i.e.\ acts as an infinitesimal
translation in $x_0$. We then have the condition of infinitesimal
translation invariance
\[\int \no{\frac{\partial}{\partial x_{\mu}} f}=0\]
for all $\mu\in\{0,1,\dots,{n-1}\}$.
In the present purely algebraic context these conditions do not make
much sense. In fact they would force the integral to be zero on the
whole algebra. This is not surprising, since we are dealing only with
polynomial functions which would not be integrable in the classical
case either.
In contrast, if we had for example the algebra of smooth functions
in two real variables, the conditions just characterise the usual
Lebesgue integral (up to normalisation).
Let us assume $\k=\mathbb{R}$ and suppose that we have extended the normal
ordering vector
space isomorphism $\mathbb{R}[x_0,\dots,x_{n-1}]\cong \ensuremath{U(\lalg{b}_{n+})}$ to a vector space
isomorphism of some sufficiently large class of functions on $\mathbb{R}^n$ with a
suitable completion $\hat{U}(\lalg{b_{n+}})$ in a functional
analytic framework (embedding \ensuremath{U(\lalg{b}_{n+})}{} in some operator algebra on a
Hilbert space). It is then natural to define the integration on
$\hat{U}(\lalg{b_{n+}})$ by
\[\int \no{f}=\int_{\mathbb{R}^n} f\ dx_0\cdots dx_{n-1}\]
where the right hand side is just the usual Lebesgue integral in $n$
real variables $x_0,\dots,x_{n-1}$. This
integral is unique (up to normalisation) in
satisfying the covariance conditions since, as we have seen,
these correspond
just to the usual translation invariance in the classical case via normal
ordering, for which the Lebesgue integral is the unique solution.
It is also the $q\to 1$ limit of the translation invariant integral on
\ensuremath{U_q(\lalg{b_+})}{} obtained in \cite{Majid_qreg}.
We see that the natural differential calculus in corollary
\ref{cor:nat_bnp} is
compatible with this integration in that the appearing braided
derivations are exactly the actions of the translation generators
$p_{\mu}$. However, we should stress that this calculus is not
covariant under the full $\kappa$-Poincar\'e algebra, since it was
shown in \cite{GoKoMa} that in $n=4$ there is no such
calculus of dimension $4$. Our results therefore indicate a new
intrinsic approach to $\kappa$-Minkowski space that allows a
bicovariant
differential calculus of dimension $4$ and a unique translation
invariant integral by normal ordering and Lebesgue integration.
\section*{Acknowledgements}
I would like to thank S.~Majid for proposing this project,
and for fruitful discussions during the preparation of this paper.
\section{Introduction}
Continuous Engineering (CE) practices,
such as Continuous Integration (CI) and Continuous Deployment (CD),
are gaining prominence in software engineering,
as they help streamline and optimize the way software is built, tested and shipped.
The most salient advantage of CE is the tighter feedback loops:
CE practices help developers test and build their software more frequently,
and make software releases less brittle by enabling more incremental releases.
Nevertheless, a frequently reported barrier for success is the need to effectively analyze
the data that results from the numerous build and test
runs~\cite{Laukkanen2017,Hilton2017,Shahin2017,Debbiche2014,Olsson2012}.
One evident example of this is the handling and
analysis of results from complex end-to-end integration tests
which we focus on in this paper:
CE practices make it easier to run such end-to-end tests,
which include system integration and deployment to production hardware,
and they are critical for ensuring the quality of the end product.
However, since these end-to-end tests by their nature can fail for multiple
reasons, not least in the sense that new product code can make the tests
fail in new ways, it is critical to rapidly diagnose these failures.
In this paper we concern ourselves with how to rapidly analyze a set
of logs resulting from complex CE tasks\footnote{~For simplicity, and without loss of generality,
we will refer to these CE tasks as ``integration tests'' or ``tests'' throughout the paper,
though we acknowledge that they include more than just testing,
such as building the system and deploying it on hardware in a test or staging environment,
and failures can occur in any of these phases.
The proposed approach aims to cover all these situations,
and is evaluated on real-life logs capturing everything from building the system,
to deploying it on production hardware,
and running complex integration and interaction scenarios.}
where the overall outcome of the task (i.e. 'fail' or 'pass') is known,
but where analysts must consult the resulting logs to fully diagnose why the failures occurred.
Since these logs can get large and unwieldy, we
develop a tool that automatically suggests which segments in the logs
are most likely relevant for troubleshooting purposes.
Our method gives each event in the log an interestingness score based
on the overall event frequencies in the test result set: The log
events are in turn clustered based on these scores, and the event
clusters are presented to the user in decreasing order of overall
interestingness. The goal is to enable users to find all relevant
diagnostic information in the first presented event cluster, while having the
option of retrieving additional clusters if needed. An
additional benefit of our method is that the extracted events can help
identify commonly occurring patterns that are symptomatic for specific
errors. Future logs that exhibit the same characteristics can then be
automatically classified as having symptoms of that error.
\head{Contributions} We present Spectrum-Based Log Diagnosis (SBLD), a method for helping developers quickly find the
most relevant segments of a log. Using data from \CiscoNorway{an
industrial partner}, we empirically evaluate SBLD by investigating the following
three questions:
(i) How well does SBLD reduce the \emph{effort needed} to identify all \emph{failure-relevant events} in the log for a failing run?
(ii) How is the \emph{performance} of SBLD affected by \emph{available data}?
(iii) How does SBLD compare to searching for \emph{simple textual patterns} that often occur in failure-relevant events?
\head{Overview}
The rest of the paper is structured as follows: Section~\ref{sec:approach}
explains SBLD and the methodology underlying its event ranking
procedures. Sections~\ref{sec:rqs} and~\ref{sec:expdesign} motivate our research questions
and empirical design. We report and discuss our results in
Section~\ref{sec:resdiscuss}. Section~\ref{sec:relwork} surveys related work,
and we discuss threats to validity in Section~\ref{sec:ttv} before concluding
in Section~\ref{sec:conclusion}.
%
\section{Approach}
\label{sec:approach}
\begin{figure}[b]
\includegraphics[width=0.99\columnwidth]{overview.pdf}
\vspace*{-2ex}
\caption{A visual overview of our approach.}
\label{fig:approach}
\end{figure}
SBLD takes as input a set of log files from failing test runs, a set of log files from successful test runs, and a single log file from a failing run, called the \emph{target log}, that the user wants analyzed. It produces a list of segments from the target log that are likely relevant for understanding why the corresponding test run failed.
In the following we explain the workings of SBLD in a stepwise
manner. At each step, we present the technical background needed to
understand how SBLD accomplishes its task. A visual overview of SBLD is
shown in Figure \ref{fig:approach}.
\head{Prerequisites}
First of all, SBLD requires access to a set of log files from failing test runs and a set of log files from successful test runs.
For brevity, we will refer to log files from failing test runs as 'failing logs',
and log files from successful test runs as 'passing logs'.%
\footnote{~Note that we explicitly assume that the outcome of each run is known;
This work is not concerned with determining whether the run was a failure or a success,
but rather with helping identify why the failing runs failed.}
We also require a programmatic way of segmenting each log file
into individually meaningful components. For the dataset used in this
paper these components are \emph{events} in the form of blocks of text
preceded by a date and a time-stamp in a predictable format. Lastly,
we require that run-time specific information such as timestamps,
dynamically generated IP addresses, check-sums and so on are removed
from the logs and replaced with standardized text. We refer to the process of
enforcing these requirements and delineating the log into events as
the \emph{abstraction} step. This enables SBLD to treat events
like ``2019-04-05 19:19:22.441 CEST: Alice calls Bob'' and ``2019-04-07
13:12:11.337 CEST: Alice calls Bob'' as two instances of the same
generic event "Alice calls Bob". The appropriate degree of abstraction
and how to meaningfully delineate a log will be context-dependent
and thus we require the user to perform these steps before using SBLD.
In the current paper we use an abstraction mechanism
and dataset generously provided by \CiscoNorway{our industrial partner}.
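To illustrate what an abstraction step might look like, the following
sketch (ours; the regular expressions are illustrative assumptions, not
the proprietary mechanism used in our evaluation) replaces run-time
specific tokens with placeholders before events are delineated:
\begin{verbatim}
import re

TIMESTAMP = re.compile(
    r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+ [A-Z]+")
IP_ADDRESS = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def abstract_event(raw_event: str) -> str:
    """Replace run-time specific tokens with placeholders."""
    event = TIMESTAMP.sub("<TIMESTAMP>", raw_event)
    event = IP_ADDRESS.sub("<IP>", event)
    return event

# "2019-04-05 19:19:22.441 CEST: Alice calls Bob" and
# "2019-04-07 13:12:11.337 CEST: Alice calls Bob" both
# abstract to "<TIMESTAMP>: Alice calls Bob".
\end{verbatim}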
\renewcommand{\Ncf}{\ensuremath{\text{N}_\text{FI}}} %
\renewcommand{\Nuf}{\ensuremath{\text{N}_\text{FE}}} %
\renewcommand{\Ncs}{\ensuremath{\text{N}_\text{PI}}} %
\renewcommand{\Nus}{\ensuremath{\text{N}_\text{PE}}} %
\head{Computing coverage and event relevance} SBLD requires an assumption about what makes an event \emph{relevant}
and a method for computing this relevance. Our method takes inspiration
from Spectrum-Based Fault Localization (SBFL) in which the suspiciousness
or fault-proneness of a program statement is treated as a function of
the number of times the statement was activated in a failing test case,
combined with the number of times it is skipped in a passing test case~\cite{Jones2002,Abreu2007,Abreu2009}.
The four primitives that need to be computed are shown on the right-hand side in Table~\ref{table:measures}.
We treat each abstracted event as a statement and study their occurrences
in the logs like Fault Localization tracks the activation of statements in test cases.
We compute the analysis primitives by devising a binary
\emph{coverage matrix} whose columns represent every unique event
observed in the set of failing and successful logs while each row $r$
represents a log and tracks whether the event at column $c$ occurred in
log $r$ (1), or not (0), as shown in Figure~\ref{fig:approach}.
By computing these primitives, we can rank each event by using an
\emph{interestingness measure} (also referred to as ranking
metric, heuristic, or similarity coefficient~\cite{Wong2016}).
The choice of interestingness measure
is ultimately left to the user, as these are context dependent and
there is no generally optimal choice of interestingness measure~\cite{Yoo2014}.
In this paper we consider a
selection of nine interestingness measures prominent in the literature
and a simple metric that emphasizes the events that exclusively occur
in failing logs in the spirit of the \emph{union model} discussed
by Renieres et al.~\cite{renieres2003:fault}. We
report on the median performance of these interestingness measures with the intention of providing a
representative, yet unbiased, result. The ten measures considered are
precisely defined in Table~\ref{table:measures}.
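To make the computation concrete, the following sketch (ours) derives
the four primitives from the logs, modelling each log as a set of
abstracted events (i.e.\ one row of the binary coverage matrix), and
evaluates the Tarantula measure as an example:
\begin{verbatim}
def spectrum(failing_logs, passing_logs):
    """Compute (N_FI, N_FE, N_PI, N_PE) per distinct event.

    Each log is modelled as a set of abstracted events."""
    events = set().union(*failing_logs, *passing_logs)
    primitives = {}
    for e in events:
        n_fi = sum(1 for log in failing_logs if e in log)
        n_pi = sum(1 for log in passing_logs if e in log)
        primitives[e] = (n_fi, len(failing_logs) - n_fi,
                         n_pi, len(passing_logs) - n_pi)
    return primitives

def tarantula(n_fi, n_fe, n_pi, n_pe):
    """The Tarantula interestingness measure."""
    fail_rate = n_fi / (n_fi + n_fe)
    pass_rate = n_pi / (n_pi + n_pe)
    denom = fail_rate + pass_rate
    return fail_rate / denom if denom else 0.0
\end{verbatim}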
\begin{table*}
\centering
\begin{tabular}{c@{\hspace{10mm}}c}
{\renewcommand{\arraystretch}{1.7} %
\begin{tabular}{lc}
\toprule
measure & formula \\\midrule
Tarantula \cite{Jones2001,Jones2002} & %
\( \frac{ \frac{ \cef{} }{ \cef{} + \cnf{} } }{ \frac{ \cef{} }{ \cef{} + \cnf{} } + \frac{ \cep{} }{ \cep{} + \cnp{} } } \)
\\
Jaccard \cite{Jaccard1912,Chen2002} & %
\( \frac{ \Ncf }{ \Ncf + \Nuf + \Ncs } \)
\\
Ochiai \cite{Ochiai1957,Abreu2006} & %
\( \frac{ \Ncf }{ \sqrt{ ( \cef + \cnf ) \times ( \cef + \cep ) } } \)
\\
Ochiai2 \cite{Ochiai1957, Naish2011} & %
\( \frac{ \Aef \times \Anp }{ \sqrt{ ( \Aef + \Aep ) \times ( \Anf + \Anp ) \times ( \Aef + \Anf) \times ( \Aep + \Anp ) } } \)
\\
Zoltar \cite{Gonzalez2007} & %
\( \frac{ \Ncf }{ \Ncf + \Nuf + \Ncs + \frac { 10000 \times \Nuf \times \Ncs }{ \Ncf } } \)
\\
D$^\star$ \cite{Wong2014} (we use $\star = 2$) & %
\( \frac{ (\cef)^\star }{ \cnf + \cep } \)
\\
O$^p$ \cite{Naish2011} & %
\( \Aef - \frac{ \Aep }{ \Aep + \Anp + 1} \)
\\
Wong3 \cite{Wong2007,Wong2010} &
\( \Aef - h, \text{where~} h = \left\{
\scalebox{.8}{\(\renewcommand{\arraystretch}{1} %
\begin{array}{@{}ll@{}}
\Aep & \text{if~} \Aep \leq 2 \\
2 + 0.1(\Aep - 2) & \text{if~} 2 < \Aep \leq 10 \\
2.8 + 0.001(\Aep - 10) & \text{if~} \Aep > 10 \\
\end{array}\)}
\right. \)
\\
Kulczynski2 \cite{Kulczynski1927,Naish2011} & %
\( \frac{ 1 }{ 2 } \times ( \frac{ \Aef }{ \Aef + \Anf } + \frac{ \Aef }{ \Aef + \Aep } ) \)
\\
Failed only & %
\( \left\{\scalebox{.8}{\(\renewcommand{\arraystretch}{1} %
\begin{array}{@{}ll@{}}
1 & \text{if~} \Ncs = 0 \\
0 & \text{otherwise~} \\
\end{array}\)}
\right. \)
\\
\bottomrule
\end{tabular}} &
\begin{tabular}{lp{2.99cm}}
\toprule
\multicolumn{2}{l}{notation used} \\\midrule
\Ncf & number of \emph{failing} logs \\ & that \emph{include} the event \\
\Nuf & number of \emph{failing} logs \\ & that \emph{exclude} the event \\
\Ncs & number of \emph{passing} logs \\ & that \emph{include} the event \\
\Nus & number of \emph{passing} logs \\ & that \emph{exclude} the event \\
\bottomrule
\end{tabular}
\end{tabular}\vspace*{1ex}
\caption{\label{table:measures}The 10 interestingness measures under consideration in this paper.}
\vspace*{-3ex}
\end{table*}
\head{Analyzing a target log file} Using our database of event scores,
we first identify the events occurring in the target log file and the
interestingness scores associated with these events. Then, we group
similarly scored events together using a clustering algorithm. Finally,
we present the best performing cluster of events to the end user. The
clustering step helps us make a meaningful selection of events rather
than setting an often arbitrary window selection size. Among other
things, it prevents two identically scored events from falling at
opposite sides of the selection threshold. If the user suspects that
the best performing cluster did not report all relevant events, she can
inspect additional event clusters in order of decreasing
aggregate interestingness score. To perform the clustering step we use Hierarchical Agglomerative
Clustering (HAC) with Complete linkage~\cite{manning2008introduction}, where
sub-clusters are merged until the maximal distance between members of
each candidate cluster exceeds some specified threshold. In SBLD,
this threshold is the uncorrected sample standard deviation of the event
scores for the events being clustered.\footnote{~Specifically,
we use the \texttt{numpy.std} procedure from the SciPy framework~\cite{2020SciPy-NMeth},
in which the uncorrected sample standard deviation is given by
$ \sqrt{\frac{1}{N} \sum_{i=1}^{N}\lvert x_{i} - \bar{x} \rvert^2} $ where
$\bar{x}$ is the sample mean of the interestingness scores obtained for the
events in the log being analyzed and $N$ is the number of events in the log.}
This ensures that the ``interestingness-distance'' between two events
in a cluster never exceeds the uncorrected sample standard deviation observed in the set.
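A minimal sketch of the clustering step (ours; using the cluster mean
as a stand-in for the aggregate interestingness score, and assuming at
least two events in the log):
\begin{verbatim}
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def best_cluster(events, scores):
    """Group similarly scored events with complete-linkage
    HAC and return the highest-scoring cluster."""
    scores = np.asarray(scores, dtype=float)
    threshold = np.std(scores)  # uncorrected sample std
    Z = linkage(scores.reshape(-1, 1), method="complete")
    labels = fcluster(Z, t=threshold, criterion="distance")
    best = max(set(labels),
               key=lambda c: scores[labels == c].mean())
    return [e for e, c in zip(events, labels) if c == best]
\end{verbatim}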
%
\section{Research Questions}
\label{sec:rqs}
The goal of this paper is to present SBLD and help practitioners make
an informed decision about whether SBLD meets their needs. To this end, we have identified
three research questions that encompass several concerns practitioners
are likely to have and that are also of interest to the research community at
large:
\begin{enumerate}[\bfseries RQ1]
\item How well does SBLD reduce the effort needed to identify all
known-to-be relevant events (``does it work?'')?
\item How is the efficacy of SBLD impacted by increased evidence in the form of
additional failing and passing logs (``how much data do we need before
running the analysis?'')?
\item How does SBLD perform compared to a strategy based on searching for
common textual patterns with a tool like \texttt{grep} (``is it better than doing the obvious thing?'')?
\end{enumerate}
RQ1 looks at the aggregated performance of SBLD to assess its viability.
With RQ2 we assess how sensitive the performance is to the amount of
available data: How many logs should you have before you can expect the
analysis to yield good results? Is more data unequivocally a good thing?
What type of log is more informative: A passing log or a failing log?
Finally, we compare SBLD's performance to a more traditional method for
finding relevant segments in logs: Using a textual search for strings
one expects to occur near informative segments, like
"failure" and "error". The next section details the dataset used, our
chosen quality measures for assessment and our methodology for answering
each research question.
%
\section{Experimental Design}
\label{sec:expdesign}
\begin{table}
\centering
\caption{The key per-test attributes of our dataset. Two events are considered
distinct if they are treated as separate events after the abstraction
step. A "mixed" event is an event that occurs in logs of both failing and
passing runs.}
\vspace*{-1ex}
\label{table:descriptive}
\renewcommand{\tabcolsep}{0.11cm}\small
\begin{tabular}{rcrrrrrr}
\toprule
& & \# fail & \# pass & distinct & fail-only & mixed & pass-only \\
test & signature & logs & logs & events & events & events & events \\
\midrule
1 & C & 24 & 100 & 36391 & 21870 & 207 & 14314 \\
2 & E & 11 & 25 & 380 & 79 & 100 & 201 \\
3 & E & 11 & 25 & 679 & 174 & 43 & 462 \\
4 & E & 4 & 25 & 227 & 49 & 39 & 139 \\
5 & C & 2 & 100 & 33420 & 2034 & 82 & 31304 \\
6 & C & 19 & 100 & 49155 & 15684 & 893 & 32578 \\
7 & C & 21 & 100 & 37316 & 17881 & 154 & 19281 \\
8 & C & 4 & 100 & 26614 & 3976 & 67 & 22571 \\
9 & C & 21 & 100 & 36828 & 19240 & 228 & 17360 \\
10 & C & 22 & 100 & 110479 & 19134 & 1135 & 90210 \\
11 & E & 5 & 25 & 586 & 95 & 47 & 444 \\
12 & E & 7 & 25 & 532 & 66 & 18 & 448 \\
13 & C & 2 & 100 & 15351 & 2048 & 232 & 13071 \\
14 & C & 3 & 100 & 16318 & 2991 & 237 & 13090 \\
15 & C & 26 & 100 & 60362 & 20964 & 1395 & 38003 \\
16 & C & 12 & 100 & 2206 & 159 & 112 & 1935 \\
17 & E & 8 & 25 & 271 & 58 & 98 & 115 \\
18 & A & 23 & 75 & 3209 & 570 & 156 & 2483 \\
19 & C & 13 & 100 & 36268 & 13544 & 411 & 22313 \\
20 & B & 3 & 19 & 688 & 69 & 31 & 588 \\
21 & B & 22 & 25 & 540 & 187 & 94 & 259 \\
22 & E & 1 & 25 & 276 & 11 & 13 & 252 \\
23 & C & 13 & 100 & 28395 & 13629 & 114 & 14652 \\
24 & E & 7 & 26 & 655 & 117 & 56 & 482 \\
25 & C & 21 & 100 & 44693 & 18461 & 543 & 25689 \\
26 & C & 21 & 100 & 42259 & 19434 & 408 & 22417 \\
27 & C & 21 & 100 & 44229 & 18115 & 396 & 25718 \\
28 & C & 20 & 100 & 43862 & 16922 & 642 & 26298 \\
29 & C & 28 & 100 & 54003 & 24216 & 1226 & 28561 \\
30 & C & 31 & 100 & 53482 & 26997 & 1063 & 25422 \\
31 & C & 27 & 100 & 53092 & 23283 & 463 & 29346 \\
32 & C & 21 & 100 & 55195 & 19817 & 768 & 34610 \\
33 & E & 9 & 25 & 291 & 70 & 30 & 191 \\
34 & D & 2 & 13 & 697 & 76 & 92 & 529 \\
35 & E & 9 & 25 & 479 & 141 & 47 & 291 \\
36 & E & 10 & 75 & 1026 & 137 & 68 & 821 \\
37 & E & 7 & 25 & 7165 & 1804 & 94 & 5267 \\
38 & E & 4 & 25 & 647 & 67 & 49 & 531 \\
39 & G & 47 & 333 & 3350 & 428 & 144 & 2778 \\
40 & G & 26 & 333 & 3599 & 240 & 157 & 3202 \\
41 & G & 26 & 332 & 4918 & 239 & 145 & 4534 \\
42 & C & 17 & 100 & 30411 & 14844 & 348 & 15219 \\
43 & F & 267 & 477 & 10002 & 3204 & 1519 & 5279 \\
44 & C & 9 & 100 & 29906 & 8260 & 274 & 21372 \\
45 & E & 3 & 25 & 380 & 44 & 43 & 293 \\
\bottomrule
\end{tabular}
\vspace*{-2ex}
\end{table}
%
\begin{table}
\centering
\caption{Ground-truth signatures and their occurrences in distinct events.}
\label{table:signature}
\vspace*{-1ex}
\small
\begin{tabular}{ccrrrc}
\toprule
& sub- & fail-only & pass-only & fail \& & failure \\
signature & pattern & events & events & pass & strings* \\
\midrule
A & 1 & 1 & 0 & 0 & yes \\
A & 2 & 2 & 0 & 0 & no \\
B & 1 & 2 & 0 & 0 & yes \\
C & 1 & 21 & 0 & 0 & yes \\
C & 2 & 21 & 0 & 0 & yes \\
D & 1 & 4 & 0 & 0 & yes \\
\textbf{D$^{\#}$} & \textbf{2} & 69 & 267 & 115 & no \\
\textbf{D$^{\#}$} & \textbf{3} & 2 & 10 & 13 & no \\
\textbf{E$^{\#}$} & \textbf{1} & 24 & 239 & 171 & no \\
E & 1 & 1 & 0 & 0 & no \\
E & 2 & 9 & 0 & 0 & no \\
E & 3 & 9 & 0 & 0 & yes \\
E & 4 & 23 & 0 & 0 & yes \\
F & 1 & 19 & 0 & 0 & yes \\
F & 2 & 19 & 0 & 0 & no \\
F & 3 & 19 & 0 & 0 & yes \\
F & 4 & 14 & 0 & 0 & yes \\
G & 1 & 2 & 0 & 0 & yes \\
G & 2 & 1 & 0 & 0 & no \\
G & 3 & 1 & 0 & 0 & no \\
\bottomrule
\multicolumn{6}{l}{* signature contains the lexical patterns 'error', 'fault' or 'fail*'}\\
\multicolumn{6}{l}{$^{\#}$ sub-patterns that were removed to ensure a clean ground truth}
\end{tabular}
\vspace*{-3ex}
\end{table}
\subsection{Dataset and ground truth}
\label{sec:dataset}
Our dataset provided by \CiscoNorway{our industrial partner} consists
of failing and passing log files from 45 different end-to-end integration
tests. In addition to the log text we also have data on when a given
log file was produced. Most test-sets span a time-period of 38 days, while
the largest set (test 43 in Table~\ref{table:descriptive}) spans 112
days. Each failing log is known to exemplify symptoms of one of seven
known errors, and \CiscoNorway{our industrial partner} has given us a
set of regular expressions that help determine which events are relevant
for a given known error. We refer to the set of regular expressions
that identify a known error as a \emph{signature} for that error. These
signatures help us construct a ground truth for our investigation.
Moreover, an important motivation for developing SBLD is to help create
signatures for novel problems: The events highlighted by SBLD should be
characteristic of the observed failure, and the textual contents of the
events can be used in new signature expressions.
Descriptive facts about our dataset is listed in
Table~\ref{table:descriptive} while Table~\ref{table:signature}
summarizes key insights about the signatures used.
Ideally, our ground truth should highlight exactly and \emph{only} the
log events that an end user would find relevant for troubleshooting
an error. However, the signatures used in this investigation were
designed to find sufficient evidence that the \emph{entire log} in
question belongs to a certain error class: the log might contain other
events that a human user would find equally relevant for diagnosing
a problem, but the signature in question might not encompass these
events. Nevertheless, the events that constitute sufficient evidence
for assigning the log to a given error class are presumably relevant
and should be presented as soon as possible to the end user. However,
if our method cannot differentiate between these signature events and
other events we cannot say anything certain about the relevance of
those other events. This fact is reflected in our choice of quality
measures, specifically in how we assess the precision of the approach. This
is explained in detail in the next section.
When producing the ground truth, we first ensured that a log would only be
associated with a signature if the entire log taken as a whole satisfied all
the sub-patterns of that signature. If so, we then determined which events
the patterns were matching on. These events constitute the known-to-be relevant
set of events for a given log. However, we identified some problems with two of the provided
signatures that made them unsuitable for assessing SBLD. Signature \emph{E}
(see Table~\ref{table:signature}) had a sub-pattern that searched for a ``starting test''-prefix that necessarily
matches on the first event in all logs due to the structure of the logs.
Similarly, signature \emph{D} contained two sub-patterns that necessarily
match all logs in the set--in this case by searching for whether the test
was run on a given machine, which was true for all logs for the corresponding
test. We therefore elected to remove these sub-patterns from the signatures
before conducting the analysis.
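A sketch (ours; signatures are modelled as lists of regular-expression
strings) of the ground-truth extraction just described:
\begin{verbatim}
import re

def ground_truth_events(log_events, signature):
    """Events matched by a signature's sub-patterns,
    provided the log taken as a whole satisfies every
    sub-pattern of the signature."""
    patterns = [re.compile(p) for p in signature]
    if not all(any(p.search(e) for e in log_events)
               for p in patterns):
        return set()  # log is not in this error class
    return {e for e in log_events
            if any(p.search(e) for p in patterns)}
\end{verbatim}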
\subsection{Quality Measures}
As a measure of how well SBLD reports all known-to-be relevant log
events, we measure \emph{recall in best cluster}, which we for brevity refer to
as simply \emph{recall}.
This is an adaption of the classic recall measure used in information retrieval,
which tracks the proportion of all relevant events that were retrieved
by the system~\cite{manning2008introduction}.
As our method presents events to the user in a series of ranked clusters,
we ideally want all known-to-be relevant events to appear in the highest ranked cluster.
We therefore track the overall recall obtained as if the first cluster were the only events retrieved.
Note, however, that SBLD ranks all clusters, and a user can retrieve additional clusters if desired.
We explore whether this could improve SBLD's performance on a
specific problematic test-set in Section~\ref{sec:testfourtythree}.
It is trivial to obtain a perfect recall by simply retrieving all events
in the log, but such a method would obviously be of little help to a user
who wants to reduce the effort needed to diagnose failures.
We therefore also track the \emph{effort reduction} (ER), defined as
\[ \text{ER} = 1 - \frac{\text{number of events in first cluster}}{\text{number of events in log}} \]
Much like effective information retrieval systems aim for high recall and
precision, we want our method to score a perfect recall while obtaining the
highest effort reduction possible.
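Both measures are straightforward to compute; a sketch (ours) mirroring
the definitions above:
\begin{verbatim}
def recall_in_best_cluster(first_cluster, relevant_events):
    """Proportion of known-to-be relevant events retrieved
    when only the first cluster is presented."""
    retrieved = sum(1 for e in relevant_events
                    if e in first_cluster)
    return retrieved / len(relevant_events)

def effort_reduction(first_cluster, log_events):
    """1 minus the fraction of the log presented."""
    return 1 - len(first_cluster) / len(log_events)
\end{verbatim}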
\subsection{Recording the impact of added data}
To study the impact of added data on SBLD's performance, we need to measure how
SBLD's performance on a target log $t$ is affected by adding an extra
failing log $f$ or a passing log $p$. There are several strategies
for accomplishing this. One way is to try all combinations in the
dataset i.e.\ compute the performance on any $t$ using any choice of
failing and passing logs to produce the interestingness scores. This
approach does not account for the fact that the logs in the data are
produced at different points in time and is also extremely expensive
computationally. We opted instead to order the logs chronologically and
simulate a step-wise increase in data as time progresses, as shown in
Algorithm~\ref{alg:time}.
\begin{algorithm}[b]
\caption{Pseudo-code illustrating how we simulate a step-wise increase in data
as time progresses and account for variability in choice of
interestingness measure.}
\label{alg:time}
\begin{algorithmic}\small
\STATE $F$ is the set of failing logs for a given test
\STATE $P$ is the set of passing logs for a given test
\STATE $M$ is the set of interestingness measures considered
\STATE sort $F$ chronologically
\STATE sort $P$ chronologically
\FOR{$i=0$ to $i=\lvert F \rvert$}
\FOR{$j=0$ to $j=\lvert P \rvert$}
\STATE $f = F[:i]$ \COMMENT{get all elements in F up to and including position i}
\STATE $p = P[:j]$
\FORALL{$l$ in $f$}
\STATE initialize $er\_scores$ as an empty list
\STATE initialize $recall\_scores$ as an empty list
\FORALL{$m$ in $M$}
\STATE perform SBLD on $l$ using $m$ as measure \\ \hspace*{1.75cm} and $f$ and $p$ as spectrum data
\STATE append recorded effort reduction score to $er\_scores$
\STATE append recorded recall score to $recall\_scores$
\ENDFOR
\STATE record median of $er\_scores$
\STATE record median of $recall\_scores$
\ENDFOR
\ENDFOR
\ENDFOR
\end{algorithmic}
\end{algorithm}
\subsection{Variability in interestingness measures}
\label{sec:imvars}
As mentioned in Section~\ref{sec:approach}, SBLD requires a
choice of interestingness measure for scoring the events,
which can have a considerable impact on SBLD's performance.
Considering that the best choice of interestingness measure is context-dependent
and there is no global optimum,
it is up to the user to decide which interestingness metric best reflects their
notion of event relevance.
Consequently, we want to empirically study SBLD in a way
that captures the variability introduced by this decision.
To this end, we record the median score obtained by performing SBLD for every possible choice of
interestingness measure from those listed in Table~\ref{table:measures}.
Algorithm~\ref{alg:time} demonstrates the procedure in pseudo-code.
\subsection{Comparing alternatives}
\label{sec:comps}
To answer RQ2 and RQ3, we use pairwise comparisons of
different configurations of SBLD, as well as of a baseline method that searches the logs for textual patterns using regular expressions.
The alternatives are compared
on each individual failing log in the set in a paired fashion. An
important consequence of this is that the statistical comparisons have
no concept of which test the failing log belongs to, and thus the test
for which there is most data has the highest impact on the result of the
comparison.
The pairwise comparisons are conducted using paired Wilcoxon signed-rank
tests~\cite{wilcoxon1945} where the Pratt correction~\cite{Pratt1959}
is used to handle ties. We apply Holm's correction~\cite{Holm1979}
to the obtained p-values to account for the family-wise error
rate arising from multiple comparisons. We declare a comparison
\emph{statistically significant} if the Holm-adjusted p-value is below
$\alpha=0.05$. The Wilcoxon tests check the two-sided null hypothesis of
no difference between the alternatives. We report the Vargha-Delaney $A_{12}$ and
$A_{21}$~\cite{Vargha2000} measures of stochastic superiority to
indicate which alternative is the strongest. Conventionally, $A_{12}=0.56$ is
considered a small difference, $A_{12}=.64$ is considered a medium difference
and $A_{12}=.71$ or greater is considered large~\cite{Vargha2000}. Observe
also that $A_{21} = 1 - A_{12}$.
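This statistical machinery is available off the shelf; the following
sketch (ours) shows one way to wire it together with SciPy and
statsmodels, including a direct implementation of the $A_{12}$ measure:
\begin{verbatim}
from scipy.stats import wilcoxon
from statsmodels.stats.multitest import multipletests

def a12(x, y):
    """Vargha-Delaney measure of stochastic superiority."""
    greater = sum(1 for a in x for b in y if a > b)
    ties = sum(1 for a in x for b in y if a == b)
    return (greater + 0.5 * ties) / (len(x) * len(y))

def compare(pairs):
    """pairs: list of (scores_variant1, scores_variant2)
    arrays; returns Holm-adjusted p-values."""
    pvals = [wilcoxon(x, y, zero_method="pratt").pvalue
             for x, y in pairs]
    _, adjusted, _, _ = multipletests(pvals, method="holm")
    return adjusted
\end{verbatim}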
\begin{figure*}
\includegraphics[width=0.8\textwidth]{rq1_boxplot.png}
%
\caption{The overall performance of SBLD in terms of effort reduction
and recall. On many tests, SBLD exhibited perfect recall for
all observations in the inter-quartile range and thus the box collapses to a single line on the $1.0$ mark.\label{fig:rq1boxplot}}
\end{figure*}
\subsection{Analysis procedures}
We implement the SBLD approach in a prototype tool
DAIM (Diagnosis and Analysis using Interestingness Measures),
and use DAIM to empirically evaluate the idea.
\head{RQ1 - overall performance} We investigate the overall performance
of SBLD by analyzing a boxplot for each test in our dataset. Every individual
datum that forms the basis of the plot is the median performance of SBLD over
all choices of interestingness measures for a given set of failing and passing
logs subject to the chronological ordering scheme outlined above.
\head{RQ2 - impact of data} We analyze the impact of added data by
producing and evaluating heatmaps that show the obtained performance
as a function of the number of failing logs (y-axis) and number of
passing logs (x-axis). The color intensity of each tile in the heatmaps
is calculated by taking the median of the scores obtained for each
failing log analyzed with the given number of failing and passing logs
as data for the spectrum inference, wherein the score for each log is
the median over all the interestingness measures considered as outlined in
Section~\ref{sec:imvars}.
Furthermore, we compare three variant configurations
of SBLD that give an overall impression of the influence of added
data. The three configurations considered are \emph{minimal evidence},
\emph{median evidence} and \emph{maximal evidence}, where minimal
evidence uses only events from the log being analyzed and one additional
passing log, median evidence uses the median number of failing and
passing logs available, while maximal evidence uses
all available data for a given test. The comparisons are conducted with the
statistical scheme described above in Section~\ref{sec:comps}.
\head{RQ3 - SBLD versus pattern-based search} To compare SBLD
against a pattern-based search, we record the effort reduction and
recall obtained when only selecting events in the log that match on the
case-insensitive regular expression \texttt{"error|fault|fail*"}, where
the $*$ denotes a wildcard-operator and the $\lvert$ denotes logical
$OR$. This simulates the results that a user would obtain by using
a tool like \texttt{grep} to search for words like 'error' and 'failure'.
Sometimes the ground-truth signature expressions contain words from this
pattern, and we indicate this in Table~\ref{table:signature}. If so, the
regular expression-based method is guaranteed to retrieve the event.
Similarly to RQ2, we compare the three configurations of SBLD described
above (minimum, median and maximal evidence) against the pattern-based
search using the statistical scheme described in Section~\ref{sec:comps}.
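The baseline itself amounts to a one-line filter; a sketch (ours; note
that the $*$ above is a shell-style wildcard, so a plain substring
search for the stem \texttt{fail} captures its intent):
\begin{verbatim}
import re

PATTERN = re.compile(r"error|fault|fail", re.IGNORECASE)

def pattern_based_selection(log_events):
    """Baseline: keep events matching the failure-word
    pattern, as a grep-like search would."""
    return [e for e in log_events if PATTERN.search(e)]
\end{verbatim}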
%
\section{Results and Discussion}
\label{sec:resdiscuss}
This section gradually dissects Figure~\ref{fig:rq1boxplot}, which breaks down SBLD's performance per test for both recall
and effort reduction; Figures \ref{fig:erheat} and \ref{fig:recallheat},
which show SBLD's performance as a function of the number of failing and passing
logs used; and Table~\ref{table:comparisons}, which shows the results
of the statistical comparisons we have performed.
\begin{figure*}
\includegraphics[width=\textwidth]{er_heatmap.pdf}
\caption{Effort reduction score obtained when SBLD is run on a given number of failing and passing logs. The tests not listed in this figure all obtained a lowest median effort reduction score of 90\% or greater and are thus not shown for space considerations. \label{fig:erheat}}
\vspace*{-2ex}
\end{figure*}
\begin{table*}
\caption{Statistical comparisons performed in this investigation. The
bold p-values are those for which no statistically significant difference under $\alpha=0.05$
could be established.}
\label{table:comparisons}
{\small%
\begin{tabular}{lllrrrr}
\toprule
variant 1 & variant 2 & quality measure & Wilcoxon statistic & $A_{12}$ & $A_{21}$ & Holm-adjusted p-value\\
\midrule
pattern-based search & minimal evidence & effort reduction & 29568.5 & 0.777 & 0.223 & $\ll$ 0.001 \\
pattern-based search & maximal evidence & effort reduction & 202413.0 & 0.506 & 0.494 & \textbf{1.000} \\
pattern-based search & median evidence & effort reduction & 170870.5 & 0.496 & 0.504 & $\ll$ 0.001 \\
minimal evidence & maximal evidence & effort reduction & 832.0 & 0.145 & 0.855 & $\ll$ 0.001 \\
minimal evidence & median evidence & effort reduction & 2666.0 & 0.125 & 0.875 & $\ll$ 0.001 \\
maximal evidence & median evidence & effort reduction & 164674.0 & 0.521 & 0.479 & \textbf{1.000} \\
pattern-based search & minimal evidence & recall & 57707.0 & 0.610 & 0.390 & $\ll$ 0.001 \\
pattern-based search & maximal evidence & recall & 67296.0 & 0.599 & 0.401 & $\ll$ 0.001 \\
pattern-based search & median evidence & recall & 58663.5 & 0.609 & 0.391 & $\ll$ 0.001 \\
minimal evidence & maximal evidence & recall & 867.5 & 0.481 & 0.519 & $\ll$ 0.001 \\
minimal evidence & median evidence & recall & 909.0 & 0.498 & 0.502 & 0.020 \\
maximal evidence & median evidence & recall & 0.0 & 0.518 & 0.482 & $\ll$ 0.001 \\
\bottomrule
\end{tabular}
%
}
\end{table*}
\begin{figure}
\includegraphics[width=\columnwidth]{recall_heatmap.pdf}
\caption{Recall score obtained when SBLD is run on a given number of failing and passing logs. For space
considerations, we only show tests for which the minimum observed
median recall was smaller than 1 (SBLD attained perfect median recall for all configurations in the other tests). \label{fig:recallheat}}
\vspace*{-3ex}
\end{figure}
\subsection{RQ1: The overall performance of SBLD}
Figure~\ref{fig:rq1boxplot} suggests that SBLD's overall performance is strong,
since it obtains near-perfect recall while retaining a high degree of effort
reduction. In terms of recall, SBLD obtains a perfect performance on all except
four tests: 18, 34, 42 and 43, with the lower quartile stationed at perfect recall for all tests
except 43 (which we discuss in detail in Section~\ref{sec:testfourtythree}).
For test 18, only 75 out of 20700 observations ($0.36\%$) obtained a recall score
of $0.5$ while the rest obtained a perfect score. On test 34 (the smallest in our
dataset), 4 out of 39 observations obtained a score of zero recall while the
others obtained perfect recall.
For test 42, 700 out of 15300 ($4.6\%$) observations obtained a score of zero recall while the rest obtained perfect recall.
Hence with the exception of test 43 which is discussed later,
SBLD obtains very strong recall scores overall with only a few outliers.
The performance is also strong in terms of effort reduction, albeit
more varied. To a certain extent this is expected since the attainable
effort reduction on any log will vary with the length of the log and
the number of ground-truth relevant events in the log. As can be seen
in Figure~\ref{fig:rq1boxplot}, most of the observations fall well
over the 75\% mark, with the exceptions being tests 4 and 22. For test
4, Figure~\ref{fig:erheat} suggests that one or more of the latest
passing logs helped SBLD refine the interestingness scores. A similar
but less pronounced effect seems to have happened for test 22. However,
as reported in Table~\ref{table:descriptive}, test 22 consists only of
\emph{one} failing log. Manual inspection reveals that the log consists
of 30 events, of which 11 are fail-only events. Without additional
failing logs, most interestingness measures will give a high score to
all events that are unique to that singular failing log, which is likely
to include many events that are not ground-truth relevant. Reporting 11
out of 30 events to the user yields a meager effort reduction of around
63\%. Nevertheless, the general trend is that SBLD presents a compact
set of events to the user, which yields a high effort reduction score.
In summary, the overall performance shows that SBLD
retrieves the majority of all known-to-be-relevant events
in compact clusters, which dramatically reduces the analysis burden for the
end user. The major exception is Test 43, which we return to in
Section~\ref{sec:testfourtythree}.
\subsection{RQ2: On the impact of evidence}
The heatmaps suggest that the effort reduction is generally not
adversely affected by adding more \emph{passing logs}. If the
assumptions underlying our interestingness measures are correct,
this is to be expected: Each additional passing log either gives us
reason to devalue certain events that co-occur in failing and passing
logs or contains passing-only events that are deemed uninteresting.
Most interestingness measures highly value events that
exclusively occur in failing logs, and additional passing logs help
reduce the number of events that satisfy this criterion. However, since
our method relies on clustering similarly scored events, it is
vulnerable to \emph{ties} in interestingness scores. It is possible that
an additional passing log introduces ties where previously there were
none. This is likely to have an exaggerated effect in situations with
little data, where each additional log can have a dramatic impact on the
interestingness scores. This might explain the gradual dip in effort
reduction seen in Test 34, for which there are only two failing logs.
Adding more failing logs, on the other hand, draws a more nuanced
picture: When the number of failing logs (y-axis) is high relative
to the number of passing logs (x-axis), effort reduction seems to suffer.
Again, while most interestingness measures will prioritize events that
only occur in failing logs, this strategy only works if there is a
sufficient corpus of passing logs to weed out false positives. When
there are far fewer passing than failing logs, many events will be
unique to the failing logs even though they merely reflect a different
valid execution path that the test can take. This is especially true for
complex integration tests like the ones in our dataset, which might test
a system's ability to recover from an error, or in other ways have many
valid execution paths.
The statistical comparisons summarized in Table~\ref{table:comparisons}
suggest that the minimal evidence strategy performs poorly compared to the
median and maximal evidence strategies. This is especially
pronounced for effort reduction, where the Vargha-Delaney
metric scores well over 80\% in favor of the maximal and median
strategies. For recall, the difference between the minimal strategy and
the other variants is small, albeit statistically significant. Furthermore,
the jump from minimal evidence to median evidence is much more
pronounced than the jump from median evidence to maximal evidence.
For effort reduction, there is in fact no statistically discernible
difference between the median and maximal strategies. For recall, the maximal
strategy seems slightly better, but the $A_{12}$ measure suggests that the
magnitude of the difference is small.
Overall, SBLD seems to benefit from extra data, especially additional passing
logs. Failing logs also help, but require a proportionate number of passing
logs for SBLD to fully benefit.
The performance increase from going from minimal data to some data is more pronounced than going from some data to
maximal data. This suggests that there may be diminishing returns to
collecting extra logs, but our investigation cannot prove or disprove this.
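To make these comparisons concrete, the following minimal Python sketch (ours, not the evaluation code used in this study) computes the two statistics reported in Table~\ref{table:comparisons}, the Mann-Whitney U test and the Vargha-Delaney $A_{12}$ effect size; the per-log score lists are hypothetical.
\begin{verbatim}
from itertools import product
from scipy.stats import mannwhitneyu

def vargha_delaney_a12(xs, ys):
    """P(X > Y) + 0.5 P(X = Y) for random draws X from xs, Y from ys."""
    pairs = list(product(xs, ys))
    gt = sum(x > y for x, y in pairs)
    eq = sum(x == y for x, y in pairs)
    return (gt + 0.5 * eq) / len(pairs)

a = [0.95, 0.97, 0.90, 0.99, 0.93]  # hypothetical effort reductions
b = [0.60, 0.75, 0.70, 0.65, 0.80]  # for two evidence strategies
u, p = mannwhitneyu(a, b, alternative='two-sided')
print(vargha_delaney_a12(a, b), u, p)
\end{verbatim}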
\subsection{RQ3: SBLD versus simple pattern-search}
In terms of effort reduction, Table~\ref{table:comparisons} shows that
the pattern-based search clearly beats the minimal evidence variant of
SBLD. It does not, however, beat the median and maximal variants: The
comparison to median evidence suggests a statistically significant win
in favor of median evidence, but the effect reported by $A_{12}$ is
so small that it is unlikely to matter in practice. No statistically
significant difference could be established between the pattern-based
search and SBLD with maximal evidence.
In one sense, it is to be expected that the pattern-based search does
well on effort reduction assuming that events containing words like
"fault" and "error" are rare. The fact that the pattern-based search
works so well could indicate that \CiscoNorway{our industrial partner}
has a well-designed logging infrastructure where such words are
rare and occur at relevant positions in the logs. On the other
hand, it is then notable that the median and maximum variants of SBLD perform
comparably on effort reduction without having any concept of the textual
content in the events.
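For reference, such a baseline is only a few lines of code; the sketch below (ours) shows a keyword-based event filter, where the exact pattern and the example events are our assumptions rather than the patterns used in the evaluation.
\begin{verbatim}
import re

# Keyword-based retrieval of suspicious events (illustrative pattern).
PATTERN = re.compile(r"\b(fault|error|fail(ed|ure)?)\b", re.IGNORECASE)

def pattern_search(events):
    return [e for e in events if PATTERN.search(e)]

print(pattern_search(["link up", "ERROR: card reset",
                      "timeout on port 3"]))
# -> ['ERROR: card reset']
\end{verbatim}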
In terms of recall, however, pattern-based search beats all variants of
SBLD in a statistically significant manner, where the effect size of the
differences is small to medium. One likely explanation for this better performance is that the
pattern-based search performs very well on Test 43, which SBLD generally
performs less well on. Since the comparisons are run per failing log and test
43 constitutes 29\% of the failing logs (specifically, 267 out of 910 logs), the
performance of test 43 has a massive impact. We return to test 43 and its
impact on our results in Section~\ref{sec:testfourtythree}.
On the whole, SBLD performs similarly to pattern-based search, obtaining
slightly poorer results on recall, for reasons that likely stem from
a particular test, which we discuss below. At any rate, there is no
contradiction in combining SBLD with a traditional pattern-based search.
Analysts could start by issuing a set of pattern-based searches and
run SBLD afterward if the pattern search returned unhelpful results.
Indeed, an excellent and intended use of SBLD is to suggest candidate
signature patterns that, once proven reliable, can be incorporated in a
regular-expression based search to automatically identify known issues
in future runs.
\subsection{What happens in Test 43?}
\label{sec:testfourtythree}
SBLD's performance is much worse on Test 43 than the other tests, which
warrants a dedicated investigation. The first thing we observed in the
results for Test 43 is that all of the ground-truth-relevant events
occurred \emph{exclusively} in failing logs and were often singular
(11 out of the 33) or infrequent (30 out of 33 events occurred in 10\%
of the failing logs or fewer). Consequently, we observed a strong
performance from the \emph{Tarantula} and \emph{Failed only}-measures
that put a high premium on failure-exclusive events. Most of the
interestingness measures, on the other hand, will prefer an event that
is very frequent in the failing logs and sometimes occur in passing logs
over a very rare event that only occurs in failing logs. This goes a
long way in explaining the poor performance on recall. The abundance of
singular events might also suggest that there is an error in the event
abstraction framework, where several events that should be treated as
instances of the same abstract event are treated as separate events. We
discuss this further in Section~\ref{sec:ttv}.
\begin{sloppypar}%
Another observation we made is that the failing logs contained only \emph{two}
ground-truth relevant events, which means that the recorded recall can quickly
fluctuate between $0$, $0.5$ and $1$.
\end{sloppypar}
Would the overall performance improve by retrieving an additional
cluster? A priori, retrieving an extra cluster would strictly improve
or not change recall since more events are retrieved without removing
the previously retrieved events. Furthermore, retrieving an additional
cluster necessarily decreases the effort reduction. We re-ran the
analysis on Test 43 and collected effort reduction and recall scores
for SBLD when retrieving \emph{two} clusters, and found that the added
cluster increased median recall from $0$ to $0.5$ while the median
effort reduction decreased from $0.97$ to $0.72$. While the proportional
increase in recall is larger than the decrease in effort reduction,
this should in our view not be seen as an improvement: As previously
mentioned, the failing logs in this set contain only two ground-truth
relevant events and thus recall is expected to fluctuate greatly.
Secondly, an effort reduction of $0.72$ implies that you still have to
manually inspect 28\% of the data, which in most information retrieval
contexts is unacceptable. An unfortunate aspect of our analysis in this
regard is that we do not account for event \emph{lengths}: An abstracted
event is treated as one atomic entity, but could in reality vary from a
single line to a stack trace that spans several pages. A better measure
of effort reduction should incorporate a notion of event length to
better reflect the real-world effect of retrieving more events.
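To sketch what such a measure could look like, one option is to weight each retrieved event by its length in log lines; the following minimal Python example is our proposal, not the measure used in this paper, and the event lengths are hypothetical.
\begin{verbatim}
def effort_reduction_weighted(events, retrieved, length):
    """1 - (lines the analyst must read / total lines in the log)."""
    total = sum(length[e] for e in events)
    read = sum(length[e] for e in retrieved)
    return 1.0 - read / total

events = ["E1", "E2", "E3", "E4"]
length = {"E1": 1, "E2": 40, "E3": 2, "E4": 1}  # E2: long stack trace
print(effort_reduction_weighted(events, {"E2"}, length))  # ~0.09
\end{verbatim}
Under such a measure, retrieving a single but very long event would no longer count as a large effort reduction.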
All in all, Test 43 exhibits a challenge that SBLD is not suited for:
It asks SBLD to prioritize rare events that are exclusive to failing
logs over events that frequently occur in failing logs but might
occasionally occur in passing logs. The majority of interestingness
measures supported by SBLD would prioritize the latter category of
events. In a way, this might suggest that SBLD is not suited for finding
\emph{outliers} and rare events: Rather, it is useful for finding
events that are \emph{characteristic} for failures that have occurred
several times -- a ``recurring suspect'', if you will. An avenue for future
research is to explore ways of letting the user combine a search for
"recurring suspects" with the search for outliers.
%
\section{Related Work}
\label{sec:relwork}
We distinguish two main lines of related work:
First, there is other work aimed at automated analysis of log files,
i.e., our problem domain,
and second, there is other work that shares similarities with our technical approach,
i.e., our solution domain.
\head{Automated log analysis}
Automated log analysis originates in \emph{system and network monitoring} for security and administration~\cite{lin1990:error,Oliner2007},
and saw a revival in recent years due to the needs of \emph{modern software development}, \emph{CE} and \emph{DevOps}~\cite{Hilton2017,Laukkanen2017,Debbiche2014,Olsson2012,Shahin2017,candido2019:contemporary}.
A considerable amount of research has focused on automated \emph{log parsing} or \emph{log abstraction},
which aims to reduce and organize log data by recognizing latent structures or templates in the events in a log~\cite{zhu2019:tools,el-masri2020:systematic}.
He et al. analyze the quality of these log parsers and conclude that many of them are not accurate or efficient enough for parsing the logs of modern software systems~\cite{he2018:automated}.
In contrast to these automated approaches,
our study uses a handcrafted log abstracter developed by \CiscoNorway{our industrial collaborator}.
\emph{Anomaly detection} has traditionally been used for intrusion detection and computer security~\cite{liao2013:intrusion,ramaki2016:survey,ramaki2018:systematic}.
Application-level anomaly detection has been investigated for troubleshooting~\cite{chen2004:failure,zhang2019:robust},
and to assess compliance with service-level agreements~\cite{banerjee2010:logbased,He2018,sauvanaud2018:anomaly}.
Gunter et al. present an infrastructure for troubleshooting of large distributed systems, %
by first (distributively) summarizing high volume event streams before submitting those summaries to a centralized anomaly detector.
This helps them achieve the fidelity needed for detailed troubleshooting,
without suffering from the overhead that such detailed instrumentation would bring~\cite{Gunter2007}.
Deeplog by Du et al. enables execution-path and performance anomaly detection in system logs by training a Long Short-Term Memory neural network of the system's expected behavior from the logs, and using that model to flag events and parameter values in the logs that deviate from the model's expectations~\cite{Du2017}.
Similarly, LogRobust by Zhang et al. performs anomaly detection using a bi-LSTM neural network but also detects events that are likely evolved versions of previously seen events, making the learned model more robust to updates in the target logging infrastructure~\cite{zhang2019:robust}.
In earlier work, we use \emph{log clustering} to reduce the effort needed to process a backlog of failing CE logs
by grouping those logs that failed for similar reasons~\cite{rosenberg2018:use,rosenberg:2018:improving}.
That work builds on earlier research that uses log clustering to identify problems in system logs~\cite{Lin2016,Shang2013}.
Common to these approaches is how the contrast between passing and failing logs is used to improve accuracy,
which is closely related to how SBLD highlights failure-relevant events.
Nagaraj et al.~\cite{nagaraj:2012} explore the use of dependency networks to exploit the contrast between two sets of logs,
one with good and one with bad performance,
to help developers understand which component(s) likely contain the root cause of performance issues.
An often-occurring challenge is the need to (re)construct an interpretable model of a system's execution.
To this end, several authors investigate the combination of log analysis with (static) source code analysis,
where they try to (partially) match events in logs to log statements in the code,
and then use these statements to reconstruct a path through the source code to help determine
what happened in a failed execution~\cite{Xu2009,yuan:2010:sherlog,zhao2014:lprof,schipper2019:tracing}.
Gadler et al. employ Hidden Markov Models to create a model of a system's usage patterns from logged events~\cite{gadler2017:mining}, while
Pettinato et al. model and analyze the behavior of a complex telescope system using Latent Dirichlet Allocation~\cite{pettinato2019:log}.
Other researchers have analyzed the logs for successful and failing builds,
to warn for anti-patterns and decay~\cite{vassallo2019:automated},
give build repair hints~\cite{Vassallo2018},
and automatically repair build scripts~\cite{hassan2018:hirebuild, tarlow2019:learning}.
Opposite to our work,
these techniques exploit the \emph{overlap} in build systems used by many projects to mine patterns that hint at decay or help repair a failing build,
whereas we exploit the \emph{contrast} with passing runs for the same project to highlight failure-relevant events.
\begin{sloppypar}
\head{Fault Localization}
As mentioned, our approach was inspired by Spectrum-Based Fault Localization (SBFL),
where the fault-proneness of a statement is computed as a function of
the number of times that the statement was executed in a failing test case, combined with
the number of times that the statement was skipped in a passing test case~\cite{Jones2002,Chen2002,Abreu2007,Abreu2009,Naish2011}.
This translates more or less directly to the inclusion or exclusion of events in failing and passing logs, respectively,
where the difference is that SBLD adds clustering of the results to enable step-wise presentation of results to the user.
\end{sloppypar}
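To make the analogy concrete, the following minimal Python sketch (ours, not the SBLD implementation) scores an event from its occurrence spectrum, i.e., the number of failing and passing logs that contain it; the counts are hypothetical.
\begin{verbatim}
# n_f (n_p): failing (passing) logs containing the event;
# N_f (N_p): total number of failing (passing) logs.
def tarantula(n_f, n_p, N_f, N_p):
    fail, ok = n_f / N_f, n_p / N_p
    return fail / (fail + ok) if fail + ok > 0 else 0.0

def failed_only(n_f, n_p):
    return 1.0 if n_f > 0 and n_p == 0 else 0.0

# A frequent failure event that sometimes occurs in passing logs
# scores high on Tarantula but zero on Failed-only:
print(tarantula(9, 1, 10, 50), failed_only(9, 1))  # ~0.978, 0.0
\end{verbatim}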
A recent survey of Software Fault Localization includes the SBFL literature up to 2014~\cite{Wong2016}.
De Souza et al. extend this with SBFL work up to 2017, and add an overview of seminal work on automated debugging from 1950 to 1977~\cite{deSouza2017}.
By reflecting on the information-theoretic foundations of fault localization, Perez proposes the DDU metric,
which can be used to evaluate test suites and predict their diagnostic performance when used in SBFL~\cite{Perez2018}.
One avenue for future work is exploring how a metric like this can be adapted to our context,
and seeing whether it helps to explain what happened with test 43.
A recent evaluation of \emph{pure} SBFL on large-scale software systems found that it under-performs in these situations
(only 33--40\% of the bugs are identified within the top 10 ranked results)~\cite{heiden2019:evaluation}.
The authors discuss several directions beyond pure SBFL, such as combining it with dynamic program analysis techniques,
including additional text analysis/IR techniques~\cite{Wang2015a}, mutation based fault localization,
and using SBFL in an interactive feedback-based process, such as whyline-debugging~\cite{ko2008:debugging}.
Pure SBFL is closely related to the Spectrum-Based Log Diagnosis proposed here,
so we may see similar challenges (in fact, test 43 may already show some of this).
Of the proposed directions to go beyond pure SBFL,
both the inclusion of additional text analysis/IR techniques,
and the application of Spectrum-Based Log Diagnosis in an interactive feedback-based process
are plausible avenues to extend our approach.
Closely related to the latter option,
de Souza et al.~\cite{deSouza2018b} assess guidance and filtering strategies to \emph{contextualize} the fault localization process.
Their results suggest that contextualization by guidance and filtering can improve the effectiveness of SBFL,
by classifying more actual bugs in the top ranked results.
%
\section{Threats to Validity}
\label{sec:ttv}
\head{Construct Validity} %
The signatures that provide our ground truth were devised to determine whether a given log \emph{in its entirety} showed symptoms of a known error.
As discussed in Section~\ref{sec:dataset}, we have used these signatures to detect events that give sufficient evidence for a symptom,
but there may be other events that could be useful to the user that are not part of our ground truth.
We also assume that the logs exhibit exactly the failures described by the signature expression.
In reality, the logs could contain symptoms of multiple failures beyond the ones described by the signature.
Furthermore, we currently do not distinguish between events that consist of a single line of text,
or events that contain a multi-line stack-trace, although these clearly represent different comprehension efforts.
This threat could be addressed by tracking the \emph{length} of the event contents,
and using it to further improve the accuracy of our effort reduction measure.
The choice of clustering algorithm and parameters affects the events retrieved,
but our investigation currently only considers HAC with complete linkage.
While we chose complete linkage to favor compact clusters,
outliers in the dataset could cause unfavorable clustering outcomes.
Furthermore, using the uncorrected sample standard deviation as threshold criterion
may be too lenient if the variance in the scores is high.
This threat could be addressed by investigating alternative clustering algorithms and parameter choices.
Moreover, as for the majority of log analysis frameworks, the performance of SBLD strongly depends on the quality of log abstraction.
An error in the abstraction will directly propagate to SBLD:
For example, if abstraction fails to identify two concrete events as being instances of the same generic event,
their aggregated frequencies will be smaller and consequently treated as less interesting by SBLD.
Similarly, the accuracy will suffer if two events that represent distinct generic events are treated as instances of the same generic event.
Future work could investigate alternative log abstraction approaches.
\head{Internal Validity} %
While our heatmaps illustrate the interaction between additional data and SBLD performance,
they are not sufficient to prove a causal relationship between performance and added data.
Our statistical comparisons suggest that a strategy of maximizing data is generally preferable,
but they are not sufficient for discussing the respective contribution of failing or passing logs.
\head{External Validity} %
This investigation is concerned with a single dataset from one industrial partner.
Studies using additional datasets from other contexts are needed to assess the generalizability of SBLD to other domains.
Moreover, while SBLD is made to help users diagnose problems that are not already well understood,
we are assessing it on a dataset of \emph{known} problems.
It could be that these errors, being known, are of a kind that are generally easier to identify than most errors.
Studying SBLD in-situ over time and directly assessing whether end users found it helpful
in diagnosis would better indicate the generalizability of our approach.
%
\section{Concluding Remarks}
\label{sec:conclusion}
\head{Contributions}
This paper presents and evaluates Spectrum-Based Log Diagnosis (SBLD),
a method for automatically identifying segments of failing logs
that are likely to help users diagnose failures.
Our empirical investigation of SBLD addresses the following questions:
(i) How well does SBLD reduce the \emph{effort needed} to identify all \emph{failure-relevant events} in the log for a failing run?
(ii) How is the \emph{performance} of SBLD affected by \emph{available data}?
(iii) How does SBLD compare to searching for \emph{simple textual patterns} that often occur in failure-relevant events?
\head{Results}
In response to (i),
we find that SBLD generally retrieves the failure-relevant events in a compact manner
that effectively reduces the effort needed to identify failure-relevant events.
In response to (ii),
we find that SBLD benefits from additional data, especially more logs from successful runs.
SBLD also benefits from additional logs from failing runs if there is a proportionate number of successful runs in the set.
We also find that the effect of added data is most pronounced when going from little data to \emph{some} data rather than from \emph{some} data to maximal data.
In response to (iii),
we find that SBLD achieves roughly the same effort reduction as traditional search-based methods but obtains slightly lower recall.
We trace the likely cause of this discrepancy in recall to a prominent part of our dataset, whose ground truth emphasizes rare events.
A lesson learned in this regard is that SBLD is not suited for finding statistical outliers but rather \emph{recurring suspects}
that characterize the observed failures.
Furthermore, the investigation highlights that traditional pattern-based search and SBLD can complement each other nicely:
Users can resort to SBLD if they are unhappy with what the pattern-based searches turn
up, and SBLD is an excellent method for finding characteristic textual patterns
that can form the basis of automated failure identification methods.
\head{Conclusions}
We conclude that SBLD shows promise as a method for diagnosing failing runs,
that its performance is positively affected by additional data,
but that it does not outperform textual search on the dataset considered.
\head{Future work}
We see the following directions for future work:
(a) investigate SBLD's performance on other datasets, to better assess generalizability,
(b) explore the impact of alternative log abstraction mechanisms,
(c) explore ways of combining SBLD with outlier detection, to accommodate different user needs,
(d) adapt the Perez' DDU metric to our context and see if it can help predict diagnostic efficiency,
(e) experiment with extensions of \emph{pure SBLD} that include additional text analysis/IR techniques,
or apply it in an interactive feedback-based process, and
(f) rigorously assess (extensions of) SBLD in in-situ experiments.
\begin{acks}
We thank Marius Liaaen and Thomas Nornes of Cisco Systems Norway for help with obtaining and understanding the dataset, for developing the log abstraction
mechanisms and for extensive discussions.
This work is supported by the \grantsponsor{RCN}{Research Council of Norway}{https://www.rcn.no} through the
Certus SFI (\grantnum{RCN}{\#203461/030}).
The empirical evaluation was performed on resources provided by \textsc{uninett} Sigma2,
the national infrastructure for high performance computing and data
storage in Norway.
\end{acks}
\printbibliography
\end{document}
\section{Introduction}
When granular material in a cubic container is shaken
horizontally one observes experimentally different types of
instabilities, i.e. spontaneous formation of ripples in shallow
beds~\cite{StrassburgerBetatSchererRehberg:1996},
liquefaction~\cite{RistowStrassburgerRehberg:1997,Ristow:1997}, convective
motion~\cite{TennakoonBehringer:1997,Jaeger} and recurrent swelling of
shaken material where the period of swelling decouples from the
forcing period~\cite{RosenkranzPoeschel:1996}. Other interesting experimental results concerning simultaneously vertically and horizontally vibrated granular systems~\cite{TennakoonBehringer:1998} and enhanced packing of spheres due to horizontal vibrations~\cite{PouliquenNicolasWeidman:1997} have been reported recently. Horizontally shaken
granular systems have been simulated numerically using cellular
automata~\cite{StrassburgerBetatSchererRehberg:1996} as well as
molecular dynamics
techniques~\cite{RistowStrassburgerRehberg:1997,Ristow:1997,IwashitaEtAl:1988,LiffmanMetcalfeCleary:1997,SaluenaEsipovPoeschel:1997,SPEpre99}.
Theoretical work on horizontal shaking can be found
in~\cite{SaluenaEsipovPoeschel:1997} and the dynamics of a single
particle in a horizontally shaken box has been discussed
in~\cite{DrosselPrellberg:1997}.
\begin{figure}[htbp]
\centerline{\psfig{file=sketch.eps,width=7cm,clip=}}
\caption{Sketch of the simulated system.}
\label{fig:sketch}
\end{figure}
Recently the effect of convection in a horizontally shaken box filled with
granular material attracted much attention and presently the effect is studied
experimentally by different
groups~\cite{TennakoonBehringer:1997,Jaeger,RosenkranzPoeschel:1996}.
Unlike the effect of convective motion in vertically shaken granular
material which has been studied intensively experimentally,
analytically and by means of computer simulations
(see, e.g.,~\cite{vertikalEX,JaegerVert,vertikalANA,vertikalMD}), there
exist only a few references on horizontal shaking. Different from the
vertical case, where the ``architecture'' of the convection pattern is
very simple~\cite{BizonEtAl:1998}, in horizontally shaken containers one observes a variety
of different patterns, convecting in different directions, in parallel
as well as perpendicular to the direction of
forcing~\cite{TennakoonBehringer:1997}. Under certain conditions one
observes several convection rolls on top of each other~\cite{Jaeger}.
An impression of the complicated convection can be found in the
internet~\cite{movies}.
Whereas the properties of convection in vertically sha\-ken systems
can be reproduced by two dimensional molecular dynamics simulations
with good reliability, for the case of horizontal motion the results
of simulations are inconsistent with the experimental results: in {\em
all} experimental investigations it was reported that the material
flows downwards close to the vertical
walls~\cite{TennakoonBehringer:1997,Jaeger,RosenkranzPoeschel:1996,movies},
but reported numerical simulations systematically show surface rolls
in the opposite direction accompanying the more realistic deeper rolls, or
even replacing them completely~\cite{LiffmanMetcalfeCleary:1997}.
Our investigation is thus concerned with the convection pattern, i.e. the
number and direction of the convection rolls in a two dimensional
molecular dynamics simulation. We will show that the choice of the
dissipative material parameters has crucial influence on the convection pattern
and, in particular, that the type of convection rolls observed experimentally
can be
reproduced by using sufficiently high dissipation constants.
\section{Numerical Model}
The system under consideration is sketched in Fig.~\ref{fig:sketch}:
we simulate a two-dimensional vertical cross section of a three-dimensional
container.
This rectangular section of width $L=100$ (all units in cgs system), and
infinite height, contains $N=1000$ spherical particles. The system is
periodically driven by an external oscillator $x(t) = A \sin (2\pi f
t)$ along a horizontal plane. For the effect we want to show, a
working frequency $f=10$ and amplitude $A=4$ are
selected.
These values give an acceleration amplitude of approximately $16 g$.
Lower accelerations affect the intensity of the
convection but do not change the basic features of the convection
pattern which we want to discuss.
As has been shown in~\cite{SPEpre99},
past the fluidization point, a much better indicator of the convective
state is the dimensionless velocity $A 2\pi f/ \sqrt{Lg}$. This means
that in small containers motion saturates earlier, hence, results for
different container lengths at the same values of the acceleration amplitude
cannot be compared directly. Our acceleration amplitude $\approx 16g$ corresponds to
$\approx 3g$ in a 10 cm container (provided that the frequency is the same
and particle sizes have been
scaled by the same amount).
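These figures are straightforward to verify; a short Python check (ours, cgs units) of the acceleration amplitude and the dimensionless velocity reads:
\begin{verbatim}
import numpy as np

A, f, L, g = 4.0, 10.0, 100.0, 981.0
a_max = A * (2 * np.pi * f)**2            # ~1.6e4 cm/s^2, i.e. ~16 g
v_dimless = A * 2 * np.pi * f / np.sqrt(L * g)
print(a_max / g, v_dimless)               # ~16.1, ~0.80
\end{verbatim}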
The radii of the particles of density $2$ are homogeneously
distributed in the interval $[0.6, 1.4]$. The rough inner walls of the
container are simulated by attaching additional particles of the same
radii and material properties (this simulation technique is similar to ``real''
experiments, e.g.~\cite{JaegerVert}).
For the molecular dynamics simulations, we apply a modified
soft-particle model by Cundall and Strack~\cite{CundallStrack:1979}:
Two particles $i$ and $j$, with radii $R_i$ and $R_j$ and at positions
$\vec{r}_i$ and $\vec{r}_j$, interact if their compression $\xi_{ij}=
R_i+R_j-\left|\vec{r}_i -\vec{r}_j\right|$ is positive. In this case
the colliding spheres feel the force
$F_{ij}^{N} \vec{n}^N + F_{ij}^{S} \vec{n}^S$,
with $\vec{n}^N$ and $\vec{n}^S$ being the unit vectors in normal and shear
direction. The normal force acting between colliding spheres reads
\begin{equation}
F_{ij}^N = \frac{Y\sqrt{R^{\,\mbox{\it\footnotesize\it eff}}_{ij}}}{1-\nu^2}
~\left(\frac{2}{3}\xi_{ij}^{3/2} + B \sqrt{\xi_{ij}}\,
\frac{d {\xi_{ij}}}{dt} \right)
\label{normal}
\end{equation}
where $Y$ is the Young modulus, $\nu$ is the Poisson ratio and $B$
is a material constant which characterizes the dissipative
character of the material~\cite{BSHP}.
\begin{equation}
R^{\,\mbox{\it\footnotesize\it
eff}}_{ij} = \left(R_i R_j\right)/\left(R_i + R_j\right)
\end{equation}
is the
effective radius. For a strict derivation of (\ref{normal})
see~\cite{BSHP,KuwabaraKono}.
For the shear force we apply the model by Haff and Werner~\cite{HaffWerner}
\begin{equation}
F_{ij}^S = \mbox{sign}\left({v}_{ij}^{\,\mbox{\it\footnotesize\it rel}}\right)
\min \left\{\gamma_s m_{ij}^{\,\mbox{\it\footnotesize\it eff}}
\left|{v}_{ij}^{\,\mbox{\it\footnotesize\it rel}}\right|~,~\mu
\left|F_{ij}^N\right| \right\}
\label{shear}
\end{equation}
with the effective mass $m_{ij}^{\,\mbox{\it\footnotesize\it eff}} =
\left(m_i m_j\right)/\left(m_i + m_j\right)$ and the relative velocity
at the point of contact
\begin{equation}
{v}_{ij}^{\,\mbox{\it\footnotesize\it rel}} = \left(\dot{\vec{r}}_i -
\dot{\vec{r}}_j\right)\cdot \vec{n}^S + R_i {\Omega}_i + R_j {\Omega}_j ~.
\end{equation}
$\Omega_i$ and $\Omega_j$ are the angular velocities of the particles.
The resulting momenta $M_i$ and $M_j$ acting upon the particles are
$M_i = F_{ij}^S R_i$ and $M_j = - F_{ij}^S R_j$. Eq.~(\ref{shear})
takes into account that the particles slide upon each other for the
case that the Coulomb condition $\mu \left| F_{ij}^N \right| < \left|
F_{ij}^S \right|$ holds; otherwise they feel viscous friction.
By means of $\gamma _{n} \equiv BY/(1-\nu ^2)$ and $\gamma _{s}$,
normal and shear damping coefficients, energy loss during particle
contact is taken into account~\cite{restitution}.
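For illustration, the contact-force model of Eqs.~(\ref{normal}) and (\ref{shear}) can be sketched in a few lines of Python (ours; cgs units, with the simulation parameters quoted below and hypothetical contact values):
\begin{verbatim}
import numpy as np

Y_eff, gamma_s, mu = 1e8, 1e3, 0.5  # Y/(1-nu^2), shear damping, friction
B = 1e-4                            # dissipative constant (gamma_n = 1e4)

def normal_force(xi, xi_dot, R_eff):
    """Viscoelastic Hertz force; valid only while xi > 0."""
    return Y_eff * np.sqrt(R_eff) * (2/3 * xi**1.5
                                     + B * np.sqrt(xi) * xi_dot)

def shear_force(v_rel, m_eff, F_N):
    """Haff-Werner shear force, capped by the Coulomb condition."""
    return np.sign(v_rel) * min(gamma_s * m_eff * abs(v_rel),
                                mu * abs(F_N))

m = 2.0 * 4/3 * np.pi                     # R = 1 cm grain of density 2
F_N = normal_force(1e-3, 10.0, 0.5)       # ~3.7e3 dyn
print(F_N, shear_force(5.0, m / 2, F_N))  # shear capped at mu*|F_N|
\end{verbatim}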
The equations of motion for translation and rotation have been solved
using a Gear predictor-corrector scheme of sixth order
(e.g.~\cite{AllenTildesley:1987}).
The values of the coefficients used in simulations are $Y/(1-\nu
^2)=1\times 10^{8}$, $\gamma _{s}=1\times 10^{3}$, $ \mu =0.5$. For
the effect we want to show, the coefficient $\gamma _{n}$ takes values within the range
$\left[10^2,10^4\right]$.
\section{Results}
The mechanisms for convection under horizontal shaking have been
discussed in \cite{LiffmanMetcalfeCleary:1997}. Now we can show that
these mechanisms can be better understood by taking into account the
particular role of dissipation in this problem. The most striking
consequence of varying the normal damping coefficient is the change
in organization of the convective pattern, i.e. the direction and
number of rolls in the stationary regime. This is shown in
Fig.~\ref{fig1}, which has been obtained after averaging particle
displacements over 200 cycles
(2 snapshots per cycle).
The asymmetry of compression and expansion of particles close to
the walls (where the material becomes highly compressible) explains
the large transverse velocities shown in the figure.
Note, however, that the upward and downward motion at the walls cannot be altered
by this particular averaging procedure.
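For reference, a minimal Python sketch (ours, with hypothetical position and displacement arrays) of the binning behind such velocity fields could read:
\begin{verbatim}
import numpy as np

def averaged_field(pos, disp, L=100.0, H=80.0, cell=2.0):
    """Average displacements on a grid of ~1 particle diameter cells."""
    nx, ny = int(L / cell), int(H / cell)
    vsum = np.zeros((nx, ny, 2))
    count = np.zeros((nx, ny))
    for (x, y), d in zip(pos, disp):
        i = min(int(x / cell), nx - 1)
        j = min(int(y / cell), ny - 1)
        vsum[i, j] += d
        count[i, j] += 1
    return vsum / np.maximum(count, 1)[..., None]

pos = np.random.rand(1000, 2) * [100.0, 60.0]  # hypothetical data
disp = np.random.randn(1000, 2) * 0.1
field = averaged_field(pos, disp)
\end{verbatim}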
The first frame shows a convection pattern with only two rolls, where
the arrows indicate that the grains slide down the walls, with at most
a slight expansion of the material at the surface.
There are no surface rolls.
This is very
similar to what has been observed in
experiments\cite{TennakoonBehringer:1997,Jaeger,RosenkranzPoeschel:1996}.
In this case, dissipation is high enough to damp most of the sloshing
induced by the vertical walls, and not even the grains just below the
surface can overcome the pressure gradient directed downwards.
For lower damping, we see the developing of surface rolls,
which
coexist with the inner rolls circulating in the opposite way. Some
energy is now available for upward motion when the walls compress the
material fluidized during the opening of the wall ``gap'' (empty space
which is created alternatively during the shaking motion). This is the
case reported in \cite{LiffmanMetcalfeCleary:1997}. The last frames
demonstrate how the original rolls vanish at the same time that the
surface rolls grow occupying a significant part of the system.
Another feature shown in the figure is the thin layer of material involving
3 particle rows close to the bottom, which perform a different kind
of motion. This effect, which can be seen in all frames,
is due to the presence of the constraining boundaries
but has not been analyzed separately.
\onecolumn
\begin{figure}
\centerline{\psfig{file=fric1nn.eps,width=5.7cm,clip=}
\hspace{0.3cm}\psfig{file=fric2nn.eps,width=5.7cm,clip=}
\hspace{0.3cm}\psfig{file=fric3nn.eps,width=5.7cm,clip=}}
\centerline{\psfig{file=fric4nn.eps,width=5.7cm,clip=}
\hspace{0.3cm}\psfig{file=fric5nn.eps,width=5.7cm,clip=}
\hspace{0.3cm}\psfig{file=fric6nn.eps,width=5.7cm,clip=}}
\centerline{\psfig{file=fric7nn.eps,width=5.7cm,clip=}
\hspace{0.3cm}\psfig{file=fric8nn.eps,width=5.7cm,clip=}
\hspace{0.3cm}\psfig{file=fric9nn.eps,width=5.7cm,clip=}}
\vspace{0.3cm}
\caption{Velocity field obtained after cycle averaging of
particle displacements, for different values of the normal damping
coefficient, $\gamma_n$. The first one is $1\times 10^4$, and for
obtaining each subsequent frame the coefficient has been divided by
two. The frames are ordered from left to right and from top to
bottom. The cell size for averaging is approximately one particle diameter.}
\label{fig1}
\vspace*{-0.2cm}
\end{figure}
\twocolumn
With decreasing normal damping $\gamma_n$ there are two transitions
observable in Fig.~\ref{fig1}, meaning that the convection pattern changes
qualitatively at these two particular values of $\gamma_n$:
The first transition leads to the appearance of two surface rolls
laying on top of the bulk cells and circulating in opposite direction.
The second transition eliminates the bulk rolls. A more detailed analysis of
the displacement fields (Fig.~\ref{fig2})
allows us to locate the transitions much more precisely.
In Fig.~\ref{fig2} we have represented in grey-scale the horizontal and
vertical components of the displacement vectors pictured in
Fig.~\ref{fig1} but in a denser sampling, analyzing data from 30 simulations
corresponding to
values of the normal damping coefficient within the interval [50,10000].
For horizontal displacements, we have chosen vertical sections
at some representative position in horizontal direction
($x=30$). For the vertical displacements, vertical sections of the
leftmost part of the container were selected ($x=10$); see
Fig.~\ref{fig2}, lower part.
\begin{figure}
\centerline{\psfig{file=vx.eps,width=4.5cm,clip=}\hspace{-0.5cm}
\psfig{file=vy.eps,width=4.5cm,clip=}
\centerline{\psfig{file=sectionn.eps,height=4.2cm,bbllx=7pt,bblly=16pt,bburx=507pt,bbury=544pt,clip=}}
\vspace*{0.2cm}
\caption{Horizontal (left) and vertical (right) displacements at
selected positions of the frames in Fig.~\ref{fig1} (see the text
for details), for decreasing normal damping and as a function of
depth. White indicates strongest flow along positive axis directions
(up,right), and black the corresponding negative ones. The black region
at the bottom of the left picture corresponds to the complex boundary
effect observed in Fig.~\ref{fig1}, involving only two particle layers.
The
figure below shows a typical convection pattern together with the sections
at $x=10$ and $x=30$ at which the displacements were recorded.}
\label{fig2}
\vspace*{-0.1cm}
\end{figure}
The horizontal axis shows the values of the normal damping
coefficient scaled logarithmically in decreasing sequence. The
vertical axis represents the position in vertical direction, with the
free surface of the system located at $y \approx 60$. One observes first
that white surface shades, complemented by subsurface black ones,
appear quite clearly at about $\gamma_n \approx 2000$ in Fig.~\ref{fig2}
(left), indicating the appearance of surface rolls. On the other
hand, Fig.~\ref{fig2} (right) shows a black area (indicative of
downward flow along the vertical wall) that vanishes at
$\gamma_n \approx 200$ (at this point the grey shade represents vanishing vertical velocity).
The dashed lines in Fig.~\ref{fig2} lead the eye to identify the transition values.
In the interval $ 200 \lesssim \gamma_n
\lesssim 2000$ surface and inner rolls coexist, rotating in opposite
directions.
One can analyze the situation in terms of the restitution coefficient.
From Eq.~(\ref{normal}), the equation of motion for the displacement
$\xi_{ij}$ can be integrated and the relative energy loss in a
collision $\eta=(E_0-E)/E_0$ (with $E$ and $E_0$ being the energy of
the relative motion of the particles) can be evaluated approximately.
Up to the lowest order in the expansion parameter, one
finds~\cite{Thomas-Thorsten}
\begin{equation}
\eta = 1.78 \left( \frac{\tau}{\ell} v_0\right)^{1/5}\;,
\label{energyloss}
\end{equation}
where $v_0$ is the relative initial velocity in normal direction, and
$\tau$, $\ell$, time and length scales associated with the problem
(see~\cite{Thomas-Thorsten} for details),
\begin{equation}
\tau = \frac{3}{2} B\; ,~~~~~~~~~
\ell = \left(\frac{1}{3} \frac{m_{ij}^{\,\mbox{\it\footnotesize\it eff}}
}{\sqrt{R^{\,\mbox{\it\footnotesize\it eff}}_{ij}}
B \gamma_{n}}\right)^{2}.
\end{equation}
For $\gamma_n = 10^4$ (the highest value analyzed) and the values of
the parameters specified above ($v_0 \approx A 2\pi f$ for collisions
with the incoming wall), $B= 10^{-4}$ and $\eta$ is typically
50\%. This means that after three more collisions the particle is left
with too little energy to climb the height of a single
particle in the gravity field. For $\gamma_n = 10^3$ and the other
parameters kept constant, $B=10^{-5}$ and $\eta$ is
reduced to 5\%, so that the number of collisions needed to reduce
the particle's kinetic energy to the same residual
fraction increases roughly by an order of magnitude. On the other
hand, given the weak dependence of Eq. (\ref{energyloss}) on the
velocity, one expects that the transitions shown in Fig.~\ref{fig2}
will also depend only weakly on the amplitude of the shaking velocity. The reduction of the
inelasticity $\eta$ by an order of magnitude seems enough for
particles to ``climb'' the walls and develop the characteristic
surface rolls observed in numerical simulations.
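These estimates are easy to reproduce; the short Python sketch below (ours) evaluates Eq.~(\ref{energyloss}), where the grain mass and the effective radius for a grain--wall collision are our assumptions for a typical $R=1$ particle.
\begin{verbatim}
import numpy as np

A, f = 4.0, 10.0
v0 = A * 2 * np.pi * f        # ~251 cm/s impact velocity
R_eff = 0.5                   # two R = 1 cm spheres
m_eff = 2.0 * 4/3 * np.pi     # ~8.4 g (grain vs. immobile wall grain)

def eta(gamma_n, Y_eff=1e8):
    B = gamma_n / Y_eff       # from gamma_n = B Y/(1 - nu^2)
    tau = 1.5 * B
    ell = ((m_eff / 3) / (np.sqrt(R_eff) * B * gamma_n))**2
    return 1.78 * (tau * v0 / ell)**0.2

print(eta(1e4), eta(1e3))     # ~0.53 and ~0.053
\end{verbatim}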
\section{Discussion}
We have shown that the value of the normal damping coefficient
influences the convective pattern of horizontally shaken granular
materials. By means of molecular dynamics simulations in two
dimensions we can reproduce the pattern observed in real experiments,
which corresponds to a situation of comparatively high damping,
characterized by inelasticity parameters $\eta$ larger than 5\%. For
lower damping, the upper layers of the material develop additional
surface rolls as has been reported previously. As normal damping
decreases, the lower rolls descend and finally disappear completely at
inelasticities of the order of 1\%.
\begin{acknowledgement}
The authors want to thank R. P. Behringer, H. M. Jaeger, M. Medved,
and D. Rosenkranz for providing experimental results prior to
publication and V. Buchholtz, S. E. Esipov, and L. Schimansky-Geier
for discussion. The calculations have been done on the parallel
machine {\it KATJA} (http://summa.physik.hu-berlin.de/KATJA/) of the
medical department {\em Charit\'e} of the Humboldt University Berlin.
The work was supported by Deut\-sche Forschungsgemeinschaft through
grant Po 472/3-2.
\end{acknowledgement}
\section{\label{sec:intro}Introduction}
Demonstration of non-abelian exchange statistics is one of the most active areas of condensed matter research, and yet the experimental realization of braiding of Majorana modes remains elusive~\cite{RevModPhys.80.1083,zhang2019next}. Most efforts so far have been focused on superconductor/semiconductor nanowire hybrids, where Majorana bound states (MBS) are expected to form at the ends of a wire or at boundaries between topologically trivial and non-trivial regions~\cite{rokhinson2012fractional, deng2012anomalous, mourik2012signatures, LutchynReview}. Recently, it became clear that abrupt interfaces may also host topologically trivial Andreev states with experimental signatures similar to MBS \cite{pan2020generic,Yu2021}, which makes demonstrating braiding in nanowire-based platforms challenging. Phase-controlled long Josephson junctions (JJ) open a much wider phase space to realize MBS, with a promise to solve some problems of the nanowire platform, such as enabling zero-field operation to avoid detrimental flux focusing for in-plane fields \cite{pientka2017topological, ren2019topological}. However, MBSs in long JJs suffer from the same problems as in the original Fu-Kane proposal for topological insulator/superconductor JJs, such as poor control of flux motion along the junction and the presence of sharp interfaces in the vicinity of MBS-carrying vortices, which may host Andreev states and trap quasiparticles. For instance, MBS spectroscopy in both HgTe and InAs-based JJs shows a soft gap \cite{fornieri2019evidence}, despite a hard SC gap in an underlying InAs/Al heterostructure.
\begin{figure*}[t]
\centering
\begin{subfigure}{0.95\textwidth}
\includegraphics[width=1\textwidth]{Schematic.pdf}
\caption{\label{fig:schematic}}
\end{subfigure}
\begin{subfigure}{0.35\textwidth}
\includegraphics[width=1\textwidth]{stack_2.pdf}
\caption{\label{fig:layers}}
\end{subfigure}
\begin{subfigure}{0.6\textwidth}
\includegraphics[width=1\textwidth]{Flow_2.pdf}
\caption{\label{fig:flow}}
\end{subfigure}
\caption{\label{fig:one} (a) Schematic of the Majorana braiding platform. Magnetic multilayer (MML) is patterned into a track and is separated from TSC by a thin insulating layer. Green lines represent on-chip microwave resonators for a dispersive parity readout setup. The left inset shows a magnified view of a SVP and the right inset shows the role of each layer (b) Expanded view of the composition of an MML (c) Process flow diagram for our Majorana braiding scheme. Here, $T_c$ is superconducting transition temperature and $T_{BKT}$ is Berezinskii–Kosterlitz–Thouless transition temperature for the TSC.}
\end{figure*}
In the search for alternate platforms to realize Majorana braiding, spectroscopic signatures of MBS have been recently reported in STM studies of vortex cores in iron-based topological superconductors (TSC) \cite{wang2018evidence}. Notably, a hard gap surrounding the zero-bias peak at a relatively high temperature of $0.55$ K, and a $5$ K separation gap from trivial Caroli-de Gennes-Matricon (CdGM) states were observed \cite{chen2020observation, chen2018discrete}. Moreover, vortices in a TSC can be field-coupled to a skyrmion in an electrically-separated magnetic multilayer (MML) \cite{volkov,petrovic2021skyrmion}, which can be used to manipulate the vortex. This allows for physical separation of the manipulation layer from the layer wherein MBS reside, eliminating the problem of abrupt interfaces faced by nanowire hybrids and JJs. Finally, recent advances in the field of spintronics provide a flexible toolbox to design MML in which skyrmions of various sizes can be stabilized in zero external magnetic field and at low temperatures \cite{petrovic2021skyrmion, buttner2018theory, dupe2016engineering}. Under the right conditions, stray fields from these skyrmions alone can nucleate vortices in the adjacent superconducting layer. In this paper, we propose TSC--MML heterostructures hosting skyrmion-vortex pairs (SVP) as a viable platform to realize Majorana braiding. By patterning the MML into a track and by driving skyrmions in the MML with local spin-orbit torques (SOT), we show that the SVPs can be effectively moved along the track, thereby facilitating braiding of MBS bound to vortices.
The notion of coupling skyrmions (Sk) and superconducting vortices (Vx) through magnetic fields has been studied before \cite{volkov, baumard2019generation, zhou_fusion_2022, PhysRevLett.117.077002, PhysRevB.105.224509, PhysRevB.100.064504, PhysRevB.93.224505, PhysRevB.99.134505, PhysRevApplied.12.034048}. Menezes et al. \cite{menezes2019manipulation} performed numerical simulations to study the motion of a skyrmion--vortex pair when the vortex is dragged via supercurrents and Hals et al. \cite{hals2016composite} proposed an analytical model for the motion of such a pair where a skyrmion and a vortex are coupled via exchange fields. However, the dynamics of a SVP in the context of Majorana braiding remains largely unexplored. Furthermore, no \textit{in-situ} non-demolition experimental technique has been proposed to measure MBS in these TSC--MML heterostructures. In this paper, through micromagnetic simulations and analytical calculations within London and Thiele formalisms, we study the dynamics of a SVP subjected to external spin torques. We demonstrate that the SVP moves without dissociation up to speeds necessary to complete Majorana braiding within estimated quasiparticle poisoning time. We further eliminate the problem of \textit{in-situ} MBS measurements by proposing a novel on-chip microwave readout technique. By coupling the electric field of the microwave cavity to dipole-moments of transitions from Majorana modes to CdGM modes, we show that a topological non-demolition dispersive readout of the MBS parity can be realized. Moreover, we show that our platform can be used to make the first experimental observations of quasiparticle poisoning times in topological superconducting vortices.
The paper is organized as follows: in Section~\ref{sec:plat} we present a schematic and describe our platform. In Section~\ref{sec:initial} we present the conditions for initializing a skyrmion--vortex pair and discuss its equilibrium properties. In particular, we characterize the skyrmion--vortex binding strength. In Section~\ref{sec:braid} we discuss the dynamics of a SVP in the context of braiding. Then in Section~\ref{sec:read}, we present details of our microwave readout technique. Finally, we discuss the scope of our platform in Section~\ref{sec:summ}.
\begin{figure*}[t]
\centering
\begin{subfigure}{0.32\textwidth}
\includegraphics[width=1\textwidth]{energies.jpg}
\caption{\label{fig:energies}}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\includegraphics[width=1\textwidth]{forces.jpg}
\caption{\label{fig:forces}}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\includegraphics[width=1\textwidth]{fvav.jpg}
\caption{\label{fig:fvav}}
\end{subfigure}
\caption{\label{fig:onenew} (a -- b) Normalized energies and forces for Sk--Vx interaction between a Pearl vortex and a N\'eel skyrmion of varying thickness. (c) Attractive $F_{Vx-Avx}$ and repulsive $F_{Sk-Avx}$ (colored lines) for the example materials in Appendix~\ref{app:A}: $M_{0}=1450$ emu/cc, $r_{sk}=35$ nm, $d_s = 50$ nm, $\Lambda = 5$ $\mu$m and $\xi=15$ nm.}
\end{figure*}
\section{\label{sec:plat}Platform Description}
\begin{figure*}[t]
\centering
\begin{subfigure}{0.59\textwidth}
\includegraphics[width=1\textwidth]{Braiding.jpg}
\caption{\label{fig:braiding}}
\end{subfigure}
\begin{subfigure}{0.39\textwidth}
\includegraphics[width=1\textwidth]{t0.jpg}
\caption{\label{fig:t0}}
\end{subfigure}
\begin{subfigure}{0.15\textwidth}
\includegraphics[width=1\textwidth]{t1.jpg}
\caption{\label{fig:t1}}
\end{subfigure}
\begin{subfigure}{0.15\textwidth}
\includegraphics[width=1\textwidth]{t2.jpg}
\caption{\label{fig:t2}}
\end{subfigure}
\begin{subfigure}{0.15\textwidth}
\includegraphics[width=1\textwidth]{t3.jpg}
\caption{\label{fig:t3}}
\end{subfigure}
\begin{subfigure}{0.15\textwidth}
\includegraphics[width=1\textwidth]{t4.jpg}
\caption{\label{fig:t4}}
\end{subfigure}
\begin{subfigure}{0.15\textwidth}
\includegraphics[width=1\textwidth]{t55.jpg}
\caption{\label{fig:t5}}
\end{subfigure}
\begin{subfigure}{0.15\textwidth}
\includegraphics[width=1\textwidth]{t6.jpg}
\caption{\label{fig:t6}}
\end{subfigure}
\caption{\label{fig:two} (a) Schematic of our braiding process: manipulations of four skyrmions in the MML track are shown. MBS at the centers of vortices bound to each of these skyrmions are labeled $\gamma_1$--$\gamma_4$. Ohmic contacts in HM layers of the MML are shown in brown and rf readout lines are shown in green. II--VI show the steps involved in braiding $\gamma_2$ and $\gamma_4$. In step II, $\gamma_1$ and $\gamma_2$ are brought close to rf lines by applying charge currents from C to A and D to B, respectively. $\gamma_1$ and $\gamma_2$ are then initialized by performing a dispersive readout of their parity (see Section~\ref{sec:read}). Similarly, $\gamma_3$ and $\gamma_4$ are initialized after applying charge currents along P to R and Q to S, respectively. In step III, $\gamma_2$ is moved aside to make room for $\gamma_4$ by applying currents from B to X followed by applying currents from X to C. In step IV, $\gamma_4$ is braided with $\gamma_2$ by applying currents along S to X and X to B. Finally, in step V, the braiding process is completed by bringing $\gamma_2$ to S by applying currents from A to X and from X to S. Parities (i.e., fusion outcomes) of $\gamma_1$ and $\gamma_4$, and $\gamma_3$ and $\gamma_2$ are then measured in step VI. Fusion outcomes in each pair of MBS indicate the presence or absence of a fermion corresponding to a parity of $\pm1$ \cite{PhysRevApplied.12.054035, PhysRevX.6.031016}. (b) Initial position of the skyrmions labeled A and B in the micromagnetic simulation for skyrmion braiding (see Appendix.~\ref{app:A}) (c--h) Positions of the two skyrmions at the given times as the braiding progresses. Charge current $j = 2\times 10^{12}$ A/m$^2$ was applied.}
\end{figure*}
Our setup consists of a thin TSC layer that hosts vortices grown on top of a MML that hosts skyrmions as shown in Fig.~\ref{fig:schematic}. A thin insulating layer separates the magnetic and superconducting layers ensuring electrical separation between the two. Vortices in a TSC are expected to host MBS at their cores \cite{wang2018evidence,chen2020observation, chen2018discrete}. Stray fields from a skyrmion in the MML nucleate such a vortex in the TSC, forming a bound skyrmion--vortex pair under favorable energy conditions (see Sec.~\ref{sec:initial}). This phenomenon has been recently experimentally demonstrated in Ref.~\cite{petrovic2021skyrmion}, where stray fields from N\'eel skyrmions in Ir/Fe/Co/Ni magnetic multilayers nucleated vortices in a bare Niobium superconducting film.
The MML consists of alternating magnetic and heavy metal (HM) layers, as shown in Fig.~\ref{fig:layers}. The size of a skyrmion in a MML is determined by a delicate balance between exchange, magnetostatic, anisotropy and Dzyaloshinskii–Moriya interaction (DMI) energies \cite{wang2018theory, romming2015field} -- and the balance is highly tunable, thanks to advances in spintronics \cite{buttner2018theory, dupe2016engineering, soumyanarayanan2017tunable}. Given a TSC, this tunability allows us to find a variety of magnetic materials and skyrmion sizes that can satisfy the vortex nucleation condition [to be detailed in Eq.~(\ref{eqn:nuc})]. In Appendix~\ref{app:A}, we provide a specific example of FeTeSe topological superconductor coupled with Ir/Fe/Co/Ni magnetic multilayers.
Due to large intrinsic spin-orbit coupling, a charge current through the heavy metal layers of a MML exerts spin-orbit torques (SOT) on the magnetic moments in the MML, which have been shown to drive skyrmions along magnetic tracks \cite{fert2013skyrmions, woo2017spin}. In our platform, to realize Majorana braiding we propose to pattern the MML into a track as shown in Fig.~\ref{fig:schematic} and use local spin-orbit torques to move skyrmions along each leg of the track. If skyrmions are braided on the MML track, and if the skyrmion--vortex binding force is stronger than the total pinning force on the SVPs, then the MBS-hosting vortices in the TSC will closely follow the motion of the skyrmions, resulting in the braiding of MBS. We note here that there is an upper threshold speed with which an SVP can be moved, as detailed in Sec.~\ref{sec:braid}. By using experimentally relevant parameters for the TSC and MML in Appendix~\ref{app:A}, we show that our Majorana braiding scheme can be realized with existing materials.
We propose a non-demolition microwave measurement technique for the readout of the quantum information encoded in a pair of vortex Majorana bound states (MBS). A similar method has been proposed for the parity readout in topological Josephson junctions~\cite{PhysRevB.92.245432,Vayrynen2015,Yavilberg2015,PhysRevB.99.235420,PRXQuantum.1.020313} and in Coulomb blockaded Majorana islands~\cite{PhysRevB.95.235305}. Dipole moments of transitions from MBS to CdGM levels couple dispersively to electric fields in a microwave cavity, producing a parity-dependent dispersive shift in the cavity resonator frequency. Thus by probing the change in the resonator's natural frequency, the state of the Majorana modes can be inferred. Virtual transitions from Majorana subspace to excited CdGM subspace induced due to coupling to the cavity electric field are truly parity conserving, making our readout scheme a so-called topological quantum non-demolition technique \cite{PRXQuantum.1.020313, PhysRevB.99.235420}. The readout scheme is explained in greater detail in Sec.~\ref{sec:read}.
As discussed above, in our platform we consider coupling between a thin superconducting layer and magnetic multilayers. We note that in thin superconducting films, vortices are characterized by the Pearl penetration depth, given by $\Lambda \ =\ \lambda ^{2} /d_{s}$, where $\lambda$ is the London penetration depth and $d_{s}$ is the thickness of the TSC film. Typically, these penetration depths $\Lambda$ are much larger than skyrmion radii $r_{sk}$ in MMLs of interest. Further, interfacial DMI in MML stabilizes a N\'eel skyrmion as opposed to a Bloch skyrmion. So hereon, we only study coupling between a N\'eel skyrmion and a Pearl vortex in the limit $\Lambda\gg r_{sk}$.
\section{\label{sec:initial}Initialization and SVP in Equilibrium}
Fig.~\ref{fig:flow} illustrates the process flow of our initialization scheme. Skyrmions can be generated individually in MML by locally modifying magnetic anisotropy through an artificially created defect center and applying a current through adjacent heavy metal layers \cite{zhang2020skyrmion}. Such defect centers have been experimentally observed to act as skyrmion creation sites \cite{buttner2017field}. When the TSC--MML heterostructure is cooled below the superconducting transition temperature (SC $T_{C}$), stray fields from a skyrmion in the MML will nucleate a vortex and an antivortex in the superconducting layer if the nucleation leads to a lowering in overall free energy of the system \cite{volkov}. An analytical expression has been obtained for the nucleation condition in Ref.~\cite{NeelInteraction} ignoring contributions of dipolar and Zeeman energies to total magnetic energy: a N\'eel skyrmion nucleates a vortex directly on top of it if
\begin{equation}
d_{m}\left[ \alpha _{K}\frac{Kr_{sk}^{2}}{2} -\alpha _{A} A-M_{0} \phi _{0}\right] \geq \frac{{\phi _{0}}^2}{8 \pi^2 \lambda} \ln\left(\frac{\Lambda }{\xi }\right).
\label{eqn:nuc}
\end{equation}
\noindent Here, $d_{m}$ is the effective thickness, $M_{0}$ is the saturation magnetization, $A$ is the exchange stiffness and $K$ is the perpendicular anisotropy constant of the MML; $\alpha_K$ and $\alpha_A$ are positive constants that depend on the skyrmion's spatial profile (see Appendix~\ref{app:A}), $r_{sk}$ is the radius of the skyrmion in the presence of a Pearl vortex \footnote{The radius of a skyrmion is not expected to change significantly in the presence of a vortex \cite{NeelInteraction}. We verified this claim with micromagnetic simulations. For the materials in Appendix~\ref{app:A}, when vortex fields are applied to a bare skyrmion, its radius increases by less than $10\%$. For the numerical calculations in this paper, we therefore use the bare skyrmion radius for $r_{sk}$.}, $\phi _{0}$ is the magnetic flux quantum, and $\Lambda$ ($\xi$) is the Pearl depth (coherence length) of the TSC. Although a complete solution of the nucleation condition must include the contributions of dipolar and Zeeman energies to the total energy of the MML, such a calculation can only be done numerically, and Eq.~(\ref{eqn:nuc}) can still be used as an approximate estimate. For the choice of materials listed in the Appendix, the left side of the equation exceeds the right side by $400\%$, strongly suggesting the nucleation of a vortex for every skyrmion in the MML. Furthermore, skyrmions in Ir/Fe/Co/Ni heterostructures have been experimentally shown to nucleate vortices in niobium superconducting films \cite{petrovic2021skyrmion}.
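As a concrete illustration, the inequality in Eq.~(\ref{eqn:nuc}) can be checked numerically. The short Python sketch below evaluates both sides in CGS-Gaussian units; all material parameters are illustrative placeholders rather than the Appendix~\ref{app:A} values, so only the structure of the check, and not the quoted $400\%$ margin, should be read off.
\begin{verbatim}
# Numerical check of the vortex nucleation condition, Eq. (nuc).
# All parameters are illustrative placeholders in CGS-Gaussian units.
import numpy as np

phi0 = 2.07e-7   # flux quantum [G cm^2]
M0   = 500.0     # saturation magnetization [emu/cm^3] (placeholder)
K    = 5.0e6     # perpendicular anisotropy [erg/cm^3] (placeholder)
A    = 1.0e-6    # exchange stiffness [erg/cm]         (placeholder)
r_sk = 100e-7    # skyrmion radius [cm]                (placeholder)
d_m  = 20e-7     # effective MML thickness [cm]        (placeholder)
lam  = 500e-7    # London penetration depth [cm]       (placeholder)
d_s  = 20e-7     # TSC film thickness [cm]             (placeholder)
xi   = 15e-7     # coherence length [cm]               (placeholder)
alpha_K, alpha_A = 0.7, 1.4  # ansatz-dependent O(1) constants (placeholders)

Lam = lam**2 / d_s  # Pearl penetration depth
lhs = d_m * (alpha_K * K * r_sk**2 / 2 - alpha_A * A - M0 * phi0)
rhs = phi0**2 / (8 * np.pi**2 * lam) * np.log(Lam / xi)
print(f"LHS = {lhs:.2e} erg, RHS = {rhs:.2e} erg, nucleation: {lhs >= rhs}")
\end{verbatim}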
We proceed to characterize the strength of the skyrmion (Sk)--vortex (Vx) binding force, as it plays a crucial role in determining the feasibility of moving the skyrmion and the vortex as a single object. The spatial magnetic profile of a N\'eel skyrmion is given by $\boldsymbol{M}_{sk} =M_{0}[\zeta \sin\theta(r) \boldsymbol{\hat{r}}+ \cos\theta(r) \boldsymbol{\hat{z}}]$, where $\zeta=\pm 1$ is the chirality and $\theta(r)$ is the polar angle of the magnetization. For $\Lambda\gg r_{sk}$, the interaction energy between a vortex and a skyrmion is given by \cite{NeelInteraction}:
\begin{equation}
E_{Sk-Vx} =\frac{M_{0} \phi _{0} r_{sk}^{2}}{2\Lambda }\int_{0}^{\infty} \frac{1}{q^2}(e^{-q\tilde{d}}-1) J_{0}(qR) m_{z,\theta}(q) \,dq,
\label{eqn:energy}
\end{equation}
\noindent where $\tilde{d} = d_m \slash r_{sk}$, $J_{n}$ is the $n$th-order Bessel function of the first kind, and $R=r/r_{sk}$ is the normalized horizontal displacement $r$ between the centers of the skyrmion and the vortex. $m_{z,\theta}(q)$ encodes the skyrmion's spatial profile and is given by \cite{NeelInteraction}: $m_{z,\theta}(q) = \int_{0}^{\infty} x [\zeta q + \theta^\prime ( x )] J_{1}( qx) \sin\theta(x) \,dx$, where $\theta ( x )$ is determined by the skyrmion ansatz.
We now derive an expression for the skyrmion--vortex restoring force by differentiating Eq.~(\ref{eqn:energy}) with respect to $r$:
\begin{equation}
F_{Sk-Vx} =-\frac{M_{0} \phi _{0} r_{sk}}{2\Lambda }\int_{0}^{\infty} \frac{1}{q}(1- e^{-q\tilde{d}}) J_{1}(qR) m_{z,\theta}(q) \,dq.
\label{eqn:force}
\end{equation}
For small horizontal displacements $r\ll r_{sk}$ between the centers of the skyrmion and the vortex, we can approximate the Sk--Vx energy as:
\begin{equation}
E_{Sk-Vx} =\frac{1}{2} kr^{2},
\label{eqn:springconstant}
\end{equation}
\noindent with an effective spring constant
\begin{equation}
k =-\frac{M_{0} \phi _{0}}{4\Lambda }\int_{0}^{\infty} (1- e^{-q\tilde{d}}) m_{z,\theta}(q) \,dq.
\label{eqn:spring}
\end{equation}
Figs.~\ref{fig:energies}--\ref{fig:forces} show the binding energy and the restoring force between a vortex and skyrmions of varying thickness for the materials listed in Appendix~\ref{app:A}. Here we used the domain-wall ansatz for the skyrmion, with $\theta(x) = 2\tan^{-1}[\frac{\sinh(r_{sk}/\delta)}{\sinh(r_{sk}x/\delta)}]$, where $r_{sk}/\delta$ is the ratio of the skyrmion radius to its domain-wall width and $x$ is the distance from the center of the skyrmion normalized by $r_{sk}$. As seen in Fig.~\ref{fig:forces}, the restoring force between a skyrmion and a vortex increases with increasing separation between their centers until it reaches a maximum value, $F_{max}$, and then decreases with further increase in separation. We note that $F_{max}$ occurs when the Sk--Vx separation is equal to the radius of the skyrmion, i.e., when $R=1$ in Eq.~(\ref{eqn:force}):
\begin{equation}
F_{max} = -\frac{M_{0} \phi _{0} r_{sk}}{2\Lambda }\int_{0}^{\infty} \frac{1}{q}(1- e^{-q\tilde{d}}) J_{1}(q) m_{z,\theta}(q) \,dq.
\label{eqn:fmax}
\end{equation}
\noindent As the size of the skyrmion increases, the maximum binding force $F_{max}$ of the SVP increases. For a given skyrmion size, increasing the magnet thickness increases the attractive force until the thickness reaches the size of the skyrmion. Further increase in the MML thickness does not lead to an appreciable increase in the stray fields outside the MML and, as a result, the Sk--Vx force saturates.
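The force profile of Eq.~(\ref{eqn:force}) is straightforward to evaluate by numerical quadrature. The sketch below computes the dimensionless profile for the domain-wall ansatz used above, with the prefactor $M_{0}\phi_{0}r_{sk}/2\Lambda$ set to unity; the values of $r_{sk}/\delta$ and $\tilde{d}$ are placeholders, not the Appendix~\ref{app:A} materials.
\begin{verbatim}
# Dimensionless Sk--Vx force profile f(R) from Eq. (force), domain-wall
# ansatz; the physical prefactor M0*phi0*r_sk/(2*Lambda) is set to 1.
import numpy as np
from scipy.integrate import quad
from scipy.special import j1

zeta, a, d_t = 1.0, 2.0, 0.2  # chirality, r_sk/delta, d_m/r_sk (placeholders)

def theta(x):                 # domain-wall skyrmion profile
    return 2 * np.arctan2(np.sinh(a), np.sinh(a * x))

def dtheta(x):                # analytic derivative of theta(x)
    u = np.sinh(a) / np.sinh(a * x)
    du = -a * np.sinh(a) * np.cosh(a * x) / np.sinh(a * x)**2
    return 2 * du / (1 + u**2)

def m_z_theta(q):             # skyrmion form factor m_{z,theta}(q)
    f = lambda x: x * (zeta * q + dtheta(x)) * j1(q * x) * np.sin(theta(x))
    return quad(f, 1e-6, 10.0, limit=200)[0]

def force(R):                 # Eq. (force) without the prefactor
    f = lambda q: (1 - np.exp(-q * d_t)) / q * j1(q * R) * m_z_theta(q)
    return -quad(f, 1e-6, 60.0, limit=200)[0]

for R in (0.25, 0.5, 1.0, 2.0):
    print(f"R = {R:4.2f}   f(R) = {force(R):+.4f}")
\end{verbatim}
The maximum of this profile near $R=1$ corresponds to the $F_{max}$ discussed above.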
It is important to note that stray fields from a skyrmion nucleate both a vortex and an antivortex (Avx) in the superconducting layer \cite{volkov, PhysRevLett.88.017001, milosevic_guided_2010, PhysRevLett.93.267006}. While the skyrmion attracts the vortex, it repels the antivortex. Eqs.~(\ref{eqn:energy}) and (\ref{eqn:force}) remain valid for the Sk--Avx interaction, but with opposite sign. The equilibrium position of the antivortex is at the location where the repulsive skyrmion--antivortex force, $F_{Sk-Avx}$, is balanced by the attractive vortex--antivortex force, $F_{Vx-Avx}$~\cite{lemberger2013theory, ge2017controlled}. Fig.~\ref{fig:fvav} shows $F_{Vx-Avx}$ against $F_{Sk-Avx}$ for the platform in the Appendix. We see that for thicker magnets, the antivortex sits far away from the vortex, where the Avx can be pinned with artificially implanted pinning centers \cite{aichner2019ultradense, gonzalez2018vortex}. For thin magnetic films, where the antivortex is expected to be nucleated just outside the skyrmion radius, we can leverage the Berezinskii--Kosterlitz--Thouless (BKT) transition to negate $F_{Vx-Avx}$ for Vx--Avx distances $r<\Lambda$ \cite{PhysRevB.104.024509, schneider_excess_2014, goldman2013berezinskii, zhao2013evidence}. Namely, when a Pearl superconducting film is cooled to a temperature below $T_C$ but above $T_{BKT}$, vortices and antivortices dissociate to gain entropy, which minimizes the overall free energy of the system \cite{beasley1979possibility}. While the attractive force between a vortex and an antivortex is nullified, a skyrmion in the MML still attracts the vortex and pushes the antivortex towards the edge of the sample, where it can be pinned. Therefore, we assume that the antivortices are located far away and neglect their presence in our braiding and readout schemes.
\section{\label{sec:braid}Braiding}
Majorana braiding statistics can be probed by braiding a pair of MBS \cite{RevModPhys.80.1083}, which involves swapping the positions of the two vortices hosting the MBS. We propose to pattern the MML into interconnected Y-junctions, as shown in Fig.~\ref{fig:two}, to enable this swapping. Ohmic contacts in the HM layers across each leg of the Y-junctions enable the independent application of charge currents along each leg of the track. These charge currents in turn apply spin-orbit torques on the adjacent magnetic layers and allow skyrmions to be moved independently along each leg of the track. As long as the skyrmion and the vortex move as a collective object, braiding of the skyrmions in the MML leads to braiding of the MBS-hosting vortices in the superconducting layer. Below we study the dynamics of a SVP subjected to spin torques for braiding. We calculate all external forces acting on the SVP in the process and discuss the limits in which the skyrmion and the vortex move as a collective object.
For a charge current $\bm{J}$ in the HM layer, the dynamics in the magnetic layer is given by the modified Landau–Lifshitz–Gilbert (LLG) equation \cite{hayashi2014quantitative, slonczewski1996current}:
\begin{equation}
\partial _{t}\bm{m} =-\gamma \left[\bm{m} \times {{\bm H}_{eff}} +\eta J\, \bm{m} \times (\bm{m} \times \bm{p})\right] +\alpha\, \bm{m} \times \partial _{t}\bm{m},
\label{eqn:llg}
\end{equation}
\noindent where we have included the damping-like term from the SOT and neglected the field-like term, as it does not induce motion of N\'eel skyrmions in our geometry \cite{jiang_blowing_2015}. Here, $\gamma$ is the gyromagnetic ratio, $\alpha$ is the Gilbert damping parameter, and ${{\bm H}_{eff}}$ is the effective field from the dipole, exchange, anisotropy and DMI interactions. $\bm{p}=\mathrm{sgn}(\Theta _{SH})\,\bm{\hat{J}} \times \hat{\bm{n}}$ is the polarization direction of the spin current, where $\Theta _{SH}$ is the spin Hall angle, $\bm{\hat{J}}$ is the direction of the charge current in the HM layer and $\hat{\bm{n}}$ is the unit vector normal to the MML. $\eta=\hbar \Theta _{SH}/2eM_{0} d_{m}$ quantifies the strength of the torque, with $\hbar$ the reduced Planck constant and $e$ the electron charge.
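For intuition about the torque geometry in Eq.~(\ref{eqn:llg}), the toy sketch below evolves a single macrospin under the effective field and the damping-like SOT, using the equivalent explicit (Landau--Lifshitz) form of the Gilbert equation; all numbers are placeholders, and this is not a substitute for the micromagnetic simulations discussed later.
\begin{verbatim}
# Single-macrospin integration of Eq. (llg) with the damping-like SOT;
# a toy illustration of the torque geometry (placeholder parameters).
import numpy as np

gamma, alpha = 1.76e11, 0.1        # gyromagnetic ratio, Gilbert damping
etaJ = 1e-3                        # eta*J, SOT strength [T] (placeholder)
Heff = np.array([0.0, 0.0, 0.05])  # effective field [T]     (placeholder)
p    = np.array([0.0, 1.0, 0.0])   # spin polarization, J x n for J || x

def llg_rhs(m):
    T = np.cross(m, Heff) + etaJ * np.cross(m, np.cross(m, p))
    # closed-form solution of dm/dt = -gamma*T + alpha m x dm/dt:
    return -gamma * (T + alpha * np.cross(m, T)) / (1 + alpha**2)

m = np.array([0.0, 0.1, 0.995]); m /= np.linalg.norm(m)
dt = 1e-13
for _ in range(10000):             # explicit midpoint steps over ~1 ns
    k1 = llg_rhs(m)
    m = m + dt * llg_rhs(m + 0.5 * dt * k1)
    m /= np.linalg.norm(m)         # keep |m| = 1
print(m)
\end{verbatim}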
Assuming skyrmion and vortex move as a collective object, semiclassical equations of motion for the centers of mass of the skyrmion and the vortex can be written using collective coordinate approach as done in Ref.~\cite{hals2016composite}:
\begin{eqnarray}
m_{sk}\ddot{\bm{R}}_{sk}= {\bf{F}}_{SOT} - \frac{\partial U_{sk,\ pin}}{\partial \bm{R}_{sk}} - & {\bm{G}}_{sk}\times \dot{\bm{R}}_{sk} - 4\pi s \alpha \dot{\bm{R}}_{sk} \nonumber \\
&- k({\bm{R}}_{sk}-{\bm{R}}_{vx}),
\label{eqn:skmotion}
\end{eqnarray}
and
\begin{eqnarray}
m_{vx}\ddot{\bm{R}}_{vx} = - \frac{\partial U_{vx,\ pin}}{\partial \bm{R}_{vx}} - &{\bm{G}}_{vx}\times \dot{\bm{R}}_{vx} - {\alpha}_{vx} \dot{\bm{R}}_{vx} \nonumber \\
& + k({\bm{R}}_{sk}-{\bm{R}}_{vx}),
\label{eqn:vxmotion}
\end{eqnarray}
\noindent where ${\bm{R}}_{sk}$ (${\bm{R}}_{vx}$), $m_{sk}$ ($m_{vx}$) and $q_{sk}$ ($q_{vx}$) are the position, mass and chirality of the skyrmion (vortex). ${\bm{F}}_{SOT}=\pi ^{2} \gamma \eta r_{sk} s\bm{{J}} \times \hat{\bm{n}}$ is the force on the skyrmion due to spin-orbit torques in the Thiele formalism, where $s=M_0 d_m/\gamma$ is the spin density \cite{upadhyaya2015electric, thiele1970theory}. The third term on the right side of Eq.~(\ref{eqn:skmotion}) gives the Magnus force on the skyrmion, with ${\bm{G}}_{sk} = 4\pi s q_{sk}\hat{\bm{z}}$, and the fourth term characterizes a dissipative force due to Gilbert damping. Similarly, the second term on the right side of Eq.~(\ref{eqn:vxmotion}) gives the Magnus force on the vortex, with ${\bm{G}}_{vx} = 2\pi s n_{vx} q_{vx} \hat{\bm{z}}$, where $n_{vx}$ is the superfluid density of the TSC, and the third term characterizes a viscous force with friction coefficient ${\alpha}_{vx}$. $U_{sk,\ pin}$ ($U_{vx,\ pin}$) is the pinning potential landscape for the skyrmion (vortex). The last terms in Eqs.~(\ref{eqn:skmotion}) and (\ref{eqn:vxmotion}) represent the restoring force arising from the Sk--Vx separation, with the effective spring constant $k$ of Eq.~(\ref{eqn:spring}); this harmonic form is valid when $|{\bm{R}}_{sk}-{\bm{R}}_{vx}| <r_{sk}$.
We consider steady-state solutions of the equations of motion, assuming that the skyrmion and the vortex are bound; the conditions for the dissociation of a SVP are discussed below. For a given external current $\bm{J}$, the velocity $v$ of the SVP in steady state is obtained by setting $\ddot{\bm{R}}_{sk} = \ddot{\bm{R}}_{vx} = 0$ and $\dot{\bm{R}}_{sk} = \dot{\bm{R}}_{vx} = \dot{\bm{R}}$ in Eqs.~(\ref{eqn:skmotion}) and (\ref{eqn:vxmotion}):
\begin{equation}
v = |\dot{\bm{R}}| = \frac{\pi ^{2} \gamma \eta r_{sk} sJ}{\sqrt{(G_{sk}+G_{vx})^{2} +(4\pi s \alpha + \alpha_{vx})^{2}}}.
\label{eqn:vgivenj}
\end{equation}
\noindent In general, the SVP moves at an angle $\varphi$ relative to $\bm{{F}}_{SOT}$ due to Magnus forces on the skyrmion and the vortex, with:
\begin{eqnarray}
\tan \varphi = \frac{G_{sk}+G_{vx}}{4\pi s \alpha + \alpha_{vx}}.
\label{eqn:svpangle}
\end{eqnarray}
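A quick numerical evaluation of Eqs.~(\ref{eqn:vgivenj}) and (\ref{eqn:svpangle}) is sketched below. Since $\gamma\eta s=\hbar\Theta_{SH}/2e$, the SOT force simplifies to $F_{SOT}=\pi^{2}r_{sk}J\hbar\Theta_{SH}/2e$; the vortex coefficients $G_{vx}$ and $\alpha_{vx}$, like all other numbers, are placeholders rather than the Appendix~\ref{app:A} values.
\begin{verbatim}
# Steady-state SVP speed and deflection angle, Eqs. (vgivenj)-(svpangle).
import numpy as np

hbar, e  = 1.055e-34, 1.602e-19
Theta_SH = 0.1                 # spin Hall angle                (placeholder)
M0, d_m  = 5e5, 20e-9          # magnetization [A/m], thickness [m]
gamma    = 1.76e11             # gyromagnetic ratio [rad/(s T)]
alpha    = 0.1                 # Gilbert damping                (placeholder)
r_sk     = 100e-9              # skyrmion radius [m]            (placeholder)
J        = 1e10                # charge current density [A/m^2] (placeholder)

s    = M0 * d_m / gamma        # spin density
G_sk = 4 * np.pi * s           # skyrmion gyrocoupling (|q_sk| = 1)
G_vx = 0.5 * G_sk              # vortex Magnus coefficient   (placeholder)
a_vx = 4 * np.pi * s * alpha   # vortex friction coefficient (placeholder)

F_sot = np.pi**2 * r_sk * J * hbar * Theta_SH / (2 * e)
v     = F_sot / np.hypot(G_sk + G_vx, 4 * np.pi * s * alpha + a_vx)
phi   = np.degrees(np.arctan2(G_sk + G_vx, 4 * np.pi * s * alpha + a_vx))
print(f"v = {v:.2f} m/s, deflection angle = {phi:.0f} deg")
\end{verbatim}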
Armed with the above equations, we extract the key parameters that determine the feasibility of our braiding scheme. First, if $\bm{{F}}_{SOT}$ from the external currents is unable to overcome the maximum pinning force on either the skyrmion ($F_{pin, sk}$) or the vortex ($F_{pin, vx}$), the SVP will remain stationary. This gives a lower threshold $J^-$ on the external current, obtained by balancing $\bm{{F}}_{SOT}$ against the pinning forces:
\begin{equation}
J^{-} = \frac{max(F_{pin, sk}, F_{pin, vx})}{\pi ^{2} \gamma \eta r_{sk} s}.
\label{eqn:jminus}
\end{equation}
Second, once the SVP is in motion, the drag and Magnus forces acting on the skyrmion and the vortex are proportional to their velocity. If the net external force on a vortex in motion exceeds the maximum force with which a skyrmion can pull it ($F_{max}$), the skyrmion and the vortex dissociate and no longer move as a collective object. This sets an upper bound $v^+$ on the SVP speed, obtained by balancing $F_{max}$ against the net Magnus and drag forces on the vortex. This maximum speed plays a key role in determining whether our braiding and readout scheme can be completed within the quasiparticle poisoning time:
\begin{equation}
v^{+} = \frac{F_{max}}{\sqrt{(\alpha_{vx})^2+(G_{vx})^2}}.
\label{eqn:vplus}
\end{equation}
An upper bound on the SVP speed implies an upper bound $J^+$ on the external current, obtained by inserting $v^+$ into Eq.~(\ref{eqn:vgivenj}):
\begin{equation}
J^{+} = \frac{v^{+} \sqrt{(G_{sk}+G_{vx})^{2} +(4\pi s \alpha + \alpha_{vx})^{2}}}{\pi ^{2} \gamma \eta r_{sk} s}.
\label{eqn:jplus}
\end{equation}
Another critical parameter is the distance of closest approach between two skyrmion--vortex pairs, $r_{min}$, which controls the overlap of the MBS wavefunctions centered at the vortex cores; it is obtained by balancing the attractive Sk--Vx force against the repulsive Vx--Vx force:
\begin{equation}
r_{min} = \frac{\phi_0^2}{4\pi^2 \Lambda} \frac{1}{F_{max}}.
\label{eqn:rmin}
\end{equation}
Finally, the power $P$ dissipated in the heavy metal layers due to Joule heating from the charge currents has to be balanced by the cooling power of the dilution refrigerator:
\begin{equation}
P = n_{hm} L W t_{hm} \rho_{hm} J^2,
\label{eqn:power}
\end{equation}
\noindent where $n_{hm}$ is the number of heavy metal layers, $L$ ($W$) is the length (width) of the active segment of the MML track, $t_{hm}$ is the thickness of each heavy metal layer and $\rho_{hm}$ is the resistivity of a heavy metal layer.
By applying a current $J^- < J < J^+$ locally in a desired section of the MML track, each SVP can be individually addressed. For the materials listed in Appendix~\ref{app:A}, the maximum speed $v^+$ with which a SVP can be moved is over $1000$~m/s. At this top speed, SVPs can cover the braiding distance (the sum of the lengths of the track in steps I--VI of Fig.~\ref{fig:braiding}) of $50 r_{sk}$ in about $0.15$~ns, but the process generates substantial Joule heating. At a reduced speed of $0.25$~m/s, SVPs cover that distance in $7~\mu$s while generating $30~\mu$W of heat, which is within the cooling power of modern dilution refrigerators. SVPs can be braided at higher speeds if the dilution refrigerators can provide more cooling power or if the resistivity of the heavy metal layers in the MML can be lowered. Although quasiparticle poisoning times in superconducting vortices have not yet been measured, estimates in similar systems range from hundreds of microseconds to seconds \cite{higginbotham2015parity, PhysRevLett.126.057702, PhysRevB.85.174533}. Our braiding time falls well within these estimates, indicating the viability of our platform. Furthermore, the ability to tune the braiding time in our platform by varying the magnitude of the currents in the heavy metal layers can be used to investigate the effects of quasiparticle poisoning on the braiding protocol.
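The timing and heat-budget numbers quoted above follow from elementary relations; a sketch is given below, with $r_{sk}$ chosen so that $50r_{sk}$ matches the quoted braiding path and all other parameters being placeholders rather than the Appendix~\ref{app:A} data.
\begin{verbatim}
# Braiding-time and Joule-heating budget (placeholder parameters).
v_svp   = 0.25            # reduced SVP speed [m/s]
r_sk    = 35e-9           # skyrmion radius [m] (illustrative)
t_braid = 50 * r_sk / v_svp
print(f"braiding time = {t_braid * 1e6:.1f} us")   # ~7 us, cf. the text

n_hm, t_hm = 10, 2e-9     # number and thickness of HM layers (placeholders)
L, W       = 2e-6, 2e-7   # active track length and width [m] (placeholders)
rho_hm     = 2e-7         # HM resistivity [Ohm m]            (placeholder)
J          = 8e9          # drive current density [A/m^2]     (placeholder)
P = n_hm * L * W * t_hm * rho_hm * J**2            # Eq. (power)
print(f"dissipated power = {P * 1e6:.2f} uW")
\end{verbatim}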
As will be shown in Sec.~\ref{sec:read}, Vx--Vx distances $<10\xi$ should be sufficient to perform a dispersive readout of the MBS parity in adjacent vortices. For the materials listed in Appendix~\ref{app:A}, the distance of closest approach between two vortices is $r_{min}=40$~nm. The shape of the MML track further limits how close two vortices can be brought together (see step II in Fig.~\ref{fig:braiding}). With the geometry of the track taken into account, a Vx--Vx distance of less than $10\xi$ can still be easily achieved, enough to induce a detectable shift in the cavity's resonance frequency during the dispersive readout.
Figs.~\ref{fig:t0}--\ref{fig:t6} show the results of a micromagnetic simulation of skyrmion braiding for the example platform, performed in a smaller section of the MML for computational reasons. The details of the simulation are given in Appendix~\ref{app:A}. The simulation results demonstrate the effectiveness of using local SOT to move individual skyrmions and realize braiding. Finally, as discussed in this section, due to the strong skyrmion--vortex binding force, the MBS-hosting vortices in the TSC will braid alongside the skyrmions.
\begin{figure*}
\centering
\includegraphics[width=1\textwidth]{MW_combined.pdf}
\caption{\label{fig:readout} (a) Schematic of our readout process. When two vortices are brought close, microwave transitions can be dispersively driven from the MBS to the excited hybridized CdGM levels (only level $1$ is shown). Parity of the Majorana mode can be inferred from the difference in the cavity frequency shift produced by $\omega_{-\mathcal{M},1}$ and $\omega_{\mathcal{M},1}$ transitions (see Eq.~(\ref{eq:chi})).
The allowed fermion parity conserving transitions are shown in both single-particle and many-particle representations. In the latter, dashed and solid lines denote states in the two fermion parity sectors.
The transition of frequency $\omega_{-\mathcal{M},1}$ (blue arrows) corresponds to breaking a Cooper pair and exciting the MZM and CdGM levels (MZM being initially unoccupied).
When the MZM is occupied, the transition of frequency $\omega_{\mathcal{M},1}$ (red arrow) excites the MZM quasiparticle into the CdGM level.
The dipole transition matrix elements are different for the two processes, enabling parity readout.
(b) MZM-parity sensitive dipole transition strength versus vortex pair separation. We denote $g^2_n = (|\mathbf{E}_{0}\cdot\mathbf{d}_{n,- \mathcal{M} }|^{2}-|\mathbf{E}_{0}\cdot\mathbf{d}_{n, \mathcal{M} }|^{2}) $ the dipole transition strength between the Majorana level and the $n$th CdGM level. We plot the dimensionless strength normalized by $U = e |\mathbf{E}_{0}| l$.
As expected from MZM hybridization, $g_n$ decays approximately exponentially in the distance between the two vortices. Oscillations in $g^2_n$ represent oscillations in the wave functions of a clean system. In a disordered (real) system the oscillations are expected to be smeared out.
The inset shows the probability density for the MZM hosted by a vortex pair 400 nm apart.
The simulation was done for an effective 2D model (a $1000\times 600 \mathrm{nm}^2$ rectangle) of a 3D topological insulator surface, see Refs.~\cite{PhysRevB.86.155146,PhysRevX.7.031006,MW_inprep}. We used $\xi = 15 \mathrm{nm}$, vortex radius $r = \xi$, and $E_F = 1.125 \Delta$ in the vortex.
}
\end{figure*}
\section{\label{sec:read}Readout}
Quantum information is encoded in the charge parity of the MBS hosted in a pair of vortices, which we propose to read out with a dispersive measurement technique. Fig.~\ref{fig:readout}a summarizes our readout scheme: the top panel shows the single-particle energy levels and the bottom panel shows the many-body energy levels of a pair of vortices brought close to each other. In the dispersive limit, the microwave cavity electric field can drive virtual transitions from the ground-state Majorana manifold to the excited CdGM manifold (only one CdGM level, labeled 1, is considered in the figure). The transitions allowed by selection rules, labeled $\omega_{-\mathcal{M},1}$ and $\omega_{\mathcal{M},1}$, are shown in the many-body spectrum. Each of these virtual transitions causes a state-dependent dispersive shift in the cavity's natural frequency, and the parity of the vortex pair can be inferred from the relative change in the cavity frequency. Note that each of the allowed transitions is truly parity conserving, since microwaves cannot change the number of fermions.
Since the parity states are true eigenstates (as opposed to approximate eigenstates) of the measurement operation, our readout scheme can be dubbed a topological quantum non-demolition technique \cite{PRXQuantum.1.020313, PhysRevB.99.235420}. We now proceed to calculate the dipole coupling strengths of the allowed transitions to the cavity electric field and the corresponding dispersive shift.
In BCS mean-field theory, the coupling to electric field can be described by the Hamiltonian
\begin{equation}
\delta H=-\mathbf{E}(t)\cdot\hat{\mathbf{d}}\,,\quad\hat{\mathbf{d}}=\frac{e}{2}\int d^{2}\mathbf{r}\,\mathbf{r}\,\hat{\Psi}^{\dagger}\tau_{z}\hat{\Psi}\,, \label{eq:deltaH}
\end{equation}
where $\mathbf{E}(t) = \mathbf{E}_0 \cos \omega t $ is the microwave-induced time-dependent electric field
which is approximately uniform over the scale of the vortices~\footnote{In Eq.~(\ref{eq:deltaH}), we assume a thin film superconductor that can be approximated by a 2D system. This model can also describe a 3D superconductor when the electric field $\mathbf{E}$ does not penetrate deep into its bulk. }.
The electric field couples to the dipole operator $\hat{\mathbf{d}}$ of the electronic states in the vortices.
We have written it in terms of the electron field operator in
Nambu spinor notation, $\hat{\Psi}=(\psi_{\uparrow},\psi_{\downarrow},\psi_{\downarrow}^{\dagger},-\psi_{\uparrow}^{\dagger})^{T}$; the Pauli matrix $\tau_z$ acts on the particle-hole indices.
At low energies, we expand the field operators in terms of eigenstates as
\begin{equation}
\hat{\Psi}(\mathbf{r})= \phi_{1}(\mathbf{r})\hat{\gamma}_{1}+\phi_{2}(\mathbf{r})\hat{\gamma}_{2} +\Phi_{1}(\mathbf{r})\hat{\Gamma}_{1}+\Phi_{-1}(\mathbf{r})\hat{\Gamma}_{1}^{\dagger}+\dots \,, \label{eq:Psi} \end{equation}
where $\hat{\gamma}_{1,2}$ are the Majorana operators for vortices 1 and 2, and $\hat{\Gamma}_{1}^{(\dagger)}$ is the annihilation (creation) operator for the lowest CdGM state. The corresponding wave functions multiply the operators in Eq.~(\ref{eq:Psi}).
At low frequencies much below the level spacing $\delta E$ of the vortex quasiparticle bound states, $\omega \ll \delta E /\hbar$, the microwave field does not excite the quasiparticle states of the vortices.
We shall also assume that these quasiparticle states are not occupied, for example due to quasiparticle poisoning.
Under these conditions, the vortex pair stays in its ground state manifold consisting of the two states of unoccupied/occupied non-local MBS.
With sufficiently weak microwave driving we can use dispersive readout to measure the charge parity $\sigma_{z} = i\hat{\gamma}_{1}\hat{\gamma}_{2} $~\cite{RevModPhys.93.025005,PRXQuantum.1.020313}. The dispersive Hamiltonian of the resonator-vortex pair system reads~\cite{PRXQuantum.1.020313},
\begin{equation}
H_\text{resonator} + \delta H
= \hat{a}^\dagger \hat{a} (\hbar \omega + \sigma_{z} \hbar \chi) \,, \label{eq:MW+MZM}
\end{equation}
where $\hat{a},\hat{a}^\dagger$ are the harmonic oscillator annihilation and creation operators for the resonator. The MBS parity-dependent dispersive frequency shift is
\begin{equation}
\hbar \chi= \frac{g_1^2}{ \delta E} \left[\frac{\delta E^2}{\delta E^2 - (\hbar \omega)^2} \right] \,, \label{eq:chi}
\end{equation}
where we denote $g_1^2 = |\mathbf{E}_{0}\cdot\mathbf{d}_{1,- \mathcal{M} }|^{2}-|\mathbf{E}_{0}\cdot\mathbf{d}_{1, \mathcal{M}}|^{2}$ and $\omega$ is the resonator bare frequency, $\mathbf{E}_0$ is the electric field amplitude, and $\delta E $ is the energy gap separating the MBS from the first excited CdGM mode. We ignore here the exponentially small energy splitting between the MBS, which would give subleading corrections to $\chi$; we will see that $\chi$ itself will be exponentially small in the vortex separation (due to the parity-sensitive transition dipole matrix elements $\mathbf{d}_{1,- \mathcal{M} }$ and $\mathbf{d}_{1, \mathcal{M} }$ being almost equal).
We denote here $\mathbf{d}_{1, \mathcal{M} } = \langle 1 | \hat{ \mathbf{d}} | \mathcal{M} \rangle $ and $\mathbf{d}_{1, - \mathcal{M} } = \langle \mathcal{M},1 | \hat{ \mathbf{d}} | 0 \rangle $ where the relevant states are the ground state $| 0 \rangle$, the single-particle excited states $ | \mathcal{M} \rangle = \hat{\Gamma}_{\mathcal{M}}^{\dagger} | 0 \rangle$ and $ | 1 \rangle = \hat{\Gamma}_{1}^{\dagger} | 0 \rangle$, and the two-particle excited state $ | \mathcal{M}, 1 \rangle = \hat{\Gamma}_{1}^{\dagger} \hat{\Gamma}_{\mathcal{M}}^{\dagger} | 0 \rangle$; we introduced the annihilation operator $\hat{\Gamma}_{\mathcal{M}}=(\hat{\gamma}_{1}+i\hat{\gamma}_{2})/2 $ for the non-local MBS.
Evaluating the dipole transition matrix elements $ \mathbf{d}_{1, \pm \mathcal{M} }$ microscopically is somewhat involved since proper screening by the superconducting condensate needs to be carefully accounted for and is beyond the BCS mean-field theory~\cite{1996PhRvL..77..566B,2001PhRvL..86..312K,PhysRevB.91.045403,PhysRevB.97.125404,PhysRevX.8.031041}.
Nevertheless, to estimate $\mathbf{d}_{1, \pm \mathcal{M} }$ we can use Eq.~(\ref{eq:deltaH}) by replacing $\mathbf{r} \approx l \hat{\mathbf{z}}$ in it, with $l \approx a_B$ being the effective distance to the image charge in the superconductor and $\hat{\mathbf{z}}$ the surface normal vector~\cite{1996PhRvL..77..566B}. Here $a_B$ denotes the Bohr radius.
We evaluate the dimensionless matrix elements of the effective dipole ``charge'' $\mathbf{d}\cdot \hat{\mathbf{z}} / l$ by using a numerical simulation of the Majorana and CdGM states in a double vortex system depicted in Fig.~\ref{fig:readout}b. The numerical simulations will be detailed in a future publication~\cite{MW_inprep}.
In Fig.~\ref{fig:readout}b we plot the parity-sensitive term $g_n^2$ that largely determines the dispersive shift $\chi$, Eq.~(\ref{eq:chi}).
We find that even a relatively distant vortex pair can provide a parity-dependent shift $g_n^2 \sim 10^{-2} (e l E_0)^2$.
Since the relevant dipole moment is normal to the superconductor surface, we can couple to the dipole by using a microwave resonator above the surface, producing a large perpendicular electric field.
With a resonator zero-point voltage $V_0 \sim 100\, \mu \mathrm{V}$ at a $\sim 10 \mathrm{nm}$ distance from the vortices, we obtain $e l E_0 \approx 1 \mu \mathrm{eV} \cdot (l / \text{\AA}) \approx 2.4 \times 10^2 h \mathrm{MHz} \cdot (l / \text{\AA})$. (We estimate such high zero-point voltages can be achieved in high-inductance resonators~\cite{PhysRevApplied.5.044004}.)
Taking a low-lying CdGM state with $\delta E \sim 10\,\mu\mathrm{eV}$, we obtain $ \chi / 2\pi \sim 20 \, \mathrm{MHz} \cdot (l / \text{\AA})^2$ where $l \sim a_B \gtrsim 1 \text{\AA}$ is the typical dipole size~\cite{1996PhRvL..77..566B}.
We thus see that the MBS vortex parity measurement is well within standard circuit QED measurement capabilities~\cite{RevModPhys.93.025005}.
We note that the above estimate does not include the resonant enhancement, the second factor in Eq.~(\ref{eq:chi}), which may further substantially increase the frequency shift.
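For reference, this order-of-magnitude estimate can be reproduced in a few lines. The sketch below drops the resonant-enhancement factor, as in the text, and takes $g_{1}\sim elE_{0}$, appropriate for a close vortex pair; for distant pairs, Fig.~\ref{fig:readout}b gives $g_{1}^{2}\sim 10^{-2}(elE_{0})^{2}$, reducing $\chi$ accordingly.
\begin{verbatim}
# Order-of-magnitude dispersive shift from Eq. (chi), without the
# resonant-enhancement factor (text estimate, l ~ 1 Angstrom).
h_eVs = 4.136e-15      # Planck constant [eV s]
elE0  = 1e-6           # e*l*E0 [eV], from the zero-point-voltage estimate
g1sq  = elE0**2        # close vortex pair: g_1 ~ e*l*E0
dE    = 10e-6          # MBS-to-CdGM gap [eV]

chi = g1sq / dE        # [eV]
print(f"chi/2pi ~ {chi / h_eVs / 1e6:.0f} MHz")  # ~24 MHz, cf. ~20 MHz above
\end{verbatim}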
Finally, we note that the dipole operator $\hat{\mathbf{d}}$ also has a non-zero diagonal matrix element $\mathbf{d}_{\mathcal{M}}$ in the Majorana state~\cite{PhysRevB.97.125404}, leading to a term $\mathbf{E}_0 \cdot\mathbf{d}_{\mathcal{M}} \sigma_z (\hat{a}+\hat{a}^\dagger)$ in Eq.~(\ref{eq:MW+MZM}). This term in principle allows one to perform longitudinal readout of the MBS parity. However, making longitudinal readout practical may require parametric modulation of the coupling, in our case $\mathbf{d}_{\mathcal{M}}$, which may be difficult~\cite{PhysRevB.99.235420,RevModPhys.93.025005}.
\section{\label{sec:summ}Summary}
Measuring braiding statistics is the ultimate method to conclusively verify the existence of non-abelian excitations. We proposed a unified platform to initialize, braid and read out Majorana modes, avoiding abrupt topological-trivial interfaces at each stage. We derived general expressions for the braiding speeds attainable with spin currents, for the distance of closest approach between two Majorana modes, and for the resulting dispersive shift in the cavity resonance frequency. We showed that our setup can be readily realized with existing options for the TSC and MML materials.
\begin{acknowledgments}
We would like to thank Axel Hoffman and Mohammad Mushfiqur Rahman for helpful discussions. JIV thanks Dmitry Pikulin and Rafa\l{} Rechci\'{n}ski for helpful discussions on 3D TI simulations. JIV and LPR acknowledge support from the Office of the Under Secretary of Defense for Research and Engineering under award number FA9550-22-1-0354. YPC and PU acknowledge partial support of the work from the US Department of Energy (DOE) Office of Science through the Quantum Science Center (QSC, a National Quantum Information Science Research Center) and NSF ECCS-1944635. STK acknowledges support from the Purdue Research Foundation.
\end{acknowledgments}
\section{Introduction}
Over the last decade, imaging atmospheric Cherenkov telescopes
(IACTs) have emerged as the prime instrument for the detection
of cosmic $\gamma$-rays in the TeV energy regime \cite{review}. Both galactic
and extragalactic sources of such $\gamma$-rays have been firmly
established, and have been identified with pulsars, supernova
remnants, and active galactic nuclei. Going beyond the existence
proof for different classes of $\gamma$-ray sources, interest
is increasingly turning towards precise measurements of the flux
and of the energy spectra, and the search for a break or cutoff
in the spectra.
Precise measurements of flux and spectrum with the IACT technique
represent a non-trivial challenge. Unlike particle detectors
used in high-energy-physics experiments or flown on balloons
or satellites, Cherenkov telescopes cannot be calibrated in a
test beam. Their energy calibration and response function has to
be derived indirectly, usually relying heavily on Monte Carlo
simulations. In addition, conventional single IACTs do not allow one
to unambiguously reconstruct the full geometry of an air shower, i.e.,
its direction in space and its core location; this lack of
constraints makes consistency checks between data and simulation more
difficult.
The stereoscopic observation of air showers with multiple telescopes,
as pioneered in the HEGRA system of Cherenkov telescopes
\cite{hegra_system}, solves
the latter problem. With two telescopes, the shower geometry is fully
determined. With three or more telescopes, the geometry is overdetermined
and one can measure resolution functions etc. \cite{wh_kruger}.
Angular resolution and energy resolution are improved compared to a
single telescope. The stereoscopic reconstruction of air showers
also allows a more detailed study of shower properties.
The analysis presented in the following concentrates on one feature
of $\gamma$-ray induced air showers which is central to the reconstruction
of shower energies, namely the distribution of photon intensity in the
Cherenkov light pool, as a function of the distance to the shower core.
In principle, the distribution of Cherenkov light can be calculated
from first principles, starting from the shower evolution governed by
quantum electrodynamics (QED),
followed by the well-understood emission of Cherenkov light,
and its propagation through the atmosphere. The relevant atmospheric
parameters are quite well known and parameterized
(see, e.g., \cite{standard_atmo,modtran}). Nevertheless, early simulations showed
significant differences between simulation codes \cite{early_sim}.
These discrepancies can be traced to differences in the assumptions and in the
simplifications which are unavoidable to limit the processor time
required to generate a representative sample of air showers. More
recently, simulation codes seem to have converged
(see, e.g., \cite{recent_sim}), and
agree reasonably well among each other. Nevertheless, the experimental
verification of this key input to the interpretation of IACT data seems
desirable. In the past, experimental results concerning the distribution
of Cherenkov light in air showers were mainly limited to hadron-induced showers
of much higher energies.
The study of the distribution of Cherenkov light in TeV $\gamma$-ray
showers was carried out using the HEGRA system of IACTs,
based on the extensive sample of $\gamma$-rays detected from the
AGN Mrk 501 \cite{501_paper}. The Mrk 501 $\gamma$-ray sample
combines high statistics with a very favorable ratio of signal to
cosmic-ray background.
The basic idea is quite simple: the shower direction and core location
are reconstructed from the different views of the shower. One then selects
showers of a given energy and plots the light yield observed in the
telescopes as a function of the distance to the shower core.
For this event selection, one should not
use the standard procedures for energy reconstruction
\cite{501_paper,wh_kruger}, since these procedures already assume a
certain distribution of the light yield. Instead, a much simpler -- and
bias-free -- method
is used to select events of a given energy: one uses a sample of
events which have
their core at a fixed distance $d_i$ (typically around 100~m)
from a given telescope $i$,
and which generate
a fixed amount of light $a_i$ in this telescope. Located on a circle
around telescope $i$, these showers cover a wide range in core distance
$r_j$ relative to some second telescope $j$, which in case of the
HEGRA array is located between about 70~m and 140~m from telescope $i$.
The measurement of the light yield $a_j$ in this second telescope
provides, via $a_j(r_j)$, the shape of the Cherenkov
light pool. Lacking an absolute energy scale, this method
provides the radial dependence, but not the absolute normalization, of
the light yield. To determine the distribution of light for pure
$\gamma$-rays, the cosmic-ray background under the Mrk 501 signal
is subtracted on a statistical basis.
The following sections briefly describe the HEGRA IACT system, give
more detail on the Mrk 501 data set and the
analysis technique, and present and summarize the
results.
\section{The HEGRA IACT system}
The HEGRA IACT system is located on the Canary Island of La Palma,
at the Observatorio del Roque de los Muchachos
of the Instituto Astrofisico de Canarias,
at a height of about 2200~m asl.
The system will ultimately comprise five identical telescopes,
four of which are arranged in the corners of a square with roughly
100~m side length; the fifth telescope is located
in the center of the square. Currently, four of the telescopes
are operational in their final form. The fifth telescope -- one
of the corner telescopes -- is equipped with an older camera and
will be upgraded in the near future; it is not included in the
CT system trigger, and is not used in this analysis.
The system telescopes have
8.5~m$^2$ mirror area,
5~m focal length, and 271-pixel cameras with a pixel
size of $0.25^\circ$ and a field of view of $4.3^\circ$.
The cameras are read out by 8-bit Flash-ADCs, which sample the
pixel signals at a frequency of 120 MHz. More information on
the HEGRA cameras is given in \cite{hermann_padua}. The two-level
trigger requires a coincidence of two neighboring pixels
to trigger a telescope, and a coincidence of at least two
telescope triggers to initiate the readout. The pixel trigger
thresholds were initially set to 10~mV, corresponding to
about 8 photoelectrons, and were reduced later in the 1997 run to
8~mV, resulting in a typical trigger rate of 15~Hz,
and an energy threshold of the system of about
500~GeV. An in-depth discussion of the trigger system can be
found in \cite{trigger_paper}.
During data taking, a light pulser is used to regularly monitor
the gain and timing of the PMTs. FADC pedestals and offsets of
the trigger discriminators are followed continuously. Deviations
in telescope pointing are measured and corrected using bright
stars, resulting in a pointing accuracy
of better than $0.01^\circ$ \cite{pointing_paper}.
In the data analysis, a deconvolution procedure is applied to
the FADC data to generate minimum-length signals, and a signal
amplitude and timing are derived for each pixel
\cite{hess_phd}. With the gain
set to about 1 FADC count per photoelectron, the system provides
a direct linear range of about 200 photoelectrons. For larger signals,
the pulse length as measured by the FADC can be used to recover the
amplitude information, extending the dynamic range to well beyond
500 photoelectrons per pixel. Image pixels are then selected as
those pixels having a signal above a high cut of 6 photoelectrons,
or above a lower cut of 3 photoelectrons if adjacent
to a high pixel. By diagonalizing the `tensor of inertia' of each image, the major
and minor axes of the images are determined, together with the usual {\em width}
and {\em length} parameters \cite{hillas}. Both the image of the source of
a $\gamma$-ray and the point where the shower axis intersects the
telescope plane fall onto the major axes of the images. From the
multiple views of an air shower provided by the different telescopes,
the shower direction is hence determined by superimposing the images
and intersecting their major axes (see \cite{hegra_system,kohnle_paper}
for details); the typical angular resolution is $0.1^\circ$.
Similarly, the core location is derived. The $\gamma$-ray sample is enhanced
by cuts on the {\em mean scaled width} which is calculated by
scaling the measured {\em widths} of all images to the {\em width} expected
for $\gamma$-ray images of a given image {\em size} and distance
to the shower core \cite{501_paper}.
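The geometrical step at the heart of this chain, intersecting the image major axes to find the shower direction, can be cast as a small weighted least-squares problem. The toy Python sketch below illustrates the principle only; it is not the HEGRA reconstruction software, and the coordinates and weights are made-up numbers.
\begin{verbatim}
# Least-squares intersection of image major axes (toy illustration).
import numpy as np

def intersect_axes(centroids, axes, weights):
    """Weighted least-squares intersection of 2D lines p = c_i + t*u_i."""
    A, b = np.zeros((2, 2)), np.zeros(2)
    for c, u, w in zip(centroids, axes, weights):
        u = u / np.linalg.norm(u)
        P = np.eye(2) - np.outer(u, u)  # projector orthogonal to the axis
        A += w * P
        b += w * P @ c
    return np.linalg.solve(A, b)

# three toy images whose major axes pass through (0.5, 0.0) deg
centroids = [np.array([1.0, 0.3]), np.array([0.2, -0.8]),
             np.array([-0.5, 0.6])]
axes      = [c - np.array([0.5, 0.0]) for c in centroids]
weights   = [120.0, 90.0, 150.0]      # e.g. image sizes (made up)
print(intersect_axes(centroids, axes, weights))   # -> [0.5, 0.0]
\end{verbatim}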
To simulate the properties and detection characteristics of the
HEGRA IACT system, detailed Monte-Carlo simulations are available,
using either the ALTAI \cite{altai} or the CORSIKA \cite{corsika}
air shower generator, followed by
a detailed simulation of the Cherenkov emission and propagation and
of the detector. These simulations include details
such as the pulse shapes of the input signals to the pixel trigger
discriminators, or the signal recording using the Flash-ADC system
\cite{telsimu1,telsimu2}. In the following, primarily the ALTAI
generator and the detector simulation \cite{telsimu1} were used.
Samples of simulated showers were available for zenith angles of
$0^\circ$, $20^\circ$, $30^\circ$, and $45^\circ$. Distributions at
intermediate angles were obtained by suitably scaling and/or interpolating
the distributions.
\section{The Mrk501 data set}
The extragalactic VHE $\gamma$-ray source Mrk 501
\cite{whipple_501_initial,hegra_501_initial} showed in 1997 significant
activity, with peak flux levels reaching up to 10 times the
flux of the Crab nebula (see \cite{501_rome} for a summary
of experimental results, first HEGRA results are given in \cite{501_paper}).
The telescopes of
the HEGRA IACT system were directed towards Mrk 501 for about
140~h, accumulating a total of about 30000 $\gamma$-ray events
at zenith angles between $10^\circ$ and $45^\circ$. Mrk 501
was typically positioned $0.5^\circ$ off the optical axis of the
telescope,
with the sign of the displacement varying every 20~min. In this
mode, cosmic-ray background can be determined by counting events
reconstructed in an equivalent region displaced from the
optical axis by the same amount, but opposite in direction to the
source region; dedicated off-source runs are no longer required,
effectively doubling the net on-source time.
Given the
angular resolution of about $0.1^\circ$, the separation by
$1^\circ$ of the on-source and off-source regions is fully
sufficient. The relatively large field of view of the cameras
ensures, on the other hand, that images are reliably reconstructed
even with a source displaced from the center of the camera.
The Mrk 501 $\gamma$-ray data \cite{501_paper}
have provided the basis for a number of
systematic studies of the properties of the HEGRA telescopes, and of the
characteristics of $\gamma$-ray induced air showers (see, e.g.,
\cite{wh_kruger}).
For the following analysis, the data set was cleaned by rejecting
runs with poor or questionable weather conditions, with hardware
problems, or with significant deviations of the trigger rates from
typical values. A subset of events was selected where
at least three telescopes had triggered, and had provided useful
images for the reconstruction of the shower geometry.
Fig.~\ref{fig_theta2} shows the distribution of reconstructed
shower axes in the angle $\theta$ relative to the direction towards
Mrk 501. A cut $\theta^2 < 0.05 (^\circ)^2$ was applied to enhance
the $\gamma$-ray content of the sample.
\begin{figure}[htb]
\begin{center}
\mbox{
\epsfxsize7.0cm
\epsffile{dts.ps}}
\end{center}
\caption
{Distribution in the square of the angle $\theta$ between the
reconstructed shower axis and the direction
to the source, for events with at least three triggered telescopes.
No cuts on image shapes are applied. The dashed line shows the
distribution for the background region.}
\label{fig_theta2}
\end{figure}
To further reduce the cosmic-ray background, a loose cut on the
{\em mean scaled width} was used. The distributions in the
{\em mean scaled width} are shown in Fig.~\ref{fig_width}; events
were selected requiring a value below 1.25; this cut accepts
virtually all $\gamma$-rays.
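The construction of the {\em mean scaled width} can be sketched in a few lines; in the code below, the expected-width table, which in the real analysis comes from simulated $\gamma$-ray images, is mocked by a simple placeholder function, and the cut value of 1.25 is the one quoted above.
\begin{verbatim}
# Mean scaled width (toy sketch; w_exp is a mock lookup table).
import numpy as np

def mean_scaled_width(widths, sizes, core_dists, w_exp):
    """Average of measured/expected image widths over the telescopes."""
    scaled = [w / w_exp(s, r) for w, s, r in zip(widths, sizes, core_dists)]
    return float(np.mean(scaled))

w_exp = lambda size, r: 0.10 + 4e-4 * r + 0.01 * np.log10(size)  # mock
msw = mean_scaled_width([0.12, 0.14, 0.11], [150, 220, 90],
                        [60, 110, 140], w_exp)
print(msw, "gamma-ray candidate:", msw < 1.25)
\end{verbatim}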
\begin{figure}[htb]
\begin{center}
\mbox{
\epsfxsize7.0cm
\epsffile{width.eps}}
\end{center}
\caption
{Distribution in the {\em mean scaled width} for $\gamma$-ray
showers (full line) after statistical subtraction of cosmic rays based on
the off-source region, and for cosmic rays (dashed).
The vertical line indicates the cut used to select $\gamma$-ray
candidates.}
\label{fig_width}
\end{figure}
To ensure that the core location of the events is well reconstructed,
the sample was further restricted to events with a core location
within 200~m from the center of the array (Fig.~\ref{fig_core});
in addition, events with $y_{core} > 100$~m were rejected,
corresponding to the area near the fifth telescope currently
not included in the system.
\begin{figure}[htb]
\begin{center}
\mbox{
\epsfxsize8.0cm
\epsffile{coreloc.eps}}
\end{center}
\caption
{Distribution of the core locations of events, after the cuts to
enhance the fraction of $\gamma$-rays. Also indicated are the
selection region and the telescope locations.}
\label{fig_core}
\end{figure}
After these cuts, a sample of 11874 on-source events remained, including
a background of 1543 cosmic-ray events, as estimated using the equal-sized
off-source region.
For such a sample of events at TeV energies,
the core location is measured with a
precision of about 6~m to 7~m for events with cores within a
distance up to 100~m from the central telescope; for larger
distances, the resolution degrades gradually, due to
the smaller angles between the different views,
and the reduced image {\em size} (see Fig.~\ref{fig_coreres}).
\begin{figure}[htb]
\begin{center}
\mbox{
\epsfxsize7.0cm
\epsffile{res.ps}}
\end{center}
\caption
{Resolution in the core position as a function of the distance
between the shower core and the central telescope, as determined
from Monte Carlo simulations of $\gamma$-ray showers with
energies between 1 and 2 TeV. The resolution is defined by
fitting a Gaussian to the distribution of differences between the true and
reconstructed coordinates of the shower impact point, projected
onto the $x$ and $y$ axes of the coordinate system. Due to slight
non-Gaussian tails, the rms widths of the distributions are about
20\% larger.}
\label{fig_coreres}
\end{figure}
\section{The shape of the Cherenkov light pool for $\gamma$-ray
events}
Using the technique described in the introduction, the intensity
distribution in the Cherenkov light pool can now simply be traced
by selecting events with the shower core at a given distance $r_i$ from
a `reference'
telescope $i$ and with a fixed image {\em size} $a_i$, and plotting the
mean amplitude $a_j$ of telescope $j$ as a function of $r_j$.
However, in this simplest form, the procedure is not very practical,
given the small sample of events remaining after such additional
cuts. To be able to use a larger sample of events, one has to take the following steps (a code sketch follows the list):
\begin{itemize}
\item select events with $a_i$ in a certain range, $a_{min} < a_i
< a_{max}$, and plot $a_j/a_i$ vs $r_j$, assuming that the shape of
the light pool does not change rapidly with energy, and that one
can average over a certain energy range
\item repeat the measurement of $a_j(r_j)/a_i$ for different (small) bins
in $r_i$, and combine these measurements after normalizing the distributions
at some fixed distance
\item combine the results obtained for different pairs of telescopes $i,j$.
\end{itemize}
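A minimal sketch of this estimator is given below; the on/off background subtraction and the combination over $r_i$ bins and telescope pairs are omitted for brevity, and the variable names and toy inputs are illustrative.
\begin{verbatim}
# Light-pool profile a_j/a_i vs. r_j, normalized near 100 m (toy sketch).
import numpy as np

def light_pool_profile(a_i, a_j, r_j, a_band=(100.0, 200.0)):
    r_bins = np.arange(0.0, 220.0, 20.0)          # core-distance bins [m]
    sel = (a_i >= a_band[0]) & (a_i < a_band[1])  # fixed reference size
    ratio, r = a_j[sel] / a_i[sel], r_j[sel]
    idx = np.digitize(r, r_bins) - 1
    prof = np.array([ratio[idx == k].mean() if np.any(idx == k) else np.nan
                     for k in range(len(r_bins) - 1)])
    centers = 0.5 * (r_bins[:-1] + r_bins[1:])
    return centers, prof / prof[np.argmin(np.abs(centers - 100.0))]

rng = np.random.default_rng(1)
a_i = rng.uniform(50.0, 400.0, 5000)              # toy reference sizes
r_j = rng.uniform(0.0, 200.0, 5000)               # toy core distances [m]
a_j = a_i * np.exp(-np.clip(r_j - 120.0, 0.0, None) / 40.0)  # toy pool
print(np.round(light_pool_profile(a_i, a_j, r_j)[1], 2))
\end{verbatim}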
Care has to be taken not to introduce a bias due to the trigger
condition. For example, one has to ensure that the selection
criterion of at least three triggered telescopes is fulfilled regardless
of whether telescope $j$ has triggered or not, otherwise the selection
might enforce a minimum image {\em size} in telescope $j$.
To avoid truncation of images by the border of the camera, only images
with a maximum distance of $1.5^\circ$ between the image centroid and
the camera center were included, leaving a $0.6^\circ$ margin to
the edge of the field of view. Since
the image of the source is offset by $0.5^\circ$ from the camera
center, a maximum distance of $2.0^\circ$ is possible between the source
image and the centroid of the shower image.
Even after these selections, the comparison between data and shower models
is not completely straightforward. One should not, e.g., simply compare
data to the predicted photon flux at ground level since
\begin{itemize}
\item as is well known, the radial dependence
of the density of Cherenkov light depends on the solid angle over which
the light is collected, i.e., on the field of view of the camera
\item the experimental resolution in the
reconstruction of the shower core position causes a
certain smearing, which is visible in particular near the break
in the light distribution
at the Cherenkov radius
\item the selection of image pixels using the tail cuts results in a
certain loss of photons; this loss becomes more significant the lower
the image intensity and the more diffuse the image.
\end{itemize}
While the distortion in the measured radial distribution of Cherenkov
light due to the latter two effects is relatively modest (see
Fig.~\ref{fig_pool}), a detailed
comparison with Monte Carlo should take these effects into account by
processing Monte-Carlo generated events using the same procedure as
real data, i.e., by plotting the distance to the reconstructed core
position rather than the true core position, and by applying the same
tail cuts etc.
\begin{figure}[htb]
\begin{center}
\mbox{
\epsfxsize11.0cm
\epsffile{mc_final.eps}}
\end{center}
\caption
{Radial distribution of Cherenkov light for TeV $\gamma$-ray
showers, for unrestricted aperture of the photon detector (full line),
for a $2^\circ$ aperture (dashed), and
including the full camera simulation and image processing (shaded).
The curves are normalized at $r \approx $100~m.}
\label{fig_pool}
\end{figure}
For a first comparison between data and simulation,
showers near the zenith (zenith angles between
$10^\circ$ and $15^\circ$) were selected.
The range of distances $r_i$ from the shower core
to the reference telescope was restricted to the plateau region
between 50~m and 120~m. Smaller
distances were not used because of the large fluctuations of image
{\em size} close to the shower core, and larger distances were excluded
because of the relatively steep variation of light yield with
distance. The showers were further required to have an amplitude in the `reference'
telescope $i$ between 100 and 200 photoelectrons, corresponding to
a mean energy of about 1.3~TeV.
Contamination of the Mrk 501 on-source data sample by cosmic
rays was subtracted using an off-source region displaced from
the optical axis by the same amount as the source, but in
the opposite direction. The measured radial distribution
(Fig.~\ref{fig_dat2}(a))
shows the expected features: a relatively flat plateau out to distances
of 120~m, and a rapid decrease in light yield for larger distances.
The errors given in the Figure are purely statistical. To estimate the
influence of systematic errors, one can look at the consistency of
the data for different ranges in distance $r_i$ to the `reference'
telescope, one can compare results for different telescope combinations,
and one can study the dependence on the cuts applied. Usually,
the different data sets were consistent to better than $\pm 0.05$ units;
systematic effects certainly do not exceed a level of $\pm 0.1$ units.
Within these
errors, the measured distribution is reasonably well reproduced
by the Monte-Carlo
simulations.
\begin{figure}[p]
\begin{center}
\mbox{
\epsfysize18.0cm
\epsffile{reng1.eps}}
\end{center}
\caption
{Light yield as a function of core distance, for image {\em size} in
the reference telescope between 100 and 200 photoelectrons (a),
200 and 400 photoelectrons (b), and 400 to 800 photoelectrons (c).
Events were selected
with a distance range between 50~m and 120~m from the reference telescope,
for zenith angles between $10^\circ$ and $15^\circ$.
The shaded bands indicate the Monte-Carlo results.
The distributions are normalized at $r \approx 100$~m. Only
statistical errors are shown.}
\label{fig_dat2}
\end{figure}
\begin{figure}[p]
\begin{center}
\mbox{
\epsfysize20.0cm
\epsffile{rall1.eps}}
\end{center}
\caption
{Light yield as a function of core distance, for zenith angles between
$10^\circ$ and $15^\circ$ (a), $15^\circ$ and $25^\circ$ (b), $25^\circ$ and
$35^\circ$ (c), and $35^\circ$ and $45^\circ$ (d). Events were selected
with a distance range between 50~m and 120~m from the reference telescope,
and an image {\em size} between 100 and 200 photoelectrons in the reference
telescope.
The shaded bands indicate the Monte-Carlo results.
The distributions are normalized at $r \approx 100$~m.
Only statistical errors are shown.}
\label{fig_dat3}
\end{figure}
Shower models predict that the distribution
of light intensity varies (slowly) with the shower
energy and with the zenith angle. Fig.~\ref{fig_dat2} compares the
distributions obtained for different {\em size} ranges $a_i$ of
100 to 200, 200 to 400, and 400 to 800 photoelectrons at distances
between 50~m and 120~m, corresponding
to mean shower energies of about 1.3, 2.5, and 4.5 TeV, respectively.
We note that the intensity close to the shower core increases with
increasing energy. This component of the Cherenkov light is generated
by penetrating particles near the shower core. Their number grows
rapidly with increasing shower energy and the correspondingly decreasing
height of the shower maximum. The increase in the mean light intensity
at small distances from the shower core is primarily caused by
long tails distribution of image {\em sizes} towards large {\em size}; the
median {\em size} is more or less constant.
The observed trends are well reproduced by the
Monte-Carlo simulations.
The dependence on zenith angle is
illustrated in Fig.~\ref{fig_dat3}, where zenith angles between
$10^\circ$ and $15^\circ$, $15^\circ$ and $25^\circ$, $25^\circ$ and
$35^\circ$, and $35^\circ$ and $45^\circ$ are compared. Events were
again selected for an image {\em size} in the `reference' telescope
between 100 and 200 photoelectrons, in a distance range of 50~m to
120~m \footnote{Core
distance is always measured in the plane perpendicular to the shower
axis}. The corresponding
mean shower energies for the four ranges in zenith angle are about
1.3~TeV, 1.5~TeV, 2~TeV, and 3~TeV.
For increasing zenith angles, the distribution of Cherenkov light
flattens for small radii, and the diameter of the light pool
increases. Both effects are expected, since for larger zenith
angles the distance between the telescope and the shower maximum
grows, reducing the number of penetrating particles, and resulting
in a larger Cherenkov radius. The simulations properly account for
this behaviour.
\begin{figure}[tb]
\begin{center}
\mbox{
\epsfxsize7.0cm
\epsffile{rms.eps}}
\end{center}
\caption
{Relative variation in the {\em size} ratio $a_j/a_i$ as a function
of $r_j$, for $r_i$ in the range 50~m to 120~m, and for image {\em size}
in the `reference' telescope between 100 and 200 photoelectrons.
Full circles refer to zenith angles between $10^\circ$ and $15^\circ$,
open circles to zenith angles between $25^\circ$ and $35^\circ$.}
\label{fig_rms}
\end{figure}
It is also of some interest to consider the fluctuations of
the image {\em size} ratio, $\Delta(a_j/a_i)$.
Fig.~\ref{fig_rms} shows the relative rms fluctuation in the
{\em size} ratio, as a function of $r_j$, for small ($10^\circ$ to
$15^\circ$) and for larger ($25^\circ$ to $35^\circ$) zenith
angles. The fluctuations are minimal near the Cherenkov radius;
they increase for larger distances, primarily due to the smaller
light yield and hence larger relative fluctuations in the number
of photoelectrons. In particular for the small zenith angles,
the fluctuations also increase for small radii, reflecting the
large fluctuations associated with the penetrating tail of the
air showers. For larger zenith angles, this effect is much reduced,
since now all shower particles are absorbed well above the telescopes;
more detailed studies show that already zenith angles of $20^\circ$
make a significant difference.
\section{Summary}
The stereoscopic observation of $\gamma$-ray induced air showers
with the HEGRA Cherenkov telescopes allowed for the first time
the measurement of the light distribution in the Cherenkov light
pool at TeV energies, providing a consistency check of one of the
key inputs for the calculation of shower energies based on the
intensity of the Cherenkov images. The light distribution shows a
characteristic variation with shower energy and with zenith angle.
Data are well reproduced by the Monte-Carlo
simulations.
\section*{Acknowledgements}
The support of the German Ministry for Research
and Technology BMBF and of the Spanish Research Council
CICYT is gratefully acknowledged. We thank the Instituto
de Astrofisica de Canarias for the use of the site and
for providing excellent working conditions. We gratefully
acknowledge the technical support staff of Heidelberg,
Kiel, Munich, and Yerevan.
\section{Introduction}
\label{sec:introduction}
A plethora of observations has confirmed the standard $\Lambda$CDM framework as the most economical and successful model describing our current universe.
This simple picture (pressureless dark matter, baryons and a cosmological constant representing the vacuum energy) has been shown to provide an excellent fit to cosmological data.
However, there are a number of inconsistencies that persist and, instead of diluting with improved precision measurements, gain significance~\cite{Freedman:2017yms,DiValentino:2020zio,DiValentino:2020vvd,DiValentino:2020srs,Freedman:2021ahq,DiValentino:2021izs,Schoneberg:2021qvd,Nunes:2021ipq,Perivolaropoulos:2021jda,Shah:2021onj}.
The most exciting (i.e.\ probably not due to systematics) and most statistically significant ($4-6\sigma$) tension in the literature is the so-called Hubble constant tension, which refers to the discrepancy between cosmological predictions and low redshift estimates of $H_0$~\cite{Verde:2019ivm,Riess:2019qba,DiValentino:2020vnx}.
Within the $\Lambda$CDM scenario, Cosmic Microwave Background (CMB) measurements from the Planck satellite provide a value of $H_0=67.36\pm 0.54$~km s$^{-1}$ Mpc$^{-1}$ at 68\%~CL~\cite{Planck:2018vyg}.
Near universe, local measurements of $H_0$, using the cosmic distance ladder calibration of Type Ia Supernovae with Cepheids, as those carried out by the SH0ES team, provide a measurement of the Hubble constant $H_0=73.2\pm 1.3$~km s$^{-1}$ Mpc$^{-1}$ at 68$\%$~CL~\cite{Riess:2020fzl}.
This problematic $\sim 4\sigma$ discrepancy is further aggravated when considering other late-time estimates of $H_0$.
For instance, measurements from the Megamaser Cosmology Project~\cite{Pesce:2020xfe}, or those exploiting Surface Brightness Fluctuations~\cite{Blakeslee:2021rqi} only exacerbate this tension~\footnote{%
Other estimates are unable to discriminate between nearby-universe and CMB measurements. These include results from the Tip of the Red Giant Branch~\cite{Freedman:2021ahq},
from the astrophysical strong lensing observations~\cite{Birrer:2020tax}
or from gravitational wave events~\cite{Abbott:2017xzu}.}.
As previously mentioned, the SH0ES collaboration exploits the cosmic distance ladder calibration of Type Ia Supernovae, which means that these observations do not provide a direct extraction of the Hubble parameter.
More concretely, the SH0ES team measures the absolute peak magnitude $M_B$ of Type Ia Supernovae \emph{standard candles} and then translates these measurements into an estimate of $H_0$ by means of the magnitude-redshift relation of the Pantheon Type Ia Supernovae sample~\cite{Scolnic:2017caz}.
Therefore, strictly speaking, the SH0ES team does not directly extract the value of $H_0$, and there have been arguments in the literature aiming to translate the Hubble constant tension into a Type Ia Supernovae absolute magnitude tension $M_B$~\cite{Camarena:2019rmj,Efstathiou:2021ocp,Camarena:2021jlr}.
In this regard, late-time exotic cosmologies have been questioned as possible solutions to the Hubble constant tension~\cite{Efstathiou:2021ocp,Camarena:2021jlr}, since within these scenarios, it is possible that the supernova absolute magnitude $M_B$ used to derive the low redshift estimate of $H_0$ is no longer compatible with the $M_B$ needed to fit supernovae, BAO and CMB data.
A number of studies have prescribed to use in the statistical analyses a prior on the intrinsic magnitude rather than on the Hubble constant $H_0$~\cite{Camarena:2021jlr,Schoneberg:2021qvd}.
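For reference, the two quantities are tied by the standard magnitude-redshift relation of Type Ia Supernovae,
\begin{equation}
m_B(z) = M_B + 5\log_{10}\left[\frac{d_L(z)}{10\,\mathrm{pc}}\right] \simeq M_B + 5\log_{10}\left[\frac{c\,z}{H_0\cdot 1\,\mathrm{Mpc}}\right] + 25\,,
\end{equation}
where the last step uses the leading-order low-redshift limit $d_L(z)\simeq cz/H_0$: a shift in the calibration of $M_B$ therefore maps directly onto a shift in the inferred value of $H_0$.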
Following the very same logic of these previous analyses, we reassess here the potential of interacting dark matter-dark energy cosmologies~\cite{Amendola:1999er}
(see \cite{Kumar:2016zpg, Murgia:2016ccp, Kumar:2017dnp, DiValentino:2017iww, Yang:2018ubt, Yang:2018euj, Yang:2019uzo, Kumar:2019wfs, Pan:2019gop, Pan:2019jqh, DiValentino:2019ffd, DiValentino:2019jae, DiValentino:2020leo, DiValentino:2020kpf, Gomez-Valent:2020mqn, Yang:2019uog, Lucca:2020zjb, Martinelli:2019dau, Yang:2020uga, Yao:2020hkw, Pan:2020bur, DiValentino:2020vnx, Yao:2020pji, Amirhashchi:2020qep, Yang:2021hxg, Gao:2021xnk, Lucca:2021dxo, Kumar:2021eev,Yang:2021oxc,Lucca:2021eqy,Halder:2021jiv}
and references therein)
in resolving the Hubble constant and/or the intrinsic magnitude $M_B$ tensions, by demonstrating explicitly from a full analysis that the results are completely independent of whether a prior on $M_B$ or $H_0$ is assumed (see also the recent~\cite{Nunes:2021zzi}).
\section{Theoretical framework}
\label{sec:theory}
We adopt a flat cosmological model described by the Friedmann-Lema\^{i}tre-Robertson-Walker metric.
A possible parameterization of a dark matter-dark energy interaction is provided by the following expressions~\cite{Valiviita:2008iv,Gavela:2009cy}:
\begin{eqnarray}
\label{eq:conservDM}
\nabla_\mu T^\mu_{(dm)\nu} &=& Q \,u_{\nu}^{(dm)}/a~, \\
\label{eq:conservDE}
\nabla_\mu T^\mu_{(de)\nu} &=&-Q \,u_{\nu}^{(dm)}/a~.
\end{eqnarray}
In the equations above, $T^\mu_{(dm)\nu}$ and $T^\mu_{(de)\nu}$ represent the energy-momentum tensors for the dark matter and dark energy components respectively, the function $Q$ is the interaction rate between the two dark components, and $u_{\nu}^{(dm)}$ represents the dark matter four-velocity.
In what follows we shall restrict ourselves to the case in which the
interaction rate is proportional to the dark energy density $\rho_{de}$~\cite{Valiviita:2008iv,Gavela:2009cy}:
\begin{equation}
Q=\ensuremath{\delta{}_{DMDE}}\mathcal{H} \rho_{de}~,
\label{rate}
\end{equation}
where $\ensuremath{\delta{}_{DMDE}}$ is a dimensionless coupling parameter and
$\mathcal{H}=\dot{a}/a$~\footnote{The dot indicates a derivative with respect to conformal time $d\tau=dt/a$.}.
The background evolution equations in the coupled model considered
here read~\cite{Gavela:2010tm}
\begin{eqnarray}
\label{eq:backDM}
\dot{{\rho}}_{dm}+3{\mathcal H}{\rho}_{dm}
&=&
\ensuremath{\delta{}_{DMDE}}{\mathcal H}{\rho}_{de}~,
\\
\label{eq:backDE}
\dot{{\rho}}_{de}+3{\mathcal H}(1+\ensuremath{w_{\rm 0,fld}}){\rho}_{de}
&=&
-\ensuremath{\delta{}_{DMDE}}{\mathcal H}{\rho}_{de}~.
\end{eqnarray}
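As an illustration of the background dynamics (and not part of our analysis pipeline), the two equations above can be integrated directly once rewritten in terms of $N=\ln a$, which removes the explicit factor of $\mathcal{H}$. A minimal Python sketch, with illustrative rather than best-fit parameter values, reads:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

delta = -0.2  # coupling delta_DMDE (illustrative; negative as in Models A/B)
w0 = -0.999   # dark energy equation of state w_0,fld

def rhs(lna, rho):
    rho_dm, rho_de = rho
    # Background equations divided by the conformal Hubble rate,
    # with N = ln(a) as the integration variable:
    drho_dm = -3.0 * rho_dm + delta * rho_de
    drho_de = -(3.0 * (1.0 + w0) + delta) * rho_de
    return [drho_dm, drho_de]

# Integrate backwards from today (N = 0; densities in rho_crit,0 units)
# up to z = 1000:
sol = solve_ivp(rhs, (0.0, np.log(1.0 / 1001.0)), [0.26, 0.69],
                dense_output=True)
rho_dm_z10 = sol.sol(np.log(1.0 / 11.0))[0]  # dark matter density at z = 10
\end{verbatim}
For negative $\ensuremath{\delta{}_{DMDE}}$ this makes the energy flow from dark matter to dark energy explicit: the dark matter density in the past is enhanced with respect to the uncoupled case.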
The evolution of the dark matter and dark energy density perturbations and velocity divergence fields is described in \cite{DiValentino:2019jae} and references therein.
It has been shown in the literature that this model is free of instabilities
if the sign of the coupling $\ensuremath{\delta{}_{DMDE}}$ and the sign of $(1+\ensuremath{w_{\rm 0,fld}})$ are opposite,
where $\ensuremath{w_{\rm 0,fld}}$ refers to the dark energy equation of state~\cite{He:2008si,Gavela:2009cy}.
In order to satisfy such stability conditions, we explore three possible scenarios, all of them with a redshift-independent equation of state.
In Model A, the equation of state $\ensuremath{w_{\rm 0,fld}}$ is fixed to $-0.999$.
Consequently, since $(1+\ensuremath{w_{\rm 0,fld}}) >0$, in order to ensure an instability-free perturbation evolution, the dark matter-dark energy coupling $\ensuremath{\delta{}_{DMDE}}$ is allowed to vary in a negative range.
In Model B, $\ensuremath{w_{\rm 0,fld}}$ is allowed to vary but we ensure that the condition $(1+\ensuremath{w_{\rm 0,fld}})>0$ is always satisfied.
Therefore, the coupling parameter $\ensuremath{\delta{}_{DMDE}}$ is also negative.
In Model C, instead, the dark energy equation of state is phantom ($\ensuremath{w_{\rm 0,fld}}<-1$), therefore the dark matter-dark energy coupling is taken as positive to avoid early-time instabilities.
We shall present separately the cosmological constraints for these three models, together with those corresponding to the canonical $\Lambda$CDM.
\begin{table}[t]
\centering
\begin{tabular}{c|c|c}
Model & Prior $\ensuremath{w_{\rm 0,fld}}$ & Prior $\ensuremath{\delta{}_{DMDE}}$ \\
\hline
A & -0.999 & [-1.0, 0.0]\\
B & [-0.999, -0.333] & [-1.0, 0.0] \\
C & [-3, -1.001]& [0.0, 1.0] \\
\end{tabular}
\caption{Priors on $\ensuremath{w_{\rm 0,fld}}$ and $\ensuremath{\delta{}_{DMDE}}$ in Models A, B, and C; in Model A the equation of state is fixed to $\ensuremath{w_{\rm 0,fld}}=-0.999$.}
\label{tab:priors}
\end{table}
\section{Datasets and Methodology}
\label{sec:data}
In this Section, we present the data sets and the methodology employed to obtain observational constraints on the model parameters by performing Bayesian Markov Chain Monte Carlo (MCMC) analyses.
In order to constrain the parameters, we use the following data sets:
\begin{itemize}
\item The Cosmic Microwave Background (CMB) temperature and polarization power spectra from the final release of Planck 2018, in particular we adopt the plikTTTEEE+lowl+lowE likelihood \cite{Aghanim:2018eyx,Aghanim:2019ame}, plus the CMB lensing reconstruction from the four-point correlation function~\cite{Aghanim:2018oex}.
\item Type Ia Supernovae distance moduli measurements from the \textit{Pantheon} sample~\cite{Scolnic:2017caz}. These measurements constrain the uncalibrated luminosity distance $H_0d_L(z)$, or in other words the slope of the late-time expansion rate (which in turn constrains the current matter energy density, $\Omega_{\rm 0,m}$). We refer to this dataset as \textit{SN}.
\item Baryon Acoustic Oscillations (BAO) distance and expansion rate measurements from the 6dFGS~\cite{Beutler:2011hx}, SDSS-DR7 MGS~\cite{Ross:2014qpa}, BOSS DR12~\cite{Alam:2016hwk} galaxy surveys,
as well as from the eBOSS DR14 Lyman-$\alpha$ (Ly$\alpha$) absorption~\cite{Agathe:2019vsu} and Ly$\alpha$-quasars cross-correlation~\cite{Blomqvist:2019rah}.
These consist of isotropic BAO measurements of $D_V(z)/r_d$
(with $D_V(z)$ and $r_d$ the spherically averaged volume distance and sound horizon at baryon drag, respectively)
for 6dFGS and MGS, and anisotropic BAO measurements of $D_M(z)/r_d$ and $D_H(z)/r_d$
(with $D_M(z)$ the comoving angular diameter distance and $D_H(z)=c/H(z)$ the radial distance)
for BOSS DR12, eBOSS DR14 Ly$\alpha$, and eBOSS DR14 Ly$\alpha$-quasars cross-correlation.
\item A Gaussian prior on $M_B= -19.244 \pm 0.037$~mag~\cite{Camarena:2021jlr}, corresponding to the SN measurements from SH0ES.
\item A Gaussian prior on the Hubble constant $H_0=73.2\pm 1.3$~km s$^{-1}$ Mpc$^{-1}$ in
agreement with the measurement obtained by the
SH0ES collaboration in~\cite{Riess:2020fzl}.
\end{itemize}
For the sake of brevity, data combinations are indicated as CMB+SN+BAO (CSB), CMB+SN+BAO+$H_0$ (CSBH) and CMB+SN+BAO+$M_B$ (CSBM).
Cosmological observables are computed with \texttt{CLASS}~\cite{Blas:2011rf,Lesgourgues:2011re}.
In order to derive bounds on the proposed scenarios, we modify the efficient and well-known cosmological package \texttt{MontePython}~\cite{Brinckmann:2018cvx}, supporting the Planck 2018 likelihood~\cite{Planck:2019nip}.
We make use of CalPriorSNIa, a module for \texttt{MontePython}, publicly available at \url{https://github.com/valerio-marra/CalPriorSNIa}, that implements an effective calibration prior on the absolute magnitude of Type Ia Supernovae~\cite{Camarena:2019moy,Camarena:2021jlr}.
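Schematically, each of the two priors enters the analysis as an extra Gaussian term in the total $\chi^2$. A minimal sketch of this contribution, evaluated at a hypothetical model prediction, reads:
\begin{verbatim}
def chi2_prior(value, mean, sigma):
    # Gaussian prior contribution to the total chi^2
    return ((value - mean) / sigma) ** 2

# Hypothetical model predictions, combined with the priors quoted above:
chi2_H0 = chi2_prior(69.7, 73.2, 1.3)          # H_0 prior (CSBH runs)
chi2_MB = chi2_prior(-19.38, -19.244, 0.037)   # M_B prior (CSBM runs)
\end{verbatim}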
\section{Main results and discussion}
\label{sec:results}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.7\textwidth]{H0.pdf}
\caption{Posterior distribution of the Hubble parameter in the $\Lambda$CDM model (black) and in interacting cosmologies, with priors on the parameters as given in Tab.~\ref{tab:priors}.
We show the constraints obtained within model A (green), model B (red) and model C (blue)
for the CMB+SN+BAO data combination (solid lines),
CMB+SN+BAO+$H_0$ (dashed lines)
and CMB+SN+BAO+$M_B$ (dotted lines).}
\label{fig:h0}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=\textwidth]{0_PlSB-vs-0_PlSBH-vs-0_PlSBM_triangle.pdf}
\caption{68\% CL and 95\% CL allowed contours and one-dimensional posterior probabilities on a selection of cosmological parameters within the canonical $\Lambda$CDM picture, considering three data combinations: CMB+SN+BAO (red), CMB+SN+BAO+$H_0$ (blue) and CMB+SN+BAO+$M_B$ (green).}
\label{fig:triangle_LCDM}
\end{center}
\end{figure*}
\begin{table}[t]
\centering
\begin{tabular}{|l|c|c|c|}
\hline
Parameter & CSB & CSBH & CSBM \\
\hline
$\omega{}_{cdm }$ & $0.1193\pm0.0010$ & $0.1183\pm0.0009$ & $0.1183_{-0.0009}^{+0.0008}$ \\
$\ensuremath{\Omega_{\rm 0,fld}}$ & $0.6889_{-0.0061}^{+0.0057}$ & $0.6958_{-0.0050}^{+0.0056}$ & $0.6956_{-0.0049}^{+0.0057}$ \\
$\Omega_{\rm 0,m}$ & $0.3111_{-0.0057}^{+0.0061}$ & $0.3042_{-0.0056}^{+0.0050}$ & $0.3044_{-0.0057}^{+0.0049}$ \\
$M_B$ & $-19.42\pm0.01$ & $-19.40\pm0.01$ & $-19.40\pm0.01$ \\
$H_0$ & $67.68_{-0.46}^{+0.41}$ & $68.21_{-0.41}^{+0.42}$ & $68.20_{-0.41}^{+0.41}$ \\
$\sigma_8$ & $0.8108_{-0.0058}^{+0.0061}$ & $0.8092_{-0.0065}^{+0.0060}$ & $0.8090_{-0.0059}^{+0.0064}$ \\
\hline
minimum $\chi^2$ & $3819.46$ & $3836.50$ & $3840.44$ \\
\hline
\end{tabular}
\caption{Mean values and 68\% CL errors on $\omega_{cdm }\equiv\Omega_{cdm} h^2$, the current dark energy density $\ensuremath{\Omega_{\rm 0,fld}}$, the current matter energy density $\Omega_{\rm 0,m}$, the Supernovae Ia intrinsic magnitude $M_B$, the Hubble constant $H_0$ and the clustering parameter $\sigma_8$ within the standard $\Lambda$CDM paradigm. We also report the minimum value of the $\chi^2$ function obtained for each of the data combinations.}
\label{tab:model_LCDM}
\end{table}
We start by discussing the results obtained within the canonical $\Lambda$CDM scenario. Table~\ref{tab:model_LCDM} presents the mean values and the $1\sigma$ errors on a number of different cosmological parameters.
Namely, we show the constraints on
$\omega_{cdm }\equiv\Omega_{0,cdm} h^2$,
the current dark energy density $\ensuremath{\Omega_{\rm 0,fld}}$,
the current matter energy density $\Omega_{\rm 0,m}$,
the Supernovae Ia intrinsic magnitude $M_B$,
the Hubble constant $H_0$ and the clustering parameter $\sigma_8$
arising from the three data combinations considered here and described above:
CMB+SN+BAO (CSB), CMB+SN+BAO+$H_0$ (CSBH), CMB+SN+BAO+$M_B$ (CSBM).
Interestingly, \emph{all} the parameters experience the very same shift regardless of whether the prior is adopted on the Hubble constant or on the intrinsic Supernovae Ia magnitude $M_B$.
The mean value of $H_0$ coincides for both the CSBH and the CSBM data combinations, as one can clearly see from the dashed and dotted black lines in Fig.~\ref{fig:h0}.
Figure~\ref{fig:triangle_LCDM} presents the two-dimensional allowed contours and the one-dimensional posterior probabilities on the parameters shown in Tab.~\ref{tab:model_LCDM}.
Notice that all the parameters are equally shifted when adding the prior on $H_0$ or on $M_B$, except for $\sigma_8$, which remains almost unchanged. Notice also that the value of the current matter density, $\Omega_{\rm 0,m}$, is smaller when a prior from the SH0ES measurements is considered:
due to the larger $H_0$ value that these measurements imply, and in order to keep the CMB peak structure unaltered, the value of $\Omega_{\rm 0,m}$ must decrease so that the product $\Omega_{\rm 0,m} h^2$ is barely shifted.
\begin{table}[t]
\centering
\begin{tabular}{|l|c|c|c|}
\hline
Parameter & CSB & CSBH & CSBM \\
\hline
$\omega{}_{cdm }$ & $0.107_{-0.005}^{+0.011}$ & $0.09\pm0.01$ & $0.096_{-0.009}^{+0.011}$ \\
$\ensuremath{\Omega_{\rm 0,fld}}$ & $0.723_{-0.028}^{+0.017}$ & $0.758_{-0.024}^{+0.026}$ & $0.754_{-0.028}^{+0.025}$ \\
$\Omega_{\rm 0,m}$ & $0.277_{-0.017}^{+0.028}$ & $0.242_{-0.026}^{+0.024}$ & $0.246_{-0.025}^{+0.028}$ \\
$\ensuremath{\delta{}_{DMDE}}$ & $-0.116_{-0.044}^{+0.100}$ & $-0.219_{-0.086}^{+0.083}$ & $-0.203_{-0.087}^{+0.093}$ \\
$M_B$ & $-19.40\pm0.02$ & $-19.38_{-0.01}^{+0.02}$ & $-19.37\pm0.02$ \\
$H_0$ & $68.59_{-0.79}^{+0.65}$ & $69.73_{-0.72}^{+0.71}$ & $69.67_{-0.85}^{+0.75}$ \\
$\sigma_8$ & $0.90_{-0.08}^{+0.04}$ & $1.01_{-0.11}^{+0.08}$ & $1.00_{-0.12}^{+0.07}$ \\
\hline
minimum $\chi^2$ & $3819.86$ & $3831.90$ & $3835.86$ \\
\hline
\end{tabular}
\caption{Mean values and 68\% CL errors on $\omega_{cdm }\equiv\Omega_{cdm} h^2$, the current dark energy density $\ensuremath{\Omega_{\rm 0,fld}}$, the current matter energy density $\Omega_{\rm 0,m}$, the dimensionless dark matter-dark energy coupling $\ensuremath{\delta{}_{DMDE}}$, the Supernovae Ia intrinsic magnitude $M_B$, the Hubble constant $H_0$ and the clustering parameter $\sigma_8$ within the interacting model A, see Tab.~\ref{tab:priors}. We also report the minimum value of the $\chi^2$ function obtained for each of the data combinations.}
\label{tab:model_A}
\end{table}
\begin{figure*}
\begin{center}
\includegraphics[width=\textwidth]{A_PlSB-vs-A_PlSBH-vs-A_PlSBM_triangle.pdf}
\caption{68\% CL and 95\% CL allowed contours and one-dimensional posterior probabilities on a selection of cosmological parameters within model A, considering three data combinations: CMB+SN+BAO (red), CMB+SN+BAO+$H_0$ (blue) and CMB+SN+BAO+$M_B$ (green).}
\label{fig:triangle_A}
\end{center}
\end{figure*}
We focus now on Model A, which refers to an interacting cosmology with $\ensuremath{w_{\rm 0,fld}}=-0.999$ and $\ensuremath{\delta{}_{DMDE}}<0$.
Table~\ref{tab:model_A} presents the mean values and the $1\sigma$ errors on the same cosmological parameters listed above, with the addition of the coupling parameter $\ensuremath{\delta{}_{DMDE}}$, for the same three data combinations already discussed.
Notice again that all the parameters are equally shifted to either smaller or larger values, regardless of whether the prior is adopted on $H_0$ or on $M_B$. In this case the shift in the Hubble parameter is larger than that observed within the $\Lambda$CDM model, as one can notice from the green curves depicted in
Fig.~\ref{fig:h0}.
Interestingly, we observe a $2\sigma$ indication in favor of a non-zero value of the coupling $\ensuremath{\delta{}_{DMDE}}$ when considering the CSBH and the CSBM data combinations.
Indeed, while the value of the minimum $\chi^2$ is almost equal to that obtained in the $\Lambda$CDM framework for the CSB data analyses, when adding either a prior on $H_0$ or on $M_B$,
the minimum $\chi^2$ value is \emph{smaller} than that obtained for the standard cosmological picture: therefore, the addition of a coupling \emph{improves} the overall fit.
Figure~\ref{fig:triangle_A} presents the two-dimensional allowed contours and the one-dimensional posterior probabilities obtained within Model A.
It can be noticed that the prior on the Hubble constant and on the intrinsic magnitude lead to the very same shift, and the main conclusion is therefore prior-independent:
there is a $\sim 2\sigma$ indication for a non-zero dark matter-dark energy coupling when considering either $H_0$ or $M_B$ measurements,
\emph{and} the value of the Hubble constant is considerably larger, alleviating the $H_0$ tension.
\begin{table}[t]
\centering
\begin{tabular}{|l|c|c|c|}
\hline
Parameter & CSB & CSBH & CSBM \\
\hline
$\omega{}_{cdm }$ & $0.077_{-0.014}^{+0.036}$ & $0.061_{-0.019}^{+0.034}$ & $0.065_{-0.017}^{+0.036}$ \\
$\ensuremath{\Omega_{\rm 0,fld}}$ & $0.785_{-0.081}^{+0.034}$ & $0.825_{-0.070}^{+0.045}$ & $0.818_{-0.075}^{+0.041}$ \\
$\Omega_{\rm 0,m}$ & $0.215_{-0.034}^{+0.081}$ & $0.174_{-0.044}^{+0.069}$ & $0.182_{-0.041}^{+0.075}$ \\
$\ensuremath{w_{\rm 0,fld}}$ & $-0.909_{-0.090}^{+0.026}$ & $-0.917_{-0.082}^{+0.026}$ & $-0.918_{-0.081}^{+0.026}$ \\
$\ensuremath{\delta{}_{DMDE}}$ & $-0.35_{-0.14}^{+0.26}$ & $-0.45_{-0.16}^{+0.22}$ & $-0.43_{-0.15}^{+0.24}$ \\
$M_B$ & $-19.41\pm0.02$ & $-19.38\pm0.02$ & $-19.38\pm0.02$ \\
$H_0$ & $68.28_{-0.85}^{+0.79}$ & $69.68_{-0.75}^{+0.71}$ & $69.57_{-0.76}^{+0.75}$ \\
$\sigma_8$ & $1.30_{-0.51}^{+0.01}$ & $1.60_{-0.76}^{+0.06}$ & $1.53_{-0.71}^{+0.03}$ \\
\hline
minimum $\chi^2$ & $ 3819.96$ & $3832.28$ & $3836.24$ \\
\hline
\end{tabular}
\caption{Mean values and 68\% CL errors on $\omega_{cdm }\equiv\Omega_{cdm} h^2$, the current dark energy density $\ensuremath{\Omega_{\rm 0,fld}}$, the current matter energy density $\Omega_{\rm 0,m}$, the dark energy equation of state $\ensuremath{w_{\rm 0,fld}}$,
the dimensionless dark matter-dark energy coupling $\ensuremath{\delta{}_{DMDE}}$, the Supernovae Ia intrinsic magnitude $M_B$, the Hubble constant $H_0$ and the clustering parameter $\sigma_8$ within the interacting model B, see Tab.~\ref{tab:priors}.
We also report the minimum value of the $\chi^2$ function obtained for each of the data combinations.}
\label{tab:model_B}
\end{table}
\begin{figure*}
\begin{center}
\includegraphics[width=\textwidth]{B_PlSB-vs-B_PlSBH-vs-B_PlSBM_triangle.pdf}
\caption{68\% CL and 95\% CL allowed contours and one-dimensional posterior probabilities on a selection of cosmological parameters within model B, considering three data combinations: CMB+SN+BAO (red), CMB+SN+BAO+$H_0$ (blue) and CMB+SN+BAO+$M_B$ (green).}
\label{fig:triangle_B}
\end{center}
\end{figure*}
Focusing now on Model B, which assumes a negative coupling $\ensuremath{\delta{}_{DMDE}}$ and a constant, but freely varying, dark energy equation of state $\ensuremath{w_{\rm 0,fld}}$ within the $\ensuremath{w_{\rm 0,fld}}>-1$ region,
we notice again the same shift in the cosmological parameters, regardless of whether the prior is imposed on the Hubble parameter ($H_0$) or on the Supernovae Ia intrinsic magnitude ($M_B$), as can be noticed from Tab.~\ref{tab:model_B}.
As in Model A, the value of $H_0$ in this interacting cosmology is larger than within the $\Lambda$CDM framework (see the red curves in Fig.~\ref{fig:h0}),
albeit slightly smaller than in Model A, due to the strong anti-correlation between $\ensuremath{w_{\rm 0,fld}}$ and $H_0$~\cite{DiValentino:2016hlg,DiValentino:2019jae}.
Consequently, a larger value of $\ensuremath{w_{\rm 0,fld}}>-1$ implies a lower value of $H_0$.
Nevertheless, a $2\sigma$ preference for a non-zero value of the dark matter-dark energy coupling is present also in this case, and also when the CSB dataset is considered:
for the three data combinations presented here, there is always a preference for a non-zero dark matter-dark energy coupling.
Notice that the minimum $\chi^2$ in Model B is smaller than that corresponding to the minimal $\Lambda$CDM framework, but slightly larger than that of Model A, which is nested in Model B. The differences between the minimum $\chi^2$ in Model A and Model B, however, are small
enough to be considered as numerical fluctuations. Since, as previously stated, $\ensuremath{w_{\rm 0,fld}}$ and $H_0$ are strongly anti-correlated, a more negative value of the dark energy equation of state (i.e.\ $\ensuremath{w_{\rm 0,fld}}=-0.999$ as in Model A, close to the prior limit) is preferred by both the CSBH and the CSBM data combinations.
In Fig.~\ref{fig:triangle_B} we depict the two-dimensional allowed contours and the one-dimensional posterior probabilities obtained for Model B.
From a comparison to Fig.~\ref{fig:triangle_LCDM} and also confronting the mean values of Tab.~\ref{tab:model_B} to those shown in Tab.~\ref{tab:model_LCDM} (and, to a minor extent, to those in Tab.~\ref{tab:model_A}),
one can notice that the value of $\ensuremath{\Omega_{\rm 0,fld}}$ is much larger.
The reason for this is related to the lower value for the present matter energy density $\Omega_{\rm 0,m}$ (the values are also shown in the tables), which is required within the interacting cosmologies when the dark matter-dark energy coupling is negative.
In the context of a universe with a negative dark coupling, indeed, there is an energy flow from dark matter to dark energy.
Consequently, the (dark) matter content in the past is higher than in the standard $\Lambda$CDM scenario and the amount of intrinsic (dark) matter needed today is lower, because of the extra contribution from the dark energy sector.
In a flat universe, this translates into a much higher value of $\ensuremath{\Omega_{\rm 0,fld}}$.
On the other hand, a lower value of $\Omega_{\rm 0,m}$ requires a larger value of the clustering parameter $\sigma_8$ in order to satisfy the overall normalization of the matter power spectrum. In any case, we find again that the addition of a prior on either $H_0$ or $M_B$ leads to exactly the very same shift for all the cosmological parameters.
Therefore, Model B also provides an excellent solution to the Hubble constant tension,
although at the expense of a very large $\sigma_8$.
\begin{table}[t]
\centering
\begin{tabular}{|l|c|c|c|}
\hline
Parameter & CSB & CSBH & CSBM \\
\hline
$\omega{}_{cdm }$ & $0.138_{-0.015}^{+0.008}$ & $0.137_{-0.016}^{+0.007}$ & $0.135_{-0.013}^{+0.008}$ \\
$\ensuremath{\Omega_{\rm 0,fld}}$ & $0.655_{-0.021}^{+0.032}$ & $0.671_{-0.018}^{+0.031}$ & $0.675_{-0.018}^{+0.027}$ \\
$\Omega_{\rm 0,m}$ & $0.345_{-0.032}^{+0.021}$ & $0.329_{-0.031}^{+0.018}$ & $0.325_{-0.027}^{+0.018}$ \\
$\ensuremath{w_{\rm 0,fld}}$ & $-1.087_{-0.042}^{+0.051}$ & $-1.131_{-0.044}^{+0.053}$ & $-1.117_{-0.044}^{+0.048}$ \\
$\ensuremath{\delta{}_{DMDE}}$ & $0.183_{-0.180}^{+0.061}$ & $0.173_{-0.170}^{+0.051}$ & $0.150_{-0.150}^{+0.051}$ \\
$M_B$ & $-19.41\pm0.02$ & $-19.38\pm0.02$ & $-19.37\pm0.02$ \\
$H_0$ & $68.29_{-0.91}^{+0.66}$ & $69.74_{-0.73}^{+0.75}$ & $69.67_{-0.77}^{+0.78}$ \\
$\sigma_8$ & $0.735_{-0.057}^{+0.045}$ & $0.748_{-0.041}^{+0.068}$ & $0.755_{-0.047}^{+0.051}$ \\
\hline
minimum $\chi^2$ & $3818.24$ & $3830.56$ & $3835.10$ \\
\hline
\end{tabular}
\caption{Mean values and 68\% CL errors on $\omega_{cdm }\equiv\Omega_{cdm} h^2$, the current dark energy density $\ensuremath{\Omega_{\rm 0,fld}}$, the current matter energy density $\Omega_{\rm 0,m}$, the dark energy equation of state $\ensuremath{w_{\rm 0,fld}}$,
the dimensionless dark matter-dark energy coupling $\ensuremath{\delta{}_{DMDE}}$, the Supernovae Ia intrinsic magnitude $M_B$, the Hubble constant $H_0$ and the clustering parameter $\sigma_8$ within the interacting model C, see Tab.~\ref{tab:priors}.
We also report the minimum value of the $\chi^2$ function obtained for each of the data combinations.}
\label{tab:model_C}
\end{table}
\begin{figure*}
\begin{center}
\includegraphics[width=\textwidth]{C_PlSB-vs-C_PlSBH-vs-C_PlSBM_triangle.pdf}
\caption{68\% CL and 95\% CL allowed contours and one-dimensional posterior probabilities on a selection of cosmological parameters within model C, considering three data combinations: CMB+SN+BAO (red), CMB+SN+BAO+$H_0$ (blue) and CMB+SN+BAO+$M_B$ (green).}
\label{fig:triangle_C}
\end{center}
\end{figure*}
Finally, Tab.~\ref{tab:model_C} shows the mean values and the $1\sigma$ errors on the usual cosmological parameters explored along this study, for Model C.
Notice that this model benefits from both its interacting nature and from the fact that $\ensuremath{w_{\rm 0,fld}}<-1$ and $\ensuremath{\delta{}_{DMDE}}>0$.
Both features of the dark energy sector have been shown to be excellent solutions to the Hubble constant problem.
As in the previous cases, the shift in the cosmological parameters induced by the addition of a prior is independent of its nature, i.e.\ it is independent of whether a prior on $H_0$ or $M_B$ is adopted.
Within this model, the value of the Hubble constant is naturally larger than within the $\Lambda$CDM model (see the blue lines in Fig.~\ref{fig:h0}),
regardless of the data sets assumed in the analyses.
Despite its phantom nature, as in this particular case $\ensuremath{w_{\rm 0,fld}}<-1$ to ensure an instability-free evolution of perturbations, Model C provides the \emph{best fit to all of the data combinations explored here, performing even better than the minimal $\Lambda$CDM picture},
as one can clearly notice from the last row of Tab.~\ref{tab:model_C}.
This fact makes Model C a very attractive cosmological scenario which can provide a solution to the long-standing $H_0$ tension. We must remember, however, that Model C has two more degrees of freedom than the standard $\Lambda$CDM paradigm.
Figure~\ref{fig:triangle_C} illustrates the two-dimensional allowed contours and the one-dimensional posterior probabilities obtained within Model C.
Notice that here the situation is just the opposite one of Model B: the value of $\ensuremath{\Omega_{\rm 0,fld}}$ is much smaller than in standard scenarios,
due to the larger value required for the present matter energy density $\Omega_{\rm 0,m}$ when the dark matter-dark energy coupling $\ensuremath{\delta{}_{DMDE}}>0$ and $\ensuremath{w_{\rm 0,fld}}<-1$.
This larger value of the present matter energy density also implies a lower value for the clustering parameter $\sigma_8$, in contrast to what was required within Model B.
\section{Final Remarks}
\label{sec:conclusions}
In this study we have reassessed the ability of interacting dark matter-dark energy cosmologies to alleviate the long-standing and highly significant Hubble constant tension.
Although these models have been shown in the past to provide an excellent solution to the discrepancy between local measurements and high-redshift, Cosmic Microwave Background estimates of $H_0$, recent works in the literature have questioned
their effectiveness, on the grounds that the SH0ES analysis does not directly extract the value of $H_0$.
We have therefore quantified the ability of interacting cosmologies to reduce the Hubble tension by means of two different priors in the cosmological analyses:
a prior on the Hubble constant and, separately, a prior on the Type Ia Supernova absolute magnitude.
We combine these priors with Cosmic Microwave Background (CMB), Type Ia Supernovae (SN) and Baryon Acoustic Oscillation (BAO) measurements,
showing that the constraints on the cosmological parameters are independent of the choice of prior, and that the Hubble constant tension is alleviated in a prior-independent manner.
Furthermore, one of the possible interacting cosmologies considered here,
with a phantom nature, provides a better fit than the canonical $\Lambda$CDM framework for all the considered data combinations, but with two extra degrees of freedom.
We therefore conclude that interacting dark matter-dark energy cosmologies still provide a very attractive and viable theoretical and phenomenological scenario
in which to robustly relieve the Hubble constant tension,
regardless of how the SH0ES data are processed.
\begin{acknowledgments}
\noindent
SG acknowledges financial support from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 754496 (project FELLINI).
EDV is supported by a Royal Society Dorothy Hodgkin Research Fellowship.
OM is supported by the Spanish grants PID2020-113644GB-I00, PROMETEO/2019/083 and by the European ITN project HIDDeN (H2020-MSCA-ITN-2019//860881-HIDDeN).
RCN acknowledges financial support from the Funda\c{c}\~{a}o de Amparo \`{a} Pesquisa do Estado de S\~{a}o Paulo (FAPESP, S\~{a}o Paulo Research Foundation) under the project No. 2018/18036-5.
\end{acknowledgments}
\section{Introduction} \label{sec:introduction} \input{introduction}
\section{Related Work} \label{sec:related_work} \input{relatedWork}
\section{Model Description} \label{sec:model} \input{modelDescription}
\section{Experiments} \label{sec:experiments} \input{experiments}
\section{Conclusions and Future Work} \label{sec:conclusions} \input{conclusion}
{\small
\textbf{Acknowledgements}
\input{acknowledgements}
}
{\small
\bibliographystyle{ieee}
\subsection{Composable Activities Dataset} \label{subsec:composableActivities}
\subsection{Inference of per-frame annotations.}
\label{subsec:action_annotation}
The hierarchical structure and compositional
properties of our model enable it to output a predicted global activity,
as well as per-frame annotations of predicted atomic actions and poses for each body
region.
It is important to highlight that the generation of the per-frame annotations
requires no prior temporal segmentation of atomic actions and no post-processing
of the output. The ability of our model to produce
per-frame annotated data, enabling action detection in both time and
space, makes it unique.
Figure \ref{fig:annotation} illustrates
the capability of our model to provide per-frame annotation of the atomic
actions that compose each activity. The accuracy of
the mid-level action prediction can be evaluated as in \cite{Wei2013}.
Specifically, we first obtain segments of the same predicted action in each
sequence, and then compare these segments with ground truth action labels. The
estimated label of the segment is assumed correct if the detected segment is
completely contained in a ground truth segment with the same label, or if the
Jaccard Index considering the segment and the ground truth label is greater
than 0.6. Using these criteria, the accuracy of the mid-level actions is
79.4\%. In many cases, the wrong action prediction is only local in time
or space, and the model is still able to correctly predict the activity label
of the sequence. Considering only the videos whose global activity is correctly
predicted, the accuracy of action labeling reaches 83.3\%. When interpreting this
number, it is important to note that not every ground truth action label is
accurate: the videos were hand-labeled by volunteers, so there is a chance of
mistakes in the exact temporal boundaries of the actions. In
this sense, in our experiments we observe cases where the predicted
labels show more accurate temporal boundaries than the ground
truth.
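A sketch of the segment-level evaluation criterion described above, assuming that segments are given as frame intervals and that the ground-truth list contains only segments with the same action label, could read:
\begin{verbatim}
def jaccard(seg, gt):
    # Temporal Jaccard index between two [start, end] frame intervals.
    inter = max(0, min(seg[1], gt[1]) - max(seg[0], gt[0]))
    union = max(seg[1], gt[1]) - min(seg[0], gt[0])
    return inter / union if union > 0 else 0.0

def segment_is_correct(seg, gt_segments, thr=0.6):
    # Correct if fully contained in a same-label ground-truth segment,
    # or if the Jaccard index with one of them exceeds the threshold.
    return any((seg[0] >= gt[0] and seg[1] <= gt[1]) or
               jaccard(seg, gt) > thr
               for gt in gt_segments)
\end{verbatim}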
\begin{figure*}[th]
\begin{center}
\includegraphics[width=0.999\linewidth]{./fig_all_sequences_red.pdf}
\end{center}
\caption{Per-frame predictions of atomic actions for selected activities,
showing 20 frames of each video. Each frame is joined with the predicted action
annotations of left arm, right arm, left leg and right leg. Besides the prediction of the global
activity of the video, our algorithm is able to
correctly predict the atomic actions that compose each activity in each frame,
as well as the body regions that are active during the execution of the action.
Note that in the example video of the activity \emph{Walking while calling with
hands}, the \emph{calling with hands} action is correctly annotated even when
the subject changes the waving hand during the execution of the activity.}
\label{fig:annotation}
\end{figure*}
\subsection{Robustness to occlusion and noisy joints.}
Our method is also capable of inferring action and activity labels even if some
joints are not observed. This is a common situation in practice,
as body motions induce temporal self-occlusions of body regions.
Nevertheless, due to the joint estimation of poses, actions, and activities,
our model is able to reduce the effect of this problem. To illustrate this, we
simulate a totally occluded region by fixing its geometry to the position
observed in the first frame.
We select the region to be completely occluded in each sequence by uniform sampling.
In this scenario, the accuracy of our preliminary model in \cite{Lillo2014} drops
by 7.2\%. Using our new SR setup including NI handling, the accuracy only drops
by 4.3\%, showing that the detection of non-informative poses helps the model
to deal with occluded regions. In fact, as we show in Section
\ref{subsec:exp_non_info_handling}, many of truly occluded regions in the
videos are identified using NI handling. In contrast, the drop in performance of
BoW is 12.5\% and HMM 10.3\%: simpler models are less capable of robustly dealing
with occluded regions, since their pose assignments rely only on the descriptor
itself, while in our model the assigned pose depends on the descriptor,
sequences of poses and actions, and the activity evaluated, making inference
more robust. Fig. \ref{fig:occlusions} shows some qualitative results of
occluded regions.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.999\linewidth]
{./subject_1_6.pdf} \\
{\footnotesize Right arm occluded} \\
\includegraphics[width=0.999\linewidth]
{./subject_1_23.pdf}\\
{\footnotesize Left leg occluded} \\
\includegraphics[width=0.999\linewidth]
{./subject_1_8.pdf}\\
{\footnotesize Left arm occluded}\\
\end{center}
\caption{The occluded body regions are depicted in light blue. When an arm or
leg is occluded, our method still provides a good estimation of the underlying actions in each
frame.}
\label{fig:occlusions}
\end{figure}
In terms of noisy joints, we manually add random Gaussian noise to the 3D joint
locations of the testing videos, using the SR setup and the GEO descriptor
to isolate the effect of the joints and avoid mixing in the motion descriptor. Figure
\ref{fig:joint_noise} shows the accuracy on the testing videos as a function of the noise
dispersion $\sigma_{noise}$, measured in inches. For small noise levels, there is
little effect on model accuracy, as expected from the robustness of the
geometric descriptor. However, for more drastic noise added to every joint, the
accuracy drops dramatically. This behavior is expected, since for highly noisy
joints the model can no longer predict the sequence of actions and poses well.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.999\linewidth]{./fig_acc_vs_noise.pdf} \\
\end{center}
\caption{Performance of our model in the presence of simulated Gaussian noise in
every joint, as a function of $\sigma_{noise}$ measured in inches. When the
noise is less than 3 inches on average, model performance is not strongly
affected, while for larger noise dispersions the accuracy degrades drastically.
It is important to note that in our simulation every joint is
affected by noise, while in a real setup noisy joint estimates tend to occur
more rarely.} \label{fig:joint_noise}
\end{figure}
\subsection{Early activity prediction.}
Our model needs the complete video to make an accurate activity and action
prediction for a query video. In this section, we analyze the number of frames
(as a percentage of a complete activity sequence) needed
to make an accurate activity prediction. Figure \ref{fig:accuracy_reduced_frames}
shows the mean accuracy over the dataset (using leave-one-subject-out
cross-validation) as a function of the
percentage of frames used by the classifier to label each video. We note that
with 30\% of the frames the classifier already produces reasonable predictions,
while 70\% of the frames are needed to closely match the
accuracy of using all frames.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.999\linewidth]{./fig_acc_vs_frame_reduction.pdf}
\end{center}
\caption{Accuracy of activity recognition versus percentage of frames used in
Composable Activities dataset. In general, 30\% of the frames are needed to
perform reasonable predictions, while 70\% of frames are needed to closely match the
accuracy of using all frames.}
\label{fig:accuracy_reduced_frames}
\end{figure}
\subsection{Failure cases.}
We also study some of the failure cases that we observe during
experimentation with our model.
Figure \ref{fig:errors} shows some error cases. Interestingly, these
sequences are confusing even for humans when only the skeleton is available,
as in the figure. Such errors are unlikely to be resolved by the model
itself, and will require other sources of information, such as object
detectors able to distinguish a cup from a cellphone, as in the
third row of Figure \ref{fig:errors}.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.999\linewidth]
{./sbj1_1.pdf} \\
{\footnotesize Ground truth: Walking while calling with hands\\
Prediction: Walking while waving hand} \\
\includegraphics[width=0.999\linewidth]
{./sbj4_4.pdf}\\
{\footnotesize Ground truth: Composed activity 1\\
Prediction: Talking on cellphone and drinking} \\
\includegraphics[width=0.999\linewidth]
{./sbj4_6.pdf}\\
{\footnotesize Ground truth: Waving hand and drinking\\
Prediction: Talking on cellphone and scratching head} \\
\end{center}
\caption{Failure cases. Our algorithm tends to confuse activities that share very similar
body postures.}
\label{fig:errors}
\end{figure}
\begin{comment}
\subsubsection{New activity characterization}
As we mention in previous section, our model using sparse regularization and
non-negative weights on activity ($\alpha$) classifiers and action ($\beta$)
classifiers do not \emph{punish} poses that have no influence in the
activities. For this reason, our model is able to model a new composed activity
just combining the coefficients of two known activities, leaving the rest of
the parameters of the model untouched. We use an heuristic approach to combine
two models: givint two classes $c_1$ and $c_2$, their coefficients for a region
$r$ and action $a$ are $ \alpha^r_{c_1,a}$ and $ \alpha^r_{c_2,a}$
respectively. For a new class $c_{new}$ composed of classes $c_1$ and $c_2$, we
use the mean value of the coefficients \begin{equation}
\alpha^r_{{c_{new},a}} = \frac{(\alpha^r_{c_1,a} + \alpha^r_{c_2,a})}{2}
\end{equation}
only when the corresponding coefficients for are positive; in other case, we
use the maximum value of the two coefficients. For all subjects of the dataset,
we create all the combinations od two activities, and tested the new model
using three composed videos per subject. The average accuracy of the activity
$16+1$ is 90.2\%, and in average the activities that compose the new activity
drops its accuracy in 12.3\%, showing that we effectively incorporate a new
composed activity to the model at a little cost of getting more confusion over
the original activities. Moreover, the accuracy of action labeling for the new
class is 74.2\%, similar to the accuracy of the action labeling of the
original model, so we can effectively transfer the learning of atomic action
classifiers to new compositions of activities.
\begin{table}
\begin{tabular}
\hline
Activity group & Accuracy of new class & \\
\hline
Simple & 92.
Complex & 87.2\% & \\
\hline
All & 90.2\% & \\
\end{tabular}
\caption{}
\label{tab:acc_new_class}
\end{table}
\end{comment}
\subsection{Classification of Simple and Isolated Actions}
As a first experiment,
we evaluate the performance of our model on the task of simple and
isolated human action recognition in the MSR-Action3D dataset
\cite{WanLi2010}.
Although our model is tailored to recognizing complex
actions, this experiment verifies its performance in the
simpler scenario of isolated atomic action classification.
The MSR-Action3D dataset provides pre-trimmed depth videos and estimated body poses
for isolated actors performing actions from 20
categories. We use 557 videos
in a similar setup to
\cite{Wang2012}, where videos from subjects 1, 3, 5, 7, 9 are used for
training and the rest for testing. Table \ref{tab:msr3d} shows that in this
dataset our model achieves classification accuracies comparable to
state-of-the-art methods.
\begin{table}[t]
\footnotesize
\centering
\begin{tabular}{|l|c|}
\hline
\textbf{Algorithm} & \textbf{Accuracy}\\
\hline
Our model & 93.0\% \\
\hline
L. Tao \etal \cite{Tao2015} & 93.6\% \\
C. Wang \etal \cite{Wang2013} & 90.2\% \\
Vemulapalli \etal \cite{Vemulapalli2014} & 89.5\% \\
\hline
\end{tabular}
\caption{\footnotesize
Recognition accuracy in the MSR-Action3D
dataset.}
\label{tab:msr3d}
\end{table}
\subsection{Detection of Concurrent Actions}
Our second experiment evaluates the performance of our model in a concurrent
action recognition setting. In this scenario, the goal is to predict
the temporal localization of actions that may occur concurrently in a long
video. We evaluate this task on the Concurrent Actions dataset \cite{Wei2013},
which
provides 61 RGBD videos and pose estimation data annotated with 12
action categories.
We use an evaluation setup similar to that proposed by the authors.
We split the dataset into training and testing sets with a 50\%-50\% ratio.
We evaluate performance by measuring precision-recall: a detected action
is declared as a true positive if its temporal overlap with the ground
truth action interval is larger than 60\% of their union, or if
the detected interval is completely covered by the ground truth annotation.
Our model is tailored to recognizing complex actions that are composed
of atomic components. However, in this scenario, only atomic actions are
provided and no compositions are explicitly defined. Therefore, we apply
a simple preprocessing step: we cluster the training videos into groups
by comparing the occurrence of atomic actions within each video.
The resulting groups are used as complex-action labels for the training
videos of this dataset.
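A minimal sketch of this preprocessing step, using $k$-means as a stand-in for the clustering described above (the number of groups is a hypothetical choice), could be:
\begin{verbatim}
from sklearn.cluster import KMeans

def pseudo_complex_labels(occurrences, n_groups=8):
    # occurrences: N x A binary matrix; entry (i, a) indicates whether
    # atomic action a occurs in training video i.
    km = KMeans(n_clusters=n_groups, n_init=10, random_state=0)
    return km.fit_predict(occurrences)  # group ids = complex-action labels
\end{verbatim}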
At inference time, our model outputs a single labeling per video,
which corresponds to the atomic action labeling that maximizes the energy of
our model.
Since there are no thresholds to adjust, our model produces the single
precision-recall measurement reported in Table \ref{tab:concurrent}.
Our model outperforms the state-of-the-art method in this
dataset at that recall level.
\begin{table}[tb]
\footnotesize
\centering
\begin{tabular}{|l|c|c|}
\hline
\textbf{Algorithm} & \textbf{Precision} & \textbf{Recall}\\
\hline
Our full model & 0.92 & 0.81 \\
\hline
Wei \etal \cite{Wei2013} & 0.85 & 0.81 \\
\hline
\end{tabular}
\caption{
\footnotesize
Recognition accuracy in the Concurrent Actions dataset. }
\label{tab:concurrent}
\end{table}
\subsection{Recognition of Composable Activities}
In this experiment, we evaluate the performance of our model to recognize complex
and composable human actions. In the evaluation, we use the Composable
Activities dataset \cite{Lillo2014},
which provides 693 videos of 14 subjects performing 16 activities.
Each activity is a spatio-temporal composition of atomic actions.
The dataset provides a total of 26 atomic actions that are shared across
activities. We train our model using two levels of supervision:
i) spatial annotations that map body regions to the execution of each action are made available;
ii) spatial supervision is not available, and therefore the labels $\vec{v}$ that assign spatial regions to actionlets
are treated as latent variables.
Table \ref{tab:composable} summarizes our results. We observe that under both
training conditions, our model achieves comparable performance. This indicates
that our weakly supervised model can recover some of the information
that is missing while performing well at the activity categorization task.
In spite of using less
supervision at training time, our method outperforms state-of-the-art
methodologies that are trained with full spatial supervision.
\begin{table}[tb]
\footnotesize
\centering
\begin{tabular}{|l|c|}
\hline
\textbf{Algorithm} & \textbf{Accuracy}\\
\hline
Base model + GC, GEO desc. only, spatial supervision & 88.5\%\\
Base model + GC, with spatial supervision & 91.8\% \\
Our full model, no spatial supervision (latent $\vec{v}$) & 91.1\%\\
\hline
Lillo \etal \cite{Lillo2014} (without GC) & 85.7\% \\
Cao \etal \cite{cao2015spatio} & 79.0\% \\
\hline
\end{tabular}
\caption{
\footnotesize
Recognition accuracy in the Composable Activities
dataset.}
\label{tab:composable}
\end{table}
\subsection{Action Recognition in RGB Videos}
Our experiments so far have evaluated the performance of our model
in the task of human action recognition in RGBD videos.
In this experiment, we explore the use of our model in the problem of human
action recognition in RGB videos. For this purpose, we use the sub-JHMDB
dataset \cite{Jhuang2013}, which focuses on videos depicting 12 actions and
where most of the actor body is visible in the image frames.
In our validation, we use the 2D body pose configurations provided by the
authors and compare against previous methods that also use them. Given that
this dataset only includes 2D image coordinates for each body joint, we obtain
the geometric descriptor by adding a depth coordinate with value $z = d$ to
the joints corresponding to wrists and knees, $z = -d$ to elbows, and $z = 0$ to all other joints,
so that we can compute angles between segments; the value $d = 30$ is fixed by cross-validation. We summarize the results in Table
\ref{tab:subjhmdb},
which shows that our method outperforms alternative state-of-the-art techniques.
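For illustration, this lifting of the 2D joints to a pseudo-3D configuration can be sketched as follows, where the joint index sets are placeholders for the actual skeleton layout:
\begin{verbatim}
import numpy as np

WRIST_KNEE_IDX = [4, 7, 10, 13]  # hypothetical joint indices
ELBOW_IDX = [3, 6]               # hypothetical joint indices

def lift_to_3d(joints_2d, d=30.0):
    # Augment 2D joints (J x 2 array) with a fixed depth coordinate so
    # that 3D angles between body segments become well defined.
    z = np.zeros(len(joints_2d))
    z[WRIST_KNEE_IDX] = d
    z[ELBOW_IDX] = -d
    return np.column_stack([joints_2d, z])  # J x 3 array
\end{verbatim}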
\begin{table}[tb]
\footnotesize
\centering
\begin{tabular}{|l|c|}
\hline
\textbf{Algorithm} & \textbf{Accuracy}\\
\hline
Our model & 77.5\% \\
\hline
Jhuang \etal \cite{Jhuang2013} & 75.6\% \\
Ch\'eron \etal \cite{Cheron2015} & 72.5\%\\
\hline
\end{tabular}
\caption{\footnotesize
Recognition accuracy in the sub-JHMDB dataset.}
\label{tab:subjhmdb}
\end{table}
\subsection{Spatio-temporal Annotation of Atomic Actions}
In this experiment, we study the ability of our model to provide spatial and
temporal annotations of relevant atomic actions. Table \ref{tab:annotation}
summarizes our results. We report precision-recall rates
for the spatio-temporal annotations predicted by our model in the
testing videos (first and second rows). Notice that this is a
very challenging task. The testing videos do no provide any label, and
the model needs to predict both, the temporal extent of each action and the
body regions associated with the execution of each action. Although the
difficulty of the task, our model shows satisfactory results being able to
infer suitable spatio-temporal annotations.
We also study the capability of the model to provide spatial and temporal
annotations during training. In our first experiment, each video
is provided
with the temporal extent of each action, so the model only needs to infer the
spatial annotations (third row in Table \ref{tab:annotation}). In a
second experiment, we do not provide any temporal or spatial annotation,
but only the global action label of each video (fourth row in Table
\ref{tab:annotation}). In both experiments, we observe that the model is
still able to infer suitable spatio-temporal annotations.
\begin{table}[tb]
\footnotesize
\centering
\begin{tabular}{|l|c|c|c|}
\hline
\textbf{Videos} & \textbf{Annotation inferred} & \textbf{Precision} & \textbf{Recall}\\
\hline
Testing set & Spatio-temporal, no GC & 0.59 & 0.77 \\
Testing set & Spatio-temporal & 0.62 & 0.78 \\
\hline
Training set & Spatial only & 0.86 & 0.90\\
Training set & Spatio-temporal & 0.67 & 0.85 \\
\hline
\end{tabular}
\caption{
\footnotesize
Atomic action annotation performances in the Composable Activities
dataset. The results show that our model is able to recover spatio-temporal
annotations both at training and testing time.}
\label{tab:annotation}
\end{table}
\subsection{Effect of Model Components}
In this experiment,
we study the contribution of key components of the
proposed model. First, using the sub-JHMDB dataset,
we measure the impact of three components of our model: the garbage collector for
motion poselets (GC), the multimodal modeling of actionlets, and the use of latent
variables to infer spatial annotations of body regions (latent $\vec{v}$).
Table \ref{tab:components} summarizes our experimental results, showing that the full version
of our model achieves the best performance, with each of the components
mentioned above contributing to the overall success of the method.
\begin{table}[tb]
\footnotesize
\centering
\begin{tabular}{|l|c|}
\hline
\textbf{Algorithm} & \textbf{Accuracy}\\
\hline
Base model, GEO descriptor only & 66.9\%\\
Base Model & 70.6\%\\
Base Model + GC & 72.7\% \\
Base Model + Actionlets & 75.3\%\\
Our full model (Actionlets + GC + latent $\vec{v}$) & 77.5\% \\
\hline
\end{tabular}
\caption{
\footnotesize
Analysis of contribution to recognition performance from
each model component in the sub-JHMDB dataset.}
\label{tab:components}
\end{table}
Second, using the Composable Activities dataset, we also analyze the
contribution of the proposed self-paced learning scheme for initializing and
training our model. We summarize our results in
Table \ref{tab:initialization} by reporting action
recognition accuracy under different initialization schemes: i) Random: random
initialization of latent variables $\vec{v}$, ii) Clustering: initialize
$\vec{v}$ by first computing a BoW descriptor for the atomic action intervals
and then performing $k$-means clustering, assigning each action interval to the
closest cluster center, and iii) Ours: initialize $\vec{v}$ using the proposed
self-paced learning scheme. Our proposed initialization scheme helps the model achieve its best
performance.
\begin{table}[tb]
\footnotesize
\centering
\begin{tabular}{|l|c|}
\hline
\textbf{Initialization Algorithm} & \textbf{Accuracy}\\
\hline
Random & 46.3\% \\
Clustering & 54.8\% \\
Ours & 91.1\% \\
\hline
Ours, fully supervised & 91.8\%\\
\hline
\end{tabular}
\caption{
\footnotesize
Results in Composable Activities dataset, with latent $\vec{v}$ and different initializations. }
\label{tab:initialization}
\end{table}
\subsection{Qualitative Results}
Finally, we provide a qualitative analysis of
relevant properties of our model. Figure \ref{fig:poselets_img}
shows examples of moving poselets learned in the Composable
Activities dataset. We observe that each moving poselet captures
a salient body configuration that helps to discriminate among atomic
actions. To further illustrate this, Figure \ref{fig:poselets_img}
indicates the most likely underlying atomic action for each moving poselet.
Figure \ref{fig:poselets_skel} presents a similar analysis for moving
poselets learned in the MSR-Action3D dataset.
We also visualize the action annotations produced by our model.
Figure \ref{fig:actionlabels} (top) shows the action labels associated
with each body part in a video from the Composable Activities dataset.
Figure \ref{fig:actionlabels} (bottom) illustrates per-body part action
annotations for a video in the Concurrent Actions dataset. These
examples illustrate the capabilities of our model to correctly
annotate the body parts that are involved in the execution of each action,
in spite of not having that information during training.
\begin{figure}[tb]
\begin{center}
\scriptsize
Motion poselet \#4 - most likely action: talking on cellphone\\
\includegraphics[trim=0 0 0 0.35cm, clip, width=0.49\textwidth]{Fig/poselets1}
Motion poselet \#7 - most likely action: erasing on board\\
\includegraphics[trim=0 0 0 0.35cm, clip, width=0.49\textwidth]{Fig/poselets2}
Motion poselet \#19 - most likely action: waving hand\\
\includegraphics[trim=0 0 0 0.35cm, clip, width=0.49\textwidth]{Fig/poselets3}
\end{center}
\caption{
\footnotesize
Moving poselets learned from the Composable Activities
dataset.}
\label{fig:poselets_img}
\end{figure}
\begin{figure}[tb]
\begin{center}
\scriptsize
Motion poselet \#16 - most likely action: tennis swing\\
\includegraphics[trim=0 0 0cm 0cm, clip, width=0.49\textwidth]{Fig/poselets4}
Motion poselet \#34 - most likely action: golf swing\\
\includegraphics[trim=0 0 0cm 0cm,clip, width=0.49\textwidth]{Fig/poselets5}
Motion poselet \#160 - most likely action: bend\\
\includegraphics[trim=0 0 0cm 0cm, clip, width=0.49\textwidth]{Fig/poselets6}
\end{center}
\caption{
\footnotesize
Moving poselets learned from the MSR-Action3D
dataset.}
\label{fig:poselets_skel}
\end{figure}
\begin{figure}[tb]
\begin{center}
\scriptsize
\includegraphics[]{Fig/labels_acciones}
\end{center}
\caption{
\footnotesize
Automatic spatio-temporal annotation of atomic actions. Our method
detects the temporal span and spatial body regions that are involved in
the performance of atomic actions in videos.}
\label{fig:actionlabels}
\end{figure}
\begin{comment}
\subsection{CAD120 Dataset}
The CAD120 dataset is introduced in \cite{Koppula2012}. It is composed of 124
videos that contain activities in 10 clases performed by 4 actors. Activities
are related to daily living: \emph{making cereal}, \emph{stacking objects}, or
\emph{taking a meal}. Each activity is composed of simpler actions like
\emph{reaching}, \emph{moving}, or \emph{eating}. In this database, human-object
interactions are an important cue to identify the actions, so object
locations and object affordances are provided as annotations. Performance
evaluation is made through leave-one-subject-out cross-validation. Given
that our method does not consider objects, we use only
the data corresponding to 3D joints of the skeletons. As shown in Table
\ref{Table-CAD120},
our method outperforms the results reported in
\cite{Koppula2012} using the same experimental setup. It is clear that using
only 3D joints is not enough to characterize each action or activity in this
dataset. As part of our future work, we expect that adding information related
to objects will further improve accuracy.
\begin{table}
\centering
{\small
\begin{tabular}{|c|c|c|}
\hline
\textbf{Algorithm} & \textbf{Average precision} & \textbf{Average recall}\\
\hline
Our method & 32.6\% & 34.58\% \\
\hline
\cite{Koppula2012} & 27.4\% & 31.2\%\\
\cite{Sung2012} & 23.7\% & 23.7\% \\
\hline
\end{tabular}
}
\caption{Recognition accuracy of our method compared to state-of-the-art methods
using CAD120 dataset.}
\label{Table-CAD120}
\end{table}
\end{comment}
\subsection{Latent spatial actions for hierarchical action detection}
\subsection{Hierarchical activity model}
Suppose we have a video $D$ with $T$ frames, each frame described by a feature vector $x_t$. Assume we have available $K$ classifiers $\{w_k\}_{k=1}^K$ over the frame descriptors, such that each frame descriptor can be associated to a single classifier. If we choose the maximum response for every frame, encoded as $z_t = \argmax_k\{w_k^\top x_t\}$, we can build a BoW representation to feed linear action classifiers $\beta$, computing the histogram $h(Z)$ of $Z = \{z_1,z_2,\dots,z_T\}$ and using these histograms as feature vectors for the complete video to recognize single actions. Imagine now that we would like to use the scores of the maximum responses, $w_{z_t}^\top x_t$, as a potential to help discriminate videos that present reliable poses from videos that do not. We can build a joint energy function, combining the action classifier score and the aggregated frame classifier scores, as
\begin{equation}
\label{eq:2-levels}
\begin{split}
E(D) &= \beta_{a}^\top h(Z) + \sum_{t=1}^T w_{z_t}^\top x_t \\ & = \sum_{t=1}^T\sum_{k=1}^K\left(\beta_{a,k} + w_k^\top x_t \right)\delta(z_t=k)
\end{split}
\end{equation}
What is interesting about Eq. (\ref{eq:2-levels}) is that every term in the sum is tied to the value of $z_t$, creating a model in which all components depend on the labeling $Z$. We can expand the previous model to more levels using the same philosophy. In fact, for a new level, we could create a new indicator $v_t$ for every frame that indicates which classifier $\beta$ will be used (just as $z_t$ indicates which classifier of $w$). If we name $w$ the \emph{pose classifiers} and $\beta$ the \emph{action classifiers}, we can create a hierarchical model where multiple poses and actions can be present in a single video. Supposing we have $A$ actions, the energy for a three-level hierarchy could be, for an \emph{activity} $l$,
\begin{equation}
E(D) =\alpha_l^\top h(V) + \sum_{a=1}^A \beta_{a}^\top h^a(Z,V) + \sum_{t=1}^T w_{z_t}^\top x_t
\end{equation}
where $h^a(Z,V)$ refers to the BoW representation of $Z$ for those frames labeled as action $v_t = a$.
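To make the construction concrete, the following minimal sketch (illustrative code; all names are ours, and the per-frame labels are chosen greedily instead of being jointly maximized) evaluates this three-level energy for one video:
\begin{verbatim}
import numpy as np

def hierarchy_energy(X, w, beta, alpha_l):
    # X: (T, D) frame descriptors; w: (K, D) pose classifiers
    # beta: (A, K) action classifiers; alpha_l: (A,) activity classifier
    scores = X @ w.T                    # pose scores w_k^T x_t, shape (T, K)
    z = scores.argmax(axis=1)           # z_t = argmax_k w_k^T x_t
    pose_term = scores[np.arange(len(z)), z].sum()
    v = beta[:, z].argmax(axis=0)       # greedy per-frame action labels v_t
    action_term = beta[v, z].sum()      # equals sum_a beta_a^T h^a(Z, V)
    h_V = np.bincount(v, minlength=len(alpha_l))   # histogram h(V)
    return alpha_l @ h_V + action_term + pose_term
\end{verbatim}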
Recent work in action recognition \cite{Cheron2015,Tao2015, Wang2011,Jhuang2013} shows a resurgence of describing human actions as a collection of dynamic spatial parts that resembles Poselets. In line with this research, we split the human body into $R$ semantic regions. As modeling actions using the whole body is hard, separating the body into groups of limbs helps in the recognition of actions, especially for complex datasets \cite{Tao2015}. Our view is that while poses are in general well defined in most research, little effort has been made to mine actions from videos, in terms of detecting their temporal span (action detection) and their spatial localization. In addition to the fact that most action datasets contain only single actions, there is a lack of research in the general setup where several actions are combined in the same video. Nevertheless, a few works have noticed that humans usually perform complex actions in real life \cite{Wei2013, Lillo2014}, providing their own datasets based on RGB-D cameras. In our work, we aim to bring together the worlds of single and composed actions in a single hierarchical model with three semantic levels, using human body regions to improve representativeness.
During training, we assume temporal annotations of actions are available. As we want our model to perform action localization, we model the action assignments $V_r$ in each region as latent variables during training, allowing the model to infer which human part executes the action without needing this kind of annotation in the training set, and including a model for the initialization of action labels. In this way, we advance from a simple detection problem to also inferring \emph{how} the subject executes the action, which is important in surveillance applications and health monitoring, among others. We also expand the modeling of recurrent patterns of poses to construct a general model for shared actions, aiming to handle multimodal information, which is produced by actions with the same label but different execution patterns, or by changes in the representation of actions such as a varying camera view. We handle this problem by augmenting the number of action classifiers, where each original action acts as a parent node of several non-overlapping child actions. Finally, as we are using local information for poses, some frames could be noisy or represent an uncommon pose, not useful to build the pose models. We attack this issue by adding a garbage collector for poses, where only the most informative poses are used by the pose classifiers during learning. We describe these contributions in the following paragraphs.
\paragraph{Latent assignments of actions to human regions}
Knowing the parts of the body involved in the actions is highly appealing. Suppose we have $M$ videos, each video annotated with $Q_m$ action intervals. Each action interval can be associated with any number of regions, from $1$ to all $R$ regions. For example, a \emph{waving hand} action could be associated only with \emph{right\_arm}, while the action \emph{jogging} could be associated with the whole body. We want to learn the associations of actions and human parts for training videos, and we build these associations using latent variables. The main problem to solve is how to get a proper initialization for actions, since there is a very high chance of getting stuck in a local minimum far from the optimum, producing bad results.
Our first contribution is a method to get a proper initialization of fine-grained spatial action labels, knowing only the time span of the actions. Using the known action intervals, we formulate the problem of action-to-region assignment as an optimization problem, constrained using structural information: the action intervals must not overlap in the same region, and every action interval must be present in at least one region. We formulate this labeling problem as a binary Integer Linear Programming (ILP) problem. We define $v_{r,q}^m=1$ when the action interval $q \in \{1,\dots,Q_m\}$ appears in region $r$ of video $m$, and $v_{r,q}^m=0$ otherwise. We assume we have pose labels $z_{t,r}$ in each frame, independent for each region, learned via clustering the poses over all frames in all videos. For an action interval $q$, we use as descriptor the histogram of pose labels for each region in the action interval, defined for video $m$ as $h_{r,q}^m$. We can solve the problem of finding the correspondence between action intervals and regions in a formulation similar to $k$-means, using the structure of the problem as constraints on the labels, and using the $\chi^2$ distance between the action interval descriptors and the cluster centers:
\begin{equation}
\begin{split}
P1) \quad \min_{v,\mu} &\sum_{m=1}^M \sum_{r=1}^R \sum_{q=1}^{Q_m} v_{r,q}^m d( h_{r,q}^m - \mu_{a_q}^r) -\frac{1}{\lambda} v_{r,q}^m\\
\text{s.t.}
\quad
& \sum_{r=1}^R v_{r,q}^m \ge 1\text{, }\forall q\text{, }\forall m \\
& v_{r,q_1}^m + v_{r,q_2}^m \le 1 \text{ if } q_1\cap q_2 \neq \emptyset \text{, }\forall r\text{, }\forall m\\
& v_{r,q}^m \in \{0,1\}\text{, }\forall q\text{, }\forall{r}\text{, }\forall m
\end{split}
\end{equation}
with
\begin{equation}
d( h_{r,q}^m - \mu_{a_q}^r) = \sum_{k=1}^K (h_{r,q}^m[k] - \mu_{a_q}^r[k])^2/(h_{r,q}^m[k] +\mu_{a_q}^r[k]).
\end{equation}
$\mu_{a_q}^r$ are computed as the mean of the descriptors with the same action label within the same region. We solve $P1$ iteratively, as in $k$-means, finding the cluster centers $\mu_{a}^r$ for each region $r$ using the labels $v_{r,q}^m$, and then finding the best labeling given the cluster centers by solving an ILP problem. Note that the first term of the objective function is similar to a $k$-means model, while the second term resembles the objective function of \emph{self-paced} learning as in \cite{Kumar2010}, encouraging a balance between assigning a single region to every action and assigning all possible regions to the action intervals when possible.
We describe the further changes in the hierarchical model of \cite{Lillo2014} in the learning and inference sections.
\paragraph{Representing semantic actions with multiple atomic sequences}
As the poses and atomic actions in the model of \cite{Lillo2014} are shared, a single classifier is generally not enough to model the multimodal representations that usually occur in complex videos. We modify the original hierarchical model of \cite{Lillo2014} to include multiple linear classifiers per action. We introduce two new concepts: \textbf{semantic actions}, which refer to the action \emph{names} that compose an activity; and \textbf{atomic sequences}, which refer to the sequences of poses that compose an action. Several atomic sequences can be associated with a single semantic action, creating disjoint sets of atomic sequences, each set associated with a single semantic action. The main idea is that the action annotations in the datasets are associated with semantic actions, whereas for each semantic action we learn several atomic sequence classifiers. With this formulation, we can handle the multimodal nature of semantic actions, covering changes in motion, poses, or even changes in the meaning of the action according to the context (e.g., the semantic action ``open'' can be associated with opening a can, opening a door, etc.).
Inspired by \cite{Raptis2012}, we first use \emph{Cattell's Scree test} to find a suitable number of atomic sequences for every semantic action. Using the semantic action labels, we compute a descriptor for every interval using normalized histograms of pose labels. Then, for a particular semantic action $u$, we compute the eigenvalues $\lambda_u$ of the affinity matrix of the semantic action descriptors, using the $\chi^2$ distance. For each semantic action $u \in \{1,\dots,U\}$ we find the number of atomic sequences $G_u$ as $G_u = \argmin_i \lambda_{i+1}^2 / (\sum_{j=1}^i \lambda_j) + c\cdot i$, with $c=2\cdot 10^{-3}$. Finally, we cluster the descriptors corresponding to each semantic action using $k$-means, with a different number of clusters for each semantic action $u$ according to $G_u$. This approach generates non-overlapping atomic sequences, each associated with a single semantic action.
To transfer the new labels to the model, we define $u(v)$ as the function that given the atomic sequence label $v$, returns the corresponding semantic action label $u$. The energy for the activity level is then
\begin{equation}
E_{\text{activity}} = \sum_{u=1}^U\sum_{t=1}^T \alpha_{y,u}\delta(u(v_t)=u)
\end{equation}
For the action and pose labels the model remains unchanged. Using the new atomic sequences allows a richer representation for actions, while at the activity level several atomic sequences map to a single semantic action. This behavior resembles a max-pooling operation, where at inference we choose the atomic sequences that best describe the performed actions in the video, keeping the semantics of the original labels.
\paragraph{Towards a better representation of poses: adding a garbage collector}
The model in \cite{Lillo2014} uses all poses to feed the action classifiers. Our intuition is that only a subset of the poses in each video are really discriminative or informative for the actions performed, while plenty of poses correspond to noisy or non-informative ones. In particular, low-scored frames in terms of poses (i.e., a low value of $w_{z_t}^\top x_t$ in Eq. (\ref{eq:energy2014})) make the same contribution as high-scored poses in the higher levels of the model, while degrading the pose classifiers at the same time, since low-scored poses are likely to be related to non-informative frames. We propose to include a new pose label to explicitly handle those low-scored frames, keeping them apart from the pose classifiers $w$, but still adding a fixed score to the energy function to avoid normalization issues and to help in the specialization of the pose classifiers. We call this change in the model a \emph{garbage collector}, since it handles all low-scored frames and groups them with a fixed energy score $\theta$. In practice, we use a special pose entry $K+1$ to identify the non-informative poses. The energy for the pose level is
\begin{equation} \label{Eq_poseEnergy}
E_{\text{poses}} = \sum_{t=1}^T \left[ {w_{z_t}}^\top x_{t}\delta(z_{t} \le K) + \theta
\delta(z_{t}=K+1)\right]
\end{equation}
where $\delta(\ell) = 1$ if $\ell$ is true and $\delta(\ell) = 0$ if $\ell$ is false. The action level also changes its energy:
\begin{equation}
\begin{split}
\label{Eq_actionEnergy}
E_{\text{actions}} = \sum_{t=1}^T \sum_{a=1}^A \sum_{k=1}^{K+1} \beta_{a,k} \delta(z_t = k) \delta(v_t = a).
\end{split}
\end{equation}
\begin{comment}
Integrating all contribution detailed in previous sections, the model is written as:
Energy function:
\begin{equation}
E = E_{\text{activity}} + E_{\text{action}} + E_{\text{pose}}
+ E_{\text{action transition}} + E_{\text{pose transition}}.
\end{equation}
\begin{equation}
E_{\text{poses}} = \sum_{t=1}^T \left[ {w_{z_t}}^\top x_{t}\delta(z_{t} \le K) + \theta
\delta(z_{t}=K+1)\right]
\end{equation}
\begin{equation}
E_{\text{actions}} = \sum_{t=1}^T \sum_{a=1}^A \sum_{k=1}^{K+1} \beta_{a,k} \delta(z_t = k) \delta(v_t = a).
\end{equation}
\begin{equation}
h_g^{r}(U) = \sum_{t} \delta_{u_{t,r}}^g
\end{equation}
So the energy in the activity level is
\begin{equation}
E_{\text{activity}} = \sum_{r} {\alpha^r_{y}}^\top h^{r}(U) = \sum_{r,g,t} \alpha^r_{y,g} \delta_{u_{t,r}}^g
\end{equation}
\begin{equation}
E_{\text{action transition}} = \sum_{r,a,a'} \gamma^r_{a',a} \sum_{t} \delta_{v_{t-1,r}}^{a'}\delta_{v_{t,r}}^a
\end{equation}
\begin{equation}
E_{\text{pose transition}} =\sum_{r,k,k'} \eta^r_{k',k}\sum_{t}\delta_{z_{t-1,r}}^{k'}\delta_{z_{t,r}}^{k}
\end{equation}
\end{comment}
\subsection{Inference}
\label{subsec:inference}
The input to the inference algorithm is a new video sequence with features
$\vec{x}$. The task is to infer the best complex action label $\hat y$, and to
produce the best labeling of actionlets $\hat{\vec{v}}$ and motion poselets $\hat{\vec{z}}$.
{\small
\begin{equation}
\hat y, \hat{\vec{v}}, \hat{\vec{z}} = \argmax_{y, \vec{v},\vec{z}} E(\vec{x}, \vec{v}, \vec{z}, y)
\end{equation}}
We can solve this by exhaustively enumerating all values of complex actions $y$, and solving for $\hat{\vec{v}}$ and $\hat{\vec{z}}$ using:
\small
\begin{equation}
\begin{split}
\hat{\vec{v}}, \hat{\vec{z}} | y ~ =~ & \argmax_{\vec{v},\vec{z}} ~ \sum_{r=1}^R \sum_{t=1}^T \left( \alpha^r_{y,u(v{(t,r)})}
+ \beta^r_{v_{(t,r)},z_{(t,r)}}\right. \\
&\quad\quad \left.+ {w^r_{z_{(t,r)}}}^\top x_{t,r} \delta(z_{(t,r)} \le K) + \theta^r \delta_{z_{(t,r)}}^{K+1} \right. \\
& \quad\quad \left.+ \gamma^r_{v_{({t-1},r)},v_{(t,r)}} + \eta^r_{z_{({t-1},r)},z_{(t,r)}} \vphantom{{w^r_{z_{(t,r)}}}^\top x_{t,r}} \right). \\
\end{split}
\label{eq:classify_inference}
\end{equation}
\normalsize
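Since, for a fixed complex action $y$, the objective in Eq. (\ref{eq:classify_inference}) couples labels only through first-order temporal transitions within each region, the inner maximization can be solved exactly with dynamic programming over the joint state $s=(v,z)$. A minimal sketch (illustrative code, assuming the joint state space is enumerated explicitly):
\begin{verbatim}
import numpy as np

def viterbi(unary, trans):
    # unary: (T, S) scores of joint states s=(v,z) per frame;
    # trans: (S, S) transition scores gamma + eta between states.
    T, S = unary.shape
    dp, back = unary[0].copy(), np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = dp[:, None] + trans       # prev state -> current state
        back[t] = cand.argmax(axis=0)
        dp = cand.max(axis=0) + unary[t]
    path = [int(dp.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1], float(dp.max())   # best labeling and its energy
\end{verbatim}
The outer loop enumerates all complex actions $y$ and all regions; $\hat y$ is the label attaining the largest summed energy.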
\subsection{Learning} \label{subsec:learning}
\textbf{Initial actionlet labels.} An important step in the training process is
the initialization of latent variables. This is challenging due to the lack
of spatial supervision: at each time instance, the available atomic actions can be associated with
any of the $R$ body regions.
We adopt the machinery of
self-paced
learning \cite{Kumar:EtAl:2010} to provide a suitable solution and
formulate the association between actions and body regions as an
optimization problem. We constrain this optimization using two structural
restrictions:
i) atomic actions intervals must not overlap in the same region, and
ii) a labeled atomic action must be present at least in one region. We
formulate the labeling
process as a binary Integer Linear Programming (ILP) problem, where we define
$b_{r,q}^m=1$ when action interval $q \in \{1,\dots,Q_m\}$ is active in region
$r$ of video $m$; and $b_{r,q}^m=0$ otherwise. Each action interval $q$ is
associated with a single atomic action. We assume that we have initial
motion poselet labels
$z_{t,r}$ in each frame and region.
We describe the action interval $q$ and region $r$ using
the histogram $h_{r,q}^m$ of motion poselet labels. We can find
the correspondence between action intervals and regions using a formulation
that resembles the operation of $k$-means, but using the
structure of the problem to constrain the labels:
\small
\begin{equation}
\begin{split}
\text{P1}) \quad \min_{b,\mu} &\sum_{m=1}^M \sum_{r=1}^R \sum_{q=1}^{Q_m} b_{r,q}^m
d( h_{r,q}^m - \mu_{a_q}^r) -\frac{1}{\lambda} b_{r,q}^m\\
\text{s.t.}
\quad
& \sum_{r=1}^R b_{r,q}^m \ge 1\text{, }\forall q\text{, }\forall m \\
& b_{r,q_1}^m + b_{r,q_2}^m \le 1 \text{ if } q_1\cap q_2 \neq \emptyset
\text{,
}\forall r\text{, }\forall m\\
& b_{r,q}^m \in \{0,1\}\text{, }\forall q\text{, }\forall{r}\text{, }\forall m
\end{split}
\end{equation}
with
\begin{equation}
d( h_{r,q}^m - \mu_{a_q}^r) = \sum_{k=1}^K (h_{r,q}^m[k] -
\mu_{a_q}^r[k])^2/(h_{r,q}^m[k] +\mu_{a_q}^r[k]).
\end{equation}
\normalsize
Here, $\mu_{a_q}^r$ are the means of the descriptors with action
label $a_q$ within region $r$. We solve $\text{P1}$ iteratively using a block coordinate
descent scheme, alternating between solving for $\mu_{a}^r$ with $b_{r,q}^m$
fixed, which has a trivial solution (the cluster means), and solving for
$b_{r,q}^m$ with $\mu_{a}^r$ fixed, relaxing $\text{P1}$ to a linear program. Note that the second term
of the objective function in $\text{P1}$ resembles the objective function of
\emph{self-paced} learning \cite{Kumar:EtAl:2010}, managing the balance between
assigning a single region to every action and assigning all possible regions to
the respective action interval.
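The following sketch illustrates this alternating scheme on a single video; it is a simplification in which the non-overlap constraint is dropped and the labeling step is solved greedily per interval rather than as a linear program (all names are ours):
\begin{verbatim}
import numpy as np

def chi2(h, mu, eps=1e-12):
    # chi^2 distance between a histogram and a cluster center
    return np.sum((h - mu) ** 2 / (h + mu + eps))

def solve_p1(H, a, n_actions, lam=2.0, n_iter=10):
    # H: (Q, R, K) histograms h_{r,q}; a: (Q,) atomic action of interval q
    Q, R, K = H.shape
    b = np.ones((Q, R), dtype=bool)          # start with all regions active
    for _ in range(n_iter):
        mu = np.zeros((n_actions, R, K))     # cluster centers mu_a^r
        for act in range(n_actions):
            for r in range(R):
                sel = b[:, r] & (a == act)
                if sel.any():
                    mu[act, r] = H[sel, r].mean(axis=0)
        for q in range(Q):                   # greedy labeling step
            cost = np.array([chi2(H[q, r], mu[a[q], r]) - 1.0 / lam
                             for r in range(R)])
            b[q] = cost < 0                  # self-paced term admits regions
            b[q, cost.argmin()] = True       # keep at least one region
    return b
\end{verbatim}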
\textbf{Learning model parameters.}
We formulate learning the model parameters as a Latent Structural SVM
problem \cite{Yu:Joachims:2010}, with latent variables for motion
poselets $\vec{z}$ and actionlets $\vec{v}$. We find values for parameters in
equations
(\ref{eq:motionposelets}-\ref{eq:actionletstransition}),
slack variables $\xi_i$, motion poselet labels $\vec{z}_i$, and actionlet labels $\vec{v}_i$,
by solving:
{\small
\begin{equation}
\label{eq:big_problem}
\min_{W,\xi_i,~i=\{1,\dots,M\}} \frac{1}{2}||W||_2^2 + \frac{C}{M} \sum_{i=1}^M\xi_i ,
\end{equation}}
where
{\small \begin{equation}
W^\top=[\alpha^\top, \beta^\top, w^\top, \gamma^\top, \eta^\top, \theta^\top],
\end{equation}}
and
{\small
\begin{equation} \label{eq:slags}
\begin{split}
\xi_i = \max_{\vec{z},\vec{v},y} \{ & E(\vec{x}_i, \vec{z}, \vec{v}, y) + \Delta( (y_i,\vec{v}_i), (y, \vec{v})) \\
& - \max_{\vec{z}_i}{ E(\vec{x}_i, \vec{z}_i, \vec{v}_i, y_i)} \}, \; \;\; i\in[1,...M].
\end{split}
\end{equation}}
In Equation (\ref{eq:slags}), each slack variable
$\xi_i$ quantifies the error of the inferred labeling for
video $i$. We solve Equation (\ref{eq:big_problem}) iteratively using the CCCP
algorithm \cite{Yuille:Rangarajan:03}, by solving for
latent labels $\vec{z}_i$ and $\vec{v}_i$ given model parameters $W$,
temporal atomic action annotations (when available), and labels of complex actions occurring in
training videos (see Section \ref{subsec:inference}). Then, we solve for
$W$ via 1-slack formulation using Cutting Plane algorithm
\cite{Joachims2009}.
The role of the loss function $\Delta((y_i,\vec{v}_i),(y,\vec{v}))$ is to penalize inference errors during
training. If the true actionlet labels are known in advance, the loss function is the same as in \cite{Lillo2014} using the actionlets instead of atomic actions:
\small \begin{equation}
\Delta((y_i,\vec{v}_i),(y,\vec{v})) = \lambda_y\,\delta(y_i \ne y) + \lambda_v\frac{1}{T}\sum_{t=1}^T
\delta({v_t}_{i} \neq v_t),
\end{equation}
\normalsize
\noindent where ${v_t}_{i}$ is the true actionlet label. If the spatial ordering of actionlets is unknown (hence the latent
actionlet formulation), but the temporal composition is known, we can compute a
list $A_t$ of possible actionlets for frame $t$, and include that information
on the loss function as
\small \begin{equation}
\Delta((y_i,\vec{v}_i),(y,\vec{v})) = \lambda_y\,\delta(y_i \ne y) + \lambda_v\frac{1}{T}\sum_{t=1}^T
\delta(v_t \notin A_t)
\end{equation}
\normalsize
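Both variants of the loss are inexpensive to evaluate; an illustrative sketch (names ours):
\begin{verbatim}
import numpy as np

def loss_observed(y_i, y, v_i, v, lam_y=1.0, lam_v=1.0):
    # Delta when the true actionlet labels v_i are known.
    return lam_y * (y_i != y) + lam_v * np.mean(
        np.asarray(v_i) != np.asarray(v))

def loss_latent(y_i, y, A, v, lam_y=1.0, lam_v=1.0):
    # Delta when only the candidate actionlet sets A[t] are known per frame.
    miss = [v_t not in A_t for v_t, A_t in zip(v, A)]
    return lam_y * (y_i != y) + lam_v * float(np.mean(miss))
\end{verbatim}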
\subsection{Body regions}
We divide the body pose into $R$ fixed spatial regions and independently compute
a pose feature vector for each region. Figure \ref{fig:skeleton_limbs_regions}
illustrates the case when $R = 4$ that we use in all our experiments. Our body
pose feature vector consists of the concatenation of two descriptors. At frame
$t$ and region $r$, a descriptor $x^{g}_{t,r}$ encodes geometric information
about the spatial configuration of body joints, and a descriptor $x^{m}_{t,r}$
encodes local motion information around each body joint position.
We use the geometric descriptor from \cite{Lillo2014}:
we construct six segments that connect pairs of joints at each
region\footnote{Arm segments: wrist-elbow, elbow-shoulder, shoulder-neck, wrist-shoulder, wrist-head, and neck-torso; Leg segments: ankle-knee, knee-hip, hip-hip center, ankle-hip, ankle-torso and hip center-torso}
and compute 15 angles between those segments.
Also, three angles are calculated between a plane formed by three
segments\footnote{Arm plane: shoulder-elbow-wrist; Leg plane: hip-knee-ankle} and
the remaining three non-coplanar segments, yielding an 18-D geometric descriptor (GEO) for every region.
Our motion descriptor is based on tracking motion trajectories of key points
\cite{WangCVPR2011}, which in our case coincide with body joint positions.
We extract a HOF descriptor
using $32\times 32$ RGB patches centered at the joint location over a temporal window of 15
frames. At each joint location, this produces a 108-D descriptor,
which we concatenate across all joints in each region to obtain our motion descriptor. Finally,
we apply PCA to reduce the dimensionality of our concatenated motion descriptor
to 20. The final descriptor is the concatenation of the geometric and
motion descriptors, $x_{t,r} = [x_{t,r}^g ; x_{t,r}^m]$.
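Assembling the per-frame, per-region descriptor then reduces to a concatenation; an illustrative sketch, assuming the PCA projection has been learned beforehand:
\begin{verbatim}
import numpy as np

def frame_descriptor(geo_angles, joint_hofs, pca_proj):
    # geo_angles: (18,) geometric descriptor (15 + 3 angles)
    # joint_hofs: (J, 108) per-joint HOF descriptors of the region
    # pca_proj:   (20, J*108) PCA projection for the motion part
    x_m = pca_proj @ joint_hofs.ravel()       # 20-D motion descriptor
    return np.concatenate([geo_angles, x_m])  # x_{t,r} = [x^g ; x^m]
\end{verbatim}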
\subsection{Hierarchical compositional model}
We propose a hierarchical compositional model that spans three semantic
levels. Figure \ref{fig:overview} shows a schematic of our model. At the
top level, our model assumes that each input video has a single complex action
label $y$. Each complex action is composed of a
temporal and spatial arrangement of atomic actions with labels $\vec{u}=[u_1,\dots,u_T]$, $u_i \in \{1,\dots,S\}$.
In turn, each atomic action consists of several non-shared \emph{actionlets}, which correspond to representative sets of pose configurations for action identification, modeling the multimodality of each atomic action.
We capture actionlet assignments in $\vec{v}=[v_1,\dots,v_T]$, $v_i \in \{1,\dots,A\}$.
Each actionlet index $v_i$ corresponds to a unique and known atomic action label $u_i$, so they are related by a mapping $\vec{u} = \vec{u}(\vec{v})$. At the
intermediate level, our model assumes that each actionlet is composed of a
temporal arrangement of a subset from $K$ body poses, encoded in $\vec{z} = [z_1,\dots,z_T]$, $z_i \in \{1,\dots,K\}$,
where $K$ is a hyperparameter of the model.
These subsets capture pose geometry and local motion, so we call them \emph{motion poselets}.
Finally, at the bottom level, our model identifies motion poselets
using a bank of linear classifiers that are applied to the incoming frame
descriptors.
We build each layer of our hierarchical model on top of BoW
representations of labels. To this end, at the bottom level of our hierarchy, and for
each body region, we learn a dictionary of motion poselets. Similarly, at the mid-level of our hierarchy, we learn a dictionary of actionlets, using the BoW representation of motion poselets as inputs. At each of these levels,
spatio-temporal activations of the respective dictionary words are used
to obtain the corresponding histogram encoding the BoW representation.
The next two sections provide
details on the process to represent and learn the dictionaries of motion
poselets and actionlets. Here we discuss our
integrated hierarchical model.
We formulate our hierarchical model using an energy function.
Given a video of $T$ frames corresponding to complex action $y$ encoded by descriptors $\vec{x}$, with the label vectors $\vec{z}$ for motion poselets,
$\vec{v}$ for actionlets and $\vec{u}$ for atomic actions, we
define an energy function for a video as:
\small
\begin{align}\label{Eq_energy}
E(\vec{x},&\vec{v},\vec{z},y) = E_{\text{motion poselets}}(\vec{z},\vec{x}) \nonumber \\&+ E_{\text{motion poselets BoW}}(\vec{v},\vec{z}) +
E_{\text{atomic actions BoW}}(\vec{u}(\vec{v}),y) \nonumber \\
& + E_{\text{motion poselets transition}}(\vec{z}) + E_{\text{actionlets
transition}}(\vec{v}).
\end{align}
\normalsize
Besides the BoW representations and motion poselet classifiers
described above, Equation (\ref{Eq_energy}) includes
two energy potentials that encode information related to
temporal
transitions between pairs of motion poselets ($E_{\text{motion poselets
transition}}$) and
actionlets ($E_{\text{actionlets transition}}$).
The energy potentials are given by:
{\small
\begin{align}
\label{eq:motionposelets}
&E_{\text{mot. poselet}}(\vec{z},\vec{x}) = \sum_{r,t} \left[ \sum_{k} {w^r_k}^\top
x_{t,r}\delta_{z_{(t,r)}}^{k} + \theta^r \delta_{z_{(t,r)}}^{K+1}\right] \\
&E_{\text{mot. poselet BoW}}(\vec{v},\vec{z}) = \sum_{r,t,a,k} {\beta^r_{a,k}}\delta_{v_{(t,r)}}^{a}\delta_{z_{(t,r)}}^{k}\\
\label{eq:actionlets_BoW}
&E_{\text{atomic act. BoW}}(\vec{u}(\vec{v}),y) =\sum_{r,t,s} {\alpha^r_{y,s}}\delta_{u(v_{(t,r)})}^{s} \\
&E_{\text{mot. pos. trans.}}(\vec{z}) =
\sum_{r,k_{+1},k'_{+1}} \eta^r_{k,k'}
\sum_{t} \delta_{z_{(t-1,r)}}^{k}\delta_{z_{(t,r)}}^{k'} \\
\label{eq:actionletstransition}
&E_{\text{actionlet trans.}}(\vec{v}) =\sum_{r,a,a'} \gamma^r_{a,a'}
\sum_{t}
\delta_{v_{(t-1,r)}}^{a}\delta_{v_{(t,r)}}^{a'}
\end{align}
}
Our goal is to
maximize $E(\vec{x},\vec{v},\vec{z},y)$, and obtain the
spatial and temporal arrangement
of motion poselets $\vec{z}$ and actionlets $\vec{v}$, as well as, the underlying
complex action $y$.
In the previous equations, we use $\delta_a^b$ to indicate the Kronecker delta function $\delta(a = b)$, and use indexes $k \in \{1,\dots,K\}$ for motion poselets, $a \in \{1,\dots,A\}$ for actionlets, and $s \in \{1,\dots,S\}$ for atomic actions.
In the energy term for motion poselets,
$w^r_k$ are a set of $K$ linear pose classifiers applied to frame
descriptors $x_{t,r}$, according to the label of the latent variable $z_{t,r}$.
Note that there is a special label $K+1$; the role of this label will be
explained in Section \ref{subsec:garbage_collector}.
In the energy potential associated to
the BoW representation for motion poselets, $\vec{\beta}^r$ denotes a set of $A$
mid-level classifiers, whose inputs are histograms of motion
poselet labels at those frames annotated with actionlet $a$. At the highest level,
$\alpha^r_{y}$ is a linear classifier associated with complex action $y$, whose
input is the histogram of atomic action labels,
which are related to actionlet assignments by the mapping function $\vec{u}(\vec{v})$. Note that all classifiers
and labels here correspond to a single region $r$. We add the contributions of all
regions to compute the global energy of the video. The transition terms act as
linear classifiers $\eta^r$ and $\gamma^r$ over histograms of temporal transitions of motion poselets
and temporal transitions of actionlets respectively. As we have a special label $K+1$ for motion poselets, the summation index
$k_{+1}$ indicates the interval $\lbrack 1,\dots,K+1 \rbrack$.
\subsection{Learning motion poselets}
In our model, motion poselets are learned by treating them as latent variables
during training. Before training, we fix the number of motion poselets per region to $K$.
In every region $r$, we learn an independent
set of pose classifiers $\{w^r_k\}_{k=1}^K$, initializing the motion poselet
labels using the $k$-means algorithm. We learn pose classifiers,
actionlets and complex actions classifiers jointly, allowing the model to discover
discriminative motion poselets useful to detect and recognize complex actions.
As shown in previous work, jointly learning linear
classifiers to identify body parts and atomic actions improves recognition
rates \cite{Lillo2014,Wang2008}, so here we follow a similar hierarchical
approach, and integrate learning
of motion poselets with the learning of actionlets.
\subsection{Learning actionlets}
\label{sec:learningactionlets}
A single linear classifier does not offer enough flexibility to identify atomic
actions that exhibit high visual variability. As an example, the atomic action
``open'' can be associated with ``opening a can'' or ``opening a
book'', displaying high variability in action execution. Consequently, we
augment our hierarchical model including multiple classifiers to
identify different modes of action execution.
Inspired by \cite{Raptis2012}, we use the \emph{Cattell's Scree test} to
find a suitable number of actionlets to model each atomic
action. Specifically, using the atomic action labels, we compute a descriptor
for every video interval using
normalized histograms of initial pose labels obtained with $k$-means. Then, for a particular atomic action
$s$, we compute the eigenvalues $\lambda(s)$ of the affinity matrix of the
atomic action descriptors, which is built using the $\chi^2$ distance. For each
atomic action
$s \in \{1,\dots,S\}$, we find the number of actionlets $G_s$ as $G_s =
\argmin_i {\lambda(s)}_{i+1}^2 / (\sum_{j=1}^i {\lambda(s)}_j) + c\cdot i$, with $c=2\cdot
10^{-3}$. Finally, we cluster the descriptors from each atomic
action $s$ running $k$-means with $k = G_s$. This scheme generates
a set of non-overlapping actionlets to model each single atomic
action. In our experiments, we notice that the number of actionlets used to
model each atomic action typically varies from 1 to 8.
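As an illustration, the scree test takes only a few lines; here we assume a Gaussian affinity built from the $\chi^2$ distances (one reasonable choice among several), with $\lambda_{i+1}$ corresponding to \texttt{lam[i]} in zero-based indexing:
\begin{verbatim}
import numpy as np

def num_actionlets(D, c=2e-3, sigma=1.0, eps=1e-12):
    # D: (N, K) normalized pose-label histograms of one atomic action
    diff = (D[:, None, :] - D[None, :, :]) ** 2
    dist = 0.5 * (diff / (D[:, None, :] + D[None, :, :] + eps)).sum(-1)
    lam = np.sort(np.linalg.eigvalsh(np.exp(-dist / sigma)))[::-1]
    costs = [lam[i] ** 2 / lam[:i].sum() + c * i
             for i in range(1, len(lam))]
    return int(np.argmin(costs)) + 1          # G_s
\end{verbatim}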
To transfer the new labels to the model, we define $u(v)$ as a function that
maps from actionlet label $v$ to the corresponding atomic action label
$u$. A dictionary of actionlets provides a richer representation for actions,
where several actionlets will map to a single atomic action. This behavior
resembles a max-pooling operation, where at inference time we will choose the
set of actionlets that best describe the performed actions in the video, keeping
the semantics of the original atomic action labels.
\subsection{A garbage collector for motion poselets}
\label{subsec:garbage_collector}
While poses are highly informative for action recognition, an input video
might contain irrelevant or idle zones, where the underlying poses are noisy
or non-discriminative to identify the actions being performed in the video. As
a result, low-scoring motion poselets could degrade the pose classifiers during
training, decreasing their performance. To deal with this problem, we include in
our model a \emph{garbage collector} mechanism for motion poselets. This
mechanism operates by assigning all low-scoring motion poselets to
the $(K+1)$-th pose dictionary entry. These collected poses, whose classifier scores fall below the learned score $\theta^r$, contribute the fixed value $\theta^r$ in Equation
(\ref{eq:motionposelets}). Our experiments show that this mechanism leads
to learning more discriminative motion poselet classifiers.
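In practice, the mechanism amounts to a thresholded assignment; a minimal sketch:
\begin{verbatim}
import numpy as np

def assign_motion_poselets(X, W, theta):
    # X: (T, D) descriptors; W: (K, D) poselet classifiers; theta: scalar
    scores = X @ W.T
    z, best = scores.argmax(axis=1), scores.max(axis=1)
    z[best < theta] = W.shape[0]              # garbage label K+1
    energy = np.where(best < theta, theta, best).sum()
    return z, energy
\end{verbatim}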
\input{learning}
\input{inference}
\subsection{Video Representation} \label{subsec:videorepresentation}
Our model is based on skeleton information encoded in joint annotations. We use the same geometric descriptor as in \cite{Lillo2014}, using angles between segments connecting two joints, and angles between these segments and a plane formed by three joints. In addition to geometry, other authors \cite{Zanfir2013,Tao2015,Wang2014} have noticed that including local motion information is beneficial to the categorization of videos. Moreover, in \cite{zhu2013fusing} the authors create a fused descriptor from spatio-temporal descriptors and joint descriptors, showing that the combination performs better than either descriptor alone. With this in mind, we augment the original geometric descriptor with motion information: when only skeleton joint data are available, we use the joint displacement vectors (velocity) as a motion descriptor. If RGB video is available, we use the HOF descriptor extracted from the trajectory of the joint over a small temporal window.
For the geometric descriptor, we use 6 segments per body region (see Fig. XXXX). The descriptor is composed of the angles between the segments (15 angles), and the angles between a plane formed by three segments and the non-coplanar segments (3 angles). For the motion descriptor, we use either the 3D velocity of every joint in each region as a concatenated vector (18 dimensions), or the concatenated HOF descriptor of the joint trajectories, transformed to a low-dimensional space using PCA (20 dimensions).
\section{introduction}
Recent discovery of Weyl semimetals (WSMs)~\cite{Lv2015TaAs,Xu2015TaAs,Yang2015TaAs} in realistic materials has stimulated tremendous research interest in topological semimetals, such as WSMs, Dirac semimetals, and nodal line semimetals~\cite{volovik2003universe,Wan2011,Balents2011,Burkov2011,Hosur2013,Vafek2014}, as a new frontier of condensed matter physics after the discovery of topological insulators~\cite{qi2011RMP, Hasan2010}.
The WSMs are of particular interest not only because of their exotic Fermi-arc-type surface states but also because of their appealing bulk chiral magneto-transport properties, such as the chiral anomaly effect~\cite{Xiong2015,Huang2015anomaly,Arnold2015}, nonlocal transport~\cite{Parameswaran2014,Baum2015}, large magnetoresistance, and high mobility~\cite{Shekhar2015}.
Currently discovered WSM materials can be classified into two groups. One group breaks crystal inversion symmetry but preserves time-reversal symmetry (e.g., TaAs-family transition-metal pnictides~\cite{Weng2015,Huang2015} and WTe$_2$- and MoTe$_2$-family transition-metal dichalcogenides~\cite{Soluyanov2015WTe2,Sun2015MoTe2,Wang2016MoTe2,Koepernik2016,Deng2016,Jiang2016}). The other group breaks time-reversal symmetry in ferromagnets with possibly tilted moments (e.g., magnetic Heusler GdPtBi~\cite{Hirschberger2016,Shekhar2016} and YbMnBi$_2$~\cite{Borisenko2015}). An antiferromagnetic (AFM) WSM compound has yet to be found, although Y$_2$Ir$_2$O$_7$ with a noncoplanar AFM structure was theoretically predicted to be a WSM candidate~\cite{Wan2011}.
In a WSM, the conduction and valence bands cross each other linearly through nodes called Weyl points. Between a pair of Weyl points with opposite chiralities (sink or source of the Berry curvature)~\cite{volovik2003universe}, the emerging Berry flux can lead to the anomalous Hall effect (AHE)~\cite{Burkov2014}, as observed in GdPtBi~\cite{Hirschberger2016,Shekhar2016}, and an intrinsic spin Hall effect (SHE), as predicted in TaAs-type materials~\cite{Sun2016}, for systems without and with time-reversal symmetry, respectively. Herein, we propose a simple recipe to search for WSM candidates among materials that host a strong AHE or SHE.
Recently, Mn$_3$X (where $\rm X=Sn$, Ge, and Ir), which exhibit noncollinear antiferromagnetic (AFM) phases at room temperature, have been found to show a large AHE~\cite{Kubler2014,Chen2014,Nakatsuji2015,Nayak2016} and SHE~\cite{Zhang2016}, prompting us to investigate their band structures. In this work, we report the existence of Weyl fermions in the Mn$_3$Ge and Mn$_3$Sn compounds and the resultant Fermi arcs on the surface by \textit{ab initio} calculations, awaiting experimental verification. Dozens of Weyl points exist near the Fermi energy in their band structure, and these can be well understood with the assistance of lattice symmetry.
\section{methods}
The electronic ground states of Mn$_3$Ge and Mn$_3$Sn were calculated by using density-functional theory (DFT) within the Perdew-Burke-Ernzerhof-type generalized-gradient approximation (GGA)~\cite{Perdew1996} using the Vienna {\it ab initio} Simulation Package (\textsc{vasp})~\cite{Kresse1996}. The $3d^6 4s^1$, $4s^24p^2$, and $5s^2 5p^2$ electrons were considered as valence electrons for the Mn, Ge, and Sn atoms, respectively. Primitive cells with the experimental lattice parameters $a=b=5.352$ and $c=4.312$~\AA\ for Mn$_3$Ge
and $a=b=5.67$ and $c=4.53$~\AA\ for Mn$_3$Sn
were adopted. Spin-orbit coupling (SOC) was included in all calculations.
To identify the Weyl points with the monopole feature, we calculated the Berry curvature distribution in momentum space.
The Berry curvature was calculated from a tight-binding Hamiltonian constructed in terms of localized Wannier functions\cite{Mostofi2008} projected from the DFT Bloch wave functions. We chose atomic-orbital-like Wannier functions, including Mn-$spd$ and Ge-$sp$/Sn-$p$ orbitals, so that the tight-binding Hamiltonian is consistent with the symmetry of the \textit{ab initio} calculations.
From such a Hamiltonian, the Berry curvature can be calculated using the Kubo-formula approach\cite{Xiao2010},
\begin{equation}
\begin{aligned}\label{equation1}
\Omega^{\gamma}_n(\vec{k})= 2i\hbar^2 \sum_{m \ne n} \dfrac{\langle u_{n}(\vec{k})|\hat{v}_{\alpha}|u_{m}(\vec{k})\rangle\langle u_{m}(\vec{k})|\hat{v}_{\beta}|u_{n}(\vec{k})\rangle}{(E_{n}(\vec{k})-E_{m}(\vec{k}))^2},
\end{aligned}
\end{equation}
where $\Omega^{\gamma}_n(\vec{k})$ is the Berry curvature in momentum space for a given band $n$,
$\hat{v}_{\alpha (\beta, \gamma)}=\frac{1}{\hbar}\frac{\partial\hat{H}}{\partial k_{\alpha (\beta, \gamma)}}$ is the velocity operator with $\alpha,\beta,\gamma=x,y,z$, and $|u_{n}(\vec{k})\rangle$ and $E_{n}(\vec{k})$ are the eigenvector and eigenvalue of the Hamiltonian $\hat{H}(\vec{k})$, respectively. The summation of $\Omega^{\gamma}_n(\vec{k})$ over all valence bands gives the Berry curvature vector $\mathbf{\Omega} ~(\Omega^x,\Omega^y,\Omega^z)$.
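For reference, Eq. (\ref{equation1}) can be evaluated numerically once $\hat{H}(\vec{k})$ is available. The sketch below (illustrative code of ours, not the production implementation) obtains the velocity matrices by finite differences and uses the equivalent antisymmetrized form $\Omega^{\gamma}_n = -2\hbar^2\,\mathrm{Im}\sum_{m\neq n}\langle n|\hat v_\alpha|m\rangle\langle m|\hat v_\beta|n\rangle/(E_n-E_m)^2$, with $\hbar=1$:
\begin{verbatim}
import numpy as np

def berry_curvature(hk, k, n_occ, alpha=0, beta=1, dk=1e-5):
    # hk: function mapping a 3-vector k to a Hermitian matrix H(k)
    k = np.asarray(k, dtype=float)
    E, U = np.linalg.eigh(hk(k))
    def vel(axis):                       # velocity by finite differences
        dq = np.zeros(3); dq[axis] = dk
        return (hk(k + dq) - hk(k - dq)) / (2 * dk)
    va = U.conj().T @ vel(alpha) @ U
    vb = U.conj().T @ vel(beta) @ U
    omega = 0.0
    for n in range(n_occ):               # sum over valence bands
        for m in range(len(E)):
            if m != n:
                omega -= 2 * np.imag(va[n, m] * vb[m, n]) / (E[n] - E[m]) ** 2
    return omega
\end{verbatim}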
In addition, the surface states that demonstrate the Fermi arcs were calculated on a semi-infinite surface, where the momentum-resolved local density of states (LDOS) on the surface layer was evaluated based on the Green's function method. We note that the surface band structure shown here corresponds to the bottom surface of the semi-infinite system.
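A standard realization of this Green's-function step (one common choice; the text above does not fix the exact algorithm) is the iterative decimation scheme of L\'opez Sancho \textit{et al.}; we sketch it below, with \texttt{h00} and \texttt{h01} denoting the intra- and inter-principal-layer blocks of the tight-binding Hamiltonian at fixed surface momentum:
\begin{verbatim}
import numpy as np

def surface_gf(h00, h01, e, eta=1e-4, tol=1e-10, max_iter=100):
    E = (e + 1j * eta) * np.eye(h00.shape[0])
    eps_s, eps = h00.copy(), h00.copy()
    a, b = h01.copy(), h01.conj().T.copy()
    for _ in range(max_iter):            # decimate pairs of layers
        g = np.linalg.inv(E - eps)
        eps_s = eps_s + a @ g @ b
        eps = eps + a @ g @ b + b @ g @ a
        a, b = a @ g @ a, b @ g @ b
        if np.abs(a).max() < tol:
            break
    return np.linalg.inv(E - eps_s)      # LDOS = -Im Tr(G_s) / pi
\end{verbatim}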
\section{Results and Discussion}
\subsection{Symmetry analysis of the antiferromagnetic structure}
Mn$_3$Ge and Mn$_3$Sn share the same layered hexagonal lattice (space group $P6_3/mmc$, No. 193).
Inside a layer, Mn atoms form a Kagome-type lattice with mixed triangles and hexagons and Ge/Sn atoms are located at the centers of these hexagons.
Each Mn atom carries a magnetic moment of 3.2 $\mu$B in Mn$_3$Sn and 2.7 $\mu$B in Mn$_3$Ge.
As revealed in a previous study~\cite{Zhang2013}, the ground magnetic state is a
noncollinear AFM state, where Mn moments align inside the $ab$ plane and form 120-degree angles with neighboring moment vectors, as shown in Fig.~\ref{stru}b. Along the $c$ axis, stacking two layers leads to the primitive unit cell.
Given the magnetic lattice, these two layers can be transformed into each other by inversion symmetry or by a mirror reflection ($M_y$) combined with a half-lattice ($c/2$) translation, i.e., the nonsymmorphic symmetry $\{M_y|\tau = c/2\}$. In addition, two other mirror reflections ($M_x$ and $M_z$) combined with time reversal ($T$), $M_x T$ and $M_z T$, exist.
In momentum space, we can utilize three important symmetries, $M_x T$, $M_z T$, and $M_y$, to understand the electronic structure and locate the Weyl points. Suppose a Weyl point with chirality $\chi$ (+ or $-$) exists at a generic position $\mathbf{k}~(k_x,k_y,k_z)$.
Mirror reflection reverses $\chi$ while time reversal does not and both of them act on $\mathbf{k}$. The transformation is as follows:
\begin{equation}
\begin{aligned}
M_x T : & ~ (k_x,k_y,k_z) \rightarrow (k_x, -k_y, -k_z); &~\chi &\rightarrow -\chi \\
M_z T : &~ (k_x,k_y,k_z) \rightarrow (-k_x, -k_y, k_z); &~ \chi &\rightarrow -\chi \\
M_y : &~ (k_x,k_y,k_z) \rightarrow (k_x, -k_y, k_z); &~ \chi &\rightarrow -\chi \\
\end{aligned}
\label{symmetry}
\end{equation}
Each of the above three operations doubles the number of Weyl points. Thus, eight nonequivalent Weyl points can be generated at $(\pm k_x,+k_y,\pm k_z)$ with chirality $\chi$ and
$(\pm k_x,-k_y,\pm k_z)$ with chirality $-\chi$ (see Fig.~\ref{stru}c). We note that the $k_x=0/\pi$ and $k_z=0/\pi$ planes can host Weyl points. However, the $k_y=0/\pi$ plane cannot host Weyl points, because $M_y$ simply reverses the chirality, and a Weyl point would annihilate with its mirror image if it existed there. Similarly, the $M_y$ mirror reflection requires that a nonzero anomalous Hall conductivity can only exist in the $xz$ plane (i.e., $\sigma_{xz}$), as already shown in Ref.~\onlinecite{Nayak2016}.
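The orbit of a Weyl point under these three operations is easily enumerated; a small illustrative sketch:
\begin{verbatim}
def weyl_partners(kx, ky, kz, chi):
    # closure of one Weyl point (kx, ky, kz, chi) under MxT, MzT, My
    ops = [lambda x, y, z, c: ( x, -y, -z, -c),   # Mx T
           lambda x, y, z, c: (-x, -y,  z, -c),   # Mz T
           lambda x, y, z, c: ( x, -y,  z, -c)]   # My
    pts, frontier = {(kx, ky, kz, chi)}, [(kx, ky, kz, chi)]
    while frontier:
        p = frontier.pop()
        for op in ops:
            q = op(*p)
            if q not in pts:
                pts.add(q); frontier.append(q)
    return pts    # 8 points for a generic position
\end{verbatim}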
In addition, the symmetry of the 120-degree AFM state is slightly broken in the materials, owing to the existence of a tiny net moment ($\sim$0.003 ~$\mu$B per unit cell)~\cite{Nakatsuji2015,Nayak2016,Zhang2013}. Such weak symmetry breaking seems to induce negligible effects in the transport measurement. However, it gives rise to a perturbation of the band structure, for example, shifting slightly the mirror image of a Weyl point from its position expected, as we will see in the surface states of Mn$_3$Ge.
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{figure1.png}
\end{center}
\caption{ Crystal and magnetic structures of Mn$_3X$ (where $\rm X = Sn$ or Ge) and related symmetry.
(a) Crystal structure of Mn$_3$X. Three mirror planes are shown in purple, corresponding to
\{$M_y|\tau=c/2$\}, $M_xT$, and $M_zT$ symmetries.
(b) Top view along the $c$ axis of the Mn sublattice. Chiral AFM with an angle of 120 degrees between neighboring magnetic moments is formed in each Mn layer.
The mirror planes that correspond to $M_xT$ and \{$M_y|\tau=c/2$\} are marked by dashed lines.
(c) Symmetry in momentum space, $M_y$, $M_xT$, and $M_zT$.
If a Weyl point appears at $(k_x,k_y,k_z)$, eight Weyl points in total can be generated at $(\pm k_x,\pm k_y,\pm k_z)$ by the above three symmetry operations. For convenience, we choose the $k_y=\pi$ plane for $M_y$ here.
}
\label{stru}
\end{figure}
\begin{table}
\caption{
Positions and energies of Weyl points in first Brillouin zone for Mn$_3$Sn.
The positions ($k_x$, $k_y$, $k_z$) are in units of $\pi$.
Energies are relative to the Fermi energy $E_F$.
Each type of Weyl point has four copies whose coordinates can be generated
from the symmetry as $(\pm k_x, \pm k_y, k_z=0)$.
}
\label{table:Mn3Sn}
\centering
\begin{tabular}{cccccc}
\toprule
\hline
Weyl point & $k_x$ & $k_y$ & $k_z$ & Chirality & Energy (meV) \\
\hline
W$_1$ & $-0.325$ & 0.405 & 0.000 & $-$ & 86 \\
W$_2$ & $-0.230$ & 0.356 & 0.003 & + & 158 \\
W$_3$ & $-0.107$ & 0.133 & 0.000 & $-$ & 493 \\
\hline
\end{tabular}
\end{table}
\begin{table}
\caption{
Positions and energies of Weyl points in the first Brillouin zone for Mn$_3$Ge.
The positions ($k_x$, $k_y$, $k_z$) are in units of $\pi$.
Energies are relative to the Fermi energy $E_F$.
Each of W$_{1,2,7}$ has four copies whose coordinates can be generated
from the symmetry as $(\pm k_x, \pm k_y, k_z=0)$.
W$_4$ has four copies at $(k_x \approx 0, \pm k_y, \pm k_z)$ and
W$_9$ has two copies at $(k_x \approx 0, \pm k_y, k_z =0)$.
Each of the other Weyl points has four copies whose coordinates can be generated
from the symmetry as $(\pm k_x, \pm k_y, \pm k_z)$.
} \label{table:Mn3Ge}
\centering
\begin{tabular}{@{}cccccc@{}}
\toprule
\hline
Weyl point & $k_x$ & $k_y$ & $k_z$ & Chirality & Energy (meV) \\
\hline
W$_1$ & $-0.333$ & 0.388 & $-0.000$ & $-$ & 57 \\
W$_2$ & 0.255 & 0.378 & $-0.000$ & + & 111 \\
W$_3$ & $-0.101$ & 0.405 & 0.097 & $-$ & 48 \\
W$_4$ & $-0.004$ & 0.419 & 0.131 & + & 8 \\
W$_5$ & $-0.048$ & 0.306 & 0.164 & + & 77 \\
W$_6$ & 0.002 & 0.314 & 0.171 & $-$ & 59 \\
W$_7$ & $-0.081$ & 0.109 & 0.000 & + & 479 \\
W$_8$ & 0.069 & $-0.128$ & 0.117 & + & 330 \\
W$_9$ & 0.004 & $-0.149$ & $-0.000$ & + & 470 \\
\hline
\end{tabular}
\end{table}
\subsection{Weyl points in the bulk band structure}
The bulk band structures are shown along high-symmetry lines in Fig.~\ref{bandstrucure} for Mn$_3$Ge and Mn$_3$Sn. It is not surprising that the two materials exhibit similar band dispersions.
At first glance, one can find two seemingly degenerate band-crossing points at the $Z$ and $K$ points, which lie below the Fermi energy. Because of $M_z T$ and the nonsymmorphic symmetry \{$M_y|\tau=c/2$\}, the bands are supposed to be quadruply degenerate at the Brillouin zone boundary $Z$, forming a Dirac point protected by the nonsymmorphic space group~\cite{Young2012,Schoop2015,Tang2016}. Given the slight mirror symmetry breaking by the residual net magnetic moment, this Dirac point is gapped at $Z$ (as shown in the enlarged panel) and splits into four Weyl points, which are very close to each other in $k$ space. A tiny gap also appears at the $K$ point, near which two additional Weyl points appear. Since the Weyl point separations are very small near both the $Z$ and $K$ points, these Weyl points may generate little observable consequence in experiments such as those studying Fermi arcs. Therefore, we will not focus on them in the following investigation.
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{figure2.png}
\end{center}
\caption{
Bulk band structures for (a) Mn$_3$Sn and (b) Mn$_3$Ge along high-symmetry lines with SOC.
The bands near the $Z$ and $K$ points (indicated by red circles) are expanded to show details in (a).
The Fermi energy is set to zero.}
\label{bandstrucure}
\end{figure}
Mn$_3$Sn and Mn$_3$Ge are actually metallic, as seen from the band structures. However, we retain the terminology of Weyl semimetal for simplicity and consistency. The valence and conduction bands cross each other many times near the Fermi energy, generating multiple pairs of Weyl points. We first investigate the Sn compound. Supposing that the total valence electron number is $N_v$, we search for the crossing points between the $N_v ^{\rm th}$ and $(N_v +1) ^{\rm th}$ bands.
As shown in Fig.~\ref{bc_Mn3Sn}a, there are six pairs of Weyl points in the first Brillouin zone; these can be classified into three groups according to their positions, noted as W$_1$, W$_2$, and W$_3$. These Weyl points lie in the $M_z$ plane (with W$_2$ points being only slightly off this plane owing to the residual-moment-induced symmetry breaking) and slightly above the Fermi energy. Therefore, there are four copies for each of them according to the symmetry analysis in Eq.~\ref{symmetry}.
Their representative coordinates and energies are listed in Table~\ref{table:Mn3Sn} and also indicated in Fig.~\ref{bc_Mn3Sn}a. A Weyl point (e.g., W$_1$ in Figs.~\ref{bc_Mn3Sn}b and ~\ref{bc_Mn3Sn}c) acts as a source or sink of the Berry curvature $\mathbf{\Omega}$, clearly showing the monopole feature with a definite chirality.
In contrast to Mn$_3$Sn, Mn$_3$Ge displays many more Weyl points. As shown in Fig.~\ref{bc_Mn3Ge}a and listed in Table~\ref{table:Mn3Ge}, there are nine groups of Weyl points. Here W$_{1,2,7,9}$ lie in the $M_z$ plane with W$_9$ on the $k_y$ axis, W$_4$ appears in the $M_x$ plane, and the others are in generic positions. Therefore, there are four copies of W$_{1,2,7,4}$, two copies of W$_9$, and eight copies of other Weyl points.
Although there are many other Weyl points at higher energies owing to different band crossings, we mainly focus on the Weyl points listed above, which are close to the Fermi energy. The monopole-like distribution of the Berry curvature near these Weyl points is verified; see W$_1$ in Fig.~\ref{bc_Mn3Ge} as an example.
Without including SOC, we observed a nodal-ring-like band crossing in the band structures of both Mn$_3$Sn and Mn$_3$Ge. SOC gaps the nodal rings but leaves isolated band-touching points, i.e., Weyl points. Since Mn$_3$Sn exhibits stronger SOC than Mn$_3$Ge, many Weyl points with opposite chirality may annihilate each other after being pushed together by the strong SOC in Mn$_3$Sn. This might be why Mn$_3$Sn exhibits fewer Weyl points than Mn$_3$Ge.
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{figure3.png}
\end{center}
\caption{Surface states of Mn$_3$Sn.
(a) Distribution of Weyl points in momentum space.
Black and white points represent Weyl points with $-$ and + chirality, respectively.
(b) and (c) Monopole-like distribution of the Berry curvature near a W$_1$ Weyl point.
(d) Fermi surface at $E_F= 86$ meV crossing the W$_1$ Weyl points.
The color represents the surface LDOS.
Two pairs of W$_1$ points are shown enlarged in the upper panels, where clear Fermi arcs exist.
(e) Surface band structure along a line connecting a pair of W$_1$ points with opposite chirality.
(f) Surface band structure along the white horizontal line indicated in (d). Here p1 and p2 are the chiral states corresponding to the Fermi arcs.
}
\label{bc_Mn3Sn}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{figure4.png}
\end{center}
\caption{ Surface states of Mn$_3$Ge.
(a) Distribution of Weyl points in momentum space.
Black and white points represent Weyl points with $-$ and $+$ chirality, respectively. Larger points indicate two Weyl points ($\pm k_z$) projected onto this plane.
(b) and (c) Monopole-like distribution of the Berry curvature near a W$_1$ Weyl point.
(d) Fermi surface at $E_F= 55$ meV crossing the W$_1$ Weyl points.
The color represents the surface LDOS.
Two pairs of W$_1$ points are shown enlarged in the upper panels, where clear Fermi arcs exist.
(e) Surface band structure along a line connecting a pair of W$_1$ points with opposite chirality.
(f) Surface band structure along the white horizontal line indicated in (d). Here p1 and p2 are the chiral states corresponding to the Fermi arcs.
}
\label{bc_Mn3Ge}
\end{figure}
\subsection{Fermi arcs on the surface}
The existence of Fermi arcs on the surface is one of the most significant consequences of Weyl points inside the three-dimensional (3D) bulk. We first investigate the surface states of Mn$_3$Sn, which has a simpler bulk band structure with fewer Weyl points. When the W$_{2,3}$ Weyl points are projected onto the (001) surface, they overlap with other bulk bands that overwhelm the surface states. Fortunately, the W$_1$ Weyl points are visible on the Fermi surface. When the Fermi energy crosses them, the W$_1$ Weyl points appear as touching points of neighboring hole and electron pockets. Therefore, they are typical type-II Weyl points~\cite{Soluyanov2015WTe2}. Indeed, their energy dispersions exhibit strongly tilted Weyl cones.
The Fermi surface of the surface band structure is shown in Fig.~\ref{bc_Mn3Sn}d for the Sn compound. In each corner of the surface Brillouin zone, a pair of W$_1$ Weyl points exists with opposite chirality. Connecting such a pair of Weyl points, a long Fermi arc appears in both the Fermi surface (Fig.~\ref{bc_Mn3Sn}d) and the band structure (Fig.~\ref{bc_Mn3Sn}e). Although the projection of the bulk bands exhibits the pseudo-symmetry of a hexagonal lattice, the surface Fermi arcs do not. It is clear that the Fermi arcs originating from two neighboring Weyl pairs (see Fig.~\ref{bc_Mn3Sn}d) do not respect $M_x$ reflection, because the chirality of the Weyl points apparently violates $M_x$ symmetry. For a generic $k_x$--$k_z$ plane between each pair of W$_1$ Weyl points, the net Berry flux points in the $-k_y$ direction. As a consequence, the Fermi velocities of both Fermi arcs point in the $+k_x$ direction on the bottom surface (see Fig.~\ref{bc_Mn3Sn}f). These two right movers are consistent with the nonzero net Berry flux, i.e., a Chern number of 2.
For Mn$_3$Ge, we also focus on the W$_1$-type Weyl points at the corners of the hexagonal Brillouin zone. In contrast to Mn$_3$Sn, Mn$_3$Ge exhibits a more complicated Fermi surface. Fermi arcs exist that connect a pair of W$_1$-type Weyl points with opposite chirality, but they are divided into three pieces, as shown in Fig.~\ref{bc_Mn3Ge}d. In the band structures (see Figs.~\ref{bc_Mn3Ge}e and f), these three pieces are indeed connected as a single surface state. Crossing a line between two pairs of W$_1$ points, one can find two right movers in the band structure, indicated as p1 and p2 in Fig.~\ref{bc_Mn3Ge}f. The existence of two chiral surface bands is consistent with a nontrivial Chern number between these two pairs of Weyl points.
\section{Summary}
In summary, we have discovered the Weyl semimetal state in the chiral AFM compounds Mn$_3$Sn and Mn$_3$Ge by {\it ab~initio} band structure calculations.
Multiple Weyl points were observed in the bulk band structures, most of which are type II.
The positions and chirality of Weyl points are in accordance with the symmetry of the magnetic lattice.
For both compounds, Fermi arcs were found on the surface, each of which connects a pair of Weyl points with opposite chirality, calling for further experimental investigations such as angle-resolved photoemission spectroscopy.
The discovery of Weyl points corroborates the large anomalous Hall conductivity recently observed in the title compounds.
Our work further reveals a guiding principle to search for Weyl semimetals among materials
that exhibit a strong anomalous Hall effect.
\begin{acknowledgments}
We thank Claudia Felser, J{\"u}rgen K{\"u}bler and Ajaya K. Nayak for helpful discussions.
We acknowledge the Max Planck Computing and Data Facility (MPCDF) and Shanghai Supercomputer Center for computational resources and the German Research Foundation (DFG) SFB-1143 for financial support.
\end{acknowledgments}
\section{Introduction}
Conformal invariance was first recognised to be of physical interest when it was realized that the Maxwell equations are covariant under the $15$-dimensional conformal group \cite{Cu,Bat}, a fact that motivated a more detailed analysis of conformal invariance in other physical contexts such as General Relativity, Quantum Mechanics or high energy physics \cite{Ful}. These applications further suggested to study conformal invariance in connection with the physically-relevant groups, among which the Poincar\'e and Galilei groups were the first to be considered. In this context, conformal extensions of the Galilei group have been considered in Galilei-invariant field theories, in the study of possible dynamics of interacting particles as well as in the nonrelativistic AdS/CFT correspondence
\cite{Bar54,Hag,Hav,Zak,Fig}. Special cases as the (centrally extended) Schr\"odinger algebra $\widehat{\mathcal{S}}(n)$ corresponding to the maximal invariance group of the
free Schr\"odinger equation have been studied in detail by various authors, motivated by different applications such as the kinematical invariance of hierarchies of partial differential equations, Appell systems, quantum groups or representation theory \cite{Ni72,Ni73,Do97,Fra}. The class of Schr\"odinger algebras can be generalized in natural manner to the so-called conformal Galilei algebras $\mathfrak{g}_{\ell}(d)$ for (half-integer) values $\ell\geq \frac{1}{2}$,
also corresponding to semidirect products of the semisimple Lie algebra $\mathfrak{sl}(2,\mathbb{R})\oplus\mathfrak{so}(d)$ with a Heisenberg algebra but with a higher-dimensional characteristic representation.\footnote{By characteristic representation we mean the representation of $\mathfrak{sl}(2,\mathbb{R})\oplus\mathfrak{so}(d)$ that describes the action on the Heisenberg algebra.} Such algebras, which can be interpreted as a nonrelativistic analogue of the conformal algebra, have been used in a variety of contexts, ranging from classical (nonrelativistic) mechanics, electrodynamics and fluid dynamics to higher-order Lagrangian mechanics \cite{Ai12,Tac,Du11,St13}.
The algebraic structure of the conformal Galilei algebra $\mathfrak{g}_{\ell}(d)$ for values of $\ell\geq \frac{3}{2}$ and its representations have been analyzed in some detail, and algorithmic procedures to compute their Casimir operators have been proposed (see e.g. \cite{Als17,Als19} and references therein). In the recent note \cite{raub}, a synthetic formula for the Casimir operators of the $\mathfrak{g}_{\ell}(d)$ algebra has been given. Although not cited explicitly, the
procedure used there corresponds to the so-called ``virtual-copy'' method, a technique known for some years that makes it possible to compute the Casimir operators of a Lie algebra from those of its maximal semisimple subalgebra (\cite{Que,C23,C45,SL3} and references therein).
\medskip
\noindent
In this work, we first propose a further generalization of the conformal Galilei algebras $\mathfrak{g}_{\ell}(d)$, replacing the $\mathfrak{sl}(2,\mathbb{R})\oplus\mathfrak{so}(d)$ subalgebra of the latter by the semisimple Lie algebra $\mathfrak{sl}(2,\mathbb{R})\oplus\mathfrak{so}(p,q)$. As the defining representation $\rho_d$ of $\mathfrak{so}(p,q)$ is real for all values $p+q=d$ \cite{Tits}, the structure of a semidirect product with a Heisenberg Lie algebra remains unaltered. The Lie algebras $\mathfrak{Gal}_{\ell}(p,q)$ describe a class of semidirect products of semisimple and Heisenberg Lie algebras among which $\mathfrak{g}_{\ell}(d)$ corresponds to the case with the largest maximal compact subalgebra.
Using the method developed in \cite{C45}, we construct a virtual copy of $\mathfrak{sl}(2,\mathbb{R})\oplus\mathfrak{so}(p,q)$ in the enveloping algebra of $\mathfrak{Gal}_{\ell}(p,q)$ for all half-integer values of $\ell$ and any $d=p+q\geq 3$. The Casimir operators of these Lie algebras are determined by combining the analytical and the matrix trace methods, showing how to compute them explicitly in terms of the determinant of a polynomial matrix.
\medskip
\noindent We further determine the exact number of Casimir operators for the unextended Lie algebras $\overline{\mathfrak{Gal}}_{\ell}(p,q)$ obtained by factorizing
$\mathfrak{Gal}_{\ell}(p,q)$ by its centre. Using the reformulation of the Beltrametti-Blasi formula in terms of the Maurer-Cartan equations, we show that, albeit the number $\mathcal{N}$ of invariants increases considerably for fixed $\ell$ and varying $d$, a generic polynomial formula at most quadratic in $\ell$ and $d$ that gives the exact value of $\mathcal{N}$ can be established. Depending on whether the relation $d\leq 2\ell+2$ is satisfied, it is shown that $\overline{\mathfrak{Gal}}_{\ell}(p,q)$ admits a complete set of invariants formed by operators that do not depend on the generators of the Levi subalgebra. An algorithmic procedure to compute these invariants by means of a reduction to a linear system is proposed.
\section{Maurer-Cartan equations of Lie algebras and Casimir operators }
Given a Lie algebra $ \frak{g}=\left\{X_{1},..,X_{n}\; |\;
\left[X_{i},X_{j}\right]=C_{ij}^{k}X_{k}\right\}$ in terms of
generators and commutation relations, we are principally interested
in (polynomial) operators
$C_{p}=\alpha^{i_{1}..i_{p}}X_{i_{1}}..X_{i_{p}}$ in the
generators of $\frak{g}$ such that the constraint $
\left[X_{i},C_{p}\right]=0$,\; ($i=1,..,n$) is satisfied. Such an
operator can be shown to lie in the centre of the enveloping
algebra of $\frak{g}$ and is called a (generalized) Casimir
operator. For semisimple Lie algebras, the determination of
Casimir operators can be done using structural properties
\cite{Ra,Gel}. However, for non-semisimple Lie algebras the relevant
invariant functions are often rational or even transcendental
functions \cite{Bo1,Bo2}. This suggests developing a method that
covers arbitrary Lie algebras. One convenient approach is the
analytical realization. The generators of the Lie algebra
$\frak{g}$ are realized in the space $C^{\infty }\left(
\frak{g}^{\ast }\right) $ by means of the differential operators:
\begin{equation}
\widehat{X}_{i}=C_{ij}^{k}x_{k}\frac{\partial }{\partial x_{j}},
\label{Rep1}
\end{equation}
where $\left\{ x_{1},..,x_{n}\right\}$ are the coordinates in a dual basis of
$\left\{X_{1},..,X_{n}\right\} $. The invariants of $\frak{g}$ hence correspond to solutions of the following
system of partial differential equations:
\begin{equation}
\widehat{X}_{i}F=0,\quad 1\leq i\leq n. \label{sys}
\end{equation}
Whenever we have a polynomial solution of (\ref{sys}), the
symmetrization map defined by
\begin{equation}
{\rm Sym(}x_{i_{1}}^{a_{1}}..x_{i_{p}}^{a_{p}})=\frac{1}{p!}\sum_{\sigma\in
S_{p}}x_{\sigma(i_{1})}^{a_{1}}..x_{\sigma(i_{p})}^{a_{p}}\label{syma}
\end{equation}
allows us to rewrite the Casimir operators in their usual form
as central elements in the enveloping algebra of $\frak{g}$,
after replacing the variables $x_{i}$ by the corresponding
generator $X_{i}$. A maximal set of functionally
independent invariants is usually called a fundamental basis. The
number $\mathcal{N}(\frak{g})$ of functionally independent
solutions of (\ref{sys}) is obtained from the classical criteria
for differential equations, and is given by the formula
\begin{equation}
\mathcal{N}(\frak{g}):=\dim \,\frak{g}- {\rm
sup}_{x_{1},..,x_{n}}{\rm rank}\left( C_{ij}^{k}x_{k}\right),
\label{BB}
\end{equation}
where $A(\frak{g}):=\left(C_{ij}^{k}x_{k}\right)$ is the matrix
associated to the commutator table of $\frak{g}$ over the given
basis \cite{Be}.\newline
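\medskip
\noindent As a computational illustration (ours, not part of the original derivation), formula (\ref{BB}) can be evaluated mechanically with a computer algebra system. The following SymPy sketch computes $\mathcal{N}(\frak{g})$ for $\frak{g}=\frak{sl}(2,\mathbb{R})$ over the basis $\{D,H,C\}$ with $[D,H]=2H$, $[D,C]=-2C$, $[C,H]=D$; the single invariant is the (unsymmetrized) quadratic Casimir $d^2-4ch$.
\begin{verbatim}
# Sketch: Beltrametti-Blasi formula (BB) for sl(2,R); basis {D, H, C}.
import sympy as sp

d, h, c = sp.symbols('d h c')        # coordinates in the dual basis
# A(g)_{ij} = C_{ij}^k x_k over the ordering (D, H, C)
A = sp.Matrix([[0,    2*h, -2*c],    # [D,H] = 2H,  [D,C] = -2C
               [-2*h, 0,   -d  ],    # [H,C] = -D
               [2*c,  d,    0  ]])
print(A.rows - A.rank())             # N(g) = dim g - sup rank A(g) = 1
\end{verbatim}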
The reformulation of condition (\ref{BB}) in terms of differential forms (see e.g. \cite{C43})
makes it possible to compute $\mathcal{N}(\frak{g})$ quite efficiently and,
under special circumstances, even to obtain the Casimir
operators \cite{Peci,C72}. In terms of the
Maurer-Cartan equations, the Lie algebra $\frak{g}$
is described as follows: If $\left\{ C_{ij}
^{k}\right\} $ denotes the structure tensor over the basis $\left\{ X_{1},..,X_{n}\right\} $,
the identification of the dual space $\frak{g}^{\ast}$ with the
left-invariant 1-forms on the simply connected Lie group whose Lie algebra is isomorphic to $\frak{g}$ allows one to define an exterior
differential $d$ on $\frak{g}^{\ast}$ by
\begin{equation}
d\omega\left( X_{i},X_{j}\right) =-C_{ij}^{k}\omega\left(
X_{k}\right) ,\;\omega\in\frak{g}^{\ast}.\label{MCG}
\end{equation}
Using the coboundary operator $d$, we rewrite $\frak{g}$ as a
closed system of $2$-forms%
\begin{equation}
d\omega_{k}=-C_{ij}^{k}\omega_{i}\wedge\omega_{j},\;1\leq
i<j\leq\dim\left( \frak{g}\right) ,\label{MC2}
\end{equation}
called the Maurer-Cartan equations of $\frak{g}$.
In order to reformulate equation (\ref{BB}) in this context, we consider the linear subspace
$\mathcal{L}(\frak{g})=\mathbb{R}\left\{ d\omega_{i}\right\}
_{1\leq i\leq \dim\frak{g}}$ of $\bigwedge^{2}\frak{g}^{\ast}$
generated by the $2$-forms $d\omega_{i}$. Now, for
a generic element $\omega=a^{i}d\omega_{i}\,\;\left(
a^{i}\in\mathbb{R}\right) $ of $\mathcal{L}(\frak{g})$ there
exists a positive integer $j_{0}\left( \omega\right)
\in\mathbb{N}$ such that $\bigwedge^{j_{0}\left( \omega\right)
}\omega\neq0$ and $\bigwedge ^{j_{0}\left( \omega\right)
+1}\omega\equiv0$. We define the scalar $j_{0}\left(
\frak{g}\right) $ as the maximal rank of generic elements,
\begin{equation}
j_{0}\left( \frak{g}\right) =\max\left\{ j_{0}\left(
\omega\right) \;|\;\omega\in\mathcal{L}(\frak{g})\right\}.
\label{MCa1}
\end{equation}
As shown in \cite{C43}, this is a scalar invariant of the Lie algebra $\frak{g}$ that
satisfies the relation
\begin{equation}
\mathcal{N}\left( \frak{g}\right) =\dim\frak{g}-2j_{0}\left( \frak{g}%
\right). \label{BB1}
\end{equation}
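\medskip
\noindent In practice, $j_{0}\left(\frak{g}\right)$ can be read off from matrix ranks: a generic element $\omega=a^{i}d\omega_{i}$ has the antisymmetric coefficient matrix $M_{jk}=\omega\left(X_j,X_k\right)=-a^{i}C_{jk}^{i}$, and $\bigwedge^{j}\omega\neq 0$ precisely when ${\rm rank}\,M\geq 2j$, so that $j_{0}\left(\frak{g}\right)=\frac{1}{2}\max {\rm rank}\,M$, in agreement with (\ref{BB}) and (\ref{BB1}). A minimal sketch (ours), continuing the $\frak{sl}(2,\mathbb{R})$ example above:
\begin{verbatim}
# Sketch: j0(g) from the rank of a generic element of L(g), for sl(2,R).
import sympy as sp

aD, aH, aC = sp.symbols('aD aH aC')  # generic coefficients a^i
M = sp.Matrix([[0,     -2*aH, 2*aC],
               [2*aH,   0,    aD  ],
               [-2*aC, -aD,   0   ]])
j0 = M.rank() // 2                   # generic rank 2, hence j0 = 1
print(3 - 2*j0)                      # N(sl(2,R)) = 1, as before
\end{verbatim}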
\medskip
\section{Virtual copies of semisimple Lie algebras}
\noindent The method of virtual copies, essentially developed in \cite{SL3}, constitutes a natural generalization
of a method due to Ch. Quesne (see \cite{Que}) that combines the boson formalism and enveloping algebras of Lie algebras in
order to compute Casimir operators of semidirect products
$\frak{s}\overrightarrow {\frak{\oplus}}_{R}\frak{r}$ of simple Lie algebras $\frak{s}$ and solvable algebras $\mathfrak{r}$.
\medskip
\noindent We briefly recall the procedure, the details of which can be found in \cite{SL3}: Let $\frak{g}$ be
a non-semisimple Lie algebra admitting the Levi decomposition
$\frak{g}=\frak{s}\overrightarrow{\frak{\oplus}}_{\Gamma}\frak{r}$,
where $\frak{s}$ denotes the Levi subalgebra, $\Gamma$ the characteristic representation and $\frak{r}$ the
radical, i.e., the maximal solvable
ideal of $\frak{g}$. Let $\left\{
X_{1},..,X_{n},Y_{1},..,Y_{m}\right\} $ be a basis such that
$\left\{ X_{1},..,X_{n}\right\} $ spans $\frak{s}$ and $\left\{
Y_{1},..,Y_{m}\right\} $ spans $\frak{r}$. We further suppose
that the structure tensor in $\frak{s}$ is given by
\begin{equation}
\left[ X_{i},X_{j}\right] =C_{ij}^{k}X_{k}.\label{ST}
\end{equation}
We now define operators $X_{i}^{\prime}$ in the enveloping algebra
of $\frak{g}$ by means of
\begin{equation}
X_{i}^{\prime}=X_{i}\,f\left( Y_{1},..,Y_{m}\right) +P_{i}\left(
Y_{1},..,Y_{m}\right) ,\label{OP1}%
\end{equation}
where $P_{i}$ is a homogeneous polynomial of degree $k$ and $f$ is
homogeneous of degree $k-1$. We require
the constraints
\begin{eqnarray}
\left[ X_{i}^{\prime},Y_{k}\right] & =0,\label{Bed1}\\
\left[ X_{i}^{\prime},X_{j}\right] & =\left[
X_{i},X_{j}\right] ^{\prime}:=C_{ij}^{k}\left(
X_{k}f+P_{k}\right).\label{Bed2}
\end{eqnarray}
to be satisfied for all generators. This leads to
conditions on $f$ and $P_{i}$. It can be shown that condition (\ref{Bed1}) leads to
\begin{equation}
\left[ X_{i}^{\prime},Y_{j}\right] =\left[ X_{i}f,Y_{j}\right] +\left[ P_{i}%
,Y_{j}\right] =X_{i}\left[ f,Y_{j}\right] +\left[
X_{i},Y_{j}\right]
\,f+\left[ P_{i},Y_{j}\right] .\label{Eq1}%
\end{equation}
By homogeneity, we can reorder the terms according to
their degree, so that $X_{i}\left[ f,Y_{j}\right] $ is
homogeneous of degree $k-1$ in the variables $\left\{
Y_{1},..,Y_{m}\right\} $ and $\left[ X_{i},Y_{j}\right]
\,f+\left[ P_{i},Y_{j}\right] $ of degree $k$. Hence the conditions
\begin{eqnarray}
\left[ f,Y_{j}\right] =0,\;
\left[ X_{i},Y_{j}\right] \,f+\left[ P_{i},Y_{j}\right]
=0\label{Eq1A}
\end{eqnarray}
are satisfied, showing that $f$ is a Casimir operator
of the radical $\frak{r}$. Expanding the condition (\ref{Bed2}) and taking into
account the homogeneity degrees, after a routine computation we find that the system
\begin{eqnarray}
\left[ X_{i},X_{j}\right] \,f-X_{i}\left[ X_{j},f\right] =C_{ij}
^{k}X_{k}f,\quad
\left[ P_{i},X_{j}\right] =C_{ij}^{k}P_{k}\label{Eq3}
\end{eqnarray}
is satisfied for any indices $i,j$. Using now
(\ref{ST}), the first identity reduces to
\begin{equation}
X_{i}\left[ X_{j},f\right] =0.
\end{equation}
From this we conclude that the function $f$ is a Casimir operator of $\frak{g}$ that depends
only on the variables of the radical $\frak{r}$. The second
identity in (\ref{Eq3}) implies that $P_{i}$ transforms under the
$X_{j}^{\prime}s$ like a generator of the semisimple part
$\frak{s}$. Taken together, it follows that the operators
$X_{i}^{\prime}$ fulfill the condition
\begin{eqnarray}
\left[ X_{i}^{\prime},X_{j}^{\prime}\right] & =f\left[ X_{i},X_{j}\right] ^{\prime}.
\end{eqnarray}
We shall say that the operators $X_{i}^{\prime}$
generate a virtual copy of $\frak{s}$ in the enveloping algebra of
$\frak{g}$. If $f$ can be
identified with a central element of $\mathfrak{g}$, as happens for a radical isomorphic to a Heisenberg
algebra, the virtual copy actually generates a copy in
$\mathcal{U}\left( \frak{g}\right) $ \cite{Que,C23}. The computation of the invariants of $\mathfrak{g}$ reduces to the application of the following result, proved in \cite{SL3}:
\begin{theorem}
Let $\frak{s}$ be the Levi subalgebra of $\frak{g}$ and
let $X_{i}^{\prime}=X_{i}\,f\left( \mathbf{Y}\right)
+P_{i}\left(\mathbf{Y}\right) $ be homogeneous polynomials in the
generators of $\frak{g}$ satisfying equations (\ref{Eq1A}) and
(\ref{Eq3}). If $C=\sum\alpha ^{i_{1}..i_{p}}X_{i_{1}}..X_{i_{p}}$
is a Casimir operator of $\frak{s}$ having degree $p$, then
$C^{\prime}=\sum\alpha^{i_{1}..i_{p}}X_{i_{1}}^{\prime
}..X_{i_{p}}^{\prime}$ is a Casimir operator of $\frak{g}$ of
degree $(\deg f+1)p$. In particular, $\mathcal{N}\left( \frak{g}\right)
\geq\mathcal{N}\left( \frak{s}\right) +1.$
\end{theorem}
\medskip
\noindent
The independence of the invariants obtained in such manner follows at once from the
conditions (\ref{Bed1}) and (\ref{Bed2}). For the particular case of a radical isomorphic to a
Heisenberg Lie algebra, it follows that the number of non-central invariants is given by the rank
of the semisimple part, i.e., $\mathcal{N}\left( \frak{g}\right)
=\mathcal{N}\left( \frak{s}\right) +1$ (see \cite{C45} for a proof).
\section{The conformal generalized pseudo-Galilean Lie algebra $\mathfrak{Gal}_{\ell}(p,q)$ }
\noindent Structurally, the conformal Galilean algebra $\mathfrak{\widehat{g}}_\ell(d)$ is a semidirect product of the semisimple Lie algebra $\mathfrak{s}=\mathfrak{sl}(2,\mathbb{R})\oplus \mathfrak{so}(d)$ and a Heisenberg Lie algebra of dimension $N= d(2\ell+1)+1$. The action of $\mathfrak{s}$ over the radical is given by the characteristic representation
$\widehat{\Gamma}=\left(D_{\ell}\otimes \rho_d\right)\oplus \Gamma_0$, where $D_{\ell}$ denotes the irreducible representation of $\mathfrak{sl}(2,\mathbb{R})$ with highest weight $2\ell$ and dimension $2\ell+1$, $\rho_d$ is the defining $d$-dimensional representation of $\mathfrak{so}(d)$ and $\Gamma_0$ denotes the trivial representation.
\noindent
Considering the basis (see e.g. \cite{Als17}) given by the generators $\left\{ H, D, C, E_{ij}=-E_{ji}, P_{n,i}\right\}$
with $n = 0,1, 2, \ldots, 2\ell; \, i,j =1, 2, \ldots, d$,
the commutators are
\begin{eqnarray}
\fl [D,H]= 2H,\quad [D,C]= -2C,\quad [C, H]=D, \nonumber\\
\fl[H,P_{n,i}]=-nP_ {n-1,i},\,\,[D,P_{n,i}]=2(\ell - n)P_ {n,i}, \,\,[C,P_{n,i}]=(2\ell - n)P_{n+1,i},\nonumber \\
\fl[E_{ij}, P_{n,k} ]=\delta_{ik}P_{n,j} - \delta_{jk}P_{n,i}, \,\,
[E_{ij}, E_{k\ell}] =\delta_{ik}E_{j\ell} + \delta_{j \ell}E_{ik} - \delta_{i \ell}E_{jk} - \delta_{jk}E_{i \ell} \nonumber\\
\fl \left[ P_{m,i},P_{n,j}\right]= \delta _{ij} \delta _{m+n,2\ell} I_{m} M ,\quad\qquad I_{m}= (-1)^{m+\ell+1/2} (2\ell -m)!\, m! \label{CG3}
\end{eqnarray}
\noindent The invariants can be deduced from the Casimir operators of the semisimple subalgebra of $\mathfrak{g}$ by replacing the generators by expressions of the type
(\ref{OP1}) that generate a virtual copy of $\mathfrak{s}$. For the case of the conformal generalized Galilean algebra $\widehat{\mathfrak{g}}_{\ell}(d)$, these invariants have recently been given implicitly in \cite{raub}, essentially applying this method, although in the nomenclature used there the procedure is referred to as a ``disentanglement'' of the generators.
\subsection{The conformal generalized pseudo-Galilean algebra}
\noindent Introducing a non-degenerate metric tensor of signature $(p,q)$, the structure of conformal Galilean algebras can be easily extended to the pseudo-orthogonal Lie algebras $\mathfrak{so}(p,q)$ ($p+q=d$) along the same lines.
The pseudo-orthogonal algebra $\frak{so}(p,q)$ with
$d=p+q$ is given by the $\frac{1}{2}d(d-1)$ operators
$E_{\mu\nu}=-E_{\nu\mu}$ satisfying:
\begin{eqnarray*}
\left[ E_{\mu \nu },E_{\lambda \sigma }\right] &=&g_{\mu \lambda
}E_{\nu \sigma }+g_{\mu \sigma }E_{\lambda \nu }-g_{\nu \lambda
}E_{\mu \sigma
}-g_{\nu \sigma }E_{\lambda \mu } \\
\left[ E_{\mu \nu },P_{\rho }\right] &=&g_{\mu \rho }P_{\nu
}-g_{\nu \rho }P_{\mu },
\end{eqnarray*}
where $g={\rm diag}\left( 1,..,1,-1,..,-1\right)$ is the matrix of the non-degenerate metric. Let $\rho_1$ be the $d$-dimensional defining representation of $\frak{so}(p,q)$ and define the tensor product $\Gamma=D_{\ell}\otimes \rho_1$ for $\ell\in\mathbb{Z}+\frac{1}{2}$. Then $\Gamma$ is an irreducible representation of the semisimple Lie algebra $\mathfrak{sl}(2,\mathbb{R})\oplus \mathfrak{so}(p,q)$ that satisfies the condition $\Gamma_0\subset \Gamma\wedge \Gamma $, i.e., the wedge product of $\Gamma$ contains a copy of the trivial representation. Following the characterization given in \cite{C45}, this implies that the Lie algebra $\left(\mathfrak{sl}(2,\mathbb{R})\oplus \mathfrak{so}(p,q)\right)\overrightarrow{\oplus}_{\Gamma\oplus\Gamma_0}\mathfrak{h}_{N}$
with $N=d(2\ell+1)$ is well defined. Over the basis $\left\{ H, D, C, E_{ij}=-E_{ji}, P_{n,i},M\right\}$ with $0\leq n \leq 2\ell$ and $1\leq i<j\leq p+q$, the brackets are given by
\begin{eqnarray}
[D,H]= 2H,\quad [D,C]= -2C,\quad [C, H]=D, \nonumber\\
\,\,[H,P_{n,i}]=-nP_{n-1,i},\,\,[D,P_{n,i}]=2(\ell - n)P_{n,i}, \,\,[C,P_{n,i}]=(2\ell - n)P_{n+1,i},\label{CG3b} \\
\,\,[E_{ij}, P_{n,k} ]=g_{ik}P_{n,j} - g_{jk}P_{n,i}, \,\,
[E_{ij}, E_{k\ell}] =g_{ik}E_{j\ell} + g_{j \ell}E_{ik} - g_{i \ell}E_{jk} - g_{jk}E_{i \ell}, \nonumber\\
\left[ P_{n,k},P_{m,l}\right]= g_{kl}\, \delta _{m+n,2\ell}\, I_{n} M ,\quad\qquad I_{n}= (-1)^{n+\ell+1/2} (2\ell -n)!\, n!.\nonumber
\end{eqnarray}
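\medskip
\noindent That the brackets above indeed close into a Lie algebra can be double-checked mechanically in small cases. The following sketch (ours, not part of the original analysis; it assumes the lowest case $\ell=\frac{1}{2}$, $d=3$ with metric ${\rm diag}(1,1,-1)$, i.e. $\mathfrak{Gal}_{\frac{1}{2}}(2,1)$) encodes the structure constants and verifies the Jacobi identity on all triples of basis elements:
\begin{verbatim}
# Sketch: numerical Jacobi test for Gal_{1/2}(2,1); brackets as in the text.
from fractions import Fraction
from math import factorial
from itertools import combinations

ell = Fraction(1, 2)                   # assumed smallest half-integer case
d, g = 3, {1: 1, 2: 1, 3: -1}          # metric g_{ii}, signature (2,1)
N2 = int(2 * ell)

def I(m):                              # I_m = (-1)^{m+l+1/2} (2l-m)! m!
    return (-1) ** int(m + ell + Fraction(1, 2)) \
        * factorial(N2 - m) * factorial(m)

def E(i, j):                           # E_{ji} = -E_{ij}, E_{ii} = 0
    if i == j: return {}
    return {('E', i, j): 1} if i < j else {('E', j, i): -1}

def acc(out, dic, c=1):                # out += c * dic
    for k, v in dic.items(): out[k] = out.get(k, 0) + c * v
    return out

def bracket(x, y):                     # [x, y] as {basis label: coefficient}
    if x == 'M' or y == 'M': return {}           # M is central
    sl2 = {('D','H'): {'H': 2}, ('D','C'): {'C': -2}, ('C','H'): {'D': 1}}
    if (x, y) in sl2: return dict(sl2[(x, y)])
    if (y, x) in sl2: return {k: -v for k, v in sl2[(y, x)].items()}
    tx = x[0] if isinstance(x, tuple) else x
    ty = y[0] if isinstance(y, tuple) else y
    if 'E' in (tx, ty) and (tx in 'HDC' or ty in 'HDC'):
        return {}                                # sl(2,R) commutes with so(p,q)
    if tx in 'HDC' and ty == 'P':
        n, i = y[1], y[2]; out = {}
        if tx == 'H' and n > 0:  out[('P', n - 1, i)] = -n
        if tx == 'D':            out[('P', n, i)] = 2 * (ell - n)
        if tx == 'C' and n < N2: out[('P', n + 1, i)] = N2 - n
        return out
    if tx == 'E' and ty == 'P':        # [E_ij, P_nk] = g_ik P_nj - g_jk P_ni
        (_, i, j), (_, n, k) = x, y; out = {}
        if i == k: acc(out, {('P', n, j): 1}, g[i])
        if j == k: acc(out, {('P', n, i): 1}, -g[j])
        return out
    if tx == 'E' and ty == 'E':
        (_, i, j), (_, k, l) = x, y; out = {}
        if i == k: acc(out, E(j, l), g[i])
        if j == l: acc(out, E(i, k), g[j])
        if i == l: acc(out, E(j, k), -g[i])
        if j == k: acc(out, E(i, l), -g[j])
        return out
    if tx == 'P' and ty == 'P':        # [P_ni, P_mj] = g_ij d_{n+m,2l} I_n M
        (_, n, i), (_, m, j) = x, y
        return {'M': g[i] * I(n)} if (i == j and n + m == N2) else {}
    return {k: -v for k, v in bracket(y, x).items()}

basis = (['H', 'D', 'C', 'M']
         + [('E', i, j) for i in range(1, d + 1) for j in range(i + 1, d + 1)]
         + [('P', n, i) for n in range(N2 + 1) for i in range(1, d + 1)])

def ad(x, dic):                        # [x, sum_k c_k X_k]
    out = {}
    for k, v in dic.items(): acc(out, bracket(x, k), v)
    return out

for x, y, z in combinations(basis, 3):
    jac = acc(acc(ad(x, bracket(y, z)), ad(y, bracket(z, x))),
              ad(z, bracket(x, y)))
    assert all(v == 0 for v in jac.values()), (x, y, z)
print("Jacobi identity verified for all", len(basis), "generators")
\end{verbatim}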
\noindent As commented above, the number of Casimir operators is given by $2+\left[\frac{d}{2}\right]$ and can be deduced in closed form by
means of the virtual copy method.
\begin{proposition}
For any $\ell\in\mathbb{Z}+\frac{1}{2}\geq \frac{1}{2}$, the operators%
\begin{eqnarray}
\fl \widetilde{D}=D\,M+\sum_{i=1}^{d}\sum_{s=0}^{q}\left( -1\right) ^{s+q-1}\frac{\mu
^{1}\left( s,q\right)}{g_{ii}} P_{s,i}P_{2l-s,i},\nonumber \\
\fl \widetilde{H}=H\,M+\sum_{i=1}^{d}\sum_{s=0}^{q-1}\left( -1\right) ^{s+q-1}\frac{\mu
^{2}\left( s,q\right)}{g_{ii}} P_{s,i}P_{2l-1-s,i}-\sum_{i=1}^{d}\frac{1}{2\Gamma(q+1)^2g_{ii}} P_{q,i}^2,\nonumber \\
\fl\widetilde{C}=C\,M+\sum_{i=1}^{d}\sum_{s=0}^{q}\left( -1\right) ^{s+q}\frac{\mu
^{3}\left( s,q\right)}{g_{ii}} P_{s,i}P_{2l+1-s,i}-\sum_{i=1}^{d}\frac{1}{2\Gamma(q+1)^2g_{ii}} P_{q+1,i}^2,\nonumber \\
\fl \widetilde{E}_{i,j}=M E_{i,j}+ \sum_{s=0}^{l}\frac{(-1)^{\frac{2l-1}{2}+s}}{s!\; (2l-s)!}\left(P_{s,i}P_{2l-s,j}-P_{s,j}P_{2l-s,i}\right),\; 1\leq i<j\leq d,
\label{NE3}
\end{eqnarray}
with coefficients defined by
\begin{eqnarray}
\fl \mu ^{1}\left( s,q\right) =2^{\frac{s-2}{2}}\left( 1+\sqrt{2}+\left( -1\right)
^{s}\left( \sqrt{2}-1\right) \right) \prod_{a=0}^{\left[ \frac{s+1}{2}\right]
-1}\left( q-\left[ \frac{s}{2}\right] -a\right) \prod_{b=s+1-\left[ \frac{s}{%
2}\right] }^{s}\left( 2q+3-2b\right) ,\nonumber\\
\fl \mu ^{2}\left( s,q\right) =\frac{1}{s!\; \Gamma(2q+1-s)},\quad \mu ^{3}\left( s,q\right) =\frac{1}{(s-1)!\; \Gamma(2q+2-s)}
\end{eqnarray}
generate a (virtual) copy of $\mathfrak{sl}(2,\mathbb{R})\oplus\frak{so}\left(p,q\right) $ in the
enveloping algebra of $\mathfrak{Gal}_{\ell}(p,q)$.
\end{proposition}
\noindent The proof, albeit long and computationally cumbersome, is completely straightforward and reduces to a direct verification of the
conditions (\ref{Bed1}) and (\ref{Bed2}) with the choice $f=M$, taking into account the following relations between the generators and quadratic products:
\begin{eqnarray*}
\left[D,P_{n,i}P_{m,j}\right]=2\left(2\ell-m-n\right)\;P_{n,i}P_{m,j},\\
\left[H,P_{n,i}P_{m,j}\right]=-\left(n P_{n-1,i}P_{m,j}+m P_{n,i}P_{m-1,j}\right),\\
\left[C,P_{n,i}P_{m,j}\right]=(2\ell-n)P_{n+1,i}P_{m,j}+(2\ell-m)P_{n,i}P_{m+1,j},\\
\left[M E_{i,j},M E_{k,l}\right]=M^2\left(g_{i,k}E_{j,l}+g_{j,l}E_{i,k}-g_{i,l}E_{j,k}-g_{j,k}E_{i,l}\right),\\
\left[E_{i,j},P_{n,k}P_{m,l}\right]=g_{i,k}P_{n,j}P_{m,l}-g_{j,k}P_{n,i}P_{m,l}+g_{i,l}P_{n,k}P_{m,j}-g_{j,l}P_{n,k}P_{m,i} ,\\
\left[P_{n,i}P_{m,j},P_{q,k}\right]=-I_{q}M\left(g_{i,k}\delta_{n+q}^{2\ell}P_{m,j}+g_{j,k}\delta_{m+q}^{2\ell}P_{n,i}\right).\\
\end{eqnarray*}
In particular, for the metric tensor $g_{ii}=1$ corresponding to the compact orthogonal algebra $\mathfrak{so}(d)$, we obtain a realization equivalent to
the disentanglement conditions given in \cite{raub}.
\subsection{Explicit formulae for the Casimir operators of $\mathfrak{Gal}_{\ell}(p,q)$}
\noindent Once the (virtual) copy of the Levi subalgebra of $\mathfrak{Gal}_{\ell}(p,q)$ is found, explicit expressions for the Casimir operators can be immediately deduced, in their
unsymmetrized analytic form, by means of the well-known trace methods (see e.g. \cite{Ra,Gel,Gr64,Per,Po66,Ok77,Mac}). To this end, let $\left\{d,h,c,e_{i,j},p_{n,k}\right\}$
be the coordinates in $\mathfrak{Gal}_{\ell}(p,q)^{\ast}$ and let $\left\{\widehat{d},\widehat{h},\widehat{c},\widehat{e}_{i,j},\widehat{p}_{n,k}\right\}$ denote the analytical counterparts of the operators in (\ref{NE3}). As the simple subalgebras $\mathfrak{sl}(2,\mathbb{R})$ and $\mathfrak{so}(p,q)$ commute, it follows at once that any invariant of $\mathfrak{Gal}_{\ell}(p,q)$ must also be an invariant of the subalgebra $\mathfrak{sl}(2,\mathbb{R})\overrightarrow{\oplus}_{\Gamma}\mathfrak{h}_N$. Semidirect products of $\mathfrak{sl}(2,\mathbb{R})$ and a Heisenberg
Lie algebra are well known to possess only one Casimir operator besides the central generator \cite{C45}, the analytic expression of which is given by
\begin{equation}
C^{\prime}_{4}= \widehat{d}^2-4\widehat{c}\widehat{h}.\label{ins}
\end{equation}
This invariant can also be described as a determinant as follows (see e.g. \cite{C23}): Let $B=\left\{X_{2\ell(k-1)+k+2+s}\;|\; 1\leq k\leq d,\; 1\leq s\leq 2\ell+1\right\}$ be a basis of
$\left(\mathfrak{sl}(2,\mathbb{R})\overrightarrow{\oplus}_{\Gamma}\mathfrak{h}_N\right)$ such that $\left\{D,H,C\right\}=\left\{X_1,X_2,X_3\right\}$ and such that the element
$X_{2\ell(k-1)+k+2+s}$ corresponds to the generator $P_{s,k}$ for $1\leq k\leq d,\; 1\leq s\leq 2\ell+1$. The commutators of $\left(\mathfrak{sl}(2,\mathbb{R})\overrightarrow{\oplus}_{\Gamma}\mathfrak{h}_N\right)$ are then described in a uniform manner by
\begin{equation}
\left[X_i,X_j\right]= C_{ij}^{k}X_{k},\; 1\leq i<j,k\leq (2\ell+1)d+4.
\end{equation}
Let $B^{\ast}=\left\{x_{2\ell(k-1)+k+2+s}\;|\; 1\leq k\leq d,\; 1\leq s\leq 2\ell+1\right\}$ be the dual basis of $B$ and define the polynomial matrix $A$ of order $4 + (2 \ell + 1) d$, the entries of which are given by
\begin{eqnarray}
A_{i,j}= C_{ij}^{k} x_{k},\quad 1\leq i,j\leq 3 + (2 \ell + 1) d,\nonumber\\
A_{i,4 + (2 \ell + 1) d}= -A_{4 + (2 \ell + 1) d,i}=x_{i},\quad 1\leq i\leq 3,\label{maxa}\\
A_{j,4 + (2 \ell + 1) d}=-A_{4 + (2 \ell + 1) d,j}=\frac{1}{2}x_j,\quad 4\leq j\leq 3 + (2 \ell + 1) d.\nonumber
\end{eqnarray}
It follows from the analysis in \cite{C23} that the determinant $\det{A}$ provides the non-central Casimir invariant of the Lie algebra $\left(\mathfrak{sl}(2,\mathbb{R})\overrightarrow{\oplus}_{\Gamma}\mathfrak{h}_N\right)$. Comparing the result with that deduced from (\ref{ins}) using the copy in the enveloping algebra, we obtain the relation
\begin{equation}
\det(A)=\prod_{s=1}^{d}\prod_{m=0}^{2\ell}\left(2\ell-m\right)!m!\;M^{2\ell d+d-4}\left(C_{4}^{\prime}\right)^2.\label{insa}
\end{equation}
\medskip
\noindent Similarly, we can consider the invariants of $\mathfrak{Gal}_{\ell}(p,q)$ that are simultaneously invariants of the subalgebra $\mathfrak{so}(p,q)\overrightarrow{\oplus}_{\Gamma}\mathfrak{h}_N$ with $d=p+q$.
For the pseudo-orthogonal Lie algebra $\frak{so}(p,q)$, a maximal set of Casimir operators is well known to be given
by the coefficients $C_{k}$ of the characteristic polynomial $P(T)$ of the matrix
\begin{equation}
B_{p,q}:=\left(
\begin{array}{ccccc}
0 & .. & -g_{jj}e_{1j} & .. & -g_{dd}e_{1,d} \\
: & & : & & : \\
g_{11}e_{1j} & .. & 0 & .. & -g_{dd}e_{j,d} \\
: & & : & & : \\
g_{11}e_{1,d} & .. & g_{jj}e_{j,d} & .. & 0
\end{array}
\right).\label{MA2}
\end{equation}
\noindent The same formula, replacing the generators $e_{i,j}$ by those $\widetilde{e}_{i,j}$ of the virtual copy, provides the invariants of $\mathfrak{Gal}_{\ell}(p,q)$ that only depend on the generators of $\frak{so}(p,q)$ and the characteristic representation $\Gamma$.
\begin{proposition}
A maximal set of $\left[\frac{d}{2}\right]$ independent Casimir operators of $\mathfrak{Gal}_{\ell}(p,q)$ depending only on the generators of $\frak{so}(p,q)$ and the $\left\{P_{0,i},\cdots, P_{2\ell,i}\right\}$ with $1\leq i\leq p+q=d$
is given by the coefficients $\widetilde{C}_{k}$ of the polynomial $P(T)$ defined by
\begin{equation}
P(T):=\det \left( B_{p,q}-T\;\mathrm{Id}_{d}\right) , \label{Pol1}
\end{equation}
where
\begin{equation}
B_{p,q}:=\left(
\begin{array}{ccccc}
0 & .. & -g_{jj}\widetilde{e}_{1j} & .. & -g_{dd}\widetilde{e}_{1,d} \\
: & & : & & : \\
g_{11}\widetilde{e}_{1j} & .. & 0 & .. & -g_{dd}\widetilde{e}_{j,d} \\
: & & : & & : \\
g_{11}\widetilde{e}_{1,d} & .. & g_{jj}\widetilde{e}_{j,d} & .. & 0
\end{array}
\right).
\end{equation}
\end{proposition}
The actual symmetric representatives ${\rm Sym}(\widetilde{C}_k)$ of the invariants as elements in the enveloping algebra are obtained from the symmetrization map (\ref{syma}).
\medskip
\noindent It follows that the orders of the $1+\left[\frac{p+q}{2}\right]$ non-central invariants of $\mathfrak{Gal}_{\ell}(p,q)$ are
\begin{itemize}
\item $4,4,8,\cdots ,2(p+q-1)$ if $d=p+q$ is odd,
\item $4,4,8,\cdots ,2(p+q)-4,p+q$ if $d=p+q$ is even.
\end{itemize}
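\medskip
\noindent As a minimal illustration of the trace method (ours; it uses the plain compact algebra $\frak{so}(3)$ with $g={\rm Id}$ rather than the virtual-copy generators $\widetilde{e}_{i,j}$), the coefficients of the characteristic polynomial (\ref{Pol1}) reproduce the familiar quadratic Casimir:
\begin{verbatim}
# Sketch: trace method for plain so(3); unsymmetrized coordinates e_ij.
import sympy as sp

e12, e13, e23, T = sp.symbols('e12 e13 e23 T')
B = sp.Matrix([[0,   -e12, -e13],
               [e12,  0,   -e23],
               [e13,  e23,  0  ]])
P = sp.expand((B - T * sp.eye(3)).det())
print(sp.Poly(P, T).all_coeffs())
# [-1, 0, -e12**2 - e13**2 - e23**2, 0]: the coefficient of T is (minus)
# the quadratic Casimir e12^2 + e13^2 + e23^2 of so(3).
\end{verbatim}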
\section{The unextended case}
\noindent As the centre of the Lie algebra $\mathfrak{Gal}_{\ell}(p,q)$ is one-dimensional, the corresponding factor algebra $\overline{\mathfrak{Gal}}_{\ell}(p,q)=\mathfrak{Gal}_{\ell}(p,q)/Z\left(\mathfrak{Gal}_{\ell}(p,q)\right)$ inherits the structure of a semidirect product of the semisimple Lie algebra $\mathfrak{sl}(2,\mathbb{R})\oplus \mathfrak{so}(p,q)$ with the Abelian Lie algebra of dimension $d(2\ell+1)$, where the characteristic representation $\Gamma$ is given by $D_{\ell}\otimes \rho_1$. As this Lie algebra contains in particular the affine Lie algebra $\mathfrak{sl}(2,\mathbb{R})\overrightarrow{\oplus}_{D_{\ell}^{d}} \mathbb{R}^{(2\ell d+d)}$ as well as the multiply-inhomogeneous algebra $\mathfrak{so}(p,q)\overrightarrow{\oplus}_{\rho^{2\ell+1}} \mathbb{R}^{(2\ell d+d)}$, it is expected that the number of Casimir invariants of $\overline{\mathfrak{Gal}}_{\ell}(p,q)$ will be much higher than that of $\mathfrak{Gal}_{\ell}(p,q)$. An exception is given by the special case $\overline{\mathfrak{Gal}}_{\frac{1}{2}}(p,q)$, isomorphic to the unextended Schr\"odinger algebra $\widehat{\mathcal{S}}(p+q)$, for which the number of invariants is given by $\mathcal{N}(\widehat{\mathcal{S}}(p+q))=1+\left[\frac{p+q}{2}\right]$, constituting the only case where the number of (non-central) Casimir operators of the extension is preserved when passing to the factor Lie algebra.
\begin{proposition}\label{pro4}
For any $\ell\in \mathbb{Z}+\frac{1}{2}\geq \frac{1}{2}$ and $p+q=d\geq 3$ the number $\mathcal{N}(\mathfrak{g})$ of Casimir operators of $\overline{\mathfrak{Gal}}_{\ell}(p,q)$ is given by
\begin{eqnarray}
\mathcal{N}(\mathfrak{g})=\left\{
\begin{array}[c]{rc}
1+\left[\frac{d}{2}\right], & \ell=\frac{1}{2},\; d\geq 3\\[0.1cm]
\frac{1}{2}\left(4\ell d+3d-d^2-6\right), & \ell\geq \frac{3}{2},\; d\leq 2\ell+2\\[0.1cm]
2\ell^2+2\ell-\frac{5}{2}+\left[\frac{d}{2}\right], & \ell\geq \frac{3}{2},\; d\geq 2\ell+3 \\
\end{array}
\right.
\end{eqnarray}
\end{proposition}
\noindent To prove the assertion, the best strategy is to use the reformulation of the formula (\ref{BB}) in terms of differential forms \cite{C43}.
Let $\left\{\theta_1,\theta_2,\theta_3,\omega_{i,j},\sigma_{n,j}\right\}$ with $1\leq i,j\leq d$, $0\leq n\leq 2\ell$ be a basis of 1-forms dual to the basis $\left\{H,D,C,E_{i,j},P_{n,j}\right\}$ of $\overline{\mathfrak{Gal}}_{\ell}(p,q)$. Then the Maurer-Cartan equations are given by
\begin{eqnarray}
d\theta_1=-\theta_2\wedge\theta_3,\quad d\theta_2=2\theta_1\wedge\theta_2,\quad d\theta_3=-2\theta_1\wedge\theta_3,\nonumber\\
d\omega_{i,j}=\sum_{s=1}^{d} g_{ss} \omega_{i,s}\wedge\omega_{j,s},\quad 1\leq i<j\leq d,\nonumber\\
d\sigma_{0,j}=2\ell \theta_1\wedge\sigma_{0,j}-\theta_2\wedge\sigma_{1,j}+\sum_{s=1}^{d}g_{ss}\omega_{s,j}\wedge\sigma_{0,s},\quad 1\leq j\leq d,\label{MCA}\\
d\sigma_{n,j}=2(\ell-n) \theta_1\wedge\sigma_{n,j}-(n+1)\theta_2\wedge\sigma_{n+1,j}+(2\ell+1-n)\theta_3\wedge\sigma_{n-1,j}\nonumber\\
\quad +\sum_{s=1}^{d}g_{ss}\omega_{s,j}\wedge\sigma_{n,s},\quad 1\leq n\leq 2\ell-1,\; 1\leq j\leq d,\nonumber\\
d\sigma_{2\ell,j}=-2\ell \theta_1\wedge\sigma_{2\ell,j}+\theta_3\wedge\sigma_{2\ell-1,j}+\sum_{s=1}^{d}g_{ss}\omega_{s,j}\wedge\sigma_{2\ell,s},\quad 1\leq j\leq d.\nonumber
\end{eqnarray}
We first consider the case $\ell=\frac{1}{2}$ corresponding to the unextended Schr\"{o}dinger algebra. For $%
d\leq 4$ the assertion follows at once considering the 2-form
\begin{equation*}
\Xi _{1}=d\sigma _{0,1}+d\sigma _{1,d},
\end{equation*}
which has rank $5$ for $d=3$ and rank $7$ for $d=4$, respectively. For values $%
d\geq 5$ we define the forms
\begin{equation*}
\Xi _{1}=d\sigma _{0,1}+d\sigma _{1,d},\;\Xi _{2}=\sum_{s=0}^{\alpha
}d\omega _{2+2s,3+2s},\;\alpha =\frac{2d-11-\left( -1\right) ^{d}}{4}.
\end{equation*}
Proceeding by induction, it can be easily shown that the product
\begin{equation*}
\bigwedge^{d+1}d\sigma _{0,1}\bigwedge^{d-2}d\sigma
_{1,d}\bigwedge^{d-4}d\omega _{2,3}\cdots \bigwedge^{d-4-2\alpha }d\omega
_{2+2\alpha ,3+2\alpha }
\end{equation*}
contains all of the 1-forms associated to generators of the Lie algebra $
\widehat{\mathcal{S}}\left( d\right) $ with the following exceptions
\begin{equation}
\theta _{3},\omega _{2,3},\omega _{4,5},\cdots ,\omega _{d-2,d-1},\sigma
_{1,d}. \label{exe}
\end{equation}
Counting the latter elements we conclude that
\begin{equation}
2d-1+\sum_{s=0}^{\alpha }\left( d-4-2s\right) =\mu =\frac{1}{4}\left(
d^{2}+3d+4-2\left[ \frac{d}{2}\right] \right) . \label{exec}
\end{equation}
Therefore, taking the 2-form $\Xi =\Xi _{1}+\Xi _{2}$, it is straightforward
to verify that it satisfies
\begin{equation*}
\bigwedge^{\mu }\Xi =\bigwedge^{d+1}d\sigma _{0,1}\bigwedge^{d-2}d\sigma
_{1,d}\bigwedge^{d-4}d\omega _{2,3}\cdots \bigwedge^{d-4-2\alpha }d\omega
_{2+2\alpha ,3+2\alpha }+\cdots \neq 0,
\end{equation*}
showing that
\begin{equation*}
\mathcal{N}\left( \widehat{\mathcal{S}}\left( d\right) \right) =1+\left[ \frac{d}{2}\right] .
\end{equation*}
\noindent This argument generalizes naturally, with slight modifications, to any value $\ell\geq \frac{3}{2}$, where it is also necessary to distinguish two cases, depending on whether $d=p+q\leq 2\ell +2$ or $d>2\ell+2$.
\begin{enumerate}
\item Let $d=p+q\leq 2\ell +2$. In this case the dimension of the characteristic representation $\Gamma$ is clearly larger than that of the Levi subalgebra, so that a 2-form of maximal rank can be constructed using only the differential forms associated to the generators $P_{n,k}$. Consider the 2-form in (\ref{MCA}) given by $\Theta=\Theta_1+\Theta_2$, where
\begin{eqnarray}
\Theta_1=d\sigma_{0,1}+d\sigma_{2\ell,d}+d\sigma_{2\ell-1,d-1},\;
\Theta_2=\sum_{s=1}^{d-4} d\sigma_{s,s+1}.\label{difo1}
\end{eqnarray}
Using the decomposition formula $\bigwedge^{a}\Theta=\sum_{r=0}^{a} \left(\bigwedge^{r}\Theta_1\right) \wedge \left(\bigwedge^{a-r}\Theta_2\right)$ we obtain that
\begin{eqnarray}
\fl \bigwedge^{\frac{1}{2}\left(6-d+d^2\right)}\Theta= &\bigwedge^{d+1}d\sigma_{0,1}\wedge\bigwedge^{d-1}d\sigma_{2\ell,d}\wedge\bigwedge^{d-3}d\sigma_{2\ell-1,d-1}\wedge
\bigwedge^{d-4}d\sigma_{1,2}\wedge\nonumber\\
& \wedge\bigwedge^{d-5}d\sigma_{2,3}\wedge\bigwedge^{d-6}d\sigma_{3,4}\wedge\cdots \bigwedge^{2}d\sigma_{d-5,d-4}\wedge d\sigma_{d-4,d-3}+\cdots \neq 0.\label{pro2}
\end{eqnarray}
As $\frac{1}{2}\left(6-d+d^2\right)=\dim\left(\mathfrak{sl}(2,\mathbb{R})\oplus\mathfrak{so}(p,q)\right)$, the 2-form $\Theta$ is necessarily of maximal rank, since all the generators of the Levi subalgebra appear in some term of the product (\ref{pro2}) and no products of higher rank are possible due to the Abelian nilradical. We therefore conclude that $j_{0}(\mathfrak{g})=\frac{1}{2}\left(6-d+d^2\right)$ and by formula (\ref{BB1}) we have
\begin{equation}
\mathcal{N}(\mathfrak{g})= \frac{1}{2}\left(4\ell d+3d-d^2-6\right).\label{inva1}
\end{equation}
\item Now let $d \geq 2\ell +3$. The main difference with respect to the previous case is that a generic form $\omega\in\mathcal{L}(\mathfrak{g})$ of maximal rank must necessarily contain linear combinations of the 2-forms $d\omega_{i,j}$ corresponding to the semisimple part of $\overline{\mathfrak{Gal}}_{\ell}(p,q)$. Let us consider first the 2-form
\begin{equation}
\Xi_1= \Theta_1+\Theta_2,
\end{equation}
where $\Theta_1$ is the same as in (\ref{difo1}) and $\Theta_2$ is defined as
\begin{equation}
\Theta_2=\sum_{s=0}^{2\ell-3} d\sigma_{1+s,2+s}.
\end{equation}
In analogy with the previous case, for the index $\mu_1=(2\ell+1)d+(\ell+2)(1-2\ell)$ the first term of the following product does not vanish:
\begin{equation}
\fl \bigwedge^{\mu_1}\Xi_1=\bigwedge^{d+1}d\sigma_{0,1}\bigwedge^{d-1}d\sigma_{2\ell,d}\bigwedge^{d-3}d\sigma_{2\ell-1,d-1}
\bigwedge^{d-4}d\sigma_{1,2}\cdots \bigwedge^{d-1-2\ell}d\sigma_{2\ell-2,2\ell-1}+\cdots \neq 0.\label{Pot1}
\end{equation}
This form, although not maximal in $\mathcal{L}(\mathfrak{g})$, is indeed of maximal rank when restricted to the subspace $\mathcal{L}(\mathfrak{r})$ generated by the 2-forms $d\sigma_{n,k}$ with $0\leq n\leq 2\ell$, $1\leq k\leq d$.
This means that the wedge product of $\bigwedge^{\mu_1}\Xi_1$ with any other $d\sigma_{n,k}$ is identically zero. Hence, in order to construct a 2-form of maximal rank in $\mathcal{L}(\mathfrak{g})$, we have to consider a 2-form $\Xi_2$ that is a linear combination of the differential forms associated to the generators of the Levi subalgebra of $\overline{\mathfrak{Gal}}_{\ell}(p,q)$. As follows at once from (\ref{Pot1}), the forms $\theta_1,\theta_2,\theta_3$ associated to $\mathfrak{sl}(2,\mathbb{R})$-generators have already appeared, thus it suffices to restrict our analysis to linear combinations of the forms $d\omega_{i,j}$ corresponding to the pseudo-orthogonal Lie algebra $\mathfrak{so}(p,q)$. Specifically, we make the choice
\begin{equation}
\Xi_2= \sum_{s=0}^{\nu}d\omega_{3+2s,4+2s},\quad \nu=\frac{1}{4}\left(2d-4\ell-9+(-1)^{1+d}\right).
\end{equation}
Consider the integer $\mu_2=\frac{1}{4}\left(11+(d-4\ell)(1+d)-4\ell^2-2\left[\frac{d}{2}\right]\right)$ and take the 2-form $\Xi=\Xi_1+\Xi_2$. A long but routine computation shows that the following identity is satisfied:
\begin{eqnarray}
\fl \bigwedge^{\mu_1+\mu_2}\Xi =& \left(\bigwedge^{\mu_1}\Xi_1\right)\wedge \left(\bigwedge^{\mu_2}\Xi_2\right) \nonumber\\
& = \left(\bigwedge^{\mu_1}\Xi_1\right)\wedge\bigwedge^{d-6}d\omega_{3,4}\bigwedge^{d-8}d\omega_{5,6}\cdots \bigwedge^{d-6-2\nu}d\omega_{3+2\nu,4+2\nu}+\cdots \neq 0.\label{pro1}
\end{eqnarray}
We observe that this form involves $\mu_1+2\mu_2$ forms $\omega_{i,j}$ from $\mathfrak{so}(p,q)$, hence there remain $\frac{d(d-1)}{2}-\mu_1-2\mu_2$ elements of the pseudo-orthogonal subalgebra that do not appear in the first term in (\ref{pro1}). From this product and (\ref{MCA}) it can be seen that these uncovered elements are of the type $\left\{\omega_{i_1,i_1+1},\omega_{i_2,i_2+1},\cdots \omega_{i_r,i_r+1}\right\}$ with the subindices satisfying $i_{\alpha+1}-i_{\alpha}\geq 2$ for $1\leq \alpha\leq r-1$, from which we deduce that no other 2-form $d\omega_{i_\alpha,i_\alpha+1}$, when multiplied with $\bigwedge^{\mu_1+\mu_2}\Xi $, yields a nonzero product.
We conclude that $\Xi$ has maximal rank equal to $j_0(\mathfrak{g})=\mu_1+\mu_2$, thus applying (\ref{BB1}) we find that
\begin{equation}
\fl \mathcal{N}(\mathfrak{g})= 3 + \frac{d(d-1)}{2}+ (2 \ell + 1) d-2(\mu_1+\mu_2)= 2\ell^2+2\ell-\frac{5}{2}+\left[\frac{d}{2}\right],
\end{equation}
as asserted.
\end{enumerate}
\medskip
\noindent In Table \ref{Tabelle1} we give the numerical values for the number of Casimir operators of the Lie algebras $\overline{\mathfrak{Gal}}_{\ell}(p,q)$ with $d=p+q\leq 12$, where the linear growth with respect to $\ell$ can easily be recognized.
\smallskip
\begin{table}[h!]
\caption{\label{Tabelle1} Number of Casimir operators for $\overline{\mathfrak{Gal}}_{\ell}(p,q)$.}
\begin{indented}\item[]
\begin{tabular}{c||cccccccccc}
$\;d$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$ & $9$ & $10$ & $11$ & $12$ \\\hline
{$\ell=\frac{1}{2}$} & $2$ & $3$ & $3$ & $4$ & $4$ & $5$ & $5$
& $6$ & $6$ & $7$ \\
{$\ell=\frac{3}{2}$} & $6$ & $7$ & $7$ & $8$ & $8$ & $9$ & $9$
& $10$ & $10$ & $11$ \\
{$\ell=\frac{5}{2}$} & $12$ & $15$ & $17$ & $18$ & $18$ & $19$
& $19$ & $20$ & $20$ & $21$ \\
{$\ell=\frac{7}{2}$} & $18$ & $23$ & $27$ & $30$ & $32$ & $33$
& $33$ & $34$ & $34$ & $35$ \\
{$\ell=\frac{9}{2}$} & $24$ & $31$ & $37$ & $42$ & $46$ & $49$
& $51$ & $52$ & $52$ & $53$ \\
{$\ell=\frac{11}{2}$} & $30$ & $39$ & $47$ & $54$ & $60$ & $65
$ & $69$ & $72$ & $74$ & $75$%
\end{tabular}
\end{indented}
\end{table}
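\noindent The values in Table \ref{Tabelle1} follow at once by evaluating the formula of Proposition \ref{pro4}; a short sketch (ours) reproducing the table:
\begin{verbatim}
# Sketch: evaluate Proposition 4 and reproduce Table 1.
from fractions import Fraction

def n_casimir(ell, d):
    if ell == Fraction(1, 2):
        return 1 + d // 2
    if d <= 2 * ell + 2:
        return int((4 * ell * d + 3 * d - d * d - 6) / 2)
    return int(2 * ell ** 2 + 2 * ell - Fraction(5, 2) + d // 2)

for k in (1, 3, 5, 7, 9, 11):
    print(f"l = {k}/2:", [n_casimir(Fraction(k, 2), d) for d in range(3, 13)])
\end{verbatim}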
\medskip
\noindent As follows from a general property concerning virtual copies \cite{C45}, Lie algebras of the type $\mathfrak{g}=\mathfrak{s}\overrightarrow{\oplus} \mathfrak{r}$ with an Abelian radical $\mathfrak{r}$ do not admit virtual copies of $\mathfrak{s}$ in $\mathcal{U}\left(\mathfrak{g}\right)$. Thus for Lie algebras of this type the Casimir invariants must be computed either directly from system (\ref{sys}) or by some other procedure. Among the class $\overline{\mathfrak{Gal}}_{\ell}(p,q)$, an exception is given by the unextended (pseudo-)Schr\"odinger algebra $\overline{\mathfrak{Gal}}_{\frac{1}{2}}(p,q)\simeq \widehat{\mathcal{S}}(p,q)$, for which the invariants can be deduced from those of the central extension $\mathfrak{Gal}_{\frac{1}{2}}(p,q)$ by the widely used method of contractions (see e.g. \cite{IW,We}). For the remaining values $\ell\geq \frac{3}{2}$ the contraction procedure is useless in practice, given the high number of invariants. However, an interesting property concerning the invariants of $\overline{\mathfrak{Gal}}_{\ell}(p,q)$ emerges when we try to find the Casimir operators $F$ that depend only on the variables $p_{n,k}$ associated to the generators $P_{n,k}$ of the radical, i.e., such that the condition
\begin{equation}
\frac{\partial F}{\partial x}=0,\quad \forall x\in\mathfrak{sl}(2,\mathbb{R})\oplus\mathfrak{so}(p,q)\label{kond}
\end{equation}
is satisfied. As will be shown next, the number of such solutions tends to stabilize for high values of $d=p+q$, showing that almost any invariant will depend on all of the variables in $\overline{\mathfrak{Gal}}_{\ell}(p,q)$, implying that finding a complete set of invariants is a computationally formidable task, as there is currently no general method to derive these invariants in closed form.
\begin{proposition}
Let $\ell\geq \frac{3}{2}$. For sufficiently large $d$, the number of Casimir invariants of $\overline{\mathfrak{Gal}}_{\ell}(p,q)$ depending only on the variables $p_{n,k}$ of the Abelian radical is constant and given by
\begin{equation}
\mathcal{N}_1(S)=2\ell^2+3\ell-2.\label{sr2}
\end{equation}
\end{proposition}
\noindent The proof follows analyzing the rank of the subsystem of (\ref{sys}) corresponding to the differential operators $\widehat{X}$ associated to the generators of the Levi subalgebra $\mathfrak{sl}(2,\mathbb{R})\oplus\mathfrak{so}(p,q)$ and such that condition (\ref{kond}) is fulfilled. Specifically, this leads to the system $S$ of PDEs
\begin{eqnarray}
\widehat{D}^{\prime}(F):=\sum_{n=0}^{2\ell}\sum_{i=1}^{d} 2(\ell-n)p_{n,i}\frac{\partial F}{\partial p_{n,i}}=0,\;
\widehat{H}^{\prime}(F):=\sum_{n=0}^{2\ell}\sum_{i=1}^{d} n p_{n-1,i}\frac{\partial F}{\partial p_{n,i}}=0,\nonumber\\
\widehat{C}^{\prime}(F):=\sum_{n=0}^{2\ell}\sum_{i=1}^{d} (2\ell-n)p_{n+1,i}\frac{\partial F}{\partial p_{n,i}}=0,\label{kond2}\\
\widehat{E}_{j,k}^{\prime}(F):=\sum_{n=0}^{2\ell}\sum_{i=1}^{d} \left( g_{ij} p_{n,k} -g_{ik} p_{n,j}\right) \frac{\partial F}{\partial p_{n,i}}=0,\quad 1\leq j<k\leq d.\nonumber
\end{eqnarray}
This system consists of $\frac{1}{2}\left(6-d+d^2\right)$ equations in $(2\ell+1)d$ variables and becomes overdetermined for increasing values of $d$ (and fixed $\ell$). In Table \ref{Tabelle2} the rank of such systems is given for values $d\leq 15$, showing that for fixed $\ell$, from $d\geq 2\ell+1$ onwards, the rank of the system always increases by the same constant amount, given precisely by $2\ell+1$.
\begin{table}[h!]
\caption{\label{Tabelle2} Rank of system (\ref{kond2}).}
\begin{indented}\item[]
\begin{tabular}{c||ccccccccccccc}
$d$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$ & $9$ & $10$ & $11$ & $12$ & $13$& $14$& $15$ \\ \hline
$\ell =\frac{3}{2}$ & 6 & 9 & 13 & 17 & 21 & 25 & 29 & 33 & 37 & 41 & 45 & 49 & 53\\
$\ell =\frac{5}{2}$ & 6 & 9 & 13 & 18 & 24 & 30 & 36 & 42 & 48 & 54 & 60 & 66 & 72\\
$\ell =\frac{7}{2}$ & 6 & 9 & 13 & 18 & 24 & 31 & 39 & 47 & 55 & 63 & 71 & 79
& 87 \\
$\ell =\frac{9}{2}$ & 6 & 9 & 13 & 18 & 24 & 31 & 39 & 48 & 58 & 68 & 78 & 88
& 98 \\
$\ell =\frac{11}{2}$ & 6 & 9 & 13 & 18 & 24 & 31 & 39 & 48 & 58 & 69 & 81 &
93 & 105 \\
$\ell =\frac{13}{2}$ & 6 & 9 & 13 & 18 & 24 & 31 & 39 & 48 & 58 & 69 & 81 & 94 & 108
\end{tabular}
\end{indented}
\end{table}
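\noindent The entries of Table \ref{Tabelle2} can be recomputed by evaluating the coefficient matrix of system (\ref{kond2}) at a generic point; a sketch (ours, assuming the compact case $g={\rm Id}$ and a random integer point, which realizes the generic rank with overwhelming probability):
\begin{verbatim}
# Sketch: rank of system (kond2) at a random (generic) point, g = Id.
import random
import sympy as sp

def rank_S(ell2, d, seed=1):            # ell2 = 2*ell, an odd integer
    random.seed(seed)
    p = {(n, i): random.randint(1, 99)
         for n in range(ell2 + 1) for i in range(1, d + 1)}
    idx = list(p)                       # column order: variables p_{n,i}
    rows = [[(ell2 - 2 * n) * p[(n, i)] for n, i in idx],              # D'
            [n * p[(n - 1, i)] if n > 0 else 0 for n, i in idx],       # H'
            [(ell2 - n) * p[(n + 1, i)] if n < ell2 else 0
             for n, i in idx]]                                         # C'
    for j in range(1, d + 1):                                          # E'_jk
        for k in range(j + 1, d + 1):
            rows.append([p[(n, k)] if i == j else
                         (-p[(n, j)] if i == k else 0) for n, i in idx])
    return sp.Matrix(rows).rank()

print(rank_S(3, 3), rank_S(3, 4), rank_S(5, 6))   # 6, 9, 18 as in Table 2
\end{verbatim}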
\noindent With these observations, it is not difficult to establish that for any $\ell\geq \frac{3}{2}$ and $d\geq 2\ell+1$ the rank of the system (\ref{kond2}) is given by
\begin{equation}
{\rm rank}\; S =\left(2+d\right)+\ell\left(2d-3\right)-2\ell^2.\label{kond3}
\end{equation}
As the number of variables is $(2\ell+1)d$, we conclude that the system admits exactly
\begin{equation}
\mathcal{N}_1(S)= (2\ell+1)d- {\rm rank}\; S = 2\ell^2+3\ell-2
\end{equation}
solutions satisfying the constraint (\ref{kond}). Further, comparison with Proposition \ref{pro4} allows us to establish that for any fixed $\ell$ and $d\leq 2\ell+2$, the following identity holds:
\begin{equation}
\mathcal{N}\left(\overline{\mathfrak{Gal}}_{\ell}(p,q)\right)=\mathcal{N}_1(S).\label{trox}
\end{equation}
For increasing values of $d$, there appear additional invariants that necessarily depend on variables associated to the generators of the Levi subalgebra of $\overline{\mathfrak{Gal}}_{\ell}(p,q) $.
\medskip
\noindent Although there is currently no algorithmic procedure to construct a complete set of invariants of these Lie algebras for arbitrary values $d>2\ell+2$, those invariants of $\overline{\mathfrak{Gal}}_{\ell}(p,q)$ satisfying the condition (\ref{kond}) can be easily computed by means of a reduction argument that leads to a linear system. To this end, consider the last of the equations in (\ref{kond2}). As the generators of $\mathfrak{so}(p,q)$ permute the generators of the Abelian radical, it is straightforward to verify that the quadratic polynomials
\begin{equation}
\Phi_{n,s}= \sum_{k=1}^{d} \frac{g_{11}}{g_{kk}}\;p_{n,k}p_{n+s,k},\quad 0\leq n\leq 2\ell,\; 0\leq s\leq 2\ell-n,\label{ELE}
\end{equation}
are actually solutions of these equations. Indeed, any solution of the type (\ref{kond}) is built up from these functions. Let $\mathcal{M}_d=\left\{\Phi_{n,s},\; 0\leq n\leq 2\ell,\; 0\leq s\leq 2\ell-n\right\}$. The cardinality of this set is $2\ell^2+3\ell+1$, and we observe that not all of the elements in $\mathcal{M}_d$ are independent. It follows by a short computation that
\begin{equation}
\widehat{D}^{\prime}(\mathcal{M}_d)\subset \mathcal{M}_d,\; \widehat{H}^{\prime}(\mathcal{M}_d)\subset \mathcal{M}_d,\; \widehat{C}^{\prime}(\mathcal{M}_d)\subset \mathcal{M}_d,\label{ELE2}
\end{equation}
showing that this set is invariant under the action of $\mathfrak{sl}(2,\mathbb{R})$. Therefore, we can construct the solutions of system (\ref{kond2}) recursively, using polynomials in the new variables $\Phi_{n,s}$. Specifically, renumbering the elements in $\mathcal{M}_d$ as $\left\{u_{1},\cdots ,u_{2\ell^2+3\ell+1}\right\}$, for any $r\geq 2$ we define a polynomial of degree $2r$ in the generators as
\begin{equation}
\Psi_r= \sum_{1\leq i_1\leq \cdots \leq i_r\leq |\mathcal{M}_d|} \alpha^{i_1\cdots i_r}\, u_{i_1}u_{i_2}\cdots u_{i_r}.\label{poly}
\end{equation}
Now, imposing the constraints
\begin{equation}
\widehat{D}^{\prime}(\Psi_r)=0,\; \widehat{H}^{\prime}(\Psi_r)=0,\; \widehat{C}^{\prime}(\Psi_r)=0,\label{ELE3}
\end{equation}
leads to a linear system in the coefficients $\alpha^{i_1\cdots i_r}$, the solutions of which enable us to find the polynomials that satisfy system (\ref{kond2}). Alternatively, the functions
$\Phi_{n,s}$ can be used as new variables to reduce the equations in (\ref{ELE3}) to a simpler form, which may be computationally more effective, albeit the underlying argument is essentially the same \cite{Dick}. In the case where the identity (\ref{trox}) holds, this reduction procedure allows us to obtain a complete set of invariants for the Lie algebra $\overline{\mathfrak{Gal}}_{\ell}(p,q) $.
\medskip
\noindent As an example to illustrate the reduction, consider the 18-dimensional Lie
algebra $\overline{\frak{Gal}}_{\frac{3}{2}}\left( 3\right) $. As $d<2\ell+2$,
formula (\ref{trox}) applies and the algebra has 6 Casimir operators. From these,
two of order four in the generators can be derived from the central
extension $\frak{Gal}_{\frac{3}{2}}\left( 3\right) $ by contraction \cite
{We}. In this case, the set $\mathcal{M}_{3}$ has ten elements that we
enumerate as follows:
\begin{equation*}
\left\{ \Phi _{00},\Phi _{01},\Phi _{02},\Phi _{03},\Phi _{10},\Phi
_{11},\Phi _{12},\Phi _{20},\Phi _{21},\Phi _{30}\right\} =\left\{
u_{1},\cdots ,u_{10}\right\} .
\end{equation*}
The action of the differential operators associated to $\frak{sl}\left( 2,%
\mathbb{R}\right) $ on $\mathcal{M}_{3}$ is explicitly given in Table \ref{Tabelle3}.
\begin{table}[h!]
\caption{\label{Tabelle3} Transformation rules of variables $u_i$ under the $\mathfrak{sl}(2,\mathbb{R})$-action (\ref{kond2}).}
\footnotesize\rm
\begin{tabular}{@{}*{1}{c|cccccccccc}}
& $u_{1}$ & $u_{2}$ & $u_{3}$ & $u_{4}$ & $u_{5}$ & $u_{6}$ & $u_{7}$ & $%
u_{8}$ & $u_{9}$ & $u_{10}$ \\[0.1cm] \hline
$\widehat{D}^{\prime }$ & $6u_{1}$ & $4u_{2}$ & $2u_{3}$ & $0$ & $2u_{5}$ & $%
0$ & $-2u_{7}$ & $-2u_{8}$ & $-4u_{9}$ & $-6u_{10}$ \\
$\widehat{H}^{\prime }$ & $0$ & $-u_{1}$ & $-2u_{2}$ & $-3u_{3}$ & $-2u_{2}$
& $-u_{3}-2u_{5}$ & $-u_{4}-3u_{6}$ & $-4u_{6}$ & $-2u_{7}-3u_{8}$ & $-6u_{9}
$ \\
$\widehat{C}^{\prime }$ & $6u_{2}$ & $2u_{3}+3u_{5}$ & $u_{4}+3u_{6}$ & $%
3u_{7}$ & $4u_{6}$ & $u_{7}+2u_{8}$ & $2u_{9}$ & $2u_{9}$ & $u_{10}$ & $0$%
\end{tabular}
\end{table}
It follows from this action that polynomials $\Psi _{r}$ in the $u_{i}$ that satisfy the system (\ref
{ELE3}) are the solutions of the following system of linear first-order
partial differential equations:
{\footnotesize
\begin{equation}
\fl
\begin{tabular}{rr}
$6u_{1}\frac{\partial F}{\partial u_{1}}+4u_{2}\frac{\partial F}{\partial
u_{2}}+2u_{3}\frac{\partial F}{\partial u_{3}}+2u_{5}\frac{\partial F}{%
\partial u_{5}}-2u_{7}\frac{\partial F}{\partial u_{7}}-2u_{8}\frac{\partial
F}{\partial u_{8}}-4u_{9}\frac{\partial F}{\partial u_{9}}-6u_{10}\frac{%
\partial F}{\partial u_{10}}$ & $=0,$ \\
$-u_{1}\frac{\partial F}{\partial u_{2}}-2u_{2}\frac{\partial F}{\partial
u_{3}}-3u_{3}\frac{\partial F}{\partial u_{4}}-2u_{2}\frac{\partial F}{%
\partial u_{5}}-\left( u_{3}+2u_{5}\right) \frac{\partial F}{\partial u_{6}}%
-\left( u_{4}+3u_{6}\right) \frac{\partial F}{\partial u_{7}}-4u_{6}\frac{%
\partial F}{\partial u_{8}}$ & \\
$-\left( 2u_{7}+3u_{8}\right) \frac{\partial F}{\partial u_{9}}-6u_{9}\frac{%
\partial F}{\partial u_{10}}$ & $=0,$ \\
$6u_{2}\frac{\partial F}{\partial u_{1}}+\left( 2u_{3}+3u_{5}\right) \frac{%
\partial F}{\partial u_{2}}+\left( u_{4}+3u_{6}\right) \frac{\partial F}{%
\partial u_{3}}+3u_{7}\frac{\partial F}{\partial u_{4}}+4u_{6}\frac{\partial
F}{\partial u_{5}}+\left( u_{7}+2u_{8}\right) \frac{\partial F}{\partial
u_{6}}+2u_{9}\frac{\partial F}{\partial u_{7}}$ & \\
$+2u_{9}\frac{\partial F}{\partial u_{8}}+u_{10}\frac{\partial F}{\partial
u_{9}}$ & $=0.$%
\end{tabular}\label{reda}
\end{equation}
}
\noindent This system admits two quadratic solutions given by
\begin{eqnarray*}
F_{1}
&=&u_{4}^{2}-2u_{4}u_{6}+u_{6}^{2}-4u_{3}u_{7}+4u_{3}u_{8}+4u_{5}u_{7}-4u_{5}u_{8},
\\
F_{2}
&=&u_{1}u_{10}-6u_{2}u_{9}-2u_{4}u_{6}-8u_{6}^{2}+2u_{3}u_{7}+4u_{3}u_{8}+4u_{5}u_{7}+5u_{5}u_{8}.
\end{eqnarray*}
Incidentally, these are the invariants that are obtained by contraction from those of the centrally-extended algebra $\mathfrak{Gal}_{
\frac{3}{2}}\left( 3\right) $.
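\medskip
\noindent The annihilation of $F_{1}$ and $F_{2}$ by the three $\mathfrak{sl}(2,\mathbb{R})$ vector fields of Table \ref{Tabelle3} (the $\widehat{E}^{\prime}$ equations hold automatically, each $u_{i}$ being an $\mathfrak{so}(3)$-invariant) can be checked mechanically; a short verification sketch (ours):
\begin{verbatim}
# Sketch: check that F1, F2 solve system (reda), using Table 3.
import sympy as sp

u = sp.symbols('u1:11')                # u[0], ..., u[9] <-> u_1, ..., u_10

def vec(images):                       # derivation sending u_i -> images[i]
    return lambda F: sp.expand(sum(img * sp.diff(F, ui)
                                   for ui, img in zip(u, images)))

u1, u2, u3, u4, u5, u6, u7, u8, u9, u10 = u
Dp = vec([6*u1, 4*u2, 2*u3, 0, 2*u5, 0, -2*u7, -2*u8, -4*u9, -6*u10])
Hp = vec([0, -u1, -2*u2, -3*u3, -2*u2, -u3 - 2*u5, -u4 - 3*u6,
          -4*u6, -2*u7 - 3*u8, -6*u9])
Cp = vec([6*u2, 2*u3 + 3*u5, u4 + 3*u6, 3*u7, 4*u6, u7 + 2*u8,
          2*u9, 2*u9, u10, 0])

F1 = u4**2 - 2*u4*u6 + u6**2 - 4*u3*u7 + 4*u3*u8 + 4*u5*u7 - 4*u5*u8
F2 = (u1*u10 - 6*u2*u9 - 2*u4*u6 - 8*u6**2 + 2*u3*u7 + 4*u3*u8
      + 4*u5*u7 + 5*u5*u8)
assert all(X(F) == 0 for X in (Dp, Hp, Cp) for F in (F1, F2))
print("F1 and F2 are invariants")
\end{verbatim}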
In addition, there exist four further independent fourth-order solutions,
whose explicit expressions are omitted because of their length. We
conclude that a complete set of Casimir operators of $\overline{\frak{Gal}}_{%
\frac{3}{2}}\left( 3\right) $ is given by two fourth-order polynomials in
the generators (corresponding to the quadratic solutions of (\ref{reda}))
and four invariants of order eight corresponding to the fourth-order
solutions of (\ref{reda}).
\section{Final remarks}
We have seen that the generalized conformal Galilean algebras $\widehat{\mathfrak{g}}_{\ell}(d)$ based on the semisimple Lie algebra $\mathfrak{sl}(2,\mathbb{R})\oplus\mathfrak{so}(d)$ can be extended naturally to pseudo-Galilean algebras possessing a Levi subalgebra isomorphic to $\mathfrak{sl}(2,\mathbb{R})\oplus\mathfrak{so}(p,q)$ by introducing a nondegenerate metric tensor into the orthogonal part. Virtual copies of $\mathfrak{sl}(2,\mathbb{R})\oplus\mathfrak{so}(p,q)$ in the enveloping algebra of the semidirect product can be obtained simultaneously for all (half-integer) values of $\ell$ and $p+q=d$. The resulting Lie algebras $\mathfrak{Gal}_{\ell}\left( p,q\right) $ can be seen, to a certain extent, as ``real'' forms of the conformal Galilean algebra $\widehat{\mathfrak{g}}_{\ell}(d)$, their main structural difference residing in the maximal compact subalgebra. Whether these Lie algebras $\mathfrak{Gal}_{\ell}\left( p,q\right) $ have some definite physical meaning is still an unanswered question, but it is conceivable that they appear in the context of dynamical groups of higher-order Lagrangian systems or as the (maximal) invariance symmetry group of a (hierarchy of) partial differential equations. The search for physical realizations of the Lie algebras $\mathfrak{Gal}_{\ell}\left( p,q\right) $ is currently in progress.
\smallskip
\noindent We observe that the obstructions found for integer values of $\ell$ and leading to the so-called exotic extensions (see e.g. \cite{Als19} and references therein) are a direct consequence of the incompatibility of the odd-dimensional representation $D_{\ell}$ with a Heisenberg algebra. Indeed, as shown in \cite{C45}, the necessary and sufficient condition for a semidirect product $\mathfrak{s}\overrightarrow {\oplus}_{\Gamma\oplus \Gamma_0}\mathfrak{h}_n$ to exist is that the (nontrivial) characteristic representation $\Gamma$ satisfies the condition $\Gamma\wedge \Gamma\supset \Gamma_0$. For the decomposition of $\Gamma$ into irreducible components, this implies in particular that an irreducible representation of $\mathfrak{s}$ must appear with the same multiplicity as its dual or be self-dual. Therefore, in order to further generalize the notion of Galilean algebras to $\mathfrak{sl}(2,\mathbb{R})$-representations with even highest weight, the characteristic representation $\Gamma$ must have the form
\begin{equation}
\Gamma =\left(D_\ell\oplus D_\ell\right)\otimes \rho_d.
\end{equation}
As happens with any coupling of a semisimple Lie algebra $\mathfrak{s}$ and a Heisenberg Lie algebra $\mathfrak{h}_n$, the (noncentral) Casimir operators of the semidirect product
$\mathfrak{s}\overrightarrow {\oplus}_{\Gamma\oplus \Gamma_0}\mathfrak{h}_n$ can be constructed using the invariants of $\mathfrak{s}$ by means of the virtual copy method \cite{Que,C45}. Application of this procedure in combination with the trace method provides explicit expressions for the invariants of $\mathfrak{Gal}_{\ell}\left( p,q\right) $ for arbitrary values of $\ell$ and $p+q=d$, comprising in particular the case $\widehat{\mathfrak{g}}_{\ell}(d)=\mathfrak{Gal}_{\ell}\left( d,0\right) $ recently announced \cite{raub}.
\medskip
\noindent The case of the unextended conformal pseudo-Galilean algebra $\overline{\mathfrak{Gal}}_{\ell}(p,q) $, corresponding to the factor of $\mathfrak{Gal}_{\ell}(p,q) $ by its centre, has also been considered. As this Lie algebra has an Abelian radical, it does not admit a virtual copy in the corresponding enveloping algebra, hence its invariants must be computed by other means. The number of Casimir operators for arbitrary values of the parameters has been computed by means of the Maurer-Cartan equations of $\overline{\mathfrak{Gal}}_{\ell}(p,q)$, where a varying growth behaviour of the number of invariants, depending on the proportion between the dimension of the pseudo-orthogonal subalgebra and the dimension $2\ell+1$ of the $\mathfrak{sl}(2,\mathbb{R})$-representation $D_\ell$, has been observed. Although explicit formulae for the Casimir invariants of $\overline{\mathfrak{Gal}}_{\ell}(p,q) $ with $\ell\geq \frac{3}{2}$ can probably not be found generically, it has been shown that the functions depending only on variables of the radical provide a complete set of invariants for the Lie algebra whenever the condition $d\leq 2\ell+2$ is satisfied. A procedure that reduces the computation of such invariants to solving a linear system has been proposed. However, even with this systematization, the problem still involves cumbersome computations, as the orders of such invariants are quite high and there is currently no result that allows one to predict these orders. For values $d\geq 2\ell+3$, where there exist Casimir operators that do not satisfy the condition (\ref{kond}), no valuable ansatz has been found that allows one to find them systematically. Any kind of progress in this direction would constitute a useful tool for the generic analysis of invariant functions of semidirect products of semisimple and Abelian Lie algebras, a class that, up to certain relevant special cases, has still not been exhaustively studied.
\medskip
\noindent
\section*{Acknowledgment}
During the preparation of this work, RCS was financially supported by
the research project MTM2016-79422-P of the AEI/FEDER (EU). IM was supported
by the Australian Research Council Discovery Grant DP160101376 and Future Fellowship FT180100099.
\section*{References}
\section{Introduction}
The fascinating discoveries of the quantum Hall effect (QHE),
originally found in single two-dimensional electron layers, have
also been extended to double layer systems, thanks to the
development of techniques for growing GaAs heterostructures
containing two separated layers of two-dimensional electron gas
(see for example references \cite{Eisen}). Apart from finding
plateaus in Hall conductivity at total filling fractions $\nu$
corresponding to the ``direct sum'' of the familiar integral and
odd-denominator fractional QHE in the individual single layers,
experiments also show the occurrence of new plateaus which are
intrinsic to double-layer systems and rely on interlayer quantum
coherence and correlations. On the theoretical front, a large
body of work has already been done on double-layer systems. An
extensive list of references to this literature has been given
in the lucid review of this subject by Girvin and MacDonald
\cite{GirvMac} and in the paper by Moon ${\it et al}$ \cite{Moon}.
Generally one analyses double layer systems by attributing to the
electrons, in addition to their spatial coordinates on the
plane, a two-component ``pseudospin'' whose up and down
components refer to the amplitude for the electron to be in the
first and second layers respectively. The real physical spin of
the electrons is assumed, as a starting approximation, to be
fully polarised by the strong magnetic field and hence frozen
as a degree of freedom. However, even when real physical spin is
suppressed, the use of a pseudospin to represent the layer
degree of freedom maps the double layer spinless problem into a
monolayer problem with spin \cite{Mac}. Such a mapping allows one
to borrow for double layer systems the rich body of insights and
results available from single layer systems with real spin.
Thus one may expect a fully symmetric (polarised) pseudospin
state to be energetically preferred because of a combination of
Coulomb repulsion and the Pauli principle which forces an
associated antisymmetric spatial wavefunction, just as in
itenerant ferromagnetism. Further, the relevance of Skyrmions to
systems with real spin, predicted by theoretical considerations
\cite{Sondhi}, \cite{Fertig} and supported by experimental evidence
\cite{Barrett}, has in turn prompted studies of similar topological
excitations in spinless double layer systems, but now involving
pseudospin (See Girvin and MacDonald
\cite{GirvMac}, Moon ${\it et al}$ \cite{Moon} and references given therein).
Because of interplane-intraplane anisotropy in Coulomb repulsion
between electrons located in the two layers, as well as the
capacitance energy of maintaining unequal charge density in the
two layers, the effective Action governing pseudospin enjoys
only U(1) symmetry of rotations about the z-axis (the direction
perpendicular to the x-y plane of the layers). Finiteness of
the capacitance energy between the two layers requires that
asymptotically the pseudospin must lie on the easy (x-y) plane.
The basic topological excitations in that case are the so-called
merons, which are vortices in pseudospin with a winding number of
one-half (with respect to the second homotopy group $\Pi_{2}$).
These are similar to vortices in the X-Y model, but non-singular
at the origin since the pseudospin is not restricted to lie on
the x-y plane. But like the former they do have an energy that
grows logarithmically with size. One can also have meron
anti-meron bound pairs whose energy is finite. Such a pair is
topologically equivalent to Skyrmions and carries unit winding
number. (For an introduction to such topological excitations,
their winding numbers, etc. see reference \cite{Raj}.)
The possibility of topological excitations like merons and
bimerons in double layer systems has generated much interest, in
part because of the excitement surrounding the Skyrmion
excitations in systems with real spin, and in part because of
the additional possibility here of Kosterlitz-Thouless type
\cite{KT} phase transitions caused by the break-up of bound bimerons into
separated meron pairs \cite{GirvMac},\cite{Moon}. Bimeron
solutions have already been extensively studied in a body of
papers by Girvin, MacDonald and co-workers \cite{Brey}
\cite{Yang} and \cite{Moon}. These calculations are based on
optimising microscopic wavefunctions with respect to the
microscopic interaction Hamiltonian.
We will also calculate bimeron solutions and their energies
here, but by using an alternate method. An Effective Action for
slowly varying pseudospin textures has already been obtained by
Moon et al \cite{Moon}. If one extremises that Action one will
get differential equations which the unit-vector valued field of
pseudospin configurations ${\vec m}(\vec r )$ should obey in
the classical limit.
In this paper we solve these coupled non-linear differential equations,
through a combination of analytically motivated ans\"atze followed
by numerical calculations. We obtain bimerons as approximate
time-independent solutions with appropriate topologically
non-trivial boundary conditions, for a range
of separations between the meron and its partner the anti-meron
and also for a set of different inter-layer distances. The
dependence of the bimeron texture on these variables is
discussed. They turn out to be reasonably similar to
what one would expect on
general grounds. We also obtain the energy of this bimeron as a
function of the separation between the meron centers. We
include in this energy contributions coming from the
pseudospin stiffness, its anisotropy, the capacitance energy and the
Coulomb energy. By minimising this energy with respect to
the meron separation, we are also able to give an independent
value for the optimal meron separation in a bimeron. We compare
these results with earlier work, including our own.
Apart from this, our work also enables us to independently check
the validity of a physical picture often used \cite{Yang} in
estimating bimeron energies, namely, that they can be viewed as
a pair of rigid objects carrying electric charge of ${1 \over
2}$ and a logarithmically growing energy. A work somewhat
similar in spirit to ours, but in the context of Skyrmions
of real spin systems was done by Abolfath {\it et al.} who
compared results obtained from solving a non-linear differential equation
with those obtained from microscopic calculations \cite {Abolfath}.
For yet another way of approaching meron solutions, starting from
a Chern-Simons field theory see the work of Ichinose and Sekiguchi
\cite{Ichinose}.
In an earlier paper \cite{Ghosh} we had done a similar study of
single meron solutions. But the present work is much more
complicated at the computational level. Single meron solutions are
circularly symmetric, with the spin component on the plane
pointing along the coordinate direction. Thus the only unknown,
namely, the spin-z component obeys an ordinary (though
non-linear) differential equation in the radius variable $r$.
Further, the boundary conditions relevant to a single meron can
be imposed, conveniently, at the end points $r=0$ and
$r=\infty$. By contrast the boundary conditions characterising a bimeron
are $m_z = \pm 1$ at two finite points on the plane where the two merons
have their centers. The spin direction is also not simply related to
the coordinate direction, so that there are two independent
fields, say, $m_z$ and $\tan^{-1}\bigg( {m_y \over m_x} \bigg)$,
(since ${\vec m}$ is constrained to be a unit vector)
which obey coupled partial differential equations on the plane.
We found it quite challenging to analyse these coupled equations
analytically as far as possible, and to use that information to
employ an appropriate ansatz and coordinate system to numerically
solve the equations (using a desk-top computer).
Finally, we should reiterate that our work here clearly
relies heavily on the advances already
made by the Indiana group \cite{Moon}, \cite{Yang}, \cite{Brey}
and is to be viewed as something which will hopefully augment
their findings.
\section{The Spin Texture Equations}
The differential equations obeyed by spin textures are obtained
by extremising an effective action which has already been
derived by Moon {\it et al} \cite{Moon} starting from the basic
microscopic physics. See also Ezawa \cite{z}. These results were
summarised in our earlier paper \cite{Ghosh}. Briefly, the
pseudospin texture of a state is described by a classical {\it
unit} vector ${\vec m}(\vec r )$ which gives the local direction
of the pseudospin. Here ${\vec r} $ is the coordinate on the x-y
plane carrying the layers, while the magnetic field B is along
the z-direction. The fully polarised "ferromagnetic" ground
state corresponds to ${\vec m}$ pointing
everywhere in the same direction, say, along the x-axis.
Using this as the reference state, any other state
with some arbitrary texture ${\vec m}(\vec r )$ is given by
performing a local pseudospin rotation on this uniform ground
state. The leading long-wavelength terms in the effective
Action for time independent configurations ${\vec m}(\vec r )$,
as obtained by Moon {\it et al} \cite{Moon} is
\begin{equation}
I ({\vec m})=\int d^{2}r \ \bigg[\frac{1}{2} \rho_{A} \big(\nabla
m_{z})^{2} + \frac{1}{2} \rho_{E} \big((\nabla m_{x})^{2} +
(\nabla m_{y})^{2}\big) +
\beta \ m_{z}^{2} \bigg] \ + \ C_{1}[m] \ \ + \ C_{2}[m] \
\label {Eff} \end{equation}
where
\begin{equation}
C_{1}[{\bf m}] \ = \ \frac{1}{2}\int d{\vec r}d{\vec r'}V({\vec r}-{\vec r'})
q({\vec r})q({\vec r'})
\end{equation}
and
\begin{equation} C_{2}[{\bf m}] \ \equiv {e^{2}d^{2} \over 32\pi^{2}\epsilon}
\int d^{2}r\int
d^{2}r'\, {m_{z}({\bf r})\nabla^{2}m_{z}({\bf r'}) \over |{\bf
r}-{\bf r'}|}
\end{equation}
The constants $\rho_A$ and
$\rho_E$ are pseudospin stiffness parameters whose physical origin is the
exclusion principle (Hund's rule) mentioned earlier. They are given by
\begin{eqnarray} \rho_A \ &=& \ \big( {\nu \over 32 \pi^2}\big) \int_{0}^{\infty} dk
k^3 \ V^A_k \ exp({-k^2 \over 2}) \nonumber \\
\rho_E \ &=& \ \big( {\nu \over 32 \pi^2}\big) \int_{0}^{\infty} dk
k^3 \ V^E_k \ exp({-k^2 \over 2}) \label{rho} \end{eqnarray}
where $V^A_k \ = \ 2\pi e^2 /(\epsilon k)$ and $V^E_k \ = \
(exp (-kd) 2\pi e^2) /(\epsilon k) $
are the Fourier transforms of the Coulomb interactions between electrons
in the same and different layers respectively.
All distances (and inverse wave vectors) are in units of the
magnetic length {\it l}.
The $ \beta m_{z}^{2}$ term represents the so-called capacitance or
charging energy needed to maintain unequal amounts of charge density
in the two layers. Recall that the z-component of pseudospin represents
the difference between the densities in the two layers. The constant
$\beta $ is given by
\begin{equation} \beta \ = \ \big( {\nu \over 8 \pi^2}\big) \int_{0}^{\infty} dk
\ k \ (V^{z}(0) - V^{z}(k)) \ exp({-k^2 \over 2}) \label{beta} \end{equation}
where $V^z_k = {1 \over2} (V^A_k - V^E_k)$.
Finally, $q({\vec r})$ is the topological density associated with pseudospin
texture, which is also its charge density \cite{Sondhi}. It is given by
\begin{equation}
q({\vec r})=-\frac{\nu}{8\pi}\epsilon_{\nu \mu}\,{\bf m}({\vec r})\cdot[
\partial_{\nu}{\bf m}({\vec r}){\times}
\partial_{\mu}{\bf m}({\vec r})]
\label{topo}\end{equation}
We will refer to the non-local term $C_1$ as the Coulomb term since
it has been identified as the Coulomb energy associated with topological
structures in the pseudospin textures \cite{Moon}, \cite{Sondhi}. The
other non local term $C_2$ arises in the gradient expansion but is not
amenable to simple physical interpretation.
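As a side remark, the density (\ref{topo}) is straightforward to evaluate on a discretised texture. The following minimal Python sketch (our own illustration, not part of the derivation; it assumes ${\vec m}$ stored as a $3\times N_x\times N_y$ unit-vector array, with \texttt{nu} standing for the filling factor $\nu$) computes $q$ by central finite differences; summing $q$ times the cell area over the grid recovers the total topological charge.
\begin{verbatim}
import numpy as np

def topological_density(m, dx, dy, nu=1.0):
    # q(r) of eq. (topo) on a grid; m has shape (3, Nx, Ny) with |m| = 1;
    # contracting epsilon_{nu mu} gives -(nu/4 pi) m . (d_x m x d_y m)
    dm_dx = np.gradient(m, dx, axis=1)
    dm_dy = np.gradient(m, dy, axis=2)
    cross = np.cross(dm_dx, dm_dy, axis=0)
    return -(nu / (4.0 * np.pi)) * np.einsum('kij,kij->ij', m, cross)
\end{verbatim}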
The field equations are obtained by extremising this Hamiltonian with respect
to the independent field variables, which can be taken to be
$m_z$ and
$\alpha \equiv \tan^{-1}\bigg( {m_y \over m_x} \bigg)$. This $\alpha$ is
just the azimuthal angle of the projection of ${\vec m}$ onto the x-y plane.
The non-local terms
$C_1$ and $C_2$ in the Action (\ref{Eff}) will render the field
equations into coupled integro-differential equations. While in the
single meron
case we did solve such an integro-differential equation \cite{Ghosh},
for the more complicated
case of bimerons we will be content to solve the equations in the absence
of the integral terms $C_1$ and $C_2$. The contributions of these
terms can however be included in the total energy, but by using solutions of
the local equations. In mild justification of this strategy, we will
find later that the non-local energy $C_2$, for instance, is less than
half the energy from the local terms in eq. (\ref{Eff}).
The coupled field equations for $m_z$ and
$\alpha \equiv \tan^{-1}\bigg( {m_y \over m_x} \bigg)$ resulting
from eq. (\ref{Eff}) in the absence of $C_1$ and $C_2$ are
\begin{equation}
\rho_{A}\nabla^{2}m_{z} \ + \ \rho_{E} m_{z} \bigg( \frac{(\nabla
m_{z})^{2}}{(1-m_{z}^{2})^{2}}+\frac{m_{z} \nabla^{2}m_{z}}{1-m_{z}^{2}}+
(\nabla\alpha)^{2} \bigg) \ - \ 2\beta m_{z} \ = \ 0 \label{mz} \end{equation}
and
\begin{equation} {\vec \nabla} \cdot \big[ (1-m_{z}^{2}){\vec \nabla} \alpha \big]=0
\label{alpha} \end{equation}
\section{Bipolar coordinates}
To find bimeron solutions we have to numerically solve the
coupled partial differential
equations (PDE) in (\ref{mz}) and (\ref{alpha}).
The defining boundary condition of a bimeron is $m_z = \pm 1$ at
the points $(0, \pm a)$.
Our strategy will be to use
the known exact solution of these equations in the Non-Linear Sigma Model
(NLSM) limit, and solve the full equations
iteratively starting with the NLSM solution.
The NLSM limit is realised when the layer separation $d$ goes
to zero, in which case we see from their defining equations above that $ \rho_A
\ = \ \rho_E $, i.e. the stiffness is isotropic and further that the
capacitance coefficient $\beta$ vanishes. Then, with $C_1$ and $C_2$ also
neglected, the action in (\ref{Eff}) is just that of the NLSM, all of whose
solutions are exactly known \cite{Raj}. They are conveniently described by
the complex field w(z) which represents the stereographic
projection of the unit sphere of textures ${\vec m}$. It is defined by
\begin{equation} w(z) \equiv {m_x + im_y \over (1 - m_z)} \end{equation}
where $z = x+iy$.
Our texture variables $m_z$ and $\alpha$
are related to $w(z)$ by
\begin{eqnarray} m_z \ &=& \ {|w|^2 - 1 \over |w|^2 + 1} \nonumber \\
and \ \ \ \ \alpha \ &=& \ arg \ (w) \label{mzw} \end{eqnarray}
Any analytic function $w(z)$ will be a solution of the NLSM.
In particular the function
\begin{equation} w(z) \ = \ {z - a \over z + a} \label{NLSM} \end{equation}
represents the bimeron, with the points (0,-a) and (0,a) representing
the centers of the two merons, where the solution gives $m_z = \pm 1$
respectively. It may be checked that (\ref{NLSM}) satisfies the coupled
equations (\ref{mz}) and (\ref{alpha}) in the isotropic limit.
When the interlayer separation d is not zero, we have to cope with
the coupled field equations (\ref{mz}) and (\ref{alpha}) with both
the anisotropic stiffness and capacitance terms present. Some analysis
of this system was done long ago by Ogilvie and Guralnik \cite{Ogil}
who studied the NLSM with the mass (capacitance) term included but
no anisotropy. (An ansatz suggested
in ref. \cite{Ogil} does not work, as we will show below.)
Meanwhile Watanabe and Otsu \cite{Wata} studied the anisotropic NLSM but
without the mass term. Both made considerable progress analytically,
but neither offered exact nor numerical solutions. Here we
will try to solve (\ref{mz}) and (\ref{alpha}) numerically after
including both the capacitance and anisotropic terms.
To do so, it will be convenient to use a bipolar coordinate system to
describe the x-y plane, as
might be expected when we have to impose boundary conditions at two
finite points (0,-a) and (0,a). These coordinates, $\eta$ and $\phi$,
are defined by
\begin{eqnarray} \eta &\equiv& \ \ln \ \Big|{z-a \over z+a}\Big| \nonumber \\
and \ \ \ \phi &\equiv& \arg \ \bigg( {z-a \over z+a } \bigg)
\label{etaphi} \end{eqnarray}
This coordinate set has many advantages \cite{Margenau}.
The points (0,-a) and (0,a) at which we have to impose
boundary conditions are now mapped into $\eta \rightarrow \pm
\infty $. The full x-y plane is mapped in $(\eta,\phi)$ coordinates to an
infinite strip with $\eta \in [-\infty, +\infty]$ and $\phi \in
[-\pi, \pi]$. Finally, it is clear upon comparing eq.~(\ref{etaphi})
to eq.~(\ref{NLSM}) that this set of coordinates is
closely related to the exact NLSM bimeron solution. Clearly
the exact NLSM solution (\ref{NLSM})
corresponds to the simple expressions
\begin{eqnarray} m_z \ = \ \tanh \eta \nonumber \\
and \ \ \ \ \alpha \ = \ \phi \end{eqnarray}
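As an illustration, the following short Python sketch (our own minimal check, with arbitrary sample coordinates) computes $(\eta,\phi)$ from eq.~(\ref{etaphi}) and verifies that the NLSM texture (\ref{NLSM}), mapped through (\ref{mzw}), indeed reduces to $m_z=\tanh\eta$ and $\alpha=\phi$:
\begin{verbatim}
import numpy as np

def bipolar(x, y, a=1.0):
    # eta and phi of eq. (etaphi), with z = x + iy
    w = ((x + 1j * y) - a) / ((x + 1j * y) + a)
    return np.log(np.abs(w)), np.angle(w)

def nlsm_texture(x, y, a=1.0):
    # exact NLSM bimeron w(z) = (z - a)/(z + a); m_z, alpha via eq. (mzw)
    w = ((x + 1j * y) - a) / ((x + 1j * y) + a)
    m_z = (np.abs(w) ** 2 - 1.0) / (np.abs(w) ** 2 + 1.0)
    return m_z, np.angle(w)

x, y = 0.3, -1.2
eta, phi = bipolar(x, y)
m_z, alpha = nlsm_texture(x, y)
assert np.isclose(m_z, np.tanh(eta)) and np.isclose(alpha, phi)
\end{verbatim}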
Away from the NLSM limit, since this is an
orthogonal coordinate system with simple expressions for the
gradient, divergence and Laplacian,
the equations (\ref{mz}) and (\ref{alpha}) become
\begin{eqnarray}
\ \bigg[\frac{ \rho_{A}-\rho_{E}}{\rho_{E}} + \frac{1}{1-m_{z}^{2} }\bigg]
(\partial_{\eta}^{2}m_{z} +\partial_{\phi}^{2}m_{z}) +\frac
{m_{z}\big((\partial_{\eta}m_{z})^{2} +(\partial_{\phi}m_{z})^{2}\big)}{({1-m_{z}^{2}})^{2}}
+m_{z}\big((\partial_{\eta}\alpha)^{2} +(\partial_{\phi}\alpha)^{2}\big)
\nonumber \\
- \frac{2\beta}{\rho_{E}} \, Q^{2}(\eta, \phi)\, m_{z} = 0 \label{mz1}\end{eqnarray}
\begin{equation} (1-m_{z}^{2})(\partial_{\eta}^{2}\alpha +\partial_{\phi}^{2}\alpha)
-2m_{z}(\partial_{\eta}m_{z}\, \partial_{\eta}\alpha +\partial_{\phi}m_{z}\,
\partial_{\phi}\alpha) = 0 \label{alpha1} \end{equation}
where
\begin{equation} Q^{2} \ (\eta, \phi) \ = \frac{a^{2}}{({\cosh{\eta}-\cos{\phi}})^{2}}
\end{equation}
is the Jacobian of this coordinate transformation.
Now let us analyse these equations as different terms are included in stages.
(a) In the NLSM limit, our exact solution has $\alpha = \phi$.
Then (\ref{alpha1}) forces $m_z$ to be a function of $\eta$ alone,
$m_z = m_z(\eta)$. Upon inserting this into the other equation (\ref{mz1})
it becomes an {\it ordinary} non-linear differential equation. This
is the advantage of this choice of coordinates. The
solution can be verified to be $m_{z} = \tanh(\eta)$.
(b) Next let us include anisotropy $(\rho_A \neq \rho_E)$,
while still keeping the capacitance term zero $(\beta = 0)$.
Once again we can set $\alpha = \phi$, and consequently $m_z = \
m_z (\eta) $, which will obey again an ordinary differential
equation given by
\begin{equation}
\ \bigg[(\frac{ \rho_{A}-\rho_{E}}{\rho_{E}}) + \frac{1}{1-m_{z}^{2} }\bigg]
(\partial_{\eta}^{2} \ m_{z} ) +\frac
{m_{z}(\partial_{\eta}m_{z} )^{2}}{(1-m_{z}^{2})^{2}}
+ m_{z} \ = \ 0 \label{mz2}\end{equation}
This has no analytic solution, but can be solved relatively
easily numerically, being just an ordinary differential equation in
the variable $\eta$. As boundary conditions we impose
$m_z = 0$ at $\eta = 0$ and $m_z = 1$ at $\eta = \infty$.
(Note that the equation above
is symmetric under $\eta \rightarrow -\eta$, so that we can choose the
solution to be antisymmetric, i.e. $m_z(-\eta) = -m_z(\eta)$.)
The resulting numerical solutions for different values of layer
separation $d$ (on which the anisotropy depends), are shown in fig 1.
One can see that with increasing layer separation, and hence
increasing anisotropy in the stiffness, the pseudospin component
$m_{z}$ reaches its asymptotic value more slowly.
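As an aside, equation (\ref{mz2}) is easy to handle with a standard boundary-value solver. The sketch below is our own illustration, not the code used for fig. 1: the anisotropy ratio $c=(\rho_A-\rho_E)/\rho_E$ is set to an arbitrary example value, $\eta=\infty$ is truncated at a finite $\eta_{max}$, and the boundary value there is taken as $\tanh(\eta_{max})$ to keep the $1-m_z^2$ factors finite.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_bvp

c = 0.3          # (rho_A - rho_E)/rho_E, an illustrative value
eta_max = 10.0   # truncation of eta = infinity

def rhs(eta, u):
    # u = (m_z, d m_z/d eta); eq. (mz2) solved for the second derivative
    m, dm = u
    A = c + 1.0 / (1.0 - m ** 2)
    return np.vstack([dm, -(m * dm ** 2 / (1.0 - m ** 2) ** 2 + m) / A])

def bc(ua, ub):
    # m_z(0) = 0 and m_z(eta_max) close to 1
    return np.array([ua[0], ub[0] - np.tanh(eta_max)])

eta = np.linspace(0.0, eta_max, 200)
guess = np.vstack([np.tanh(eta), 1.0 / np.cosh(eta) ** 2])
sol = solve_bvp(rhs, bc, eta, guess)   # sol.sol(eta)[0] is m_z(eta)
\end{verbatim}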
(c) Finally let us also include the capacitance term and consider
the equations
(\ref{mz1}) and (\ref{alpha1}) in full. Now the ansatz $\alpha = \phi$ is
no longer sustainable, in contrast to what has been suggested in
ref. \cite{Ogil}. The substitution of the ansatz $\alpha = \phi$ in
equation (\ref{alpha1}) would again force $\partial_{\phi} m_z = 0$, i.e.
$m_{z} = m_{z}(\eta)$. But now this is in contradiction with
equation (\ref{mz1}), which has an explicit $\phi$ dependence
through the last (capacitance) term
$\frac{2\beta}{\rho_{E}} \, Q^{2}(\eta, \phi)\, m_{z}$. Therefore, once
one includes the capacitance term in equation (\ref{mz1}),
both $\alpha$ and $m_{z}$ become functions of both $\eta$ and $\phi$.
One unavoidably has to
solve the coupled non-linear PDE for $m_z = m_{z}(\eta,\phi)$ and
$\alpha= \alpha(\eta,\phi)$.
We do this by employing what we believe is a
good ansatz for $\alpha$ which approximately satisfies (\ref{alpha1}).
We then solve the other equation (\ref{mz1}) numerically after
inserting that ansatz for $\alpha$.
Our ansatz is motivated by the following arguments.
One can see from equation (\ref{mz1}) that the troublesome $\phi$-dependent
term $Q^2$ is negligibly small in the large $\eta$ region
$(Q \sim sech (\eta))$
and is most dominant in the small $\eta$ region.
Hence $\alpha$ will still approach $\phi$ as $\eta
\rightarrow \infty$ but needs to be modified substantially
in the small $\eta$ region where however $m_{z} \ll 1$.
When $m_{z} \ll 1$, equation (\ref{alpha1}) can be approximated by
\begin{equation} \nabla^{2}\alpha=0 \label{laplace}\end{equation}
This is just Laplace's equation in two dimensions, whose solutions
are all harmonic functions. With this in mind we choose our ansatz for
$\alpha$ as follows :
\begin{equation}
\alpha = \phi - B \kappa \exp(-|\eta|)\sin(\phi)
\label{alpha2}\end{equation}
where
\begin{equation}
\kappa \equiv \Big(\frac{2\beta}{\rho_{E}}\Big)^{1\over 2} a \ \end{equation}
This solves Laplace's equation
and satisfies all the required boundary conditions and asymptotic behaviour,
namely
\begin{eqnarray}
\alpha \rightarrow \phi \ \ \ \ &as \ \ \ \ \eta \rightarrow
\pm \infty \nonumber \\
\alpha = 0 \ \ \ &when \ \ \ \ \ \phi =0 \nonumber \\
\alpha = \pi \ \ \ &when \ \ \ \ \phi =\pi \nonumber \\
\alpha = \phi \ \ \ &when \ \ \ \kappa =0 \label{boundary} \end{eqnarray}
Note that the ansatz has a cusp at $\eta = 0$. This need not cause
concern. Some such cusps can be
expected on physical grounds and are familiar in soliton physics. The
point is that each meron feels some force due to the other (Coulomb
plus a logarithmic force) at arbitrary separation. We would expect them
to move because of this force, and cannot, strictly
speaking, expect a static bimeron solution to exist at arbitrary
separation. But a cusp, like the
one in the above ansatz, amounts to a delta function in the second
derivative and can be interpreted as an external force just at $\eta = 0$
which can "hold the two merons together" at arbitrary separation. For
more discussion of this point see Rajaraman, and Perring and Skyrme
\cite{RR}, where this technique was used to get intersoliton forces between
one-dimensional solitons.
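For reference, the ansatz (\ref{alpha2}) is a one-line function; the transcription below (with the value B = 0.1 adopted later as the default) makes the cusp explicit through the $|\eta|$ factor:
\begin{verbatim}
import numpy as np

def alpha_ansatz(eta, phi, kappa, B=0.1):
    # eq. (alpha2): harmonic for eta != 0, cusp at eta = 0,
    # and alpha -> phi as |eta| -> infinity or kappa -> 0
    return phi - B * kappa * np.exp(-np.abs(eta)) * np.sin(phi)
\end{verbatim}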
The constant B is chosen by minimising the energy.
Substituting this ansatz in equation (\ref{mz1}) we then solved it
numerically subject to the boundary condition
\begin{eqnarray}
m_{z} \ = 0 \ \ \ \ &at \ \ \ \eta =0\nonumber \\
m_{z} \ = \pm 1 \ \ \ &when \ \ \ \eta = \pm \infty \label{kboundary}
\end{eqnarray}
It is sufficient to solve the equation in the first quadrant, i.e.
$\eta \in [0,\infty]$ and $\phi \in [0,\pi]$. For the rest of the
quadrants solutions can be obtained by writing
\begin{eqnarray}
m_{z}(-\eta,\phi)=-m_{z}(\eta,\phi) = -m_{z}(\eta,-\phi)\nonumber \\
\alpha(-\eta,\phi)=\alpha(\eta,\phi)=-\alpha(\eta,-\phi) \nonumber \\
\end{eqnarray}
which is consistent with the invariance of equations
(\ref{mz1}) and (\ref{alpha1}) under the transformation $\eta
\rightarrow -\eta$ and $ \phi \rightarrow -\phi$.
\section{Numerical Procedure}
Before proceeding to solve this PDE (\ref{mz1}) we must take note of the
fact that the last term of the equation
(\ref{mz1}) is singular at the point
$(\eta=0,\phi=0)$. This point corresponds to spatial infinity
on the parent x-y plane.
As one moves near this point the leading
singularity in the equation, coming from the $Q^2$ term,
goes like $\frac{4\kappa^2}{(\eta^{2} +\phi^{2})^{2}}$,
with other subleading singularities of the form
$\frac{1}{\sqrt{\eta^{2} +\phi^{2}}}$. It can be seen that this leading
singularity can be offset by requiring that $m_{z}$ behave as
$\exp\bigg(-\frac{2\kappa}{\sqrt{\eta^{2}+\phi^{2}}}\bigg)
\ g(\eta,\phi)$, where $g(\eta,\phi)$
is a smoother function for which one solves numerically.
This corresponds, in the more familiar polar coordinates $(r,\theta)$, to
writing $m_z$ in the form $\exp\big(-\frac{\kappa r}{a}\big)
\ {\tilde g}(r,\theta)$. That $m_z$ will suffer
such an exponential fall-off
as $r \rightarrow \infty$ can also be inferred directly from the
"mass term" $2\beta m_z$ in the original field equation (\ref{mz}).
Similarly one can also verify that the
cancellation of the subleading singular terms can be achieved by
requiring that $g$ behave like
$\sqrt{\eta^{2}+\phi^{2}}$ as $\eta, \phi \rightarrow 0$.
Given this functional form of $m_{z}$ near the origin of the $ \eta, \phi$
plane, the boundary conditions (\ref{kboundary}), and the
ansatz (\ref{alpha2}) for $\alpha$, we solved equation
(\ref{mz1}) through an iterative procedure. We start with the solution
for $\kappa=0$ but with full anisotropy, which can be obtained relatively
easily from the ordinary differential equation (\ref{mz2}).
We then use this solution as input to obtain the
solution for $\kappa $ equal to a small number $\epsilon$
through the Newton-Raphson method
\cite{numer}. The solution for $\epsilon$ is then used as input
to obtain the solution for $2\epsilon$ and so on. This procedure
is repeated until one reaches the desired value of $\kappa$. The
advantage of this
procedure is that one can make $\epsilon$ arbitrarily small
to make the Newton-Raphson method converge. In this way we obtained
solutions for different values of the ansatz parameter B
for each value of $\kappa$.
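In outline, the continuation in $\kappa$ reads as follows. This is a schematic sketch only: \texttt{solve\_newton} stands for the Newton-Raphson solve of the discretised equation (\ref{mz1}) at fixed $\kappa$, which we do not spell out here.
\begin{verbatim}
import numpy as np

def continuation_in_kappa(kappa_target, eps, solve_newton, m0):
    # ramp kappa up from 0 in steps of eps, reusing each converged
    # solution as the initial guess for the next value of kappa
    m = m0                            # solution of the kappa = 0 problem
    for kappa in np.arange(eps, kappa_target + eps / 2, eps):
        m = solve_newton(m, kappa)    # Newton-Raphson on the grid
    return m
\end{verbatim}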
\section{RESULTS and DISCUSSION}
Our solutions of equation (\ref{mz1}), along with the ansatz
(\ref{alpha2}), give us the value of the pseudospin vector ${\vec m}$ as a
function of $\eta$ and $\phi$, or equivalently, the value of the vector-field
${\vec m}$ on a lattice of points on the parent x-y plane. We repeated
this calculation for a set of values
of the parameter B in the ansatz (\ref{alpha2}).
We found that as one varies B starting from 0, the energy does not vary much
as B goes from 0 to 0.1, but then it increases sharply after
B=0.1. This behaviour is seen to be common to all $\kappa$ and all $a$.
Hence we take B equal to 0.1 and solve the PDE for a variety of
values of layer separation {\it d}, and bimeron separation {\it a}
({\it a} is actually half of the meron-antimeron separation). Together
all these solutions represent a large body of calculated data. But it is
neither feasible nor very interesting to try to display it all in this
paper. Instead we will try to bring out salient features of our solutions
through examples.
Recall from (\ref{mz2}) that in the absence of the capacitance term $m_z$
had no $\phi$-dependence.
To give some feel for how $m_z$ varies with $\phi$ in the
presence of the capacitance term, we plot in fig. 2 the solution
$m_{z}(\eta)$ of equation (\ref{mz1}) for a set of values for $\phi$. This
solution corresponds to $ d= 0.7$ and $a=3.158$. The sequence of curves shown
correspond to $\phi$ equal to 0, 0.2$\pi$, 0.47$\pi$, and 0.94$\pi$
respectively with the outermost one belonging to $\phi$ equal to 0. As we
have discussed earlier, as $\eta$ and $\phi$ tend to zero, the solution
should damp exponentially as
$\exp(-\frac{2\kappa}{\sqrt{\eta^{2}+\phi^{2}}})$. Correspondingly we see in
fig.2 that the low $\phi$ curves rise very slowly as $\eta$ increases away
from zero. We also give for comparison, in the form of the dotted curve,
the function $\tanh(\eta)$, which is the solution in the NLSM limit. The
comparison shows that the restructuring of the pseudospin texture due to
the capacitance term and anisotropy is considerable.
As an alternate representation of our results, we show in fig. 3
the projection of ${\vec m}$ on the x-y plane, for the example of
$d$ equal to
0.7 and $\kappa$ equal to 4.4. (All lengths throughout this article are
in units of the magnetic length ${\it l}$). The length of each
arrow gives the magnitude of its easy-plane projection
$\sqrt{m_{x}^{2}+m_{y}^2}$
and its direction gives the azimuthal angle
of the projected vector, namely, $\alpha =
\tan^{-1}\bigg(\frac{m_{y}}{m_{x}}\bigg)$.
The plot clearly shows that our " bimeron" solution is indeed
a meron-antimeron pair. Note that, as desired,
$\vec m$ lies along the x-axis asymptotically. This picture
closely resembles the general structure obtained by Brey {\it et al.}
\cite{Brey}. The data corresponding to all other values of $d$ and $a$
we studied have a similar behavior.
In fig.4 we plot the topological charge density given in eq.(\ref{topo})
as a function of $\eta$ and $\phi$ in the presence of all the local terms
in the field equations, including anisotropic ones. In viewing
this figure it may be
helpful to remember that large $|\eta|$ corresponds to the meron centers
while $\eta = 0, \phi=0$ corresponds to spatial infinity. $\phi=
\pi$ corresponds to the line joining the two merons. As the topological
charge density is symmetric when either of the coordinate variables
changes sign, we show the contours only in the first quadrant, where both
$\eta$ and $\phi$ are positive.
Next let us turn to the energetics of these bimeron solutions. In fig.5.
we show how the "local" energy i.e. the contribution from the local
terms in the energy functional (all terms in eq(\ref{Eff}) except
for $C_1$ and $C_2$) varies when one changes the separation $2a$
between the meron and antimeron centres. The appearance of a minimum
is quite conspicuous and generic to all the layer separations for
which the energy is calculated. The example in fig. 5 corresponds
to a layer separation of 0.7.
In fig.6 we plot the Coulomb energy $C_{1}$ evaluated using our solution
of the equation (\ref{mz1}), as a function of the bimeron separation. The
continuous curve is the best fit to our calculated points. Sometimes in
the literature, a phenomenological estimate of bimeron energetics is made
assuming that it can be viewed as a bound pair of two merons, each
symmetrical, undistorted by the other and carrying a charge of
$\frac{e}{2}$. Such a pair would have a Coulomb energy of $\frac{1}{8a}$
(in units of $\frac{e^2}{\epsilon {\it l}}$ that we are using). To see how
good an approximation this simple picture is, we give in the same fig.6,
in the form of a broken line, the plot of this function $\frac{1}{8a}$.
We see that the value of the Coulomb energy we get from the actual bimeron
solution is much larger than what the simple two-charge picture would
give. This is presumably because each meron is considerably squashed
(polarised) by the close proximity of the other. In our earlier work on
single merons \cite{Ghosh}, we had found that at the layer separation
($d=0.7$) used in fig.6, the core-radius of individual merons is about 2,
which is of the same order as the meron-separation in fig 6. In fact we
can see that the gap between the two curves in fig.6 is higher for smaller
$a$ where the individual merons are squeezed together more. Of course our
results, while indicative, may not be quantitatively unambiguous. For
instance, recall that our solution was obtained using only the local terms
in the differential equation and the Coulomb energy was calculated by
substituting this solution into the integral $C_1$. The non-local Coulomb
term's influence {\underline{on}} the solution has not been included.
In fig.7 we plot the variation of the three terms in the energy functional,
namely the contribution from the local terms (capacitance + gradient energy),
$C_{1}$ and $C_{2}$, as a function of the bimeron separation. The data
presented here corresponds to layer separation $d$ equal to 0.7${\it l}$,
but this behaviour is representative of almost all the layer separations
(0.5, 0.6, 0.7 and 0.8) for which we have found solutions.
The trend of all three contributions is the same
for the other layer separations also with only slight changes in the slope of
the curves.
Our calculations were done for different bimeron separations
$a$, for each layer separation $d$. In reality, the exact
solution should exist only for some optimal bimeron separation
$a$ for each value of $d$. One can ask if our calculations
would reveal this by minimising the total energy at some
particular $a$. To see this, we have shown in Fig. 8 the total
energy at d=0.7 (i.e. the sum of all three contributions plotted
in fig.7) as a function of bimeron separation $a$. As we
can see from fig.8, the total energy keeps decreasing with $a$,
all the way to about $a = 3.2$, which is the highest value
up to which we could calculate, given the limitations of our
computing facilities. However, the decrease is clearly levelling
off and is indicative that a minimum may exist at around a=4 or
5. What we have done, in drawing fig.8, is to obtain a
best-fit curve of the data points up to a=3.2 and extrapolate
that curve up to a=4.5. For what it is worth, such extrapolation indicates a
minimum at about a=4. This corresponds to a meron-antimeron separation
of about 8, larger than what Yang and
MacDonald found by entirely different methods (see their fig. 2)
\cite{Yang}. Their value of the meron separation for $d=0.7$ is
about 4.5. We attribute this discrepancy to the fact, noted
already in our discussion of fig.6, that the Coulomb energy in our
explicit calculation of the bimeron solution is higher than
the undistorted meron pair estimate used in ref. \cite{Yang}.
The actual larger Coulomb repulsion is, we believe, responsible for the
larger optimal meron separation that we get.
We saw that the Coulomb interaction energy between the two merons as given
by the term $C_1$ in the present calculation differs quite a bit from the
simple picture of the bimeron as a pair of undistorted merons of charge $
\frac{e}{2}$ each. One can ask if there is a similar discrepancy in the
non-Coulombic energy as well. This is the subject of Table 1.
In the picture of a bimeron as a pair of merons \cite{Moon},
\cite{Yang}, \cite{Ghosh}, it will have energy equal to
\begin{equation} E_{prev} \equiv \ 2E_{mc} + \ 2\pi \rho_{E} \ \ln \bigg(
\frac{2a}{R_{mc}}\bigg)
\end{equation}
where $E_{mc}$ and $R_{mc}$ are respectively the core energy
and radius of a single meron; the merons have a logarithmic interaction
with each other because of the logarithmic divergence of the self energy
of single merons. (As stated already we are leaving out their
Coulomb interaction in the comparison being done in this table.)
This $E_{prev}$ has been calculated in our previous work
\cite{Ghosh}. It can be compared with the local part of the
energy in the present calculation. Such a comparison is given in
Table 1 for different values of $d$, using the optimal value of
the meron separation $a$ which minimises $E_{local}$. We see
that the comparison is not bad considering the completely
different ways of estimating this energy in this paper and in
earlier literature.
In conclusion, our solution for the bimeron obtained
by directly solving the coupled partial differential equations that
the bimeron texture obeys provides an alternate way of
obtaining the profiles and energies of these objects. As far as
the local part of the energy is concerned, the results are in
broad agreement with earlier microscopic derivations. But the
Coulomb energy we obtain is higher by a factor of about 2 than earlier
simple estimates because in actuality, the two merons in close
proximity will not behave like undistorted symmetrical merons.
\section{Acknowledgements} We are indebted to Awadesh Prasad for
his unstinting help on many fronts. SG would also like to thank Sujit
Biswas and Anamika Sarkar for helpful discussions on the numerical work.
SG acknowledges the support of a CSIR Grant no.
9/263(225)/94-EMR-I.dt.2.9.1994.
\begin{figure}
\label{fig1}
\caption{The solution $m_{z}(\eta)$ of equation (\ref{mz2}).
The three continuous curves correspond, as you go outwards, to three different
values of layer separation $d$ equal to 0.5, 0.6 and 0.7 respectively
in units of the magnetic length ${\it l}$.
The dotted curve corresponds to the exact solution of NLSM i.e.
$m_z =tanh(\eta)$.}
\end{figure}
\begin{figure}
\label{fig2}
\caption{The solution $m_{z}(\eta)$ of equation (\ref{mz1})
for a set of values of $\phi$. The curves correspond, going inwards, to
$\phi = 0, 0.2\pi, 0.47\pi, 0.94\pi$ respectively, with
the outermost one corresponding to $\phi$ equal to 0. The layer separation
$d$ is equal to 0.7{\it l} and the bimeron separation $a$ is equal to
3.158{\it l}.
The dotted curve at the top again corresponds to
$m_z =tanh(\eta)$.}
\end{figure}
\begin{figure}
\label{fig3}
\caption{This figure gives the magnitude and direction
of the x-y projection of $\bf m$ at different points on the plane.
The layer separation and the bimeron separation are the same as in
fig. 2.}
\end{figure}
\begin{figure}
\label{fig4}
\caption{A contour plot of the topological charge density of
the bimeron when both the capacitance term and the anisotropy
term are incorporated. This particular plot corresponds to a
layer separation $d$ equal to 0.7 and bimeron separation $a$
equal to 3.158, both in units of the magnetic length {\it l}.
The number against each contour (shown by broken curves)
denotes the corresponding charge density.}
\end{figure}
\begin{figure}
\label{fig5}
\caption{This figure gives the plot of the energy ($E_{local}$)
coming from the
local terms in the action as a function of bimeron separation $a$
in units of the magnetic length ${\it l}$. The unit of energy is
$\frac{e^{2}}{\epsilon {\it l}}$. The points correspond to the actually
computed values of the energy, while the continuous curve is the best
fit to them. The form of the best-fit curve is $E = A + B
(a-C)^{2}$, where A, B and C are found to be
0.223, 0.008 and 2.76 respectively.
This data corresponds to a layer separation $d$ equal to $0.7{\it l}$.}
\end{figure}
\begin{figure}
\label{fig6}
\caption{This figure gives the plot of the Coulomb energy as a function of
bimeron separation $a$ in units of the magnetic length ${\it l}$.
The unit of energy is $\frac{e^2}{\epsilon\it l}$. The upper
curve is our computed value of the Coulomb energy integral $C_1$
using the solution of equation (\ref{mz1}) (points). The continuous
line is the best fit to these points. The form of the best-fit
curve is $E = \frac{A}{a^{B}}$, where A and B are found to be 0.847
and 0.821 respectively.
The dotted curve
at the bottom corresponds to the Coulomb energy
that the bimeron would have, if viewed as a bound pair of two
point charges of $\frac{e}{2}$ each, separated by a distance
$2a$. This data corresponds to a layer separation
$d$ equal to 0.7.}
\end{figure}
\begin{figure}
\label{fig7}
\caption{This figure gives a relative estimate of the contributions
of the three types of terms in the action, namely, the local terms,
$C_{1}$ and $C_{2}$, as a function of bimeron separation $a$. The units
are as specified in the earlier figures. This data also corresponds
to a layer separation of $0.7{\it l}$.}
\end{figure}
\begin{figure}
\label{fig8}
\caption{A plot of the total energy $E_{total}$
as a function of bimeron separation $a$, for a
layer separation of 0.7.
This curve was obtained by extrapolating the curve fitted to the
calculated values going up to $a = 3.2$.}
\end{figure}
\newpage
\noindent Table 1: The optimal bimeron separation
($a$), the bimeron local energy ($E_{local}$) and the meron
pair energy ($E_{prev}$) from our previous work \cite{Ghosh}, as
a function of the layer separation $d$.
The unit of energy is $\frac{e^{2}}{\epsilon l}$ and the unit of length
is ${\it l}$.
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
d & $a$ & $E_{local}$ & $E_{prev}$ \\
\hline
0.5 & 3.30 & .270 & .217 \\
\hline
0.6 & 3.16 & .248 & .226 \\
\hline
0.7 &2.72 & .223 & .224 \\
\hline
0.8 & 2.39 & .201 & .214 \\
\hline
\end{tabular}\\
\end{center}
| {'timestamp': '1998-07-20T13:41:57', 'yymm': '9807', 'arxiv_id': 'cond-mat/9807275', 'language': 'en', 'url': 'https://arxiv.org/abs/cond-mat/9807275'} |
\section{Introduction}\label{sec1}
\IEEEPARstart{M}{assive multiple-input multiple-output} (mMIMO) is a well-recognized radio frequency (RF) technology for highly
spectrum-efficient communications \cite{6375940}. Current mMIMO technology has two main architectures, i.e., the
analogue-digital hybrid architecture and the fully digital (FD) architecture (see \cite{1519678}).
In the FD-mMIMO system, every antenna element is connected
to a dedicated RF chain. It has been revealed that the energy consumption of each RF chain grows exponentially with the resolution of signal quantizers
\cite{6457363, 761034}. This has recently motivated the use of low-resolution (mainly $1$-$3$ bit) quantizers for FD-mMIMO
(e.g. \cite{5351659, DBLP:journals/corr/RisiPL14, 6891254}). The information-theoretic study of low-resolution quantized FD-mMIMO can be found in the literature
(e.g. \cite{6891254, 7106472, 7080890}).
In the scope of estimation and detection theory, digital signal processing problems such as channel estimation, synchronization and
signal detection can be fundamentally changed due to the use of low-resolution quantizers \cite{6987288, 7088639, 9311778}. This is because wireless systems
become non-linear and non-Gaussian, which violates the hypothesis of a linear and Gaussian process that is commonly adopted in
the conventional MIMO systems. Specifically for the signal detection problem, the maximum-likelihood detection becomes even more
complicated as it is no longer equivalent to the integer least-squares problem \cite{4475570,9145094}. This is particularly true for FD-mMIMO
systems with $1$-bit quantizers.
In order to improve the computational efficiency, a number of near-maximum-likelihood algorithms have been reported in the literature (e.g. \cite{7439790, 8240630, 8345169}). However, they are still too complex to implement in practice.
Approximate message passing (AMP) algorithms could offer near Bayesian-optimum solutions with much lower computational complexities (e.g. \cite{7355388, 7426735, 8234637}).
Nevertheless, linear algorithms are practically more appealing for their lower complexities and simple architectures.
The foundation of linear detection algorithms lies in a linear system model. Therefore, the central task of linear algorithm design is to find
a good linear approximation of the non-linear communication system. In the literature, one of the widely used linear approximation models is
the additive quantization noise model (AQNM) originally proposed in \cite{mezghani11}. It assumes the quantization distortion to be
additive white Gaussian noise (AWGN) and correlated with the input of the quantizer. With the AQNM model, the LMMSE channel equalization and symbol-by-symbol detection algorithm has been extensively studied.
Moreover, the AQNM model has been employed for the information-theoretic
study of the achievable rate, capacity bounds or spectral efficiencies in \cite{7307134, 7308988, 7876856, 7896590, 7420605}.
In \cite{Mezghani2012}, a modified-AQNM model has been proposed by making the quantization noise uncorrelated with the input signal
through Wiener-Hopf filtering. This modified version renders the derivation of auto-covariances and cross-covariances involved in the
LMMSE analysis much easier. When the input signal is white Gaussian, we show in Section \ref{sec2b2} that the modified-AQNM is equivalent
to the original AQNM model. Using Bussgang's theory\footnote{Please
see \cite{Bussgang52} for the details of Bussgang's theory.}, the modified-AQNM model has been further generalized.
Specifically for $1$-bit quantization, the quantization noise is actually far from the Gaussian assumption. Then, Bussgang's theory
has been used in \cite{Mezghani2012} to derive an exact form of the relevant auto-covariances and cross-covariances for the LMMSE
channel equalizer. Other relevant works that use Bussgang's theory for linear receiver design
or performance analysis can be found in \cite{nguyen2019linear,7931630,7894211}.
The hypothesis of Gaussian quantization noise renders the AQNM model and its variations not sufficiently accurate for some
cases (see the detailed discussion in \cite{8337813}). Moreover, it has been observed that the AQNM-based LMMSE channel equalizer can introduce
a scalar ambiguity in the signal amplitude. This scalar ambiguity is not a problem for constant-modulus modulations such as M-ary
phase-shift-keying (M-PSK). However, it is detrimental to non-constant-modulus modulations such as M-ary quadrature-amplitude-modulation (M-QAM),
and thus it must be appropriately handled, for instance through energy normalization \cite{7439790,nguyen2019linear,tsefunda}.
After all, the major concern is that the inaccuracy of the AQNM models could disadvantage the receiver optimization as far as
non-constant-modulus modulations are concerned \cite{9144509,7247358}. Arguably, the generalized-AQNM model does take into account the scaling
ambiguities. However, we find the current studies rather intuitive, and that a more rigorous analytical study is needed to develop a deeper
understanding of the quantization distortion as well as its impact on the LMMSE channel equalizer. This forms the major motivation of our work.
The major contribution of our work lies in the employment of Hermite polynomials to develop the aforementioned deeper understanding.
This study results in a novel linear approximation model using the second-order Hermite expansion (SOHE).
In brief, the SOHE model can be described by the following vector form (see the detail in Section \ref{sec3})
\begin{equation}\label{eqn001}
\mathbf{y}=\mathcal{Q}_b(\mathbf{r})\approx\lambda_b\mathbf{r}+\mathbf{q}_b,
\end{equation}
where $\mathcal{Q}_b(\cdot)$ is the $b$-bit uniform quantizer introduced in \cite{Proakis2007},
$\mathbf{r}, \mathbf{y}\in\mathbb{C}^{K\times 1}$ are the input and output of the quantizer, respectively,
$\lambda_b$ is the coefficient of the first-order Hermite kernel which is a function of the resolution of the quantizer ($b$),
and $\mathbf{q}_b\in\mathbb{C}^{K\times 1}$ is the quantization distortion with its characteristics related to the resolution of the
quantizer ($K$: the size of relevant vectors). The SOHE model differs from the existing AQNM models mainly in two folds:
{\em 1)} The Hermite coefficient ($\lambda_b$) in the SOHE model describes how the signal energy changes with respect to the
resolution of the quantizer. The relationship between $\lambda_b$ and the resolution $b$ is mathematically formulated,
based on which the characteristics of $\lambda_b$ are exhibited through our analytical work.
{\em 2)} The quantization distortion ($\mathbf{q}_b$) is modeled as the second-order Hermite polynomial of the input signal
($\mathbf{r}$). There is no imposed assumption for the quantization distortion to be white Gaussian as well as their correlation behavior
with the input signal. It will be shown in Section \ref{sec3}, through mathematical analysis, that the cross-correlation between $\mathbf{q}_b$
and the input $\mathbf{r}$ depends on the stochastic behavior of the input signal. When the input is an independent white Gaussian process,
the quantization distortion can be considered to be uncorrelated with the input signal.
With the above distinctive features, we find that the SOHE model can be used to explain almost all interesting phenomena observed so far
in the research of low-resolution quantized MIMO signal detection. When using the SOHE model for the LMMSE analysis, our analytical work shows
that the current LMMSE algorithm should be enhanced by incorporating a symbol-level normalization mechanism, and thereby resulting in an
enhanced-LMMSE (e-LMMSE) channel equalizer. The performance gain of e-LMMSE is demonstrated through extensive computer simulations in
Rayleigh fading channels.
In addition, in response to the reviewers' comments, we enrich our technical contribution with the SOHE-based LMMSE channel estimation approach.
It is found that the SOHE-LMMSE channel estimator can offer comparable sum spectral efficiency (SE) with the state-of-the-art (SOTA) because the performance is limited by the channel estimation error.
The rest of this paper is organized as follows. Section II presents the system model, preliminaries and problem statement.
Section III presents the Hermite expansion model. Section IV presents the LMMSE analysis. Section V presents the simulation results,
and finally Section VI draws the conclusion.
\subsubsection*{Notations}
Regular letter, lower-case bold letter, and capital bold letter represent scalar, vector, and matrix, respectively.
$\Re(\cdot)$ and $\Im(\cdot)$ represent the real and imaginary parts of a complex number, respectively.
The notations $[\cdot]^T$, $[\cdot]^H$, $[\cdot]^*$, $[\cdot]^{-1}$, $\left \| \cdot \right \|$, $\mathrm{trace}(\cdot)$ and
$\mathbb{D}(\cdot)$ represent the transpose, Hermitian, conjugate, inverse, Euclidean norm, trace and a matrix formed by the diagonal of a matrix
(a vector or a scalar if appropriate), respectively. $\mathbb{E}\left [ \cdot \right ]$ denotes the expectation, $\mathbf{I}$ denotes the identity matrix,
and $\otimes$ denotes the Kronecker product.
\section{System Model, Preliminaries and\\ Problem Statement}\label{sec2}
This section introduces the mathematical model of the uplink MIMO signal reception with low-resolution quantizers. This is then followed by
a review of current linear approximation models as well as their related LMMSE channel equalizers. This review is important in the sense that it can
help to understand the SOTA as well as their differences from the SOHE model. It is perhaps worth noting that we do not put an
emphasis on the mMIMO system mainly to keep our work as generic as possible.
\subsection{System Model}\label{sec2a}
Similar to many other works in the SOTA analysis (e.g. \cite{7307134, 7876856, 7896590, 7420605}), we also consider a narrowband FD-mMIMO network, where a set of single-antenna transmitters $(N)$ simultaneously send their messages to
a receiver having a large number of receive antennas $(K)$. Denote $s_n$ to be the information-bearing symbol sent by the $n^\mathrm{th}$ transmitter ($n=0,...,N-1$). It is commonly assumed that $s_n$ is drawn from a finite alphabet-set with equal probability and fulfills:
$\mathbb{E}(s_n)=0$, $\mathbb{E}(s_ns_n^*)=1$, $\mathbb{E}(s_ns_m^*)=0$, $_{\forall n\neq m}$.
With the ideal quantization, the received discrete-time signal at the baseband ($\mathbf{r}$) is expressible as
\begin{equation}\label{eqn002}
\mathbf{r}=\sum_{n=0}^{N-1}\mathbf{h}_ns_n+\mathbf{v},
\end{equation}
where $\mathbf{h}_n\in\mathbb{C}^{K\times1}$ is the channel vector corresponding to the $n^\mathrm{th}$ transmitter to the receiver link,
and $\mathbf{v}\in\mathbb{C}^{K\times1}$ is the white Gaussian thermal noise with zero mean and auto-covariance $N_0\mathbf{I}$.
Define $\mathbf{H}\triangleq[\mathbf{h}_0, ..., \mathbf{h}_{N-1}]$ and
$\mathbf{s}\triangleq[s_0,...,s_{N-1}]^T$. The linear model \eqref{eqn002} can be rewritten into the following matrix form
\begin{equation}\label{eqn003}
\mathbf{r}=\mathbf{H}\mathbf{s}+\mathbf{v}.
\end{equation}
Feeding $\mathbf{r}$ into the $b$-bit low-resolution quantizer results in
\begin{equation}\label{eqn004}
\mathbf{y}=\mathcal{Q}_b(\Re(\mathbf{r}))+j\mathcal{Q}_b(\Im(\mathbf{r})),
\end{equation}
where the quantization is individually performed in the real and imaginary domains.
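For illustration, a $b$-bit uniform mid-rise quantizer with step size $\Delta$ (the step size being fixed by the AGC discussed later) can be transcribed as follows; this is a minimal sketch, and the mid-rise convention is our own assumption rather than a unique definition of $\mathcal{Q}_b(\cdot)$.
\begin{verbatim}
import numpy as np

def quantize(r, b, delta):
    # b-bit uniform mid-rise quantizer, applied separately to the
    # real and imaginary parts as in the model above; for b = 1 it
    # reduces to (delta/2) * sign(x)
    def q(x):
        levels = 2 ** b
        idx = np.clip(np.floor(x / delta) + levels // 2, 0, levels - 1)
        return (idx - levels // 2 + 0.5) * delta
    return q(np.real(r)) + 1j * q(np.imag(r))
\end{verbatim}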
To reconstruct the signal block $\mathbf{s}$ at the receiver (i.e., the signal detection), the channel knowledge $\mathbf{H}$ is usually assumed
in the literature (e.g. \cite{5592653, 8320852, 8610159, 7155570}). There are also quite a few published works discussing
about the channel estimation as well as the signal
reconstruction based upon various channel knowledge imperfections (e.g. \cite{7439790, 7355388, 7247358, 5501995, 708938}).
Those are indeed very interesting research issues. However,
in order to make our work well focused on the signal reconstruction, we assume the availability of $\mathbf{H}$ throughout the paper
and describe the signal reconstruction procedure as the following input-output relationship
\begin{equation}\label{eqn005}
\hat{\mathbf{s}}=g(\mathbf{y}, \mathbf{H}),
\end{equation}
where $\hat{\mathbf{s}}$ is the reconstructed version of $\mathbf{s}$. In the following contents, our discussion will be focused on
the linear approximation models and LMMSE analysis. Optimum and near-optimum approaches are not relevant to our discussion
and therefore skipped.
\subsection{Linear Approximation Models and LMMSE Analysis}\label{sec2b}
Our SOTA analysis shows that there are mainly three linear models to approximate the non-linear model \eqref{eqn004}, and they can
lead to different LMMSE formulas.
\subsubsection{The AQNM Model}\label{sec2b1}
This model can be mathematically described by (see \cite{mezghani11, 5351659})
\begin{equation}\label{eqn006}
\mathbf{y}\approx\mathbf{z}_A\triangleq\mathbf{r}+\mathbf{q}_A.
\end{equation}
There are two assumptions for the AQNM model:
\begin{itemize}
\item[A1)] The quantization distortion $\mathbf{q}_A$ is AWGN;
\item[A2)] $\mathbf{q}_A$ is correlated with the input signal $\mathbf{r}$.
\end{itemize}
With this linear approximation model, the LMMSE channel equalizer ($\mathbf{G}^\star$) can be obtained by solving the following MMSE objective function
\begin{IEEEeqnarray}{ll}
\mathbf{G}^\star&=\underset{\mathbf{G}}{\arg\min}~\mathbb{E}\|\mathbf{s}-\mathbf{G}\mathbf{y}\|^2,\label{eqn007}\\
&\approx\underset{\mathbf{G}}{\arg\min}~\mathbb{E}\|\mathbf{s}-\mathbf{G}\mathbf{z}_A\|^2\label{eqn008}.
\end{IEEEeqnarray}
The solution to \eqref{eqn008} is provided in \cite{mezghani11}, i.e.,
\begin{equation}\label{eqn009}
\mathbf{G}^\star=\mathbf{H}^H(N_0\mathbf{I}+\mathbf{HH}^H+\mathrm{nondiag}(\rho_b\mathbf{HH}^H))^{-1},
\end{equation}
where $\rho_b$ is the distortion factor indicating the relative amount of quantization noise
generated, and it is a function of $b$; see the specific discussion in the relevant literature \cite{1057548, 6891254, 7106472}.
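As a concrete illustration, \eqref{eqn009} can be transcribed directly; the sketch below is a minimal implementation of the formula as stated, with $\mathrm{nondiag}(\mathbf{A})=\mathbf{A}-\mathbb{D}(\mathbf{A})$, and the symbol estimate is then $\hat{\mathbf{s}}=\mathbf{G}^\star\mathbf{y}$.
\begin{verbatim}
import numpy as np

def lmmse_aqnm(H, N0, rho_b):
    # G* = H^H (N0 I + H H^H + nondiag(rho_b H H^H))^{-1}
    HHh = H @ H.conj().T
    nondiag = rho_b * (HHh - np.diag(np.diag(HHh)))
    K = H.shape[0]
    return H.conj().T @ np.linalg.inv(N0 * np.eye(K) + HHh + nondiag)
\end{verbatim}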
\subsubsection{The Modified-AQNM Model}\label{sec2b2}
The mathematical form of this linear model is given by (see \cite{Mezghani2012}):
\begin{equation}\label{eqn010}
\mathbf{y}\approx\mathbf{z}_B\triangleq\mathbf{C}_{yr}\mathbf{C}_{rr}^{-1}\mathbf{r}+\mathbf{q}_B,
\end{equation}
where $\mathbf{C}_{yr}$ is the cross-covariance matrix between $\mathbf{y}$ and $\mathbf{r}$, $\mathbf{C}_{rr}$ is the
auto-covariance matrix of $\mathbf{r}$, and $\mathbf{q}_B$ is the quantization distortion. Different from the AQNM model in \eqref{eqn006},
the assumption here is:
\begin{itemize}
\item[A3)] the quantization distortion $(\mathbf{q}_B)$ is uncorrelated with the input signal $\mathbf{r}$.
\end{itemize}
Moreover, the condition A1) is not always assumed.
Define $\overline{\mathbf{H}}\triangleq\mathbf{C}_{yr}\mathbf{C}_{rr}^{-1}\mathbf{H}$ and
$\mat{\varepsilon}\triangleq\mathbf{C}_{yr}\mathbf{C}_{rr}^{-1}\mathbf{v}+\mathbf{q}_B$. The modified-AQNM model \eqref{eqn010}
can be represented by the following canonical form
\begin{equation}\label{eqn011}
\mathbf{z}_B=\overline{\mathbf{H}}\mathbf{s}+\mat{\varepsilon}.
\end{equation}
The auto-covariance matrix of $\mat{\varepsilon}$ is given by \cite[(9)]{Mezghani2012},
\begin{equation}\label{eqn012}
\mathbf{C}_{\varepsilon\varepsilon}=\mathbf{C}_{yy}-\mathbf{C}_{yr}\mathbf{C}_{rr}^{-1}\mathbf{C}_{ry}
+N_0\mathbf{C}_{yr}\mathbf{C}_{rr}^{-1}\mathbf{C}_{rr}^{-1}\mathbf{C}_{ry}.
\end{equation}
This is however too complex for the LMMSE analysis.
Applying the assumption A1) onto the quantization distortion $\mathbf{q}_B$, it has been shown that the following approximation of
covariance matrices applies
\begin{equation}\label{eqn013}
\mathbf{C}_{ry}\approx(1-\rho_b)\mathbf{C}_{rr}\approx\mathbf{C}_{yr},
\end{equation}
\begin{equation}\label{eqn014}
\mathbf{C}_{\varepsilon\varepsilon}\approx(1-\rho_b)^2N_0\mathbf{I}+(1-\rho_b)\rho_b\mathbb{D}(\mathbf{C}_{rr}).
\end{equation}
Applying \eqref{eqn013} into \eqref{eqn011} results in
\begin{equation}\label{eqn015}
\mathbf{z}_B\approx(1-\rho_b)\mathbf{H}\mathbf{s}+\mat{\varepsilon}.
\end{equation}
Then, the LMMSE objective function reads as
\begin{equation}\label{eqn016}
\mathbf{G}^\star=\underset{\mathbf{G}}{\arg\min}~\mathbb{E}\|\mathbf{s}-\mathbf{G}\mathbf{z}_B\|^2.
\end{equation}
Solving \eqref{eqn016} results in
\begin{equation}\label{eqn017}
\mathbf{G}^\star=(1-\rho_b)^{-1}\mathbf{H}^H\Big(\mathbf{C}_{rr}+\frac{\rho_b}{1-\rho_b}\mathbb{D}(\mathbf{C}_{rr})
\Big)^{-1},
\end{equation}
where $\mathbf{C}_{rr}=\mathbf{HH}^H+N_0\mathbf{I}$. This equation seems to be different from \eqref{eqn009}. However, if
we incorporate the term $(1-\rho_b)^{-1}$ into the auto-covariance term inside the bracket, \eqref{eqn017} immediately turns
into \eqref{eqn009}. Arguably, we can consider the linear approximations \eqref{eqn006} and \eqref{eqn010} to be equivalent
when the assumption A1) is adopted.
\subsubsection{The Generalized-AQNM Model}\label{sec2b3}
The modified-AQNM model can be extended to the following generalized version (see \cite{Mezghani2012})
\begin{equation}\label{eqn018}
\mathbf{z}_C=\mathbf{\Lambda}_b\mathbf{r}+\mathbf{q}_C,
\end{equation}
where the quantization distortion $\mathbf{q}_C$ is assumed to be uncorrelated with $\mathbf{r}$ (i.e., the assumption A3),
and $\mathbf{\Lambda}_b$ is a diagonal matrix with its characteristics related to the low-resolution quantizer.
Consider the quantizer $y=\mathcal{Q}_b(x),~x\in(-\infty, \infty)$, to be a stair function with its input range being divided into $M=2^b$
sub-ranges\footnote{
The dynamic range of the sub-ranges is set by the automatic gain control (AGC), which aims at keeping the amplitude of the output signal $y$ substantially constant or varying only within a small range \cite{664234, 1092057}. In order to focus on the analysis of the low-resolution quantization process, the ideal AGC is assumed in this paper.}.
Define $(\tau_m, \tau_{m+1})$ to be the $m^\mathrm{th}$ sub-range. The quantizer can be represented by
\begin{equation}\label{eqn019}
\mathcal{Q}_b(x)=x_m,~x\in(\tau_m, \tau_{m+1}),~_{m=0, ..., M-1,}
\end{equation}
where in general $x_m$ can be an appropriately chosen value within the range of $(\tau_m, \tau_{m+1})$ depending on the design
specification \cite{1057548,Liu2021vtc}; and $\tau_0=-\infty$, $\tau_{M}=\infty$. Then, the diagonal matrix $\mathbf{\Lambda}_b$ is
expressed by
\begin{IEEEeqnarray}{ll}\label{eqn020}
\mathbf{\Lambda}_b&=\mathbb{D}(\mathbf{C}_{rr})^{-\frac{1}{2}}
\sum_{m=0}^{M-1}\frac{x_m}{\sqrt{\pi}}\Big(
\exp(-\tau_m^2\mathbb{D}(\mathbf{C}_{rr})^{-1})\nonumber\\
&\quad\quad\quad\quad\quad\quad\quad\quad-\exp(-\tau_{m+1}^2\mathbb{D}(\mathbf{C}_{rr})^{-1})\Big).
\end{IEEEeqnarray}
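Since $\mathbb{D}(\mathbf{C}_{rr})$ is diagonal, \eqref{eqn020} reduces to an elementwise computation over the per-antenna variances. A direct transcription is sketched below, where \texttt{x} and \texttt{tau} hold the output levels $x_m$ and the thresholds $\tau_0=-\infty,\ldots,\tau_M=\infty$:
\begin{verbatim}
import numpy as np

def lambda_b_diag(diag_Crr, x, tau):
    # diagonal entries of Lambda_b; exp(-tau^2/s) -> 0 at tau = +/- inf
    s = np.asarray(diag_Crr, dtype=float)
    lam = np.zeros_like(s)
    for m in range(len(x)):
        lam += (x[m] / np.sqrt(np.pi)) * (
            np.exp(-tau[m] ** 2 / s) - np.exp(-tau[m + 1] ** 2 / s))
    return lam / np.sqrt(s)            # the D(C_rr)^(-1/2) factor
\end{verbatim}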
Generally, the analysis of $\mathbf{C}_{\varepsilon\varepsilon}$ is highly complex, and it does not result in a closed-form solution.
Specifically for the special case of symmetric $1$-bit quantization, the assumption of Gaussian quantization noise is not suitable.
Using Bussgang's theorem, the approximations \eqref{eqn013}-\eqref{eqn014} can now be replaced by the
exact forms (a slightly alternated version from \cite{Mezghani2012, nguyen2019linear})
\begin{equation}\label{eqn021}
\mathbf{C}_{yr}=\sqrt{\frac{2}{\pi}}\mathbb{D}(\mathbf{C}_{rr})^{-\frac{1}{2}}\mathbf{C}_{rr},
\end{equation}
\begin{IEEEeqnarray}{ll}\label{eqn022}
\mathbf{C}_{\varepsilon\varepsilon}=&\frac{2}{\pi}\Big[\arcsin\Big(\mathbb{D}(\mathbf{C}_{rr})^{-\frac{1}{2}}\mathbf{C}_{rr}\mathbb{D}(\mathbf{C}_{rr})^{-\frac{1}{2}}\Big)-\nonumber\\
&\mathbb{D}(\mathbf{C}_{rr})^{-\frac{1}{2}}\mathbf{C}_{rr}\mathbb{D}(\mathbf{C}_{rr})^{-\frac{1}{2}}+
N_0\mathbb{D}(\mathbf{C}_{rr})^{-1}\Big].
\end{IEEEeqnarray}
Applying the above results to the LMMSE analysis leads to
\begin{equation}\label{eqn023}
\mathbf{z}_B=\sqrt{\frac{2}{\pi}}\mathbb{D}(\mathbf{C}_{rr})^{-\frac{1}{2}}\mathbf{H}\mathbf{s}+\mat{\varepsilon}.
\end{equation}
With \eqref{eqn018}-\eqref{eqn020}, it is rather trivial to obtain the following form of LMMSE
\begin{equation}\label{eqn024}
\mathbf{G}^\star=\sqrt{\frac{2}{\pi}}\mathbb{D}(\mathbf{C}_{rr})^{-\frac{1}{2}}\mathbf{H}
\Big(\mathbf{C}_{\varepsilon\varepsilon}+\frac{2}{\pi}\mathbb{D}(\mathbf{C}_{rr})^{-1}\mathbf{HH}^H\Big)^{-1}.
\end{equation}
Here, we emphasize that \eqref{eqn024} holds only for the $1$-bit quantizer.
\subsection{Statement of The Research Problem}
Section \ref{sec2b} has already shown intensive research efforts and appealing contributions on the linear approximation models as well as their relevant LMMSE analysis.
Nevertheless, there is still a need for a more extensive and rigorous study of this issue, which can make the linear approximation
research more comprehensive and accurate. Moreover, a more comprehensive study could help to develop a novel understanding of the behavior of the
LMMSE channel equalizer in the context of low-resolution MIMO signal reception. The following sections are therefore motivated.
\section{Hermite Polynomial Expansion for Linear Approximation}\label{sec3}
This section presents the Hermite polynomial expansion of the low-resolution quantization function as well as key characteristics of the
SOHE model.
\subsection{Hermite Polynomial Expansion and The SOHE Model}
We start from the Laplace's Hermite polynomial expansion (see the definition in \cite[Chapter 22]{Poularikas_1999}) which is employed to
represent the quantization function $y=\mathcal{Q}_b(x),~x\in(-\infty, \infty)$. The Hermite transform of $\mathcal{Q}_b(x)$ is given by (see
\cite{60086})
\begin{equation}\label{eqn025}
\omega_l=\frac{1}{\sqrt{\pi}2^ll!}\int_{-\infty}^{\infty}\mathcal{Q}_b(x)\exp(-x^2)\beta_l(x)\mathrm{d}x,
\end{equation}
where $\beta_l(x)$ is the Rodrigues' formula specified by
\begin{equation}\label{eqn026}
\beta_l(x)=(-1)^l\exp(x^2)\Big[\frac{\partial^l}{\partial x^l}\exp(-x^2)\Big].
\end{equation}
With this result, the Hermite polynomial expansion of $\mathcal{Q}_b(x)$ is given by
\begin{equation}\label{eqn027}
\mathcal{Q}_b(x)=\lim_{L\rightarrow\infty}\sum_{l=1}^{L}\omega_l\beta_l(x).
\end{equation}
The expression of $\omega_l$ can be simplified by plugging \eqref{eqn026} into \eqref{eqn025}, i.e.,
\begin{equation}\label{eqn028}
\omega_l=\frac{(-1)^l}{\sqrt{\pi}2^ll!}\int_{-\infty}^{\infty}\mathcal{Q}_b(x)\Big[\frac{\partial^l}{\partial x^l}\exp(-x^2)\Big]\mathrm{d}x.
\end{equation}
Applying \eqref{eqn019} into \eqref{eqn028} results in
\begin{equation}\label{eqn029}
\omega_l=\frac{(-1)^l}{\sqrt{\pi}2^ll!}\sum_{m=0}^{M-1}x_m\int_{\tau_m}^{\tau_{m+1}}
\Big[\frac{\partial^l}{\partial x^l}\exp(-x^2)\Big]\mathrm{d}x.
\end{equation}
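Note that the inner integral in \eqref{eqn029} admits a closed form, since
\[
\int_{\tau_m}^{\tau_{m+1}}\Big[\frac{\partial^l}{\partial x^l}\exp(-x^2)\Big]\mathrm{d}x
=\Big[\frac{\partial^{l-1}}{\partial x^{l-1}}\exp(-x^2)\Big]_{x=\tau_m}^{x=\tau_{m+1}},
\]
so that each $\omega_l$ can be evaluated directly from the quantization thresholds.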
The SOHE model is based on the second-order Hermite expansion as below (i.e., $L=2$ in \eqref{eqn027})
\begin{IEEEeqnarray}{ll}\label{eqn030}
\mathcal{Q}_b(x)&=\sum_{l=1}^{2}\omega_l\beta_l(x)+O(\omega_3\beta_3(x)),\\
&=\lambda_bx+q_b(x),\label{eqn031}
\end{IEEEeqnarray}
where $\lambda_b$ is the coefficient corresponding to the first-order Hermite kernel, and $q_b$ is the second-order
approximation of the quantization noise. Their mathematical forms are specified by
\begin{equation}\label{eqn032}
\lambda_b=2\omega_1,
\end{equation}
\begin{equation}\label{eqn033}
q_b(x)=4\omega_2x^2-2\omega_2+O(\omega_3\beta_3(x)).
\end{equation}
The derivation from \eqref{eqn030} to \eqref{eqn031} is by means of computing \eqref{eqn026} for $l=1,2$.
The mathematical work is rather trivial and thus omitted.
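For completeness, evaluating \eqref{eqn026} gives $\beta_1(x)=2x$ and $\beta_2(x)=4x^2-2$, from which \eqref{eqn032} and \eqref{eqn033} follow directly. As a quick numerical illustration, for the symmetric $1$-bit quantizer with $x_m\in\{-1,+1\}$ and $\tau=(-\infty, 0, \infty)$, \eqref{eqn029} yields $\omega_1=1/\sqrt{\pi}$ and hence $\lambda_1=2/\sqrt{\pi}\approx1.13$, which exceeds $1$ and already hints at the amplification behavior analyzed below.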
{\em Remark 1:}
The SOHE model in \eqref{eqn031} is certainly not accurate enough to capture the exact characteristics of the low-resolution quantizer.
This is also true for all other existing linear approximation models. An accurate Hermite model often requires $L=100$ or more, which is, however,
too complex for an analytical study. Nevertheless, we will show that the SOHE model can already reflect key characteristics of the low-resolution
quantizer.
\subsection{The Scalar-SOHE Model and Characteristics}\label{3b}
The SOHE model is a linear approximation of the low-resolution quantizer, and thus it is not very different from other existing linear models
if solely based on their mathematical forms. On the other hand, the key parameters of SOHE (i.e., $\lambda_b$ and $q_b(x)$) show
different characteristics from the others.
{\em 1)} Characteristics of the Hermite coefficient $\lambda_b$ can be summarized by the following statement.
\begin{thm}\label{thm01}
Consider the case of symmetric $b$-bit quantization with the following setup in \eqref{eqn029}
\begin{equation}\label{eqn034}
x_m=\left\{\begin{array}{ll}
\tau_{m+1},&\tau_{m+1}>0\\
\tau_m,&\tau_{m+1}<0
\end{array}\right.
\end{equation}
The Hermite coefficient $\lambda_b$
has the following properties:
\begin{equation}\label{eqn035}
\lambda_b\geq 1~\mathrm{and}~\lim_{b\rightarrow\infty}\lambda_b=1.
\end{equation}
\end{thm}
\begin{IEEEproof}
See Appendix \ref{A}.
\end{IEEEproof}
With the ideal AGC, we assume that the input and output signals can be optimally scaled to meet the quantization boundaries.
{\em Theorem \ref{thm01}} provides two implications: {\em 1)} low-resolution quantizers can introduce a scalar ambiguity $\lambda_b$,
which often amplifies the input signal in the digital domain. The principle of how the signal is amplified is analytically explained in
Appendix \ref{A}; {\em 2)} in the SOHE model, the scalar ambiguity vanishes with the increase of resolution ($b$ or $M$). This is in line
with the phenomenon observed in reality. In other words, the SOHE model, together with the proof in Appendix \ref{A}, can well
explain the phenomenon of scalar ambiguity observed in practice.
{\em 2)} Unlike other linear approximation models, the SOHE model does not impose the assumptions A1) and A2) (see Section \ref{sec2b})
onto the quantization noise $q_b$. Instead, $q_b$ is described as a function of the input signal $x$, whose statistical behaviors are
analytically studied here.
\begin{thm}\label{thm02}
Suppose: C1) the input signal $x$ follows $\mathbb{E}(x)=0$. The cross-correlation between $x$ and $q_b$ depends on the third-order central moments of $x$.
When the input signal $(x)$ is AWGN, the quantization noise can be considered to be uncorrelated with the input signal. Moreover, for the
case of $b\rightarrow\infty$, the following result holds
\begin{equation}\label{eqn036}
\lim_{b\rightarrow\infty}q_b(x)=0.
\end{equation}
\end{thm}
\begin{IEEEproof}
See Appendix \ref{B}.
\end{IEEEproof}
The implication of {\em Theorem \ref{thm02}} is twofold: {\em 1)} the quantization noise cannot simply be assumed to be uncorrelated with the input signal; {\em Theorem \ref{thm02}} provides sufficient conditions for the hypothesis of uncorrelated quantization noise to hold; {\em 2)} due to the use of the second-order expansion of the quantization function, it is possible that the SOHE-based quantization noise cannot fully represent the characteristics of ideal quantization such as \eqref{eqn036}. However, {\em Theorem \ref{thm02}} confirms that, as the resolution increases, the quantization noise, which is a function of the input signal, approaches zero.
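As a simple example for the first implication, consider a zero-mean but skewed input taking the values $x\in\{-1, 2\}$ with probabilities $\{2/3, 1/3\}$: then $\mathbb{E}(x)=0$ while $\mathbb{E}(x^3)=2\neq0$, so the cross-correlation $\mathbb{E}(xq_b)\approx4\omega_2\mathbb{E}(x^3)$ (see Appendix \ref{B}) does not vanish, and the uncorrelatedness assumption A1) fails.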
{\em Remark 2:}
It is worthwhile to note that, for complex-valued signals, the quantization process is applied individually in the real and imaginary domains.
Therefore, {\em Theorems \ref{thm01}-\ref{thm02}} apply straightforwardly to the complex-valued input signal.
\subsection{The Vector-SOHE Model and Characteristics}
The vector representation of the SOHE model has no fundamental difference from the scalar-SOHE model presented
in \eqref{eqn031}. It can be obtained by applying \eqref{eqn031} into \eqref{eqn004}
\begin{IEEEeqnarray}{ll}\label{eqn037}
\mathbf{y}&=\lambda_b\mathbf{r}+\mathbf{q}_b,\\
&=\lambda_b\mathbf{H}\mathbf{s}+\underbrace{\lambda_b\mathbf{v}+\mathbf{q}_b}_{\triangleq\mat{\varepsilon}_b}.\label{eqn038}
\end{IEEEeqnarray}
The vector form of the quantization noise is specified by
\begin{equation}\label{eqn039}
\mathbf{q}_b=4\omega_2\Big(\Re(\mathbf{r})^2+j\Im(\mathbf{r})^2\Big)-2\omega_2,
\end{equation}
where $\Re(\mathbf{r})^2$ or $\Im(\mathbf{r})^2$ denotes the element-wise (Hadamard) square of the corresponding real vector.
With {\em Theorem \ref{thm02}}, we can reach the following conclusion about the vector-SOHE model.
\begin{cor}\label{cor1}
Suppose that C2) each element of $\mathbf{H}$ is independently
generated; and C3) the number of transmit antennas ($N$) is sufficiently large. The following cross-covariance matrix can be obtained
\begin{equation}\label{eqn040}
\mathbf{C}_{qv}=\mathbb{E}(\mathbf{q}_b\mathbf{v}^H)=\mathbf{0}.
\end{equation}
\end{cor}
\begin{IEEEproof}
The condition C2) ensures that each element of the vector $[\mathbf{Hs}]$ is a sum of $N$ independently generated random variables.
With the condition C3), the central limit theorem tells us that each element of $[\mathbf{Hs}]$ is
asymptotically AWGN. Since the thermal noise $\mathbf{v}$ is AWGN and independent of $[\mathbf{Hs}]$,
the received signal $\mathbf{r}$ is approximately AWGN. In this case, {\em Theorem \ref{thm02}} tells us
\begin{equation}\label{eqn041}
\mathbf{C}_{qr}=\mathbb{E}(\mathbf{q}_b\mathbf{r}^H)=\mathbf{0}.
\end{equation}
Plugging \eqref{eqn003} into \eqref{eqn041} results in
\begin{IEEEeqnarray}{ll}\label{eqn042}
\mathbf{C}_{qr}&=\mathbb{E}(\mathbf{q}_b(\mathbf{Hs}+\mathbf{v})^H),\\
&=\mathbb{E}(\mathbf{q}_b(\mathbf{Hs})^H)+\mathbf{C}_{qv}=\mathbf{0}.\label{eqn043}
\end{IEEEeqnarray}
Since $\mathbf{v}$ is independent of $[\mathbf{Hs}]$, the only case for \eqref{eqn043} to hold is that both cross-covariance terms are zero.
\eqref{eqn040} is therefore proved.
\end{IEEEproof}
\begin{cor}\label{cor2}
Given the conditions C2) and C3), the auto-covariance matrix of the quantization noise ($\mathbf{C}_{qq}$) has the following
asymptotical form
\begin{equation}\label{eqn044}
\mathbf{C}_{qq}=4\omega_2^2\Big(4\sigma_r^4\mathbf{I}+(2\sigma_r^4-\sigma_r^2+1)(\mathbf{1}\otimes\mathbf{1}^T)\Big),
\end{equation}
where $\sigma_{r}^2$ denotes the variance of $r_k, _{\forall k}$ when $N\rightarrow\infty$.
\end{cor}
\begin{IEEEproof}
See Appendix \ref{C}.
\end{IEEEproof}
\begin{thm}\label{thm03}
Suppose that C4) the information-bearing symbols $s_n, _{\forall n},$ have their third-order central moments fulfilling the condition:
$\mathbb{E}(\Re(s_n)^3)=0$; $\mathbb{E}(\Im(s_n)^3)=0$. Then, the following cross-covariance holds
\begin{equation}\label{eqn045}
\mathbf{C}_{\varepsilon s}=\mathbb{E}(\mat{\varepsilon}_b\mathbf{s}^H)=\mathbf{0}.
\end{equation}
\end{thm}
\begin{IEEEproof}
The cross-covariance in \eqref{eqn045} can be computed as follows
\begin{IEEEeqnarray}{ll}
\mathbf{C}_{\varepsilon s}&=\mathbb{E}((\lambda_b\mathbf{v}+\mathbf{q}_b)\mathbf{s}^H),\label{eqn046}\\
&=\lambda_b\mathbf{C}_{vs}+\mathbb{E}(\mathbf{q}_b\mathbf{s}^H),\label{eqn047}\\
&=\mathbf{C}_{qs}\label{eqn048}.
\end{IEEEeqnarray}
The derivation from \eqref{eqn047} to \eqref{eqn048} is due to the mutual independence between $\mathbf{s}$ and $\mathbf{v}$.
Appendix \ref{D} shows
\begin{equation}\label{eqn049}
\mathbf{C}_{qs}=\mathbf{0}.
\end{equation}
The result \eqref{eqn045} is therefore proved.
It is perhaps worthwhile to note that in wireless communications,
$s_n$ is normally drawn from a centrosymmetric constellation (such as M-PSK and M-QAM) with equally probable points. In this case, it is not hard to see that the
condition C4) does hold in reality.
\end{IEEEproof}
In summary, {\em Corollary \ref{cor1}} exhibits the conditions for the quantization noise to be uncorrelated with the thermal noise as well as
the noiseless part of the received signal. The condition C3) indicates the need for a sufficiently large number of transmit antennas ($N$). However,
this does not necessarily require a very large $N$ in practice. Let us take the example of $N=8$. Each element of $\mathbf{r}$ is a
superposition of $(2N)=16$ independently generated real random variables, and this can already lead to a reasonable asymptotical result.
{\em Corollary \ref{cor2}} exhibits the auto-covariance matrix of $\mathbf{q}_b$, which is an asymptotical result for $N\rightarrow\infty$.
The exact form of $\mathbf{C}_{qq}$ is very tedious, and we do not have a closed form for it. Nevertheless, \eqref{eqn044} already provides
sufficient physical essence for us to conduct the LMMSE analysis.
Finally, {\em Theorem \ref{thm03}} shows that the quantization noise is uncorrelated with the information-bearing symbols. All of
these results are useful tools to our LMMSE analysis in Section \ref{sec4}.
\section{LMMSE Analysis with The Vector-SOHE Model}\label{sec4}
The primary aim of this section is to employ the vector-SOHE model \eqref{eqn037}-\eqref{eqn038} to conduct the LMMSE analysis, with which those interesting phenomena observed in the current LMMSE algorithm can be well explained. In addition, a better understanding of the behavior of the current LMMSE algorithm helps us to find an enhanced version, particularly for signals with non-constant-modulus modulations.
\subsection{The SOHE-Based LMMSE Analysis}\label{sec4a}
Vector-SOHE is still a linear model. It does not change the classical form of the LMMSE, i.e., $\mathbf{G}^\star=\mathbf{C}_{sy}\mathbf{C}_{yy}^{-1}$ still holds. Nevertheless, the cross-covariance matrix $\mathbf{C}_{sy}$ can now be computed by
\begin{IEEEeqnarray}{ll}
\mathbf{C}_{sy}&=\mathbb{E}\Big(\mathbf{s}(\lambda_b\mathbf{H}\mathbf{s}+\mat{\varepsilon}_b)^H\Big),\label{eqn050}\\
&=\lambda_b\mathbf{C}_{ss}\mathbf{H}^H+\mathbf{C}_{s\varepsilon},\label{eqn051}\\
&=\lambda_b\mathbf{H}^H.\label{eqn052}
\end{IEEEeqnarray}
The derivation from \eqref{eqn051} to \eqref{eqn052} is due to the fact that $\mathbf{C}_{s\varepsilon}=\mathbf{0}$ (see {\em Theorem \ref{thm03}}) as well as the assumption that $s_n, \forall n,$ are uncorrelated with respect to $n$ (see the assumption above \eqref{eqn002}).
The auto-covariance matrix $\mathbf{C}_{yy}$ can be represented by
\begin{equation}
\mathbf{C}_{yy}=\lambda_b^2\mathbf{HH}^H+\mathbf{C}_{\varepsilon\varepsilon},\label{eqn053}
\end{equation}
where
\begin{IEEEeqnarray}{ll}\label{eqn054}
\mathbf{C}_{\varepsilon\varepsilon}&=\lambda_b^2N_0\mathbf{I}+\mathbf{C}_{qq}+\lambda_b(\mathbf{C}_{qv}+\mathbf{C}_{vq}),\\
&=\lambda_b^2N_0\mathbf{I}+\mathbf{C}_{qq}+2\lambda_b\Re(\mathbf{C}_{qv}).\label{eqn055}
\end{IEEEeqnarray}
Then, applying \eqref{eqn052}-\eqref{eqn053} to $\mathbf{G}^\star=\mathbf{C}_{sy}\mathbf{C}_{yy}^{-1}$ and factoring $\lambda_b^2$ out of the inverse, the LMMSE formula can be represented by
\begin{equation}\label{eqn056}
\mathbf{G}^\star=\lambda_b^{-1}\mathbf{H}^H(\mathbf{HH}^H+\lambda_b^{-2}\mathbf{C}_{\varepsilon\varepsilon})^{-1}.
\end{equation}
Provided the conditions C2) and C3), \eqref{eqn056} turns into
(see {\em Corollary \ref{cor1}})
\begin{equation}\label{eqn057}
\mathbf{G}^\star=\lambda_b^{-1}\mathbf{H}^H(\mathbf{HH}^H+N_0\mathbf{I}+\lambda_b^{-2}\mathbf{C}_{qq})^{-1},
\end{equation}
where $\mathbf{C}_{qq}$ can be substituted by \eqref{eqn044} in {\em Corollary \ref{cor2}}.
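For reference, substituting \eqref{eqn044} into \eqref{eqn057} gives the fully explicit form
\[
\mathbf{G}^\star=\lambda_b^{-1}\mathbf{H}^H\Big(\mathbf{HH}^H+N_0\mathbf{I}
+\frac{4\omega_2^2}{\lambda_b^{2}}\Big(4\sigma_r^4\mathbf{I}+(2\sigma_r^4-\sigma_r^2+1)(\mathbf{1}\otimes\mathbf{1}^T)\Big)\Big)^{-1},
\]
which makes explicit how the $\sigma_r^2$- and $\sigma_r^4$-dependent terms of the quantization noise enter the equalizer.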
\subsection{Comparison between Various LMMSE Formulas}\label{sec4b}
Given that the generalized-AQNM model (see Section \ref{sec2b3}) was only studied for the $1$-bit quantizer, we mainly conduct
the LMMSE comparison between the SOHE model and the (modified) AQNM model. As shown in Section \ref{sec2b2},
the modified-AQNM model does not give a different LMMSE formula from the AQNM model when Gaussian quantization noise is assumed.
Therefore, our study focuses on the comparison with the AQNM model.
Basically, there are two major differences in their LMMSE forms:
{\em 1)} The SOHE-LMMSE formula has a scaling factor $\lambda_b^{-1}$, which plays the role of equalizing the scalar ambiguity
inherent in the SOHE model (see \eqref{eqn037}-\eqref{eqn038}). As shown in {\em Theorem \ref{thm01}}, this scalar ambiguity is introduced
in the low-resolution quantization procedure. It amplifies the signal energy in the digital domain and vanishes with the increase of resolutions.
This theoretical conclusion coincides well with the phenomenon observed in the literature (e.g. \cite{nguyen2019linear,9144509}).
{\em 2)} In the AQNM-LMMSE formula \eqref{eqn009}, the impact of the quantization noise is described by the term
$\mathrm{nondiag}(\rho\mathbf{HH}^H)$. This implies that the quantization noise is modeled as a linear distortion.
However, such is not the case for the SOHE-LMMSE formula. As shown in \eqref{eqn044} and \eqref{eqn057}, the auto-covariance matrix
$\mathbf{C}_{qq}$ involves the terms $\sigma_r^2$ and $\sigma_r^4$; and higher-order components are approximated in the SOHE model.
Although \eqref{eqn044} is only an asymptotical and approximate result, it carries a good implication in the sense that the quantization noise
would introduce non-linear effects to the LMMSE. Due to this modeling mismatch, the AQNM-LMMSE algorithm can suffer additional
performance degradation.
Denote $\mathbf{G}^\star_{\eqref{eqn009}}$ and $\mathbf{G}^\star_{\eqref{eqn057}}$ to be the corresponding LMMSE formulas with
respect to the AQNM and SOHE models. Section \ref{sec2a} indicates that they share the same size, i.e., $(N)\times(K)$.
Assuming that $\mathbf{G}^\star_{\eqref{eqn009}}$ has full row rank, we are able to find an $(N)\times(N)$ matrix $\mathbf{\Theta}$
fulfilling
\begin{equation}\label{eqn058}
\mathbf{\Theta}\mathbf{G}^\star_{\eqref{eqn009}}=\mathbf{G}^\star_{\eqref{eqn057}}.
\end{equation}
Denote $(\mathbf{G}^\star_{\eqref{eqn009}})^\dagger$ to be the pseudo inverse of $\mathbf{G}^\star_{\eqref{eqn009}}$.
The matrix $\mathbf{\Theta}$ can be obtained through
\begin{equation}\label{eqn059}
\mathbf{\Theta}=\mathbf{G}^\star_{\eqref{eqn057}}\Big(\mathbf{G}^\star_{\eqref{eqn009}}\Big)^\dagger.
\end{equation}
Therefore, provided that $\mathbf{G}^\star_{\eqref{eqn009}}$ has full row rank, the modeling-mismatch-induced
performance degradation inherent in the AQNM-LMMSE algorithm can be mitigated through the linear transform specified in
\eqref{eqn058}, where the scaling factor $\lambda_b$ is incorporated in the matrix $\mathbf{\Theta}$.
\subsection{Enhancement of The AQNM-LMMSE Algorithm}
The SOHE-LMMSE formula describes more explicitly the impact of non-linear distortion in the channel equalization.
However, the SOHE-LMMSE formula cannot be directly employed for the channel equalization mainly due to two reasons:
{\em 1)} the auto-covariance matrix $\mathbf{C}_{qq}$ does not have a closed-form in general; and {\em 2)} the scalar
$\lambda_b$ defined in \eqref{eqn032} comes only from the first-order Hermite kernel. However, other odd-order Hermite
kernels also contribute to $\lambda_b$. The omission of the third- and higher-order Hermite kernels can make the computation of
$\lambda_b$ inaccurate. Fortunately, the analysis in \eqref{eqn058} and \eqref{eqn059} shows that the SOHE-LMMSE formula can be translated into
the AQNM-LMMSE formula through a linear transform. In other words, there is a potential to enhance the AQNM-LMMSE algorithm
by identifying the linear transform $\mathbf{\Theta}$.
Denote $\hat{\mathbf{s}}_{\eqref{eqn057}}\triangleq\mathbf{G}^\star_{\eqref{eqn057}}\mathbf{y}$
and $\hat{\mathbf{s}}_{\eqref{eqn009}}\triangleq\mathbf{G}^\star_{\eqref{eqn009}}\mathbf{y}$ to be the outputs of the SOHE-LMMSE
channel equalizer and the AQNM-LMMSE channel equalizer, respectively. Applying the result \eqref{eqn058}-\eqref{eqn059} yields
\begin{equation}\label{eqn060}
\hat{\mathbf{s}}_{\eqref{eqn009}}=\mathbf{\Theta}^{-1}\hat{\mathbf{s}}_{\eqref{eqn057}}.
\end{equation}
Generally, it is not easy to identify $\mathbf{\Theta}$ and remove it from $\hat{\mathbf{s}}_{\eqref{eqn009}}$. On the other hand,
if $\mathbf{G}^\star_{\eqref{eqn057}}$ and $\mathbf{G}^\star_{\eqref{eqn009}}$ are not too different, \eqref{eqn059} implies that
$\mathbf{\Theta}$ can be considered to be approximately diagonal. In this case, the linear transform reduces to symbol-level scalar ambiguities.
Assume that the channel-equalized result $\hat{\mathbf{s}}_{\eqref{eqn057}}$ does not have such scalar ambiguities. It is then easy to see that
the scalar ambiguities of $\hat{\mathbf{s}}_{\eqref{eqn009}}$ come from $\lambda_b\mathbf{G}^\star_{\eqref{eqn009}}\mathbf{H}$. In other
words, we can have the following approximation
\begin{equation}\label{eqn061}
\mathbf{\Theta}^{-1}\approx\lambda_b\mathbb{D}\Big(\mathbf{G}^\star_{\eqref{eqn009}}\mathbf{H}\Big).
\end{equation}
In \eqref{eqn061}, $\lambda_b$ is the only unknown that must be determined. {\em Theorem \ref{thm01}} shows that
the effect of $\lambda_b$ is a block-level energy amplification, of which the value can be computed using \eqref{appa6}. Finally, we conclude with the following form of the enhanced LMMSE channel equalizer (e-LMMSE)
\begin{equation}\label{eqn063}
\mathbf{G}_e=\frac{1}{\lambda_b}\mathbb{D}\Big(\mathbf{G}^\star_{\eqref{eqn009}}\mathbf{H}\Big)^{-1}\mathbf{G}^\star_{\eqref{eqn009}}.
\end{equation}
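To make the construction concrete, the following Python sketch (with hypothetical helper names; a minimal illustration rather than the exact simulator of Section \ref{sec5}) assembles the e-LMMSE equalizer of \eqref{eqn063}, assuming the AQNM-LMMSE matrix takes the form $\mathbf{H}^H(\mathbf{C}_{rr}-\rho_b\,\mathrm{nondiag}(\mathbf{C}_{rr}))^{-1}$ implied by the equivalence between \eqref{eqn009} and \eqref{eqn017}:
\begin{verbatim}
import numpy as np

def aqnm_lmmse(H, N0, rho):
    # AQNM-LMMSE in the form implied by Eqs. (9)/(17):
    # G = H^H (C_rr - rho*nondiag(C_rr))^{-1},
    # with C_rr = H H^H + N0 I
    K = H.shape[0]
    C_rr = H @ H.conj().T + N0 * np.eye(K)
    nondiag = C_rr - np.diag(np.diag(C_rr))
    return H.conj().T @ np.linalg.inv(C_rr - rho * nondiag)

def e_lmmse(H, N0, rho, lam_b):
    # enhanced LMMSE of Eq. (63): rescale the AQNM-LMMSE
    # by the diagonal of G9 H and the Hermite coefficient
    G9 = aqnm_lmmse(H, N0, rho)
    D = np.diag(np.diag(G9 @ H))
    return (1.0 / lam_b) * np.linalg.inv(D) @ G9
\end{verbatim}
Here, $\lambda_b$ can be evaluated offline from \eqref{appa6} for the adopted thresholds, and $\rho_b$ follows the AQNM definition in Section \ref{sec2b}.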
\section{Simulation Results and Discussion}\label{sec5}
Computer simulations were carried out to elaborate our theoretical work in Section \ref{sec3} and Section \ref{sec4}.
Similar to the AQNM models, the SOHE model cannot be directly evaluated through computer simulations.
Nevertheless, their features can be indirectly demonstrated through the evaluation of their corresponding LMMSE channel equalizers.
Given various LMMSE channel equalizers discussed in Section \ref{sec2} and Section \ref{sec4}, it is perhaps useful to provide a brief summary here for the sake of clarification:
\begin{itemize}
\item AQNM-LMMSE: this is the LMMSE channel equalizer shown in \eqref{eqn009}.
As shown in Section \ref{sec2b2}, the LMMSE channel equalizer \eqref{eqn017} is equivalent to \eqref{eqn009}; and thus it is not demonstrated in our simulation results.
\item B-LMMSE: this is the LMMSE channel equalizer shown in \eqref{eqn024}. This channel equalizer is specially designed and optimized for the $1$-bit quantizer. Therefore, it will only be demonstrated in our simulation results for the $1$-bit quantizer.
\item N-LMMSE: this is the AQNM-LMMSE channel equalizer normalized by the term $\|\mathbf{G}^\star_{\eqref{eqn009}}\mathbf{y}\|$.
\item NB-LMMSE: this is the B-LMMSE channel equalizer normalized by the term $\|\mathbf{G}^\star_{\eqref{eqn024}}\mathbf{y}\|$.
Both the N-LMMSE and NB-LMMSE channel equalizers have been studied in \cite{7439790,nguyen2019linear,tsefunda}.
\item e-LMMSE: this is the enhanced LMMSE channel equalizer proposed in \eqref{eqn063}. As shown in Section \ref{sec4}, this channel equalizer is driven by the SOHE model.
\end{itemize}
\begin{figure}[tb]
\centering
\includegraphics[scale=0.25]{1bit_MSE_comparisons_dB.eps}
\caption{
The MSE performance as a function of Eb/N0 for the $N$-by-$K$ multiuser-MIMO systems with $1$-bit quantizers,
\protect\tikz[baseline]{\protect\draw[line width=0.2mm, dashed] (0,.5ex)--++(0.6,0) ;}~$(N/K)=(2/32)$,
\protect\tikz[baseline]{\protect\draw[line width=0.2mm] (0,.5ex)--++(0.6,0) ;}~$(N/K)=(4/64)$,
\protect\tikz[baseline]{\protect\draw[line width=0.2mm, dash dot] (0,.5ex)--++(0.6,0) ;}~$(N/K)=(8/128)$.}\label{fig01}
\end{figure}
In our computer simulations, the e-LMMSE channel equalizer is compared to the SOTA (i.e., AQNM-LMMSE, B-LMMSE, N-LMMSE and NB-LMMSE) in terms of their MSE as well as bit-error-rate (BER) performances. The MSE is defined by
\begin{equation}\label{eqn064}
\mathrm{MSE}\triangleq\frac{1}{(N)(I)}\sum_{i=0}^{I-1}\|\mathbf{G}_i^\star\mathbf{y}_i-\mathbf{s}_i\|^2,
\end{equation}
where $I$ denotes the number of Monte Carlo trials.
All the simulation results were obtained by averaging over a sufficient number of Monte Carlo trials. For each trial, the wireless MIMO narrowband channel was generated according to the independent complex Gaussian distribution (Rayleigh in amplitude), which is the commonly used simulation setup in the literature \cite{7458830, 6987288}. In addition, the signal-to-noise ratio (SNR) is defined as the average received bit-energy per receive antenna to noise ratio (Eb/N0), and the transmit power of every transmit antenna is set to be identical. The low-resolution quantization process follows the design in \cite{7037311}: for the $1$-bit quantizer, binary quantization is taken; for quantizers other than $1$-bit (i.e., $2$- and $3$-bit), the ideal AGC is assumed and the quantization is determined by the quantization steps \cite{1057548}.
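As a minimal sketch of one such Monte Carlo trial (illustrative only: parameter values are examples, and the placeholder zero-forcing equalizer should be replaced by the equalizers under test), the setup can be reproduced along the following lines:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def qam16(n):
    # unit-average-power 16-QAM: levels {-3,-1,1,3}/sqrt(10)
    re = rng.choice([-3, -1, 1, 3], size=n)
    im = rng.choice([-3, -1, 1, 3], size=n)
    return (re + 1j * im) / np.sqrt(10)

def quantize(r, tau, levels):
    # scalar quantizer of Eq. (19), applied to the real
    # and imaginary parts separately
    q = lambda x: levels[np.searchsorted(tau, x) - 1]
    return q(np.real(r)) + 1j * q(np.imag(r))

N, K, N0 = 4, 64, 0.1   # users, receive antennas, noise power
H = (rng.standard_normal((K, N))
     + 1j * rng.standard_normal((K, N))) / np.sqrt(2)
s = qam16(N)
v = np.sqrt(N0 / 2) * (rng.standard_normal(K)
                       + 1j * rng.standard_normal(K))
# 1-bit example: tau = (-inf, 0, inf), levels = {-1, +1}
y = quantize(H @ s + v, np.array([-np.inf, 0.0, np.inf]),
             np.array([-1.0, 1.0]))
G = np.linalg.pinv(H)   # placeholder equalizer
mse_trial = np.sum(np.abs(G @ y - s) ** 2) / N  # one term of Eq. (64)
\end{verbatim}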
\begin{figure*}[t]
\centering
\includegraphics[scale=0.35]{MSE_23bit_comparisons_dB.eps}
\caption{The MSE performance as a function of Eb/N0 for the $N$-by-$K$ multiuser-MIMO systems with $2$-bit and $3$-bit quantizers,
\protect\tikz[baseline]{\protect\draw[line width=0.2mm, dashed] (0,.5ex)--++(0.6,0) ;}~$(N/K)=(2/32)$,
\protect\tikz[baseline]{\protect\draw[line width=0.2mm] (0,.5ex)--++(0.6,0) ;}~$(N/K)=(4/64)$,
\protect\tikz[baseline]{\protect\draw[line width=0.2mm, dash dot] (0,.5ex)--++(0.6,0) ;}~$(N/K)=(8/128)$.}\label{fig02}
\end{figure*}
According to the measures used in the computer simulations, we divide the simulation work into three experiments:
the first examines the MSE performance, the second the BER performance, and the third the channel estimation performance in terms of the sum SE.
In our simulation results, we demonstrate the performances mainly for $16$-QAM. This is due to two reasons:
{\em 1)} all types of LMMSE channel equalizers offer the same performance for M-PSK modulations;
this phenomenon has already been reported in the literature and also discussed in Section \ref{sec1};
and {\em 2)} higher-order QAM modulations exhibit almost the same basic features as $16$-QAM,
while performing worse than $16$-QAM due to their increased demand on the resolution of quantizers.
Those observations are not really novel and are thus omitted.
\subsubsection*{Experiment 1}\label{exp1}
The objective of this experiment is to examine the MSE performance of various LMMSE channel equalizers.
For all simulations, we keep the transmit-antenna to receive-antenna ratio constant (e.g., $N/K=1/16$).
\figref{fig01} depicts the MSE performances of various LMMSE channel equalizers as far as the $1$-bit quantizer is concerned.
Generally, it can be observed that all the MSE performances get improved by increasing the size of MIMO.
This phenomenon is fully in line with the principle of mMIMO.
It can also be observed that both the AQNM-LMMSE and B-LMMSE channel equalizers perform poorly throughout the whole SNR range.
This is because the AQNM models do not capture the scaling ambiguity described by the SOHE model. When the normalization operation is applied,
the AQNM-LMMSE and B-LMMSE channel equalizers turn into their corresponding N-LMMSE and NB-LMMSE equalizers, respectively.
Interestingly, their performances get significantly improved, thereby outperforming the e-LMMSE channel equalizer in most cases.
On one hand, this is additional evidence that the scaling ambiguity is missing from the AQNM models; on the other hand,
it shows that the NB-LMMSE is indeed the optimized LMMSE channel equalizer for the $1$-bit quantizer.
Nevertheless, we can see that the e-LMMSE approach still offers MSE performances very comparable with those of the N-LMMSE and NB-LMMSE approaches.
This provides indirect evidence that the SOHE model offers a good approximation for the $1$-bit quantizer.
Then, we carry on our simulations for the $2$- and $3$-bit low-resolution quantizers, respectively, and illustrate their MSE performances in \figref{fig02}.
It is perhaps worth emphasizing that the B-LMMSE and NB-LMMSE channel equalizers are not examined here, since they are devised only for the $1$-bit quantizer.
The first thing that comes into sight is that the e-LMMSE shows the best MSE performance in almost all the demonstrated cases.
This is good evidence supporting our theoretical work on the SOHE model as well as the SOHE-based LMMSE analysis.
\begin{figure*}[t]
\centering
\includegraphics[scale=0.27]{two_123bit_enhanced_comparison.eps}
\caption{The BER performance as a function of Eb/N0 for $N= 2$ transmitters, $16$-QAM systems with different resolutions of quantizers,
\protect\tikz[baseline]{\protect\draw[line width=0.2mm] (0,.5ex)--++(0.6,0) ;}~$K=32$ receive antennas,
\protect\tikz[baseline]{\protect\draw[line width=0.2mm, dash dot] (0,.5ex)--++(0.6,0) ;}~$K=16$ receive antennas.}\label{fig03}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[scale=0.27]{four_123bit_enhanced_comparison.eps}
\caption{The BER performance as a function of Eb/N0 for $N= 4$ transmitters, $16$-QAM systems with different resolutions of quantizers,
\protect\tikz[baseline]{\protect\draw[line width=0.2mm] (0,.5ex)--++(0.6,0) ;}~$K=64$ receive antennas,
\protect\tikz[baseline]{\protect\draw[line width=0.2mm, dash dot] (0,.5ex)--++(0.6,0) ;}~$K=32$ receive antennas.}\label{fig04}
\end{figure*}
When going down to the detail, specifically for the $2$-bit quantizer, the N-LMMSE approach demonstrates very comparable performance
with the e-LMMSE approach in the case of larger MIMO (i.e. $(N/K)=(8/128)$). However, its performance gets quickly degraded with the decrease of
the MIMO size. Take the example of Eb/N0$=5$ dB. For the case of $(N/K)=(8/128)$,
both the e-LMMSE and the N-LMMSE approaches have their MSEs at around $-22.6$ dB,
while the AQNM-LMMSE has its MSE at around $-16.8$ dB. Both the e-LMMSE and the N-LMMSE outperform the AQNM-LMMSE by around $6$ dB.
When the size of MIMO reduces to $(N/K)=(4/64)$, the e-LMMSE shows the best MSE (around $-21.2$ dB).
The MSE for N-LMMSE and AQNM-LMMSE becomes $-18.9$ dB and $-17.7$ dB, respectively.
The N-LMMSE underperforms the e-LMMSE by around $2.3$ dB, although it still outperforms the AQNM-LMMSE by around $1.2$ dB.
By further reducing the size of MIMO to $(N/K)=(2/32)$, the e-LMMSE has its MSE performance degraded to $-19.6$ dB.
The MSE for N-LMMSE and AQNM-LMMSE now becomes $-14.9$ dB and $-17.4$ dB, respectively.
The e-LMMSE outperforms the AQNM-LMMSE by around $2.2$ dB and the N-LMMSE by around $4.7$ dB.
The major reason for this phenomenon is that the AQNM model assumes the quantization distortion and the input signal to be Gaussian.
This assumption becomes less accurate when fewer transmit antennas are used. Moreover, using fewer receive antennas reduces the spatial de-noising ability, so the term used for normalization gets more negatively influenced by the noise as well as the quantization distortion.
The SOHE model does not assume the input signal and the quantization distortion to be Gaussian, and thus it suffers the least negative impact.
\begin{figure*}[t]
\centering
\includegraphics[scale=0.27]{eight_123bit_enhanced_comparison.eps}
\caption{The BER performance as a function of Eb/N0 for $N= 8$ transmitters, $16$-QAM systems with different resolutions of quantizers,
\protect\tikz[baseline]{\protect\draw[line width=0.2mm] (0,.5ex)--++(0.6,0) ;}~$K=128$ receive antennas,
\protect\tikz[baseline]{\protect\draw[line width=0.2mm, dash dot] (0,.5ex)--++(0.6,0) ;}~$K=64$ receive antennas.}\label{fig05}
\end{figure*}
Due to the same rationale, the similar phenomenon can also be observed for the $3$-bit quantizer.
Again, the e-LMMSE approach shows the best performance in almost all the cases. Apart from that, there are two notable differences worth mentioning:
{\em 1)} the performance of the AQNM-LMMSE is quite close to that of the e-LMMSE for all sizes of MIMO. This is because the $3$-bit quantizer is of reasonably good resolution for $16$-QAM modulations, and this largely mitigates the discrimination between the AQNM model and the SOHE model;
and {\em 2)} the N-LMMSE performs rather poorly when compared with the others. This implies the inaccuracy of using the term $\|\mathbf{G}^\star_{\eqref{eqn009}}\mathbf{y}\|$ for the normalization.
Overall, the experiment on the MSE evaluation confirms our theoretical work in Sections \ref{sec2}-\ref{sec4} and demonstrates the major
advantages of the SOHE model as well as the e-LMMSE channel equalizer from the MSE perspective.
\subsubsection*{Experiment 2}\label{exp2}
It is common knowledge that an MMSE-optimized approach is not necessarily optimized for the detection performance.
This motivates us to examine the average-BER performance for various LMMSE channel equalizers in this experiment.
Basically, this experiment is divided into three sub-tasks, with each having a fixed number of transmit antennas.
\figref{fig03} depicts the case of $N=2$ transmit antennas. Generally, the use of more receive antennas can largely improve the BER performance.
This conclusion is true for all types of examined low-resolution quantizers. In other words, all LMMSE channel equalizers can enjoy the
receiver-side spatial diversity.
Specifically for the $1$-bit quantizer, the AQNM-based LMMSE approaches (i.e., AQNM-LMMSE and B-LMMSE) generally underperform their corresponding normalized version (i.e., N-LMMSE and NB-LMMSE). This phenomenon fully coincides with their MSE behaviors shown in
{\em Experiment 1} (\figref{fig01}). The e-LMMSE approach does not demonstrate remarkable advantages in this special case. It offers the best BER
at the SNR range around Eb/N0 $=2$ dB, and then the BER grows with the increase of SNR. Such a phenomenon is not surprising, and it occurs quite
often in systems with low-resolution quantizers and other non-linear systems due to the physical phenomenon called stochastic resonance \cite{RevModPhys.70.223}. A similar phenomenon also occurs in the AQNM-LMMSE approach. It means that, for low-resolution quantized systems, additive noise can be constructive to the signal detection at certain SNRs, especially for QAM constellations (e.g. \cite{7247358, 7894211, 9145094, jacobsson2019massive, She2016The}).
The theoretical analysis of constructive noise in signal detection can be found in Kay's work \cite{809511}
(interested readers are referred to Appendix \ref{E} for an elaboration of the phenomenon of constructive noise).
Interestingly, the normalized approaches do not show a
considerable stochastic-resonance phenomenon within the observed SNR range.
\begin{figure*}[t]
\centering
\includegraphics[scale=0.27]{four_123bit_SE_comparison.eps}
\caption{The sum SE as a function of Eb/N0 for $N= 4$ transmitters for systems with different resolutions of quantizers, different LMMSE based channel estimators and ZF channel equalizer.
\protect\tikz[baseline]{\protect\draw[line width=0.2mm] (0,.5ex)--++(0.6,0) ;}~$K=64$ receive antennas,
\protect\tikz[baseline]{\protect\draw[line width=0.2mm, dash dot] (0,.5ex)--++(0.6,0) ;}~$K=32$ receive antennas.}\label{fig06}
\end{figure*}
When the resolution of the quantizer increases to $b=2$ bits, the e-LMMSE approach demonstrates significant performance gains in most cases.
For instance, the e-LMMSE significantly outperforms the AQNM-LMMSE in the higher SNR range (i.e., Eb/N0 $>0$ dB).
The N-LMMSE approach performs the worst in all the cases. This observation is well in line with our observation in the MSE performance
(see \figref{fig02}), and they share the same rationale.
When the resolution of quantizer increases to $b=3$ bit, both the e-LMMSE and the AQNM-LMMSE approaches offer excellent BER performances.
Their performances are very close to each other, and the e-LMMSE only slightly outperforms the AQNM-LMMSE for the case of $K=16$.
The reason for this phenomenon is the same as that for the MSE performance, which has also been explained in {\em Experiment 1}.
In a short summary, the e-LMMSE approach shows significant advantages for $2$-bit quantizers. This is the case where the SOHE model offers a
better mathematical description than the AQNM models, while the resolution is not high enough to support higher-order modulations.
This is particularly true for the case of $N=2$ transmit antennas, where the input signal and quantization distortion can hardly be assumed to be white Gaussian.
Now, we increase the number of transmit antennas $(N)$ to investigate how the BER performance will be influenced.
Accordingly, the number of receive antennas $(K)$ is also increased. The BER results for the case of $N=4$ are plotted in \figref{fig04}.
Let us begin with the $3$-bit quantizer. We have almost the same observation as for the case of $N=2$ transmit antennas.
The e-LMMSE approach performs slightly better than the AQNM-LMMSE approach. The performance difference is not really considerable.
When it comes to the case of $2$-bit quantizer, their difference in BER gets largely increased, and the e-LMMSE approach
demonstrates significant advantages. It is worth noting that the N-LMMSE approach offers comparable performances
with the AQNM-LMMSE approach. This is because the increase of transmit antennas brings the input signal and quantization distortion closer to
the white Gaussian. This rationale has also been explained in the MSE analysis. For the case of the $1$-bit quantizer, not much new phenomenon is observed
in comparison with the case of $N=2$ transmit antennas, apart from the fact that the stochastic-resonance phenomenon becomes less significant.
When the number of transmit antennas increases to $N=8$, the BER results are plotted in \figref{fig05}.
For the case of the $3$-bit quantizer, the e-LMMSE approach demonstrates a slightly more considerable gain,
and the N-LMMSE approach gets its performance even closer to the others.
A similar phenomenon can be observed for the case of the $2$-bit quantizer, where the N-LMMSE offers performance considerably close to that of
the e-LMMSE approach. The AQNM-LMMSE approach performs the worst. This phenomenon has also been observed in the MSE analysis.
Again, for the $1$-bit quantizer, the NB-LMMSE approach offers the best BER performance, as it is devised and optimized for this special case.
Similar to the phenomenon observed in {\em Experiment 1}, the performance of the e-LMMSE is not the best for the $1$-bit quantized system. This is because, for the $1$-bit quantized system, there exists an optimum LMMSE channel equalizer using the arcsine law \cite{Mezghani2012,Papoulis_2002}.
Nevertheless, the proposed e-LMMSE approach can still provide performance comparable to that of the closed-form approach. When it comes to the $3$-bit quantizer, it can be found that the e-LMMSE has only a slight BER gain over the AQNM-LMMSE. It is known that one of the characteristics of the SOHE model is that it is not based on the Gaussian quantization-noise assumption. However, when the resolution of the quantizer rises to $3$ bits, the distribution of the quantization noise closely approximates the Gaussian distribution, which results in similar performances between the e-LMMSE and the AQNM-LMMSE.
\subsubsection*{Experiment 3}\label{exp3}
In response to the reviewers' comments, we add this experiment to examine the SOHE-based channel estimation and its corresponding channel equalization.
For this experiment, SOTA approaches include the results reported in \cite{7931630, 7894211,rao2021massive}.
It is perhaps worth noting that \cite{rao2021massive} considers the use of a sigma-delta quantizer, which takes advantage of oversampling to achieve an enhanced performance.
This is, however, not the case for our work or for those in \cite{7931630, 7894211}.
For the sake of fair comparison, we only conduct the performance comparison between our work and the results in \cite{7931630, 7894211}.
In this experiment, the performance is evaluated through the sum SE defined by \cite{rao2021massive}
\begin{equation}\label{eqn067}
\mathrm{SE} =\frac{T-P}{T}\sum_{n=1}^{N}R_n,
\end{equation}
where $T$ is the length of the coherence interval, and $R_n$ the achievable rate for each transmitter-to-receiver link defined in \cite{7931630, 7894211}.
This is because the sum SE is the metric widely considered in the SOTA \cite{7931630, 7894211,rao2021massive}, where $T$ is commonly set to $200$ (so that, with $P=N=4$ as considered below, the prefactor is $(T-P)/T=0.98$).
Similar to \eqref{eqn003}, the mathematical model for low-resolution quantized mMIMO channel estimation is given in the vectorized form
\begin{equation}\label{eqn065}
\mathbf{r}_p = \bar{\mathbf{\Phi}}\bar{\mathbf{h}}+\bar{\mathbf{v}}_p,
\end{equation}
where $\bar{\mathbf{h}}=\mathrm{vec}(\mathbf{H})$, $\bar{\mathbf{\Phi}}=(\mathbf{\Phi} \otimes \mathbf{I}_K)$ and $\mathbf{\Phi}\in \mathbb{C}^{N\times P}$ is the pairwise orthogonal pilot matrix, which is composed of submatrices of the discrete Fourier transform (DFT) operator \cite{Biguesh_1bit}. During training, all $N$ users simultaneously transmit their pilot sequences of $P$ symbols to the BS.
Feeding \eqref{eqn065} to the low-resolution quantizer, we obtain the output $\mathbf{y}_p \in \mathbb{C}^{KP\times 1}$, similar to \eqref{eqn004}.
Regarding the LMMSE channel estimation algorithms, we adopt the closed-form B-LMMSE estimator for the $1$-bit quantized model in \cite{7931630}, and the AQNM-LMMSE and N-LMMSE estimators for other resolutions.
Those channel estimators are compared with the SOHE-LMMSE channel estimator in \eqref{eqn063}.
Given the LMMSE estimator $\mathbf{W}^*$, the channel estimate can be expressed as $\hat{\mathbf{H}}=\mathrm{unvec}(\mathbf{W}^*\mathbf{y}_p)$. For the sake of fairness, we employ the zero-forcing (ZF) algorithm for the channel equalization, as it has been used by the SOTA, i.e.,
$\mathbf{G}_{\text{ZF}} = \mathbf{\hat{\mathbf{H}}}^H(\hat{\mathbf{H}}\hat{\mathbf{H}}^H)^{-1}$.
\figref{fig06} depicts the sum SE performance of various LMMSE channel estimators for $N=4$ transmitters and $K=32, 64$ receive antennas. The length of the pilot is set to $P=N$. Similar to the phenomena observed in the above experiments, increasing the number of receive antennas and the resolution of the quantizers offers significant SE gains.
When the resolution of the quantizer is $b=1$ bit, the B-LMMSE algorithm has the best sum SE among the LMMSE channel estimators, and the gap can be approximately 4 bit/s/Hz. This phenomenon is not surprising, as the B-LMMSE is the closed-form algorithm for the $1$-bit quantized model \cite{7931630}. The SOHE-LMMSE and AQNM-LMMSE channel estimators do not demonstrate advantages in this special scenario, but it can be found that the SOHE-LMMSE achieves almost the same sum SE as the N-LMMSE channel estimator, while the AQNM-LMMSE approach performs the worst in this model.
When the resolution of the quantizer increases to $b=2$ bits, all three types (i.e., SOHE-LMMSE, AQNM-LMMSE and N-LMMSE) of channel estimators share a similar sum SE. For instance, their sum SE reaches 16 bit/s/Hz for $K=32$ and 20 bit/s/Hz for $K=64$ in the four-user system. When it comes to the case of the $3$-bit quantizer, we have almost the same observation as for the case of the $b=2$ bit quantizer. The performance difference between the three types of channel estimators is not really considerable at high Eb/N0. When Eb/N0 $>0$ dB, for $K=64$, the AQNM-LMMSE channel estimator can slightly outperform the N-LMMSE and SOHE-LMMSE channel estimators. As discussed in Sections \ref{sec4}-\ref{sec5}, the scalar ambiguity is detrimental for QAM modulations. However, each element of the pilot matrix $\mathbf{\Phi}$ has unit power and all pilot sequences are pairwise orthogonal; similar to the analysis of the LMMSE channel equalization for PSK constellations, the scalar ambiguity does not show any side effect in this case. This explains why the SOHE-LMMSE channel estimator has the same sum SE as the existing LMMSE algorithms.
\section{Conclusion}
In this paper, a novel linear approximation method, namely SOHE, has been proposed to model the low-resolution quantizers.
The SOHE model was then extended from the real-scalar form to the complex-vector form, and the latter was applied and extensively studied in
the low-resolution quantized multiuser-MIMO uplink signal reception. It has been shown that the SOHE model does not require
those assumptions employed in the AQNM model as well as its variations. Instead, it uses the first-order Hermite kernel to model the
signal part and the second-order Hermite kernel to model the quantization distortion. This equips us with sufficient flexibility and
capacity to develop a deeper and novel understanding of the stochastic behavior and correlation characteristics of the quantized signal
as well as the non-linear distortion. Through our intensive analytical work, it has been unveiled that low-resolution quantization can result
in a scalar ambiguity. In the SOHE model, this scalar ambiguity is characterized by the coefficient of the first-order Hermite kernel.
However, it is not accurately characterized in other linear approximation models due to the white-Gaussian assumption.
When applying the SOHE model for the LMMSE analysis,
it has been found that the SOHE-LMMSE formula carries the Hermite coefficient, which equips the SOHE-LMMSE channel equalizer with
the distinct ability to remove the scalar ambiguity in the channel equalization. It has been shown that the SOHE-LMMSE formula involves
higher-order correlations, and this prevents the implementation of the SOHE-LMMSE channel equalizer. Nevertheless, it was also found that
the SOHE-LMMSE formula could be related to the AQNM-LMMSE formula through a certain linear transform. This finding motivated the
development of the e-LMMSE channel equalizer, which demonstrated significant advantages in the MSE and BER performance evaluation. All of
the above conclusions have been elaborated through extensive computer simulations in the independent Rayleigh-fading channels.
\appendices
\section{Proof of Theorem \ref{thm01}}\label{A}
With the equations \eqref{eqn028} and \eqref{eqn032}, the coefficient $\lambda_b$ can be computed as follows
\begin{IEEEeqnarray}{ll}\label{appa1}
\lambda_b&=\frac{-1}{\sqrt{\pi}}\sum_{m=0}^{M-1}x_m\int_{\tau_m}^{\tau_{m+1}}
\Big[\frac{\partial}{\partial x}\exp(-x^2)\Big]\mathrm{d}x,\\
&=\frac{-1}{\sqrt{\pi}}\sum_{m=0}^{M-1}x_m\int_{\tau_m}^{\tau_{m+1}}(-2x)\exp(-x^2)\mathrm{d}x,\label{appa2}\\
&=\frac{1}{\sqrt{\pi}}\sum_{m=0}^{M-1}x_m\Big(
\exp(-\tau_m^2)-\exp(-\tau_{m+1}^2)
\Big).\label{appa3}
\end{IEEEeqnarray}
We first examine the limit of $\lambda_b$ when $b\rightarrow\infty$. It is equivalent to the following case
\begin{IEEEeqnarray}{ll}
\lim_{b\rightarrow\infty}\lambda_b
&=\frac{1}{\sqrt{\pi}}\lim_{M\rightarrow\infty}\sum_{m=0}^{M-1}x_m\Big(
\exp(-\tau_m^2)\nonumber
\\&\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad-\exp(-\tau_{m+1}^2)\Big). \label{appa4}
\end{IEEEeqnarray}
For $M\rightarrow\infty$, the discrete-time summation in \eqref{appa4} goes back to the integral in \eqref{eqn028}.
Since it is an ideal quantization, we have $x_m=x$, thereby obtaining
\begin{equation}\label{appa5}
\lim_{b\rightarrow\infty}\lambda_b=\frac{2}{\sqrt{\pi}}\int_{-\infty}^{\infty}x^2\exp(-x^2)\mathrm{d}x=1.
\end{equation}
The derivation of \eqref{appa5} follows from the Gaussian moment identity $\int_{-\infty}^{\infty}x^2\exp(-x^2)\mathrm{d}x=\sqrt{\pi}/2$; see also \cite[p. 148]{Papoulis_2002}.
For the symmetric quantization, \eqref{appa3} can be written into
\begin{equation}\label{appa6}
\lambda_b
=\frac{2}{\sqrt{\pi}}\sum_{m=M/2}^{M-1}x_m\Big(
\exp(-\tau_m^2)-\exp(-\tau_{m+1}^2)
\Big).
\end{equation}
Consider the particular range of $x\in(\tau_m, \tau_{m+1}]$ and $\tau_m>0$, in which $\exp(-x^2)$ is a monotonically
decreasing function of $x$. Then, we have
\begin{equation}\label{appa7}
\exp(-\tau_m^2)\geq\exp(-x^2),~x\in(\tau_m, \tau_{m+1}].
\end{equation}
Consequently, we have
\begin{equation}\label{appa8}
(\tau_{m+1})\exp(-\tau_m^2)\geq\int_0^{\tau_{m+1}}\exp(-x^2)\mathrm{d}x.
\end{equation}
Applying \eqref{eqn034} and \eqref{appa8} into \eqref{appa6} results in
\begin{IEEEeqnarray}{ll}\label{appa9}
\lambda_b&=\frac{2}{\sqrt{\pi}}\sum_{m=M/2}^{M-1}\tau_{m+1}\Big(
\exp(-\tau_m^2)-\exp(-\tau_{m+1}^2)\Big),\\
&\geq\frac{2}{\sqrt{\pi}}\sum_{m=M/2}^{M-1}\Big[\int_0^{\tau_{m+1}}\exp(-x^2)\mathrm{d}x\nonumber\\
&\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad-(\tau_{m+1})\exp(-\tau_{m+1}^2)\Big],\\
&\geq\frac{2}{\sqrt{\pi}}\int_0^{\infty}\exp(-x^2)\mathrm{d}x=1.\label{appa11}
\end{IEEEeqnarray}
{\em Theorem \ref{thm01}} is therefore proved.
\section{Proof of Theorem \ref{thm02}}\label{B}
With the quantization noise model \eqref{eqn033}, the cross-correlation between $x$ and $q_b$ can be computed by
\begin{IEEEeqnarray}{ll}\label{appb1}
\mathbb{E}(xq_b)&\approx\mathbb{E}(x(4\omega_2x^2-2\omega_2)),\\
&\approx4\omega_2\mathbb{E}(x^3)-2\omega_2\mathbb{E}(x).\label{appb2}
\end{IEEEeqnarray}
With the condition C1), \eqref{appb2} is equivalent to
\begin{equation}\label{appb3}
\mathbb{E}(xq_b)\approx 4\omega_2\mathbb{E}(x^3).
\end{equation}
When $x$ is AWGN, the third-order term $\mathbb{E}(x^3)$ in \eqref{appb3} equals $0$ (see \cite[p. 148]{Papoulis_2002}).
This leads to the observation that $\mathbb{E}(xq_b)=0$,
and the first part of {\em Theorem \ref{thm02}} is therefore proved.
To prove the limit \eqref{eqn036}, we first study the coefficient $\omega_2$ in \eqref{eqn033}. For $b\rightarrow\infty$,
$\omega_2$ goes back to the formula specified in \eqref{eqn028}. Then, we can compute $\omega_2$ as follows
\begin{IEEEeqnarray}{ll}
\omega_2&=\frac{1}{8\sqrt{\pi}}\int_{-\infty}^{\infty}x\Big[\frac{\partial^2}{\partial x^2}\exp(-x^2)\Big]\mathrm{d}x,\label{appb4}\\
&=\frac{1}{8\sqrt{\pi}}\int_{-\infty}^{\infty}x\Big[(-2+4x^2)\exp(-x^2)\Big]\mathrm{d}x,\label{appb5}\\
&=-\frac{1}{4\sqrt{\pi}}\int_{-\infty}^{\infty}x\exp(-x^2)\mathrm{d}x\nonumber\\
&\quad\quad\quad\quad\quad\quad+\frac{1}{2\sqrt{\pi}}\int_{-\infty}^{\infty}x^3\exp(-x^2)\mathrm{d}x.\label{appb6}
\end{IEEEeqnarray}
It is well known that (also see \cite[p. 148]{Papoulis_2002})
\begin{equation}\label{appb7}
\int_{-\infty}^{\infty}x^l\exp(-x^2)\mathrm{d}x=0, ~l=1, 3;
\end{equation}
and thus we can obtain $\omega_2=0$ for the case of $b\rightarrow\infty$. Applying this result into \eqref{eqn033} leads to
the conclusion in \eqref{eqn036}.
\section{Proof of {\em Corollary \ref{cor2}}}\label{C}
With \eqref{eqn039}, we can compute $\mathbf{C}_{qq}$ as follows
\begin{IEEEeqnarray}{ll}
\mathbf{C}_{qq}
&=\mathbb{E}(\mathbf{q}_b\mathbf{q}_b^H),\label{app08}\\
&=4\omega_2^2\Big(4\underbrace{\mathbb{E}\Big(\Re(\mathbf{r})^2+j\Im(\mathbf{r})^2\Big)\Big(\Re(\mathbf{r})^2-j\Im(\mathbf{r})^2\Big)^T}_{\triangleq\mathbf{C}_{qq}^{(1)}}-\nonumber\\
&\quad2\underbrace{\mathbb{E}\Big(\Big(\Re(\mathbf{r})^2+j\Im(\mathbf{r})^2\Big)\otimes\mathbf{1}^T\Big)}_{\triangleq\mathbf{C}_{qq}^{(2)}}-\nonumber\\
&\quad2\underbrace{\mathbb{E}\Big(\mathbf{1}\otimes\Big(\Re(\mathbf{r})^2-j\Im(\mathbf{r})^2\Big)^T\Big)}_{\triangleq\mathbf{C}_{qq}^{(3)}}+\mathbf{1}\otimes\mathbf{1}^T\Big).
\label{app09}
\end{IEEEeqnarray}
We start from $\mathbf{C}_{qq}^{(2)}$ in \eqref{app09}. Given the conditions C2) and C3), the proof of {\em Corollary \ref{cor1}} shows
that $\mathbf{r}$ is asymptotically zero-mean complex
Gaussian with covariance approximately $\sigma_r^2\mathbf{I}$. Therefore,
\begin{IEEEeqnarray}{ll}
\mathbf{C}_{qq}^{(2)}&=\Big(\mathbb{E}\Big(\Re(\mathbf{r})^2\Big)+j\mathbb{E}\Big(\Im(\mathbf{r})^2\Big)\Big)\otimes\mathbf{1}^T,\label{app10}\\
&=\frac{\sigma_r^2}{2}(\mathbf{1}+j\mathbf{1})\otimes\mathbf{1}^T.\label{app11}
\end{IEEEeqnarray}
Analogously, the following result holds
\begin{equation}
\mathbf{C}_{qq}^{(3)}=\frac{\sigma_r^2}{2}\mathbf{1}\otimes(\mathbf{1}-j\mathbf{1})^T.\label{app12}
\end{equation}
Then, we can obtain
\begin{equation}\label{app13}
2\Big(\mathbf{C}_{qq}^{(2)}+\mathbf{C}_{qq}^{(3)}\Big)=\sigma_r^2\mathbf{1}\otimes\mathbf{1}^T.
\end{equation}
Now, we come to the last term $\mathbf{C}_{qq}^{(1)}$, which can be computed as follows
\begin{IEEEeqnarray}{ll}
\mathbf{C}_{qq}^{(1)}&=\mathbb{E}\Big(\Re(\mathbf{r})^2\Re(\mathbf{r}^T)^2\Big)
+\mathbb{E}\Big(\Im(\mathbf{r})^2\Im(\mathbf{r}^T)^2\Big)+\nonumber\\
&\quad j\Big(\mathbb{E}\Big(\Im(\mathbf{r})^2\Re(\mathbf{r}^T)^2\Big)-\mathbb{E}\Big(\Re(\mathbf{r})^2\Im(\mathbf{r}^T)^2\Big)\Big).
\label{app14}
\end{IEEEeqnarray}
Since $\Re(\mathbf{r})$ and $\Im(\mathbf{r})$ follow the identical distribution, we can easily justify
\begin{IEEEeqnarray}{ll}\label{app15}
\mathbb{E}\Big(\Re(\mathbf{r})^2\Re(\mathbf{r}^T)^2\Big)&=\mathbb{E}\Big(\Im(\mathbf{r})^2\Im(\mathbf{r}^T)^2\Big), \\
\mathbb{E}\Big(\Im(\mathbf{r})^2\Re(\mathbf{r}^T)^2\Big)&=\mathbb{E}\Big(\Re(\mathbf{r})^2\Im(\mathbf{r}^T)^2\Big).
\label{app16}
\end{IEEEeqnarray}
Applying \eqref{app15} into \eqref{app14} results in
\begin{equation}\label{app17}
\mathbf{C}_{qq}^{(1)}=2\mathbb{E}\Big(\Re(\mathbf{r})^2\Re(\mathbf{r}^T)^2\Big).
\end{equation}
Plugging \eqref{app17} and \eqref{app13} into \eqref{app09} yields
\begin{equation}\label{app17a}
\mathbf{C}_{qq}=4\omega_2^2\Big(8\mathbb{E}\Big(\Re(\mathbf{r})^2\Re(\mathbf{r}^T)^2\Big)+(1-\sigma_r^2)(\mathbf{1}\otimes\mathbf{1}^T)\Big).
\end{equation}
Since $\Re(r_k)$ is zero-mean Gaussian with variance $\sigma_r^2/2$, it is not hard to derive (see \cite[p. 148]{Papoulis_2002})
\begin{equation}\label{app18}
\mathbb{E}\Big(\Re(r_k)^4\Big)=\frac{3\sigma_r^4}{4}.
\end{equation}
Moreover, for $k\neq m$, the asymptotic independence of $r_k$ and $r_m$ gives
\begin{IEEEeqnarray}{ll}
\mathbb{E}\Big(\Re(r_k)^2\Re(r_m)^2\Big)&=\mathbb{E}\Big(\Re(r_k)^2\Big)\mathbb{E}\Big(\Re(r_m)^2\Big), _{\forall k\neq m,}\label{app19}\\
&=\frac{\sigma_r^4}{4}.\label{app20}
\end{IEEEeqnarray}
Applying \eqref{app18} and \eqref{app20} into \eqref{app17} yields
\begin{equation}\label{app21}
\mathbf{C}_{qq}^{(1)}=\frac{\sigma_r^4}{2}(2\mathbf{I}+\mathbf{1}\otimes\mathbf{1}^T).
\end{equation}
Further applying \eqref{app21} into \eqref{app17a} yields the result \eqref{eqn044}. {\em Corollary \ref{cor2}} is therefore proved.
\section{Proof of \eqref{eqn049}}\label{D}
Consider the element-wise cross-correlation between the $m^\mathrm{th}$ element of $\mathbf{q}_b$ (denoted by $q_m$) and the
$k^\mathrm{th}$ element of $\mathbf{s}$, i.e.,
\begin{IEEEeqnarray}{ll}
\mathbb{E}\Big(q_ms_k^*\Big)&=\mathbb{E}\Big(\Re(s_k)\Re(q_m)+\Im(s_k)\Im(q_m)\Big)+\nonumber\\
&\quad j\mathbb{E}\Big(\Re(s_k)\Im(q_m)-\Im(s_k)\Re(q_m)\Big),\label{app01}\\
&=2\mathbb{E}\Big(\Re(s_k)\Re(q_m)\Big).\label{app02}
\end{IEEEeqnarray}
Using \eqref{eqn033}, we can obtain
\begin{IEEEeqnarray}{ll}
\mathbb{E}\Big(\Re(s_k)\Re(q_m)\Big)&=\mathbb{E}\Big(\Re(s_k)(4\omega_2\Re(r_m)^2-2\omega_2)\Big),\nonumber\\
&=4\omega_2\mathbb{E}\Big(\Re(s_k)\Re(r_m)^2\Big).\label{app04}
\end{IEEEeqnarray}
The term $\Re(r_m)$ can be represented by
\begin{equation}\label{app05}
\Re(r_m)=\Re(s_k)\Re(h_{m,k})+\gamma_m+\Re(v_m),
\end{equation}
where $\gamma_m$ is the sum of all corresponding terms that are uncorrelated with $\Re(s_k)$, and $h_{m,k}$ is the $(m,k)^\mathrm{th}$
entry of $\mathbf{H}$. Define $\epsilon_m\triangleq\gamma_m+\Re(v_m)$. We apply \eqref{app05} into \eqref{app04} and obtain
\begin{IEEEeqnarray}{ll}
\mathbb{E}&\Big(\Re(s_k)\Re(r_m)^2\Big)=\Re(h_{m,k})^2\mathbb{E}\Big(\Re(s_k)^3\Big)+\nonumber\\
&\quad\quad\underbrace{2\Re(h_{m,k})\mathbb{E}\Big(\Re(s_k)^2\epsilon_m\Big)+\mathbb{E}\Big(\Re(s_k)\epsilon_m^2\Big)}_{=0}.\label{app06}
\end{IEEEeqnarray}
Plugging \eqref{app06} into \eqref{app04} yields
\begin{equation}\label{app07}
\mathbb{E}\Big(\Re(s_k)\Re(q_m)\Big)=4\omega_2\Re(h_{m,k})^2\mathbb{E}\Big(\Re(s_k)^3\Big).
\end{equation}
The condition C4) ensures that the third-order central moments $\mathbb{E}\Big(\Re(s_k)^3\Big)=0$. Hence, we can conclude
$\mathbb{E}\Big(q_ms_k^*\Big)=0, \forall m,k$. The result \eqref{eqn049} is therefore proved.
\section{Elaborative Explanation of the Phenomenon of Constructive Noise}\label{E}
In response to the review comments, we find it important to elaborate on the phenomenon of constructive noise in low-resolution signal detection. To better explain the phenomenon, we consider the case where two different information-bearing symbol blocks, termed $\mathbf{s}^{(1)}$ and $\mathbf{s}^{(2)}$ with $\mathbf{s}^{(1)}\neq\mathbf{s}^{(2)}$, are transmitted to the receiver separately.
In the case of very high SNR or perhaps more extremely the noiseless case, their corresponding received blocks can be expressed by
\begin{equation}\label{appe1}
\mathbf{r}^{(1)}=\mathbf{H}\mathbf{s}^{(1)},~\mathbf{r}^{(2)}=\mathbf{H}\mathbf{s}^{(2)},
\end{equation}
where the noise $\mathbf{v}$ is omitted for now because it is negligibly small.
In this linear system, there exists a perfect bijection between $\mathbf{s}$ and $\mathbf{r}$, and we have $\mathbf{r}^{(1)}\neq \mathbf{r}^{(2)}$.
For this reason, the receiver can reconstruct the information-bearing symbol block from $\mathbf{r}$ without error.
In this regime, the noise only introduces a detrimental impact to the signal detection.
However, such is not the case for the system with low-resolution ADC.
To make the concept more accessible, we consider the special case of a $1$-bit ADC, whose output is
\begin{equation}\label{appe2}
\mathbf{y}^{(1)}=\mathcal{Q}_b(\mathbf{H}\mathbf{s}^{(1)}),~\mathbf{y}^{(2)}=\mathcal{Q}_b(\mathbf{H}\mathbf{s}^{(2)}).
\end{equation}
The nonlinear function $\mathcal{Q}_b(\cdot)$ can destroy the input-output bijection that holds in the linear system.
Here, we use a simple numerical example to explain the phenomenon.
To fulfill the condition $\mathbf{s}^{(1)}\neq\mathbf{s}^{(2)}$, we let $\mathbf{s}^{(1)}=[-1+3j, 3-j]^T$ and $\mathbf{s}^{(2)}=[-3+1j, 1-3j]^T$.
Moreover, to simplify our discussion, we let $\mathbf{H}=[\mathbf{I}_{2\times2}, \mathbf{I}_{2\times2}]^T$.
Then, the output of the $1$-bit ADC is given by
\begin{equation}\label{appe3}
\mathbf{y}^{(1)}=\mathbf{y}^{(2)}=[-1+j, 1-j, -1+j, 1-j]^T.
\end{equation}
In the probability domain, we have
\begin{equation}\label{appe4}
\mathrm{Pr}(\mathbf{y}^{(1)}\neq\mathbf{y}^{(2)}|\mathbf{H}, \mathbf{s}^{(1)}, \mathbf{s}^{(2)})=0.
\end{equation}
This means that there is no bijection between $\mathbf{y}$ and $\mathbf{s}$ in this case; for this reason, the receiver is not able to successfully reconstruct $\mathbf{s}$ from $\mathbf{y}$, even in the noiseless case.
Now, we increase the noise power (or equivalently reduce the SNR).
Due to the increased randomness, a positive-amplitude sample can flip to a negative-amplitude one.
Denote by $s$ a real scalar drawn from the discrete finite set $\{-3, -1, 1, 3\}$ and by $v$ the Gaussian noise. It is straightforward to see that
\begin{equation}\label{appe5}
\mathrm{Pr}(s+v>0|s=-1)>\mathrm{Pr}(s+v>0|s=-3).
\end{equation}
As shown in \cite{9145094}, with the decrease of SNR from a large value (e.g., the noiseless case), the difference between these two probabilities will quickly increase at the beginning, and then converge to a certain non-zero value.
This means that the noise helps to discriminate between the ADC outputs $\mathbf{y}^{(1)}$ and $\mathbf{y}^{(2)}$, i.e.,
\begin{equation}\label{appe6}
\mathrm{Pr}(\mathbf{y}^{(1)}\neq\mathbf{y}^{(2)}|\mathbf{H}, \mathbf{s}^{(1)}, \mathbf{s}^{(2)})\neq 0,
\end{equation}
and this probability increases as the SNR decreases, which improves the signal detectability \cite{809511}.
Since the probability converges to a certain value at some SNR, further reducing the SNR will not improve the signal detectability but will only degrade the detection performance. For the general case, the converged probability of \eqref{appe6} can be found in \cite{9145094}, i.e.,
\begin{equation}\label{rev01}
\mathrm{Pr}(\mathbf{y}^{(1)}=\mathbf{y}^{(2)}|\mathbf{s}^{(1)}\neq\mathbf{s}^{(2)})
=\frac{(\mathcal{L}^N)(\mathcal{L}^N-1)}{2^{(2K+1)}},
\end{equation}
where $\mathcal{L}$ is the modulation order. Finally, as the resolution of the quantizer increases, the communication system becomes closer to a linear one, and the noise becomes less constructive.
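To make the above discussion concrete, the following minimal Python sketch (our own illustration, not part of the system model; we assume the $1$-bit quantizer takes the sign of the in-phase and quadrature components separately and that the noise is circularly-symmetric Gaussian) reproduces the numerical example of \eqref{appe3} and estimates the probability in \eqref{appe6} by Monte Carlo:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
H  = np.vstack([np.eye(2), np.eye(2)])        # H = [I_2, I_2]^T
s1 = np.array([-1 + 3j, 3 - 1j])
s2 = np.array([-3 + 1j, 1 - 3j])

def one_bit_adc(r):
    # 1-bit quantizer applied to I and Q components separately
    return np.sign(r.real) + 1j * np.sign(r.imag)

# Noiseless case: the quantizer destroys the bijection, y1 == y2
print(np.array_equal(one_bit_adc(H @ s1), one_bit_adc(H @ s2)))  # True

def prob_distinct(sigma, trials=200_000):
    # Monte Carlo estimate of Pr(y1 != y2) at noise std sigma
    shape = (trials, 4)
    v1 = (rng.standard_normal(shape) + 1j*rng.standard_normal(shape))*sigma/np.sqrt(2)
    v2 = (rng.standard_normal(shape) + 1j*rng.standard_normal(shape))*sigma/np.sqrt(2)
    y1 = one_bit_adc(H @ s1 + v1)             # broadcast over trials
    y2 = one_bit_adc(H @ s2 + v2)
    return np.mean(np.any(y1 != y2, axis=1))

for sigma in (0.05, 0.5, 2.0, 10.0):
    print(sigma, prob_distinct(sigma))
\end{verbatim}
The estimated probability is close to zero at very high SNR, grows as the noise power increases, and eventually saturates, in agreement with the behaviour described above.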
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
A covering array $CA(n,k,g)$ is a $k\times n$ array on $\mathbb{Z}_g$ with the property that any two rows are qualitatively independent, that is, every ordered pair of symbols from $\mathbb{Z}_g\times\mathbb{Z}_g$ appears as a column in the $2\times n$ subarray formed by the two rows. The number $n$ of columns
in such array is called its size. The smallest possible size of a covering array is denoted
\begin{equation*}
CAN(k,g)=\min_{n\in \mathbb{N}}\{n~:~ \mbox{there exists a } CA(n,k,g)\}
\end{equation*}
Covering arrays are generalisations of both orthogonal arrays and Sperner systems. Bounds and constructions of covering arrays have been derived from algebra, design theory, graph theory, set systems
and intersecting codes \cite{chatea, kleitman, sloane, stevens1}. Covering arrays have industrial applications in many disparate areas in which factors or components interact, for example, software and circuit testing, switching networks, drug screening and data compression \cite{korner,ser,Cohen}. In \cite{karen}, the definition of a covering array has been extended to include a graph structure.
\begin{definition}\rm (Covering arrays on graph). A covering array on a graph $G$ with alphabet size $g$ and $k=|V(G)|$ is a $k\times n$ array on $\mathbb{Z}_g$.
Each row in the array corresponds to a vertex in the graph $G$. The array has the property that any two rows which correspond to adjacent vertices in $G$ are qualitatively independent.
\end{definition}
\noindent A covering array on a graph $G$ will be denoted by $CA(n,G,g)$. The smallest possible covering array on a graph $G$ will be denoted
\begin{equation*}
CAN(G,g)=\min_{n\in \mathbb{N}}\{n~:~ \mbox{there exists a } CA(n,G,g)\}
\end{equation*}
Given a graph $G$ and a positive integer $g$, a covering array on $G$ with minimum size is called {\it optimal}. Seroussi and Bshouty proved that determining the existence of an optimal binary
covering array on a graph is an NP-complete problem \cite{ser}. We start with a review of some definitions and results from product graphs in Section \ref{productgraph}. In Section \ref{bound},
we show that for all graphs $G_1$ and $G_2$,
$$\max_{i=1,2}\{CAN(G_i,g)\}\leq CAN(G_1\Box G_2,g)\leq CAN(K_{\max_{i=1,2}\{\chi(G_i)\}},g).$$ We look for graphs $G_1$ and $G_2$ where the lower bound on $CAN(G_1\Box G_2)$ is
achieved. In Section \ref{Cayley}, we give families of Cayley graphs that achieve this lower bound on the covering array number of the graph product. In Section \ref{Approx}, we present a polynomial time
approximation algorithm with approximation ratio $\log(\frac{V}{2^{k-1}})$ for constructing covering array on
graph $G=(V,E)$ having more than one prime factor with respect to the Cartesian product.
\section{Preliminaries} \label{productgraph}
In this section, we give several definitions from product graphs that we use in this article.
A graph product is a binary operation on the set of all finite graphs. However, among all possible associative graph products,
the most extensively studied in the literature are the Cartesian product, the direct product,
the strong product and the lexicographic product.
\begin{definition}\rm
The Cartesian product of graphs $G$ and $H$, denoted by $G\Box H$, is the graph with
\begin{center}
$V(G\Box H) = \{(g, h) \lvert g\in V(G) \mbox{ and } h \in V(H)\}$,
\\ $E(G\Box H) = \{ (g, h)(g', h') \lvert g = g', hh' \in E(H), \mbox{ or } gg' \in E(G), h=h' \}$.
\end{center}
The graphs $G$ and $H$ are called the {\it factors} of the product $G \Box H$.
\end{definition}
\noindent In general, given graphs $G_1,G_2,\ldots,G_k$, the product $G_1 \Box G_2 \Box \cdots \Box G_k$ is the graph with vertex set
$V(G_1) \times V(G_2) \times \cdots \times V(G_k) $, and two vertices $(x_1,x_2,\ldots, x_k)$ and
$(y_1, y_2,\ldots,y_k)$ are adjacent if and only if $x_iy_i \in E(G_i)$ for exactly one index $1\leq i\leq k$ and $x_j = y_j$ for each index $j \not= i$.\\
\begin{definition}\rm
The direct product of graphs $G_1,G_2,...,G_k$, denoted by $G_1\times G_2\times \cdots \times G_k$, is the graph with vertex
set $V(G_1) \times V(G_2) \times \cdots \times V(G_k) $, and for which vertices $(x_1,x_2,...,x_k)$ and $(y_1,y_2,...,y_k)$ are
adjacent precisely if $x_iy_i \in E(G_i)$ for each index $i$.
\end{definition}
\begin{definition}\rm
The strong product of graphs $G_1,G_2,...,G_k$, denoted by $G_1\boxtimes G_2\boxtimes \cdots \boxtimes G_k$, is the graph with vertex set
$V(G_1) \times V(G_2) \times \cdots \times V(G_k) $, and distinct vertices $(x_1,x_2,\ldots,x_k)$ and $(y_1,y_2,\ldots,y_k)$ are adjacent if and only if
either $x_iy_i\in E(G_i)$ or $x_i=y_i$ for each $1\leq i\leq k$. We note that in general $E(\boxtimes_{i=1}^k {G_i}) \neq E(\Box_{i=1}^k G_i) \cup E(\times_{i=1}^k G_i)$, unless $k=2$.
\end{definition}
\begin{definition}\rm
The lexicographic product of graphs $G_1,G_2,...,G_k$, denoted by $G_1\circ G_2\circ \cdots \circ G_k$, is the graph with
vertex set $V(G_1) \times V(G_2) \times \cdots \times V(G_k) $, and two vertices $(x_1,x_2,...,x_k)$ and $(y_1,y_2,...,y_k)$ are
adjacent if and only if for some index $j\in \{1,2,...,k\}$ we have $x_jy_j \in E(G_j)$ and $x_i =y_i$ for each index $1\leq i < j$.
\end{definition}
Let $G$ and $H$ be graphs with vertex sets $V(G)$ and $V(H)$, respectively. A {\it homomorphism} from $G$ to $H$ is a map
$\varphi~:~V(G)\rightarrow V(H)$ that preserves adjacency: if $uv$ is an edge in $G$, then $\varphi(u)\varphi(v)$ is an edge in $H$.
We say $G\rightarrow H$ if there is a homomorphism from $G$ to $H$, and $G \equiv H$ if $G\rightarrow H$ and $H\rightarrow G$.
A {\it weak homomorphism} from $G$ to $H$ is a map $\varphi~:~V(G)\rightarrow V(H)$ such that if $uv$ is an edge in $G$, then either
$\varphi(u)\varphi(v)$ is an edge in $H$, or $\varphi(u)=\varphi(v)$. Clearly every homomorphism is automatically a weak homomorphism.
Let $\ast$ represent either the Cartesian, the direct or the strong product of graphs, and consider a product $G_1\ast G_2\ast \ldots\ast G_k$.
For any index $i$, $1\leq i\leq k$, a {\it projection map} is defined as:
$$p_i~:~G_1\ast G_2\ast \ldots\ast G_k \rightarrow G_i ~\mbox{where} ~p_i(x_1,x_2,\ldots,x_k)=x_i.$$ By the definition of the Cartesian, the direct, and the strong product of
graphs, each $p_i$ is a weak homomorphism. In the case of the direct product, as $(x_1,x_2,\ldots,x_k)(y_1,y_2,\ldots,y_k)$ is an edge of $G_1\times G_2\times\cdots\times G_k$
if and only if $x_iy_i\in E(G_i)$ for each $1\leq i\leq k$, each projection $p_i$ is actually a homomorphism. In the case of the lexicographic product, the first projection map, that is, the projection on the first component, is a weak homomorphism, whereas in general the projections to the other
components are not weak homomorphisms. \\
A graph is {\it prime} with respect to a given graph product if it is nontrivial and cannot be represented as the product of two nontrivial
graphs. For the Cartesian product,
it means that a nontrivial graph $G$ is prime if $G=G_1\Box G_2$ implies that either $G_1$ or $G_2$ is $K_1$. A similar observation holds
for the other three products. The uniqueness of the prime factor decomposition of connected graphs with respect to the
Cartesian product was first shown by Sabidussi $(1960)$, and independently by Vizing $(1963)$. Prime factorization is not unique
for the Cartesian product in the class of possibly disconnected simple graphs \cite{HBGP}; however, any connected graph factors
uniquely into prime graphs with respect to the Cartesian product.
\begin{theorem}(Sabidussi-Vizing)
Every connected graph has a unique representation as a product of prime graphs, up to isomorphism and the order of the factors. The number of prime factors is
at most $\log_2 {V}$.
\end{theorem}
\noindent For any connected graph $G=(V,E)$, the prime factors of $G$ with respect to the Cartesian product can be computed in $O(E \log V)$ time and $O(E)$ space. See Chapter 23, \cite{HBGP}.
\section{Graph products and covering arrays}\label{bound}
Let $\ast$ represent either the Cartesian, the direct, the strong, or the lexicographic product operation.
Given covering arrays $CA(n_1,G_1,g)$ and $CA(n_2,G_2,g)$, one can construct a covering array on $G_1 \ast G_2$ as follows: the row corresponding
to the vertex $(a,b)$ is obtained by horizontally concatenating the row corresponding to the vertex $a$ in $CA(n_1,G_1,g)$ with the row
corresponding to the vertex $b$ in $CA(n_2,G_2,g)$. Hence an obvious upper bound for the covering array number is given by
\begin{center}
$CAN(G_1 \ast G_2, g) \leq CAN(G_1, g) + CAN(G_2, g) $
\end{center}
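For illustration, this concatenation construction can be written in a few lines of Python (a sketch of ours; vertex $(a,b)$ of the product is encoded as the integer $a\cdot k_2+b$):
\begin{verbatim}
import numpy as np

def product_ca(CA1, CA2):
    # row (a, b) = row a of CA1 concatenated with row b of CA2
    k1, k2 = CA1.shape[0], CA2.shape[0]
    return np.vstack([np.concatenate([CA1[a], CA2[b]])
                      for a in range(k1) for b in range(k2)])

def cartesian_edges(edges1, k1, edges2, k2):
    # edge list of the Cartesian product, vertex (a, b) -> a*k2 + b
    E  = [(a*k2 + b, c*k2 + b) for (a, c) in edges1 for b in range(k2)]
    E += [(a*k2 + b, a*k2 + d) for a in range(k1) for (b, d) in edges2]
    return E
\end{verbatim}
Combined with the checker given in the introduction, one can verify on small instances that the concatenated array is indeed a covering array on $G_1\Box G_2$.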
We now propose some improvements of this bound. A column of a covering array is {\it constant} if, for some symbol $v$, every entry in the
column is $v$. In a {\it standardized } $CA(n,G,g)$ the first column is constant. Because symbols within each row can be permuted independently,
if a $CA(n,G,g)$ exists, then a standardized $CA(n,G,g)$ exists.
\begin{theorem}
Let $G=G_1\boxtimes G_2\boxtimes \cdots \boxtimes G_k$, $k\geq 2$ and $g$ be a positive integer.
Suppose for each $1\leq i\leq k$ there exists a $CA(n_i,G_i,g)$, then there exists a
$CA(n,G,g)$ where $n=\underset{i=1}{\overset{k}\sum} n_i -k$. Hence,
$CAN(G,g)\leq \underset{i=1}{\overset{k}\sum} CAN(G_i,g)-k$.
\end{theorem}
\begin{proof} Without loss of generality, we assume that for each $1\leq i\leq g$, the first column of $CA(n_i,G_i,g)$
is a constant column on symbol $i$ and for each $g+1\leq i\leq k$, the first column of $CA(n_i,G_i,g)$ is a constant
column on symbol 1.
Let $C_i$ be the array
obtained from $CA(n_i,G_i,g)$ by removing the first column. Form an array $A$ with
$\underset{i=1}{\overset{k}\prod} |V(G_i)|$ rows and
$\underset{i=1}{\overset{k}\sum} n_i -k$ columns, indexing rows as $(v_1,v_2,...,v_k)$, where $v_i\in V(G_i)$.
Row $(v_1,v_2,...,v_k)$ is
obtained by horizontally concatenating the rows correspond to the vertex $v_i$ of $C_i$, for $1\leq i\leq k$.
Consider two distinct rows $(u_1,u_2,\ldots,u_k)$ and $(v_1,v_2,\ldots,v_k)$ of $A$ which correspond to adjacent vertices in $G$.
Two distinct vertices $(u_1,u_2,\ldots,u_k)$ and $(v_1,v_2,\ldots,v_k)$ are adjacent if and only if
either $u_iv_i\in E(G_i)$ or $u_i=v_i$ for each $1\leq i\leq k$. Since the vertices are distinct, $u_iv_i\in E(G_i)$ for at least one index $i$.
When $u_i=v_i$, all pairs of the form $(a,a)$ are covered. When $u_iv_i\in E(G_i)$ all remaining pairs are covered because two different rows of $C_i$ correspond to adjacent vertices in $G_i$ are selected.
\end{proof}
\noindent Since both the Cartesian and the direct product of graphs are subgraphs of their strong product, we have the following result as a corollary.
\begin{corollary}
Let $G=G_1\ast G_2\ast \cdots \ast G_k$, $k\geq 2$ and $g$ be a positive integer, where $\ast\in\{\Box,\times\}$. Then,
$CAN(G,g)\leq \underset{i=1}{\overset{k}\sum} CAN(G_i,g)-k$.
\end{corollary}
\noindent The lemma given below will be used in Theorem \ref{product}.
\begin{lemma}\label{karenlemma} (Meagher and Stevens \cite{karen})
Let $G$ and $H$ be graphs. If $G\rightarrow H$ then $CAN(G,g)\leq CAN(H,g)$.
\end{lemma}
\begin{theorem}\label{product}
Let $G=G_1\times G_2\times \cdots \times G_k$, $k\geq 2$ and $g$ be a positive integer.
Suppose for each $1\leq i\leq k$ there exists a $CA(n_i,G_i,g)$. Then there exists a
$CA(n,G,g)$ where $n=\min\limits_{i} n_i$. Hence, $CAN(G,g)\leq \min\limits_{i} CAN(G_i,g)$.
\end{theorem}
\begin{proof}
Without loss of generality assume that $n_1 = \min\limits_{i} {n_i} $. It is known that $G_1\times G_2\times \cdots \times G_k\rightarrow G_1$. Using Lemma \ref{karenlemma}, we have $CAN(G,g)\leq CAN(G_1,g)$.
\end{proof}
\begin{theorem}
Let $G=G_1\circ G_2\circ \cdots \circ G_k$, $k\geq 2$ and $g$ be a positive integer.
Suppose for each $1\leq i\leq k$ there exists a $CA(n_i,G_i,g)$. Then there exists a
$CA(n,G,g)$ where $n=\underset{i=1}{\overset{k}\sum} n_i -k+1$. Hence,
$CAN(G,g)\leq \underset{i=1}{\overset{k}\sum} CAN(G_i,g)-k+1$.
\end{theorem}
\begin{proof} Without loss of generality, we assume that for each $1\leq i\leq k$, the first column of $CA(n_i,G_i,g)$
is a constant column on symbol $1$.
Let $C_1= CA(n_1,G_1,g)$, and for each $2\leq i\leq k$ let $C_i$ be the array with $n_i-1$ columns obtained from $CA(n_i,G_i,g)$ by removing the first column.
Form an array $A$ with $\underset{i=1}{\overset{k}\prod} |V(G_i)|$ rows and $\underset{i=1}{\overset{k}\sum} n_i -k+1$ columns, indexing
rows as $(v_1,v_2,..,v_k)$, $v_i\in V(G_i)$. Row $(v_1,v_2,\ldots,v_k)$ is obtained by horizontally
concatenating the rows correspond to the vertex $v_i$ of $C_i$, for $1\leq i\leq k$. If two vertices
$(v_1,v_2,...,v_k)$ and $(u_1,u_2,...,u_k)$ are adjacent in $G$ then either $v_1u_1\in E(G_1)$ or $v_ju_j\in E(G_j)$ for
some $j\geq 2$ and $v_i=u_i$ for each $i< j$. In the first case, the rows from $C_1$ cover each ordered pair of symbols, while in the second case
the rows from $C_j$ cover each ordered pair of symbols except possibly $(1,1)$. But this pair appears in each $C_i$ for $i<j$. Hence $A$
is a covering array on $G$.
\end{proof}
\begin{definition} \rm A {\it proper colouring} on a graph is an assignment of colours to each vertex such that adjacent vertices receive a different colour. The chromatic number of a graph $G$, $\chi(G)$,
is defined to be the size of the smallest set of colours such that a proper colouring exists with that set.
\end{definition}
\begin{definition}\rm
A {\it maximum clique} in a graph $G$ is a maximum set of pairwise adjacent vertices. The maximum clique number of a graph $G$, $\omega(G)$, is defined to be the size of a maximum clique.
\end{definition}
\noindent Since there are homomorphisms $K_{\omega(G)}\rightarrow G\rightarrow K_{\chi(G)}$, we can
find bound on the size of a covering array on a graph from the graph's chromatic number and clique number. For all graphs $G$,
$$CAN(K_{\omega(G)},g)\leq CAN(G,g)\leq CAN(K_{\chi(G)},g).$$
\noindent We have the following result on the proper colouring of product graphs \cite{chromatic}:
$$\chi(G_1 \Box G_2) = \max \{ \chi(G_1), \chi(G_2)\}.$$
For other graph products there are no explicit formulae for chromatic number but following bounds are mentioned in \cite{HBGP}.
$$\chi(G_1 \times G_2) \leq \min \{ \chi(G_1), \chi(G_2)\}$$
$$\chi(G_1 \boxtimes G_2) \leq \chi(G_1 \circ G_2) \leq \chi(G_1) \chi(G_2).$$
A proper colouring of $G_1 \ast G_2$ with $\chi(G_1 \ast G_2)$ colours is equivalent to a homomorphism from
$G_1 \ast G_2$ to $K_{\chi(G_1 \ast G_2)}$ for any $\ast \in\{\Box, \times, \boxtimes, \circ \}$.
Hence $$CAN(G_1 \Box G_2, g) \leq CAN(K_{\max\{ \chi(G_1), \chi(G_2)\}},g)$$
$$CAN(G_1 \times G_2, g) \leq CAN(K_{\min\{ \chi(G_1), \chi(G_2)\}},g) $$
$$CAN(G_1 \boxtimes G_2, g) \leq CAN(K_{\chi(G_1)\chi(G_2)},g) $$
$$CAN(G_1 \circ G_2, g) \leq CAN(K_{\chi(G_1)\chi(G_2)},g) .$$
\noindent Note that $G_1\rightarrow G_1 \ast G_2$ and $G_2\rightarrow G_1 \ast G_2$ for $\ast \in\{\Box,\boxtimes,\circ\}$
which gives
$$\max\{CAN(G_1, g), CAN(G_2, g)\}\leq CAN(G_1 \ast G_2, g).$$
We now describe the colouring construction of a covering array on a graph $G$. If $G$ is a $k$-colourable graph, then build a covering array $CA(n, k, g)$ and without loss of generality associate
row $i$ of $CA(n, k, g)$ with colour $i$ for $1\leq i\leq k$. In order to construct a $CA(n,G,g)$, we assign row $i$ of $CA(n, k, g)$ to all the vertices having colour $i$ in $G$.
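A minimal sketch of this colouring construction (ours, for illustration) is:
\begin{verbatim}
def colouring_construction(CA_rows, colouring):
    # CA_rows: the k rows of a CA(n, k, g), one per colour 0..k-1
    # colouring: dict vertex -> colour, a proper colouring of G
    return {v: CA_rows[c] for v, c in colouring.items()}
\end{verbatim}
Adjacent vertices receive distinct colours, hence distinct, and therefore qualitatively independent, rows of $CA(n,k,g)$.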
\begin{definition}\rm An orthogonal array $OA(k,g)$ is a $k\times g^2$ array with entries from $\mathbb{Z}_g$ having the properties that
in every two rows, each ordered pair of symbols from $\mathbb{Z}_g$ occurs exactly once.
\end{definition}
\begin{theorem}\label{OA} \cite{Colbourn} If $g$ is a prime or a prime power, then one can construct an $OA(g+1,g)$.
\end{theorem}
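For $g$ prime, Theorem \ref{OA} is realised by the classical construction over $\mathbb{Z}_g$: index the $g^2$ columns by the pairs $(a,b)\in\mathbb{Z}_g^2$, let row $m$ contain $ma+b \bmod g$ for $m=0,\ldots,g-1$, and add one further row containing $a$. A short Python sketch (for prime $g$ only; the prime power case requires arithmetic over the finite field $GF(g)$) is:
\begin{verbatim}
from itertools import combinations
import numpy as np

def oa_prime(g):
    # OA(g+1, g) for prime g; columns indexed by (a, b) in Z_g^2
    cols = [(a, b) for a in range(g) for b in range(g)]
    rows = [[(m*a + b) % g for (a, b) in cols] for m in range(g)]
    rows.append([a for (a, _) in cols])
    return np.array(rows)

OA = oa_prime(5)
# every ordered pair occurs exactly once in each pair of rows:
assert all(len(set(zip(r1, r2))) == 25 for r1, r2 in combinations(OA, 2))
\end{verbatim}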
The set of rows in an orthogonal array $OA(k,g)$ is a set of $k$ pairwise qualitatively independent vectors from
$\mathbb{Z}_g^{g^2}$. For $g=2$, by Theorem \ref{OA}, there are three qualitatively independent vectors from
$\mathbb{Z}_2^{4}$. Here we give some examples where the lower bound
on $CAN(G_1\Box G_2,g)$ is achieved, that is, $CAN(G_1\Box G_2,g)=\max\{CAN(G_1,g), CAN(G_2,g)\}.$
\begin{example} \rm If $G_1$ and $G_2$ are bicolorable graphs, then $\chi(G_1 \Box G_2)=2$. Let $x_1$ and $x_2$ be two qualitatively independent vectors
in $\mathbb{Z}_g^{g^2}$. Assign vector $x_i$ to all the vertices of $G_1 \Box G_2$ having colour $i$ for $i=1,2$ to get a covering array with $CAN(G_1 \Box G_2, g) = g^2.$
\end{example}
\begin{example}\rm If $G_1$ and $G_2$ are complete graphs, then $CAN(G_1 \Box G_2, g) = max\{CAN(G_1, g), CAN(G_2, g)\}. $
\end{example}
\begin{example} \rm If $G_1$ is bicolorable and $G_2$ is a complete graph on $k\geq 3$ vertices, then
$CAN(G_1 \Box G_2, g) = CAN(G_2, g)$. In general, if $\chi(G_1) \leq \chi(G_2)$ and $G_2$ is a complete graph, then
$CAN(G_1 \Box G_2, g) = CAN(G_2, g)$.
\end{example}
\begin{example} \rm If $P_m$ is a path of length $m$ and $C_n$ is an odd cycle of length $n$, then $\chi(P_m \Box C_n)=3$. Using Theorem \ref{OA}, we
get a set of three qualitatively independent vectors in $\mathbb{Z}_g^{g^2}$ for $g\geq 2$. Then the colouring construction of covering arrays gives us a covering
array on $P_m\Box C_n$ with $CAN(P_m\Box C_n, g) = g^2$.
\end{example}
\begin{lemma}\cite{HBGP} Let $G_1$ and $G_2$ be graphs and $Q$ be a clique of $G_1\boxtimes G_2$. Then
$Q= p_1(Q)\boxtimes p_2(Q)$, where $p_1(Q)$ and $p_2(Q)$ are cliques of $G_1$ and $G_2$, respectively.
\end{lemma}
Hence a maximum size clique of $G_1\boxtimes G_2$ is the product of maximum size cliques from $G_1$ and $G_2$. That is,
$\omega(G_1\boxtimes G_2)= \omega(G_1)\omega(G_2)$. Using graph homomorphisms, this yields another lower bound on
$CAN(G_1\boxtimes G_2,g)$, namely $CAN(K_{\omega(G_1)\omega(G_2)},g)\leq CAN(G_1\boxtimes G_2,g)$. Following are some examples
where this lower bound can be achieved.
\begin{example} \rm If $G_1$ and $G_2$ are bipartite graphs, each containing at least one edge, then
$\omega(G_1 \boxtimes G_2)= \chi(G_1\boxtimes G_2)=4$. Hence $CAN(G_1\boxtimes G_2,g)= CAN(K_4,g)$, which is of optimal
size.
\end{example}
\begin{example}\rm If $G_1$ and $G_2$ are complete graphs, then $G_1\boxtimes G_2$ is again a complete graph. Hence
$CAN(G_1\boxtimes G_2,g)= CAN(K_{\omega(G_1\boxtimes G_2)},g)$.
\end{example}
\begin{example}\rm If $G_1$ is a bipartite graph and $G_2$ is a complete graph on $k\geq 2$ vertices, then
$\omega(G_1\boxtimes G_2)= \chi(G_1\boxtimes G_2)= 2k$. Hence $CAN(G_1\boxtimes G_2,g)= CAN(K_{2k},g)$.
\end{example}
\begin{example}\rm If $P_m$ is a path of length $m$ and $C_n$ is an odd cycle of length $n$, then
$\omega(P_m\boxtimes C_n)=4$ and $\chi(P_m \boxtimes C_n)=5$. Here we have $CAN(K_4,g)\leq CAN(P_m\boxtimes C_n,g)\leq CAN(K_5,g)$. For a prime power $g\geq 4$, using Theorem \ref{OA}, we
get a set of five qualitatively independent vectors in $\mathbb{Z}_g^{g^2}$. Then the colouring construction of
covering arrays gives us a covering array on $P_m\boxtimes C_n$ with $CAN(P_m\boxtimes C_n, g) = g^2$.
\end{example}
\section{Optimal size covering arrays over the Cartesian product of graphs } \label{Cayley}
\begin{definition} \rm Two graphs $G_1=(V,E)$ and $G_2=(V^{\prime},E^{\prime})$ are said to be isomorphic if there is a bijection mapping $\varphi$ from the vertex set $V$ to the vertex set $V^{\prime}$ such that $(u,v)\in E$ if and only if $(\varphi(u),\varphi(v))\in E^{\prime}$. The mapping $\varphi$ is called an isomorphism. An automorphism of a graph is an isomorphism from the graph to itself.
\end{definition}
\noindent The set of all automorphisms of a graph $G$ forms a group, denoted $Aut(G)$, the automorphism group of $G$.
\begin{theorem}\label{A}
Let $G_1$ be a graph having the property that $Aut(G_1)$ contains a fixed point free automorphism which maps every vertex to its neighbour.
Then for any bicolourable graph $G_2$, $$CAN(G_1 \square G_2,g)=CAN(G_1,g).$$
\end{theorem}
\begin{proof} Consider the set $\Gamma=\{\phi \in Aut(G_1)~|~ \phi(u)\in N(u)-\{u\} \mbox{ for all } u\in V(G_1)\}$ where $N(u)$ denotes the set of neighbours of $u$.
From the assumption, $\Gamma$ is not empty.
Consider a 2-colouring of $G_2$ with colours $0$ and $1$. Let $W_0=\{(u,v)\in V(G_1\square G_2) ~|~\mbox{colour}(v)=0\}$ and $W_1=\{(u,v)\in V(G_1\square G_2) ~|~\mbox{colour}(v)=1\}$. Note that $W_0$ and $W_1$ partition $V(G_1\square G_2)$ into two parts.
Let the rows of covering array $CA(G_1,g)$ be indexed by $u_1,u_2,\ldots,u_k$.
Form an array $C$ with $|V(G_1 \Box G_2)|$ rows and $CAN(G_1,g)$
columns, indexing rows as $(u,v)$ for $1\leq u\leq |V(G_1)|$, $1\leq v \leq |V(G_2)|$. If $(u,v)\in W_0$, row $(u,v)$ is row $u$ of $CA(G_1,g)$; otherwise if
$(u,v)\in W_1$, row $(u,v)$ is row $\phi(u)$ of $CA(G_1,g)$. We verify that $C$ is a $CA(G_1\Box G_2, g)$. Consider two adjacent vertices $(u_1,v_1)$ and $(u_2,v_2)$
of $C$. \\ (i) Let $(u_1,v_1)$ and $(u_2,v_2)$ belong to $W_i$, then $(u_1,v_1)\sim(u_2,v_2)$ if and only if $u_1 \sim u_2$ and $v_1=v_2$.
When $(u_1,v_1)$ and $(u_2,v_2)$ belong to $W_0$, rows $(u_1,v_1)$ and $(u_2,v_2)$ are rows $u_1$ and $u_2$ of $CA(G_1,g)$ respectively.
As $u_1\sim u_2$, rows $u_1$ and $u_2$ are
qualitatively independent in $CA(G_1,g)$. When $(u_1,v_1)$ and $(u_2,v_2)$ belong to $W_1$, rows $(u_1,v_1)$ and $(u_2,v_2)$ are rows $\phi(u_1)$ and $\phi(u_2)$ of $CA(G_1,g)$ respectively. As $\phi(u_1)\sim \phi(u_2)$, rows $\phi(u_1)$ and $\phi(u_2)$ are
qualitatively independent in $CA(G_1,g)$. Therefore, rows $(u_1,v_1)$ and $(u_2,v_2)$ are
qualitatively independent in $C$.\\
(ii) Let $(u_1,v_1)\in W_0$ and $(u_2,v_2)\in W_1$. In this case, $ (u_1,v_1) \sim (u_2,v_2)$ if and only if $u_1=u_2$ and $v_1\sim v_2$. Let $u_1=u_2=u$. Rows $(u,v_1)$ and $(u,v_2)$ are rows $u$ and $\phi(u)$ of $CA(G_1,g)$.
As $\phi $ is a fixed point free automorphism that maps every vertex to its neighbour, $u$ and $\phi(u)$ are adjacent in $G_1$. Therefore, the rows indexed by $u$ and $\phi(u)$ are qualitatively independent
in $CA(G_1,g)$; therefore, rows $(u_1,v_1)$ and $(u_2,v_2)$ are
qualitatively independent in $C$.\\
\end{proof}
\begin{definition}\rm
Let $H$ be a finite group and $S$ be a subset of $H\smallsetminus \{id\}$ such that $S = -S$ (i.e., $S$ is closed under inverse). The Cayley graph of $H$ generated by $S$, denoted
$Cay(H,S)$, is the undirected graph $G=(V,E)$ where $V=H$ and $E=\{(x,sx)~|~x\in H, s\in S\}$. The Cayley graph is connected if and only if $S$ generates $H$.
\end{definition}
\noindent Throughout this article, by $S = -S$ we mean that $S$ is closed under inverses for the given group operation.
\begin{definition}\rm
A circulant graph $G(n,S)$ is a Cayley graph on $\mathbb{Z}_n$. That is, it is a graph whose vertices are labelled $\{0,1,\ldots,n-1\}$, with two vertices labelled $i$ and
$j$ adjacent iff $i-j ~(\mbox{mod}~n)\in S$, where $S\subset \mathbb{Z}_n$ with $S=-S$ and $0\notin S$.
\end{definition}
\begin{corollary}
Let $G_1(n,S)$ be a circulant graph and $G_2$ be a bicolorable graph, then $CAN(G_1(n,S) \Box G_2, g) = CAN(G_1(n,S), g)$.
\end{corollary}
\begin{proof} Let $i$ and $j$ be any two adjacent vertices in $G_1(n,S)$. We define a mapping $\phi$ on $\mathbb{Z}_n$ as follows:
\begin{center}
$\phi(k) = k+j-i ~(\mbox{mod}~ n)$
\end{center}
It is easy to verify that $\phi$ is an automorphism
and it sends every vertex to its neighbour. Hence $\phi \in \Gamma$ and the result
follows.
\end{proof}
For a group $H$ and $S \subseteq H$, we denote conjugation of $S$ by elements of itself as
\begin{center}
$S^S = \{ ss's^{-1} | s, s'\in S\}$
\end{center}
\begin{corollary}
Let $H$ be a finite group and $S \subseteq H\smallsetminus \{id\}$ is a generating set for $H$ such that $S = -S$ and
$S^S = S$. Then for
$G_1 = Cay(H, S)$ and any bicolorable graph $G_2$,
\begin{center}
$CAN(G_1 \Box G_2, g) = CAN(G_1, g)$
\end{center}
\end{corollary}
\begin{proof}
We will show that there exists a $\phi \in Aut(G_1)$ that is fixed point free and maps every vertex to a neighbour.
Define $\phi : H \rightarrow H$ as $\phi(h) = sh$ for some $s\in S$.
It is easy to check that $\phi$ is bijective and, since $s \neq id$, it is fixed point free. Now, to prove it is a
graph homomorphism, we need to show it is an adjacency preserving map. It is sufficient to prove that $(h, s'h)\in E(G_1)$
implies $(sh, ss'h) \in E(G_1)$. As $ss'h = ss's^{-1}sh$ and $ss's^{-1} \in S$, we have $(sh, ss'h)\in E(G_1)$.
Hence $\phi \in \Gamma $ and Theorem \ref{A} implies the result.
\end{proof}
\begin{example}\rm
For any abelian group $H$ and any generating set $S$ such that $S = -S$ and $id \notin S$, we always have $S^S = S$.
\end{example}
\begin{example}\rm
For $H = Q_8 = \{\pm1, \pm i, \pm j, \pm k\}$ and $S = \{\pm i, \pm j\}$, we have $S^S = S$ and $S = -S$.
\end{example}
\begin{example}\rm
For $H= D_8 = \langle a, b | a^2 = 1 = b^4, aba = b^3\rangle$ and $S= \{ab, ba\}$, we have $S^S = S$ and $S = -S$.
\end{example}
\begin{example}\rm
For $H = S_n$ and $S$ the set of all even cycles, we have $S^S = S$ and $S = -S$.
\end{example}
\begin{theorem}
Let $H$ be a finite group and $S$ be a generating set for $H$ such that
\begin{enumerate}
\item $S = -S$ and $id \notin S$
\item $S^S = S$
\item there exist $s_1$ and $s_2$ in $S$ such that $s_1 \neq s_2$ and $s_1s_2 \in S$
\end{enumerate}
then for $G_1= Cay(H, S)$ and any three colourable graph $G_2$
\begin{center}
$CAN(G_1 \Box G_2, g) = CAN(G_1,g)$
\end{center}
\end{theorem}
\begin{proof} Define three distinct automorphisms of $G_1$, $\sigma_{i} : H\rightarrow H$, for $i=0,1,2$, as $\sigma_0(u)=u$, $\sigma_1(u)=s_1u$, $\sigma_2(u)=s_2^{-1}u$.
Consider a three colouring of $G_2$ using the colours $0, 1$ and $2$. Let $W_i=\{(u,v)\in V(G_1\square G_2) ~|~\mbox{colour}(v)=i\}$ for $i=0,1,2$.
Note that $W_0 $, $W_1$, and $W_2$ partition $V(G_1\square G_2)$ into three parts.
Let the rows of covering array $CA(G_1,g)$ be indexed by $u_1,u_2,\ldots,u_k$. Using $CA(G_1,g)$, form an array $C$ with $|V(G_1 \Box G_2)|$ rows and $CAN(G_1,g)$
columns, indexing rows as $(u,v)$ for $1\leq u\leq |V(G_1)|$, $1\leq v \leq |V(G_2)|$. If $(u,v)\in W_i$, row $(u,v)$ is row $\sigma_i(u)$ of $CA(G_1,g)$. Consider two adjacent vertices $(u_1,v_1)$ and $(u_2,v_2)$ of $C$. \\
(i) Let $(u_1,v_1)$ and $(u_2,v_2)$ belong to $W_i$. In this case, $(u_1,v_1)\sim(u_2,v_2)$ if and only if $u_1 \sim u_2$ and $v_1=v_2$.
When $(u_1,v_1)$ and $(u_2,v_2)$ belong to $W_0$, rows $(u_1,v_1)$ and $(u_2,v_2)$ are rows $u_1$ and $u_2$ of $CA(G_1,g)$.
As $u_1 \sim u_2$ in $G_1$, the rows $u_1$ and $u_2$ are qualitatively independent in $CA(G_1,g)$. Let $(u_1,v_1)$ and $(u_2,v_2)$ belong to $W_1$ (resp. $W_2$). Similarly, as $s_1u_1\sim s_1u_2$ (resp. $s_2^{-1}u_1 \sim s_2^{-1}u_2$)
the rows indexed by
$s_1u_1$ and $s_1u_2$ (resp. $s_2^{-1}u_1$ and $s_2^{-1}u_2$) are qualitatively independent in $CA(G_1,g)$.
Hence the rows
$(u_1,v_1)$ and $(u_2,v_2)$ are qualitatively independent in $C$.\\
(ii) Let $(u_1,v_1)\in W_i$ and $(u_2,v_2)\in W_j$ for $0\leq i\neq j\leq 2$. In this case, $(u_1,v_1)\sim(u_2,v_2)$ if and only if $u_1 = u_2$ and $v_1\sim v_2$.
Let $u_1=u_2=u$.\\
Let $(u,v_1)\in W_0$ and $(u,v_2)\in W_1$, then rows $(u,v_1)$ and $(u,v_2)$ are rows $u$ and $s_1u$ of $CA(G_1,g)$ respectively. Then as $u\sim s_1u$ the rows indexed by $(u,v_1)\in W_0$ and $(u,v_2)\in W_1$ are qualitatively independent in $C$. \\
Let $(u,v_1)\in W_0$ and $(u,v_2)\in W_2$. Then, as $u\sim s_2^{-1}u$, the rows indexed by $(u,v_1)\in W_0$ and $(u,v_2)\in W_2$ are qualitatively independent in $C$. \\
Let $(u,v_1)\in W_1$ and $(u,v_2)\in W_2$. Then, as $s_1u\sim s_2^{-1}u$, the rows indexed by $(u,v_1)\in W_1$ and $(u,v_2)\in W_2$ are qualitatively independent in $C$.
\end{proof}
\begin{theorem}
Let $H$ be a finite group and let $S$ be a generating set for $H$ such that
\begin{enumerate}
\item $S = -S$ and $id \notin S$
\item $S^S = S$
\item $\exists s_1$ and $s_2$ in $S$ such that $s_1 \neq s_2$ and $s_1s_2, s_1s_2^{-1}\in S$
\end{enumerate}
then for $G_1 = Cay(H, S)$ and any four colourable graph $G_2$
\begin{center}
$CAN(G_1 \Box G_2, g) = CAN(G_1, g)$
\end{center}
\end{theorem}
\begin{proof}
Define four distinct automorphisms of $G_1$, $\sigma_i:H\rightarrow H$, $ i=0,1,2,3$ as $\sigma_0(u)=u$, $\sigma_1(u)=s_1u$, $\sigma_2(u)=s_2u$ and
$\sigma_3(u)=s_1s_2 u$. Consider a four colouring of $G_2$ using the colours $0, 1, 2$ and $3$. Let $W_i=\{(u,v)\in V(G_1\square G_2) ~|~\mbox{colour}(v)=i\}$ for $i=0,1,2,3$.
Let the rows of covering array $CA(G_1,g)$ be indexed by $u_1,u_2,\ldots,u_k$. Form an array $C$ with $|V(G_1 \Box G_2)|$ rows and $CAN(G_1,g)$
columns, indexing rows as $(u,v)$ for $1\leq u\leq |V(G_1)|$, $1\leq v \leq |V(G_2)|$. If $(u,v)\in W_i$, row $(u,v)$ is row $\sigma_i(u)$ of $CA(G_1,g)$. Consider two adjacent vertices $(u_1,v_1)$ and $(u_2,v_2)$ of $C$. \\
(i) Let $(u_1,v_1)$ and $(u_2,v_2)$ belong to $W_i$. It is easy to verify that $(u_1,v_1)$ and $(u_2,v_2)$ are qualitatively independent.\\
(ii) Let $(u_1,v_1)\in W_i$ and $(u_2,v_2)\in W_j$ for $0 \leq i\neq j\leq 3$. In this case, $(u_1,v_1)\sim(u_2,v_2)$ if and only if $u_1 = u_2$ and $v_1\sim v_2$.
Let $u_1=u_2=u$.\\
Let $(u,v_1)\in W_0$ and $(u,v_2)\in W_i$ for $i=1,2,3$, then row $(u,v_1)$ and $(u,v_2)$ are
rows $u$ and $\sigma_i(u)$ of $CA(G_1,g)$ respectively.
Then as $u\sim \sigma_i(u)$ the rows $(u,v_1)$ and $(u,v_2)$ are qualitatively independent. \\
\noindent Let $(u,v_1)\in W_1$ and $(u,v_2)\in W_2$. Then rows $(u,v_1)$ and $(u,v_2)$ are rows $s_1u$ and $s_2u$ of $CA(G_1,g)$. As $s_1u = s_1s_2^{-1}s_2u$ and $s_1s_2^{-1}\in S$, we get $s_1u\sim s_2u$. Hence the rows $(u,v_1)\in W_1$ and $(u,v_2)\in W_2$ are qualitatively independent. Similarly, as $s_1u=s_1 s_2^{-1}s_1^{-1}s_1s_2u$ and $s_1 s_2^{-1}s_1^{-1}\in S$ since $S^S=S$, we have
$s_1u\sim s_1s_2u$. Hence the rows $(u,v_1)\in W_1$ and $(u,v_2)\in W_3$ are qualitatively independent. \\
Let $(u,v_1)\in W_2$ and $(u,v_2)\in W_3$. As $s_2u=s_1^{-1}s_1s_2u$ and $s_1^{-1}\in S$, we get $s_2u\sim s_1s_2u$.
Hence the rows $(u,v_1)\in W_2$ and $(u,v_2)\in W_3$ are qualitatively independent.
\end{proof}
\begin{example}
$G = Q_8$ and $S= \{\pm i, \pm j, \pm k\}$. Here $s_1=i$ and $s_2=j$.
\end{example}
\begin{example}
$G = Q_8$ and $S= \{-1,\pm i, \pm j\}$. Here $s_1=-1$ and $s_2=i$.
\end{example}
\begin{figure}
\begin{center}
\begin{tikzpicture}
\small{
\matrix[matrix of math nodes, anchor=south west,
nodes={circle, draw, minimum size = 0.4cm},
column sep = {0.5cm},
row sep={0.35cm}]
{
& |(0)| & & |(2)| & & \\
& & & & & |(3)| \\
|(4)| & & & & & & |(-4)| \\
& &|(5)| & & |(6)| & & & & |(00)| & & |(02)| & & \\
& & &|(7)| & & & & & & & & & |(03)| \\
& & & & & & & |(04)| & & & & & & |(-04)| \\
& & |(1)| & & |(i)| & & & & &|(05)| & & |(06)| & \\
& & & & & & |(j)| & & & &|(07)| & & \\
&|(-k)| & & & & & & |(k)| \\
& & &|(-j)| & & |(-1)| & &\\
& & & &|(-i)| & & &\\
};}
\begin{scope}[style=thick]
\foreach \from/\to/\weight/\where
in { 4/2/1/above, 4/3/1/above, 4/-4/1/right, 4/7/1/above, 4/5/1/right,
0/5/1/above, 0/7/1/above, 0/6/1/above, 0/3/1/above, 0/2/1/above,
2/7/1/above, 2/6/1/right, 2/-4/1/above,
3/5/1/right, 3/6/1/below, 3/-4/1/right, -4/7/1/above, -4/5/1/above,
6/7/1/below, 6/5/1/below,
-k/i/1/above, -k/j/1/above, -k/k/1/right, -k/-i/1/above, -k/-j/1/right,
1/-j/1/above, 1/-i/1/above, 1/-1/1/above, 1/j/1/above, 1/i/1/above,
i/-i/1/above, i/-1/1/right, i/k/1/above,
j/-j/1/right, j/-1/1/below, j/k/1/right, k/-i/1/above, k/-j/1/above,
-1/-i/1/below, -1/-j/1/below,
04/02/1/above, 04/03/1/above, 04/-04/1/right, 04/07/1/above, 04/05/1/right,
00/05/1/above, 00/07/1/above, 00/06/1/above, 00/03/1/above, 00/02/1/above,
02/07/1/above, 02/06/1/right, 02/-04/1/above,
03/05/1/right, 03/06/1/below, 03/-04/1/right, -04/07/1/above, -04/05/1/above,
06/07/1/below, 06/05/1/below}
\draw (\from) to [->] (\to);
\end{scope}
\begin{scope}[style=thin]
\foreach \from/\to/\weight/\where
in { 4/-k/1/above, 0/1/1/above, 2/i/1/right, 3/j/1/above, -4/k/1/right,
6/-1/1/above, 5/-j/1/above, 7/-i/1/above}
\draw[gray] (\from) to [->] (\to);
\end{scope}
\begin{scope}[style=thin]
\foreach \from/\to/\weight/\where
in { 04/-k/1/above, 00/1/1/above, 02/i/1/right, 03/j/1/above, -04/k/1/right,
06/-1/1/above, 05/-j/1/above, 07/-i/1/above}
\draw[red] (\from) to [->] (\to);
\end{scope}
\begin{scope}[style=thin]
\foreach \from/\to/\weight/\where
in{ 4/04/1/above, 0/00/1/above, 2/02/1/right, 3/03/1/above, -4/-04/1/right,
6/06/1/above, 5/05/1/above, 7/07/1/above}
\draw[blue] (\from) to [->] (\to);
\end{scope}
\end{tikzpicture}
\caption{$Cay(Q_8, \{-1,\pm i, \pm j\})\Box K_3$}
\end{center}
\end{figure}
\section{Approximation algorithm for covering array on graph}\label{Approx}
In this section, we present an approximation algorithm for construction of covering array on a given graph $G=(V,E)$ with
$k>1$ prime factors with respect to the Cartesian product.
In 1988, G. Seroussi and N. H. Bshouty proved that the decision problem whether there exists a binary
covering array of strength $t\geq 2$ and size $2^t$ on a given $t$-uniform hypergraph is NP-complete \cite{VS}.
Also, construction of
an optimal size covering array on a graph is at least as hard as finding its optimal size.
\noindent We give an approximation algorithm for the Cartesian product with approximation ratio $O(\log_s |V|)$, where $s$ is determined by the number of symbols $g$ associated with each vertex through the following result of Bush, which is used in our approximation algorithm.
\begin{theorem}\rm{\cite{GT}}\label{B} Let $g$ be a positive integer. If $g$ is written in standard form: $$g=p_1^{n_1}p_2^{n_2}\ldots p_l^{n_l}$$ where $p_1,p_2,\ldots,p_l$ are distinct primes, and if
$$r=\mbox{min}(p_1^{n_1},p_2^{n_2},\ldots, p_l^{n_l}),$$ then one can construct $OA(s,g)$ where
$s =1+ \max{(2,r)}$.
\end{theorem}
We are given a weighted connected graph $G=(V,E)$ with each vertex having the same weight $g$.
In our approximation algorithm, we use a technique from \cite{HBGP} for the prime factorization of $G$ with respect to the Cartesian product.
This can be done in $O(E \log V)$ time. For details see \cite{HBGP}. After obtaining the prime factors of $G$, we construct a
strength two covering array $C_1$ on the prime factor of maximum size. Then,
using the rows of $C_1$, we produce a covering array on $G$.\\
\noindent\textbf{APPROX $CA(G,g)$:}
\\\textbf{Input:} A weighted connected graph $G=(V,E)$ with $k>1$ prime factors with respect to the Cartesian product. Each vertex has weight $g$; $g=p_1^{n_1}p_2^{n_2}\ldots p_l^{n_l}$ where
$p_1$, $p_2, \ldots, p_l$ are primes.
\\\textbf{Output:} $CA(ug^2,G,g)$.
\\\textbf{Step 1:} Compute $s = 1 + \mbox{max}\{2,r\}$ where $r=\mbox{min}(p_1^{n_1},p_2^{n_2},\ldots, p_l^{n_l})$.
\\\textbf{Step 2:} Factorize $G$ into prime factors with respect to the Cartesian product;
say $G = \Box_{i=1} ^{k} G_i$ where $G_i= (V_i,E_i)$ is a prime factor.
\\\textbf{Step 3:} Suppose $V_1\geq V_2\geq \ldots\geq V_k$. For prime factor $G_1=(V_1, E_1)$ \textbf{do}
\begin{enumerate}
\item Find the smallest positive integer $u$ such that $s^u\geq V_1$. That is, $u=\lceil \mbox{log}_s V_1\rceil$.
\item Let $OA(s,g)$ be an orthogonal array and denote its $i$th row by $R_i$ for $i=1,2,\ldots,s$. In total, $s^u$ row vectors $(R_{i_1}, R_{i_2},\ldots, R_{i_u})$, each of length $ug^2$, are formed by horizontally concatenating $u$ rows
$R_{i_1}$, $ R_{i_2}$, $\ldots,$ $ R_{i_u}$ where $1\leq i_1, \ldots, i_u\leq s$.
\item Form a $V_1 \times ug^2$ array $C_1$ by choosing any $V_1$ rows out of the $s^u$ concatenated row vectors.
Each row in the array corresponds to a vertex in the graph $G_1$. \end{enumerate}
\textbf{Step 4:}
From $C_1$ we can construct a $V\times ug^2$ array $C$. Index the rows of $C$ by $(u_1,u_2,\ldots,u_k)$, $u_i\in V(G_i)$.
Set the row $(u_1,u_2,\ldots,u_k)$ to be identical to the row corresponding to $u_1+u_2+\ldots+u_k ~ \mbox{mod } V_1$ in $C_1$. Return $C$.
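\noindent The following Python sketch summarises Steps 3 and 4 (an illustration of ours; the prime factorization of $G$ and the orthogonal array $OA(s,g)$ are assumed to be supplied, e.g. by the algorithm of \cite{HBGP} and by Theorem \ref{B} respectively):
\begin{verbatim}
from itertools import product
import numpy as np

def approx_ca(vertex_counts, oa):
    # vertex_counts = [V_1 >= ... >= V_k]: sizes of the prime factors
    # oa: an OA(s, g) given as an s x g^2 integer array
    s, V1 = oa.shape[0], vertex_counts[0]
    u = 1
    while s**u < V1:                 # smallest u with s^u >= V_1
        u += 1

    def c1_row(r):
        # row r of C1: concatenation of the OA rows indexed by the
        # base-s digits of r (one of the s^u possible concatenations)
        digits = [(r // s**j) % s for j in range(u)]
        return np.concatenate([oa[d] for d in digits])

    # Step 4: vertex (u_1,...,u_k) receives row (u_1+...+u_k) mod V_1
    return {v: c1_row(sum(v) % V1)
            for v in product(*[range(n) for n in vertex_counts])}
\end{verbatim}
For instance, with the prime-field construction sketched earlier, \texttt{approx\_ca([7, 5, 2], oa\_prime(3))} builds a covering array with rows of length $ug^2=2\cdot 9=18$ for a graph whose prime factors have $7$, $5$ and $2$ vertices.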
\vspace{1cm}\begin{theorem}
Algorithm APPROX $CA(G,g)$ is a polynomial-time $\rho(V)$ approximation algorithm for covering array on graph problem, where
$$\rho(V) \leq \lceil \log_s \frac{V}{2^{k-1}} \rceil.$$
\end{theorem}
\begin{proof}
\textbf{Correctness:} The verification that $C$ is a $CA(ug^2,G,g)$ is straightforward. First, we show that $C_1$ is a covering array of strength two with $ |V_1|$ parameters.
Pick any two distinct rows of $C_1$ and consider the submatrix induced by these two rows. Since the two concatenated row vectors differ in at least one of their $u$ blocks, the submatrix contains a block of $g^2$ columns of the form $(R_i, R_j)^T$ with $i \neq j$.
Hence, by the orthogonal array property, each ordered pair of values appears at least once.
Now to show that $C$ is a covering array on $G$, it is sufficient to show that the rows in $C$ for any pair of adjacent vertices $u=(u_1,u_2,\ldots,u_k)$ and $v=(v_1,v_2,\ldots,v_k)$ in $G$ will be qualitatively
independent. We know $u$ and $v$ are adjacent if and only if $(u_i,v_i)\in E(G_i)$ for exactly one index $1\leq i\leq k$ and
$u_j=v_j$ for $j\neq i$.
Hence $ u_1+u_2+ \ldots+u_k \neq v_1+v_2+\ldots+v_k ~ \mbox{mod } V_1$ and in Step 4,
two distinct rows from $C_1$ are assigned to the vertices $u$ and $v$.\\
\textbf{Complexity :} The average order of $l$ in Step 1 is $\ln\ln g$ \cite{Riesel}. Thus, the time to find $s$ in Step 1 is $O(\ln \ln g)$.
The time to factorize graph $G=(V,E)$ in Step 2 is $O(E \log V)$. In Step 3(1), the smallest positive integer $u$ can be found in
$O(\log_s V_1)$ time. In Step 3(2), forming one row vector requires $\log_s V_1$ assignments; hence, forming $V_1$ row vectors requires $O(V_1\log V_1)$ time.
Thus the total running time of APPROX $CA(G,g)$ is $O(E \log V+\ln \ln g)$. Observing that, in practice, $\ln \ln g \leq E \log V$, we can restate the running time of
APPROX $CA(G,g)$ as $O(E \log V)$. \\
\textbf{Approximation ratio:} We show that APPROX $CA(G,g)$ returns a covering array that is at most $\rho(V)$ times the size of an optimal covering array on $G$.
We know the smallest $n$ for which a $CA(n,G,g)$ exists is $g^2$, that is, $CAN(G,g)\geq g^2$. The algorithm returns a covering array on $G$ of size $ug^2$ where
$$u=\lceil \log_s V_1\rceil.$$ As $G$ has $k$ prime factors, the maximum number of vertices in a factor can be $\frac{V}{2^{k-1}}$, that is, $V_1\leq \frac{V}{2^{k-1}}$.
Hence $$u= \lceil \log_s V_1\rceil \leq \lceil \log_s \frac{V}{2^{k-1}}\rceil.$$ By relating to the size of the covering array returned to the optimal size, we obtain our approximation ratio
$$\rho(V)\leq \lceil \log_s \frac{V}{2^{k-1}}\rceil.$$ \end{proof}
\section{Conclusions} One motivation for introducing a graph structure was to optimise covering arrays for their use in testing software and networks based on internal structure. Our primary
concern in this paper is with constructions that make optimal covering arrays on large graphs from smaller ones. Large graphs are obtained by considering either the Cartesian, the direct, the strong, or the lexicographic product of small graphs. Using graph homomorphisms, we have
$$\max_{i=1,2}\{CAN(G_i,g)\}\leq CAN(G_1\Box G_2,g)\leq CAN(K_{\max_{i=1,2}\{\chi(G_i)\}},g).$$ We gave several classes of Cayley graphs where the lower bound on the covering array number $CAN(G_1\Box G_2)$ is achieved. It is an interesting problem to find other classes of graphs for which the lower bound on the covering array number of the product graph can be achieved. We gave an approximation algorithm
for the construction of a covering array on a graph $G$ having more than one prime factor with respect to the Cartesian product. Clearly, another area to explore is to consider in detail the other graph products, that is, the direct, the strong, and the lexicographic product.
\section{Introduction}\label{sec:introduction}
Galaxy clusters are the ultimate result of the hierarchical bottom-up process of cosmic structure formation. Hosted in massive dark matter haloes that formed through subsequent phases of mass accretion and mergers, galaxy clusters carry information on the underlying cosmological scenario as well as the astrophysical processes that shape the properties of the intra-cluster medium (ICM) \citep[for a review, see e.g.][]{2005RvMP...77..207V,2011ARA&A..49..409A,2012ARA&A..50..353K}.
Being at the top of the pyramid of cosmic structures, galaxy clusters are mostly found in the late-time universe. These can be observed using a variety of techniques that probe the distribution of the hot intra-cluster gas through its X-ray emission \citep[see e.g.][]{2005ApJ...628..655V,2010MNRAS.407...83E,2016A&A...592A...1P,2021A&A...650A.104C}, the scattering of the Cosmic Microwave Background radiation (CMB) due to the Sunyaev-Zeldovich effect \citep[see e.g.][]{2009ApJ...701...32S,2013ApJ...765...67M,2013ApJ...763..127R,2014A&A...571A..29P,2015ApJS..216...27B}, or the measurement of galaxy overdensities and of the gravitational lensing effect caused by the cluster's gravitational mass on background sources \citep{2016ApJS..224....1R,2019MNRAS.485..498M,2011ApJ...738...41U,2012ApJS..199...25P}.
The mass distribution of galaxy clusters primarily depends on the dynamical state of the system. Observations of relaxed clusters have shown that the matter density profile at large radii is consistent with the universal Navarro-Frenk-White profile \citep[NFW,][]{NFW1997}, while deviations have been found in the inner regions \citep[][]{2013ApJ...765...24N,2017ApJ...851...81A,2017ApJ...843..148C,2020A&A...637A..34S}. In relaxed systems, the gas falls into the dark matter dominated gravitational potential and thermalises through the propagation of shock waves. This sets the gas in a hydrostatic equilibrium (HE) that is entirely controlled by gravity. Hence, aside from astrophysical processes affecting the baryon distribution in the cluster core, the thermodynamic properties of the outer ICM are expected to be self-similar \citep[see e.g.][]{2019A&A...621A..39E,2019A&A...621A..41G,2021ApJ...910...14G}. This is not the case for clusters undergoing major mergers, for which the virial equilibrium is strongly altered \citep[see e.g.][]{2016ApJ...827..112B}. Such systems exhibit deviations from self-similarity, such that scaling relations between the ICM temperature, the cluster mass and the X-ray luminosity differ from those of relaxed clusters \citep[see e.g.][]{2009MNRAS.399..410P,2011ApJ...729...45R,2019MNRAS.490.2380C}.
A direct consequence of merger events is that the mass estimates inferred assuming the HE hypothesis or through scaling relations may be biased. This may induce systematic errors in cosmological analyses that rely upon accurate cluster mass measurements. On the other hand, merging clusters can provide a unique opportunity to investigate the physics of the ICM \citep{2007PhR...443....1M,2016JPlPh..82c5301Z} and test the dark matter paradigm \citep[as in the case of the Bullet Cluster][]{2004ApJ...604..596C,2004ApJ...606..819M}. This underlines the importance of identifying merging events in large cluster survey catalogues.
The identification of unrelaxed clusters relies upon a variety of proxies specifically defined for each type of observation \citep[for a review see e.g.][]{2016FrASS...2....7M}. As an example, the detection of radio haloes and relics in clusters is usually associated with the presence of mergers. Similarly, the offset between the position of the brightest central galaxy and the peak of the X-ray surface brightness, or the centroid of the SZ signal, is used as a proxy of merger events. This is because the merging process alters the distribution of the various matter constituents of the cluster differently.
The growth of dark matter haloes through cosmic time has been investigated extensively in a vast literature using results from N-body simulations. \citet{2003MNRAS.339...12Z} found that haloes build up their mass through an initial phase of fast accretion followed by a slow one. \citet{2007MNRAS.379..689L} have shown that during the fast-accretion phase, the mass assembly occurs primarily through major mergers, that is, mergers in which the mass of the less massive progenitor is at least one third of the more massive one. Moreover, they found that the greater the mass of the halo, the later the time when the major merger occurred. In contrast, slow accretion is a quiescent phase dominated by minor mergers. Subsequent studies have mostly focused on the relation between the halo mass accretion history and the concentration parameter of the NFW profile \citep[see e.g.][]{2007MNRAS.381.1450N,2009ApJ...707..354Z,2012MNRAS.427.1322L,2016MNRAS.460.1214L,2017MNRAS.466.3834L,2019MNRAS.485.1906R}. Recently, \citet{Wang2020} have shown that major mergers have a universal impact on the evolution of the median concentration. In particular, after a large initial response, in which the concentration undergoes a large excursion, the halo recovers a more quiescent dynamical state within a few dynamical times. Surprisingly, the authors have also found that even minor mergers can have a non-negligible impact on the mass distribution of haloes, contributing to the scatter of the concentration parameter.
The use of concentration as a proxy of galaxy cluster mergers is nevertheless challenging for multiple reasons. Firstly, the concentration exhibits a large scatter across the merger phase and the value inferred from the analysis of galaxy cluster observations may be sensitive to the quality of the NFW-fit. Secondly, astrophysical processes may alter the mass distribution in the inner region of the halo, thus resulting in values of the concentration that differ from those estimated from N-body simulations \citep[see e.g.][]{2010MNRAS.406..434M,2011MNRAS.416.2539K}, which could be especially the case for merging clusters.
Alternatively, a non-parametric approach to characterise the mass distribution in haloes has been proposed by \citet{Balmes2014} in terms of simple mass ratios, dubbed halo {\it sparsity}:
\begin{equation}\label{sparsdef}
s_{\Delta_1,\Delta_2} = \frac{M_{\Delta_1}}{M_{\Delta_2}},
\end{equation}
where $M_{\Delta_1}$ and $M_{\Delta_2}$ are the masses within spheres enclosing respectively the overdensity $\Delta_1$ and $\Delta_2$ (with $\Delta_1<\Delta_2$) in units of the critical density (or equivalently the background density). This statistic presents a number of interesting properties that overcome many of the limitations that concern the concentration parameter. First of all, the sparsity can be estimated directly from cluster mass estimates without having to rely on the assumption of a specific parametric profile, such as the NFW profile. Secondly, for any given choice of $\Delta_1$ and $\Delta_2$, the sparsity is found to be weakly dependent on the overall halo mass, with much smaller scatter than the concentration \citep{Balmes2014,Corasaniti2018,Corasaniti2019}. Thirdly, these mass ratios retain cosmological information encoded in the mass profile, thus providing an independent cosmological proxy. Finally, the halo ensemble average sparsity can be predicted from prior knowledge of the halo mass functions at the overdensities of interest, which allows one to infer cosmological parameter constraints from cluster sparsity measurements \citep[see e.g.][]{Corasaniti2018,Corasaniti2021}.
As haloes grow from inside out such that newly accreted mass is redistributed in concentric shells within a few dynamical times \citep[see e.g.][for a review]{2011MNRAS.413.1373W,2011AdAst2011E...6T}, it is natural to expect that major mergers can significantly disrupt the onion structure of haloes and result in values of the sparsity that significantly differ from those of the population of haloes that have had sufficient time to rearrange their mass distribution and reach the virial equilibrium.
Here, we perform a thorough analysis of the relation between halo sparsity and the halo mass accretion history using numerical halo catalogues from large volume high-resolution N-body simulations. We show that haloes which undergo a major merger in their recent history form a distinct population of haloes characterised by large sparsity values. Quite importantly, we are able to fully characterise the statistical distributions of such populations in terms of the halo sparsity and the time of their last major merger. Thus, building upon these results, we have developed a statistical tool which uses cluster sparsity measurements to test whether a galaxy cluster has undergone a recent major merger and if so when such event took place.
The paper is organised as follows. In Section~\ref{halocat} we describe the numerical halo catalogues used in the analysis, while in Section~\ref{sparsmah} we present the results of the study of the relation between halo sparsity and major mergers. In Section~\ref{calistat} we present the statistical tests devised to identify the imprint of mergers in galaxy clusters and discuss the statistical estimation of the major merger epoch from sparsity measurements. In Section~\ref{cosmo_imp} we discuss the implications of these results regarding cosmological parameter estimation studies using halo sparsity. In Section~\ref{testcase} we validate our approach using similar data, assess its robustness to observational biases and describe the application of our methodology to the analysis of known galaxy clusters. Finally, in Section~\ref{conclu} we present our conclusions.
\section{Numerical Simulation Dataset}\label{halocat}
\subsection{N-body Halo catalogues}
We use N-body halo catalogues from the MultiDark-Planck2 (MDPL2) simulation \citep{Klypin2016} which consists of $3840^3$ particles in $(1 \,h^{-1}\,\textrm{Gpc})^3$ comoving volume (corresponding to a particle mass resolution of $m_p=1.51\cdot 10^{9}\,h^{-1} \text{M}_{\odot}$) of a flat $\Lambda$CDM cosmology run with the \textsc{Gadget-2}\footnote{\href{https://wwwmpa.mpa-garching.mpg.de/gadget/}{https://wwwmpa.mpa-garching.mpg.de/gadget/}} code \citep{2005MNRAS.364.1105S}. The cosmological parameters have been set to the values of the \textit{Planck} cosmological analysis of the Cosmic Microwave Background (CMB) anisotropy power spectra \citep{2014A&A...571A..16P}: $\Omega_m=0.3071$, $\Omega_b=0.0482$, $h=0.6776$, $n_s=0.96$ and $\sigma_8=0.8228$. Halo catalogues and merger trees at each redshift snapshot were generated using the friend-of-friend (FoF) halo finder code \textsc{rockstar}\footnote{\href{https://code.google.com/archive/p/rockstar/}{https://code.google.com/archive/p/rockstar/}} \citep{Behroozi2013a,Behroozi2013b}. We consider the default set up with the detected haloes consisting of gravitationally bound particles only. We specifically focus on haloes in the mass range of galaxy groups and clusters corresponding to $M_{200\text{c}}>10^{13}\,h^{-1} \text{M}_{\odot}$.
For each halo in the MDPL2 catalogues we build a dataset containing the following set of variables: the halo masses $M_{200\text{c}}$, $M_{500\text{c}}$ and $M_{2500\text{c}}$ estimated from the number of N-body particles within spheres enclosing overdensities $\Delta=200,500$ and $2500$ (in units of the critical density) respectively; the scale radius, $r_s$, of the best-fitting NFW profile; the virial radius, $r_{\rm vir}$; the ratio of the kinetic to the potential energy, $K/U$; the offset of the density peak from the average particle position, $x_{\rm off}$; and the scale factor (redshift) of the last major merger, $a_{\rm LMM}$ ($z_{\rm LMM}$). From these variables we additionally compute the following set of quantities: the halo sparsities $s_{200,500}$, $s_{200,2500}$ and $s_{500,2500}$; the offset in units of the virial radius, $\Delta_r=x_{\rm off}/r_{\rm vir}$, and the concentration parameter of the best-fit NFW profile, $c_{200\text{c}}=r_{200\text{c}}/r_s$, with $r_{200\text{c}}$ being the radius enclosing an overdensity $\Delta=200$ (in units of the critical density). In our analysis we also use the mass accretion history of MDPL2 haloes.
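\noindent For reference, the sparsity implied by an NFW profile of given concentration follows directly from the profile itself: writing $M(<r)\propto \mu(r/r_s)$ with $\mu(x)=\ln(1+x)-x/(1+x)$, the scaled radius $x_\Delta=r_\Delta/r_s$ solves $\mu(x_\Delta)/x_\Delta^3 = (\Delta/200)\,\mu(c_{200\text{c}})/c_{200\text{c}}^3$, and $s^{\rm NFW}_{\Delta_1,\Delta_2}=\mu(x_{\Delta_1})/\mu(x_{\Delta_2})$. A minimal Python sketch of this computation (ours, assuming SciPy is available) is:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def mu(x):
    # NFW mass profile shape: M(<r) proportional to mu(r/r_s)
    return np.log(1.0 + x) - x / (1.0 + x)

def nfw_sparsity(c200, delta1=200.0, delta2=500.0):
    # sparsity s_{d1,d2} implied by an NFW halo of concentration c200
    def x_of(delta):
        f = lambda x: mu(x)/x**3 - (delta/200.0)*mu(c200)/c200**3
        return brentq(f, 1e-6, 10.0*c200)
    return mu(x_of(delta1)) / mu(x_of(delta2))

print(nfw_sparsity(4.0))   # ~1.45 for a typical cluster-size halo
\end{verbatim}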
In addition to the MDPL2 catalogues, we also use data from the Uchuu simulations \citep{Ishiyama2021}, which cover a larger cosmic volume with higher mass resolution. We use these catalogues to calibrate the sparsity statistics that provides the base for practical applications of halo sparsity measurements as cosmic chronometers of galaxy cluster mergers. The Uchuu simulation suite consists of N-body simulations of a flat $\Lambda$CDM model realised with \textsc{GreeM} code \citep{2009PASJ...61.1319I,2012arXiv1211.4406I} with cosmological parameters set to the values of a later \textit{Planck}-CMB cosmological analysis \citep{2016A&A...594A..13P}: $\Omega_m=0.3089$, $\Omega_b=0.0486$, $h=0.6774$, $n_s=0.9667$ and $\sigma_8=0.8159$. In particular, we use the halo catalogues from the $(2\,\textrm{Gpc}\,h^{-1})^3$ comoving volume simulation with $12800^3$ particles (corresponding to a particle mass resolution of $m_p=3.27\cdot 10^{8}\,h^{-1}\text{M}_{\odot}$) that, as for MDPL2, were also generated using the \textsc{rockstar} halo finder.
It is important to stress that the major merger epoch to which we refer in this work is that defined by the \textsc{rockstar} halo finder, that is the time when the particles of the merging halo and those of the parent one are within the same iso-density contour in phase-space. Hence, this should not be confused with the first core-passage time usually estimated in Bullet-like clusters.
\begin{table}
\centering
\caption{Characteristics of the selected halo samples at $z=0,0.2,0.4$ and $0.6$ (columns from left to right). Quoted in the rows are the number of haloes in the samples and the redshift of the last major merger $z_{\rm LMM}$ used to select the haloes for each sample.}
\begin{tabular}{ccccc}
\hline
\hline
& \multicolumn{4}{c}{Merging Halo Sample ($T>-1/2$)} \\
\hline
\hline
& $z=0.0$ & $z=0.2$ & $z=0.4$ & $z=0.6$ \\
\hline
$\#$-haloes & $23164$ & $28506$ & $31903$ & $32769$ \\
$z_{\rm LMM}$ & $<0.113$ & $<0.326$ & $<0.540$ & $<0.754$ \\
\hline
\hline
& \multicolumn{4}{c}{Quiescent Halo Sample ($T<-4$)} \\
\hline
\hline
& $z=0.0$ & $z=0.2$ & $z=0.4$ & $z=0.6$ \\
\hline
$\#$-haloes & $199853$ & $169490$ & $140464$ & $113829$ \\
$z_{\rm LMM}$ & $>1.15$ & $>1.50$ & $>1.86$ & $>2.22$ \\
\hline
\end{tabular}
\label{tab:samples}
\end{table}
\subsection{Halo Sample Selection}\label{haloeselection}
We aim to study the impact of merger events on the halo mass profile. To this purpose we focus on haloes that underwent their last major merger at different epochs. It is then convenient to introduce a time variable that characterises the backward time interval between the redshift $z$ (scale factor $a$) at which a halo is investigated and that of its last major merger $z_{\rm LMM}$ ($a_{\rm LMM}$), in units of the dynamical time \citep{Jiang2016, Wang2020},
\begin{equation}\label{backwardtime}
T(z|z_\text{LMM})= \frac{\sqrt{2}}{\pi}\int_{z_{\text{LMM}}}^{z}\frac{\sqrt{\Delta_\text{vir}(z')}}{z'+1}\,dz',
\end{equation}
where $\Delta_{\rm vir}(z)$ is the virial overdensity, which we estimate using the spherical collapse model approximated formula $\Delta_{\rm vir}(z)=18\pi^2+82[\Omega_m(z)-1]-39[\Omega_m(z)-1]^2$ \citep{Bryan1998}. Hence, one has $T=0$ for haloes which undergo a major merger at the time they are investigated (i.e. $z_{\rm LMM}=z$), and $T<0$ for haloes that had their last major merger at earlier times (i.e. $z_{\rm LMM}>z$). Notice that the definition used here differs by a minus sign from that of \citet{Wang2020}, where the authors have found that merging haloes recover a quiescent state within $|T| \sim 2$ dynamical times.
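For illustration, Eq.~(\ref{backwardtime}) can be evaluated numerically in a few lines of Python; the following is a minimal sketch (our own illustrative code, not part of any released pipeline), assuming a flat $\Lambda$CDM background with the MDPL2 value of $\Omega_m$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

Om0 = 0.3071  # MDPL2 Planck value

def Omega_m(z):
    """Matter density parameter at z for flat LCDM."""
    ez2 = Om0 * (1 + z)**3 + 1 - Om0
    return Om0 * (1 + z)**3 / ez2

def Delta_vir(z):
    """Bryan & Norman (1998) virial overdensity fit."""
    x = Omega_m(z) - 1.0
    return 18 * np.pi**2 + 82 * x - 39 * x**2

def T_backward(z, z_lmm):
    """Backward time interval T(z|z_LMM), Eq. (backwardtime)."""
    integrand = lambda zp: np.sqrt(Delta_vir(zp)) / (zp + 1)
    val, _ = quad(integrand, z_lmm, z)
    return np.sqrt(2) / np.pi * val

# a halo observed at z=0 whose last major merger was at z_LMM = 0.115
print(T_backward(0.0, 0.115))  # ~ -0.5, half a dynamical time ago
\end{verbatim}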
In Section~\ref{sparsprof} we investigate the differences in halo mass profile between merging haloes and quiescent ones; to maximise these differences we select halo samples as follows:
\begin{itemize}
\item {\it Merging haloes}: a sample of haloes that are at less than one half the dynamical time since their last major merger ($T> -1/2$), and therefore still in the process of rearranging their mass distribution;
\item {\it Quiescent haloes}: a sample of haloes whose last major merger occurred far in the past ($T\le -4$), such that they had sufficient time to rearrange their mass distribution into an equilibrium state.
\end{itemize}
In the case of the $z=0$ catalogue, the sample of merging haloes with $T>-1/2$ consists of all haloes whose last major merger, as tagged by the \textsc{rockstar} algorithm, occurred at $a_{\rm LMM}>0.897$ ($z_{\rm LMM}<0.115$), while the sample of quiescent haloes with $T\le -4$ in the same catalogue is characterised by a last major merger at $a_{\rm LMM}<0.464$ ($z_{\rm LMM}>1.155$). In order to study the redshift dependence, we perform a similar selection in the catalogues at $z=0.2,0.4$ and $0.6$ respectively. In Table~\ref{tab:samples} we quote the characteristics of the different samples selected in the various catalogues.
\begin{figure*}
\centering
\includegraphics[width=.8\linewidth]{figures/concentration_sparsities_lines.pdf}
\caption{Distribution of the relative deviations of individual halo sparsities with respect to the expected NFW value, $\delta_{200,500}=1-s^{\rm NFW}_{200,500}/s_{200,500}$ (dashed lines) and $\delta_{200,2500}=1-s^{\rm NFW}_{200,2500}/s_{200,2500}$ (solid lines), in the case of the merging (blue lines) and quiescent (orange lines) haloes at $z=0.0$ (top left panel), $0.2$ (top right panel), $0.4$ (bottom left panel) and $0.6$ (bottom right panel) respectively.}
\label{fig:relative_spars_conc}
\end{figure*}
\section{Halo Sparsity \& Major Mergers}\label{sparsmah}
\subsection{Halo Sparsity Profile}\label{sparsprof}
Here, we investigate the mass profile of haloes undergoing a major merger, as traced by halo sparsity, and evaluate to what extent the NFW profile can account for the estimated sparsities at different overdensities. To this purpose, for each halo in the selected samples we compute the halo sparsities $s_{200,500}$ and $s_{200,2500}$ from the SOD estimated masses, as well as the values obtained assuming the NFW profile with the best-fit concentration parameter $c_{200\text{c}}$, which we denote as $s^{\rm NFW}_{200,500}$ and $s^{\rm NFW}_{200,2500}$ respectively. These can be inferred from the sparsity-concentration relation \citep{Balmes2014}:
\begin{equation}
x^3_{\Delta}\frac{\Delta}{200}=\frac{\ln{(1+c_{200\text{c}}x_{\Delta})}-\frac{c_{200\text{c}}x_{\Delta}}{1+c_{200\text{c}}x_{\Delta}}}{\ln{(1+c_{200\text{c}})}-\frac{c_{200\text{c}}}{1+c_{200\text{c}}}},\label{sparconc}
\end{equation}
where $x_{\Delta}=r_{\Delta}/r_{200\text{c}}$, with $r_{\Delta}$ being the radius enclosing $\Delta$ times the critical density. Hence, for any value of $\Delta$, given the concentration $c_{200\text{c}}$ of the NFW profile that best fits the halo of interest, we can solve Eq.~(\ref{sparconc}) numerically to obtain $x_{\Delta}$ and then derive the value of the NFW halo sparsity given by:
\begin{equation}
s^{\rm NFW}_{200,\Delta}=\frac{200}{\Delta}x_{\Delta}^{-3}.
\end{equation}
It is worth emphasising that such a relation holds true only for haloes whose density profile is well described by the NFW formula. In such a case, the higher the concentration, the smaller the value of the sparsity, and conversely the lower the concentration, the higher the sparsity. Hence, the mass ratio defined by Eq.~(\ref{sparsdef}) provides information on the level of sparseness of the mass distribution within haloes, which justifies it being dubbed halo sparsity. Notice that from Eq.~(\ref{sparconc}) we can compute $s_{200,\Delta}$ for any $\Delta>200$, and this is sufficient to estimate the sparsity at any other pair of overdensities $\Delta_1\ne\Delta_2>200$, since $s_{\Delta_1,\Delta_2}=s_{200,\Delta_2}/s_{200,\Delta_1}$. Haloes whose mass profile deviates from the NFW prediction will have sparsity values that differ from those given by Eq.~(\ref{sparconc}).
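As an illustration of how the NFW sparsity can be obtained in practice, the following minimal Python sketch (illustrative only; the root-finding bracket is our own choice) solves Eq.~(\ref{sparconc}) numerically:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def mu(y):
    """NFW mass factor: ln(1+y) - y/(1+y)."""
    return np.log(1 + y) - y / (1 + y)

def nfw_sparsity(c200, Delta):
    """NFW-predicted s_{200,Delta} for Delta > 200."""
    # root of Eq. (sparconc): x^3 Delta/200 = mu(c x)/mu(c)
    f = lambda x: x**3 * Delta / 200.0 - mu(c200 * x) / mu(c200)
    x_Delta = brentq(f, 1e-3, 1.0)  # r_Delta < r_200c when Delta > 200
    return (200.0 / Delta) / x_Delta**3

print(nfw_sparsity(4.0, 500))   # s^NFW_{200,500} for c_200c = 4
print(nfw_sparsity(4.0, 2500))  # s^NFW_{200,2500}
\end{verbatim}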
This is emphasised in Fig.~\ref{fig:relative_spars_conc}, where we plot the distribution of the relative deviations of individual halo sparsities with respect to the expected NFW value, $\delta_{200,500}=1-s^{\rm NFW}_{200,500}/s_{200,500}$ (dashed lines) and $\delta_{200,2500}=1-s^{\rm NFW}_{200,2500}/s_{200,2500}$ (solid lines), for the merging (blue lines) and quiescent (orange lines) haloes at $z=0.0,0.2,0.4$ and $0.6$ respectively. We can see that for quiescent haloes the distributions are nearly Gaussian. More specifically, in the case of $\delta_{200,500}$ the distribution has a narrow scatter, with a peak that is centred at the origin at $z=0.6$ and slightly shifts toward positive values at smaller redshifts, with a maximal displacement at $z=0$. This corresponds to an average bias of the NFW-estimated sparsity $s^{\rm NFW}_{200,500}$ of order $\sim 4\%$ at $z=0$. A similar trend occurs for the distribution of $\delta_{200,2500}$, though with a larger scatter and a larger shift of the peak of the distribution at $z=0$, corresponding to an average bias of $s^{\rm NFW}_{200,2500}$ of order $\sim 14\%$ at $z=0$. Such systematic differences are indicative of the limits of the NFW profile in reproducing the halo mass distribution both in the outskirts and in the inner regions. Moreover, the redshift trend is consistent with the results of the analysis of the mass profile of stacked haloes presented in \citet{2018ApJ...859...55C}, which shows that the NFW profile reproduces the halo mass distribution better at $z=3$ than at $z=0$ (see top panels of their Fig.~8). The case of the merging halo sample is very different, as we find the distributions of $\delta_{200,500}$ and $\delta_{200,2500}$ to be highly non-Gaussian and irregular. In particular, the distribution of $\delta_{200,500}$ is characterised by a main peak located near the origin with a very heavy tail up to relative differences of order $20\%$. The effect is even more dramatic for $\delta_{200,2500}$, in which case the distribution loses the main peak and becomes nearly bimodal, while being shifted over a positive range of values extending up to relative variations of $\sim 40\%$. Overall, this suggests that sparsity provides a more reliable proxy of the halo mass profile than that inferred from the NFW concentration.
\begin{figure}
\centering
\includegraphics[width = \linewidth]{figures/sparsity_concentration_histories.pdf}
\caption{Evolution with scale factor $a$ (redshift $z$) of the median sparsity $s_{200,500}$ (top panels), $s_{500,2500}$ (middle panels) and $s_{200,2500}$ (bottom panels) for a sample of $10^4$ randomly selected haloes from the MDPL2 halo catalogue at $z=0$ (left panels) and for the sample of all haloes with a last major merger event at $a_{\rm LMM} = 0.67$ (right panels). The solid lines correspond to the median sparsity computed from the mass accretion histories of the individual haloes, while the shaded areas correspond to the $68\%$ region around the median.}
\label{fig:sparsity_histories_1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width = 0.8\linewidth]{figures/sparsity_vs_T.pdf}
\caption{Median sparsity histories as a function of the backward time interval since the major merger event, $T$ (in units of the dynamical time), for halo samples from the MDPL2 catalogue at $z=0$ with different last major merger redshifts $z_{\rm LMM}=0.2,0.4,0.6,0.8$ and $1$ (curves from bottom to top). Notice that the backward time interval used here differs by a minus sign from that given by Eq.~(\ref{backwardtime}), to be consistent with the definition by \citet{Wang2020}.}
\label{fig:sparsity_histories_2}
\end{figure}
\subsection{Halo Sparsity Evolution}
Differently from the previous analysis, we now investigate the evolution of the halo mass profile as traced by halo sparsity, which we reconstruct from the mass accretion histories of the haloes in the MDPL2 catalogue at $z=0$. In Fig.~\ref{fig:sparsity_histories_1} we plot the median sparsity evolution of $s_{200,500}$ (top panels), $s_{500,2500}$ (middle panels) and $s_{200,2500}$ (bottom panels) as a function of the scale factor. In the left panels we show the case of a sample of $10^4$ randomly selected haloes, which behave as quiescent haloes in the redshift range considered, while in the right panels we plot the sparsity evolution of all haloes in the $z=0$ catalogue undergoing a major merger at $a_{\rm LMM}=0.67$. The shaded areas correspond to the $68\%$ sparsity excursion around the median, while the vertical dashed line marks the value of the scale factor of the last major merger.
It is worth remarking that the sparsity provides us with an estimate of the fraction of mass in the shell bounded by the radii $R_{\Delta_1}$ and $R_{\Delta_2}$ relative to the mass enclosed within the inner radius $R_{\Delta_2}$, i.e. Eq.~(\ref{sparsdef}) can be rewritten as $s_{\Delta_1,\Delta_2}=\Delta{M}/M_{\Delta_2}+1$. As such, $s_{200,500}$ is a more sensitive probe of the mass distribution in the external region of the halo, while $s_{500,2500}$ and $s_{200,2500}$ are more sensitive to the inner part of the halo.
As we can see from Fig.~\ref{fig:sparsity_histories_1}, the evolution of the sparsity of merging haloes matches that of the quiescent sample before the major merger event. In particular, during the quiescent phase of evolution, we notice that $s_{200,500}$ remains nearly constant, while $s_{500,2500}$ and $s_{200,2500}$ are decreasing functions of the scale factor. This is consistent with the picture that haloes grow from the inside out, with the mass in the inner region (in our case $M_{2500\text{c}}$) increasing relative to that in the external shell ($\Delta{M}=M_{\Delta_1}-M_{2500\text{c}}$, with $\Delta_1=200$ and $500$ in units of the critical density), thus effectively reducing the value of the sparsity. This effect is compensated in $s_{200,500}$, resulting in a nearly constant evolution.
We can see that the onset of the major merger event induces a pulse-like response in the evolution of the halo sparsities at the different overdensities with respect to the quiescent evolution. These trends are consistent with the evolution of the median concentration during major mergers found by \citet{Wang2020}, in which the concentration rapidly drops to a minimum before bouncing back. Here, the evolution of the sparsity allows us to follow how the merger alters the mass profile of the halo throughout the merging process. In fact, we may notice that the sparsities rapidly increase to a maximum, signalling the arrival of the merger in the external region of the parent halo, which increases the mass $\Delta{M}$ in the outer shell relative to the inner mass. Then, the sparsities decrease to a minimum, indicating that the merged mass has reached the inner region, after which the sparsities increase to a second maximum, indicating that the merged mass has been redistributed outside the $R_{2500\text{c}}$ radius. However, notice that in the case of $s_{200,2500}$ and $s_{500,2500}$ the second peak is more pronounced than the first one, while the opposite occurs for $s_{200,500}$, which suggests that the accreted mass remains confined within $R_{500\text{c}}$. Afterwards, a quiescent state of evolution is recovered.
In Fig.~\ref{fig:sparsity_histories_2} we plot the median sparsities of haloes in the MDPL2 catalogue at $z=0$ that are characterised by different major merger redshifts $z_{\rm LMM}$, as a function of the backward time interval $T$ (in units of the dynamical time) since the last major merger. Notice that the $T$ used in this plot differs by a minus sign from that given by Eq.~(\ref{backwardtime}), to conform to the definition by \citet{Wang2020}. We can see that after the onset of the major merger (at $T\ge 0$) the different curves superimpose on one another, indicating that the imprint of the major merger on the profile of haloes is universal, producing the same pulse-like feature in the evolution of the halo sparsity. Furthermore, all haloes recover a quiescent evolution within two dynamical times, i.e. for $T\ge 2$. Conversely, on shorter time scales, $T<2$, haloes are still perturbed by the major merger event. These results are consistent with the findings of \citet{Wang2020}, who have shown that the impact of mergers on the median concentration of haloes leads to a time pattern that is universal and also dissipates within two dynamical times. Notice that this distinct pattern due to the major merger is the result of gravitational interactions only. Hence, it is possible that such a feature may be sensitive to the underlying theory of gravity or the physics of dark matter particles.
As we will see next, the universality of the pulse-like imprint of the merger event on the evolution of the halo sparsity, as well as its limited duration in time, have quite important consequences, since these leave a distinct feature on the statistical distribution of sparsity values, which can be exploited to use sparsity measurements as a time proxy of major mergers in clusters.
\begin{figure*}
\centering
\includegraphics[width = 0.8\linewidth]{figures/T_aLMM_vs_s200500.pdf}
\caption{\label{fig:s_almm} Iso-probability contours of the joint probability distribution in the $s_{200,500}-T$ plane for the haloes from the MDPL2 catalogues at $z=0.0,0.2,0.4$ and $0.6$ respectively. The solid horizontal line marks the value $T=-2$. The inset plots show the marginal probability distributions for haloes with $T>-2$ (blue histograms) and $T<-2$ (beige histograms) respectively.}
\label{fig:sva}
\end{figure*}
\subsection{Halo Sparsity Distribution}
We have seen that the sparsity of different haloes evolves following the same pattern after the onset of the major merger, such that the universal imprint of the merger event is best highlighted in terms of the backward time interval $T$. Hence, we aim to investigate the joint statistical distribution of halo sparsity values for haloes characterised by different times $T$ since their last major merger in the MDPL2 catalogues at different redshifts. Here, we revert to the definition of $T$ given by Eq.~(\ref{backwardtime}), where the time interval is measured relative to the time at which the haloes are investigated, that is the redshift $z$ of the halo catalogue. Hence, $T=0$ for haloes undergoing a major merger at $z_{\rm LMM}=z$ and $T<0$ for those with $z_{\rm LMM}>z$.
For conciseness, here we only describe the features of the joint distribution $p(s_{200,500},T)$, shown in Fig.~\ref{fig:s_almm} in the form of iso-probability contours in the $s_{200,500}-T$ plane at $z=0$ (top left panel), $0.2$ (top right panel), $0.4$ (bottom left panel) and $0.6$ (bottom right panel). We find a similar structure of the distributions at the other redshift snapshots and for the halo sparsities $s_{200,2500}$ and $s_{500,2500}$. In each panel the horizontal solid line marks the characteristic time interval $|T|=2$. As shown by the analysis of the evolution of the halo sparsity, haloes with $|T|>2$ have recovered a quiescent state, while those with $|T|<2$ are still undergoing the merging process. The marginal conditional probability distributions $p(s_{200,500}|T<-2)$ and $p(s_{200,500}|T>-2)$ are shown in the inset plots.
Firstly, we may notice that the joint probability distribution has a universal structure that is the same at the different redshift snapshots. Moreover, it is characterised by two distinct regions: the region with $T\le -2$, which corresponds to haloes that are several dynamical times away from their last major merger event ($|T|\ge 2$) and as such are in a quiescent state of evolution of the sparsity; and the region with $-2<T<0$, corresponding to haloes that are still in the merging process ($|T|<2$). In the former case the pdf has a rather regular structure that is independent of $T$, while in the latter case the pdf has an altered structure, with a pulse-like feature shifted toward higher sparsity values. The presence of such a feature is consistent with the evolution of the median sparsity inferred from the halo mass accretion histories discussed previously. This is because, among the haloes observed at a given redshift snapshot, those which are within two dynamical times of the major merger event are perturbed, thus exhibiting sparsity values that are distributed around the median shown in Fig.~\ref{fig:sparsity_histories_2}. In contrast, those which are more than two dynamical times from their last major merger have had time to redistribute the accreted mass and are in a quiescent state, resulting in a regular structure of the pdf. From the inset plots, we can see that these two regions identify two distinct populations of haloes: quiescent haloes with $T\le -2$, and merging (or perturbed) ones with $-2<T<0$, the latter characterised by a heavy tail toward large sparsity values which largely contributes to the overall scatter of the halo sparsity of the entire halo ensemble. It is worth stressing that the choice of $|T|=2$ as the threshold differentiating between quiescent and perturbed haloes at a given redshift snapshot is not arbitrary, since it is the most conservative value of the dynamical time above which haloes that have undergone a major merger recover a quiescent evolution of their mass profile, as shown in Fig.~\ref{fig:sparsity_histories_2}.
Now, the fact that two populations of haloes have different probability distribution functions suggests that measurements of cluster sparsity can be used to identify perturbed systems that have undergone a major merger.
\section{Identifying Galaxy Cluster Major Mergers}\label{calistat}
Given the universal structure of the probability distributions characterising merging haloes and quiescent ones, we can use the numerical halo catalogues to calibrate their statistics at different redshifts and test whether a cluster with a single or multiple sparsity measurements has had a major merger in its recent mass assembly history.
In the light of these observations, we first devise a method to assess whether or not a cluster has been recently perturbed by a major merger. To do so we define a binary test, as formalised in detection theory \citep[see e.g.][]{kay1998fundamentals}, to differentiate between the two cases. Formally, this translates into defining two hypotheses, denoted as $\mathcal{H}_0$, the null hypothesis, and $\mathcal{H}_1$, the alternate hypothesis. In our case these are, $\mathcal{H}_0$: \textit{The halo has not been recently perturbed} and $\mathcal{H}_1$: \textit{The halo has undergone a recent major merger}. Formally, the distinction between the two is given in terms of the backward time interval $T$,
\begin{equation}
\begin{cases}
\mathcal{H}_0:\; T(a_\text{LMM}|a(z)) < -2\\
\mathcal{H}_1:\; T(a_\text{LMM}|a(z)) \geq -2
\end{cases}
\label{eq:hypothesis}
\end{equation}
if we consider the halo to no longer be perturbed after $2\tau_\text{dyn}$. In Fig.~\ref{fig:s_almm} we have delimited these two regions using black horizontal lines.
In the context of detection theory \citep[see e.g.][]{kay1998fundamentals}, one defines some test statistic,
\begin{equation}
\Gamma \underset{\mathcal{H}_0}{\overset{\mathcal{H}_1}{\gtrless}} \Gamma_\text{th},
\end{equation}
from the observed data which, when compared to a threshold $\Gamma_\text{th}$, allows us to distinguish between the two hypotheses.
In the following we will explore multiple ways of defining the test statistic and the associated thresholds. This may appear cumbersome; however, it is necessary to unambiguously define thresholds according to probabilistic criteria rather than arbitrary ones, while the variety of approaches we adopt allows us to check their robustness.
\subsection{Frequentist Approach}
\label{sec:frequentist}
We start with the simplest possible choice, which is to use $s_{200,500}$ as our test statistic. Separating our dataset into the realisations of the two hypotheses, we estimate their respective likelihood functions, which we model using a generalised $\beta'$ probability density function (pdf),
\begin{equation}
\rho(x,\alpha,\beta,p,q) = \frac{p\left(\frac{x}{q}\right)^{\alpha p - 1}\left(1+\left(\frac{x}{q}\right)^p\right)^{-\alpha-\beta}}{q\,B(\alpha,\beta)},
\label{eq:gen_beta_prime}
\end{equation}
where $B(\alpha,\beta)$ is the Beta function and $x = s_{200,500} - 1$. From our two samples we then fit this model using a standard least-squares method to obtain the set of best-fitting parameters under both hypotheses; these are reported in Tab.~\ref{tab:fit_params}, while the corresponding fits are shown in Fig.~\ref{fig:pdf_fit} for the halo catalogues at $z=0$. In both cases we additionally report the 95 percent confidence intervals estimated using 1000 bootstrap iterations.
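For concreteness, the pdf of Eq.~(\ref{eq:gen_beta_prime}) and a least-squares fit of this kind can be sketched in a few lines of Python; the draws below are synthetic stand-ins for the measured sparsities, not the actual halo data:
\begin{verbatim}
import numpy as np
from scipy.special import beta as Beta
from scipy.stats import betaprime
from scipy.optimize import curve_fit

def gen_beta_prime(x, alpha, bet, p, q):
    """Generalised beta prime pdf, Eq. (gen_beta_prime)."""
    u = (x / q)**p
    return (p * (x / q)**(alpha * p - 1) * (1 + u)**(-alpha - bet)
            / (q * Beta(alpha, bet)))

# synthetic stand-in for x = s_200,500 - 1 of one halo population
x_spars = 0.3 * betaprime.rvs(5.0, 3.0, size=20000, random_state=42)

counts, edges = np.histogram(x_spars, bins=100, density=True)
centres = 0.5 * (edges[1:] + edges[:-1])
popt, pcov = curve_fit(gen_beta_prime, centres, counts,
                       p0=[2.0, 1.0, 2.0, 0.3],
                       bounds=(1e-3, [20., 20., 20., 5.]))
\end{verbatim}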
\begin{table}
\centering
\caption{Best fitting parameters for the distribution of sparsities at $z=0$ under both hypotheses. Here, we quote each parameter with its 95 percent confidence interval estimated over 1000 bootstrap iterations.}
\begin{tabular}{r|cc}
\hline Parameter & $\mathcal{H}_0$& $\mathcal{H}_1$\\
\hline $\alpha$ & $1.4^{+0.1}_{-0.1}$ & $1.5^{+0.2}_{-0.2}$ \\
$\beta$ & $0.61^{+0.03}_{-0.03}$ & $0.71^{+0.10}_{-0.08}$ \\
$p$ & $7.7^{+0.3}_{-0.3}$ & $4.1^{+0.4}_{-0.3}$ \\
$q$ & $0.304^{+0.002}_{-0.003}$ & $0.370^{+0.008}_{-0.008}$ \\
\hline
\end{tabular}
\label{tab:fit_params}
\end{table}
\begin{figure}
\centering
\includegraphics[width = 0.9\linewidth]{figures/binary_fit.pdf}
\caption{Estimated probability distribution functions for $\mathcal{H}_0$ (purple solid line) and $\mathcal{H}_1$ (orange solid line) hypotheses at $z=0$ along with best fitting generalised beta prime distribution functions (dotted black lines). The shaded area corresponds to the 95 percent confidence interval estimated over 1000 bootstrap iterations.}
\label{fig:pdf_fit}
\end{figure}
The quality of the fits degrades towards the tails of the distributions, most notably under $\mathcal{H}_1$, due to the fact that we do not account for the pulse feature. Nonetheless, they still allow us to obtain an estimate, $\tilde\Sigma(x)$, of the corresponding likelihood ratio (LR) test statistic $\Sigma(x) = {\rho(x|\mathcal{H}_1)}/{\rho(x|\mathcal{H}_0)}$. By the Neyman-Pearson lemma \citep[see e.g.][]{kay1998fundamentals}, the true LR test statistic constitutes the most powerful statistic for a given binary test. We can express this statistic in terms of the fitted distributions; for $z=0$ this reads as:
\begin{align}
\tilde\Sigma(x) &\propto x^{\alpha_1 p_1 - \alpha_0 p_0}\frac{(1 + (x/q_1)^{p_1})^{-\alpha_1 - \beta_1}}{(1 + (x/q_0)^{p_0})^{-\alpha_0 - \beta_0}}\\
&= x^{-4.6}\frac{(1 + (x/0.370)^{4.1})^{-2.2}}{(1 + (x/0.304)^{7.7})^{-2.0}}
\end{align}
from which we can obtain an approximate expression, $\tilde\Sigma(x) \propto x^{1.8}$, for large values, $x \gg 0.3$. What one can observe is that for large values of sparsity the LR test statistic is a monotonically increasing function of $x = s_{200,500} - 1$, indicating that in this regime the sparsity itself has a differentiating power comparable to that of the LR test. A similar dependence holds at $z>0$. This indicates that we can use $\Gamma = s_{200,500}$ to efficiently differentiate haloes that have undergone a recent major merger from a quiescent population. In addition to this result, one can estimate a simple p-value,
\begin{equation}
{\rm p} = \text{P}_\text{r}(\Gamma > s_{200,500}|\mathcal{H}_0) = 1 - \int_0^{s_{200,500}-1}\rho(x|\mathcal{H}_0)dx
\end{equation}
i.e. the probability of finding a higher value of $s_{200,500}$ in a halo at equilibrium. Conversely, by inverting this relation one can also estimate the value of the threshold corresponding to a given p-value.
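As a minimal sketch, this p-value can be computed in closed form by exploiting the standard relation between the generalised beta prime and the Beta distribution; the parameter values below are the $z=0$ null-hypothesis entries of Tab.~\ref{tab:fit_params}:
\begin{verbatim}
from scipy.special import betainc  # regularised incomplete Beta

def p_value(s200500, alpha=1.4, bet=0.61, p=7.7, q=0.304):
    """P(Gamma > s_200,500 | H0) for the fitted pdf at z=0."""
    u = ((s200500 - 1.0) / q)**p
    return 1.0 - betainc(alpha, bet, u / (1.0 + u))

print(p_value(1.7))  # large sparsity => small p-value => likely perturbed
\end{verbatim}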
In Fig.~\ref{fig:xi_of_z} we show the thresholds corresponding to three key p-values at increasingly higher redshifts.
Each point is estimated using the sparsity distributions from the numerical halo catalogues. This figure allows one to quickly estimate the values of sparsity above which a halo at a given redshift $z$ should be considered as recently perturbed.
\begin{figure}
\centering
\includegraphics[width = 0.9\linewidth]{figures/xi_200_500_z.pdf}
\caption{Sparsity thresholds $s^{\rm th}_{200,500}$ as a function of redshift for p-values of $0.05$ (purple solid line), $0.01$ (orange solid line) and $0.005$ (green solid line), computed using the Frequentist Likelihood-Ratio approach.}
\label{fig:xi_of_z}
\end{figure}
It is worth noticing that these thresholds are derived from sparsity estimates based on N-body halo masses. In contrast, sparsities of observed galaxy clusters are obtained from mass measurements that may be affected by systematic uncertainties, which differ depending on the type of observation. The impact of mass biases is reduced in the mass ratio, but it could still be present. As an example, using results from hydro/N-body simulations for an extreme AGN feedback model, \citet{Corasaniti2018} have shown that baryonic processes on average can bias the estimates of the sparsity $s_{200,500}$ by up to $\lesssim 4\%$ and $s_{200,2500}$ by up to $\lesssim 15\%$ at the low-mass end. This being said, as long as the mass estimator is unbiased we expect our analysis to hold, albeit with a modification of the fitting parameters. In Section~\ref{testcase} we present a preliminary analysis of the impact of mass biases on our approach; we leave more in-depth investigations of this topic, as well as of modifications that could arise from non-gravitational physics, to upcoming work.
\subsection{Bayesian approach}
\label{sec:Bayesisan}
An alternate way of tackling this problem is through the Bayesian flavour of detection theory. In this case, instead of looking directly at how likely the data $\bmath{x}$ are under a model characterised by the parameters $\bmath{\theta}$, in terms of the likelihood function $p(\bmath{x}|\bmath{\theta})$, one is interested in how likely the model is given the observed data, that is the posterior function $p(\bmath{\theta}|\bmath{x})$.
Bayes theorem allows us to relate these two quantities:
\begin{equation}
p(\bmath{\theta}|x) = \frac{p(x|\bmath{\theta})\pi(\bmath{\theta})}{\pi(x)},
\label{eq:posterior}
\end{equation}
where $\pi(\bmath{\theta})$ is the prior distribution for the parameter vector $\bmath{\theta}$ and
\begin{equation}
\pi(x) = \int p(x|\bmath{\theta})\pi(\bmath{\theta}) d\bmath{\theta},
\end{equation}
is a normalisation factor, known as evidence.
While this opens up the possibility of estimating the parameter vector, which we will discuss in sub-section~\ref{statmergerepoch}, this approach also allows one to systematically define a test statistic known as the Bayes Factor,
\begin{equation}
B_\text{f} = \frac{\int_{V_1} p(\bmath{x}|\bmath{\theta})\pi(\bmath{\theta})d\bmath{\theta}}{\int_{V_0} p(\bmath{x}|\bmath{\theta})\pi(\bmath{\theta})d\bmath{\theta}},
\end{equation}
associated to the binary test. Here, we have denoted by $V_1$ and $V_0$ the volumes of the parameter space attributed to the hypotheses $\mathcal{H}_1$ and $\mathcal{H}_0$ respectively.
In practice, to evaluate this statistic we first need to model the likelihood. Again, we use the numerical halo catalogues as calibrators. We find that the distribution of $s_{200,500}$ for a given value of the scale factor at the epoch of the last major merger, $a_\text{LMM}$, is well described by a generalised $\beta'$ pdf. In particular, we fit the set of parameters $\bmath{\theta} = [\alpha, \beta, p, q]^\top$, which depend solely on $a_\text{LMM}$, by sampling the posterior distribution using Markov Chain Monte Carlo (MCMC) with a uniform prior $a_\text{LMM}\sim \mathcal{U}(0; a(z))$\footnote{The upper bound is the scale factor at the epoch at which the halo is observed.}. This is done using the \textsc{emcee}\footnote{\href{https://emcee.readthedocs.io/en/stable/}{https://emcee.readthedocs.io/en/stable/}} library \citep{Emcee2013}. The resulting values of $B_\text{f}$ can then be treated in exactly the same fashion as the Frequentist statistic. It is however important to note that the Bayes factor is often associated with a standard ``rule of thumb'' interpretation \citep[see e.g.][]{Trotta2007}, making this statistic particularly convenient to interpret.
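A minimal sketch of this sampling step is given below (our own illustration, not the released pipeline); the function \texttt{theta\_of\_a}, returning the calibrated pdf parameters at a given $a_{\rm LMM}$, is a hypothetical placeholder, as are the numerical values:
\begin{verbatim}
import numpy as np
import emcee
from scipy.special import beta as Beta

def gen_beta_prime(x, alpha, bet, p, q):
    u = (x / q)**p
    return (p * (x / q)**(alpha * p - 1) * (1 + u)**(-alpha - bet)
            / (q * Beta(alpha, bet)))

a_obs, s_obs = 1.0, 1.7  # halo observed at z=0 with s_200,500 = 1.7

def theta_of_a(a_lmm):
    # placeholder for the simulation-calibrated dependence on a_LMM
    return 1.4, 0.61, 7.7, 0.304 + 0.07 * a_lmm

def log_prob(params):
    a_lmm = params[0]
    if not 0.0 < a_lmm < a_obs:  # uniform prior U(0, a(z))
        return -np.inf
    return np.log(gen_beta_prime(s_obs - 1.0, *theta_of_a(a_lmm)))

nwalkers, ndim = 32, 1
start = np.random.uniform(0.1, 0.9, size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(start, 2000)
samples = sampler.get_chain(discard=500, flat=True)[:, 0]
\end{verbatim}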
One way of comparing the efficiency of different tests is to draw their respective Receiver Operating Characteristic (ROC) curves \citep{Fawcett2006}, which show the probability of a true detection, $\text{P}_\text{r}(\Gamma > \Gamma_{\rm th}|\mathcal{H}_1)$, plotted against the probability of a false one, $\text{P}_\text{r}(\Gamma > \Gamma_{\rm th}|\mathcal{H}_0)$, for the same threshold. In other words, we simply plot the probability of finding a value of $\Gamma$ larger than the threshold under the alternate hypothesis against that of finding a value of $\Gamma$ larger than the same threshold under the null hypothesis. The simplest graphical interpretation of this type of figure is that the closer a curve gets to the top-left corner, the more powerful the test is at differentiating between the two cases.
In Fig.~\ref{fig:roc_curves} we plot the ROC curves corresponding to all the tests we have studied in the context of this work. These curves have been evaluated using a sub-sample of $10^4$ randomly selected haloes from the MDPL2 catalogues at $z=0$ with masses $M_{200\text{c}} > 10^{13}\,h^{-1}\text{M}_{\odot}$. Let us focus on the comparison between the Frequentist direct sparsity approach (S 1D) and the Bayes Factor obtained using a single sparsity measurement (BF 1D). We can see that both tests have very similar ROC curves at low false alarm rates. This indicates that we do not gain any substantial power from the additional computational work done to estimate the Bayes factor using a single value of the sparsity.
\begin{figure}
\centering
\includegraphics[width = 0.9\linewidth]{figures/roc_curves.pdf}
\caption{ROC curves associated with the binary tests studied in this work: the Frequentist sparsity test (S 1D, solid orange line), the Bayes Factor based on a single sparsity value (BF 1D, dashed green line) and using three values (BF 3D, dash-dotted magenta line), the Support Vector Machines with one sparsity value (SVM 1D, dotted purple line) and three sparsities (SVM 3D, dotted yellow line). What can be observed is that all 1D tests are equivalent at small false alarm rates and the only way to significantly increase the power of the test is to increase the amount of input data, i.e. adding a third mass measurement as in the BF 3D and SVM 3D cases.}
\label{fig:roc_curves}
\end{figure}
While this may seem to mark the end of the line for the method based on the Bayes factor, the latter presents the significant advantage of being easily expanded to include additional data. In our case this comes in the form of additional sparsity measurements at different overdensities. Simply including a third mass measurement, here $M_{2500\text{c}}$, gives us access to two additional sparsities from the three possible pairs, $s_{200,500},\,s_{200,2500}$ and $s_{500,2500}$. This leads us to define each halo as a point in a 3-dimensional space with coordinates
\begin{equation}
\begin{cases}
x = s_{200,500} - 1 \\
y = s_{200,2500} -1 \\
z = s_{500,2500} -1
\end{cases}
\end{equation}
After estimating the likelihood in this coordinate system, one quickly observes that switching to a spherical-like coordinate system, $\mathbfit{r} = [r, \vartheta, \varphi]^\top$, allows for a much simpler description. The resulting likelihood model,
\begin{equation}
L(\mathbfit{r};\bmath{\theta},\bmath{\mu},\mathbfss{C}) = \frac{f(r;\bmath{\theta})}{2\pi\sqrt{|\mathbfss{C}|}}\exp\left[-\frac{1}{2}(\bmath{\alpha} - \bmath{\mu})^\top\mathbfss{C}^{-1}(\bmath{\alpha} - \bmath{\mu})\right],
\label{eq:like3D}
\end{equation}
treats $r$ as independent from the two angular coordinates, which are placed within the 2-vector $\bmath{\alpha} = [\vartheta, \varphi]^\top$. Making the radial coordinate independent allows us to constrain $f(r;\bmath{\theta})$ simply from the marginalised distribution. Doing so, we find that the latter is best described by a Burr type XII \citep{10.1214/aoms/1177731607} distribution,
\begin{equation}
f(x,c,k,\lambda,\sigma) = \frac{ck}{\sigma}\left(\frac{x-\lambda}{\sigma}\right)^{c-1}\left[1+\left(\frac{x-\lambda}{\sigma}\right)^c\right]^{-k-1},
\end{equation}
with additional displacement, $\lambda$, and scale, $\sigma$, parameters. In total the likelihood function is described by 9 parameters: 3 of them are constrained by fitting the marginalised distribution of the $r$ realisations assuming $\lambda = 0$, while the remaining 5, 2 in $\bmath{\mu}$ and 3 in $\mathbfss{C}$, are measured through unbiased sample means and covariances.
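For illustration, the likelihood of Eq.~(\ref{eq:like3D}) can be assembled from standard library distributions, as in the following sketch; all parameter values are illustrative placeholders rather than calibrated ones:
\begin{verbatim}
import numpy as np
from scipy.stats import burr12, multivariate_normal

def log_like_3d(r, th, ph, c, k, lam, sig, mu, cov):
    """log L: displaced/scaled Burr XII radial part times a
    bivariate Gaussian in the angles (vartheta, varphi)."""
    log_radial = burr12.logpdf(r, c, k, loc=lam, scale=sig)
    log_angular = multivariate_normal.logpdf([th, ph], mean=mu, cov=cov)
    return log_radial + log_angular

print(log_like_3d(0.5, 0.8, 0.6, c=3.0, k=2.0, lam=0.0, sig=0.4,
                  mu=[0.8, 0.6], cov=[[0.01, 0.0], [0.0, 0.02]]))
\end{verbatim}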
In a similar fashion to the single sparsity case, we evaluate these parameters as functions of $a_\text{LMM}$ and thus recover a posterior distribution for the epoch of the last major merger using MCMC, again applying a flat prior on $a_\text{LMM}$. This posterior in turn allows us to measure the corresponding Bayes Factor. We calculate these Bayes factors for the same test sample used previously and evaluate the corresponding ROC curve (BF 3D in Fig.~\ref{fig:roc_curves}). As intended, the additional mass measurement has the effect of increasing the detection power of the test, raising the ROC curve with respect to the 1D tests and increasing the true detection rate from 40 to 50 percent at a false positive rate of 10 percent. We have checked that the same trends hold at $z>0$.
\subsection{Support Vector Machines}
An alternative to the Frequentist--Bayesian duo is to use machine learning techniques designed for classification. Convolutional Neural Networks \citep[see e.g.][for a review]{2015Natur.521..436L} are very efficient and have been profusely used to classify large datasets, both in terms of dimensionality and size; recent examples in extra-galactic astronomy include galaxy morphology classification \citep[e.g.][]{Hocking2018,Martin2020,Abul_Hayat2020,Cheng2021,Spindler2021}, detection of strong gravitational lenses \citep[e.g.][]{Jacobs2017,Jacobs2019,Lanusse2018,Canameras2020,Huang2020,Huang2021,He2020,Gentile2021,Stein2021}, galaxy merger detection \citep{Ciprijanovic2021} and galaxy cluster merger time estimation \citep{Koppula2021}. However, they may not be the tool of choice when dealing with datasets of small dimensionality, like the case at hand. A simpler option for this problem is to use Support Vector Machines (SVM) \citep[see e.g.][]{Cristianini2000} as classifiers for the hypotheses defined in Eq.~(\ref{eq:hypothesis}), using as training data the sparsities measured from the halo catalogues.
An SVM works on the simple principle of finding the boundary that best separates the two hypotheses. In contrast to Random Forests \citep[see e.g.][]{Breiman2001}, which can only define a set of horizontal and vertical boundaries, albeit of arbitrary complexity, the SVM maps the data points to a new Euclidean space and solves for the plane best separating the two sub-classes. This definition of a new Euclidean space allows for a non-linear boundary between the classes. For large datasets, however, the optimisation of the non-linear transformation can be slow to converge, and thus we restrict ourselves to linear transformations. To do so we make use of the \textsc{scikit-learn}\footnote{\href{https://scikit-learn.org/}{https://scikit-learn.org/stable/}} \citep{scikit-learn} python package. The user-friendly design of this package allows for fast implementations with little required knowledge of Python and little input from the user, giving this method an advantage over its Frequentist and Bayesian counterparts.
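A minimal sketch of such a classifier is given below; the features and labels are synthetic stand-ins for the halo sparsities and the $T$-based hypothesis labels of Eq.~(\ref{eq:hypothesis}):
\begin{verbatim}
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
y = rng.integers(0, 2, size=n)  # 1: H1 (recent merger), 0: H0
# hypothetical proxy: merging haloes tend to have larger sparsities
X = np.column_stack([1.2 + 0.2 * y + 0.15 * rng.standard_normal(n)
                     for _ in range(3)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)
clf = SVC(kernel="linear").fit(X_tr, y_tr)
scores = clf.decision_function(X_te)  # test statistic for ROC analysis
print(clf.score(X_te, y_te))
\end{verbatim}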
In order to compare the effectiveness of the SVM tests, with one and three sparsities, against those previously presented, we again plot the corresponding ROC curves\footnote{Note that the test data used for the ROC curves were excluded from the training set.} in Fig.~\ref{fig:roc_curves}. What can be seen is that the SVM tests reach a differentiating power comparable to both the Bayesian and Frequentist tests for one sparsity, and are only slightly outperformed by the Bayesian test using three sparsities. This shows that a statistical test based on the sparsity can be designed in a simple fashion without significant loss of differentiation power, making sparsity an all the more viable proxy to identify recent major mergers.
\subsection{Estimating cluster major merger epoch}\label{statmergerepoch}
In the previous section we have investigated the possibility of using halo sparsity as a statistic to identify clusters that have had a recent major merger. We will now expand the Bayesian formulation of the binary test to \emph{estimate} when this last major merger took place. This can be achieved by using the posterior distributions which we have previously computed to calculate the Bayes Factor statistics. These distributions allow us to define the most likely epoch for the last major merger as well as the credible interval around this epoch.
\begin{figure}
\centering
\includegraphics[width = 0.95\linewidth]{figures/sparsity1d_posteriors.pdf}
\caption{Posterior distributions for different values of the sparsity, $s_{200,500}=1.2$ (dash-dotted green line), $1.7$ (dashed orange line), $2$ (dotted purple line) and $3$ (solid magenta line). We can see that for large sparsity values the distributions are bimodal at recent epochs, while low values produce both a continuous distribution at low scale factor values and a single peak at recent epochs corresponding to a confusion region. This induces a degeneracy that needs to be broken if we are to accurately estimate $a_\text{LMM}$.}
\label{fig:post_1sparsity}
\end{figure}
Beginning with the single sparsity estimate, in Fig.~\ref{fig:post_1sparsity} we plot the resulting posterior distributions $p(a_{\rm LMM}|s_{200,500})$ obtained assuming four different values of $s_{200,500}=1.2,1.7,2$ and $3$ at $z=0$. As we can see, in the case of large sparsity values ($s_{200,500}\ge 1.7$) we find a bimodal posterior distribution, caused by the pulse-like feature in the structure of the joint distribution shown in Fig.~\ref{fig:sva}, which is a consequence of the universal imprint of the major merger on the halo sparsity evolution shown in Fig.~\ref{fig:sparsity_histories_2}. In particular, we notice that the higher the measured sparsity, the lower the likelihood that the last major merger occurred in the distant past. A consequence of this pulse-like feature is that a considerable population of haloes with a recent major merger, characterised by $-1/2 <T(a_\text{LMM}; a(z))<-1/4$, have sparsities in the same range as those in the quiescent regime. This confusion region results in a peak of the posterior distribution for the $s_{200,500}=1.2$ case that is located at the minimum of the bimodal distributions associated with the values $s_{200,500}\ge 1.7$. This suggests the presence of a degeneracy, such that quiescent haloes may be erroneously identified as haloes having undergone a recent merger or, on the contrary, haloes having undergone a recent merger may be misidentified as quiescent haloes.
\begin{figure*}
\centering
\includegraphics[width = 0.95\linewidth]{figures/posteriors_sparsity_1D_3D.pdf}
\caption{Posterior distributions of the last major merger epoch for three selected haloes with different sparsity values from the $z=0$ halo catalogue. The shaded areas correspond to the 68\% credible interval around the median (coloured vertical line) assuming a single (orange) and three (purple) sparsity measurements. The black vertical dashed lines mark the true fiducial value of $a_\text{LMM}$ for each of the selected haloes.}
\label{fig:post_3sparsity}
\end{figure*}
The presence of this peak in the $p(a_{\rm LMM}|s_{200,500})$ posterior for low sparsity values biases the Bayesian estimation towards more recent major mergers when using a single sparsity measurement. As a result, the previously mentioned Bayes factors, which depend on such a posterior, will also be biased towards recent mergers, resulting in higher measured values. Moreover, this affects the choice of estimator: a maximum likelihood estimate would be very sensitive to this peak. Therefore, we prefer to use a median likelihood estimate, which is significantly more robust. The credible interval is then estimated iteratively around the median so as to encompass 68 percent of the total probability. The end result of this procedure is shown in Fig.~\ref{fig:post_3sparsity}, where we plot the inferred posteriors along with the corresponding credible intervals (shaded areas) and median likelihood estimates (vertical lines), obtained assuming one (orange curves) and three (purple curves) sparsity values, for three haloes selected from the numerical halo catalogue at $z=0$. The black vertical dashed lines indicate the true $a_{\rm LMM}$ value of the haloes.
We can clearly see that the inclusion of an additional mass measurement (or equivalently two additional sparsity estimates) allows us to break the $s_{200,500}$ degeneracy between quiescent and merging haloes with low sparsity values. In such a case the true $a_{\rm LMM}$ value is found to be within the $1\sigma$ credible regions. Hence, this also enables us to identify merging haloes that are located in the confusion region.
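One simple realisation of the median likelihood estimate and of an equal-tailed 68 per cent credible interval from a set of posterior samples is sketched below; the Beta-distributed draws are a stand-in for actual MCMC samples of $a_{\rm LMM}$:
\begin{verbatim}
import numpy as np

def median_and_68(samples):
    """Median estimate with the 16th-84th percentile interval."""
    med = np.median(samples)
    lo, hi = np.quantile(samples, [0.16, 0.84])
    return med, lo, hi

samples = np.random.default_rng(1).beta(5, 2, size=10000)
print(median_and_68(samples))
\end{verbatim}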
\section{Cosmological Implications}\label{cosmo_imp}
Before discussing practical applications on the use of large halo sparsities as tracers of major merger events in clusters, it is worth highlighting the impact that such systems can have on average sparsity measurements that are used for cosmological parameter inference.
Halo sparsity depends on the underlying cosmological model \citep{Balmes2014,Ragagnin2021}, and it has been shown \citep[][]{Corasaniti2018,Corasaniti2021} that the determination of the average sparsity of an ensemble of galaxy clusters at different redshifts can provide cosmological constraints complementary to those from standard probes. This is possible thanks to an integral relation between the average halo sparsity at a given redshift and the halo mass function at the overdensities of interest, which allows one to predict the average sparsity for a given cosmological model \citep{Balmes2014}. Hence, the average is computed over the entire ensemble of haloes as accounted for by the mass functions. In principle, this implies that at a given redshift the mean sparsity should be computed over the available cluster sample without regard to their state, since any selection might bias the evaluation of the mean. This can be seen in the left panels of Fig.~\ref{fig:cosmo_mean}, where we plot the average sparsity $\langle s_{200,500}\rangle$ (top panel), $\langle s_{500,2500}\rangle$ (central panel) and $\langle s_{200,2500} \rangle$ (bottom panel) as a function of redshift for haloes which are within two dynamical times of the last major merger (blue curves), for those which are more than two dynamical times from the last major merger (orange curves), and for the full sample (green curves). As we can see, removing the merging haloes induces a $\sim 10\%$ bias on $\langle s_{200,500}\rangle$ at $z=0$, which decreases to $\sim 4\%$ at $z=1$, while in the same redshift range the bias is at the $\sim 20\%$ level for $\langle s_{500,2500}\rangle$ and $\sim 30\%$ for $\langle s_{200,2500}\rangle$.
However, the dynamical time is not observable, and in a realistic situation one might have to face the reverse problem, that of having a number of outliers characterised by large sparsity values in a small cluster sample, potentially biasing the estimation of the mean compared to that of a representative cluster ensemble. Which clusters should be considered as outliers and removed from the cluster sample, such that the estimation of the mean sparsity remains representative of the halo ensemble average, say at the sub-percent level? To address this question, we can make use of the sparsity thresholds defined in Section~\ref{sec:frequentist} based on the p-value statistics. As an example, in the right panels of Fig.~\ref{fig:cosmo_mean} we plot the mean sparsities $\langle s_{200,500}\rangle$, $\langle s_{500,2500}\rangle$ and $\langle s_{200,2500}\rangle$ as a function of redshift computed using the full halo sample (blue curves), and using selected halo samples from which we have removed haloes with sparsities above the thresholds, such as those shown in Fig.~\ref{fig:xi_of_z}, associated with p-values of $p\le 0.01$ (green curves) and $p\le 0.005$ (orange curves) respectively. We can see that removing outliers alters the estimated mean sparsity $\langle s_{200,500}\rangle$ at the sub-percent level over the range $0<z<2$, and in the case of $\langle s_{500,2500}\rangle$ and $\langle s_{200,2500}\rangle$ at up to the few per cent level only in the high-redshift range $1\lesssim z <2$.
\begin{figure}
\centering
\includegraphics[width = \linewidth]{figures/mean_spars_z.pdf}
\caption{Redshift evolution of the average halo sparsity $\langle s_{200,500}\rangle$ (top panels), $\langle s_{500,2500}\rangle$ (middle panels) and $\langle s_{200,2500}\rangle$ (bottom panels). In the left panels we show the average sparsity estimated for the full halo samples (green curves), for haloes which are within two dynamical times from the last major merger event (blue curves) and for haloes which are at more than two dynamical times from it (orange curves). In the right panels we show the average sparsity estimate from the full halo samples (blue curves) and for selected samples from which we removed outliers whose sparsity lies above thresholds corresponding to p-values of $p\le 0.01$ (green curves) and $p\le 0.005$ (orange curves). In the inset plots we show the relative differences with respect to the mean sparsity estimated from the full catalogues.}
\label{fig:cosmo_mean}
\end{figure}
\section{Towards practical applications}\label{testcase}
We will now work towards applying the statistical analysis presented in Section~\ref{calistat} to observational data. To this purpose we have specifically developed the numerical code \textsc{lammas}\footnote{\href{https://gitlab.obspm.fr/trichardson/lammas}{https://gitlab.obspm.fr/trichardson/lammas}}. Given the mass measurements $M_{200\text{c}}$, $M_{500\text{c}}$ and $M_{2500\text{c}}$ of a galaxy cluster, the code computes the sparsity data vector $\bmath{D}=\{s_{200,500},s_{200,2500},s_{500,2500}\}$ (the last two values only if the estimate of $M_{2500\text{c}}$ is available) and performs the computation of the frequentist statistics discussed in Section~\ref{sec:frequentist} and the Bayesian computation presented in Section~\ref{sec:Bayesisan}. The code computes the frequentist p-value only for $s_{200,500}$, together with its associated uncertainty. The Bayesian statistics are computed for both one and three sparsities; these include the posterior distributions $p(a_{\rm LMM}|\bmath{D})$ and their associated marginal statistics, along with the Bayes factor, $B_\text{f}$, using the available data. We implement the statistical distributions of the merging and quiescent halo populations calibrated on the halo catalogues from the Uchuu simulations \citep{Ishiyama2021} rather than MDPL2, thus benefiting from the higher mass resolution and redshift coverage of the Uchuu halo catalogues. A description of the code \textsc{lammas} is given in Appendix~\ref{LAMMAS}. In the following we will first validate this code by presenting it with haloes from N-body catalogues that were not used for calibration. We will then quantify the robustness of our analysis to observational mass biases using empirical models. In particular, we will focus on weak lensing, hydrostatic equilibrium and NFW-concentration derived galaxy cluster masses. Finally, we present a preliminary analysis of two galaxy clusters, Abell 383 and Abell 2345.
\subsection{Validation on simulated haloes}
As we have calibrated \textsc{lammas} using the Uchuu simulation suite \citep{Ishiyama2021}, we use a randomly selected sample of $10^4$ haloes from the previously described MDPL2 catalogues as the validation dataset. This choice has two main advantages: firstly, it naturally guarantees that the same haloes are not used in both the calibration and the validation; secondly, it allows us to test the robustness of the method to small changes in cosmology, as the Uchuu suite assumes the cosmology of \citet{2016A&A...594A..13P} while MDPL2 assumes that of \citet{2014A&A...571A..16P}. Furthermore, we choose to perform this validation at $z=0.248$ to ensure that our pipeline also performs well at redshifts $z\neq 0$.
\begin{figure}
\centering
\includegraphics[width = \linewidth]{figures/roc_biass_samples.pdf}
\caption{ROC curves estimated from the validation dataset for sparsities estimated from the N-body halo masses (dashed lines), from the concentration parameter of the best-fitting NFW profile (solid lines) and in the case of a conservative model for the mass bias induced by lensing observations (dash-dotted lines), in the case of the single sparsity Bayesian (BF 1D, orange curves) and frequentist (S 1D, blue curves) estimators and the three sparsity Bayesian estimator (BF 3D, green curves). We can see that in all cases the S 1D and BF 1D tests offer a similar detection power. Comparing the BF 3D curves to the S 1D ones, it is clear that while adding an independent sparsity measurement increases the detection power, this is not the case when the sparsities are deduced from the concentration parameter, with the latter having the opposite effect. Finally, we can also see that strong mass biases have a strong negative impact on the efficiency of the detection of mergers.}
\label{fig:validation_roc_curves}
\end{figure}
We evaluate the efficiency of the detection procedure in terms of the ROC curves shown in Fig.~\ref{fig:validation_roc_curves}, constructed using the same method as those shown in Fig.~\ref{fig:roc_curves}. We plot the case of the single sparsity frequentist (S 1D) and Bayesian (BF 1D) estimators, as well as the three sparsity Bayesian (BF 3D) estimator, for sparsity measurements inferred from N-body halo masses (dashed lines), lensing masses (dash-dotted lines) and NFW-concentration derived masses (solid lines). Comparing the dashed curves of Fig.~\ref{fig:validation_roc_curves} with those in Fig.~\ref{fig:roc_curves}, we can see that for the validation dataset considered here the merger detection efficiency of the different test statistics is comparable to that inferred for the MDPL2 halo sample.
We quantify the accuracy of the estimation procedure by introducing three metrics, defined as follows (a minimal computational sketch is given after the list):
\begin{itemize}
\item the accuracy as given by the frequency at which the true value $a_\text{LMM}$ of a halo is recovered within the $1\,\sigma$ credible interval, $\alpha_\text{cc}$;
\item the estimated epoch of the last major merger, $\hat{a}_{\rm LMM}$;
\item the relative width of the $1\,\sigma$ credible interval, $\sigma/\hat{a}_{\rm LMM}$.
\end{itemize}
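The sketch below is our own illustration of how these metrics can be computed; the arrays are hypothetical stand-ins for the per-halo posterior summaries of the validation sample:
\begin{verbatim}
import numpy as np

def validation_metrics(a_hat, lo, hi, a_lmm):
    """alpha_cc, median estimate and relative interval width."""
    acc = np.mean((a_lmm >= lo) & (a_lmm <= hi))  # alpha_cc
    rel_width = (hi - lo) / a_hat
    return acc, np.median(a_hat), np.median(rel_width)

rng = np.random.default_rng(11)
a_lmm = rng.uniform(0.2, 0.9, 1000)                # true merger epochs
a_hat = a_lmm + 0.05 * rng.standard_normal(1000)   # posterior medians
lo, hi = a_hat - 0.07, a_hat + 0.07                # 68% bounds
print(validation_metrics(a_hat, lo, hi, a_lmm))
\end{verbatim}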
In Fig.~\ref{fig:test_metrics} we plot these metrics as a function of the true scale factor (redshift) of the last major merger of the haloes in the validation sample, for the case of a single sparsity (orange curves) and three sparsity (blue curves) measurements, to which we will simply refer as 1S and 3S respectively. At first glance, it may appear from the top panel as if the 1S estimator is more accurate at recovering the merger epoch than its 3S counterpart over a large interval, $0.2<a_{\rm LMM}<0.68$. However, this is simply due to the fact that for haloes which are more than two dynamical times from their last major merger the posterior distribution is nearly flat, and the estimator returns the same estimated time, as can be seen from the plot in the central panel. Consequently, the increased accuracy is simply due to wider credible intervals, as can be seen in the bottom panel. Hence, in this particular regime it is more prudent to extract an upper bound on $\hat{a}_{\rm LMM}$ from the resulting posterior, rather than a credible interval.
We can see that the trend is reversed for recent mergers, occurring at $0.68<a_{\rm LMM}<0.8$, with the 3S estimator being much more accurate at recovering the scale factor of the last major merger, and with tighter error margins (see the blue curves in the top and bottom panels respectively). Nevertheless, from the middle panel we may notice that both the 1S and 3S estimators have an area of confusion around the dip of the pulse feature in the $\hat{a}_{\rm LMM}$ plot. In both cases, we see that the estimator disfavours very recent mergers (at $a_{\rm LMM}\approx 0.8$) in favour of placing them in the second bump of the pulse, thus causing the median value and the $68\%$ region of $\hat{a}_{\rm LMM}$ to be lower than the true value of the last major merger epoch, an effect that should be kept in mind when using the pipeline.
\begin{figure}
\centering
\includegraphics[width = .9\linewidth]{figures/estimator_tests.pdf}
\caption{\textit{Top:} Accuracy of the estimation of the epoch of the last major merger, $\alpha_{\rm cc}$, as a function of the true value $a_{\rm LMM}$ of the haloes in the validation sample for both the 1S (orange solid line) and 3S (blue solid line) estimators respectively. \textit{Middle:} Median value of the estimated epoch of the last major merger, $\hat{a}_{\rm LMM}$, as function of the true value for the 1S (orange curves) and 3S (blue curves) estimators respectively. The shaded areas correspond to the $68\%$ interval around the median, while the dashed diagonal line gives the ideal value of the estimator $\hat{a}_{\rm LMM}=a_{\rm LMM}$. \textit{Bottom:} relative width of the $68\%$ interval around the median value of $\hat{a}_{\rm LMM}$ as a function of the true value $a_{\rm LMM}$ for the 1S (orange curves) and 3S (blue curves) estimators respectively. We refer the reader to the text for a detailed discussion of the various trends.}
\label{fig:test_metrics}
\end{figure}
\subsection{Systematic Bias}
The statistical methodology we have developed relies on sparsity estimates obtained from N-body halo masses. However, these masses are not directly comparable to those inferred from galaxy cluster mass measurements, since the latter involve systematic uncertainties that may bias the cluster mass estimates relative to those from dark-matter-only simulations. Hence, before applying the sparsity test to real observations, we check the robustness of our approach against observational mass biases. More specifically, we review conservative estimates of these biases for various mass estimation techniques and quantify the effect that they have on the sparsity.
\subsubsection{Weak Lensing Mass Bias}
A well known source of systematic error in weak lensing mass estimates comes from fitting the observed tangential shear profile of a cluster with the shear profile inferred from a spherically symmetric NFW halo. In such a case, deviations from sphericity of the mass distribution within the cluster, as well as projection effects, induce a systematic error on the estimated cluster mass that may vary at different radii, consequently biasing the evaluation of the sparsity.
\citet{Becker2011} have investigated the impact of this effect on weak lensing estimated masses. They modelled the observed mass at overdensity $\Delta$ as:
\begin{equation}
M_{\Delta}^{\text{WL}} = M_\Delta \exp(\beta_\Delta)\exp(\sigma_\Delta X),
\end{equation}
where $M_{\Delta}$ is the unbiased mass, $\beta_{\Delta}$ is a deterministic bias term, while the last factor is a stochastic term with $\sigma_{\Delta}$ quantifying the spread of a log-normal distribution and $X\sim\mathcal{N}(0,1)$. Under the pessimistic assumption of independent scatter on both mass measurements, the resulting bias on the sparsity then reads as:
\begin{equation}\label{spars_wl_bias}
s_{\Delta_1,\Delta_2}^{\rm WL} = s_{\Delta_1,\Delta_2} \left(b^{\rm WL}_{\Delta_1,\Delta_2} +1\right) \exp\left(\sigma^{\rm WL}_{\Delta_1,\Delta_2} X\right),
\end{equation}
where $b^{\rm WL}_{\Delta_1,\Delta_2} = \exp(\beta_{\Delta_1} - \beta_{\Delta_2}) - 1$ and $\sigma^{\rm WL}_{\Delta_1,\Delta_2} = \sqrt{\sigma_{\Delta_1}^2 + \sigma_{\Delta_2}^2}$, with the errors being propagated from the errors quoted on the mass biases. \citet{Becker2011} have estimated the mass bias model parameters at $\Delta_1=200$ and $\Delta_2=500$. Using the values quoted in their Tabs.~3 and 4, we compute the sparsity bias $b^{\rm WL}_{200,500}$ and the scatter $\sigma^{\rm WL}_{200,500}$, which we quote in Tab.~\ref{tab:WL_bias}, for different redshifts and galaxy number densities, $n_\text{gal}$, in units of galaxies per arcmin$^2$. Notice that the original mass bias estimates have been obtained assuming an intrinsic shape noise $\sigma_e = 0.3$.
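As an illustration of how Eq.~(\ref{spars_wl_bias}) can be applied in practice, the following minimal Python sketch generates a population of weak-lensing-biased sparsities from unbiased ones; the log-normal draw standing in for the unbiased sparsities and the bias values (taken from Tab.~\ref{tab:WL_bias} at $z=0.25$, $n_\text{gal}=20$) are illustrative assumptions, not part of the original pipeline.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)

def biased_sparsity(s_true, b_wl, sigma_wl, rng):
    # Eq. (spars_wl_bias): deterministic shift (1 + b_wl)
    # plus log-normal scatter of width sigma_wl
    X = rng.standard_normal(s_true.shape)
    s_wl = s_true * (1.0 + b_wl) * np.exp(sigma_wl * X)
    return s_wl[s_wl > 1.0]  # keep only physical values, s > 1

# hypothetical unbiased sparsities drawn around a median of ~1.4;
# bias parameters from the z = 0.25, n_gal = 20 row of the table
s_true = rng.lognormal(mean=np.log(1.4), sigma=0.1, size=10000)
s_wl = biased_sparsity(s_true, b_wl=0.01, sigma_wl=0.40, rng=rng)
\end{verbatim}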
\begin{table}
\centering
\caption{Sparsity bias and scatter obtained from the weak lensing mass bias estimates by \citet{Becker2011}.}
\begin{tabular}{cccc}
\hline
& $n_\text{gal}$ & $b^\text{WL}_\text{200,500}$ & $\sigma^\text{WL}_\text{200,500}$\\
\hline
& $10$ & $0.04\pm0.02$ & $ 0.51\pm0.03 $\\
$z=0.25$ & $20$ & $ 0.01\pm0.01 $ & $ 0.40\pm0.02 $\\
& $40$ & $ 0.03\pm0.01 $ & $ 0.35\pm0.02 $\\
& & &\\
& $10$ & $0.07\pm0.07$ & $ 0.76\pm0.03 $\\
$z=0.5$ & $20$ & $ 0.02\pm0.02 $ & $ 0.58\pm0.04 $\\
& $40$ & $ 0.03\pm0.01 $ & $ 0.49\pm0.03 $\\
\hline
\end{tabular}
\label{tab:WL_bias}
\end{table}
We may notice that although the deterministic sparsity bias is smaller than that on individual mass estimates, the scatter can be large. In order to evaluate the impact of such biases on the identification of merging clusters using sparsity estimates, we use the values of the bias parameters quoted in Tab.~\ref{tab:WL_bias} to generate a population of biased sparsities using Eq.~(\ref{spars_wl_bias}), with the constraint that $s_{200,500}^\text{WL} > 1$, for our validation sample at $z=0.25$. We then performed the frequentist test for a single sparsity measurement (the Bayesian estimator has a detection power similar to that of the frequentist one) and evaluated the Area Under the ROC curve (AUC) as a function of the scatter $\sigma^{\rm WL}_{200,500}$ to quantify the efficiency of the estimator at detecting recent major merger events. This is shown in Fig.~\ref{fig:AUC-scatter}. Notice that a useful classifier should have AUC$>0.5$ \citep{Fawcett2006}. Hence, we can see that the scatter can greatly reduce the detection power of the sparsity estimator and render the method ineffective at detecting recent mergers for $\sigma^{\rm WL}_{200,500}>0.2$. In contrast, the estimator is a valuable classifier for smaller values of the scatter.
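The AUC itself can be computed with standard tools. A schematic Python example is given below; the synthetic sparsity distributions are placeholders for the merging and quiescent populations of the validation sample, which in the actual analysis would be degraded by the weak-lensing scatter of Eq.~(\ref{spars_wl_bias}).
\begin{verbatim}
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# placeholder sparsity values for quiescent and recently merging haloes
s_quiescent = rng.lognormal(np.log(1.4), 0.10, 5000)
s_merging = rng.lognormal(np.log(1.7), 0.15, 5000)

labels = np.concatenate([np.zeros(5000), np.ones(5000)])
scores = np.concatenate([s_quiescent, s_merging])

# AUC = 0.5 for a random classifier; larger values are better
print("AUC =", roc_auc_score(labels, scores))
\end{verbatim}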
\begin{figure}
\centering
\includegraphics[width = 0.9\linewidth]{figures/AUC_scatter.pdf}
\caption{Area Under the ROC Curve (AUC) as a function of the scatter on the measured sparsity for WL mass estimates. A random classifier has AUC$=0.5$. The vertical and horizontal lines denote AUC$=0.6$ and the corresponding scatter $\sigma^{\rm WL}_{200,500}=0.2$, marking the threshold, $\sigma^\text{WL}_{200,500} > 0.2$, beyond which the detector can be considered ineffective at detecting recent mergers.}
\label{fig:AUC-scatter}
\end{figure}
\subsubsection{Hydrostatic Mass Bias}
Measurements of galaxy cluster masses from X-ray observations rely on the hypothesis that the intra-cluster gas is in hydrostatic equilibrium. Deviations from this condition can induce a radially dependent bias on the cluster masses \citep[see e.g.][]{2016ApJ...827..112B,Eckert2019,Ettori2022}, thus affecting the estimation of the cluster's sparsity. The hydrostatic mass bias has been studied in \citet{2016ApJ...827..112B}, who carried out cosmological zoom N-body/hydrodynamic simulations of 29 clusters to evaluate the bias of masses at overdensities $\Delta=200, 500$ and $2500$ (in units of the critical density) for Cool Core (CC) and No Cool Core (NCC) clusters, as defined with respect to the entropy in the core of their sample, as well as for Regular and Disturbed clusters, defined by the offset of the centre of mass and the fraction of substructures.
\begin{table}
\centering
\caption{Sparsity bias from the hydrostatic mass bias estimates of \citet{2016ApJ...827..112B} for different categories of simulated clusters.}
\begin{tabular}{lccc}
\hline
& $b_{200,500}^\text{HE}$ & $b_{500,2500}^\text{HE}$ & $b_{200,2500}^\text{HE}$ \\
\hline
All & $0.003\pm0.032$ & $-0.037\pm0.025$ & $-0.033\pm0.034$ \\
CC & $-0.009\pm0.031$ & $-0.151\pm0.038$ & $-0.162\pm0.041$ \\
NCC & $0.019\pm0.046$ & $0.005\pm0.027$ & $0.023\pm0.041$ \\
Regular & $0.032\pm0.089$ & $0.025\pm0.037$ & $0.057\pm0.082$ \\
Disturbed & $-0.017\pm0.077$ & $-0.080\pm0.086$ & $-0.098\pm0.052$\\
\hline
\end{tabular}
\label{tab:hydro_biasses}
\end{table}
Following the evaluation presented in \citet{Corasaniti2018}, we use the hydrostatic mass bias estimates given in Tab.~1 of \citet{2016ApJ...827..112B} to estimate the bias on cluster sparsities, quoted in Tab.~\ref{tab:hydro_biasses}. Overall, we can see that the hydrostatic mass bias does not significantly affect the estimated sparsity, with a bias of the order of a few percent and in most cases compatible with a vanishing bias, with only a few exceptions. This is consistent with the results of the recent analysis based on observed X-ray clusters presented in \citet{Ettori2022}, which yields sparsity biases at the percent level, consistent with having no bias at all. However, we have seen in the case of the WL mass bias that even though the effect on the measured sparsity remains small, the scatter around the true sparsity can severely affect the efficiency of the detector at identifying recent mergers. Unfortunately, the limited sample of \citet{2016ApJ...827..112B} does not allow us to compute the scatter of the sparsity due to the hydrostatic mass bias. If the latter behaves in the same manner as in the WL case, then we can expect the estimator to respond to increasing scatter as in Fig.~\ref{fig:AUC-scatter}. Consequently, as long as the scatter remains small, $\sigma^{\rm HE}_{\Delta_1,\Delta_2} < 0.1$, the efficiency of the estimator will remain unaffected.
\subsubsection{Concentration Mass Bias}
We have seen in Section~\ref{sparsprof} that sparsities deduced from the concentration parameter of an NFW profile fitted to the halo density profile are biased compared to those measured using N-body masses. In particular, as seen in Fig.~\ref{fig:relative_spars_conc}, concentration-deduced sparsities tend to underestimate their N-body counterparts. Hence, they are more likely to be associated with relaxed clusters than with systems in a perturbed state characterised by higher values. A notable exception is the case of haloes undergoing recent mergers, which are associated with lower concentration values, or equivalently higher sparsity, even though the N-body estimated sparsity is low. This effect is most likely due to poor fit agreement \citep{Balmes2014}, and it systematically increases the population of perturbed haloes above the detection threshold. The concurrence of these two effects leads to an apparent increase in detection power for the 1S estimators when using NFW-concentration estimated masses, as can be seen for the solid lines in Fig.~\ref{fig:validation_roc_curves}.
In contrast, when looking at the 3S case in Fig.~\ref{fig:validation_roc_curves}, there is a clear decrease in the detection power for the concentration-based sparsity estimates. This is due to the differences in the pulse patterns deduced from concentration compared to the direct measurement of the sparsity, which result in a shape of the pulse at inner radii that is significantly different from that obtained using the N-body masses. Similarly to the 1S estimator, the sparsities measured using the NFW concentration are on average shifted towards smaller values. As such, using concentration-based estimates results in an overestimation of the likelihood that a halo has not undergone a recent merger.
Keeping the above discussions in mind, we now present example applications to two well studied galaxy clusters.
\subsection{Abell 383}
Abell 383 is a cluster at $z=0.187$ that has been observed in the X-ray \citep{2004A&A...425..367B,2006ApJ...640..691V} and optical bands \citep{2002PASJ...54..833M,2012ApJS..199...25P}, with numerous studies devoted to measurements of the cluster mass from gravitational lensing analyses \citep[e.g.][]{2016MNRAS.461.3794O,2016ApJ...821..116U,2019MNRAS.488.1704K}. The cluster appears to be a relaxed system with HE masses $M_{500\text{c}}=(3.10\pm 0.32)\cdot 10^{14}\,\text{M}_{\odot}$ and $M_{2500\text{c}}=(1.68\pm 0.15)\cdot 10^{14}\,\text{M}_{\odot}$ from Chandra X-ray observations \citep{2006ApJ...640..691V}, corresponding to the halo sparsity $s_{500,2500}=1.84\pm 0.25$, which is close to the median of the halo sparsity distribution. We compute the merger test statistics of Abell 383 using the lensing mass estimates from the latest version of the Literature Catalogues of Lensing Clusters \citep[LC$^2$;][]{2015MNRAS.450.3665S}. In particular, we use the mass estimates obtained from the analysis of the latest profile data of \citet{2019MNRAS.488.1704K}: $M_{2500\text{c}}=(2.221\pm 0.439)\cdot 10^{14}\,\text{M}_{\odot}$, $M_{500\text{c}}=(5.82\pm 1.15)\cdot 10^{14}\,\text{M}_{\odot}$ and $M_{200\text{c}}=(8.55\pm 1.7)\cdot 10^{14}\,\text{M}_{\odot}$. These give the following set of sparsity values: $s_{200,500}=1.47\pm 0.41$, $s_{200,2500}=3.85\pm 1.08$ and $s_{500,2500}=2.62\pm 0.73$. We obtain a p-value ${\rm p}=0.21$ and a Bayes factor $B_\text{f}=0.84$; incorporating errors on the measurement of $s_{200,500}$ yields a higher p-value, ${\rm p}=0.40$, which can be interpreted as an effective sparsity of $s^\text{eff}_{200,500} = 1.40$. These results disfavour the hypothesis that the cluster has gone through a major merger in its recent history.
\subsection{Abell 2345}
Abell 2345 is a cluster at $z=0.179$ that has been identified as a perturbed system by a variety of studies that have investigated the distribution of the galaxy members in optical bands \citep{2002ApJS..139..313D,2010A&A...521A..78B} as well as the properties of the gas through radio and X-ray observations \citep[e.g.][]{1999NewA....4..141G,2009A&A...494..429B,2017ApJ...846...51L,2019ApJ...882...69G,2021MNRAS.502.2518S}. The detection of radio relics and the disturbed morphology of the gas emission indicate that the cluster is dynamically disturbed. Furthermore, the analysis by \citet{2010A&A...521A..78B} suggests that the system is composed of three sub-clusters. \citet{2002ApJS..139..313D} have conducted a weak lensing study on a small field of view centred on the main sub-cluster and found that the density distribution is roughly peaked on the bright central galaxy. This is also confirmed by the study of \citet{2004ApJ...613...95C}; however, the analysis by \citet{2010PASJ...62..811O} on a larger field of view has shown that Abell 2345 has a complex structure. The shear data have been re-analysed to infer lensing masses that are reported in the latest version of the LC$^2$ catalogue \citep{2015MNRAS.450.3665S}: $M_{200\text{c}}=(28.44\pm 10.76)\cdot 10^{14}\,\text{M}_{\odot}$, $M_{500\text{c}}=(6.52\pm 2.47)\cdot 10^{14}\,\text{M}_{\odot}$ and $M_{2500\text{c}}=(0.32\pm 0.12)\cdot 10^{14}\,\text{M}_{\odot}$. These mass estimates give the following set of sparsity values: $s_{200,500}= 4.36\pm 2.33$, $s_{200,2500}=87.51\pm 46.83$ and $s_{500,2500}=20.06\pm 10.74$. Using only the $s_{200,500}$ estimate results in a very small p-value, ${\rm p}=4.6\cdot 10^{-5}$. Incorporating errors on the measurement of $s_{200,500}$ yields a higher p-value, ${\rm p}=7.5\cdot10^{-4}$, which can be interpreted as an effective sparsity of $s^\text{eff}_{200,500} = 2.76$, significantly lower than the measured value; however, both strongly favour the signature of a major merger event, which is confirmed by the combined analysis of the three sparsity measurements, for which we find a divergent Bayes factor. In Fig.~\ref{fig:post_A2345} we plot the marginal posterior for the single sparsity $s_{200,500}$ (orange solid line) and for the ensemble of sparsity estimates (purple solid line). In the former case we obtain a median redshift $z_{\rm LMM}=0.30^{+0.03}_{-0.06}$, while in the latter case we find $z_\text{LMM} = 0.39\pm 0.02$, which suggests that a major merger event occurred $t_\text{LMM} = 2.1\pm 0.2$ Gyr ago. One should however note that, in light of the discussions presented above, this result could be associated with a more recent merger event which, as can be seen in Fig.~\ref{fig:test_metrics}, is artificially disfavoured by our method.
\begin{figure}
\centering
\includegraphics[width = 0.9\linewidth]{figures/fig_A2345.pdf}
\caption{Posterior distributions for Abell 2345 obtained using three sparsity measurements from the lensing cluster masses in the LC$^2$ catalogue \citep{2015MNRAS.450.3665S}, based on the shear data from \citet{2010PASJ...62..811O}. The vertical lines indicate the median value of $z_{\rm LMM}$, while the shaded area corresponds to the $68\%$ credible region around the median.}
\label{fig:post_A2345}
\end{figure}
\section{Conclusions}\label{conclu}
In this work we have investigated the properties of the mass profile of massive dark matter haloes hosting galaxy clusters. We have focused on haloes undergoing major merger events with the intent of finding observational proxies of the halo mass distribution that can provide hints of recent mergers in galaxy clusters. To this purpose we have performed a thorough analysis of N-body halo catalogues from the MultiDark-Planck2 simulation.
We have shown that halo sparsity provides a good proxy of the halo mass profile, especially in the case of merging haloes whose density profile significantly deviates from the NFW formula. We have found that major mergers leave a characteristic universal imprint on the evolution of the halo sparsity. This manifests as a rapid pulse response to the major merger event with a shape that is independent of the time at which the major merger occurs. The onset of the merger systematically increases the value of the sparsity, suggesting that mass in the inner part of the halo is displaced relative to the mass in the external region. Following the pulse, a quiescent evolution of the halo mass distribution is recovered within only $\sim 2$ dynamical times, which is consistent with the findings of the concentration analysis by \citet{Wang2020}.
The universal imprint of major mergers on the evolution of halo sparsity implies the universality of the distributions of halo sparsities of merging and quiescent haloes respectively. That is to say, at any given redshift it is possible to distinctly characterise the distributions of merging and quiescent haloes. This is because the distribution of sparsity values of haloes that have undergone their last major merger within $|T|\lesssim 2$ dynamical times differs from that of quiescent haloes that had their last major merger at earlier epochs, $|T|\gtrsim 2$. The former constitute a sub-sample of the whole halo population that largely contributes to the scatter of the halo sparsity distribution through their large sparsity values.
The characterisation of these distributions enables us to devise statistical tests to evaluate whether a cluster at a given redshift and with given sparsity estimates has gone through a major merger in its recent history and, eventually, at which epoch. To this purpose we have developed different metrics based on a standard binary frequentist test, Bayes factors and Support Vector Machines. We have shown that having access to cluster mass estimates at three different overdensities, allowing us to obtain three sparsity estimates, provides more robust conclusions. In the light of these results we have developed a numerical code that can be used to investigate the presence of major mergers in observed clusters. As example cases, we have considered Abell 2345, a known perturbed cluster, as well as Abell 383, a known quiescent cluster.
In the future we plan to expand this work in several new directions. On the one hand, it will be interesting to assess the impact of baryons on halo sparsity estimates, especially for merging haloes. This should be possible through the analysis of N-body/hydro simulations of clusters. On the other hand, it may also be useful to investigate whether the universality of the imprint of major mergers on the evolution of halo sparsity depends on the underlying cosmological model. The analysis of N-body halo catalogues from simulations of non-standard cosmological scenarios, such as the RayGalGroupSims suite \citep{Corasaniti2018,2021arXiv211108745R}, may allow us to address this point.
It is important to stress that the study presented here focuses on the statistical relation between halo sparsity and the epoch of last major merger, defined as the time when the parent halo merges with a smaller mass halo that has at least one third of its mass. This is different from the collision time, or the central passage time of two massive haloes, which occur on a much shorter time scale. Hence, the methodology presented here cannot be applied to Bullet-like clusters that have just gone through a collision, since the distribution of the collisionless dark matter component in the colliding clusters has not been disrupted and their merger has yet to be completed. Overall, our results open the way to timing major mergers in perturbed galaxy clusters through measurements of dark matter halo sparsity.
\section*{Acknowledgements}
We are grateful to Stefano Ettori, Mauro Sereno and the anonymous referee for carefully reading the manuscript and their valuable comments.
The CosmoSim database used in this paper is a service by the Leibniz-Institute for Astrophysics Potsdam (AIP).
The MultiDark database was developed in cooperation with the Spanish MultiDark Consolider Project CSD2009-00064.
The authors gratefully acknowledge the Gauss Centre for Supercomputing e.V. (www.gauss-centre.eu) and the Partnership for Advanced Supercomputing in Europe (PRACE, www.prace-ri.eu) for funding the MultiDark simulation project by providing computing time on the GCS Supercomputer SuperMUC at Leibniz Supercomputing Centre (LRZ, www.lrz.de).
We thank Instituto de Astrofisica de Andalucia (IAA-CSIC), Centro de Supercomputacion de Galicia (CESGA) and the Spanish academic and research network (RedIRIS) in Spain for hosting Uchuu DR1 in the Skies \& Universes site for cosmological simulations. The Uchuu simulations were carried out on Aterui II supercomputer at Center for Computational Astrophysics, CfCA, of National Astronomical Observatory of Japan, and the K computer at the RIKEN Advanced Institute for Computational Science. The Uchuu DR1 effort has made use of the skun@IAA\_RedIRIS and skun6@IAA computer facilities managed by the IAA-CSIC in Spain (MICINN EU-Feder grant EQC2018-004366-P).
\section*{Data Availability}
During this work we have used publicly available data from the MDPL2 simulation suite \citep{Klypin2016}, provided by the CosmoSim database \href{https://www.cosmosim.org/}{https://www.cosmosim.org/}, in conjunction with publicly available data from the Uchuu simulation suite \citep{Ishiyama2021}, provided by the Skies and Universes database \href{http://skiesanduniverses.org/}{http://skiesanduniverses.org/}.
The numerical code \textsc{lammas} used for this analysis is available at: \href{https://gitlab.obspm.fr/trichardson/lammas}{https://gitlab.obspm.fr/trichardson/lammas}. The package also contains the detailed fitting parameters of the 1S and 3S likelihood distributions for all Uchuu snapshots up to $z = 2$.
\bibliographystyle{mnras}
\section{Introduction}
The performance of a sea-going ship is important not only to keep the fuel and operational costs in check but also to reduce global emissions from the shipping industry. Analyzing the performance of a ship is also of great interest for charter parties to estimate the potential of a ship and the profit that can be made out of it. Therefore, driven by both economic and social incentives, the trade of ship performance analysis and monitoring has been booming substantially in recent times. The importance of in-service data in this context is very well understood by most of the stakeholders, clearly reflected by the amount of investment made by them in onboard sensors, data acquisition systems, and onshore operational performance monitoring and control centers.
The traditional way to evaluate the performance of a ship is using the noon report data provided by the ship's crew. A more exact approach, but not very feasible for commercial vessels, was suggested by \citet{Walker2007}: conducting in-service sea trials in calm-water conditions on a regular basis. With the advent of sensor-based continuous monitoring systems, the current trend is to directly or indirectly observe the evolution of the calm-water speed-power curve over time. ISO 19030 \cite{ISO19030}, along with several researchers (\citet{Koboevic2019}; \citet{Coraddu2019DigTwin}), recommends observing the horizontal shift (along the speed axis) of the calm-water speed-power curve, termed the speed-loss, over time to monitor the performance of a sea-going ship using the in-service data. Alternatively, it is suggested to observe the vertical shift of the calm-water speed-power curve, often termed the change in power demand (adopted by \citet{Gupta2021PrefMon} and \citet{CARCHEN2020}). Some researchers also formulated and used indirect performance indicators like fuel consumption (\citet{Koboevic2019}), resistance (or fouling) coefficient (\citet{Munk2006}; \citet{Foteinos2017}; \citet{CARCHEN2020}), (generalized) admiralty coefficient (\citet{Ejdfors2019}; \citet{Gupta2021}), wake fraction (\citet{CARCHEN2020}), fuel efficiency (\citet{Kim2021}), etc. In each of these cases, it is clearly seen (and most of the time acknowledged) that the results are quite sensitive to the quality of the data used to estimate the ship's performance.
The ship's performance-related data obtained from various sources usually inherit some irregularities due to several factors, like sensor inaccuracies, vibration of the sensor mountings, electrical noise, variation of the environment, etc., as pointed out in the Guide for Smart Functions for Marine Vessels and Offshore Units (Smart Guide) published recently by \citet{ABS2020guide}. The quality of the data used to carry out ship performance analysis, and the results obtained from it, can be significantly improved by adopting some rational data processing techniques, as shown by \citet{Liu2020} and \citet{Kim2020}. Another important factor is the source of data, as it may also be possible to obtain such datasets using the publicly available AIS data (\citet{You2017}). \citet{Dalheim2020DataPrep} presented a data preparation toolkit based on the in-service data recorded onboard 2 ships. The presented toolkit was developed for a specific type of dataset, where the variables were recorded asynchronously and had to be synchronized before carrying out ship performance analysis. The current work rather focuses on the challenges faced while processing an already synchronized dataset.
The current paper presents a review of different data sources used for ship performance analysis and monitoring, namely, onboard recorded in-service data, AIS data, and noon reports, along with the characteristics of each of these data sources. Finally, a data processing framework is outlined which can be used to prepare these datasets for ship performance analysis and monitoring. Although the data processing framework is developed for the performance monitoring of ships, it may easily be adapted for several other purposes. With the easy availability of data from ships, the concept of creating digital twins for sea-going ships is becoming quite popular. \citet{Major2021} presented the concept of a digital twin for a ship and the cranes onboard it. The digital twin established by \citet{Major2021} can be used to perform three main offshore operations from an onshore control center: remote monitoring of the ship, maneuvering in harsh weather, and crane operations. Moreover, as pointed out by \citet{Major2021}, the digital twin technology can also be adopted for several other purposes, like predictive maintenance, ship autonomy, etc. Nevertheless, the data processing framework presented here can also be used to efficiently process the real-time data obtained to create digital twins for ships.
The following section discusses the art of ship performance analysis and the bare minimum characteristics of a dataset required for such an analysis. Section \ref{sec:dataSources} presents the above mentioned sources of data used for ship performance analysis, their characteristics, and the tools required to process these datasets. Section \ref{sec:results} presents the data processing framework which can be used to process and prepare these datasets for ship performance monitoring. Finally, section \ref{sec:conclusion} closes the paper with concluding remarks.
\section{Ship Performance Analysis}
The performance of a ship-in-service can be assessed by observing its current performance and, then, comparing it to a benchmarking standard. There are several ways to establish (or obtain) a benchmarking standard, like model test experiments, full-scale sea trials, CFD analysis, etc. It may even be possible to establish a benchmarking standard using the in-service data recorded onboard a newly built ship, as suggested by \citet{Coraddu2019DigTwin} and \citet{Gupta2021}. On the other hand, evaluating the current performance of a ship requires a good amount of data processing, as the raw data collected during various voyages of a ship is susceptible to noise and errors. Moreover, the benchmarking standard is generally established for only a given environmental condition, most likely the calm-water condition. In order to draw a comparison between the current performance and the benchmarking standard, the current performance must be translated to the same environmental condition, therefore increasing the complexity of the problem.
\subsection{Bare Minimum Variables}
For translating the current performance data to the benchmarking standard's environmental condition and carrying out a reliable ship performance analysis, a list of bare minimum variables must be recorded (or observed) at a good enough sampling rate. The bare minimum list of variables must provide the following information about each sampling instant for the ship: (a) operational control, (b) loading condition, (c) operational environment, and (d) operating point. The variables containing the above information must either be directly recorded (or observed) onboard the ship, collected from regulatory data sources such as AIS, or derived using additional data sources; for example, the operational environment can easily be derived using the ship's location and timestamp with the help of an appropriate weather hindcast (or metocean) data repository.
The operational control information should contain the values of the propulsion-related control parameters set by the ship's captain on the bridge, like shaft rpm, rudder angle, propeller pitch, etc. The shaft rpm (or propeller pitch, in case of ships equipped with controllable pitch propellers running at constant rpm) is by far the most important variable here as it directly correlates with the ship's speed-through-water. It should be noted that even in the case of constant power or speed mode, the shaft rpm (or propeller pitch) continues to be the primary control parameter, as the set power or speed is actually achieved by using a real-time optimizer (incorporated in the governor) which optimizes the shaft rpm (or propeller pitch) to get to the set power or speed. Nevertheless, in case the shaft rpm (or propeller pitch) is not available, it may be appropriate to use the ship's speed-through-water as an operational control parameter, as done by several researchers (\citet{FARAG2020}; \citet{Laurie2021}; \citet{Minoura2020}; \citet{Liang2019}), but in this case, it should be kept in mind that, unlike the shaft rpm (or propeller pitch), the speed-through-water is a dependent variable, strongly influenced by the loading condition and the operational environment.
The loading condition should contain the information regarding the ship's fore and aft draft, which can be easily recorded onboard the ship. Although the wetted surface area and under-water hull-form are more appropriate for a hydrodynamic analysis, these can be derived easily using the ship's hull form, if the fore and aft draft is known. The operational environment should at least contain variables indicating the intensity of wind and wave loads acting on the ship, like wind speed and direction, significant wave height, mean wave direction, mean wave period, etc. Finally, the operating point should contain the information regarding the speed-power operating point for the sampling instant. Table \ref{tab:bareMinVars} presents the list of bare minimum variables required for ship performance analysis. The list given in the table may have to be modified according to ship specifications, for example, the propeller pitch is only relevant for a ship equipped with a controllable pitch propeller.
\begin{table}[ht]
\caption{The list of bare minimum data variables required for ship performance analysis.} \label{tab:bareMinVars}
\centering
\begin{tabular}{l|l}
\hline
\multicolumn{1}{c|}{\textbf{Category}} & \multicolumn{1}{c}{\textbf{Variables}} \\
\hline
Operational Control & Shaft rpm, Rudder angle, \& Propeller pitch \\
\hline
Loading Condition & Fore and aft draft \\
\hline
Operational Environment & \begin{tabular}[l]{@{}l@{}}Longitudinal and transverse wind speed, Significant wave height,\\ Relative mean wave direction, \& Mean wave period\end{tabular} \\
\hline
Operating Point & Shaft power \& Speed-through-water \\
\hline
\end{tabular}
\end{table}
\subsection{Best Practices} \label{sec:bestPractices}
It is well-known that the accuracy of various measurements is not the same. It also depends on the source of the measurements. The measurements recorded using onboard sensors are generally more reliable as compared to the manually recorded noon report measurements, due to the possibility of human error in the latter. Even in the case of onboard recorded sensor measurements, the accuracy varies from sensor-to-sensor and case-to-case. Some sensors can be inherently faulty, whereas others can give incorrect measurements due to unfavorable installation and operational conditions, and even the best ones are known to have some measurement noise. Thus, it is recommended to establish and follow some best practices for a reliable and robust ship performance analysis.
The onboard measurements for shaft rpm ($n$) and shaft torque ($\tau$) are generally obtained using a torsion meter installed on the propeller shaft, which is considered to be quite reliable. The shaft power ($P_s$) measurements are also derived from the same, as the shaft power is related to the shaft rpm and torque through the following identity: $P_s = 2\pi n\tau$. It should be noted that no approximation is assumed in this formulation and, therefore, it should be validated against the data if all three variables ($n, \tau, P_s$) are available. On the other hand, the measurements for speed-through-water are known to have several problems, as presented by \citet{DALHEIM2021}. Thus, it is recommended to use the shaft rpm (and not the speed-through-water) as the independent variable while creating data-driven regression models to predict the shaft power. Owing to the same reason, it may also be a good idea to quantify the change in ship's performance in terms of the change in power demand rather than the speed-loss (or speed-gain), as recommended by ISO 19030 \cite{ISO19030}.
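This identity check can be automated directly on the recorded time-series. A minimal Python sketch, operating on a pandas data frame, is given below; the column names, units, and tolerance are assumptions that would need to be adapted to the actual dataset.
\begin{verbatim}
import numpy as np

def validate_shaft_power(df, rpm_col="shaft_rpm",
                         torque_col="shaft_torque_kNm",
                         power_col="shaft_power_kW", rtol=0.02):
    # flag samples where the recorded power deviates from 2*pi*n*tau;
    # rpm is converted to rev/s, so torque in kNm gives power in kW
    n_rps = df[rpm_col] / 60.0
    p_calc = 2.0 * np.pi * n_rps * df[torque_col]
    return ~np.isclose(p_calc, df[power_col], rtol=rtol)
\end{verbatim}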
Further, it is also quite common to use fuel oil consumption as a key performance indicator for ship performance analysis (\citet{Karagiannidis2021}). The fuel oil consumption can be easily calculated from the engine delivered torque and engine rpm, if the specific fuel consumption (SFC) curve for the engine is known. Even though the SFC curve is established and supplied by the engine manufacturer, it is only valid for a specific operating environment, and it is known to evolve over time due to engine degradation and maintenance. Thus, including the fuel oil consumption in ship performance analysis increases the complexity of the problem, which requires taking engine health into account. If the objective of ship performance analysis is also to take into account the engine performance, then it may be beneficial to divide the problem into two parts: (a) Evaluate the change in power demand (for hydrodynamic performance analysis), and (b) Evaluate the change in engine SFC (for engine performance analysis). Now, the latter can be formulated as an independent problem with a completely new set of variables-of-interest, like engine delivered torque, engine rpm, ambient air temperature, calorific value of fuel, turbocharger health, etc. This would not only improve the accuracy of ship's hydrodynamic performance analysis but would also allow the user to develop a more comprehensive and, probably, accurate analysis model. The current work is focused on the hydrodynamic performance analysis.
\subsection{Sampling Frequency}
Almost all electronics-based sensors are known to have some noise in their measurements. The simplest way to subdue this noise is by taking an average over a number of measurements (known as a `sample' in statistics) recorded over a very short period of time (milliseconds). It is also known that the statistical mean of a `sample' converges to the true mean (i.e., the mean of the entire population), thereby eliminating the noise, as the number of measurements in the `sample' is increased, provided the observations follow a symmetrical distribution. Nevertheless, it is observed that high frequency data still retains some noise, probably due to the fact that the number of measurements in each `sample' is small, i.e., the measurements are obtained by averaging a small number of observations recorded over a very short period of time. On the other hand, as seen in the case of noon reports and most of the in-service datasets, time-averaging the measurements over a longer period of time obscures the effect of moderately varying influential factors, for example, instantaneous incident wind and waves, response motions, etc. Thus, very high sampling frequency data may retain high noise, and very low sampling frequency data, with time-averaged values, may obscure important effects in the data time-series. Furthermore, in a third scenario, it may be possible that the data acquisition (DAQ) system onboard the ship is simply using a low sampling frequency, recording instantaneous values instead of time-averaged ones, saving a good amount of storage and bandwidth while transmitting the data to shore-based control centers. These low frequency instantaneous values may result in an even more degraded data quality, as they would retain the noise as well as obscure the moderately varying effects.
The ideal sampling frequency also depends on the objective of the analysis and the recorded variables. For example, if the objective of the analysis is to predict the motion response of a ship or analyse its seakeeping characteristics, the data should be recorded at a high enough sampling frequency such that it is able to capture such effects. \citet{hansen2011performance} analyzed the ship's rudder movement and the resulting resistance, and demonstrated that if the sampling interval were large, the overall dynamics of the rudder movement would not be captured, resulting in a difference in resistance. One criterion for selecting the data sampling rate is the Nyquist frequency (\citet{jerri1977shannon}), which is widely used in signal processing. According to this criterion, the sampling frequency shall be more than twice the frequency of the observed phenomenon to sufficiently capture the information regarding the phenomenon; for instance, a rudder oscillation with a period of 10 seconds would have to be sampled at intervals shorter than 5 seconds to be resolved. Therefore, if the aim is not to record any information regarding the above mentioned moderately varying effects (instantaneous incident wind and waves, response motions, etc.), it may be acceptable to just obtain low frequency time-averaged values so that such effects are subdued. But it may still be useful to obtain high frequency data in this case, as it can be advantageous from a data cleaning point of view. For example, the legs of the time-series showing very high variance, due to noise or moderately varying effects, can be removed from the analysis to increase the reliability of the results.
\section{Data Sources, Characteristics \& Processing Tools} \label{sec:dataSources}
\subsection{In-service Data}
The in-service data, referred to here, is recorded onboard a ship during its voyages. This is achieved by installing various sensors onboard the ship, collecting the measurements from these sensors on a regular basis (at a predefined sampling rate) using a data acquisition (DAQ) system, and transferring the collected data to onshore control centers. The two most important features of in-service data are the sampling rate (or, alternatively, sampling frequency) and the list of recorded variables. Unfortunately, there is no proper guide or standard which is followed while defining both these features for a ship. Thus, the in-service data processing has to be adapted to each case individually.
The in-service datasets used here are recorded over a uniform (across all recorded variables) and evenly-spaced sampling interval, which makes it easier to adopt and apply data processing techniques. Otherwise, where the data is sampled with a non-uniform and uneven sampling interval, some more pre-processing has to be done in order to prepare it for further analysis, as demonstrated by \citet{Dalheim2020DataPrep}, who presented a detailed algorithm to deal with time vector jumps and to synchronize non-uniformly recorded data variables. The problem of synchronization can, alternatively, be looked at using the well-known dynamic time warping (DTW) technique, which is generally used for aligning the measurements taken by two sensors measuring the same or highly correlated features. In a different approach, \citet{virtanen2020scipy} demonstrated that the collected data can be down-sampled or up-sampled (resampling) to obtain a uniform and evenly sampled dataset.
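As a brief illustration of the resampling approach, assuming the raw data carries a datetime index (as in a typical pandas workflow), down-sampling to a uniform interval amounts to a single operation; the synthetic signal and variable name below are placeholders.
\begin{verbatim}
import numpy as np
import pandas as pd

# synthetic, unevenly sampled signal standing in for a recorded variable
rng = np.random.default_rng(1)
t = pd.to_datetime("2021-01-01") + pd.to_timedelta(
    np.sort(rng.uniform(0, 3600, 500)), unit="s")
df = pd.DataFrame({"shaft_rpm": 80 + rng.normal(0, 1, 500)}, index=t)

# down-sample to uniform 1-minute averages; up-sampling could instead
# use, e.g., df.resample("10s").interpolate()
df_uniform = df.resample("1min").mean()
\end{verbatim}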
\subsubsection{Inherently Faulty \& Incorrect Measurements} \label{sec:incorrMeasureInServData}
Some of the sensors onboard a ship can be inherently faulty or provide incorrect measurements due to unfavorable installation or operational conditions. Many of these faults can actually be fixed quite easily. For instance, \citet{Wahl2019} presented the case of a faulty installation of the wind anemometer onboard a ship, resulting in missing measurements in the head-wind condition, probably due to the presence of an obstacle right in front of the sensor. Such a fault is fairly simple to deal with, say, by fixing the installation of the sensor, and it is even possible to fix the already recorded dataset using the wind measurements from one of the publicly available weather hindcast datasets. Such an instance also reflects the importance of data exploration and validation for ship performance analysis. Unlike the above, the case of draft and speed-through-water measurement sensors is not as fortunate and easy to resolve.
The ship's draft is generally recorded using a pressure transducer installed onboard the ship. The pressure transducer measures the hydrostatic pressure acting on the bottom plate of the ship, which is further converted into the corresponding water level height or the draft measurement. When the ship starts to move and the layer of water in contact with the ship develops a relative velocity with respect to the ship, the total pressure at the ship's bottom reduces due to the non-zero negative hydrodynamic pressure and, therefore, further measurements taken by the draft sensor are incorrect. This is known as the Venturi effect. It may seem like a simple case, and one may argue that the measurements can be fixed by just adding the water level height equivalent to the hydrodynamic pressure, which may be calculated using the ship's speed-through-water. Here, it should be noted that, firstly, to accurately calculate the hydrodynamic pressure, one would need the localized relative velocity of the flow (and not the ship's speed-through-water), which is impractical to measure, and secondly, the speed-through-water measurements are also known to have several sources of inaccuracy. Alternatively, it may be possible to obtain the correct draft measurements from the ship's loading computer, which can calculate the draft and trim in real-time based on information such as the ship's lightweight, cargo weight and distribution, and ballast water loading configuration.
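To appreciate the magnitude of this Venturi effect, Bernoulli's principle provides a rough upper-bound estimate of the pressure-head deficit for a local flow speed $v$ (using the ship speed as a crude stand-in for the true local velocity, which is an illustrative assumption):
\begin{equation*}
\Delta h = \frac{v^2}{2g} \approx \frac{(7\,\text{m/s})^2}{2\times 9.81\,\text{m/s}^2} \approx 2.5\,\text{m},
\end{equation*}
i.e., at a speed of about 14 knots the draft sensor could under-read by a substantial fraction of the draft itself, which is why a simple speed-based correction is tempting but, for the reasons given above, unreliable.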
The state-of-the-art speed-through-water measurement device uses the Doppler acoustic speed log principle. Here, the relative speed of water around the hull (i.e., the speed-through-water) is measured by observing the shift in frequency (popularly known as the Doppler shift) of the ultrasound pulses emitted from the ship's hull, due to its motion. The ultrasonic pulses are reflected by the ocean bottom, impurities in the surrounding water, marine life, and even the liquid-liquid interface between density difference layers in the deep ocean. The speed of water surrounding the ship is influenced by the boundary layer around the hull, so it is required that only the ultrasonic pulses reflected by the particles outside the boundary layer are used to estimate the speed-through-water. Therefore, a minimum pulse travelling distance has to be prescribed for the sensor. If the prescribed distance is too large, or if the ship is sailing in shallow waters, the Doppler shift is calculated using the reflection from the ocean bottom, i.e., the sensor is in ground-tracking mode, and therefore, it would clearly record the ship's speed-over-ground instead of the speed-through-water. \citet{DALHEIM2021} presented a detailed account of the uncertainty in the speed-through-water measurements for a ship, commenting that the speed log sensors are considered to be among the most inaccurate ones onboard the ship.
It may also be possible to estimate the speed-through-water of a ship using the ship's speed-over-ground and the incident longitudinal water current speed. The speed-over-ground of a ship is measured using a GPS sensor, which is considered to be quite accurate, but unfortunately, the water current speed is seldom recorded onboard the ship. It is certainly possible to obtain the water current speed from a weather hindcast data source, but the hindcast measurements are not accurate enough to obtain a good estimate of the speed-through-water, as indicated by \citet{Antola2017}. It should also be noted that the temporal and spatial resolution of weather hindcast data is substantially coarser than the sampling interval of the data recorded onboard the ship. Moreover, the water current speed varies along the depth of the sea; therefore, the incident longitudinal water current speed must be calculated as an integral of the water current speed profile over the depth of the ship. Thus, in order to obtain accurate estimates of the speed-through-water, the water current speed has to be measured or estimated up to a certain depth of the sea with good enough accuracy, which is not possible with the current state-of-the-art.
\subsubsection{Outliers} \label{sec:outliers}
Another big challenge in data processing is detecting and handling outliers. As suggested by \citet{Olofsson2020}, it may be possible to categorize outliers into the following two broad categories: (a) contextual outliers, and (b) correlation-defying outliers\footnote{Called collective outliers by \citet{Olofsson2020}.}. \citet{Dalheim2020DataPrep} presented methods to detect and remove contextual outliers, further categorized as (i) obvious (or invalid) outliers, (ii) repeated values, (iii) drop-outs, and (iv) spikes. Contextual outliers are easily identifiable, as they either violate the known validity limits of one or more recorded variables (as seen in the case of obvious outliers and spikes) or present an easily identifiable but anomalous pattern (as seen in the case of repeated values and drop-outs).
The case of correlation-defying outliers is much more difficult to handle, as they can easily blend into the cleaned data pool. The two most popular methods which can be used to identify correlation-defying outliers are Principal Component Analysis (PCA) and autoencoders. Both these methods try to reconstruct the data samples after learning the correlation between the variables. It is quite obvious that a correlation-defying outlier would result in an abnormally high reconstruction error and, therefore, can be detected using such techniques. In a recent attempt, \citet{Thomas2021} demonstrated an ensemble method combining PCA and autoencoders, coupled with isolation forest, to detect such outliers.
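A minimal sketch of the PCA-based variant is shown below; the number of retained components and the percentile threshold are arbitrary choices that would need tuning for a real dataset.
\begin{verbatim}
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def pca_outlier_flags(X, n_components=3, pct=99.0):
    # flag correlation-defying outliers via PCA reconstruction error;
    # X is an (n_samples, n_features) array of numeric ship data
    Xs = StandardScaler().fit_transform(X)
    pca = PCA(n_components=n_components).fit(Xs)
    X_rec = pca.inverse_transform(pca.transform(Xs))
    err = np.linalg.norm(Xs - X_rec, axis=1)
    return err > np.percentile(err, pct)
\end{verbatim}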
\subsubsection{Time-Averaging Problem} \label{sec:timeAvgProb}
As aforementioned, the onboard recorded in-service data can be supplied as time-averaged values over a short period of time (generally up to around 15 minutes). Although the time-averaging method eliminates white noise and reduces the variability in the data samples, it introduces a new problem in the case of angular measurements. The angular measurements are generally recorded in the range of 0 to 360 degrees. When the measurement is around 0 or 360 degrees, the instantaneous measurements reported by the sensor will fluctuate in the vicinity of 0 and 360 degrees. For instance, assuming that the sensor reports a value of about 0 degrees for half of the averaging time and about 360 degrees for the remaining time, the time-averaged value recorded by the data acquisition (DAQ) system will be around 180 degrees, which is significantly incorrect. Most of the angular measurements recorded onboard a ship, like relative wind direction, ship heading, etc., are known to inherit this problem, and it should be noted that, unlike the example given here, the incorrect time-averaged angle can take any value between 0 and 360 degrees, depending on the instantaneous values over which the average is calculated.
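The artifact, and the circular (vector) averaging that would avoid it if applied to the raw samples before they are discarded, can be illustrated with a short Python sketch:
\begin{verbatim}
import numpy as np

def circular_mean_deg(angles_deg):
    # average the unit vectors instead of the raw angles,
    # avoiding the 0/360-degree wrap-around artifact
    rad = np.deg2rad(angles_deg)
    mean = np.arctan2(np.sin(rad).mean(), np.cos(rad).mean())
    return np.rad2deg(mean) % 360.0

samples = np.array([359.0, 1.0, 358.0, 2.0])  # wind direction near north
print(samples.mean())              # 180.0 -- the incorrect naive average
print(circular_mean_deg(samples))  # ~0 (mod 360) -- the circular average
\end{verbatim}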
Although it may be possible to fix these incorrect values using a carefully designed algorithm, there is no established method available at the moment. Thus, it is suggested to fix these measurements using an alternate source for the corresponding data variables. For example, the wind direction can be gathered easily from a weather hindcast data source and used to correct, or simply replace, the relative wind direction measurements recorded onboard the ship. The ship's heading, on the other hand, can be estimated using the latitude and longitude measurements from the GPS sensor.
\subsection{AIS Data}
AIS is an automatic tracking system that uses transceivers to help ships and maritime authorities identify and monitor ship movements. It is generally used as a tool for ship transportation services to prevent collisions during navigation. Under the SOLAS Convention, ships over 300 gross tons must be equipped with transponders capable of transmitting and receiving all AIS message types. AIS data is divided into dynamic (position, course, speed, etc.), static (ship name, dimensions, etc.), and voyage-related data (draft, destination, ETA, etc.). Dynamic data is automatically transmitted every 2-10 seconds, depending on the speed and course of the ship, and every 6 minutes if the ship is anchored. On the other hand, static and voyage-related data is provided by the ship's crew, and it is transmitted every 6 minutes regardless of the ship's movement state.
Since dynamic information is automatically updated based on sensor data, it is susceptible to faults and errors similar to those described in section \ref{sec:incorrMeasureInServData}. In addition, problems may occur in the process of collecting and transmitting data between AIS stations, as noted by \citet{weng2020exploring}. The AIS signal can also be influenced by external factors, such as weather conditions and the Earth's magnetic field, due to their interference with the very high frequency (VHF) equipment. Therefore, some of the AIS messages are lost or get mixed. Moreover, the receiving station has a short time slot during which the data must be received, and due to heavy traffic in the region, it may fail to receive the data from all the ships in that time. In some cases, small ships deliver inaccurate information due to incorrectly calibrated transmitters, as shown by \citet{weng2020exploring}. In a case study, \citet{harati2007automatic} observed that 2\% of the MMSI (Maritime Mobile Service Identity) information was incorrect and 30\% of the ships were not properly marked with the correct navigation status. In the case of ship dimensions, about 18\% of the information was found to be inaccurate. Therefore, before using raw AIS data for ship performance analysis, it is necessary to check key parameters such as GPS position, speed, and course, and the data identified as incorrect must be fixed.
\subsubsection{Irrational Speed Data}
The GPS speed (or speed-over-ground) measurements from AIS data may contain samples that show a sudden jump compared to adjacent samples or an excessively higher or lower value than the normal operating range. This type of inaccurate data can be identified through comparison with the location and speed data of adjacent samples. The distance covered by the ship at the corresponding speed during the time between two adjacent AIS messages is calculated, and the distance between the two actual coordinates is calculated using the Haversine formula (given by equation \ref{eq:havsineDistance}) to compare the two values. If the difference between the two values is negligible, the GPS speed can be considered normal; if not, it is recommended to replace it with the GPS speed value of an adjacent sample. It should be noted that if the time difference between the samples is too short, the deviation of the distance calculated through this method may be large. In such a case, it is necessary to consider the average trend over several samples. If there are no valid samples nearby or the GPS coordinate data is problematic, one can refer to the normal service speed according to the ship type, as shown in table \ref{tab:vParams}, or, if available, a more specific method such as a normalcy box (\citet{rhodes2005maritime,tu2017exploiting}), which defines the speed range of ships according to their geographic location, may be applied.
\begin{equation}\label{eq:havsineDistance}
{D = 2r\sin^{-1} \left(\sqrt{\sin^{2}\left(\frac{y_{i+1}-y_{i}}{2}\right)+\cos{\left(y_i\right)}\cos{\left(y_{i+1}\right)}\sin^{2}\left(\frac{x_{i+1}-x_{i}}{2}\right)}\right)}
\end{equation}
where $D$ is the distance between the two coordinates ($x_i$, $y_i$) and ($x_{i+1}$, $y_{i+1}$), $r$ is the radius of the Earth, and ($x_i$, $y_i$) are the longitude and latitude at timestamp $i$.
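A sketch of this consistency check is given below; the speed tolerance and the assumption of timestamps in seconds are illustrative choices.
\begin{verbatim}
import numpy as np

R_EARTH_KM = 6371.0

def haversine_km(lon1, lat1, lon2, lat2):
    # great-circle distance (Haversine formula); inputs in degrees
    lon1, lat1, lon2, lat2 = map(np.deg2rad, (lon1, lat1, lon2, lat2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * R_EARTH_KM * np.arcsin(np.sqrt(a))

def speed_is_consistent(lon1, lat1, t1, lon2, lat2, t2,
                        sog_knots, tol_knots=2.0):
    # compare the speed implied by two AIS fixes with the reported SOG
    hours = (t2 - t1) / 3600.0          # timestamps assumed in seconds
    implied = haversine_km(lon1, lat1, lon2, lat2) / 1.852 / hours
    return abs(implied - sog_knots) < tol_knots
\end{verbatim}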
\begin{table}[ht]
\caption{Typical service speed range of different ship types, given by \citet{solutions2018basic}.} \label{tab:vParams}
\centering
\begin{tabular}{l|l|l}
\hline
\multicolumn{1}{c|}{\textbf{Category}} & \multicolumn{1}{c|}{\textbf{Type}} & \multicolumn{1}{c}{\textbf{Service speed (knot)}}\\
\hline
Tanker & Crude oil carrier & 13-17\\
& Gas tanker/LNG carrier & 16-20\\
& Product & 13-16\\
& Chemical & 15-18\\
\hline
Bulk carrier & Ore carrier & 14-15\\
& Regular & 12-15\\
\hline
Container & Line carrier & 20-23\\
& Feeder & 18-21\\
\hline
General cargo & General cargo & 14-20\\
& Coaster & 13-16\\
\hline
Roll-on/roll-off cargo & Ro-Ro/Ro-Pax & 18-23\\
\hline
Passenger ship & Cruise ship & 20-23\\
& Ferry & 16-23\\
\hline
\end{tabular}
\end{table}
\subsubsection{Uncertainty due to Human Error}
AIS data, excluding dynamic information, is not automatically updated by the sensors but logged manually by the ship's crew, so there is a possibility of human error. This includes information such as the draft, navigation status, destination, and estimated time of arrival (ETA) of the ship. Although it is difficult to clearly distinguish incorrectly entered information, it is possible to indirectly determine whether the manual input values have been updated using the automatically logged dynamic information. Each number in the navigation status represents a ship activity, such as `under way using engine (0)', `at anchor (1)', and `moored (5)'. If this field is being updated normally, it should be `0' if the ship is in-trip and `5' if it is at berth. If the navigation status of the collected AIS data is `1' or `5' above a certain GPS speed (or speed-over-ground), or if the status is set to `0' even when the speed is 0 and the location is within the port, the AIS data has not been updated on time, and other manually entered information should also be questioned.
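Such plausibility rules are straightforward to encode; a sketch with assumed speed thresholds and an assumed in-port flag is shown below.
\begin{verbatim}
def nav_status_suspicious(nav_status, sog_knots, in_port,
                          moving_threshold_knots=1.0):
    # nav_status: 0 = under way using engine, 1 = at anchor, 5 = moored
    if nav_status in (1, 5) and sog_knots > moving_threshold_knots:
        return True   # reported stationary but clearly moving
    if nav_status == 0 and sog_knots <= moving_threshold_knots and in_port:
        return True   # reported under way but idle inside the port area
    return False
\end{verbatim}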
\subsection{Noon Report Data}
Ships of more than 500 gross tons engaged in international navigation are required to send a noon report to the company, which briefly records what happened on the ship from the previous noon to the present noon. The noon report must basically contain sufficient information regarding the location, course, speed, and the internal and external conditions affecting the vessel's voyage. Additionally, the shipping company collects information related to fuel consumption and remaining fuel onboard, propeller slip, average RPM, etc., as needed. Such information is often used as a ship management tool and as reference data for monitoring and evaluating the ship's performance, calculating energy efficiency operating indicators, and obtaining fuel and freshwater order information. Despite its customary use, the standardized information in the noon reports may not be sufficient to accurately assess the performance of the ship, due to several problems discussed as follows. This information is based on the average values from noon to noon; for an accurate ship performance analysis, higher frequency samples and additional data may be required.
\subsubsection{Uncertainties due to Averaging Measurements \& Human Error} \label{sec:noonReportsAvgProb}
Basically, the information reported through the noon reports is created based on the measurement values of the onboard sensors. Therefore, it may also involve the problem of inherently faulty sensors and incorrect measurements, as discussed in section \ref{sec:incorrMeasureInServData}. Apart from the problems caused by sensors, the noon report data may have problems caused by the use of 24-hour averaged values and human errors. The data collection interval is once a day and the average of the values recorded over 24 hours is reported; thus, significant inaccuracies may be included in the data. \citet{aldous2015uncertainty} performed a sensitivity analysis to assess the uncertainty due to the input data for ship performance analysis using continuously recorded in-service data and noon reports. It was observed that the uncertainty of the outcome was significantly sensitive to the number of samples in the dataset. In other words, such uncertainty can be mitigated through the use of data representing longer time-series, data collection with higher frequency, and data processing. These results were also confirmed by \citet{park2017comparative} and \citet{themelis2018comparative}. \citet{park2017comparative} demonstrated in a case study that the power consumption between the noon reports and the recorded sensor data differed by 6.2\% and 17.8\% in ballast and laden voyages, respectively.
When values are averaged over a long time period, as in the case of noon reports, variations due to acceleration, deceleration, and maneuvering cannot be captured. In particular, for ships that sail relatively short voyages, such as feeder ships and ferries, the frequent changes in operational state may render the data inappropriate for performance analysis. The information regarding the weather and sea states generally corresponds to the conditions right before the noon report is sent from the ship; therefore, it is not easy to account for changes in the performance of the ship due to the variation of weather conditions during the preceding 24 hours. In general, the information logged in the noon report is read off the onboard sensors by a crew member. Thus, the time at which the values are read may differ from day to day, and different sensors may be used for the values logged under the same field. In addition, an observed value may simply be entered incorrectly into the noon report. Hence, if the process of preparing the noon reports is not automated, there will always be a possibility of human errors in the data.
\section{Results: Data Processing Framework} \label{sec:results}
The results here are presented in the form of the developed data processing framework, which can be used to process raw data obtained from one of the above-mentioned data sources (section \ref{sec:dataSources}) for ship performance analysis. The framework is designed to resolve most of the problems cited in the previous section. Figure \ref{fig:flowDiag} shows the flow diagram of the data processing framework. The following sections briefly explain the consecutive processing steps of the given flow diagram. The user may not be able to carry out every step due to the unavailability of some information or features in the dataset; for example, without GPS data (latitude, longitude, and timestamp variables), it is not possible to interpolate weather hindcast data. In such a case, it is recommended to skip the corresponding step and continue with the next one.
The data processing framework is outlined in such a manner that, once implemented, it can be executed semi-automatically, i.e., with limited intervention from the user. This semi-automatic nature also results in fast data processing, which can be important for very large datasets. To obtain such an implementation, it is recommended to adopt best practices and optimized algorithms for each individual processing step according to the programming language in use. Furthermore, the reliability of the data processing activity is critical for obtaining good results. It is therefore important to validate the work done in each processing step by creating visualizations (or plots) and inspecting them for any undesired errors. The usual practice adopted here, while processing the data using the framework, is to create several such visualizations at the end of each processing step, like trip-wise time-series plots of data variables (explained later in section \ref{sec:divideIntoTrips}), and inspect them to validate the outcome.
\begin{figure}
\centering
\begin{tikzpicture}[font=\small,thick, node distance = 0.35cm]
\node[draw,
rounded rectangle,
minimum width = 2.5cm,
minimum height = 1cm
] (block1) {Raw Data};
\node[draw,
below=of block1,
minimum width=3.5cm,
minimum height=1cm,
align=center
] (block2) {Ensure Uniform \\ Time Steps};
\node[draw,
below=of block2,
minimum width=3.5cm,
minimum height=1cm
] (block3) {Divide into Trips};
\node[draw,
below=of block3,
minimum width=3.5cm,
minimum height=1cm,
align=center
] (block4) {Interpolate Hindcast \\ (Using GPS Data)};
\node[draw,
trapezium,
trapezium left angle = 65,
trapezium right angle = 115,
trapezium stretches,
left=of block4,
minimum width=3.5cm,
minimum height=1cm
] (block5) {Weather Hindcast};
\node[draw,
below=of block4,
minimum width=3.5cm,
minimum height=1cm
] (block6) {Derive New Features};
\node[draw,
diamond,
right=of block6,
minimum width=2.5cm,
inner sep=1,
align=center
] (block17) {Interpolation \\ Error?};
\node[draw,
below=of block6,
minimum width=3.5cm,
minimum height=1cm
] (block7) {Validation Checks};
\node[draw,
diamond,
below=of block7,
minimum width=2.5cm,
inner sep=1,
align=center
] (block8) {Data Processing \\ Errors Detected?};
\node[coordinate,right=1.8cm of block8] (block9) {};
\node[coordinate,right=1.6cm of block4] (block10) {};
\node[draw,
below=of block8,
minimum width=3.5cm,
minimum height=1cm
] (block11) {Fix Draft \& Trim};
\node[draw,
below=of block11,
minimum width=3.5cm,
minimum height=1cm,
align=center
] (block12) {Calculate Hydrostatics \\ (Displacement, WSA, etc.)};
\node[draw,
trapezium,
trapezium left angle = 65,
trapezium right angle = 115,
trapezium stretches,
left=of block12,
minimum width=3.5cm,
minimum height=1cm
] (block15) {Ship Particulars};
\node[draw,
below=of block12,
minimum width=3.5cm,
minimum height=1cm,
align=center
] (block13) {Calculate Resistance \\ Components};
\node[draw,
below=of block13,
minimum width=3.5cm,
minimum height=1cm,
align=center
] (block16) {Data Cleaning \& \\ Outlier Detection};
\node[draw,
rounded rectangle,
below=of block16,
minimum width = 2.5cm,
minimum height = 1cm,
inner sep=0.25cm
] (block14) {Processed Data};
\draw[-latex] (block1) edge (block2)
(block2) edge (block3)
(block3) edge (block4)
(block4) edge (block6)
(block6) edge (block7)
(block7) edge (block8)
(block8) edge node[anchor=east,pos=0.25,inner sep=2.5]{No} (block11)
(block11) edge (block12)
(block12) edge (block13)
(block13) edge (block16)
(block16) edge (block14);
\draw[-latex] (block5) edge (block4);
\draw[-latex] (block15) edge (block12);
\draw[-latex] (block8) -| (block9) node[anchor=south,pos=0.1,inner sep=2.5]{Yes}
(block9) -| (block17);
\draw[-latex] (block17) |- (block10)
(block10) |- (block4) node[anchor=south,pos=0.1,inner sep=2.5]{Yes};
\draw[-latex] (block17) -- (block6) node[anchor=south,pos=0.4,inner sep=2.5]{No};
\end{tikzpicture}
\caption{Data processing framework flow diagram.} \label{fig:flowDiag}
\end{figure}
\subsection{Ensure Uniform Time Steps}
Ensuring uniform and evenly-spaced samples not only makes it easier to apply time-gradient-based data processing or analysis steps but also helps avoid misunderstandings while visualizing the data, by clearly showing gaps in the time-series plots (when plotted against sample numbers) and removing abrupt jumps in the data values. Depending on the data acquisition (DAQ) system, the in-service data recorded onboard a ship generally has a uniform and evenly-spaced sampling interval. Nevertheless, it is observed that a sub-dataset extracted from the main database may contain several missing time steps (or timestamps). In such a case, it is recommended to check for missing timestamps by simply calculating the gradient of the timestamps and, for each missing timestamp, to add an empty row containing only the missing timestamp value. Finally, the dataset should be sorted according to the timestamps, resulting in a uniform and evenly-spaced list of samples.
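A minimal sketch of this step, assuming the data is held in a pandas DataFrame with a DatetimeIndex and a nominal sampling interval of 15 seconds (an assumption), is given below.
\begin{verbatim}
import pandas as pd

def make_uniform(df: pd.DataFrame,
                 interval: str = "15s") -> pd.DataFrame:
    """Insert empty (NaN) rows at missing timestamps so that the
    series becomes uniformly sampled and sorted in time."""
    full_index = pd.date_range(df.index.min(), df.index.max(),
                               freq=interval)
    # Reindexing adds NaN-filled rows at the missing timestamps
    # and returns the data ordered by the new index.
    return df.reindex(full_index)
\end{verbatim}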
A similar procedure can be adopted for a noon report dataset. The noon reports are generally recorded every 24 hours, but the interval may sometimes be more or less than 24 hours if the vessel's local time zone is adjusted, especially on the day of arrival or departure. The same procedure may not be feasible in the case of AIS data, as the samples here are, in general, sporadically distributed. Here, the samples are collected at different frequencies depending on the ship's moving state, surrounding environment, traffic, and the type of AIS receiving station (land-based or satellite). It is observed that the data is collected in short and continuous sections of the time-series, leaving some large gaps between samples, as shown in figure \ref{fig:resampleSOG}. It is therefore recommended to first resample the short and continuous sections of AIS data to a uniform sampling interval through data resampling techniques, i.e., up-sampling or down-sampling (as demonstrated by \citet{virtanen2020scipy}), and then fill the remaining large gaps with empty rows.
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\linewidth]{Figures/resample.png}
\caption{Down-sampling the collected AIS data to a 15-minute interval.} \label{fig:resampleSOG}
\end{figure}
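With pandas, for instance, the down-sampling shown in figure \ref{fig:resampleSOG} reduces to a one-liner, reusing the \texttt{ais} DataFrame from the sketch above (the 15-minute bin width is an assumption); bins without samples automatically become NaN rows, i.e., the large gaps are preserved.
\begin{verbatim}
# Down-sample the short continuous AIS sections to 15 min bins;
# bins without samples (the large gaps) remain as NaN rows.
ais_15min = ais.resample("15min").mean(numeric_only=True)
\end{verbatim}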
\subsection{Divide Into Trips} \label{sec:divideIntoTrips}
Using conventional tools, data visualization becomes a challenge if the number of samples in the dataset is very large; it may simply not be practical to plot the whole time-series in a single plot. Moreover, dividing the time-series into individual trips helps discretize it into sensible sections, which can be treated individually for further data processing and analysis. Plotting an individual trip also gives a complete overview of a port-to-port journey of the ship. Dividing the data into trips and at-berth legs further makes the subsequent data processing computationally less expensive, as it may be possible to ignore a large number of samples (in further steps) where the ship is not in a trip. For such samples, it may not be necessary to interpolate hindcast data, calculate hydrostatics, calculate resistance components, etc. Lastly, identifying individual trips also makes the draft and trim correction step easier.
Dividing data into trips is substantially easier for noon reports and AIS data, as they are generally supplied with a source and/or destination port name. In the case of in-service data, such information may not be available. If the GPS data (latitudes and longitudes) is available, it may then be possible to simply plot the samples on the world map and obtain individual trips by looking at the port calls. Alternatively, if the in-service data is supplied with a `State' variable\footnote{Generally available for ships equipped with Marorka systems (www.marorka.com).} (mentioned by \citet{Gupta2019}), indicating the propulsive state of the ship, like `Sea Passage', `At Berth', `Maneuvering', etc., it is recommended to find the continuous legs of the `At Berth' state and enumerate the gaps between these legs with trip numbers, containing the rest of the states, as shown in figure \ref{fig:splitTSviaState}. Otherwise, it is recommended to use the shaft rpm and GPS speed (or speed-over-ground) time-series to identify the start and end of each port-to-port trip, as sketched in the code example after figure \ref{fig:splitTS}. Here, a threshold value can be adopted for the shaft rpm and GPS speed: all samples above these threshold values (either or both) are considered to be in-trip samples, as shown in figure \ref{fig:splitTS}. Continuous legs of such in-trip samples can then simply be identified and enumerated. It may also be advisable to append a few samples before and after each identified trip to obtain a proper trip, starting from zero and ending at zero speed; this accounts for the noise in the shaft rpm and GPS speed variables when the ship is actually static. Finally, if the GPS data is available, further adjustments can be made by looking at the port calls on the world map plotted with the GPS data.
\begin{figure}[ht]
\centering
\begin{subfigure}{0.48\linewidth}
\includegraphics[width=\linewidth]{Figures/Split_TS_J3.png}
\caption{Splitting time-series into trips using the `State' variable.} \label{fig:splitTSviaState}
\end{subfigure}
\begin{subfigure}{0.48\linewidth}
\includegraphics[width=\linewidth]{Figures/Static_Indices_J3.png}
\caption{Splitting time-series into trips using threshold values (indicated by dashed red lines) for shaft rpm (10 rpm) and GPS speed (3 knots) variables.} \label{fig:splitTS}
\end{subfigure}
\caption{Splitting time-series into trips.}
\end{figure}
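The threshold-based segmentation can be sketched as follows, assuming a pandas DataFrame with columns \texttt{shaft\_rpm} and \texttt{sog} (the column names are assumptions); appending margin samples and GPS-based adjustments are left out for brevity.
\begin{verbatim}
import numpy as np
import pandas as pd

RPM_THRESHOLD = 10.0  # shaft rpm threshold (cf. figure above)
SOG_THRESHOLD = 3.0   # GPS speed threshold in knots

def enumerate_trips(df: pd.DataFrame) -> pd.Series:
    """Assign a trip number to every in-trip sample and NaN to
    the remaining (static / at-berth) samples."""
    in_trip = (df["shaft_rpm"] > RPM_THRESHOLD) \
        | (df["sog"] > SOG_THRESHOLD)
    # A new trip starts wherever in_trip flips from False to True.
    trip_id = (in_trip & ~in_trip.shift(fill_value=False)).cumsum()
    return trip_id.where(in_trip, np.nan)
\end{verbatim}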
\subsection{Interpolate Hindcast \& GPS Position Correction} \label{sec:interpolateHindcast}
Even if the raw data contains information regarding the state of the weather for each data sample, it may be a good idea to interpolate weather hindcast (or metocean) data available from one of the well-established sources. The interpolated hindcast data not only provides a quantitative measure of the weather conditions (and, consequently, the environmental loads) experienced by the ship, but also helps carry out some important validation checks (discussed later in section \ref{sec:resultsValChecks}). In order to interpolate hindcast data, the information regarding the location (latitude and longitude) and recording timestamp must be available in the ship's dataset. For ship performance analysis, the aim should be to gather, at a minimum, the information regarding the three main environmental load factors, i.e., wind, waves, and sea currents, from the weather hindcast sources. For a further detailed analysis, it may also be a good idea to obtain additional variables, like sea water temperature (both at the surface and its gradient along the depth of the ship), salinity, etc.
Before interpolating the weather hindcast data to the ship's location and timestamps, it is recommended to ensure that the available GPS (or navigation) data is validated and, if possible, corrected for errors. If the GPS data is inaccurate, weather information at the wrong location is obtained, resulting in incorrect values for further analysis. For instance, the ship's original trajectory obtained from the GPS data, presented in figure \ref{fig:gps_outlier}, shows the ship proceeding in a certain direction while occasionally jumping to an off-route location. The ship, of course, may have gone off-route as shown here, but referring to the GPS speed and heading of the ship at the corresponding times, shown in figure \ref{fig:gps_condition}, it is obvious that the navigation data is incorrect. Such an irrational position change can be detected through the two-stage steady-state (or stationarity) filter suggested by \citet{Gupta2021}, based on the method developed by \citet{Dalheim2020}; a simplified sketch of its first stage is given after figure \ref{fig:gps_condition}. The first stage of the filter uses a sliding window to remove unsteady samples by performing a t-test on the slope of the data values, while the second stage performs an additional gradient check on the samples failing the first stage in order to retain misidentified samples. The `irrational position' markers in figure \ref{fig:gps_outlier} show the coordinates identified as unsteady when the above two-stage filter is applied to the longitude and latitude time-series. The filtered trajectory is obtained after removing the samples with `irrational position' from the original data.
\begin{figure}[ht]
\centering
\begin{subfigure}{0.48\linewidth}
\includegraphics[width=\linewidth]{Figures/GPS_outlier.png}
\caption{Original trajectory and filtered trajectory with irrational GPS position.} \label{fig:gps_outlier}
\end{subfigure}
\begin{subfigure}{0.48\linewidth}
\includegraphics[width=\linewidth]{Figures/GPS_condition.png}
\caption{Trends of GPS speed, heading, and position of the ship according to the corresponding period.} \label{fig:gps_condition}
\end{subfigure}
\caption{GPS position cleaning using the steady-state detection algorithm.}
\end{figure}
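As a rough illustration of the first stage only (the gradient-based second stage is omitted), the sliding-window slope t-test may be sketched as follows; the window length and significance level are assumptions.
\begin{verbatim}
import numpy as np
from scipy import stats

def unsteady_mask(y: np.ndarray, window: int = 21,
                  alpha: float = 0.05) -> np.ndarray:
    """Mark a sample as unsteady when the slope of a local linear
    fit around it differs significantly from zero (t-test). Only
    a simplified first stage of the two-stage filter."""
    half = window // 2
    x = np.arange(window)
    mask = np.zeros(len(y), dtype=bool)
    for i in range(half, len(y) - half):
        res = stats.linregress(x, y[i - half:i + half + 1])
        mask[i] = res.pvalue < alpha  # H0: slope == 0
    return mask

# Applying the mask to the latitude and longitude series flags
# the 'irrational position' samples shown in the figure above.
\end{verbatim}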
The hindcast data sources generally allow downloading a subset of the variables, timestamps, and a sub-grid of latitudes and longitudes, i.e., the geographical location. Depending on the hindcast source, the datasets can be downloaded manually (by filling a form), using an automated API script, or even by directly accessing the provider's ftp servers. It may also be possible to select the temporal and spatial resolution of the variables being downloaded. In some cases, the hindcast web servers allow users to send a single query, in terms of location, timestamp, and list of variables, to extract the required data for an individual sample. However, every query received by these servers is generally queued for processing, causing substantially long waiting times, as these servers face a good amount of traffic from all over the world. Thus, it is recommended to simply download the required subset of data to a local machine for faster interpolation.
Once the hindcast data files are available offline, the main task at hand is to understand the cryptic (but highly efficient) data packaging format. Nowadays, the two most popular formats for such data files are GRIdded Binary data (GRIB) and NetCDF. GRIB (available as GRIB1 or GRIB2) is the international standard accepted by the World Meteorological Organization (WMO), but due to some compatibility issues with Windows operating systems, it may be preferable to use the NetCDF format.
Finally, a step-by-step interpolation has to be carried out for each data sample from the ship's dataset. Algorithm \ref{algo:hindcastInterp} shows a simple procedure for an n-th order (in time) interpolation scheme. Here, the spatial and temporal interpolations are performed in steps \ref{algoStep:spatialInterp} and \ref{algoStep:temporalInterp}, respectively. For a simple and reliable procedure, it is recommended to perform the spatial interpolation using a grid of latitudes and longitudes around the ship's location, after fitting a linear or non-linear 2D surface over the hindcast grid. It may be best to use a linear surface here since, firstly, the hindcast data may not be accurate enough for a higher-order interpolation to provide any better estimates, and secondly, in some cases, higher-order interpolation may result in highly inaccurate estimates due to the waviness of the over-fitted non-linear surface. Similar arguments apply to the temporal interpolation, and therefore a linear interpolation in time can also be considered acceptable. The advantage of using the given algorithm is that its interpolation steps can easily be validated by plotting contours (for the spatial interpolation) and time-series (for the temporal interpolation).
\begin{algorithm}
\caption{A simple algorithm for n-th order interpolation of weather hindcast data variables.}\label{algo:hindcastInterp}
\begin{algorithmic}[1]
\State $wData \gets $ weather hindcast data
\State $x \gets $ data variables to interpolate from hindcast
\State $wT \gets $ timestamps in $wData$
\ForAll{timestamps in ship's dataset}
\State $t \gets $ current ship time stamp
\State $loc \gets $ current ship location (latitude \& longitude)
\State $i \gets n+1$ indices of $wT$ around $t$
\ForAll{$x$}
\ForAll{$i$}
\State $x[i] \gets $ 2D spatial interpolation at $loc$ using $wData[x][i, :, :]$ \label{algoStep:spatialInterp}
\EndFor
\State $X \gets $ n-th order temporal interpolation at $t$ using $x[i]$ \label{algoStep:temporalInterp}
\EndFor
\EndFor
\end{algorithmic}
\end{algorithm}
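A minimal Python rendering of algorithm \ref{algo:hindcastInterp} for the linear case ($n = 1$) might look as follows; it assumes the hindcast variable is a numpy array of shape (time, lat, lon) with numeric timestamps and ascending coordinate axes, and it omits edge cases such as samples outside the hindcast time range.
\begin{verbatim}
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def interp_hindcast(w_time, w_lat, w_lon, w_var,
                    s_time, s_lat, s_lon):
    """Linear-in-space, linear-in-time interpolation of one
    hindcast variable w_var (time x lat x lon) to the ship's
    timestamps and positions."""
    out = np.empty(len(s_time))
    for k, (t, la, lo) in enumerate(zip(s_time, s_lat, s_lon)):
        i = np.searchsorted(w_time, t) - 1  # w_time[i] <= t
        vals = []
        for j in (i, i + 1):
            # Spatial interpolation step: 2D linear interpolation
            # at the ship's location for one hindcast time slice.
            f = RegularGridInterpolator((w_lat, w_lon), w_var[j])
            vals.append(f([[la, lo]])[0])
        # Temporal interpolation step: linear blend of the two
        # bracketing time slices.
        w = (t - w_time[i]) / (w_time[i + 1] - w_time[i])
        out[k] = (1 - w) * vals[0] + w * vals[1]
    return out
\end{verbatim}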
An important feature of hindcast datasets is the masking of invalid values. For instance, the significant wave height should only be predicted by the hindcast model for grid nodes which fall in the sea; requesting the value of such a variable on land should result in an invalid value. Such invalid values (or nodes) are by default masked in the downloaded hindcast data files, presumably for efficient storage of the data. These masked nodes may be filled with zeros before carrying out the spatial interpolation in step \ref{algoStep:spatialInterp}, as one or more of these nodes may be contributing to the interpolation. Alternatively, if a particular masked node is contributing to the interpolation, it can be set to the mean of the other nodes surrounding the point of interpolation, as suggested by \citet{Ejdfors2019}. It is argued by \citet{Ejdfors2019} that this helps avoid artificially low (zero) values during the interpolation, but if the grid resolution is fine enough, it is expected that the calculated mean (of the unmasked surrounding nodes) would also not be much higher than zero.
\subsection{Derive New Features}
Interpolating the weather hindcast variables to the ship's location at a given time provides the hindcast variables in the global (or the hindcast model's) reference frame. For further analysis, it may be appropriate to translate these variables to the ship's frame of reference, and furthermore, it may be desired to calculate some new variables which could be more relevant for the analysis or could help validate the assimilated (ship and hindcast) dataset. The wind and sea current variables, obtained from the hindcast source and the ship's dataset, can be resolved into longitudinal and transverse speed components for validation and further analysis. Unfortunately, the wave load variables cannot be resolved in a similar manner, but the mean wave direction should be translated into the relative mean wave direction (relative to the ship's heading or course).
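A sketch of such a feature derivation is given below: it resolves hindcast wind, given as east/north components (e.g., \texttt{u10}/\texttt{v10}), into longitudinal and transverse components relative to the moving ship. The sign conventions (bow-positive longitudinal, starboard-positive transverse, heading in degrees clockwise from north) are assumptions that must be matched to the data at hand.
\begin{verbatim}
import numpy as np

def relative_wind(u10, v10, sog, heading_deg):
    """Resolve hindcast wind (east, north components) into
    longitudinal and transverse components relative to the ship."""
    hdg = np.deg2rad(np.asarray(heading_deg))
    # Ship velocity (east, north) from speed-over-ground/heading.
    ship_e, ship_n = sog * np.sin(hdg), sog * np.cos(hdg)
    # Wind relative to the moving ship.
    rel_e, rel_n = u10 - ship_e, v10 - ship_n
    # Project onto the ship's longitudinal and transverse axes.
    longitudinal = rel_e * np.sin(hdg) + rel_n * np.cos(hdg)
    transverse = rel_e * np.cos(hdg) - rel_n * np.sin(hdg)
    return longitudinal, transverse
\end{verbatim}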
\subsection{Validation Checks} \label{sec:resultsValChecks}
Although it is recommended to validate each processing step by visualizing (or plotting) the task being done, it may be a good idea to take an intermediate pause and perform all possible validation checks. These validation checks not only help assess the dataset from a reliability point of view but can also be used to understand the correlation between various features. The validation checks can be done top-down, starting from the most critical feature down to the least critical one. As explained in section \ref{sec:bestPractices}, the shaft power measurements can be validated against the shaft rpm and shaft torque measurements, if available; otherwise, simply plotting the shaft rpm against the shaft power can also provide good insight into the quality of the data. For a better assessment, it is suggested to visualize the shaft rpm vs shaft power overlaid with the engine operational envelope and propeller curves, as presented by \citet{Liu2020} (in figure 11). Any sample falling outside the shaft power overload envelope (especially at high shaft rpm) should be removed from the analysis, as it likely contains measurement errors. It may also be possible to make corrections if the shaft power data appears shifted (up or down) with respect to the propeller curves due to sensor bias.
The quality of the speed-through-water measurements can be assessed by validating them against an estimate obtained as the difference between the speed-over-ground and the longitudinal current speed. It should be kept in mind that the two values may not match very well due to the several problems cited in section \ref{sec:incorrMeasureInServData}. Visualizing the speed-through-water vs shaft power along with all the available estimates of the speed-power calm-water curve is also an important validation step (shown in figure \ref{fig:speedVsPowerWSPCurves}). Here, the majority of the measurement data should accumulate around these curves. In case of disparity between the curves, the curve obtained through the sea trial of the actual ship may take precedence.
\begin{figure}[ht]
\centering
\includegraphics[width=0.6\linewidth]{Figures/Log_speed_vs_Power_0_J3.png}
\caption{Speed-through-water (log speed) vs shaft power with various estimates of speed-power calm-water curves.} \label{fig:speedVsPowerWSPCurves}
\end{figure}
The interpolated weather hindcast data variables must also be validated against the measurements taken onboard the ship. This is quite critical, as the sign and direction conventions assumed by the hindcast models and the ship's sensors (or data acquisition system) are probably not the same, which may cause mistakes during the interpolation step. Moreover, most ships are equipped with anemometers that can measure the actual or the relative wind speed and direction, and these two modes can be switched through a simple manipulation by the crew onboard. It is possible that such a mode change occurred during the data recording period, resulting in errors in the recorded data. In addition, there may be a difference between the reference height of the wind hindcast data and the vertical position of the installed anemometer, which may lead to somewhat different results even at the same location at sea. The wind speed at the reference height (${V_{WT}}_{ref}$) can be estimated from the anemometer-recorded wind speed ($V_{WT}$), assuming a wind speed profile, as follows (recommended by \citet{ITTC2017}):
\begin{equation}\label{eq:referenceHeight}
{V_{WT}}_{ref} = V_{WT}\left(\frac{Z_{ref}}{Z_{a}}\right)^{\frac{1}{9}}
\end{equation}
where $Z_{ref}$ is the reference height above sea level and $Z_a$ is the height of the anemometer.
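As a small helper, equation \eqref{eq:referenceHeight} translates directly into code:
\begin{verbatim}
def wind_at_reference_height(v_wt, z_a, z_ref):
    """Correct anemometer wind speed v_wt measured at height z_a
    to reference height z_ref via the 1/9-power-law profile."""
    return v_wt * (z_ref / z_a) ** (1.0 / 9.0)
\end{verbatim}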
Finally, these wind measurements can be translated into the longitudinal and transverse relative components. The obtained transverse relative wind speed can be validated against the transverse wind speed obtained from the hindcast source, as they are essentially the same quantity. Similarly, the difference between the longitudinal relative wind speed and the speed-over-ground of the ship can be validated against the longitudinal wind speed from the hindcast, as shown in figure \ref{fig:longWindSpeedValidation}. In the case of time-averaged in-service data, the problem of faulty averaging of angular measurements near 0 or 360 degrees (i.e., the angular limits), explained in section \ref{sec:timeAvgProb}, must also be checked, and appropriate corrective measures should be taken. From figure \ref{fig:longWindSpeedValidation}, it can clearly be seen that the time-averaging problem (in the relative wind direction) causes the longitudinal wind speed (estimated using the ship data) to jump from positive to negative, resulting in a mismatch with the corresponding hindcast values. In such a case, it is recommended to either fix these faulty measurements, which may be difficult as there is no proven way to do so, or simply use the hindcast values for further analysis.
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\linewidth]{Figures/LongWindSpeed_J3.png}
\caption{Validating longitudinal wind speed obtained using the ship data against the values obtained from the hindcast. The time-averaging problem with angular measurements around 0 or 360 degrees (explained in section \ref{sec:timeAvgProb}) is clearly visible here.} \label{fig:longWindSpeedValidation}
\end{figure}
As discussed for the noon reports in section \ref{sec:noonReportsAvgProb}, the weather information there generally refers to the state of the weather at the time the report is logged, which is probably not the average state from noon to noon. Furthermore, the wind loads are observed on the Beaufort scale; therefore, the deviation may be somewhat large when converted to a velocity scale. In this case, it is recommended to use the daily average values obtained from the weather hindcast data over the travel region rather than the noon report values.
\subsection{Data Processing Errors}
The validation step is very critical for finding any processing mistakes or inherent problems with the dataset, as demonstrated in the previous section. Such problems or mistakes, if detected, must be corrected or amended before moving forward with the processing and analysis. The main mistakes found at this step are generally either interpolation mistakes or incorrect formulations of the newly derived features. These mistakes should be rectified accordingly, as shown in the flow diagram (figure \ref{fig:flowDiag}).
\subsection{Fix Draft \& Trim} \label{sec:fixDraft}
The draft measurements recorded onboard the ship are often found to be incorrect due to the Venturi effect, explained briefly in section \ref{sec:incorrMeasureInServData}. The Venturi effect causes the draft measurements to drop to a lower value due to a non-zero negative dynamic pressure as soon as the ship develops a relative velocity with respect to the water around the hull. Thus, the simplest way to fix these incorrect measurements is to interpolate the draft during a voyage using the draft measured just before and after the voyage. Such a simple solution provides good results for cases where the draft of the ship remains essentially unchanged during the voyage, except for the reduction of draft due to consumed fuel, as shown in figure \ref{fig:simpleDraftCorr}.
\begin{figure}[ht]
\centering
\begin{subfigure}{0.48\linewidth}
\includegraphics[width=\linewidth]{Figures/Trip_014.png}
\caption{Simple draft correction.} \label{fig:simpleDraftCorr}
\end{subfigure}
\begin{subfigure}{0.48\linewidth}
\includegraphics[width=\linewidth]{Figures/Trip_TS_033_Corr_J3.png}
\caption{Complex draft correction.} \label{fig:complexDraftCorr}
\end{subfigure}
\caption{Correcting in-service measured draft.}
\end{figure}
In a more complex case, where the draft of the ship is changed in the middle of the voyage while the ship is still moving, i.e., ballasting operations or trim adjustments are conducted during transit, the simple draft interpolation results in corrections which can deviate substantially from the actual draft of the vessel. As shown in figure \ref{fig:complexDraftCorr}, the fore draft is seen to drop and the aft draft to increase in the middle of the voyage without much change in the vessel speed, indicating trim adjustments during transit. In this case, a more complex correction is applied which takes into account the change in draft during transit. First, a draft change operation is identified (marked by green and red vertical lines in figure \ref{fig:complexDraftCorr}); then, the difference between the measurements before and after the operation is calculated by averaging over a number of samples. Finally, a ramp is created between the start of the draft change operation (green line) and its end (red line), with a slope given by the difference between the draft measurements before and after the operation. The draft change operation can be identified either manually, by looking at the time-series plots, or using the steady-state (or stationarity) filter developed by \citet{Dalheim2020}.
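A minimal sketch of the ramp correction (the sample indices of the identified operation and the averaging window are assumptions) could read:
\begin{verbatim}
import numpy as np

def ramp_correct(draft, start, end, n_avg=30):
    """Replace draft readings between sample indices start/end
    (the identified draft change operation) by a linear ramp
    between the average levels before and after it."""
    d = np.asarray(draft, dtype=float).copy()
    before = np.nanmean(d[max(0, start - n_avg):start])
    after = np.nanmean(d[end:end + n_avg])
    d[start:end] = np.linspace(before, after, end - start)
    return d
\end{verbatim}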
In the case of AIS data, \citet{bailey2008training} reported that 31\% of the investigated AIS messages contained obviously erroneous draft information. The draft information in AIS data generally corresponds to the condition of the ship when arriving at or departing from a port, and changes due to fuel consumption and ballast adjustments onboard are rarely updated. Since the draft obtained from AIS data as well as from noon reports has a long update cycle and is entered manually, it is practically difficult to fix the draft values as precisely as in the case of in-service data. However, by comparing the obtained draft with a reference value, it may be possible to gauge whether the obtained draft is, in fact, correct. If the obtained draft deviates excessively from the reference, the corresponding data samples can be removed from further analysis, or the obtained draft value can be replaced with a more appropriate one. Table \ref{tab:draftRatio} shows the average draft ratio, i.e., the ratio of the actual draft ($T_c$) to the design draft ($T_d$), for various ship types from 2013 to 2015, as investigated by \citet{olmer2017greenhouse}. As summarized in the table, the draft ratio varies depending on the ship type and the voyage type. By using these values as the above-mentioned reference, the draft obtained from AIS data and noon reports can be roughly checked and corrected.
\begin{table}[ht]
\caption{Average draft ratio ($T_c/T_d$) for different ship types. $T_c$ = actual draft during a voyage; $T_d$ = design draft of the ship.} \label{tab:draftRatio}
\centering
\begin{tabular}{l|c|c}
\hline
\multicolumn{1}{c|}{\textbf{Ship types}} & \multicolumn{1}{c|}{\textbf{Ballast Voyage}} & \multicolumn{1}{c}{\textbf{Laden Voyage}}\\
\hline
Liquefied gas tanker & 0.67 & 0.89\\
Chemical tanker & 0.66 & 0.88\\
Oil tanker & 0.60 & 0.89\\
Bulk carrier & 0.58 & 0.91\\
General cargo & 0.65 & 0.89\\
\hline
\multicolumn{3}{c}{\textit{The following ship types do not generally have ballast-only voyages.}} \\
\hline
Container & \multicolumn{2}{c}{0.82}\\
Ro-Ro & \multicolumn{2}{c}{0.87}\\
Cruise & \multicolumn{2}{c}{0.98}\\
Ferry pax & \multicolumn{2}{c}{0.90}\\
Ferry ro-pax & \multicolumn{2}{c}{0.93}\\
\hline
\end{tabular}
\end{table}
\subsection{Calculate Hydrostatics}
Depending on the type of performance analysis, it may be necessary to have features like displacement, wetted surface area (WSA), etc. in the dataset, as they are more relevant from a hydrodynamic point of view. Moreover, most of the empirical or physics-based methods for resistance calculations (to be done in the next step) require these features. Unfortunately, these features cannot be directly recorded onboard the ship, but it is fairly convenient to estimate them using the ship's hydrostatic table or hull form (or offset table) for the corresponding mean draft and trim of each data sample. Here, it is recommended to use the corrected draft and trim values obtained in the previous step. If the detailed hull form is not available, the wetted surface area can also be estimated using the empirical formulas shown in table \ref{tab:wsaParams}; a small example follows the tables below. The displacement at design draft, on the other hand, can be estimated using the ship particulars and the typical range of the block coefficient ($C_B$), presented in table \ref{tab:cbParams}.
\begin{table}[ht]
\caption{Estimation formulas for wetted surface area of different ship types.} \label{tab:wsaParams}
\centering
\begin{tabular}{l|l|l}
\hline
\multicolumn{1}{c|}{\textbf{Category}} & \multicolumn{1}{c|}{\textbf{Formula}} & \multicolumn{1}{c}{\textbf{Reference}}\\
\hline
Tanker/Bulk carrier & $WSA = 0.99\cdot(\frac{\nabla}{T}+1.9\cdot L_{WL}\cdot T)$ & \citet{Kristensen2017} \\
Container & $WSA = 0.995\cdot(\frac{\nabla}{T}+1.9\cdot L_{WL}\cdot T)$ & \citet{Kristensen2017} \\
Other (General) & $WSA = 1.025\cdot(\frac{\nabla}{T}+1.7\cdot L_{PP}\cdot T)$ & \citet{molland2011maritime} \\
\hline
\end{tabular}
\end{table}
\begin{table}[ht]
\caption{Typical block coefficient ($C_B$) range at design draft for different ship types, given by \citet{solutions2018basic}.} \label{tab:cbParams}
\centering
\begin{tabular}{l|l|c}
\hline
\multicolumn{1}{c|}{\textbf{Category}} & \multicolumn{1}{c|}{\textbf{Type}} & \multicolumn{1}{c}{\textbf{Block coefficient ($C_B$)}}\\
\hline
Tanker & Crude oil carrier & 0.78-0.83\\
& Gas tanker/LNG carrier & 0.65-0.75\\
& Product & 0.75-0.80\\
& Chemical & 0.70-0.78\\
\hline
Bulk carrier & Ore carrier & 0.80-0.85\\
& Regular & 0.75-0.85\\
\hline
Container & Line carrier & 0.62-0.72\\
& Feeder & 0.60-0.70\\
\hline
General cargo & General cargo/Coaster & 0.70-0.85\\
\hline
Roll-on/roll-off cargo & Ro-Ro cargo & 0.55-0.70\\
& Ro-pax & 0.50-0.70\\
\hline
Passenger ship & Cruise ship & 0.60-0.70\\
& Ferry & 0.50-0.70\\
\hline
\end{tabular}
\end{table}
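As an illustration, combining the two tables gives a rough estimate of the hydrostatic features for, e.g., a bulk carrier; the chosen block coefficient, the example particulars, and the SI units are assumptions.
\begin{verbatim}
def wsa_tanker_bulk(volume, draft, lwl):
    """Wetted surface area [m^2] for tankers/bulk carriers
    (formula from the table above); volume is the displaced
    volume [m^3], draft and lwl in [m]."""
    return 0.99 * (volume / draft + 1.9 * lwl * draft)

# Rough displacement at design draft from main particulars and a
# typical block coefficient (e.g. C_B = 0.85 for a bulk carrier):
lpp, beam, t_design = 180.0, 30.0, 11.0  # assumed particulars [m]
volume = 0.85 * lpp * beam * t_design    # displaced volume [m^3]
print(wsa_tanker_bulk(volume, t_design, lwl=185.0))
\end{verbatim}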
\subsection{Calculate Resistance Components}
There are several components of the ship's total resistance, and several methods to estimate each of them. The three main resistance components, which generally constitute the majority of the ship's total resistance, are the calm-water, added wind, and added wave resistance. It is possible to further divide the calm-water resistance into sub-components, namely, skin friction and residual resistance. The total calm-water resistance can be calculated using one of the many well-known empirical methods, like Guldhammer and Harvald (\citet{Guldhammer1970}), updated Guldhammer and Harvald (\citet{Kristensen2017}), Hollenbach (\citet{Hollenbach1998}), Holtrop and Mennen (\citet{Holtrop1982}), etc. These empirical methods are developed using data from numerous model test results of different types of ships, and each is proven to fit well for several different ship types. The latter makes choosing the right method for a given ship quite complicated.
The easiest way to choose the right calm-water resistance estimation method is to calculate the calm-water resistance with each of these methods and compare it with the corresponding data obtained for the given ship. The calm-water data for a given ship can be obtained from model tests, sea trials, or even by filtering the operational data, obtained from one of the sources discussed here (section \ref{sec:dataSources}), for near-calm-water conditions. The usual practice is to use the sea trial data, as it is obtained and corrected for near-calm-water conditions and does not suffer from the scale effects seen in model test results. However, sea trials are sometimes conducted only in the high range of speed and at ballast displacement (as shown in figure \ref{fig:speedVsPowerWSPCurves}). Thus, it is recommended to use the near-calm-water filtered (and corrected) operational data to choose the right method, so that a good fit can be ensured for the complete range of speed and displacement.
According to \citet{ITTC2017}, the increase in resistance due to wind loads can be obtained by applying one of three suggested methods, namely, wind tunnel model tests, STA-JIP, and Fujiwara's method. If wind tunnel model test results for the vessel are available, they may be considered the most accurate basis for estimating the added wind resistance. Otherwise, the database of wind resistance coefficients established by STA-JIP (\citet{van2013new}) or the regression formula presented by \citet{Fujiwara2005} is recommended. From the STA-JIP database, experimental values for the specific ship type can be obtained, whereas Fujiwara's method is based on a regression analysis of data obtained from several wind tunnel model tests for different ship types.
The two main sets of parameters required to estimate the added wind resistance using any of the above three methods are the incident wind parameters and the information regarding the area exposed to the wind. The incident wind parameters, i.e., relative wind speed and direction, can be obtained from onboard measurements or weather hindcast data. In the case of weather hindcast data, the relative wind can be calculated from the hindcast values according to the formulation outlined by \citet{ITTC2017} in section E.1, and in the case of onboard measurements, the relative wind measurements should be corrected for the vertical position of the anemometer according to the instructions given by \citet{ITTC2017} in section E.2, also explained here in section \ref{sec:resultsValChecks}. The area exposed to the wind can either be estimated using the general arrangement drawing of the ship or obtained approximately using a regression formula based on data from several ships, presented by \citet{kitamura2017estimation}.
The added wave resistance ($R_{AW}$) can be obtained in a similar manner using one of several well-established estimation methods. \citet{ITTC2017} recommends conducting seakeeping model tests in regular waves to obtain $R_{AW}$ transfer functions, which can further be used to estimate $R_{AW}$ for the ship in irregular seas. To obtain these transfer functions or $R_{AW}$ empirically for a given ship, it is possible to use physics-based empirical methods like STAWAVE1 and STAWAVE2 (recommended by \citet{ITTC2017}). STAWAVE1 is a simplified method for directly estimating $R_{AW}$ in head wave conditions only, and it requires limited input, namely the ship's waterline length, breadth, and the significant wave height. STAWAVE2 is an advanced method to empirically estimate parametric $R_{AW}$ transfer functions for a ship. The method is developed using an extensive database of seakeeping model test results from numerous ships, but unfortunately, it only provides transfer functions for approximately head wave conditions (0 to $\pm$45 degrees off the bow). A method proposed by DTU (\citet{Martinsen2016}; \citet{Taskar2019}; \citet{Taskar2021}) provides transfer functions for head to beam seas, i.e., 0 to $\pm$90 degrees off the bow. Finally, for all wave headings, it may be recommended to use the newly established method by \citet{Liu2020}. There have been several studies assessing and comparing the efficacy of these and several other methods, but no consistent guidelines regarding their applicability are provided.
\subsection{Data Cleaning \& Outlier Detection}
It may be argued that the process of data cleaning and outlier detection should be carried out much earlier in the data processing framework, as proposed by \citet{Dalheim2020DataPrep}. However, it should be noted that all the steps proposed above have to be performed only once for a given dataset, whereas data cleaning is done based on the features selected for further analysis. Since the same dataset can be used for several different analyses, which may use different sets of features, some part of the data cleaning has to be repeated before each analysis to obtain a clean dataset with as many data samples as possible. Moreover, the additional features acquired during the above processing steps may help determine more reliably whether a suspected sample is actually an outlier.
Nevertheless, it may be possible to reduce the workload of the above processing steps by performing some basic data cleaning before some of them. For instance, while calculating the resistance components for in-trip data samples, it is possible to filter out samples with invalid values for one or more of the ship data variables used to calculate these components, like speed-through-water, mean draft (or displacement), etc. This reduces the number of samples for which the new feature has to be calculated. It should also be noted that even if such simple data cleaning (before each step) is not performed, these invalid samples would easily be filtered out in the present step. Thus, the reliability and efficacy of the data processing framework is not affected by performing the data cleaning and outlier detection step at the end.
Most of the methods developed for ship performance monitoring assume that the ship is in a quasi-steady state for each data sample. The quasi-steady assumption indicates that the propulsive state of the ship remains more or less constant during the sample recording duration, i.e., the ship is neither accelerating nor decelerating. This is especially critical for the aforementioned time-averaged datasets, as the averaging duration can be substantially long, hiding the effects of accelerations and decelerations. Here, the two-stage steady-state filter, explained in section \ref{sec:interpolateHindcast}, can be applied to the shaft rpm time-series to remove the samples with accelerations and decelerations, resulting in quasi-steady samples. In tandem with the steady-state filter on the shaft rpm time-series, it may also be possible to use the steady-state filter, with relaxed settings, on the speed-over-ground time-series to filter out the samples where the GPS speed (or speed-over-ground) signal suddenly drops or recovers from a dead state, resulting in measurement errors.
As discussed in section \ref{sec:outliers}, the outliers can be divided into two broad categories: (a) contextual outliers, and (b) correlation-defying outliers. The contextual outliers can be identified and resolved by the methods presented and demonstrated by \citet{Dalheim2020DataPrep}; for correlation-defying outliers, methods like Principal Component Analysis (PCA) and autoencoders can be used, as sketched below. Figure \ref{fig:corrDefyingOutliers} shows the in-service data samples recorded onboard a ship. The data here has already been filtered for the quasi-steady assumption, explained above, and for contextual outliers, according to the methods suggested by \citet{Dalheim2020DataPrep}. Thus, the samples highlighted by red circles (around 6.4 MW shaft power in figure \ref{fig:corrDefyingOutliersSP}) can be classified as correlation-defying outliers. The time-series plot (shown in figure \ref{fig:corrDefyingOutliersTS}) clearly indicates that the detected outliers have faulty measurements for the speed-through-water (stw) and speed-over-ground (sog), defying the correlation between these variables and the rest. It is also quite surprising to notice that the same fault occurs in both speed measurements at the same time, considering that they are probably obtained from different sensors.
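As a sketch of the PCA route, correlation-defying outliers can be scored by their reconstruction error from the leading principal components; the number of components and the use of scikit-learn are assumptions.
\begin{verbatim}
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def reconstruction_error(X, n_components=2):
    """Score samples by how poorly they are reconstructed from
    the leading principal components; samples breaking the usual
    correlations (e.g. stw/sog vs power) score highest."""
    Xs = StandardScaler().fit_transform(X)
    pca = PCA(n_components=n_components).fit(Xs)
    X_hat = pca.inverse_transform(pca.transform(Xs))
    return np.linalg.norm(Xs - X_hat, axis=1)
\end{verbatim}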
\begin{figure}[ht]
\centering
\begin{subfigure}[]{0.42\linewidth}
\includegraphics[width=\linewidth]{Figures/stw_vs_power_J3.png}
\caption{Log speed (or stw) vs shaft power.} \label{fig:corrDefyingOutliersSP}
\end{subfigure}
\begin{subfigure}[]{0.57\linewidth}
\includegraphics[width=\linewidth]{Figures/Trip_TS_128_J3.png}
\caption{Time-series.} \label{fig:corrDefyingOutliersTS}
\end{subfigure}
\caption{Correlation-defying outliers marked with red circles.} \label{fig:corrDefyingOutliers}
\end{figure}
\section{Conclusion} \label{sec:conclusion}
The quality of data is very important for estimating the performance of a ship. In this study, a streamlined, semi-automatic data processing framework is developed for ship performance analysis. The data processing framework can be used to process data from several different sources, like onboard recorded in-service data, AIS data, and noon reports. These three data sources are discussed here in detail along with their inherent problems and associated examples. It is recommended to use the onboard recorded in-service data for ship performance monitoring over the other data sources, as it is considered more reliable due to its consistent and higher sampling rate. Moreover, the AIS data and noon reports lack some of the critical variables required for ship performance analysis, and they are also susceptible to human error, as some of the data variables recorded there are manually logged by the ship's crew. Nevertheless, all three data sources are known to have several problems and should be processed carefully before any further analysis.
The data processing framework presented in the current work is designed to address and resolve most of the problems found in the above three data sources. It is first recommended to divide the data into trips so that further processing can be performed in a more systematic manner. A simple logic to divide the data into individual trips is outlined for the case where port call information is not available. The weather hindcast (metocean) data is considered important supplementary information, which can be used for data validation and for estimating the environmental loads experienced by the ship. A simple algorithm to efficiently interpolate the hindcast data at a specific time and location of a ship is presented within the data processing framework. The problem of erroneous draft measurements, caused by the Venturi effect, is discussed in detail, and a simple interpolation is recommended to fix these measurements. A more complex case, where the draft or trim is voluntarily adjusted during the voyage without reducing the vessel speed, is also presented; such a case cannot be resolved with simple interpolation, and therefore an alternative method is suggested for it.
Choosing the most suitable methods for estimating the resistance components may also be critical for ship performance analysis. It is, therefore, recommended to carry out validation checks to find the most suitable methods before adopting them into practice. Such validation checks should be done, wherever possible, using the data obtained from the ship while in service rather than just the sea trial or model test results. Data cleaning and outlier detection are also considered important steps in processing the data. Since cleaning the data requires selecting a subset of features relevant to the analysis, it is recommended to perform this as the last step of the data processing framework, and part of it should be reiterated before carrying out a new type of analysis. The presented data processing framework can be adopted systematically and efficiently to process datasets for ship performance analysis. Moreover, the various data processing methods and steps mentioned here can also be used elsewhere to process time-series data from ships or similar sources, for a variety of further tasks.
\section{Introduction}
\label{sec:introduction}
The coupled interaction between free flow and porous medium flow forms an active area of research due to its appearance in a wide variety of applications. Examples include biomedical applications such as blood filtration, engineering situations such as air filters and PEM fuel cells, as well as environmental considerations such as the drying of soils. In all mentioned applications, it is essential to properly capture the mutual interaction between a free, possibly turbulent, flow and the creeping flow inside the porous medium.
We consider the simplified setting in which the free-flow regime is described by Stokes flow and we let Darcy's law govern the porous medium flow. Moreover, we only consider the case of stationary, single-phase flow and assume a sharp interface between the two flow regimes. These considerations are simplifications of a more general framework of models coupling free flow with porous medium flow. Such models have been the topic of a variety of scientific work in recent years, with focuses including mathematical analysis, discretization methods, and iterative solution techniques. Different formulations of the Stokes-Darcy problem have been analyzed in \cite{discacciati2004domain,layton2002coupling,gatica2011analysis}. Examples in the context of discretization methods include the use of Finite Volume methods \cite{iliev2004a, mosthaf2011a, rybak2015a, masson2016a, fetzer2017a, schneider2020coupling} and (Mixed) Finite Element methods, both in a coupled \cite{layton2002coupling,discacciati2009navier,riviere2005locally,gatica2009conforming} and in a unified \cite{armentano2019unified,karper2009unified} setting. Moreover, iterative methods for this problem are considered in e.g. \cite{discacciati2007robin,discacciati2005iterative,discacciati2018optimized,ganderderivation,galvis2007balancing,cao2011robin}. We refer the reader to the works \cite{discacciati2009navier,rybak2016mathematical,discacciati2004domain} and references therein for more comprehensive overviews on the results concerning the Stokes-Darcy model.
In order to distinguish this work from existing results, we formulate the following objective: \\
The goal of this work is to create an iterative numerical method that solves the stationary Stokes-Darcy problem with the following three properties:
\begin{enumerate}
\item \label{goal: mass conservation}
The solution retains \textbf{local mass conservation}, after each iteration.
Since mass balance is a physical conservation law, we emphasize its importance over all other constitutive relationships in the model. Hence, the first aim is to produce a solution that respects local mass conservation and use iterations to improve the accuracy of the solution with respect to the remaining equations. Importantly, we aim to obtain a conservative flow field in the case that the iterative scheme is terminated before reaching convergence.
We present two main ideas to achieve this. First, we limit ourselves to discretization methods capable of ensuring local mass conservation within each flow regime. Secondly, we ensure that no mass is lost across the Stokes-Darcy interface by introducing a single variable describing this interfacial flux. Our contribution in this context is to pose and analyze the Stokes-Darcy problem using function spaces that ensure normal flux continuity (Section~\ref{sub:functional_setting}), both in the continuous (Section~\ref{sec:well_posedness}) and discretized (Section~\ref{sec:discretization}) settings. Our approach is closely related to the ``global'' approach suggested in \cite[Remark 2.3.2]{discacciati2004domain} which we further develop in a functional setting. We moreover note that our construction is, in a sense, dual to the more conventional approach in which a mortar variable representing the interface pressure is introduced, see e.g. \cite{layton2002coupling,gatica2009conforming}.
\item
The performance of the iterative solution scheme is \textbf{robust with respect to physical and mesh parameters}. In this respect, the first aim is to obtain sufficient accuracy of the solution within a given number of iterations that is robust with respect to given material parameters such as the permeability of the porous medium and the viscosity of the fluid. This will allow the scheme to handle wide ranges of material parameters that arise either from the physical problem or due to the chosen non-dimensionalization.
Robustness with respect to mesh size is advantageous from a computational perspective. If the scheme reaches sufficient accuracy within a number of iterations on a coarse grid, then this robustness provides a prediction on the necessary computational time on refined grids. We note that the analysis in this work is restricted to shape-regular meshes, hence the typical mesh size $h$ becomes the only relevant mesh parameter.
To attain this goal, we pay special attention to the influence of the material and mesh parameters in the a priori analysis of the problem. We derive stability bounds of the solution in terms of functional norms weighted with the material parameters. One of the main contributions is thus the derivation of a properly weighted norm for the normal flux on the Stokes-Darcy interface, presented in equation \eqref{eq: norm phi}. In turn, this norm is used to construct an optimal preconditioner a priori.
\item
The method is easily extendable to a \textbf{wide range of discretization methods} for the Stokes and Darcy subproblems. Aside from compliance with aim (1), we impose as few restrictions as possible on the underlying choice of discretization methods, thus allowing the presented iterative scheme to be highly adaptable. Moreover, the scheme is able to benefit from existing numerical implementations that are tailored to solving the Stokes and Darcy subproblems efficiently. This work employs a conforming Mixed Finite Element method, keeping in mind that extensions can readily be made to other locally conservative methods such as e.g. Finite Volume Methods or Discontinuous Galerkin methods.
In order to achieve this third goal, we first derive the properties of the problem in the continuous setting and apply the discretization afterward. The key strategy here is to reformulate the problem into a Steklov-Poincar\'e system concerning only the normal flux across the interface, similar to the strategy presented in \cite[Sec. 2.5]{discacciati2004domain}. We then propose a preconditioner for this problem that is independent of the chosen discretization methods for the subproblems.
\end{enumerate}
Our formulation and analysis of the Stokes-Darcy problem therefore have three distinguishing properties. Most importantly, we consider a mixed formulation of the coupled problem using a function space that strongly imposes normal flux continuity at the interface. In contrast, existing approaches often use a primal formulation for the Darcy subproblem \cite{discacciati2004domain} or enforce flux continuity using Lagrange multipliers \cite{layton2002coupling}. In the context of Mixed Finite Element Methods, this directly leads to different choices of discrete spaces. Secondly, our analysis employs weighted norms and we derive an estimate for the interface flux that has, to our knowledge, not been exploited in existing literature. Third, we propose a preconditioner in Section~\ref{sub:parameter_robust_preconditioning} that is entirely local to the interface and does not require additional subproblem solves, in contrast to more conventional approaches such as the Neumann-Neumann method presented in Section~\ref{sub:comparison_to_NN_method}. The construction of this preconditioner does, however, require solving a generalized eigenvalue problem, which is done in the a priori, or ``off-line'', stage. As an additional feature, our set-up does not require choosing any acceleration parameters.
The article is structured as follows. Section~\ref{sec:the_model} introduces the coupled Stokes-Darcy model and its variational formulation as well as the notational conventions used throughout this work. Well-posedness of the model is shown in Section~\ref{sec:well_posedness} with the use of weighted norms. Section~\ref{sec:the_steklov_poincare_system} shows the reduction to an interface problem concerning only the normal flux defined there. A conforming discretization is proposed in Section~\ref{sec:discretization} with the use of the Mixed Finite Element method. Using the ingredients of these sections, Section~\ref{sec:iterative_solvers} describes the proposed iterative scheme and the optimal preconditioner it relies on. The theoretical results are confirmed numerically in Section~\ref{sec:numerical_results}. Finally, Section~\ref{sec:conclusions} contains concluding remarks.
\section{The Coupled Stokes-Darcy Model}
\label{sec:the_model}
Consider an open, bounded domain $\Omega \subset \mathbb{R}^n$, $n \in \{2, 3\}$, decomposed into two disjoint, Lipschitz subdomains $\Omega_S$ and $\Omega_D$. Here, and throughout this work, the subscript $S$ or $D$ on subdomains and variables denotes their association with the Stokes or Darcy subproblem, respectively. Let the interface be denoted by $\Gamma := \partial{\Omega}_S \cap \partial{\Omega}_D$ and let $\bm{n}$ denote the unit vector normal to $\Gamma$ oriented outward with respect to $\Omega_S$. An illustration of these definitions is given in Figure~\ref{fig:figure1}.
\begin{figure}[ht]
\centering
\includegraphics[width = \textwidth]{Fig1.pdf}
\caption{Decomposition of the domain into $\Omega_S$ and $\Omega_D$.}
\label{fig:figure1}
\end{figure}
We introduce the model problem following the description of \cite{layton2002coupling}. The main variables are given by the velocity $\bm{u}$ and pressure $p$. A subscript denotes the restriction of a variable to the corresponding subdomain. The model is formed by considering Stokes flow for $(\bm{u}_S, p_S)$ in $\Omega_S$, Darcy flow for $(\bm{u}_D, p_D)$ in $\Omega_D$, and mass conservation laws for $\bm{u}$ in both subdomains:
\begin{subequations} \label{eq: SD strong form}
\begin{align}
-\nabla \cdot \sigma(\bm{u}_S, p_S) &= \bm{f}_S, &
\nabla \cdot \bm{u}_S &= 0, & \text{in }\Omega_S, \\
\bm{u}_D + K \nabla p_D &= 0, &
\nabla \cdot \bm{u}_D &= f_D, & \text{in }\Omega_D.
\end{align}
In this setting, $K$ is the hydraulic conductivity of the porous medium. For simplicity, we assume that $K$ is homogeneous and isotropic and thus given by a positive scalar. On the right-hand side, $\bm{f}_S$ represents a body force and $f_D$ corresponds to a mass source.
In the governing equations for Stokes flow, let the strain $\varepsilon$ and stress $\sigma$ be given by
\begin{align*}
\varepsilon(\bm{u}_S) &:= \frac{1}{2}\left(\nabla \bm{u}_S + (\nabla \bm{u}_S)^T \right), &
\sigma(\bm{u}_S, p_S) &:= \mu \varepsilon(\bm{u}_S) - p_S I.
\end{align*}
The parameter $\mu > 0$ is the viscosity.
Next, we introduce two coupling conditions on the interface $\Gamma$ that describe mass conservation and the balance of forces, respectively:
\begin{align}
\bm{n} \cdot \bm{u}_S &= \bm{n} \cdot \bm{u}_D,
& \text{on } \Gamma, \label{eq: coupling_mass}\\
\bm{n} \cdot \sigma(\bm{u}_S, p_S) \cdot \bm{n} &= -p_D,
& \text{on } \Gamma.
\end{align}
As remarked in the introduction, we keep a particular focus on conservation of mass. To ensure that no mass is lost across the interface, we will prioritize condition \eqref{eq: coupling_mass} at a later stage.
To close the model, we consider the following boundary conditions. First, for the Stokes subproblem, we impose the Beavers-Joseph-Saffman condition on the interface $\Gamma$, given by
\begin{align}
\bm{n} \cdot \sigma(\bm{u}_S, p_S) \cdot \bm{\tau}
&= - \beta \bm{\tau} \cdot \bm{u}_S,
& \text{on } \Gamma.
\label{eq: BJS}
\end{align}
Here, we define $\beta := \alpha \frac{\mu}{\sqrt{\bm{\tau} \cdot \kappa \cdot \bm{\tau}}} $ with $\kappa := \mu K$ the permeability and $\alpha$ a proportionality constant to be determined experimentally. Moreover, the unit vector $\bm{\tau}$ is obtained from the tangent bundle of $\Gamma$. Thus, for $n = 2$, equation \eqref{eq: BJS} corresponds to a single condition on the one-dimensional interface $\Gamma$ whereas for $n = 3$, it describes two separate coupling conditions.
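As a small illustration of this coupling parameter, the following Python snippet evaluates $\beta$ from the material parameters; since $K$ is scalar in our setting, $\bm{\tau} \cdot \kappa \cdot \bm{\tau}$ reduces to $\kappa$. The parameter values are placeholders for illustration only.
\begin{verbatim}
# Sketch: evaluate the Beavers-Joseph-Saffman coefficient beta.
# Since K (hence kappa = mu * K) is scalar, tau . kappa . tau = kappa.
import math

def bjs_beta(alpha, mu, K):
    kappa = mu * K                       # permeability
    return alpha * mu / math.sqrt(kappa)

print(bjs_beta(alpha=1.0, mu=1e-3, K=1e-6))   # placeholder values
\end{verbatim}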
The boundary of $\Omega$ is decomposed into the disjoint unions $\partial \Omega_S \setminus \Gamma = \partial_u \Omega_S \cup \partial_\sigma \Omega_S$ and $\partial \Omega_D \setminus \Gamma = \partial_u \Omega_D \cup \partial_p \Omega_D$. The subscript denotes the type of boundary condition imposed on that portion of the boundary. Specifically, we set
\begin{align}
\bm{u}_S &= 0, & \text{on } &\partial_u \Omega_S, &
\bm{n} \cdot \bm{u}_D &= 0, & \text{on } &\partial_u \Omega_D, \label{eq: BC essential} \\
\bm{n} \cdot \sigma(\bm{u}_S, p_S) &= 0, & \text{on } &\partial_\sigma \Omega_S, &
p_D &= g_p, & \text{on } &\partial_p \Omega_D, \label{eq: BC natural}
\end{align}
with $g_p$ a given pressure distribution.
\end{subequations}
In the following, we assume that the interface $\Gamma$ touches the portion of the boundary $\partial \Omega_S$ where homogeneous velocity conditions are imposed, i.e. $\partial \Gamma \subseteq \overline{\partial_u \Omega_S}$.
We note that this assumption excludes the case in which $\Omega_D$ is completely surrounded by $\Omega_S$.
Moreover, we assume that $|\partial_\sigma \Omega_S \cup \partial_p \Omega_D| > 0$ to ensure unique solvability of the coupled problem and we focus on the case in which $|\partial_\sigma \Omega_S| > 0$.
\subsection{Functional Setting}
\label{sub:functional_setting}
In this section, we introduce the function spaces in which we search for a weak solution of problem \eqref{eq: SD strong form}. We start by considering the space for the velocity variable $\bm{u}$. With the aim of deriving mixed formulations for both subproblems, we introduce the following spaces:
\begin{subequations}
\begin{align}
\bm{V}_S &:= \left\{ \bm{v}_S \in (H^1(\Omega_S))^n :\
\bm{v}_S|_{\partial_u \Omega_S} = 0 \right\}, \\
\bm{V}_D &:= \left\{ \bm{v}_D \in H(\div, \Omega_D) :\
\bm{n} \cdot \bm{v}_D|_{\partial_u \Omega_D} = 0 \right\}.
\end{align}
\end{subequations}
Note that these spaces incorporate the boundary conditions \eqref{eq: BC essential} on $\partial_u \Omega$ which become essential boundary conditions in our mixed formulation. Similarly, the normal flux continuity across $\Gamma$ \eqref{eq: coupling_mass} needs to be incorporated as an essential boundary condition. For that, we introduce a single function $\phi \in \Lambda$, defined on $\Gamma$ to represent the normal flux across the interface. The next step is then to define the following three function spaces:
\begin{subequations}
\begin{align}
\bm{V}_S^0 &:= \left\{ \bm{v}_S \in \bm{V}_S :\
\bm{n} \cdot \bm{v}_S|_{\Gamma} = 0 \right\}, \\
\Lambda &:= H^{1/2}_{00}(\Gamma), \\
\bm{V}_D^0 &:= \left\{ \bm{v}_D \in \bm{V}_D :\
\bm{n} \cdot \bm{v}_D|_{\Gamma} = 0 \right\}.
\end{align}
\end{subequations}
We note that $\Lambda$ is the normal trace space of $\bm{V}_S$ on $\Gamma$. From the previous section, we recall that $\Gamma$ touches the boundary $\partial \Omega$ where zero velocity conditions are imposed for the Stokes problem. The trace space is therefore characterized as the fractional Sobolev space $H^{1/2}_{00}(\Gamma)$, containing distributions that can be continuously extended by zero on $\partial \Omega$. We refer the reader to \cite{lions2012non} for more details on this type of trace spaces. For the purpose of our analysis, we note that the inclusion $H_0^1(\Gamma) \subset \Lambda \subset L^2(\Gamma)$ holds and we let $H^{-\frac{1}{2}}(\Gamma)$ denote the dual of $\Lambda$.
For the incorporation of the interface condition \eqref{eq: coupling_mass} in our weak formulation, we introduce continuous operators that extend a given flux distribution on the interface to the two subdomains. The extension operators $\mathcal{R}_S: \Lambda \to \bm{V}_S$ and $\mathcal{R}_D: \Lambda \to \bm{V}_D$ are chosen such that
\begin{align} \label{eq: extension property}
(\bm{n} \cdot \mathcal{R}_S \varphi)|_{\Gamma}
&=
(\bm{n} \cdot \mathcal{R}_D \varphi)|_{\Gamma}
= \varphi.
\end{align}
We use $\| \cdot \|_{s, \Omega}$ as short-hand notation for the norm on $H^s(\Omega)$. With this notation, the continuity of $\mathcal{R}_i$ implies that the following inequalities hold
\begin{align} \label{eq: continuity R_i}
\| \mathcal{R}_S \varphi \|_{1, \Omega_S} &\lesssim \| \varphi \|_{\frac{1}{2}, \Gamma}, &
\| \mathcal{R}_D \varphi \|_{0, \Omega_D} + \| \nabla \cdot \mathcal{R}_D \varphi \|_{0, \Omega_D} &\lesssim \| \varphi \|_{-\frac{1}{2}, \Gamma}.
\end{align}
Examples of continuous extension operators can be found in \cite[Sec. 4.1.2]{quarteroni1999domain}. The notation $A \lesssim B$ means that there exists a constant $c > 0$, independent of the material parameters and the mesh size $h$, such that $A \le cB$. The relationship $\gtrsim$ is defined analogously.
These definitions allow us to create a function space $\bm{V}$ containing velocities with normal trace continuity on $\Gamma$. Let this function space be defined as
\begin{align}
\bm{V} &:= \left\{ \bm{v} \in (L^2(\Omega))^n :\
\exists (\bm{v}_S^0, \varphi, \bm{v}_D^0) \in \bm{V}_S^0 \times \Lambda \times \bm{V}_D^0
\text{ such that } \bm{v}|_{\Omega_i} = \bm{v}_i^0 + \mathcal{R}_i \varphi, \text{ for } i \in \{S, D\} \right\}.
\end{align}
Second, the function space for the pressure variable is given by $W := L^2(\Omega)$ and we define $W_S := L^2(\Omega_S)$ and $W_D := L^2(\Omega_D)$.
As before, we use the subscript $i \in \{S, D\}$ to denote the restriction to a subdomain $\Omega_i$. Thus, for $(\bm{v}, w) \in \bm{V} \times W$, we have
\begin{align}
\bm{v}_i &:= \bm{v}|_{\Omega_i} \in \bm{V}_i, &
w_i &:= w|_{\Omega_i} \in W_i, &
i &\in \{S, D\}.
\end{align}
Despite the fact that each function in $\bm{V}$ can be decomposed into components in $\bm{V}_S$ and $\bm{V}_D$, we emphasize that $\bm{V}$ is a strict subspace of $\bm{V}_S \times \bm{V}_D$ due to the continuity of normal traces on $\Gamma$.
A key concept in our functional setting is to decompose each function in $\bm{V}$ into a function with zero normal trace on $\Gamma$ and an extension of the normal flux distribution. For that purpose, let $\bm{V}^0$ be the subspace of $\bm{V}$ consisting of functions with zero normal trace over $\Gamma$:
\begin{align}
\bm{V}^0 := \left\{ \bm{v}^0 \in \bm{V} :\
\exists (\bm{v}_S^0, \bm{v}_D^0) \in \bm{V}_S^0 \times \bm{V}_D^0
\text{ such that } \bm{v}^0|_{\Omega_i} = \bm{v}_i^0, \text{ for } i \in \{S, D\} \right\}.
\end{align}
Secondly, we define the composite extension operator $\mathcal{R}: \Lambda \to \bm{V}$ such that $\mathcal{R} \varphi|_{\Omega_i} = \mathcal{R}_i \varphi$ for $i \in \{S, D\}$. Combined with the subspace $\bm{V}^0$, we obtain the decomposition
\begin{align} \label{eq: decomposition V}
\bm{V} = \bm{V}^0 \oplus \mathcal{R} \Lambda.
\end{align}
It is important to emphasize that the function space $\bm{V}$ is independent of the choice of extension operators. On the other hand, each choice of $\mathcal{R}$ leads to a specific decomposition of the form \eqref{eq: decomposition V}.
\subsection{Variational Formulation}
\label{sub:variational_Formulation}
With the function spaces defined, we continue by deriving the variational formulation of \eqref{eq: SD strong form}. The first step is to consider the Stokes and Darcy flow equations. We test these with $\bm{v} \in \bm{V}$ and integrate over the corresponding subdomain. Using $(\cdot, \cdot)_{\Omega}$ to denote the $L^2$ inner product on $\Omega$, we apply integration by parts and use the boundary conditions to derive
\begin{align*}
-(\nabla \cdot \sigma(\bm{u}_S, p_S), \bm{v}_S)_{\Omega_S} &= \nonumber \\
(\sigma(\bm{u}_S, p_S), \nabla \bm{v}_S)_{\Omega_S}
- ( \bm{n} \cdot \sigma(\bm{u}_S, p_S), \bm{v}_S )_{\Gamma}
&= \nonumber\\
(\mu \varepsilon(\bm{u}_S), \varepsilon(\bm{v}_S))_{\Omega_S}
- (p_SI, \nabla \bm{v}_S)_{\Omega_S}
+ (\beta \bm{\tau} \cdot \bm{u}_S, \bm{\tau} \cdot \bm{v}_S )_{\Gamma}
+ ( p_D, \bm{n} \cdot \bm{v}_S )_{\Gamma}
&= ( \bm{f}_S, \bm{v}_S )_{\Omega_S}.
\end{align*}
On the other hand, we test Darcy's law in the porous medium $\Omega_D$ and use similar steps to obtain
\begin{align*}
(K^{-1} \bm{u}_D, \bm{v}_D)_{\Omega_D}
- (p_D, \nabla \cdot \bm{v}_D)_{\Omega_D}
- ( p_D, \bm{n} \cdot \bm{v}_D )_{\Gamma}
- ( g_p, \bm{n} \cdot \bm{v}_D )_{\partial_p \Omega_D}
&= 0.
\end{align*}
The normal trace continuity imposed in the space $\bm{V}$ gives us
$( p_D, \bm{n} \cdot \bm{v}_S )_{\Gamma} - ( p_D, \bm{n} \cdot \bm{v}_D )_{\Gamma} = 0$.
In turn, after supplementing the system with the equations for mass conservation, we arrive at the following variational formulation: \\
Find the pair $(\bm{u}, p) \in \bm{V} \times W$ that satisfies
\begin{subequations}
\begin{align}
(\mu \varepsilon(\bm{u}_S), \varepsilon(\bm{v}_S))_{\Omega_S}
+ (\beta \bm{\tau} \cdot \bm{u}_S, \bm{\tau} \cdot \bm{v}_S )_{\Gamma}
+ (K^{-1} \bm{u}_D, \bm{v}_D)_{\Omega_D}
& \nonumber\\
- (p_S, \nabla \cdot \bm{v}_S)_{\Omega_S}
- (p_D, \nabla \cdot \bm{v}_D)_{\Omega_D}
&= ( \bm{f}_S, \bm{v}_S )_{\Omega_S}
+ ( g_p, \bm{n} \cdot \bm{v}_D )_{\partial_p \Omega_D},
& \forall \bm{v} &\in \bm{V}, \\
(\nabla \cdot \bm{u}_S, w_S)_{\Omega_S}
+ (\nabla \cdot \bm{u}_D, w_D)_{\Omega_D}
&=
(f_D, w_D)_{\Omega_D},
& \forall w &\in W.
\end{align}
\end{subequations}
We note that this system has a characteristic saddle-point structure, allowing us to rewrite the problem as:\\
Find the pair $(\bm{u}, p) \in \bm{V} \times W$ that satisfies
\begin{subequations} \label{eq: variational formulation}
\begin{align}
a(\bm{u}, \bm{v}) + b(\bm{v}, p) &= f_u(\bm{v}),
& \forall \bm{v} &\in \bm{V}, \label{eq: variational formulation 1st eq}\\
b(\bm{u}, w) &= f_p(w), \label{eq: variational formulation 2nd eq}
& \forall w &\in W.
\end{align}
\end{subequations}
The bilinear forms $a: \bm{V} \times \bm{V} \to \mathbb{R}$ and $b: \bm{V} \times W \to \mathbb{R}$, and the functionals $f_u: \bm{V} \to \mathbb{R}$ and $f_p: W \to \mathbb{R}$ are given by
\begin{subequations} \label{eq: bilinear forms}
\begin{align}
a(\bm{u}, \bm{v}) &:= (\mu \varepsilon(\bm{u}_S), \varepsilon(\bm{v}_S))_{\Omega_S}
+ (\beta \bm{\tau} \cdot \bm{u}_S, \bm{\tau} \cdot \bm{v}_S )_{\Gamma}
+ (K^{-1} \bm{u}_D, \bm{v}_D)_{\Omega_D}, \\
b(\bm{u}, w) &:= -(\nabla \cdot \bm{u}_S, w_S)_{\Omega_S}
-(\nabla \cdot \bm{u}_D, w_D)_{\Omega_D}, \\
f_u(\bm{v}) &:= ( \bm{f}_S, \bm{v}_S )_{\Omega_S}
+ ( g_p, \bm{n} \cdot \bm{v}_D )_{\partial_p \Omega_D}, \\
f_p(w) &:=
-(f_D, w_D)_{\Omega_D}.
\end{align}
\end{subequations}
\section{Well-Posedness Analysis}
\label{sec:well_posedness}
In this section, we analyze problem \eqref{eq: variational formulation} with the use of weighted norms. The main goal is to show that a unique solution exists that is bounded in norms that depend on the material parameters. Consequently, this result allows us to construct an iterative method that is robust with respect to material parameters.
We start by deriving the appropriate norms, for which we first make two assumptions on the material parameters.
First, the constant $\beta$ in the Beavers-Joseph-Saffman condition \eqref{eq: BJS} is assumed to be bounded as
\begin{subequations} \label{eqs: material parameter bounds}
\begin{align}
\beta = \alpha \frac{\mu}{\sqrt{\bm{\tau} \cdot \kappa \cdot \bm{\tau}}} \lesssim \mu.
\end{align}
In the special case of $\alpha = 0$, this condition is trivially satisfied.
Second, we assume that the permeability $\kappa := \mu K$ is bounded from above in the sense that
\begin{align}
\mu K \lesssim 1.
\end{align}
\end{subequations}
We are now ready to define the weighted norms for $\bm{v} \in \bm{V}$ and $w \in W$, respectively, given by
\begin{subequations} \label{eq: norms}
\begin{align}
\| \bm{v} \|_V^2 &:=
\| \mu^{\frac{1}{2}} \bm{v}_S \|_{1, \Omega_S}^2
+ \| K^{-\frac{1}{2}} \bm{v}_D \|_{0, \Omega_D}^2
+ \| K^{-\frac{1}{2}} \nabla \cdot \bm{v}_D \|_{0, \Omega_D}^2, \\
\| w \|_W^2 &:= \| \mu^{-\frac{1}{2}} w_S \|_{0, \Omega_S}^2
+ \| K^{\frac{1}{2}} w_D \|_{0, \Omega_D}^2.
\end{align}
\end{subequations}
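In a discrete setting, these norms are inexpensive to evaluate once the relevant Gram matrices are available. The following minimal Python sketch illustrates the weighting by $\mu$ and $K$; all matrices are random symmetric positive definite placeholders standing in for the $H^1(\Omega_S)$ inner product, the $L^2$ mass matrices, and a discrete divergence, not the output of an actual discretization.
\begin{verbatim}
# Sketch: evaluate the weighted norms from generic Gram matrices.
import numpy as np

rng = np.random.default_rng(0)
mu, K = 1e-3, 1e-6                     # placeholder material parameters
nS, nD, nWS, nWD = 8, 6, 5, 4          # placeholder dof counts

def spd(n):                            # random SPD stand-in for a Gram matrix
    X = rng.standard_normal((n, n))
    return X @ X.T + n * np.eye(n)

H1_S = spd(nS)                         # H^1(Omega_S) inner product on V_S
M_D  = spd(nD)                         # L^2(Omega_D) mass matrix on V_D
M_WS = spd(nWS)                        # L^2(Omega_S) mass matrix on W_S
M_WD = spd(nWD)                        # L^2(Omega_D) mass matrix on W_D
Div  = rng.standard_normal((nWD, nD))  # discrete divergence V_D -> W_D

def norm_V(vS, vD):                    # weighted velocity norm
    d = Div @ vD
    return np.sqrt(mu * vS @ H1_S @ vS
                   + (vD @ M_D @ vD + d @ M_WD @ d) / K)

def norm_W(wS, wD):                    # weighted pressure norm
    return np.sqrt(wS @ M_WS @ wS / mu + K * wD @ M_WD @ wD)

print(norm_V(rng.standard_normal(nS), rng.standard_normal(nD)))
print(norm_W(rng.standard_normal(nWS), rng.standard_normal(nWD)))
\end{verbatim}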
The next step is to analyze the problem using these norms. For that purpose, we recall the identification of \eqref{eq: variational formulation} as a saddle-point problem. Using saddle-point theory \cite{boffi2013mixed}, well-posedness is shown by proving the four sufficient conditions presented in the following lemma.
\begin{lemma} \label{lem: inequalities}
The bilinear forms defined in \eqref{eq: bilinear forms} satisfy the following inequalities:
\begin{subequations}
\begin{align}
& \text{For } \bm{u}, \bm{v} \in \bm{V}:
& a(\bm{u}, \bm{v}) &\lesssim \| \bm{u} \|_V \| \bm{v} \|_V. \label{ineq: a_cont}\\
& \text{For } (\bm{v}, w) \in \bm{V} \times W:
& b(\bm{v}, w) &\lesssim \| \bm{v} \|_V \| w \|_W. \label{ineq: b_cont}\\
& \text{For } \bm{v} \in \bm{V} \text{ with } b(\bm{v}, w) = 0 \ \forall w \in W:
& a(\bm{v}, \bm{v}) &\gtrsim \| \bm{v} \|_V^2. \label{ineq: a_coercive}\\
& \text{For } w \in W, \ \exists \bm{v} \in \bm{V} \text{ with } \bm{v} \ne 0, \text{ such that}:
& b(\bm{v}, w) &\gtrsim \| \bm{v} \|_V \| w \|_W.\label{ineq: b_infsup}
\end{align}
\end{subequations}
\end{lemma}
\begin{proof}
Using the Cauchy-Schwarz inequality, the assumptions \eqref{eqs: material parameter bounds}, and a trace inequality for $H^1$, we obtain the continuity bounds \eqref{ineq: a_cont} and \eqref{ineq: b_cont}:
\begin{subequations} \label{ineqs: continuity}
\begin{align}
a(\bm{u}, \bm{v})
&\lesssim \|\mu^{\frac{1}{2}} \bm{u}_S \|_{1, \Omega_S}
\|\mu^{\frac{1}{2}} \bm{v}_S \|_{1, \Omega_S}
+ \| \beta^{\frac{1}{2}} \bm{\tau} \cdot \bm{u}_S \|_{0, \Gamma}
\| \beta^{\frac{1}{2}} \bm{\tau} \cdot \bm{v}_S \|_{0, \Gamma}
+ \| K^{-\frac{1}{2}} \bm{u}_D \|_{0, \Omega_D}
\| K^{-\frac{1}{2}} \bm{v}_D \|_{0, \Omega_D} \nonumber\\
&\lesssim \| \bm{u} \|_V \| \bm{v} \|_V, \\
b(\bm{v}, w)
&\lesssim \| \mu^{\frac{1}{2}} \bm{v}_S \|_{1, \Omega_S} \| \mu^{-\frac{1}{2}} w_S \|_{0, \Omega_S}
+ \| K^{-\frac{1}{2}} \nabla \cdot \bm{v}_D \|_{0, \Omega_D} \| K^{\frac{1}{2}} w_D \|_{0, \Omega_D} \nonumber\\
&\lesssim \| \bm{v} \|_V \| w \|_W.
\end{align}
\end{subequations}
For the proof of inequality \eqref{ineq: a_coercive}, we first note that if $b(\bm{v}, w) = 0$ for all $w \in W$, then $\nabla \cdot \bm{v}_D = 0$. Combining this observation with Korn's inequality gives us:
\begin{align} \label{eq: proof 3.5c}
a(\bm{v}, \bm{v}) &\gtrsim
\| \mu^{\frac{1}{2}} \bm{v}_S \|_{1, \Omega_S}^2
+ \| \beta^{\frac{1}{2}} \bm{\tau} \cdot \bm{v}_S \|_{0, \Gamma}^2
+ \| K^{-\frac{1}{2}} \bm{v}_D \|_{0, \Omega_D}^2 \nonumber\\
&\ge
\| \mu^{\frac{1}{2}} \bm{v}_S \|_{1, \Omega_S}^2
+ \| K^{-\frac{1}{2}} \nabla \cdot \bm{v}_D \|_{0, \Omega_D}^2
+ \| K^{-\frac{1}{2}} \bm{v}_D \|_{0, \Omega_D}^2
= \| \bm{v} \|_V^2.
\end{align}
Inequality \eqref{ineq: b_infsup} is the inf-sup condition relevant for this formulation. For a given $w = (w_S, w_D) \in W$, let us construct $\bm{v} = (\bm{v}_S^0, \phi, \bm{v}_D^0) \in \bm{V}$ in the following manner. First, let the interface function $\phi \in H_0^1(\Gamma)$ solve the following, constrained minimization problem:
\begin{align} \label{eq: phi constraint}
\min_{\varphi \in H_0^1(\Gamma)} \tfrac12 &\| \varphi \|_{1, \Gamma}^2,
&\text{subject to } (\varphi, 1)_{\Gamma} &= (K w_D, 1)_{\Omega_D}.
\end{align}
The solution $\phi$ then satisfies the two key properties:
\begin{subequations}
\begin{align}
\| \phi \|_{1, \Gamma} &\lesssim \| K w_D \|_{0, \Omega_D}, \label{eq: bound on phi} \\
(\nabla \cdot \mathcal{R}_D \phi, 1)_{\Omega_D} &=
(- \bm{n} \cdot \mathcal{R}_D \phi, 1)_{\Gamma} =
(- \phi, 1)_{\Gamma} =
- (K w_D, 1)_{\Omega_D}. \label{eq: compatibility of phi}
\end{align}
\end{subequations}
Bound \eqref{eq: bound on phi} can be deduced by constructing a function $\psi \in H_0^1(\Gamma)$ that satisfies the constraint in \eqref{eq: phi constraint} and is bounded in the sense of \eqref{eq: bound on phi}. It then follows that the minimizer $\phi$ satisfies \eqref{eq: bound on phi} as well.
Next, we construct $\bm{v}_i^0 \in \bm{V}_i^0$ for $i \in \{S, D\}$. For that, we first introduce $p_S \in H^2(\Omega_S)$ as the solution to the following auxiliary problem
\begin{subequations} \label{eqs: aux prob p_S}
\begin{align}
- \nabla \cdot \nabla p_S &= \mu^{-1} w_S + \nabla \cdot \mathcal{R}_S \phi, \\
p_S|_{\partial_\sigma \Omega_S} &= 0, \\
(\bm{n} \cdot \nabla p_S)|_{\partial \Omega_S \setminus \partial_\sigma \Omega_S} &= 0.
\end{align}
\end{subequations}
Similarly, we define $p_D \in H^2(\Omega_D)$ such that
\begin{subequations} \label{eqs: aux prob p_D}
\begin{align}
- \nabla \cdot \nabla p_D &= K w_D + \nabla \cdot \mathcal{R}_D \phi, \\
(\bm{n} \cdot \nabla p_D)|_{\partial \Omega_D} &= 0.
\end{align}
\end{subequations}
We note that \eqref{eqs: aux prob p_D} is a Neumann problem. We therefore verify the compatibility of the right-hand side by using \eqref{eq: compatibility of phi} in the following derivation:
\begin{align*}
( K w_D + \nabla \cdot \mathcal{R}_D \phi, 1)_{\Omega_D}
= ( K w_D - K w_D, 1)_{\Omega_D}
= 0.
\end{align*}
Let $\bm{v}_S^0 := \nabla p_S$ and $\bm{v}_D^0:= \nabla p_D$. From the elliptic regularity of the auxiliary problems, see e.g. \cite{evans2010partial}, we obtain the bounds
\begin{subequations}
\begin{align}
\| \bm{v}_S^0 \|_{1, \Omega_S} \lesssim
\| \mu^{-1} w_S \|_{0, \Omega_S} + \| \nabla \cdot \mathcal{R}_S \phi \|_{0, \Omega_S}, \\
\| \bm{v}_D^0 \|_{1, \Omega_D} \lesssim
\| K w_D \|_{0, \Omega_D} + \| \nabla \cdot \mathcal{R}_D \phi \|_{0, \Omega_D}.
\end{align}
\end{subequations}
Next, we set $\bm{v}_S := \bm{v}_S^0 + \mathcal{R}_S \phi$ and $\bm{v}_D := \bm{v}_D^0 + \mathcal{R}_D \phi$. Combining the bounds on $\bm{v}_S^0$ and $\phi$ with the continuity of the extension operators from \eqref{eq: continuity R_i} and the material parameter bounds \eqref{eqs: material parameter bounds}, we derive
\begin{subequations}
\begin{align}
\| \mu^{\frac{1}{2}} \bm{v}_S \|_{1, \Omega_S}
&\le \| \mu^{\frac{1}{2}} \bm{v}_S^0 \|_{1, \Omega_S}
+ \| \mu^{\frac{1}{2}} \mathcal{R}_S \phi \|_{1, \Omega_S} \nonumber \\
&\lesssim \| \mu^{-\frac{1}{2}} w_S \|_{0, \Omega_S}
+ \| \mu^{\frac{1}{2}} \nabla \cdot \mathcal{R}_S \phi \|_{0, \Omega_S}
+ \| \mu^{\frac{1}{2}} \mathcal{R}_S \phi \|_{1, \Omega_S} \nonumber \\
&\lesssim \| \mu^{-\frac{1}{2}} w_S \|_{0, \Omega_S}
+ \| \mu^{\frac{1}{2}} \phi \|_{\frac{1}{2}, \Gamma} \nonumber \\
&\lesssim \| \mu^{-\frac{1}{2}} w_S \|_{0, \Omega_S}
+ \| \mu^{\frac{1}{2}} K w_D \|_{0, \Omega_D} \nonumber \\
&\lesssim \| \mu^{-\frac{1}{2}} w_S \|_{0, \Omega_S}
+ \| K^{\frac{1}{2}} w_D \|_{0, \Omega_D}.
\end{align}
Similarly, $\bm{v}_D$ is bounded in the following sense:
\begin{align}
%
\| K^{-\frac{1}{2}} \bm{v}_D \|_{0, \Omega_D}
+ \| K^{-\frac{1}{2}} \nabla \cdot \bm{v}_D \|_{0, \Omega_D}
&\le
\| K^{-\frac{1}{2}} \bm{v}_D^0 \|_{0, \Omega_D}
+ \| K^{-\frac{1}{2}} \mathcal{R}_D \phi \|_{0, \Omega_D}
+ \| K^{\frac{1}{2}} w_D \|_{0, \Omega_D} \nonumber \\
&\lesssim
\| K^{-\frac{1}{2}} \nabla \cdot \mathcal{R}_D \phi \|_{0, \Omega_D}
+ \| K^{-\frac{1}{2}} \mathcal{R}_D \phi \|_{0, \Omega_D}
+ \| K^{\frac{1}{2}} w_D \|_{0, \Omega_D} \nonumber \\
&\lesssim
\| K^{-\frac{1}{2}} \phi \|_{-\frac{1}{2}, \Gamma}
+ \| K^{\frac{1}{2}} w_D \|_{0, \Omega_D} \nonumber \\
&\lesssim
\| K^{\frac{1}{2}} w_D \|_{0, \Omega_D}.
\end{align}
\end{subequations}
In the final step, we have used that $H^1(\Gamma) \subseteq H^{-\frac{1}{2}}(\Gamma)$ and \eqref{eq: bound on phi}.
By construction, $\bm{v}$ now satisfies the following two properties:
\begin{subequations} \label{eqs: proof b inf sup}
\begin{align}
\| \bm{v} \|_V
&= \left(\| \mu^{\frac{1}{2}} \bm{v}_S \|_{1, \Omega_S}^2
+ \| K^{-\frac{1}{2}} \bm{v}_D \|_{0, \Omega_D}^2
+ \| K^{-\frac{1}{2}} \nabla \cdot \bm{v}_D \|_{0, \Omega_D}^2 \right)^\frac{1}{2}
\lesssim \| w \|_W, \\
b(\bm{v}, w) &= -(\nabla \cdot \bm{v}_S, w_S)_{\Omega_S}
-(\nabla \cdot \bm{v}_D, w_D)_{\Omega_D} \nonumber\\
&= -(\nabla \cdot (\nabla p_S + \mathcal{R}_S \phi), w_S)_{\Omega_S}
-(\nabla \cdot (\nabla p_D + \mathcal{R}_D \phi), w_D)_{\Omega_D}\nonumber\\
&= \| \mu^{-\frac{1}{2}} w_S \|_{0, \Omega_S}^2
+ \| K^{\frac{1}{2}} w_D \|_{0, \Omega_D}^2 \nonumber\\
&=\| w \|_W^2.
\end{align}
\end{subequations}
The proof is concluded by gathering \eqref{eqs: proof b inf sup}.
\end{proof}
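The constrained minimization \eqref{eq: phi constraint} used in the proof above is also convenient computationally. The following Python sketch solves a discrete analogue on a uniform one-dimensional interface grid with $P_1$ elements, enforcing the mean constraint with a Lagrange multiplier; the grid size and the constraint value standing in for $(K w_D, 1)_{\Omega_D}$ are placeholders.
\begin{verbatim}
# Sketch: discrete analogue of the constrained minimization
# defining phi, on a uniform 1D interface grid with P1 elements.
import numpy as np

n, c = 50, 1.0                  # interior dofs; c stands in for (K w_D, 1)
h = 1.0 / (n + 1)

# P1 stiffness and mass matrices on (0, 1) with zero boundary values
A = (np.diag(2*np.ones(n)) - np.diag(np.ones(n-1), 1)
     - np.diag(np.ones(n-1), -1)) / h
M = h/6 * (np.diag(4*np.ones(n)) + np.diag(np.ones(n-1), 1)
           + np.diag(np.ones(n-1), -1))
H = A + M                       # H^1(Gamma) inner product
g = M @ np.ones(n)              # g_i = (basis_i, 1)_Gamma

# KKT system: minimize 0.5 phi^T H phi subject to g^T phi = c
KKT = np.block([[H, g[:, None]], [g[None, :], np.zeros((1, 1))]])
sol = np.linalg.solve(KKT, np.append(np.zeros(n), c))
phi = sol[:-1]

print("constraint:", g @ phi)   # equals c up to round-off
print("H1 norm   :", np.sqrt(phi @ H @ phi))
\end{verbatim}
The same construction, with \eqref{eq: avg equal one} as the constraint, can be reused for the auxiliary flux $\zeta$ appearing in Section~\ref{sub:neumann_cases}.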
In the special case of $|\partial_p \Omega_D| > 0$, the Darcy subproblem is itself well-posed. This can be used to our advantage in the proof of \eqref{ineq: b_infsup}. In particular, the construction of $\phi \in \Lambda$ becomes unnecessary, as shown in the following corollary.
\begin{corollary} \label{cor: infsup V0}
If $|\partial_p \Omega_D| > 0$, then for each $w \in W$, there exists $\bm{v}^0 \in \bm{V}^0$ with $\bm{v}^0 \ne 0$ such that
\begin{align*}
b(\bm{v}^0, w) \gtrsim \| \bm{v}^0 \|_V \| w \|_W.
\end{align*}
\end{corollary}
\begin{proof}
We follow the same arguments as for \eqref{ineq: b_infsup} in Lemma~\ref{lem: inequalities}. The main difference is that we now set $\phi = 0$ and solve auxiliary Poisson problems to obtain $(\bm{v}_S^0, \bm{v}_D^0) \in \bm{V}_S^0 \times \bm{V}_D^0$ such that
\begin{align*}
- \nabla \cdot \bm{v}_S^0 &= \mu^{-1} w_S, &
- \nabla \cdot \bm{v}_D^0 &= K w_D, \\
(\bm{n} \cdot \bm{v}_S^0)|_{\partial \Omega_S \setminus \partial_\sigma \Omega_S} &= 0, &
(\bm{n} \cdot \bm{v}_D^0)|_{\partial \Omega_D \setminus \partial_p \Omega_D} &= 0.
\end{align*}
Since both $\partial_\sigma \Omega_S$ and $\partial_p \Omega_D$ have positive measure, these two subproblems are well-posed and the statement follows by elliptic regularity.
\end{proof}
We are now ready to present the main result of this section, namely that problem \eqref{eq: variational formulation} is well-posed with respect to the weighted norms of \eqref{eq: norms}.
\begin{theorem} \label{thm: well-posedness}
Problem \eqref{eq: variational formulation} is well-posed, i.e. a unique solution $(\bm{u}, p) \in \bm{V} \times W$ exists satisfying
\begin{align}
\| \bm{u} \|_V + \| p \|_W
\lesssim
\| \mu^{-\frac{1}{2}} \bm{f}_S \|_{-1, \Omega_S}
+ \| K^{-\frac{1}{2}} f_D \|_{0, \Omega_D}
+ \| K^{\frac{1}{2}} g_p \|_{\frac{1}{2}, \partial_p \Omega_D}.
\end{align}
\end{theorem}
\begin{proof}
With the inequalities from \Cref{lem: inequalities}, it suffices to show continuity of the right-hand side. Let us therefore apply the Cauchy-Schwarz inequality followed by a trace inequality:
\begin{align*}
f_u(\bm{v}) + f_p(w) &= ( \bm{f}_S, \bm{v}_S )_{\Omega_S}
+ (f_D, w_D)_{\Omega_D}
+ ( g_p, \bm{n} \cdot \bm{v}_D )_{\partial_p \Omega_D} \\
&\le \| \mu^{-\frac{1}{2}} \bm{f}_S \|_{-1, \Omega_S} \| \mu^{\frac{1}{2}} \bm{v}_S \|_{1, \Omega_S}
+ \| K^{-\frac{1}{2}} f_D \|_{0, \Omega_D} \| K^{\frac{1}{2}} w_D \|_{0, \Omega_D}
\\
& \ \ + \| K^{\frac{1}{2}} g_p \|_{\frac{1}{2}, \partial_p \Omega_D} \| K^{-\frac{1}{2}} \bm{n} \cdot \bm{v}_D \|_{-\frac{1}{2}, \partial_p \Omega_D} \\
&\lesssim \left( \| \mu^{-\frac{1}{2}} \bm{f}_S \|_{-1, \Omega_S}
+ \| K^{-\frac{1}{2}} f_D \|_{0, \Omega_D}
+ \| K^{\frac{1}{2}} g_p \|_{\frac{1}{2}, \partial_p \Omega_D} \right)
\left( \| \bm{v} \|_{V} + \| w \|_W \right).
\end{align*}
With the continuity of the right-hand side shown, all requirements are satisfied to invoke standard saddle point theory \cite{boffi2013mixed}, proving the claim.
\end{proof}
\section{The Steklov-Poincar\'e System}
\label{sec:the_steklov_poincare_system}
The strategy is to introduce the Steklov-Poincar\'e operator $\Sigma$ and reduce the system \eqref{eq: variational formulation} to a problem concerning only the interface flux $\phi$. The reason for this is twofold. First, since the interface is a lower-dimensional manifold, the problem is reduced in dimensionality and is therefore expected to be easier to solve. Second, we show that the resulting system is symmetric and positive-definite and hence amenable to a large class of iterative solvers including the Minimal Residual (MinRes) and the Conjugate Gradient (CG) method.
We start with the case in which both the pressure and stress boundary conditions are prescribed on a part of the boundary with positive measure, i.e. we assume that $| \partial_\sigma \Omega_S | > 0$ and $| \partial_p \Omega_D | > 0$. The cases in which one, or both, of the subproblems have pure Neumann boundary conditions are considered afterward.
In order to construct the reduced problem, we use the bilinear forms and functionals from \eqref{eq: bilinear forms} and the extension operator $\mathcal{R}$ from Section~\ref{sub:functional_setting} and define the operator $\Sigma: \Lambda \to \Lambda^*$ and $\chi \in \Lambda^*$ as
\begin{subequations}
\begin{align}
\langle \Sigma \phi, \varphi \rangle &:=
a(\bm{u}_\star^0 + \mathcal{R} \phi, \mathcal{R} \varphi) + b(\mathcal{R} \varphi, p_\star), \label{eq: def Sigma}\\
\langle \chi, \varphi \rangle &:=
f_u(\mathcal{R} \varphi) - a(\bm{u}_0^0, \mathcal{R} \varphi) - b(\mathcal{R} \varphi, p_0),
\end{align}
\end{subequations}
in which $\langle \cdot, \cdot \rangle$ denotes the duality pairing on $\Lambda^* \times \Lambda$. Here, the pair $(\bm{u}_\star^0, p_\star) \in \bm{V}^0 \times W$ satisfies
\begin{subequations} \label{eq: auxiliary problem _phi}
\begin{align}
a(\bm{u}_\star^0, \bm{v}^0) + b(\bm{v}^0, p_\star)
&= - a(\mathcal{R} \phi, \bm{v}^0),
& \forall \bm{v}^0 &\in \bm{V}^0, \\
b(\bm{u}_\star^0, w)
&= - b(\mathcal{R} \phi, w),
& \forall w &\in W,
\label{eq aux problem eq2}
\end{align}
\end{subequations}
and the pair $(\bm{u}_0^0, p_0) \in \bm{V}^0 \times W$ is defined such that
\begin{subequations} \label{eq: auxiliary problem _0}
\begin{align}
a(\bm{u}_0^0, \bm{v}^0) + b(\bm{v}^0, p_0)
&= f_u(\bm{v}^0),
& \forall \bm{v}^0 &\in \bm{V}^0, \\
b(\bm{u}_0^0, w)
&= f_p(w),
& \forall w &\in W.
\end{align}
\end{subequations}
With the above definitions, we introduce the reduced interface problem as: \\
Find $\phi \in \Lambda$ such that
\begin{align} \label{eq: poincare steklov}
\langle \Sigma \phi, \varphi \rangle &=
\langle \chi, \varphi \rangle,
& \forall \varphi &\in \Lambda.
\end{align}
Note that setting $\bm{u} := \bm{u}_\star^0 + \bm{u}_0^0 + \mathcal{R} \phi$ and $p := p_\star + p_0$ yields the solution to the original problem \eqref{eq: variational formulation}. Hence, if this problem admits a unique solution, then \eqref{eq: poincare steklov} and \eqref{eq: variational formulation} are equivalent.
Similar to the analysis of problem~\eqref{eq: variational formulation} in Section~\ref{sec:well_posedness}, we require an appropriate, parameter-dependent norm on functions $\varphi \in \Lambda$ in order to analyze \eqref{eq: poincare steklov}. Let us therefore define
\begin{align} \label{eq: norm phi}
\| \varphi \|_{\Lambda}^2 &:=
\| \mu^{\frac{1}{2}} \varphi \|_{\frac{1}{2}, \Gamma}^2
+ \| K^{-\frac{1}{2}} \varphi \|_{-\frac{1}{2}, \Gamma}^2.
\end{align}
We justify this choice by proving two bounds with respect to $\| \cdot \|_V$ in the following lemma. These results are then used in a subsequent theorem to show that $\Sigma$ is continuous and coercive with respect to $\| \cdot \|_\Lambda$.
\begin{lemma} \label{lem: norm equivalences}
Given $\phi \in \Lambda$, then the following bounds hold for any $\bm{u}^0 \in \bm{V}^0$:
\begin{align}
\| \phi \|_{\Lambda}
&\lesssim \| \bm{u}^0 + \mathcal{R} \phi \|_V, &
\| \mathcal{R} \phi \|_V &\lesssim
\| \phi \|_{\Lambda}.
\end{align}
\end{lemma}
\begin{proof}
We apply trace inequalities in $H^1(\Omega_S)$ and $H(\div, \Omega_D)$:
\begin{align*}
\| \phi \|_{\Lambda}^2
&= \| \mu^{\frac{1}{2}} \phi \|_{\frac{1}{2}, \Gamma}^2
+ \| K^{-\frac{1}{2}} \phi \|_{-\frac{1}{2}, \Gamma}^2 \nonumber\\
&\lesssim\| \mu^{\frac{1}{2}} (\bm{u}_S^0 + \mathcal{R}_S \phi) \|_{1, \Omega_S}^2
+ \| K^{-\frac{1}{2}} (\bm{u}_D^0 + \mathcal{R}_D \phi) \|_{0, \Omega_D}^2
+ \| K^{-\frac{1}{2}} \nabla \cdot (\bm{u}_D^0 + \mathcal{R}_D \phi) \|_{0, \Omega_D}^2
= \| \bm{u}^0 + \mathcal{R} \phi \|_V^2.
\end{align*}
Thus, the first inequality is shown. On the other hand, the continuity of $\mathcal{R}_i$ for $i \in \{S, D\}$ from \eqref{eq: continuity R_i} gives us
\begin{align*}
\| \mathcal{R} \phi \|_V^2
&\le \| \mu^{\frac{1}{2}} \mathcal{R}_S \phi \|_{1, \Omega_S}^2
+ \| K^{-\frac{1}{2}} \mathcal{R}_D \phi \|_{0, \Omega_D}^2
+ \| K^{-\frac{1}{2}} \nabla \cdot \mathcal{R}_D \phi \|_{0, \Omega_D}^2 \nonumber\\
&\lesssim \| \mu^{\frac{1}{2}} \phi \|_{\frac{1}{2}, \Gamma}^2
+ \| K^{-\frac{1}{2}} \phi \|_{-\frac{1}{2}, \Gamma}^2
= \| \phi \|_{\Lambda}^2.
\end{align*}
\end{proof}
\begin{theorem} \label{thm: sigma SPD}
The operator $\Sigma: \Lambda \to \Lambda^*$ is symmetric, continuous, and coercive with respect to the norm $\| \cdot \|_{\Lambda}$.
\end{theorem}
\begin{proof}
We first note that the auxiliary problem \eqref{eq: auxiliary problem _phi} is well-posed by Lemma~\ref{lem: inequalities}, Corollary~\ref{cor: infsup V0}, and saddle point theory. Moreover, the right-hand side is continuous due to \eqref{ineq: a_cont} and \eqref{ineq: b_cont}. For given $\phi$, the pair $(\bm{u}_\star^0, p_\star)$ therefore exists uniquely and satisfies
\begin{align} \label{eq: bound u_phi}
\| \bm{u}_\star^0 \|_V + \| p_\star \|_W
\lesssim
\| \mathcal{R} \phi \|_V.
\end{align}
Symmetry is considered next. Let $(\bm{u}_\varphi^0, p_\varphi)$ be the solution to \eqref{eq: auxiliary problem _phi} with data $\varphi$. By setting $(\bm{v}^0, w) = (\bm{u}_\star^0, p_\star)$ in the corresponding problem, it follows that
\begin{align*}
a(\bm{u}_\varphi^0, \bm{u}_\star^0)
+ b(\bm{u}_\star^0, p_\varphi)
+ b(\bm{u}_\varphi^0, p_\star)
&=
- a(\mathcal{R} \varphi, \bm{u}_\star^0) - b(\mathcal{R} \varphi, p_\star).
\end{align*}
Substituting this in definition \eqref{eq: def Sigma} and using the symmetry of $a$, we obtain
\begin{align}
\langle \Sigma \phi, \varphi \rangle
&= a(\mathcal{R} \phi, \mathcal{R} \varphi) + a(\bm{u}_\star^0, \mathcal{R} \varphi) + b(\mathcal{R} \varphi, p_\star)
= a(\mathcal{R} \phi, \mathcal{R} \varphi) - a(\bm{u}_\star^0, \bm{u}_\varphi^0) - b(\bm{u}_\varphi^0, p_\star)
- b(\bm{u}_\star^0, p_\varphi),
\end{align}
and symmetry of $\Sigma$ is shown.
We continue by proving continuity of $\Sigma$. Employing \eqref{ineq: a_cont} and \eqref{ineq: b_cont} once again, it follows that
\begin{align}
\langle \Sigma \phi, \varphi \rangle
\lesssim
(\| \bm{u}_\star^0 \|_V + \| \mathcal{R} \phi \|_V + \| p_\star \|_W)
\| \mathcal{R} \varphi \|_V
\lesssim
\| \mathcal{R} \phi \|_V
\| \mathcal{R} \varphi \|_V
\lesssim
\| \phi \|_\Lambda
\| \varphi \|_\Lambda,
\end{align}
in which the second and third inequalities follow from \eqref{eq: bound u_phi} and Lemma~\ref{lem: norm equivalences}, respectively.
It remains to show coercivity, which we derive by setting $\varphi = \phi$ and $(\bm{v}^0, w) = (\bm{u}_\star^0, p_\star)$ in \eqref{eq: auxiliary problem _phi}:
\begin{align}
\langle \Sigma \phi, \phi \rangle
&= a(\bm{u}_\star^0 + \mathcal{R} \phi, \mathcal{R} \phi) + b(\mathcal{R} \phi, p_\star)
= a(\bm{u}_\star^0 + \mathcal{R} \phi, \mathcal{R} \phi) + a(\bm{u}_\star^0 + \mathcal{R} \phi, \bm{u}_\star^0)
= a(\bm{u}_\star^0 + \mathcal{R} \phi, \bm{u}_\star^0 + \mathcal{R} \phi).
\end{align}
Next, we observe that \eqref{eq aux problem eq2} gives us $b(\bm{u}_\star^0 + \mathcal{R} \phi, w) = 0$ for all $w \in W$. Thus, we use \eqref{ineq: a_coercive} and Lemma~\ref{lem: norm equivalences} to conclude that
\begin{align}
\langle \Sigma \phi, \phi \rangle
&\gtrsim \| \bm{u}_\star^0 + \mathcal{R} \phi \|_V^2
\gtrsim \| \phi \|_\Lambda^2.
\end{align}
\end{proof}
\begin{corollary}
Problem \eqref{eq: poincare steklov} is well-posed and the solution $\phi \in \Lambda$ satisfies
\begin{align}
\| \phi \|_{\Lambda}
\lesssim
\| \mu^{-\frac{1}{2}} \bm{f}_S \|_{-1, \Omega_S}
+ \| K^{-\frac{1}{2}} f_D \|_{0, \Omega_D}
+ \| K^{\frac{1}{2}} g_p \|_{\frac{1}{2}, \partial_p \Omega_D}.
\end{align}
\end{corollary}
\begin{proof}
As shown in Theorem~\ref{thm: sigma SPD}, $\Sigma$ is symmetric and positive-definite. Therefore, \eqref{eq: poincare steklov} admits a unique solution. We then set $\bm{u} = \bm{u}_\star^0 + \bm{u}_0^0 + \mathcal{R} \phi$ and $p = p_\star + p_0$ and note that $(\bm{u}, p)$ is the solution to \eqref{eq: variational formulation}. By employing Lemma~\ref{lem: norm equivalences}, we note that $\| \phi \|_{\Lambda} \lesssim \| \bm{u} \|_V + \| p \|_W$ and the proof is concluded using the result from Theorem~\ref{thm: well-posedness}.
\end{proof}
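We remark that the two fractional norms in \eqref{eq: norm phi} can be realized discretely through a generalized eigenvalue problem for a stiffness and mass matrix pair on $\Gamma$; this is the same type of eigenvalue problem underlying the preconditioner mentioned in the introduction. The following Python sketch illustrates one such realization; the $P_1$ matrices on a uniform grid, the use of the Laplacian with zero boundary values as a stand-in for the $H^{1/2}_{00}(\Gamma)$ scale, and the parameter values are all assumptions made for illustration.
\begin{verbatim}
# Sketch: spectral realization of the weighted interface norm.
# Fractional powers are taken through the generalized eigenvalue
# problem A u = lambda M u on Gamma.
import numpy as np
from scipy.linalg import eigh

n = 50
h = 1.0 / (n + 1)
mu, K = 1e-3, 1e-6                          # placeholder parameters

A = (np.diag(2*np.ones(n)) - np.diag(np.ones(n-1), 1)
     - np.diag(np.ones(n-1), -1)) / h       # stiffness, zero boundary values
M = h/6 * (np.diag(4*np.ones(n)) + np.diag(np.ones(n-1), 1)
           + np.diag(np.ones(n-1), -1))     # mass matrix

lam, U = eigh(A, M)                         # eigenpairs with U^T M U = I

def norm_Lambda(phi):
    c = U.T @ (M @ phi)                     # spectral coefficients of phi
    return np.sqrt(mu * np.sum(lam**0.5 * c**2)
                   + np.sum(lam**(-0.5) * c**2) / K)

print(norm_Lambda(np.random.default_rng(1).standard_normal(n)))
\end{verbatim}
Since the eigenvectors diagonalize the stiffness and mass matrices simultaneously, the fractional powers reduce to scalings of the spectral coefficients by $\lambda_i^{\pm 1/2}$.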
\subsection{Neumann Problems}
\label{sub:neumann_cases}
In this section, we consider the case in which one, or both, of the subproblems have flux boundary conditions prescribed on the entire boundary. In other words, the cases in which $| \partial_\sigma \Omega_S | = 0$ or $| \partial_p \Omega_D | = 0$.
We first introduce the setting in which one of the two subproblems is a pure Neumann problem, followed by the case of $| \partial_\sigma \Omega_S | = | \partial_p \Omega_D | = 0$.
\subsubsection{Single Neumann Subproblem}
\label{ssub:single_neumann_problem}
Let us consider $| \partial_p \Omega_D | = 0$ and $| \partial_\sigma \Omega_S | > 0$, noting that the converse case follows by symmetry. The complication in this case is that solving the Darcy subproblem results in a pressure distribution that is defined up to a constant. Thus, several preparatory steps need to be made before the interface problem can be formulated and solved.
The key idea is to pose the interface problem on the subspace of $\Lambda$ containing functions with zero mean. This is done by introducing a function $\phi_\star$ that balances the source term in $\Omega_D$ and subtracting this from the problem. The modified interface problem produces a pressure distribution with zero mean in $\Omega_D$; the true pressure average is recovered afterward.
Let us first define the subspace $\Lambda_0 \subset \Lambda$ of functions with zero mean, i.e.
\begin{align} \label{def: Lambda_0}
\Lambda_0
:= \left\{ \varphi_0 \in \Lambda :\ (\varphi_0, 1)_{\Gamma} = 0 \right\}.
\end{align}
We continue by constructing $\phi_\star \in \Lambda \setminus \Lambda_0$. For that, we follow \cite[Sec. 5.3]{quarteroni1999domain} and introduce $\zeta \in \Lambda$ as an interface flux with non-zero mean. For convenience, we choose $\zeta$ such that
\begin{align} \label{eq: avg equal one}
(\zeta, 1)_{\Gamma} = 1.
\end{align}
Any bounded $\zeta$ with this property will suffice for our purposes.
As a concrete example, we uniquely define $\zeta$ by solving a minimization problem in $H_0^1(\Gamma)$ with \eqref{eq: avg equal one} as a constraint, similar to the construction \eqref{eq: phi constraint} in Lemma~\ref{lem: inequalities}.
Next, we test the mass conservation equation \eqref{eq: variational formulation 2nd eq} with $w = 1_{\Omega_D}$, the indicator function of $\Omega_D$. Due to the assumption $| \partial_p \Omega_D | = 0$, the divergence theorem gives us
\begin{align*}
f_p(1_{\Omega_D})
= -(f_D, 1)_{\Omega_D}
= -(\nabla \cdot (\bm{u}_D^0 + \mathcal{R}_D \phi), 1)_{\Omega_D}
= (\phi, 1)_{\Gamma}.
\end{align*}
Using this observation, we define the function $\phi_\star \in \Lambda \setminus \Lambda_0$ such that $(\phi_\star, 1)_\Gamma = (\phi, 1)_\Gamma$ by setting
\begin{align}
\phi_\star
&:= \zeta f_p(1_{\Omega_D}).
\end{align}
The next step is to pose the interface problem, similar to \eqref{eq: poincare steklov}, in this subspace: \\
Find $\phi_0 \in \Lambda_0$ such that
\begin{align} \label{eq: poincare steklov_0}
\langle \Sigma \phi_0, \varphi_0 \rangle &=
\langle \chi, \varphi_0 \rangle,
& \forall \varphi_0 &\in \Lambda_0,
\end{align}
with $\Sigma: \Lambda \to \Lambda^*$ and $\chi \in \Lambda^*$ redefined as
\begin{subequations} \label{eqs: def sigma chi 0}
\begin{align}
\langle \Sigma \phi_0, \varphi_0 \rangle
&:=
a(\bm{u}_0^0 + \mathcal{R} \phi_0, \mathcal{R} \varphi_0) + b(\mathcal{R} \varphi_0, p_0), \\
\langle \chi, \varphi_0 \rangle
&:= f_u(\mathcal{R} \varphi_0) - a(\bm{u}_\star^0 + \mathcal{R} \phi_\star, \mathcal{R} \varphi_0) - b(\mathcal{R} \varphi_0, p_\star).
\end{align}
\end{subequations}
The construction of the pairs $(\bm{u}_\star^0, p_\star)$ and $(\bm{u}_0^0, p_0)$ now requires solving the Darcy subproblem with pure Neumann conditions. We emphasize that due to the nature of these problems, the pressure distributions are defined up to a constant; we therefore enforce $p_\star$ and $p_0$ to have mean zero with the use of Lagrange multipliers.
In particular, let $(\bm{u}_0^0, p_0, r_0) \in \bm{V}^0 \times W \times \mathbb{R}$ satisfy the following:
\begin{subequations} \label{eqs: aux problem u0 p0}
\begin{align}
a(\bm{u}_0^0, \bm{v}^0) + b(\bm{v}^0, p_0)
&= -a(\mathcal{R} \phi_0, \bm{v}^0),
& \forall \bm{v}^0 &\in \bm{V}^0, \\
(r_0, w)_{\Omega_D} + b(\bm{u}_0^0, w)
&= -b(\mathcal{R} \phi_0, w),
& \forall w &\in W, \label{eq: conservation aux 0}\\
(p_0, s)_{\Omega_D} &= 0,
& \forall s &\in \mathbb{R}.
\end{align}
\end{subequations}
Similarly, we let $(\bm{u}_\star^0, p_\star, r_\star) \in \bm{V}^0 \times W \times \mathbb{R}$ solve
\begin{subequations} \label{eqs: aux problem u* p*}
\begin{align}
a(\bm{u}_\star^0, \bm{v}^0) + b(\bm{v}^0, p_\star)
&= -a(\mathcal{R} \phi_\star, \bm{v}^0) + f_u(\bm{v}^0),
& \forall \bm{v}^0 &\in \bm{V}^0, \\
(r_\star, w)_{\Omega_D} + b(\bm{u}_\star^0, w)
&= - b(\mathcal{R} \phi_\star, w) + f_p(w),
& \forall w &\in W, \label{eq: conservation aux *}\\
(p_\star, s)_{\Omega_D} &= 0,
& \forall s &\in \mathbb{R}.
\end{align}
\end{subequations}
We emphasize that setting $w = 1_{\Omega_D}$ in the conservation equations \eqref{eq: conservation aux 0} and \eqref{eq: conservation aux *} yields $r_\star = r_0 = 0$. Hence, these terms have no contribution to the mass balance.
The solution to problem \eqref{eq: poincare steklov_0} allows us to construct the velocity distribution:
\begin{align}
\bm{u} := \bm{u}_0^0 + \bm{u}_\star^0 + \mathcal{R} (\phi_0 + \phi_\star). \label{eq: reconstructed flux}
\end{align}
The next step is to recover the correct pressure average in $\Omega_D$. For that, we presume that the pressure solution is given by $p = p_0 + p_\star + \bar{p}$ with $\bar{p} := c_D 1_{\Omega_D}$ for some $c_D \in \mathbb{R}$. In other words, $\bar{p}$ is zero in $\Omega_S$ and a constant $c_D$ on $\Omega_D$, to be determined next.
Using $\zeta$ from \eqref{eq: avg equal one}, we substitute this ansatz for the pressure in \eqref{eq: variational formulation 1st eq} and choose the test function $\bm{v} = \mathcal{R} \zeta$:
\begin{align*}
a(\bm{u}, \mathcal{R} \zeta) + b(\mathcal{R} \zeta, p_0 + p_\star + \bar{p}) = f_u(\mathcal{R} \zeta).
\end{align*}
Using this relationship and the divergence theorem, we make the following two observations:
\begin{subequations} \label{eq: def c_D}
\begin{align}
b(\mathcal{R} \zeta, \bar{p})
&=
f_u(\mathcal{R} \zeta)
- a(\bm{u}, \mathcal{R} \zeta) - b(\mathcal{R} \zeta, p_0 + p_\star)
= \langle \chi - \Sigma \phi_0, \zeta \rangle, \\
b(\mathcal{R} \zeta, \bar{p})
&= - (\nabla \cdot \mathcal{R} \zeta, \bar{p})_{\Omega_D}
= c_D(\zeta, 1)_\Gamma
= c_D.
\end{align}
\end{subequations}
Combining these two equations yields $c_D = \langle \chi - \Sigma \phi_0, \zeta \rangle$ and we set
\begin{align}\label{eq: def p bar single}
\bar{p} := \langle \chi - \Sigma \phi_0, \zeta \rangle 1_{\Omega_D}.
\end{align}
Finally, by setting
$p := p_0 + p_\star + \bar{p}$,
we have obtained $(\bm{u}, p) \in \bm{V} \times W$, the solution to \eqref{eq: variational formulation}. We remark that the well-posedness of \eqref{eq: poincare steklov_0} follows by the same arguments as in \Cref{thm: sigma SPD}.
\subsubsection{Coupled Neumann Problems}
\label{ssub:coupled_neumann_problems}
In this case, we have flux conditions prescribed on the entire boundary, i.e. $\partial \Omega = \partial_u \Omega_S \cup \partial_u \Omega_D$.
We follow the same steps as in Section~\ref{ssub:single_neumann_problem}, and highlight the differences required to treat this case.
Let us consider a slightly more general case than \eqref{eq: bilinear forms} by including a source function $f_S$ in the Stokes subdomain. In other words, the right-hand side of the mass balance equations is given by
\begin{align}
f_p(w) := -(f_S, w_S)_{\Omega_S} -(f_D, w_D)_{\Omega_D}.
\end{align}
By compatibility of the source function with the boundary conditions, it follows that $f_p(1) = 0$ and therefore
\begin{align}
f_p(1_{\Omega_D})
= (f_D, 1)_{\Omega_D}
= -(f_S, 1)_{\Omega_S}
= - f_p(1_{\Omega_S}).
\end{align}
Let $\Lambda_0 \subset \Lambda$ be defined as in \eqref{def: Lambda_0} and $\zeta$ as in \eqref{eq: avg equal one}. Using the same arguments as in the previous section, we define
\begin{align}
\phi_\star
&:= \zeta f_p(1_{\Omega_D})
= - \zeta f_p(1_{\Omega_S}).
\end{align}
The operators $\Sigma$ and $\chi$ are defined as in \eqref{eqs: def sigma chi 0} with the only difference being in the pairs of functions $(\bm{u}_0^0, p_0)$ and $(\bm{u}_\star^0, p_\star)$. As before, these pairs are constructed by solving the separate subproblems. Since these correspond to Neumann problems, it follows that the pressure distributions $p_0$ and $p_\star$ are defined up to a constant in each subdomain.
We therefore enforce zero mean of these variables in each subdomain with the use of a Lagrange multiplier $s \in S$. Let $S$ be the space of piecewise constant functions given by
\begin{align}
S := \operatorname{span}\{ 1_{\Omega_S}, 1_{\Omega_D} \}.
\end{align}
We augment problem \eqref{eqs: aux problem u0 p0} to: \\
Find $(\bm{u}_0^0, p_0, r_0) \in \bm{V}^0 \times W \times S$ such that
\begin{subequations}
\begin{align}
a(\bm{u}_0^0, \bm{v}^0) + b(\bm{v}^0, p_0)
&= -a(\mathcal{R} \phi_0, \bm{v}^0),
& \forall \bm{v}^0 &\in \bm{V}^0, \\
(r_0, w)_{\Omega} + b(\bm{u}_0^0, w)
&= -b(\mathcal{R} \phi_0, w),
& \forall w &\in W,\\
(p_0, s)_{\Omega} &= 0,
& \forall s &\in S.
\end{align}
\end{subequations}
Problem \eqref{eqs: aux problem u* p*} is changed analogously to produce $(\bm{u}_\star^0, p_\star, r_\star) \in \bm{V}^0 \times W \times S$. After solving the interface problem, all ingredients are available to construct the velocity $\bm{u}$ as in \eqref{eq: reconstructed flux}.
In the construction of the pressure $p$, we compute $c_D = \langle \chi - \Sigma \phi_0, \zeta \rangle$ using the same arguments as in \eqref{eq: def c_D}. Since the pressure is globally defined up to a constant, we ensure that the pressure distribution has mean zero on $\Omega$ by setting
\begin{align} \label{eq: def p bar pure}
\bar{p} := \langle \chi - \Sigma \phi_0, \zeta \rangle \left(
1_{\Omega_D} - \frac{| \Omega_D |}{| \Omega |}
\right).
\end{align}
As before, we set
$p := p_0 + p_\star + \bar{p}$ and obtain the solution $(\bm{u}, p)$ of the original problem \eqref{eq: variational formulation}.
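As a small illustration of \eqref{eq: def p bar single} and \eqref{eq: def p bar pure}, the following Python snippet computes the correction $\bar{p}$ on a piecewise constant pressure space and verifies that the variant \eqref{eq: def p bar pure} has zero mean on $\Omega$; the cell volumes, the indicator of $\Omega_D$, and the value of $c_D$ are placeholder inputs.
\begin{verbatim}
# Sketch: the pressure mean corrections on a cellwise constant
# pressure space. All inputs are placeholders.
import numpy as np

vol  = np.array([0.2, 0.3, 0.1, 0.25, 0.15])   # cell volumes, |Omega| = 1
in_D = np.array([0, 0, 1, 1, 1], dtype=float)  # indicator of Omega_D
c_D  = 2.5                                     # <chi - Sigma phi_0, zeta>

p_bar_single = c_D * in_D                      # single Neumann subproblem
p_bar_pure = c_D * (in_D - vol @ in_D / vol.sum())  # coupled Neumann case

print("mean over Omega:", vol @ p_bar_pure)    # zero up to round-off
\end{verbatim}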
\section{Discretization}
\label{sec:discretization}
This section presents the discretization of problem \eqref{eq: variational formulation} with the use of the Mixed Finite Element method. By introducing the interface flux as a separate variable, we derive a mortar method reminiscent of \cite{boon2018robust,nordbotten2019unified}, presented in the context of fracture flow. The focus in this section is to introduce this flux-mortar method for the coupled Stokes-Darcy problem and show its stability.
Let $\Omega_{S, h}$, $\Omega_{D, h}$, $\Gamma_h$ be shape-regular tessellations of $\Omega_S$, $\Omega_D$, and $\Gamma$, respectively. Let $\Omega_{S, h}$ and $\Omega_{D, h}$ be constructed independently and consist of simplicial or quadrangular (hexahedral in 3D) elements. Similarly, $\Gamma_h$ is a simplicial or quadrangular mesh of dimension $n - 1$, constructed according to the restrictions mentioned below.
We impose the following three restrictions on the Mixed Finite Element discretization:
\begin{enumerate}
\item
For the purpose of structure preservation, the finite element spaces are chosen such that
\begin{subequations} \label{eq: inclusions}
\begin{align}
\bm{V}_{S, h}
&\subset \bm{V}_S, &
\bm{V}_{D, h}
&\subset \bm{V}_D, &
\Lambda_h
&\subset \Lambda, \\
W_{S, h}
&\subset W_S, &
W_{D, h}
&\subset W_D.
\end{align}
\end{subequations}
It is convenient, but not necessary, to define $\Gamma_h$ as the trace mesh of $\Omega_{S, h}$ and $\Lambda_h = (\bm{n} \cdot \bm{V}_{S, h})|_\Gamma$. In this case, it follows that $\Lambda_h \subset H_0^1(\Gamma) \subset H_{00}^{1/2}(\Gamma) = \Lambda$. We moreover define
\begin{align}
\bm{V}_{S, h}^0
&:= \bm{V}_{S, h} \cap \bm{V}_S^0, &
\bm{V}_{D, h}^0
&:= \bm{V}_{D, h} \cap \bm{V}_D^0.
\end{align}
\item
The Mixed Finite Element spaces $\bm{V}_{i, h} \times W_{i, h}$ with $i \in \{S, D\}$ are chosen to form stable pairs for the Stokes and Darcy (sub)systems, respectively.
In particular, bounded interpolation operators $\Pi_{V_i}$ exist for $i \in \{S, D\}$ such that for all $\bm{v} \in \bm{V} \cap H^\epsilon(\Omega)$ with $\epsilon > 0$:
\begin{align} \label{eq: commutative}
\Pi_{W_S} \nabla \cdot ((I - \Pi_{V_S}) \bm{v}_S) &= 0, &
\nabla \cdot (\Pi_{V_D} \bm{v}_D) &= \Pi_{W_D} \nabla \cdot \bm{v}_D,
\end{align}
in which $\Pi_{W_i}$ is the $L^2$-projection onto $W_{i, h}$.
Moreover, to ensure local mass conservation, we assume that the space of piecewise constants $(P_0)$ is contained in $W_h$.
Examples for the Stokes subproblem include $\bm{P}_2-P_0$ in the two-dimensional case as well as the Bernardi-Raugel pair \cite{bernardi1985analysis}.
For the Darcy subproblem, stable choices of low-order finite elements include the Raviart-Thomas pair $RT_0-P_0$ \cite{raviart1977mixed} and the Brezzi-Douglas-Marini pair $BDM_1-P_0$ \cite{brezzi1985two}. For more examples of stable Mixed Finite Element pairs, we refer the reader to \cite{boffi2013mixed}.
\item
For $\phi_h \in \Lambda_h$, let the discrete extension operators $\mathcal{R}_{i, h}: \Lambda_h \to \bm{V}_{i, h}$ with $i \in \{S, D\}$ satisfy
\begin{align} \label{eq: extension property h}
(\phi_h - \bm{n} \cdot \mathcal{R}_{i, h} \phi_h, \bm{n} \cdot \bm{v}_{i, h} )_{\Gamma} &= 0, & \forall \bm{v}_{i, h} &\in \bm{V}_{i, h}.
\end{align}
The extension operators are continuous in the sense that for $\varphi_h \in \Lambda_h$, we have
\begin{align} \label{eq: continuity R_h}
\| \mathcal{R}_{S, h} \varphi_h \|_{1, \Omega_S} &\lesssim \| \varphi_h \|_{\frac{1}{2}, \Gamma}, &
\| \mathcal{R}_{D, h} \varphi_h \|_{0, \Omega_D} + \| \nabla \cdot \mathcal{R}_{D, h} \varphi_h \|_{0, \Omega_D} &\lesssim \| \varphi_h \|_{-\frac{1}{2}, \Gamma}.
\end{align}
We define $\mathcal{R}_h := \mathcal{R}_{S, h} \oplus \mathcal{R}_{D, h}$. Let the mesh $\Gamma_h$ and function space $\Lambda_h$ be chosen such that the kernel of $\mathcal{R}_h$ is zero:
\begin{align}
\mathcal{R}_h \phi_h = 0 \text{ if and only if } \phi_h = 0.
\end{align}
We remark that this is a common restriction encountered in mortar methods (see e.g. \cite{arbogast2000mixed}) and can be satisfied by choosing $\Gamma_h$ sufficiently coarse or constructing $\Lambda_h$ using polynomials of lower order; a numerical check of this condition is sketched after this list.
\end{enumerate}
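A direct numerical check of the kernel condition above proceeds as follows: given a matrix representation of $\mathcal{R}_h$, a random tall placeholder matrix in the Python sketch below, the kernel is trivial precisely when the smallest singular value is positive. Choosing $\Gamma_h$ coarser corresponds to fewer mortar columns relative to the number of subdomain degrees of freedom.
\begin{verbatim}
# Sketch: check that the discrete extension operator has a
# trivial kernel by inspecting its smallest singular value.
import numpy as np

rng = np.random.default_rng(0)
n_dofs, n_mortar = 40, 12           # placeholder dimensions
R_h = rng.standard_normal((n_dofs, n_mortar))

smin = np.linalg.svd(R_h, compute_uv=False).min()
print("kernel is trivial:", smin > 1e-12)
\end{verbatim}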
With the above restrictions in place, we define the discretizations of the combined spaces $\bm{V}$ and $W$ as
\begin{subequations}
\begin{align}
\bm{V}_h &:= \left\{ \bm{v}_h :\
\exists (\bm{v}_{S, h}^0, \varphi_h, \bm{v}_{D, h}^0) \in \bm{V}_{S, h}^0 \times \Lambda_h \times \bm{V}_{D, h}^0
\text{ such that } \bm{v}_h|_{\Omega_i} = \bm{v}_{i, h}^0 + \mathcal{R}_{i, h} \varphi_h, \text{ for } i \in \{S, D\} \right\}
, \\
W_h &:= W_{S, h} \times W_{D, h}.
\end{align}
\end{subequations}
As in the continuous case, the function space $\bm{V}_h$ is independent of the choice of extension operators. We remark that in the case of non-matching grids or if different polynomial orders are chosen for $\bm{V}_{S, h}$ and $\bm{V}_{D, h}$, we have $\bm{V}_h \not\subset \bm{V}$ due to the weaker property of $\mathcal{R}_h$ in \eqref{eq: extension property h} as opposed to $\mathcal{R}$ defined by \eqref{eq: extension property}. Nevertheless, normal flux continuity is imposed in the sense that the normal traces of $\bm{v}_{S, h}$ and $\bm{v}_{D, h}$ are $L^2$ projections of a single variable $\phi_h$.
Again, the subscript $i \in \{S, D\}$ distinguishes the restrictions of $(\bm{v}, w) \in \bm{V}_h \times W_h$ to the different subdomains:
\begin{align}
\bm{v}_{i, h} &:= \bm{v}_{i, h}^0 + \mathcal{R}_{i, h} \varphi_h \in \bm{V}_{i, h}, &
w_{i, h} &:= w_h|_{\Omega_i} \in W_{i, h}.
\end{align}
We finish this section by formally stating the discrete problem: \\
Find the pair $(\bm{u}_h, p_h) \in \bm{V}_h \times W_h$ such that
\begin{subequations} \label{eq: variational formulation_h}
\begin{align}
a(\bm{u}_h, \bm{v}_h) + b(\bm{v}_h, p_h) &= f_u(\bm{v}_h),
& \forall \bm{v}_h &\in \bm{V}_h, \\
b(\bm{u}_h, w_h) &= f_p(w_h),
& \forall w_h &\in W_h.
\end{align}
\end{subequations}
with the bilinear forms and functionals defined in \eqref{eq: bilinear forms}.
With the chosen spaces, the discretizations on $\Omega_{S, h}$ and $\Omega_{D, h}$ are stable and consistent for the Stokes and Darcy subproblems, respectively. However, in order to show stability of the method for the fully coupled problem \eqref{eq: variational formulation_h}, we briefly confirm that the relevant inequalities hold independent of the mesh parameter $h$.
\begin{lemma} \label{lem: inequalities_h}
The following inequalities are satisfied:
\begin{subequations}
\begin{align}
& \text{For } \bm{u}_h, \bm{v}_h \in \bm{V}_h:
& a(\bm{u}_h, \bm{v}_h) &\lesssim \| \bm{u}_h \|_V \| \bm{v}_h \|_V. \label{ineq: a_cont_h}\\
& \text{For } (\bm{v}_h, w_h) \in \bm{V}_h \times W_h:
& b(\bm{v}_h, w_h) &\lesssim \| \bm{v}_h \|_V \| w_h \|_W. \label{ineq: b_cont_h}\\
& \text{For } \bm{v}_h \in \bm{V}_h \text{ with } b(\bm{v}_h, w_h) = 0 \ \forall w_h \in W_h:
& a(\bm{v}_h, \bm{v}_h) &\gtrsim \| \bm{v}_h \|_V^2. \label{ineq: a_coercive_h}\\
& \text{For } w_h \in W_h, \ \exists \bm{v}_h \in \bm{V}_h \text{ with } \bm{v}_h \ne 0, \text{ such that}:
& b(\bm{v}_h, w_h) &\gtrsim \| \bm{v}_h \|_V \| w_h \|_W.\label{ineq: b_infsup_h}
\end{align}
\end{subequations}
\end{lemma}
\begin{proof}
Inequalities \eqref{ineq: a_cont_h} and \eqref{ineq: b_cont_h} follow using the same arguments as \eqref{ineqs: continuity} in \Cref{lem: inequalities}. Continuing with \eqref{ineq: a_coercive_h}, we note from \eqref{eq: commutative} that $b(\bm{v}_h, w_h) = 0$ implies
\begin{align*}
0 = \Pi_{W_D} \nabla \cdot \bm{v}_{D, h} = \nabla \cdot (\Pi_{V_D} \bm{v}_{D, h}) = \nabla \cdot \bm{v}_{D, h}.
\end{align*}
Hence, the same derivation as \eqref{eq: proof 3.5c} is followed to give us \eqref{ineq: a_coercive_h}.
For the final inequality, we consider $w_h \in W_h$ given and follow the strategy of \Cref{lem: inequalities}. First, we set up a minimization problem in $\Lambda_h$ analogous to \eqref{eq: phi constraint} to obtain a bounded $\phi_h \in \Lambda_h$ such that
\begin{subequations}
\begin{align}
\| \phi_h \|_{1, \Gamma} &\lesssim \| K w_{D, h} \|_{0, \Omega_D}, \\
(\nabla \cdot \mathcal{R}_{D, h} \phi_h, 1)_{\Omega_D} &=
(- \bm{n} \cdot \mathcal{R}_{D, h} \phi_h, 1)_{\Gamma} =
(- \phi_h, 1)_{\Gamma} =
- (K w_{D, h}, 1)_{\Omega_D}. \label{eq: compatibility of phi_h}
\end{align}
\end{subequations}
Next, we note that $w_h \in W_h \subset W$. In turn, we use the auxiliary problems \eqref{eqs: aux prob p_S} and \eqref{eqs: aux prob p_D} to construct $\bm{v}_S^0 \in \bm{V}_S^0$ and $\bm{v}_D^0 \in \bm{V}_D^0$ such that
\begin{subequations}
\begin{align}
- \nabla \cdot \bm{v}_S^0 &= \mu^{-1} w_{S, h} + \nabla \cdot \mathcal{R}_{S, h} \phi_h, \\
- \nabla \cdot \bm{v}_D^0 &= K w_{D, h} + \nabla \cdot \mathcal{R}_{D, h} \phi_h, \\
\| \bm{v}_S^0 \|_{1, \Omega_S} &\lesssim
\| \mu^{-1} w_{S, h} \|_{0, \Omega_S} + \| \nabla \cdot \mathcal{R}_{S, h} \phi_h \|_{0, \Omega_S}, \\
\| \bm{v}_D^0 \|_{1, \Omega_D} &\lesssim
\| K w_{D, h} \|_{0, \Omega_D} + \| \nabla \cdot \mathcal{R}_{D, h} \phi_h \|_{0, \Omega_D}.
\end{align}
\end{subequations}
We then employ the interpolation operators from \eqref{eq: commutative} to create $\bm{v}_{S, h}^0 = \Pi_{V_S} \bm{v}_S^0$ and $\bm{v}_{D, h}^0 = \Pi_{V_D}\bm{v}_D^0$. Using the commutative properties, we obtain
\begin{align*}
b(\bm{v}_h, w_h) &
= - \sum_{i \in \{S, D\}} (\nabla \cdot (\Pi_{V_i} \bm{v}_i^0 + \mathcal{R}_{i, h} \phi_h), w_{i, h})_{\Omega_i}
= (\mu^{-1} w_{S, h}, w_{S, h})_{\Omega_S} + (K w_{D, h}, w_{D, h})_{\Omega_D}
= \| w_h \|_W^2.
\end{align*}
Moreover, by the boundedness of these interpolation operators, we have
\begin{align*}
\| \bm{v}_h \|_V
\le \| \bm{v}_h^0 \|_V + \| \mathcal{R}_h \phi_h \|_V
\lesssim \| \bm{v}^0 \|_V + \| \phi_h \|_\Lambda
\lesssim \| w_h \|_W,
\end{align*}
proving the final inequality \eqref{ineq: b_infsup_h}.
\end{proof}
\begin{theorem}
If the three conditions presented at the beginning of this section are satisfied, then the discretization method is stable, i.e. a unique solution $(\bm{u}_h, p_h) \in \bm{V}_h \times W_h$ exists for \eqref{eq: variational formulation_h} satisfying
\begin{align}
\| \bm{u}_h \|_V + \| p_h \|_W
\lesssim
\| \mu^{-\frac{1}{2}} \bm{f}_S \|_{-1, \Omega_S}
+ \| K^{-\frac{1}{2}} f_D \|_{0, \Omega_D}
+ \| K^{\frac{1}{2}} g_p \|_{\frac{1}{2}, \partial_p \Omega_D}.
\end{align}
\end{theorem}
\begin{proof}
This result follows from \Cref{lem: inequalities_h}, the continuity of the right-hand side from \Cref{thm: well-posedness}, and saddle point theory.
\end{proof}
\section{Iterative Solution Method}
\label{sec:iterative_solvers}
With well-posedness of the discrete system shown in the previous section, we continue by constructing an efficient iterative method for the coupled system. The scheme is introduced in three steps. We first present the discrete Steklov-Poincar\'e system that we aim to solve using a Krylov subspace method. Second, a parameter-robust preconditioner is introduced for the reduced system. The third step combines these two ideas to form an iterative method that respects mass conservation at each iteration.
\subsection{Discrete Steklov-Poincar\'e System}
\label{sub:discrete_poincar}
Similar to the continuous case in Section~\ref{sec:the_steklov_poincare_system}, we reduce the problem to the interface flux variable $\phi_h \in \Lambda_h$. The reduced system is a direct translation of \eqref{eq: poincare steklov} to the discrete setting: \\
Find $\phi_h \in \Lambda_h$ such that
\begin{align} \label{eq: poincare steklov_h}
\left \langle \Sigma_h \phi_h, \varphi_h \right \rangle &= \left \langle \chi_h, \varphi_h \right \rangle, &
\forall \varphi_h &\in \Lambda_h.
\end{align}
To ease the implementation of the scheme, we focus particularly on the structure of the operator $\Sigma_h$. For that, we note that the space $\bm{V}_h^0$ can be decomposed orthogonally into $\bm{V}_{S, h}^0 \times \bm{V}_{D, h}^0$ and a similar decomposition holds for $W_h$. The aim is to propose a solution method which exploits this property. For brevity, the subscript $h$ is omitted on all variables and operators, keeping in mind that the remainder of this section concerns the discretized setting.
Let us rewrite the bilinear forms $a$ and $b$ in terms of duality pairings, thereby revealing the matrix structure of the problem. For that, we group the terms according to the subdomains and let the operators $A_i: \bm{V}_{i, h} \to \bm{V}_{i, h}^*$ and $B_i:\bm{V}_{i, h} \to W_{i, h}^*$ be defined for $i \in \{S, D\}$ such that
\begin{align*}
\left \langle A_S \bm{u}_S, \bm{v}_S \right \rangle
&= (\mu \varepsilon(\bm{u}_S), \varepsilon(\bm{v}_S))_{\Omega_S}
+ (\beta \bm{\tau} \cdot \bm{u}_S, \bm{\tau} \cdot \bm{v}_S )_{\Gamma}, \\
\left \langle A_D \bm{u}_D, \bm{v}_D \right \rangle
&= (K^{-1} \bm{u}_D, \bm{v}_D)_{\Omega_D}, \\
\left \langle B_S \bm{u}_S, w_S \right \rangle
&= -(\nabla \cdot \bm{u}_S, w_S)_{\Omega_S}, \\
\left \langle B_D \bm{u}_D, w_D \right \rangle
&= -(\nabla \cdot \bm{u}_D, w_D)_{\Omega_D}.
\end{align*}
Let $A_{i, 0}$ and $B_{i, 0}$ be the respective restrictions of the above to the subspace $\bm{V}_{i, h}^0$.
With these operators and the decomposition $\bm{u}_i = \bm{u}_i^0 + \mathcal{R}_i \phi$ for the trial and test functions, problem \eqref{eq: variational formulation} attains the following matrix form:
\begin{align} \label{eq: matrix form}
\begin{bmatrix}
A_{S, 0} & B_{S, 0}^T & & & A_S \mathcal{R}_S \\[5pt]
B_{S, 0} & & & & B_S \mathcal{R}_S \\[5pt]
& & A_{D, 0} & B_{D, 0}^T & A_D \mathcal{R}_D \\[5pt]
& & B_{D, 0} & & B_D \mathcal{R}_D \\[5pt]
(A_S \mathcal{R}_S)^T & (B_S \mathcal{R}_S)^T & (A_D \mathcal{R}_D)^T & (B_D \mathcal{R}_D)^T & \sum_i \mathcal{R}_i^T A_i \mathcal{R}_i
\end{bmatrix}
\begin{bmatrix}
\bm{u}_S^0 \\[5pt]
p_S \\[5pt]
\bm{u}_D^0 \\[5pt]
p_D \\[5pt]
\phi
\end{bmatrix}
&=
\begin{bmatrix}
f_{S, u} \\[5pt]
f_{S, p} \\[5pt]
f_{D, u} \\[5pt]
f_{D, p} \\[5pt]
f_{\phi}
\end{bmatrix}.
\end{align}
In practice, the discrete extension operators are chosen to only have support in the elements adjacent to $\Gamma$, leading to a favorable sparsity pattern.
We moreover note that the final row corresponds to test functions $\varphi \in \Lambda_h$.
The right-hand side of \eqref{eq: matrix form} is defined such that
\begin{align}
\left \langle f_{S, u}, \bm{v}_S^0 \right \rangle
+ \left \langle f_{S, p}, w_S \right \rangle
+ \left \langle f_{D, u}, \bm{v}_D^0 \right \rangle
+ \left \langle f_{D, p}, w_D \right \rangle
+ \left \langle f_{\phi}, \varphi \right \rangle
&=
f_u(\bm{v}) + f_p(w), &
\forall (\bm{v}, w) &\in \bm{V}_h \times W_h.
\end{align}
The discrete Steklov-Poincar\'e system is obtained by taking the Schur complement of this system with respect to the interface variable. In particular, we obtain $\Sigma_h$ and the right-hand side $\chi_h$ as
\begin{subequations}
\begin{align}
\label{eq: def sigma_h}
\Sigma_h &:= \sum_{i \in \{S, D\} }
\mathcal{R}_i^T A_i \mathcal{R}_i
- \mathcal{R}_i^T [A_i \ B_i^T]
\begin{bmatrix}
A_{i, 0} & B_{i, 0}^T \\[5pt]
B_{i, 0} & \end{bmatrix}^{-1}
\begin{bmatrix}
A_i \\[5pt]
B_i
\end{bmatrix}
\mathcal{R}_i, \\
\chi_h &:= f_{\phi}
- \sum_{i \in \{S, D\} } \mathcal{R}_i^T [A_i \ B_i^T]
\begin{bmatrix}
A_{i,0} & B_{i, 0}^T \\[5pt]
B_{i, 0} & \end{bmatrix}^{-1}
\begin{bmatrix}
f_{i, u} \\[5pt]
f_{i, p}
\end{bmatrix}.
\label{eq: def chi_h}
\end{align}
\end{subequations}
We employ a Krylov subspace method to solve \eqref{eq: poincare steklov_h} iteratively, thereby avoiding the computationally costly assembly of $\Sigma_h$. In order to obtain a parameter-robust iterative method, the next step is to introduce an appropriate preconditioner, as presented in the next section.
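To make the matrix-free application concrete, the following minimal Python/SciPy sketch applies one subdomain term of \eqref{eq: def sigma_h}; all matrix names are illustrative placeholders, and the subdomain saddle-point block is factorized once so that each matrix-vector product costs a single subdomain solve.
\begin{verbatim}
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def schur_action(A0, B0, AR, BR, RtAR):
    # A0, B0: restrictions A_{i,0}, B_{i,0} to the subspace V_{i,h}^0
    # AR, BR: A_i R_i and B_i R_i; RtAR: R_i^T A_i R_i
    n0 = A0.shape[0]
    K = sp.bmat([[A0, B0.T], [B0, None]], format="csc")
    lu = spla.splu(K)  # factorize the saddle-point block once

    def apply(phi):
        sol = lu.solve(np.concatenate([AR @ phi, BR @ phi]))
        v0, w = sol[:n0], sol[n0:]
        # R_i^T A_i R_i phi - R_i^T [A_i  B_i^T] (v0, w)
        return RtAR @ phi - (AR.T @ v0 + BR.T @ w)
    return apply

# Sigma_h phi = schur_S(phi) + schur_D(phi), summing both subdomains.
\end{verbatim}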
\subsection{Parameter-Robust Preconditioning}
\label{sub:parameter_robust_preconditioning}
In this section, we construct the preconditioner such that the resulting iterative method is robust with respect to the material parameters ($K$ and $\mu$) and the mesh size ($h$). For that, we use the parameter-dependent norm $\| \cdot \|_\Lambda$ from \eqref{eq: norm phi} and follow the framework presented by \cite{mardal2011preconditioning} to form a norm-equivalent preconditioner. In particular, we use the following result from that work:
\begin{lemma}
Given a bounded, symmetric, positive-definite operator $\Sigma: \Lambda \to \Lambda^*$
and a symmetric positive definite operator $\mathcal{P}: \Lambda^* \to \Lambda$. If the induced norm $\| \phi \|_{\mathcal{P}^{-1}}^2 := \left \langle \mathcal{P}^{-1} \phi, \phi \right \rangle$ satisfies
\begin{align} \label{eq: norm equivalence precond}
\| \phi \|_{\Lambda}^2 \lesssim
\| \phi \|_{\mathcal{P}^{-1}}^2 \lesssim
\| \phi \|_{\Lambda}^2,
\end{align}
then $\mathcal{P}$ is a robust preconditioner in the sense that the condition number of $\mathcal{P} \Sigma$ is bounded independently of the material and mesh parameters.
\end{lemma}
Note that symmetry of $\Sigma_h$ is apparent from \eqref{eq: def sigma_h}. Positive definiteness follows using the same arguments as in \Cref{thm: sigma SPD}. The next step is therefore to create an operator $\mathcal{P}^{-1}$ that generates a norm which is equivalent to $\| \cdot \|_\Lambda$ on $\Lambda_h$. Recall from \eqref{eq: norm phi} that $\| \cdot \|_\Lambda$ is composed of fractional Sobolev norms. The key idea is to introduce a matrix $\mathsf{H}(s)$ that induces a norm which is equivalent to $H^s(\Gamma)$ for $s = \pm \frac{1}{2}$. We apply the strategy explained in \cite{kuchta2016preconditioners} to achieve this, of which a short description follows.
For a given basis $\{\phi_i\}_{i = 1}^{n_\Lambda} \subset \Lambda_h$ with $n_\Lambda$ the dimension of $\Lambda_h$, let the mass matrix $\mathsf{M}$ and stiffness matrix $\mathsf{A}$ be defined such that
\begin{align}
\mathsf{M}_{ij} &:= ( \phi_j, \phi_i )_{\Gamma}, &
\mathsf{A}_{ij} &:= ( \nabla_\Gamma \phi_j, \nabla_\Gamma \phi_i )_{\Gamma}.
\end{align}
Then, a complete set of eigenvectors $\mathsf{v}_i \in \mathbb{R}^{n_\Lambda}$ and eigenvalues $\lambda_i \in \mathbb{R}$ exist solving the generalized eigenvalue problem
\begin{align} \label{eq: generalized eigenvalue problem}
\mathsf{A} \mathsf{v}_i = \lambda_i \mathsf{M} \mathsf{v}_i.
\end{align}
The eigenvectors satisfy $\mathsf{v}_i^\mathsf{T} \mathsf{M} \mathsf{v}_j = \delta_{ij}$ with $\delta_{ij}$ the Kronecker delta function. Let the diagonal matrix $\mathsf{\Lambda} := \operatorname{diag}([\lambda_i]_{i = 1}^{n_\Lambda})$ and let $\mathsf{V}$ be the matrix with $\mathsf{v}_i$ as its columns. The following eigendecomposition then holds:
\begin{align}
\mathsf{A} = \mathsf{(MV) \Lambda (MV)^T}.
\end{align}
Using the matrices $\mathsf{M}$, $\mathsf{V}$, and $\mathsf{\Lambda}$, we define the operator $\mathsf{H}: \mathbb{R} \to \mathbb{R}^{n_\Lambda \times n_\Lambda}$ as
\begin{align}
\mathsf{H}(s) = \mathsf{(MV) \Lambda}^s \mathsf{(MV)^T}.
\end{align}
An advantage of this construction is that its inverse can be directly computed as $\mathsf{H}(s)^{-1} = \mathsf{V \Lambda}^{-s} \mathsf{V^T}$ due to $\mathsf{V^TMV = I}$. Next, we emphasize that $\mathsf{H}(0) = \mathsf{M}$ and $\mathsf{H}(1) = \mathsf{A}$, i.e. the discrete $L^2(\Gamma)$ and $H^1(\Gamma)$ norms on $\Lambda_h$ are generated for $s = 0$ and $s = 1$, respectively. As a generalization, the norm induced by the matrix $\mathsf{H}(s)$ is equivalent to the $H^s(\Gamma)$ norm on the discrete space $\Lambda_h$ \cite{kuchta2016preconditioners}. In other words,
\begin{align}
\| \phi \|_{s, \Gamma}^2
\lesssim (\pi \phi)^T \mathsf{H}(s) (\pi \phi)
\lesssim \| \phi \|_{s, \Gamma}^2,
\end{align}
in which $\pi$ is the representation operator in the basis $\{\phi_i\}_{i = 1}^{n_\Lambda}$.
Next, we use these tools to define our preconditioner following the strategy of \cite{mardal2011preconditioning}. The operator $\mathsf{P}^{-1}: \mathbb{R}^{n_{\Lambda}} \to \mathbb{R}^{n_{\Lambda}}$ is defined according to the norm $\| \cdot \|_{\Lambda}$ from \eqref{eq: norm phi}:
\begin{align}
\mathsf{P}^{-1} := \mu \mathsf{H} \left(\tfrac{1}{2}\right) + K^{-1} \mathsf{H} \left(-\tfrac{1}{2}\right).
\end{align}
Defining $\mathcal{P}^{-1} := \pi^T \mathsf{P}^{-1} \pi$, we obtain the equivalence relation \eqref{eq: norm equivalence precond} by construction. In turn, the inverse operator $\mathcal{P}$ is an optimal preconditioner for the system \eqref{eq: poincare steklov_h}. The matrix $\mathsf{P}$ is explicitly computed using the properties of $\mathsf{V}$ and $\mathsf{M}$:
\begin{align} \label{eq: preconditioner}
\mathsf{P} =
\left(
\mu \mathsf{H} \left(\tfrac{1}{2}\right) + K^{-1} \mathsf{H} \left(-\tfrac{1}{2}\right)
\right)^{-1}
=
\mathsf{V} \left(
\mu \mathsf{\Lambda}^{\frac{1}{2}} + K^{-1} \mathsf{\Lambda}^{-\frac{1}{2}}
\right)^{-1}
\mathsf{V^T}.
\end{align}
\subsection{An Iterative Method Respecting Mass Conservation}
\label{sub:a_conservative_method}
The Steklov-Poincar\'e system from \cref{sub:discrete_poincar} and the preconditioner from \cref{sub:parameter_robust_preconditioning} form the main ingredients of the iterative scheme proposed next. As mentioned before, we aim to use a Krylov subspace method on the reduced system \eqref{eq: poincare steklov_h}. We turn to the Generalized Minimal Residual (GMRes) method \cite{saad1986gmres} and propose the scheme we refer to as \Cref{alg: GMRes}, described below.
\begin{algorithm}[ht]
\caption{Preconditioned GMRes for the discrete Steklov-Poincar\'e system with conservative reconstruction.}
\label{alg: GMRes}
\begin{enumerate}
\item Set the tolerance $\epsilon > 0$, choose an initial guess $\phi_h^0 \in \Lambda_h$, and construct the right-hand side $\chi_h$ from \eqref{eq: def chi_h} and $\mathsf{P}$ from \eqref{eq: preconditioner}.
\item \label{step: SD solves}
Using $\mathsf{P}$ as a preconditioner, apply GMRes to the discrete Steklov-Poincar\'e system \eqref{eq: poincare steklov_h} until the relative, preconditioned residual is smaller than $\epsilon$. This involves solving a Stokes and a Darcy subproblem in \eqref{eq: def sigma_h} at each iteration.
\item \label{step: reconstruction}
Construct $(\bm{u}_{S, h}, p_{S, h}) \in \bm{V}_{S, h} \times W_{S, h}$ and $(\bm{u}_{D, h}, p_{D, h}) \in \bm{V}_{D, h} \times W_{D, h}$ by solving the independent Stokes and Darcy subproblems with $\phi_h$ as the normal flux on $\Gamma$.
\item In the case of Neumann problems, reconstruct the mean of the pressure in $\Omega_D$ using \eqref{eq: def p bar single} or \eqref{eq: def p bar pure}.
\end{enumerate}
\end{algorithm}
We make three observations concerning this algorithm. Most importantly, the solution $(\bm{u}_h, p_h)$ produced by \Cref{alg: GMRes} conserves mass locally, independent of the number of GMRes iterations. In particular,
the definition of the space $\bm{V}_h$ ensures that no mass is lost across the interface. Moreover, with the flux $\phi_h$ given on the interface, the reconstruction in step \eqref{step: reconstruction} ensures that mass is conserved in each subdomain.
Secondly, we emphasize that the solves for the Stokes and Darcy subproblems in step \ref{step: SD solves} can be performed in parallel by optimized solvers. Moreover, the preconditioner is local to the interface and is agnostic to the choice of extension operators and discretization methods. In turn, this scheme is applicable to a wide range of well-established ``legacy'' codes tailored to solving Stokes and Darcy flow problems.
Third, we have made the implicit assumption that obtaining $\mathsf{P}$ by solving \eqref{eq: generalized eigenvalue problem} is computationally feasible. This is typically the case if the dimension of the interface space $\Lambda_h$ is sufficiently small. The generalized eigenvalue problem is solved once in an a priori, or ``off-line'', stage and the assembled matrix $\mathsf{P}$ is then applied in each iteration of the GMRes method.
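For illustration, step \ref{step: SD solves} could be realized with SciPy roughly as follows, reusing the matrix-free action and the preconditioner matrix from the sketches above; this is a schematic outline rather than the exact implementation used in our experiments.
\begin{verbatim}
import scipy.sparse.linalg as spla

def solve_interface(apply_Sigma, chi, P, tol=1e-6):
    n = chi.shape[0]
    Sigma = spla.LinearOperator((n, n), matvec=apply_Sigma)
    M_op = spla.LinearOperator((n, n), matvec=lambda r: P @ r)
    # rtol= is the keyword in recent SciPy; older versions use tol=
    phi, info = spla.gmres(Sigma, chi, M=M_op, rtol=tol)
    if info != 0:
        raise RuntimeError("GMRes did not converge")
    return phi

# Step 3: solve the Stokes and Darcy subproblems once more with phi as
# interface flux data to reconstruct (u_h, p_h) conservatively.
\end{verbatim}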
\section{Numerical Results}
\label{sec:numerical_results}
In this section, we present numerical experiments that verify the theoretical results presented above. By setting up artificial coupled Stokes-Darcy problems in two dimensions, we investigate the dependency of \Cref{alg: GMRes} on physical and discretization parameters in Section~\ref{sub:parameter_robustness}. Afterward, Section~\ref{sub:comparison_to_NN_method} presents a comparison of the proposed scheme to a Neumann-Neumann method.
\subsection{Parameter Robustness}
\label{sub:parameter_robustness}
Let the subdomains be given by $\Omega_S := (0, 1) \times (0, 1)$, $\Omega_D := (0, 1) \times (-1, 0)$, and $\Gamma := [0, 1] \times \{ 0 \}$. Two test cases are considered, defined by different boundary conditions. The first concerns the setting in which both the Stokes and Darcy subproblems are well-posed. On the other hand, test case 2 illustrates the scenario in which a pure Neumann problem is imposed on the porous medium.
For test case 1, let $\partial_\sigma \Omega_S$ be the top boundary ($x_2 = 1$) and $\partial_u \Omega_D$ the bottom boundary ($x_2 = -1$). The remaining portions of the boundary $\partial \Omega$ form $\partial_u \Omega_S$ and $\partial_p \Omega_D$. On $\partial_u \Omega_S$, zero velocity is imposed as described by \eqref{eq: BC essential}. The pressure data is set to $g_p(x_1, x_2) := x_2$ on $\partial_p \Omega_D$ to stimulate a flow field that infiltrates the porous medium.
Test case 2 simulates parallel flow over a porous medium. We impose no-flux conditions on $\partial \Omega_D \setminus \Gamma$, thereby ensuring that all mass transfer to and from the porous medium occurs at the interface $\Gamma$. The flow is stimulated by prescribing the velocity at the left and right boundaries of $\Omega_S$ using the parabolic profile $\bm{u}_S(x_1, x_2) := [0, x_2 (2 - x_2)]$. As in test case 1, the top boundary represents $\partial_\sigma \Omega_S$, at which a zero stress is prescribed.
Both test cases consider zero body force and mass source, i.e. $\bm{f}_S := 0$ and $f_D := 0$. Moreover, we set the parameter $\alpha$ in the Beavers-Joseph-Saffman condition to zero for simplicity.
The meshes $\Omega_{S, h}$ and $\Omega_{D, h}$ are chosen to be matching at $\Gamma$ and we set $\Gamma_h$ as the coinciding trace mesh. Following \Cref{sec:discretization}, we choose the Mixed Finite Element method in both subdomains, implemented with the use of FEniCS \cite{logg2012automated}. The spaces are given by a vector of quadratic Lagrange elements $(\bm{P}_2)$ for the Stokes velocity $\bm{V}_{S, h}$ and lowest order Raviart-Thomas elements $(RT_0)$ for the Darcy velocity $\bm{V}_{D, h}$. The pressure space $W_h$ is given by piecewise constants $(P_0)$. The interface space $\Lambda_h$ is chosen to be the normal trace of $\bm{V}_{S, h}$ on $\Gamma_h$, and therefore consists of quadratic Lagrange elements.
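In legacy FEniCS syntax, this choice of spaces reads roughly as follows (mesh resolution and variable names are illustrative):
\begin{verbatim}
from dolfin import *

n = 16
mesh_S = RectangleMesh(Point(0.0, 0.0), Point(1.0, 1.0), n, n)   # Omega_S
mesh_D = RectangleMesh(Point(0.0, -1.0), Point(1.0, 0.0), n, n)  # Omega_D

V_S = VectorFunctionSpace(mesh_S, "Lagrange", 2)  # P2 Stokes velocity
V_D = FunctionSpace(mesh_D, "RT", 1)   # lowest-order Raviart-Thomas
W_S = FunctionSpace(mesh_S, "DG", 0)   # piecewise-constant pressures
W_D = FunctionSpace(mesh_D, "DG", 0)
# Lambda_h is taken as the normal trace of V_S on the interface mesh.
\end{verbatim}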
For the sake of efficiency, the matrices $\mathsf{V}$ and $\mathsf{\Lambda}$ used in the preconditioner $\mathsf{P}$ are computed a priori. Moreover, we pre-compute the $\mathsf{LU}$-decompositions of the Darcy and Stokes subproblems. These decompositions serve as surrogates for optimized ``legacy'' codes. The iterative solver is terminated when a relative residual of $\epsilon = 10^{-6}$ is reached, i.e. when the ratio of Euclidean norms of the preconditioned residual and the preconditioned right-hand side is smaller than $\epsilon$.
We first consider the dependency of the iterative solver on the mesh size. We set unit viscosity and permeability and start with a coarse mesh with $h = 1/8$. The mesh is refined four times and at each refinement, the number of iterations in \Cref{alg: GMRes} is reported. The results for both test cases are shown in \Cref{tab: Robustness}.
The results show that the number of iterations is robust with respect to the mesh size. Moreover, the second and third columns indicate the reduction from a fully coupled problem of size $n_{total}$ to a significantly smaller interface problem of size $n_\Lambda$. As shown in \Cref{sec:the_steklov_poincare_system}, this interface problem is symmetric and positive definite.
\begin{table}[ht]
\caption{The number of iterations necessary to reach the given tolerance with respect to the mesh size. The material parameters are given by $\kappa = \mu = 1$. }
\label{tab: Robustness}
\centering
\begin{tabular}{|r|r|r|c|c|}
\hline
\hline
$1 / h$ &
$n_{total}$ &
$n_\Lambda$ &
Case 1 &
Case 2 \\
\hline
8 & 1,042 & 15 & 8 & 6 \\
16 & 4,002 & 31 & 9 & 8 \\
32 & 15,682 & 63 & 8 & 9 \\
64 & 62,082 & 127 & 8 & 9 \\
128 & 247,042 & 255 & 8 & 8 \\
\hline
\hline
\end{tabular}
\end{table}
We investigate the robustness of \Cref{alg: GMRes} with respect to physical parameters by varying both the (scalar) permeability $\kappa$ and the viscosity $\mu$ over a range of eight orders of magnitude. The number of iterations is reported in Table~\ref{tab: Robustness parameters}.
It is apparent that the scheme is robust with respect to both parameters, reaching the desired tolerance within a maximum of eleven iterations for the two test cases. Minor deviations in the iteration numbers can be observed for low permeabilities. The scheme may require an extra iteration in that case due to the higher sensitivity of the Darcy subproblem to flux boundary data.
\begin{table}[ht]
\caption{Number of iterations necessary to reach the tolerance with respect to the physical parameters. For both cases, the mesh size is set to $h = 1/64$.}
\label{tab: Robustness parameters}
\centering
\begin{tabular}{|r|r|rrrrr|}
\hline
\hline
\multicolumn{2}{|c|}{\multirow{2}{*}{Case 1}}
&
\multicolumn{5}{c|}{log$_{10}(\mu)$} \\
\cline{3-7}
\multicolumn{2}{|c|}{}
& $-4$ & $-2$ & $\phantom{-}0$ & $\phantom{-}2$ & $\phantom{-}4$ \\
\hline
\multirow{5}{*}{log$_{10}(\kappa)$}
& 4 & 8 & 8 & 8 & 8 & 8 \\
& 2 & 8 & 8 & 8 & 8 & 8 \\
& 0 & 8 & 8 & 8 & 8 & 8 \\
& $-2$ & 7 & 7 & 7 & 7 & 7 \\
& $-4$ & 7 & 7 & 7 & 7 & 7 \\
\hline
\hline
\end{tabular}
\hspace{50 pt}
\begin{tabular}{|r|r|rrrrr|}
\hline
\hline
\multicolumn{2}{|c|}{\multirow{2}{*}{Case 2}}
&
\multicolumn{5}{c|}{log$_{10}(\mu)$} \\
\cline{3-7}
\multicolumn{2}{|c|}{}
& $-4$ & $-2$ & $\phantom{-}0$ & $\phantom{-}2$ & $\phantom{-}4$ \\
\hline
\multirow{5}{*}{log$_{10}(\kappa)$}
& 4 & 9 & 9 & 9 & 9 & 9 \\
& 2 & 9 & 9 & 9 & 9 & 9 \\
& 0 & 9 & 9 & 9 & 9 & 9 \\
& $-2$ & 9 & 9 & 9 & 9 & 9 \\
& $-4$ & 11 & 11 & 11 & 11 & 10 \\
\hline
\hline
\end{tabular}
\end{table}
\subsection{Comparison to a Neumann-Neumann Method}
\label{sub:comparison_to_NN_method}
In order to compare the performance of Algorithm~\ref{alg: GMRes} to more conventional domain decomposition methods, we consider the closely related Neumann-Neumann method. This method, as remarked in \cite[Remark 3.1]{discacciati2007robin}, solves the Steklov-Poincar\'e system \eqref{eq: poincare steklov_h} in the following iterative manner. Given the current residual, we solve the Stokes and Darcy subproblems by interpreting this residual as a normal stress, respectively pressure, boundary condition. The computed fluxes normal to $\Gamma$ then update $\phi$ through the following operator:
\begin{align}
\mathcal{P}_{NN} := \Sigma_{S, h}^{-1} + \Sigma_{D, h}^{-1},
\end{align}
with $\Sigma_{S, h} + \Sigma_{D, h} = \Sigma_h$ the decomposition in \eqref{eq: def sigma_h}.
Noting that $\mathcal{P}_{NN}$ is an approximation of $\Sigma_h^{-1}$, we define the Neumann-Neumann method by replacing the preconditioner $\mathsf{P}$ by $\mathcal{P}_{NN}$ in Algorithm~\ref{alg: GMRes}. Moreover, we choose the same Krylov subspace method (GMRes) and stopping criterion, in order to make the comparison as fair as possible.
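Schematically, the action of $\mathcal{P}_{NN}$ amounts to two additional subdomain solves per iteration; in the sketch below, \texttt{solve\_S} and \texttt{solve\_D} are assumed callables wrapping these solves.
\begin{verbatim}
def neumann_neumann_action(solve_S, solve_D):
    # P_NN r = Sigma_{S,h}^{-1} r + Sigma_{D,h}^{-1} r: interpret the
    # residual r as normal stress (Stokes) resp. pressure (Darcy) data,
    # solve each subproblem, and add the resulting normal fluxes on Gamma.
    return lambda r: solve_S(r) + solve_D(r)
\end{verbatim}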
We consider the numerical experiment from \cite{discacciati2005iterative,discacciati2007robin} posed on $\Omega := (0, 1) \times (0, 2)$ with $\Omega_S := (0, 1) \times (1, 2)$, $\Omega_D := (0, 1) \times (0, 1)$, and $\Gamma = (0, 1) \times \{ 1 \}$.
The solution is given by $\bm{u}_S := \left[(x_2 - 1)^2, \ x_1(x_1 - 1)\right]$, $p_S = \mu (x_1 + x_2 - 1) + (3K)^{-1}$, and $p_D = \left(x_1(1 - x_1)(x_2 - 1) + \frac13 x_2^3 - x_2^2 + x_2 \right) K^{-1} + \mu x_1$. The boundary conditions are chosen to comply with this solution and we impose the pressure on $\partial \Omega_D \setminus \Gamma$, the normal stress on the top boundary, and the velocity on the remaining boundaries of $\Omega_S$. Finally, the Beavers-Joseph-Saffman condition is replaced by a no-slip condition for the tangential Stokes velocity on $\Gamma$.
Using the same Mixed Finite Element discretization as in the previous section, we vary the material and discretization parameters and report the number of iterations necessary to reach a relative residual of $\epsilon = 10^{-6}$ to the preconditioned problem. The results are presented in Table~\ref{tab: comparison with NN}.
\begin{table}[ht]
\caption{Iteration counts of the proposed scheme compared to a Neumann-Neumann method. The initial mesh has $h_0 = 1/7$ and each refinement is such that $h_{i + 1} = h_i/2$.}
\label{tab: comparison with NN}
\centering
\begin{tabular}{|c|c|rrrrr|rrrrr|}
\hline
\hline
\multirow{2}{*}{log$_{10}(\mu)$} &
\multirow{2}{*}{log$_{10}(K)$} &
\multicolumn{5}{c|}{Neumann-Neumann} &
\multicolumn{5}{c|}{Algorithm~\ref{alg: GMRes}} \\
\cline{3-12}
& &
$h_0$ & $h_1$ & $h_2$ & $h_3$ & $h_4$ &
$h_0$ & $h_1$ & $h_2$ & $h_3$ & $h_4$ \\
\hline
\multirow{3}{*}{$\phantom{-}0$}
& $\phantom{-}0$ & 3 & 3 & 3 & 3 & 3 & 8 & 8 & 8 & 8 & 8 \\
& $-1$ & 5 & 5 & 5 & 5 & 5 & 8 & 8 & 8 & 8 & 8 \\
& $-2$ & 7 & 7 & 8 & 8 & 8 & 7 & 7 & 7 & 8 & 8 \\
\hline
\multirow{3}{*}{$-1$}
& $\phantom{-}0$ & 5 & 5 & 5 & 5 & 5 & 8 & 8 & 8 & 8 & 8 \\
& $-1$ & 7 & 7 & 8 & 8 & 8 & 7 & 7 & 7 & 8 & 8 \\
& $-2$ & 8 & 9 & 12 & 16 & 13 & 10 & 9 & 8 & 7 & 7 \\
\hline
\multirow{3}{*}{$-2$}
& $\phantom{-}0$ & 7 & 7 & 8 & 8 & 8 & 7 & 7 & 7 & 8 & 8 \\
& $-1$ & 8 & 9 & 12 & 16 & 13 & 10 & 9 & 8 & 7 & 7 \\
& $-2$ & 14 & 9 & 18 & 25 & 29 & 13 & 14 & 12 & 8 & 7 \\
\hline
\hline
\end{tabular}
\end{table}
We observe that the two methods behave in opposite ways as the mesh size decreases. Whereas the Neumann-Neumann method requires more iterations for finer grids, the performance of our proposed scheme appears to improve, requiring at most eight iterations on the finest level.
In general, Algorithm~\ref{alg: GMRes} outperforms the Neumann-Neumann method in terms of robustness, with the only deviation occurring in the case of a low permeability and a coarse grid. The Neumann-Neumann method appears more sensitive to both material and discretization parameters and only converges faster for material parameters close to unity.
In terms of computational cost, we emphasize that our proposed scheme requires an off-line computation to construct the preconditioner and contains a solve for the Stokes and Darcy subproblems at each iteration. On the other hand, the Neumann-Neumann method requires an additional solve for the subproblems in the preconditioner $\mathcal{P}_{NN}$. These additional solves will likely become prohibitively expensive for finer grids, since each solve is more costly and more iterations become necessary. Thus, if the preconditioner $\mathsf{P}$ can be formed efficiently, then Algorithm~\ref{alg: GMRes} forms an attractive alternative for such problems.
Although these results do not allow for a thorough, quantitative comparison with the Robin-Robin methods presented in \cite{discacciati2007robin}, we do make an important, qualitative observation. In particular, our proposed method does not require setting any acceleration parameters and its performance is a direct consequence of the constructed preconditioner. This is advantageous because finding the optimal values for such parameters can be a non-trivial task.
\section{Conclusion}
\label{sec:conclusions}
In this work, we proposed an iterative method for solving coupled Stokes-Darcy problems that retains local mass conservation at each iteration. By introducing the normal flux at the interface, the original problem is reduced to a smaller problem concerning only this variable. Through a priori analysis with the use of weighted norms, a preconditioner is formed to ensure that the scheme is robust with respect to physical and discretization parameters.
Future research will focus on four main ideas. First, we are interested in investigating the application of this scheme to different discretization methods, including the pairing of an MPFA finite volume method with the MAC scheme as in \cite{schneider2020coupling}.
Secondly, we note that the use of non-matching grids forms another natural extension. In that case, we aim to investigate how such a mismatch affects the discretization error. However, such analysis heavily depends on the chosen discretization method and is therefore reserved as a topic for future investigation.
Third, the generalization of these ideas to non-linear problems forms another area of our interest. By considering the Navier-Stokes equations in the free-flow subdomain, for example, the reduction to an interface problem will inherit the non-linearity. An iterative method that solves this reduced problem may benefit from a similarly constructed preconditioner.
Finally, as remarked in Section~\ref{sub:a_conservative_method}, we have assumed that the generalized eigenvalue problem \eqref{eq: generalized eigenvalue problem} can be solved efficiently. However, if the assembly of $\mathsf{P}$ is too costly, then more efficient, spectrally equivalent preconditioners are required. A promising example may be to employ the recent work on multi-grid preconditioners for fractional diffusion problems \cite{baerland2019multigrid}.
To conclude, we have presented this iterative method in a basic setting so that it may form a foundation for a variety of research topics that we aim to pursue in future work.
\begin{acknowledgement}
The author expresses his gratitude to Prof. Rainer Helmig, Prof. Ivan Yotov, Dennis Gl\"aser, and Kilian Weishaupt for valuable discussions on closely related topics.
\end{acknowledgement}
\bibliographystyle{siam}
| {'timestamp': '2022-09-28T02:22:25', 'yymm': '2209', 'arxiv_id': '2209.13421', 'language': 'en', 'url': 'https://arxiv.org/abs/2209.13421'} |
\section{Introduction}
Interplay between interactions and disorders has been one of the
central issues in modern condensed matter physics
\cite{Interaction_Disorder_Book,RMP_Disorder_Interaction}. In the
weakly disordered metal the lowest-order interaction-correction
was shown to modify the density of states at the Fermi energy in
the diffusive regime \cite{AAL}, giving rise to non-Fermi liquid
physics particulary in low dimensions less than $d = 3$ while
further enhancement of electron correlations was predicted to
cause ferromagnetism \cite{Disorder_FM}. In an insulating phase
spin glass appears ubiquitously, where the average of the spin
moment vanishes in the long time scale, but local spin
correlations become finite, making the system away from
equilibrium \cite{SG_Review}.
An outstanding question is the role of disorder in the vicinity of
quantum phase transitions
\cite{Disorder_QCP_Review1,Disorder_QCP_Review2}, where effective
long-range interactions associated with critical fluctuations
appear to cause non-Fermi liquid physics
\cite{Disorder_QCP_Review2,QCP_Review}. Unfortunately, complexity
of this problem did not allow comprehensive understanding until
now. In the vicinity of the weakly disordered ferromagnetic
quantum critical point, an electrical transport-coefficient has
been studied, where the crossover temperature from the ballistic
to diffusive regimes is much lowered due to critical fluctuations,
compared with the disordered Fermi liquid
\cite{Paul_Disorder_FMQCP}. Generally speaking, the stability of
the quantum critical point should be addressed, given by the
Harris criterion \cite{Harris}. When the Harris criterion is not
satisfied, three possibilities are expected to arise
\cite{Disorder_QCP_Review2}. The first two possibilities are
emergence of new fixed points, associated with either a
finite-randomness fixed point satisfying the Harris criterion at
this new fixed point or an infinite randomness fixed point
exhibiting activated scaling behaviors. The last possibility is
that quantum criticality can be destroyed, replaced with a smooth
crossover. In addition, even away from the quantum critical point
the disordered system may show non-universal power law physics,
called the Griffiths phase \cite{Griffiths}. Effects of rare
regions are expected to be strong near the infinite randomness
fixed point and the disorder-driven crossover region
\cite{Disorder_QCP_Review2}.
This study focuses on the role of strong randomness in the heavy
fermion quantum transition. Heavy fermion quantum criticality is
believed to result from competition between Kondo and RKKY
(Ruderman-Kittel-Kasuya-Yosida) interactions, where larger Kondo
couplings give rise to a heavy fermion Fermi liquid while larger
RKKY interactions cause an antiferromagnetic metal
\cite{Disorder_QCP_Review2,QCP_Review,HF_Review}. Generally
speaking, there are two competing view points for this problem.
The first direction is to regard the heavy fermion transition as
an antiferromagnetic transition, where critical spin fluctuations
appear from heavy fermions. The second view point is that the
transition is identified with breakdown of the Kondo effect, where
Fermi surface fluctuations are critical excitations. The first
scenario is described by the Hertz-Moriya-Millis (HMM) theory in
terms of heavy electrons coupled with antiferromagnetic spin
fluctuations, the standard model for quantum criticality
\cite{HMM}. There are two ways to realize the second scenario
depending on how to describe Fermi surface fluctuations. The first
way is to express Fermi surface fluctuations in terms of a
hybridization order parameter called holon in the slave-boson
context \cite{KB_z2,KB_z3}. This is usually referred to as the Kondo
breakdown scenario. The second one is to map the lattice problem
into the single site one resorting to the dynamical mean-field
theory (DMFT) approximation \cite{DMFT_Review}, where order
parameter fluctuations are critical only in the time direction.
This description is called the locally critical scenario
\cite{EDMFT}.
Each scenario predicts its own critical physics. Both the HMM
theory and the Kondo breakdown model are based on the standard
picture that quantum criticality arises from long-wave-length
critical fluctuations while the locally quantum critical scenario
has its special structure, that is, locally (space) critical
(time). Critical fluctuations are described by $z = 2$ in the HMM
theory due to finite-wave vector ordering \cite{HMM} while by $z =
3$ in the Kondo breakdown scenario associated with uniform
"ordering" \cite{KB_z3}, where $z$ is the dynamical exponent
expressing the dispersion relation for critical excitations. Thus,
quantum critical physics characterized by scaling exponents is
completely different between these two models. In addition to
qualitative agreements with experiments depending on compounds
\cite{Disorder_QCP_Review2}, these two theories do not allow the
$\omega/T$ scaling in the dynamic susceptibility of their critical
modes because both theories live above their upper critical
dimensions. On the other hand, the locally critical scenario gives
rise to the $\omega/T$ scaling behavior for the dynamic spin
susceptibility \cite{EDMFT}, while it seems to have difficulties
with some predictions for transport coefficients.
We start by discussing an Ising model with Gaussian randomness for
its exchange coupling, called the Edwards-Anderson model
\cite{SG_Review}. Using the replica trick and performing the
saddle-point analysis, one can find a spin glass phase when the
average value of the exchange interaction vanishes, characterized
by the Edwards-Anderson order parameter without magnetization.
Applying this concept to the Heisenberg model with Gaussian
randomness, quantum fluctuations should be incorporated to take
into account the Berry phase contribution carefully. It was
demonstrated that quantum corrections in the DMFT approximation
lead the spin glass phase unstable at finite temperatures,
resulting in a spin liquid state when the average value of the
exchange coupling vanishes \cite{Sachdev_SG}. It should be noted
that this spin liquid state differs from the spin liquid phase in
frustrated spin systems in the respect that the former state
originates from critical single-impurity dynamics while the latter
phase results from non-trivial spatial spin correlations described
by gauge fluctuations \cite{Spin_Liquid_Review}. The spin liquid
phase driven by strong randomness is characterized by its critical
spin spectrum, given by the $\omega/T$ scaling local spin
susceptibility \cite{Sachdev_SG}.
Introducing hole doping into the spin liquid state, Parcollet and
Georges examined the disordered t-J model within the DMFT
approximation \cite{Olivier}. Using the U(1) slave-boson
representation, they found marginal Fermi-liquid phenomenology,
where the electrical transport is described by the $T$-linear
resistivity, resulting from the marginal Fermi-liquid spectrum for
collective modes, here the $\omega/T$ scaling in the local spin
susceptibility. They tried to connect this result with physics of
high T$_{c}$ cuprates.
In this study we introduce random hybridization with conduction
electrons into the spin liquid state. Our original motivation was
to explain both the $\omega/T$ scaling in the spin spectrum
\cite{INS_Local_AF} and the typical $T$-linear resistivity
\cite{LGW_F_QPT_Nature} near the heavy fermion quantum critical
point. In particular, the presence of disorder leads us to the
DMFT approximation naturally \cite{Moore_Dis_DMFT}, expected to
result in the $\omega/T$ scaling for the spin spectrum
\cite{Sachdev_SG}.
Starting from an Anderson lattice model with disorder, we derive
an effective local field theory in the DMFT approximation, where
randomness is introduced into both hybridization and RKKY
interactions. Performing the saddle-point analysis in the U(1)
slave-boson representation, we reveal its phase diagram which
shows a quantum phase transition from a spin liquid state to a
local Fermi liquid phase. In contrast with the clean limit of the
Anderson lattice model \cite{KB_z2,KB_z3}, the effective
hybridization given by holon condensation turns out to vanish,
resulting from the zero mean value of the hybridization coupling
constant. However, we show that the holon density becomes finite
when variance of hybridization is sufficiently larger than that of
the RKKY coupling, giving rise to the Kondo effect. On the other
hand, when the variance of hybridization becomes smaller than that
of the RKKY coupling, the Kondo effect disappears, resulting in a
fully symmetric paramagnetic state, adiabatically connected with
the spin liquid state of the disordered Heisenberg model
\cite{Sachdev_SG}.
Our contribution compared with the previous works
\cite{Kondo_Disorder} is to introduce RKKY interactions between
localized spins and to observe the quantum phase transition in the
heavy fermion system with strong randomness. The previous works
focused on how the non-Fermi liquid physics can appear in the
Kondo singlet phase away from quantum criticality
\cite{Kondo_Disorder}. A broad distribution of the Kondo
temperature $T_{K}$ turns out to cause such non-Fermi liquid
physics, originating from the finite density of unscreened local
moments with almost vanishing $T_K$, where the $T_{K}$
distribution may result from either the Kondo disorder for
localized electrons or the proximity of the Anderson localization
for conduction electrons. Because RKKY interactions are not
introduced in these studies, there always exist finite $T_{K}$
contributions. On the other hand, the presence of RKKY
interactions gives rise to breakdown of the Kondo effect, making
$T_{K} = 0$ identically in the strong RKKY coupling phase.
In Ref. [\onlinecite{Kondo_RKKY_Disorder}] the role of random RKKY
interactions was examined, where the Kondo coupling is fixed while
the chemical potential for conduction electrons is introduced as a
random variable with its variance $W$.
Increasing the randomness of the electron chemical potential, the
Fermi liquid state in $W < W_{c}$ turns into the spin liquid phase
in $W > W_{c}$, which displays the marginal Fermi-liquid
phenomenology due to random RKKY interactions
\cite{Kondo_RKKY_Disorder}, where the Kondo effect is suppressed
due to the proximity of the Anderson localization for conduction
electrons \cite{Kondo_Disorder}. However, the presence of finite
Kondo couplings still gives rise to Kondo screening although the
$T_{K}$ distribution differs from that in the Fermi liquid state,
associated with the presence of random RKKY interactions. In
addition, the spin liquid state was argued to be unstable against
the spin glass phase at low temperatures, possibly resulting from
fixed Kondo interaction. On the other hand, we do not take into
account the Anderson localization for conduction electrons, and
introduce random hybridization couplings. As a result, the Kondo
effect is completely destroyed in the spin liquid phase, thus
quantum critical physics differs from the previous study of Ref.
[\onlinecite{Kondo_RKKY_Disorder}]. In addition, the spin liquid
phase is stable at finite temperatures in the present study
\cite{Sachdev_SG}.
We investigate the quantum critical point beyond the mean-field
approximation. Introducing quantum corrections fully
self-consistently in the non-crossing approximation
\cite{Hewson_Book}, we prove that the local charge susceptibility
has exactly the same critical exponent as the local spin
susceptibility. This is quite unusual because these correlation
functions are symmetry-unrelated in the lattice scale. This
reminds us of deconfined quantum criticality \cite{Senthil_DQCP},
where the Landau-Ginzburg-Wilson forbidden continuous transition
may appear with an enhanced emergent symmetry. Actually, the
continuous quantum transition was proposed between the
antiferromagnetic phase and the valence bond solid state
\cite{Senthil_DQCP}. In the vicinity of the quantum critical point
the spin-spin correlation function of the antiferromagnetic
channel has the same scaling exponent as the valence-bond
correlation function, suggesting an emergent O(5) symmetry beyond
the symmetry O(3)$\times$Z$_{4}$ of the lattice model
\cite{Tanaka_SO5} and confirmed by the Monte Carlo simulation of
the extended Heisenberg model \cite{Sandvik}. Tanaka and Hu
proposed an effective O(5) nonlinear $\sigma$ model with the
Wess-Zumino-Witten term as an effective field theory for the
Landau-Ginzburg-Wilson forbidden quantum critical point
\cite{Tanaka_SO5}, expected to allow fractionalized spin
excitations due to the topological term. This proposal can be
considered as generalization of an antiferromagnetic spin chain,
where an effective field theory is given by an O(4) nonlinear
$\sigma$ model with the Wess-Zumino-Witten term, which gives rise
to fractionalized spin excitations called spinons, identified with
topological solitons \cite{Tsvelik_Book}. Applying this concept to
the present quantum critical point, the enhanced emergent symmetry
between charge (holon) and spin (spinons) local modes leads us to
propose novel duality between the Kondo singlet phase and the
critical local moment state beyond the Landau-Ginzburg-Wilson
paradigm. We suggest an O(4) nonlinear $\sigma$ model on a
nontrivial manifold as an effective field theory for this local
quantum critical point, where the local spin and charge densities
form an O(4) vector with a constraint. The symmetry enhancement
serves as the mechanism of electron fractionalization in critical
impurity dynamics, where such fractionalized excitations are
identified with topological excitations.
This paper is organized as follows. In section II we introduce an
effective disordered Anderson lattice model and perform the DMFT
approximation with the replica trick. Equation (\ref{DMFT_Action})
is the main result in this section. In section III we perform the
saddle-point analysis based on the slave-boson representation and
obtain the phase diagram showing breakdown of the Kondo effect
driven by the RKKY interaction. We show spectral functions,
self-energies, and local spin susceptibility in the Kondo phase.
Figures \ref{fig1}-\ref{fig3} with Eqs. (\ref{Sigma_C_MFT})-(\ref{Sigma_FC_MFT})
and (\ref{Lambda_MFT})-(\ref{Constraint_MFT}) are the main results in
this section.
impurity quantum critical point based on the non-crossing
approximation beyond the previous mean-field analysis. We solve
self-consistent equations analytically and find power-law scaling
solutions. As a result, we uncover the marginal Fermi-liquid
spectrum for the local spin susceptibility. We propose an
effective field theory for the quantum critical point and discuss
the possible relationship with the deconfined quantum critical
point. In section V we summarize our results.
The present study extends our recent publication
\cite{Tien_Kim_PRL}, showing both physical and mathematical
details.
\section{An effective DMFT action from an Anderson lattice model with strong randomness}
We start from an effective Anderson lattice model \bqa H &=& -
\sum_{ij,\sigma} t_{ij} c^{\dagger}_{i\sigma} c_{j\sigma} + E_{d}
\sum_{i\sigma} d^{\dagger}_{i\sigma} d_{i\sigma} \nn &+& \sum_{ij}
J_{ij} \mathbf{S}_{i} \cdot \mathbf{S}_{j} + \sum_{i\sigma} (V_{i}
c^{\dagger}_{i\sigma} d_{i\sigma} + {\rm H.c.}) , \label{ALM} \eqa
where $t_{ij} = \frac{t}{M \sqrt{z}}$ is a hopping integral for
conduction electrons and \bqa && J_{ij} = \frac{J}{\sqrt{z M}}
\varepsilon_{i}\varepsilon_{j} , ~~~~~ V_{i} = \frac{V}{\sqrt{M}}
\varepsilon_{i} \nonumber \eqa are random RKKY and hybridization
coupling constants, respectively. Here, $M$ is the spin degeneracy
and $z$ is the coordination number. Randomness is given by the
Gaussian distribution \bqa \overline{\varepsilon_{i}} = 0 , ~~~~~
\overline{\varepsilon_{i}\varepsilon_{j}} = \delta_{ij} . \eqa
The disorder average can be performed in the replica trick
\cite{SG_Review}. Performing the disorder average in the Gaussian
distribution function, we reach the following expression for the
replicated effective action
\begin{eqnarray}
&& \overline{Z^n} = \int \mathcal{D}c_{i\sigma}^{a}
\mathcal{D}d_{i\sigma}^{a} e^{-\bar{S}_n } , \nn &&
\overline{S}_{n} = \int\limits_{0}^{\beta} d\tau \sum_{ij\sigma a}
c^{\dagger a}_{i\sigma}(\tau) ((\partial_{\tau} - \mu)\delta_{ij}
+ t_{ij}) c^{a}_{j\sigma}(\tau) \nn && + \int\limits_{0}^{\beta}
d\tau \sum_{i\sigma a}d^{\dagger a}_{i\sigma}(\tau)
(\partial_{\tau} + E_d) d^{a}_{i\sigma}(\tau) \nn && -
\frac{J^2}{2 z M} \int\limits_{0}^{\beta} d\tau
\int\limits_{0}^{\beta} d\tau' \sum_{ijab}
\mathbf{S}^{a}_{i}(\tau) \cdot \mathbf{S}^{a}_{j}(\tau) \;\;
\mathbf{S}^{b}_{i}(\tau') \cdot \mathbf{S}^{b}_{j}(\tau') \nn && -
\frac{V^{2}}{2 M} \int\limits_{0}^{\beta} d\tau
\int\limits_{0}^{\beta} d\tau' \sum_{i \sigma \sigma' ab} \big(
c^{\dagger a}_{i\sigma}(\tau) d^{a}_{i\sigma}(\tau) + d^{\dagger
a}_{i\sigma}(\tau) c^{a}_{i\sigma}(\tau)\big) \nn &&
~~~~~~~~~~~~~~~ \times \big( c^{\dagger b}_{i\sigma'}(\tau')
d^{b}_{i\sigma'}(\tau') + d^{\dagger b}_{i\sigma'}(\tau')
c^{b}_{i\sigma'}(\tau')\big) , \label{DALM}
\end{eqnarray}
where $\sigma, \sigma' = 1, ..., M$ is the spin index and $a, b =
1, ..., n$ is the replica index. In appendix A we derive this
replicated action from Eq. (\ref{ALM}).
One may ask about the role of randomness in $E_{d}$, generating \bqa &&
- \int_{0}^{\beta} d\tau \int_{0}^{\beta} d\tau'
\sum_{i\sigma\sigma' ab} d^{\dagger a}_{i\sigma}(\tau)
d^{a}_{i\sigma}(\tau) d^{\dagger b}_{i\sigma'}(\tau')
d^{b}_{i\sigma'}(\tau') , \nonumber \eqa where density
fluctuations are involved. This contribution is expected to
support the Kondo effect because such local density fluctuations
help hybridization with conduction electrons. In this paper we fix
$E_{d}$ as a constant value in the Kondo limit, allowed as long as
its variance is not so large as to overcome the Kondo limit.
One can introduce randomness in the hopping integral of conduction
electrons. But, this contribution gives rise to the same effect as
the DMFT approximation in the $z\rightarrow \infty$ Bethe lattice
\cite{Olivier}. In this respect randomness in the hopping integral
is naturally introduced into the present DMFT study.
The last disorder contribution can arise from randomness in the
electron chemical potential, expected to cause the Anderson
localization for conduction electrons. Actually, this results in
the metal-insulator transition at the critical disorder strength,
suppressing the Kondo effect in the insulating phase. Previously,
the Griffiths phase for non-Fermi liquid physics has been
attributed to the proximity effect of the Anderson localization
\cite{Kondo_Disorder}. In this work we do not consider the
Anderson localization for conduction electrons.
We observe that the disorder average neutralizes spatial
correlations except for the hopping term of conduction electrons.
This leads us to the DMFT formulation, resulting in an effective
local action for the strong random Anderson lattice model
\begin{eqnarray}
&& \bar{S}_{n}^{\rm eff} = \int_{0}^{\beta} d\tau \Bigl\{
\sum_{\sigma a} c^{\dagger a}_{\sigma}(\tau) (\partial_{\tau} -
\mu) c^{a}_{\sigma}(\tau) \nn && + \sum_{\sigma a}d^{\dagger
a}_{\sigma}(\tau) (\partial_{\tau} + E_d) d^{a}_{\sigma}(\tau)
\Bigr\} \nn && -\frac{V^2}{2 M} \int_{0}^{\beta} d\tau
\int_{0}^{\beta} d\tau' \sum_{\sigma \sigma' a b} \big[ c^{\dagger
a}_{\sigma}(\tau) d^{a}_{\sigma}(\tau) + d^{\dagger
a}_{\sigma}(\tau) c^{a}_{\sigma}(\tau)\big] \nn &&
~~~~~~~~~~~~~~~~~~~~~~~~~ \times \big[ c^{\dagger
b}_{\sigma'}(\tau') d^{b}_{\sigma'}(\tau') + d^{\dagger
b}_{\sigma'}(\tau') c^{b}_{\sigma'}(\tau')\big] \nn && -
\frac{J^2}{2 M} \int_{0}^{\beta} d\tau \int_{0}^{\beta} d\tau'
\sum_{ab} \sum_{\alpha\beta\gamma\delta} S^{a}_{\alpha\beta}(\tau)
R^{ab}_{\beta\alpha\gamma\delta}(\tau-\tau')
S^{b}_{\delta\gamma}(\tau') \nn && + \frac{t^2}{M^2}
\int_{0}^{\beta} d\tau \int_{0}^{\beta} d\tau' \sum_{ab\sigma}
c^{\dagger a}_{\sigma}(\tau) G^{ab}_{c \;
\sigma\sigma}(\tau-\tau') c^{b}_{\sigma}(\tau' ) ,
\label{DMFT_Action}
\end{eqnarray}
where $G^{ab}_{c \; ij\sigma\sigma}(\tau-\tau')$ is the local
Green's function for conduction electrons and $R^{ab}_{\beta
\alpha \gamma \delta}(\tau-\tau')$ is the local spin
susceptibility for localized spins, given by \bqa G^{ab}_{c \;
ij\sigma\sigma}(\tau-\tau') &=& - \langle T_{\tau} [
c^{a}_{i\sigma}(\tau) c^{\dagger b}_{j\sigma}(\tau') ] \rangle ,
\nn R^{ab}_{\beta \alpha \gamma \delta}(\tau-\tau') &=& \langle
T_{\tau} [S^{a}_{\beta\alpha}(\tau) S^{b}_{\gamma\delta}(\tau')]
\rangle , \label{Local_Green_Functions} \eqa respectively. Eq.
(\ref{DMFT_Action}) with Eq. (\ref{Local_Green_Functions}) serves as
a completely self-consistent framework for this problem.
Derivation of Eq. (\ref{DMFT_Action}) from Eq. (\ref{DALM}) is
shown in appendix B.
This effective model has two well known limits, corresponding to
the disordered Heisenberg model \cite{Sachdev_SG} and the
disordered Anderson lattice model without RKKY interactions
\cite{Kondo_Disorder}, respectively. In the former case a spin
liquid state emerges due to strong quantum fluctuations while a
local Fermi liquid phase appears at low temperatures in the latter
case as long as the $T_{K}$ distribution is not so broadened
enough. In this respect it is natural to consider a quantum phase
transition driven by the ratio between variances for the RKKY and
hybridization couplings.
\section{Phase diagram}
\subsection{Slave boson representation and mean field approximation}
We solve the effective DMFT action based on the U(1) slave boson
representation
\begin{eqnarray}
d^{a}_{\sigma} &=& \hat{b}^{\dagger a} f^{a}_{\sigma} , \label{SB_Electron} \\
S_{\sigma\sigma'}^{a} &=& f^{a\dagger}_{\sigma} f_{\sigma'}^{a} -
q_{0}^{a} \delta_{\sigma \sigma'} \label{SB_Spin}
\end{eqnarray}
with the single occupancy constraint $|b^{a}|^2 + \sum_{\sigma}
f^{a\dagger}_{\sigma}(\tau) f^{a}_{\sigma}(\tau) = 1$, where $q_{0}^{a} =
\sum_{\sigma}f^{a\dagger}_{\sigma} f_{\sigma}^{a}/M $.
In the mean field approximation we replace the holon operator
$\hat{b}^{a}$ with its expectation value $\langle \hat{b}^{a}
\rangle \equiv b^{a}$. Then, the effective action Eq.
(\ref{DMFT_Action}) becomes
\begin{widetext}
\begin{eqnarray}
&& \bar{S}_{n}^{\rm eff} = \int_{0}^{\beta} d\tau \Bigl\{
\sum_{\sigma a} c^{\dagger a}_{\sigma}(\tau) (\partial_{\tau} -
\mu) c^{a}_{\sigma}(\tau) + \sum_{\sigma a} f^{\dagger
a}_{\sigma}(\tau) (\partial_{\tau} + E_d) f^{a}_{\sigma}(\tau) +
\sum_{a} \lambda^{a} (|b^{a}|^2 + \sum_{\sigma}
f^{\dagger a}_{\sigma}(\tau) f^{a}_{\sigma}(\tau) - 1) \Bigr\} \nonumber \\
&& -\frac{V^2}{2 M} \int_{0}^{\beta} d\tau \int_{0}^{\beta} d\tau'
\sum_{\sigma \sigma' a b} \big[ c^{\dagger a}_{\sigma}(\tau)
f^{a}_{\sigma}(\tau) (b^{a})^{*} + b^{a} f^{\dagger
a}_{\sigma}(\tau) c^{a}_{\sigma}(\tau)\big] \big[ c^{\dagger
b}_{\sigma'}(\tau') f^{b}_{\sigma'}(\tau') (b^{b})^{*} + b^{b}
f^{\dagger b}_{\sigma'}(\tau') c^{b}_{\sigma'}(\tau')\big]
\nonumber \\ &&-\frac{J^2}{2 M} \int_{0}^{\beta} d\tau
\int_{0}^{\beta} d\tau' \sum_{ab} \sum_{\alpha\beta\gamma\delta}
\big[f^{\dagger a}_{\alpha}(\tau) f^{a}_{\beta}(\tau) -
q_{\alpha}^{a} \delta_{\alpha\beta} \big]
R^{ab}_{\beta\alpha\gamma\delta}(\tau-\tau') \big[f^{\dagger
b}_{\delta}(\tau') f^{b}_{\gamma}(\tau') - q_{\gamma}^{b}
\delta_{\gamma\delta} \big] \nonumber \\ && + \frac{t^2}{M^2}
\int_{0}^{\beta} d\tau \int_{0}^{\beta} d\tau' \sum_{ab\sigma}
c^{\dagger a}_{\sigma}(\tau) G^{ab}_{\sigma}(\tau-\tau')
c^{b}_{\sigma}(\tau' ) , \label{SB_MFT}
\end{eqnarray}
\end{widetext}
where $\lambda^{a}$ is a Lagrange multiplier field to impose the
constraint and $q_{\alpha}^{a} =\langle f^{\dagger a}_{\alpha}
f^{a}_{\alpha} \rangle$.
Taking the $M\rightarrow \infty$ limit, we obtain self-consistent
equations for self-energy corrections,
\begin{eqnarray}
\Sigma_{c \;\sigma\sigma'}^{\;ab}(\tau) &=& \frac{V^2}{M} G_{f \;
\sigma\sigma'}^{\; a b}(\tau) (b^{a})^{*} b^b + \frac{t^2}{M^2}
\delta_{\sigma\sigma'} G_{c \; \sigma}^{\; a b}(\tau) ,
\\ \Sigma_{f \;\sigma\sigma'}^{\;ab}(\tau) &=& \frac{V^2}{M} G_{c
\; \sigma\sigma'}^{\; a b}(\tau) (b^{b})^{*} b^a \nn &+&
\frac{J^2}{2 M} \sum_{s s'} G_{f \; s s'}^{\; a b}(\tau) [
R^{ab}_{s\sigma \sigma' s'}(\tau) + R^{ba}_{\sigma' s' s
\sigma}(-\tau) ] , \nn \\ \Sigma_{cf \; \sigma\sigma'}^{\;\;
ab}(\tau) &=& - \delta_{ab} \delta_{\sigma\sigma'}\delta(\tau)
\frac{V^2}{M} \sum_{s c} [\langle f^{\dagger c}_{s} c^{c}_{s}
\rangle b^c + {\rm c.c.} ] (b^{a})^{*} \nn &+& \frac{V^2}{M}
G_{fc \; \sigma\sigma'}^{\;\; ab}(\tau) (b^a b^b)^{*} , \\
\Sigma_{fc \; \sigma\sigma'}^{\;\; ab}(\tau) &=& - \delta_{ab}
\delta_{\sigma\sigma'}\delta(\tau) \frac{V^2}{M} \sum_{s c}
[\langle f^{\dagger c}_{s} c^{c}_{s} \rangle b^c + {\rm c.c.} ]
b^{a} \nn &+& \frac{V^2}{M} G_{cf \; \sigma\sigma'}^{\;\;
ab}(\tau) b^a b^b ,
\end{eqnarray} respectively, where local Green's functions are given by
\begin{eqnarray}
G_{c \; \sigma\sigma'}^{\; ab}(\tau) &=& - \langle T_c
c^{a}_{\sigma}(\tau) c^{\dagger b}_{\sigma'} (0) \rangle ,
\\
G_{f \; \sigma\sigma'}^{\; ab}(\tau) &=& - \langle T_c
f^{a}_{\sigma}(\tau) f^{\dagger b}_{\sigma'} (0) \rangle ,
\\
G_{cf \; \sigma\sigma'}^{\; ab}(\tau) &=& - \langle T_c
c^{a}_{\sigma}(\tau) f^{\dagger b}_{\sigma'} (0) \rangle ,
\\
G_{fc \; \sigma\sigma'}^{\; ab}(\tau) &=& - \langle T_c
f^{a}_{\sigma}(\tau) c^{\dagger b}_{\sigma'} (0) \rangle .
\end{eqnarray}
In the paramagnetic, replica-symmetric phase these Green's
functions are diagonal in the spin and replica indices, i.e.,
$G^{ab}_{x \sigma\sigma'}(\tau)=\delta_{ab}\delta_{\sigma\sigma'}
G_{x}(\tau)$ with $x=c,f,cf,fc$. Then, we obtain the Dyson
equation
\begin{widetext}
\begin{eqnarray}
\left(\begin{array}{cc} G_{c}(i \omega_l) & G_{fc}(i \omega_l) \\
G_{cf}(i \omega_l) & G_{f}(i \omega_l)
\end{array} \right) = \left( \begin{array}{cc}
i\omega_l + \mu - \Sigma_{c}(i \omega_l) & - \Sigma_{cf}(i
\omega_l) \\
- \Sigma_{fc}(i \omega_l) & i\omega_l - E_d -\lambda -
\Sigma_{f}(i \omega_l)
\end{array} \right)^{-1} ,
\end{eqnarray}
\end{widetext}
where $\omega_l=(2 l+1) \pi T$ with $l$ integer. Accordingly, Eqs.
(9)-(12) are simplified as follows
\begin{eqnarray}
\Sigma_{c}(i\omega_l) &=& \frac{V^2}{M} G_{f}(i\omega_l) |b|^2 +
\frac{t^2}{M^2} G_{c}(i\omega_l) , \label{Sigma_C_MFT} \\
\Sigma_{f}(i\omega_l) &=& \frac{V^2}{M} G_{c}(i\omega_l) |b|^2 +
\frac{J^2}{2 M} T \sum_{s} \sum_{\nu_m} G_{f}(i\omega_l-\nu_m) \nn
&\times& [R_{s\sigma\sigma s}(i\nu_m) + R_{\sigma s
s\sigma}(-i\nu_m) ] , \label{Sigma_F_MFT} \\
\Sigma_{cf}(i\omega_l) &=& \frac{V^2}{M} G_{fc}(i\omega_l)
(b^2)^{*} - n \frac{V^2}{M} (b^2)^{*} \sum_s \langle
f^{\dagger}_{s} c_{s}
+ c^{\dagger}_{s} f_{s} \rangle , \label{Sigma_CF_MFT} \nn \\
\Sigma_{fc}(i\omega_l) &=& \frac{V^2}{M} G_{cf}(i\omega_l) b^2 - n
\frac{V^2}{M} b^2 \sum_s \langle f^{\dagger}_{s} c_{s} +
c^{\dagger}_{s} f_{s} \rangle \label{Sigma_FC_MFT}
\end{eqnarray} in the frequency space.
Note that $n$ is the number of replicas, so the last terms in
Eqs.~(\ref{Sigma_CF_MFT})-(\ref{Sigma_FC_MFT}) vanish in the replica
limit $n \rightarrow 0$. $R_{s\sigma\sigma s}(i\nu_m)$ is the local
spin susceptibility, given by
\begin{eqnarray}
R_{\sigma s s \sigma}(\tau) = - G_{f \sigma}(-\tau) G_{f s}(\tau)
\label{Spin_Corr_MFT}
\end{eqnarray} whose Fourier transform yields the frequency-space susceptibility.
The self-consistent equation for boson condensation is
\begin{eqnarray}
&& b \Big[ \lambda + 2 V^2 T \sum_{\omega_l} G_{c}(i\omega_l)
G_{f}(i\omega_l) \nn && + V^2 T \sum_{\omega_l} \Bigl\{
G_{fc}(i\omega_l) G_{fc}(i\omega_l) + G_{cf}(i\omega_l)
G_{cf}(i\omega_l)\Bigr\} \Big] =0 . \label{Lambda_MFT} \nn
\end{eqnarray}
The constraint equation is given by
\begin{eqnarray}
|b|^2 + \sum_{\sigma} \langle f^{\dagger}_{\sigma} f_{\sigma}
\rangle = 1 . \label{Constraint_MFT}
\end{eqnarray}
The main difference between the clean and disordered cases is that
the off-diagonal Green's function $G_{fc}(i\omega_l)$ should
vanish in the presence of randomness in $V$ with its zero mean
value while it is proportional to the condensation $b$ when the
average value of $V$ is finite. In the present situation we find
$b^{a} = \langle f^{a\dagger}_{\sigma} c_{\sigma}^{a} \rangle = 0$
while $(b^{a})^{*}b^{b} = \langle f^{a\dagger}_{\sigma}
c_{\sigma}^{a} c_{\sigma'}^{b\dagger} f_{\sigma'}^{b} \rangle
\equiv |b|^{2} \delta_{ab} \not= 0$. As a result, Eqs.
(\ref{Sigma_CF_MFT}) and (\ref{Sigma_FC_MFT}) are identically
vanishing on both the left- and right-hand sides. This implies that the
Kondo phase is not characterized by holon condensation but by a
finite density of holons. It is important to notice that this
gauge-invariant order parameter does not cause any kind of symmetry
breaking for the Kondo effect, as it should be.
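The structure of these mean-field equations is summarized by the following schematic Python sketch of a single self-consistency update on the Matsubara axis, assuming the paramagnetic, replica-diagonal phase with $G_{cf} = G_{fc} = 0$ and taking the spin-fluctuation self-energy $\Sigma_J$ as an input supplied by the convolution in Eq.~(\ref{Sigma_F_MFT}):
\begin{verbatim}
import numpy as np

def update(Gc, Gf, Sigma_J, b2, Ed_lam, mu, V, t, M, T):
    # Arrays run over fermionic frequencies i w_l = i (2l+1) pi T;
    # Ed_lam = E_d + lambda is held fixed during the iteration.
    n = Gc.shape[0] // 2
    wl = (2 * np.arange(-n, n) + 1) * np.pi * T
    iw = 1j * wl
    Sc = (V**2 / M) * b2 * Gf + (t**2 / M**2) * Gc
    Sf = (V**2 / M) * b2 * Gc + Sigma_J
    return 1.0 / (iw + mu - Sc), 1.0 / (iw - Ed_lam - Sf)

# After convergence, b^2 follows from the constraint equation and
# lambda from the holon saddle-point equation; simple linear mixing
# between iterations stabilizes the loop.
\end{verbatim}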
\subsection{Numerical analysis}
We use an iteration method in order to solve the mean field
equations (\ref{Sigma_C_MFT}), (\ref{Sigma_F_MFT}),
(\ref{Sigma_CF_MFT}), (\ref{Sigma_FC_MFT}), (\ref{Lambda_MFT}),
and (\ref{Constraint_MFT}). For a given $E_d+\lambda$, we use
iterations to find all Green's functions from Eqs.
(\ref{Sigma_C_MFT})-(\ref{Sigma_FC_MFT}) with Eq.
(\ref{Spin_Corr_MFT}) and $b^2$ from Eq.~(\ref{Constraint_MFT}). Then,
we use Eq.~(\ref{Lambda_MFT}) to calculate $\lambda$ and $E_d$.
We adjust the value of $E_d+\lambda$ in order to obtain the
desired value for $E_d$. Using the obtained $\lambda$ and $b^2$,
we calculate the Green's functions on the real frequency axis by
iteration. In this calculation we introduce the
following functions \cite{Saso}
\begin{eqnarray}
\alpha_{\pm}(t)=\int_{-\infty}^{\infty} d\omega e^{-i \omega t}
\rho_{f}(\omega) f(\pm \omega/T),
\end{eqnarray}
where $\rho_{f}(\omega) = - {\rm Im} G_{f}(\omega+i0^{+})/\pi$ is
the density of states for f-electrons, and $f(x)=1/(\exp(x)+1)$ is
the Fermi-Dirac distribution function. Then, the self-energy
correction from spin correlations is expressed as follows
\begin{eqnarray}
&& \Sigma_{J}(i\omega_l) \equiv \frac{J^2}{2 M} T \sum_{s}
\sum_{\nu_m} G_{f}(i\omega_l-\nu_m) \nn && ~~~~~~~~~~ \times
[R_{s\sigma\sigma s}(i\nu_m) + R_{\sigma s s\sigma}(-i\nu_m) ] \nn
&& = - i J^2 \int_{0}^{\infty} d t e^{i\omega t} \Bigl( [
\alpha_{+}(t)]^2 \alpha_{-}^{*}(t) + [ \alpha_{-}(t)]^2
\alpha_{+}^{*}(t) \Bigr) . \nn
\end{eqnarray} Performing the Fourier transformation, we
calculate $\alpha_{\pm}(t)$ and obtain $\Sigma_{J}(\omega)$.
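As a minimal illustration of this real-frequency procedure, the
following Python sketch (ours, not the code used to produce the
figures) evaluates $\alpha_{\pm}(t)$ for a toy Lorentzian
$\rho_{f}(\omega)$ and then $\Sigma_{J}(\omega)$ by direct
quadrature; the density of states, grids, and parameters are
illustrative placeholders rather than a converged solution.
\begin{verbatim}
import numpy as np

# Illustrative parameters only (not a converged mean-field solution)
T, J, Gamma = 0.05, 0.5, 0.1
w = np.linspace(-8.0, 8.0, 1024)              # real-frequency grid
dw = w[1] - w[0]
rho_f = (Gamma / np.pi) / (w**2 + Gamma**2)   # toy Lorentzian DOS

def fermi(x):
    return 1.0 / (np.exp(np.clip(x, -60.0, 60.0)) + 1.0)

# alpha_pm(t) = int dw e^{-i w t} rho_f(w) f(+-w/T)
t = np.linspace(0.0, 200.0, 2000)
dt = t[1] - t[0]
phase = np.exp(-1j * np.outer(t, w))          # shape (Nt, Nw)
alpha_p = phase @ (rho_f * fermi(+w / T)) * dw
alpha_m = phase @ (rho_f * fermi(-w / T)) * dw

def Sigma_J(omega):
    # -i J^2 int_0^infty dt e^{i omega t}
    #        ( alpha_+^2 alpha_-^* + alpha_-^2 alpha_+^* )
    integrand = alpha_p**2 * np.conj(alpha_m) \
              + alpha_m**2 * np.conj(alpha_p)
    return -1j * J**2 * np.sum(np.exp(1j * omega * t) * integrand) * dt

print(Sigma_J(0.0))
\end{verbatim}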
\begin{figure}[h]
\includegraphics[width=0.48\textwidth]{crit.eps}
\caption{The phase diagram of the strongly disordered Anderson
lattice model in the DMFT approximation ($E_d=-1$, $\mu=0$,
$T=0.01$, $t=1$, $M=2$).} \label{fig1}
\end{figure}
\begin{figure}[h]
\includegraphics[width=0.48\textwidth]{imself.eps}
\caption{The imaginary part of the self-energy of conduction
electrons and that of localized electrons for various values of
$J$ ($V=0.5$, $E_d=-0.7$, $\mu=0$, $T=0.01$, $t=1$, $M$=2).}
\label{fig2}
\end{figure}
\begin{figure}[h]
\includegraphics[width=0.48\textwidth]{dosfcbw.eps}
\caption{Density of states of conduction ($\rho_{c}(\omega)$) and
localized ($\rho_{f}(\omega)$) electrons for various values of $J$
($V=0.5$, $E_d=-0.7$, $\mu=0$, $T=0.01$, $t=1$, $M=2$). }
\label{fig3}
\end{figure}
\begin{figure}[h]
\includegraphics[width=0.48\textwidth]{imchi.eps}
\caption{Local spin susceptibility for various values of $J$
($V=0.5$, $E_d=-0.7$, $\mu=0$, $T=0.01$, $t=1$, $M=2$).}
\label{fig4}
\end{figure}
Figure \ref{fig1} shows the phase diagram of the strongly
disordered Anderson lattice model in the plane of $(V, J)$, where
$V$ and $J$ are variances for the Kondo and RKKY interactions,
respectively. The phase boundary is characterized by $|b|^{2} =
0$; below it, $|b|^{2} \not= 0$ generates effective
hybridization between conduction electrons and localized fermions,
although our numerical analysis shows $\langle
f^{\dagger}_{\sigma} c_{\sigma} \rangle =0$, meaning
$\Sigma_{cf(fc)}(i\omega) = 0$ and $G_{cf(fc)}(i\omega) = 0$ in
Eqs. (\ref{Sigma_CF_MFT}) and (\ref{Sigma_FC_MFT}).
In Fig. \ref{fig2} one finds that the effective hybridization
dramatically enhances the scattering rate of conduction electrons
around the Fermi energy, while the scattering rate of localized
electrons is reduced at the resonance energy. The enhancement of
the imaginary part of the conduction-electron self-energy results
from the Kondo effect; in the clean situation it is given by a
delta function associated with the Kondo effect
\cite{Hewson_Book}. This self-energy effect is reflected in the
spectral functions shown in Fig. \ref{fig3}, where a pseudogap
feature arises for conduction electrons while a sharply defined
peak appears for localized electrons, identified with the Kondo
resonance, although the description of the Kondo effect differs
from the clean case. Increasing the RKKY coupling suppresses the
Kondo effect, as expected. In this Kondo phase the local spin
susceptibility, shown in Fig. \ref{fig4}, displays the typical
$\omega$-linear behavior in the low-frequency limit, which is
nothing but Fermi-liquid physics for spin correlations
\cite{Olivier}. Increasing $J$, incoherent spin correlations are
enhanced, consistent with spin liquid physics \cite{Olivier}.
One can check our calculation by considering the $J = 0$ limit,
which recovers the known result. In this limit we obtain an analytic
expression for $V_c$ at half filling ($\mu=0$),
\begin{eqnarray}
V_c(J=0) &=& \sqrt{\frac{E_d}{2 P_c }}, \\
P_c &=& \int_{-1}^{1} d\omega \rho_{0}(\omega)
\frac{f(\omega/T)-f(0)}{\omega} ,
\end{eqnarray}
where $\rho_{0}(\omega)=\frac{2}{\pi} \sqrt{1-\omega^2}$ is the
bare density of states of conduction electrons. One can check that
$V_c(J=0) \rightarrow 0$ in the zero-temperature limit because
$|P_{c}| \rightarrow \infty$.
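This limit is also easy to evaluate numerically. The short Python
check below (ours, for illustration only) computes $P_c$ by direct
quadrature and then $V_c(J=0)$ for the parameters of Fig.
\ref{fig1}; the grid is chosen so that $\omega = 0$ is not sampled,
although the integrand stays finite there.
\begin{verbatim}
import numpy as np

E_d, T = -1.0, 0.01                  # parameters of Fig. 1
w = np.linspace(-1.0, 1.0, 20000)    # even count: w = 0 not on the grid
dw = w[1] - w[0]
rho0 = (2.0 / np.pi) * np.sqrt(1.0 - w**2)

def fermi(x):
    return 1.0 / (np.exp(np.clip(x, -60.0, 60.0)) + 1.0)

# P_c = int_{-1}^{1} dw rho0(w) (f(w/T) - f(0)) / w,  with f(0) = 1/2
P_c = np.sum(rho0 * (fermi(w / T) - 0.5) / w) * dw
V_c = np.sqrt(E_d / (2.0 * P_c))     # E_d < 0 and P_c < 0, so V_c is real
print(P_c, V_c)
\end{verbatim}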
\section{Nature of quantum criticality}
\subsection{Beyond the saddle-point analysis : Non-crossing approximation}
Resorting to the slave-boson mean-field approximation, we
discussed the phase diagram of the strongly disordered Anderson
lattice model, where a quantum phase transition appears from a
spin liquid state to a dirty ``heavy-fermion'' Fermi liquid phase
upon increasing $V/J$, the ratio of the variances of the
hybridization and RKKY interactions. Differentiated from the
heavy-fermion quantum transition in the clean situation, the order
parameter turns out to be the density of holons instead of the
holon condensation.
Evaluating self-energies for both conduction electrons and
localized electrons, we could identify the Kondo effect from each
spectral function. In addition, we obtained the local spin
susceptibility consistent with the Fermi liquid physics.
The next task concerns the nature of quantum criticality between
the Kondo and spin liquid phases. This question should be
addressed beyond the saddle-point analysis. Introducing quantum
corrections within the non-crossing approximation, justified in the
$M\rightarrow \infty$ limit, we investigate the quantum critical
point, where density fluctuations of holons are critical.
Relaxing the slave-boson mean-field approximation to take
holon excitations into account, we reach the following
self-consistent equations for self-energy corrections,
\begin{eqnarray}
\Sigma_{c \;\sigma\sigma'}^{\;ab}(\tau) = \frac{V^2}{M} G_{f \;
\sigma\sigma'}^{\; a b}(\tau) G_{b}^{a b}(-\tau) + \frac{t^2}{M^2}
\delta_{\sigma\sigma'} G_{c \; \sigma}^{\; a b}(\tau) ,
\label{Sigma_C_NCA}
\end{eqnarray}
\begin{eqnarray}
\Sigma_{f \;\sigma\sigma'}^{\;ab}(\tau) &=& \frac{V^2}{M} G_{c \;
\sigma\sigma'}^{\; a b}(\tau) G_{b}^{a b}(\tau) \nn &+&
\frac{J^2}{2 M} \sum_{s s'} G_{f \; s s'}^{\; a b}(\tau) [
R^{ab}_{s\sigma \sigma' s'}(\tau) + R^{ba}_{\sigma' s' s
\sigma}(-\tau) ] , \label{Sigma_F_NCA} \nn
\end{eqnarray}
\begin{eqnarray}
\Sigma_{cf \; \sigma\sigma'}^{\;\; ab}(\tau) = - \delta_{ab}
\delta_{\sigma\sigma'}\delta(\tau) \frac{V^2}{M} \sum_{s c} \int
d\tau_1 \langle f^{\dagger c}_{s} c^{c}_{s} \rangle G_{b}^{c
a}(\tau_1-\tau') , \label{Sigma_CF_NCA} \nn
\end{eqnarray}
\begin{eqnarray}
\Sigma_{fc \; \sigma\sigma'}^{\;\; ab}(\tau) = - \delta_{ab}
\delta_{\sigma\sigma'}\delta(\tau) \frac{V^2}{M} \sum_{s c}\int
d\tau_1 \langle c^{\dagger c}_{s} f^{c}_{s} \rangle G_{b}^{a
c}(\tau-\tau_1) , \label{Sigma_FC_NCA} \nn
\end{eqnarray}
\begin{eqnarray}
\Sigma_{b}^{a b}(\tau) = \frac{V^2}{M} \sum_{\sigma\sigma'} G_{f
\; \sigma\sigma'}^{\; b a}(\tau) G_{c \; \sigma'\sigma}^{\; b
a}(-\tau) . \label{Sigma_B_NCA}
\end{eqnarray}
Since we considered the paramagnetic and replica symmetric phase,
it is natural to assume such symmetries at the quantum critical
point. Note that the off diagonal self-energies,
$\Sigma_{cf}(i\omega_l)$ and $\Sigma_{fc}(i\omega_l)$, are just
constants and proportional to $\langle f^{\dagger}_{\sigma}
c_{\sigma} \rangle$ and $\langle c^{\dagger}_{\sigma} f_{\sigma}
\rangle$, respectively. As a result, $\Sigma_{cf}(i\omega_l) =
\Sigma_{fc}(i\omega_l) = 0$ should be satisfied at the quantum
critical point, as in the Kondo phase, because $\langle
f^{\dagger}_{\sigma} c_{\sigma} \rangle = \langle
c^{\dagger}_{\sigma} f_{\sigma} \rangle = 0$. Then we reach the
following self-consistent equations, called the non-crossing
approximation:
\begin{eqnarray}
\Sigma_{c}(\tau) &=& \frac{V^2}{M} G_{f}(\tau) G_{b}(-\tau) +
\frac{t^2}{M^2} G_{c}(\tau) ,
\label{Sigma_C_NCA_GF} \\
\Sigma_{f}(\tau) &=& \frac{V^2}{M} G_{c}(\tau) G_{b}(\tau) - J^2
[G_{f}(\tau)]^2 G_{f}(-\tau) , \label{Sigma_F_NCA_GF} \\
\Sigma_{b}(\tau) &=& V^2 G_{c}(-\tau) G_{f}(\tau) .
\label{Sigma_B_NCA_GF}
\end{eqnarray}
Local Green's functions are given by
\begin{eqnarray}
G_{c}(i\omega_l) &=& \Big[i\omega_l + \mu - \Sigma_{c}(i\omega_l)
\Big]^{-1} , \label{Dyson_Gc} \\
G_{f}(i\omega_l) &=& \Big[i\omega_l - E_d -\lambda -
\Sigma_{f}(i\omega_l) \Big]^{-1} , \label{Dyson_Gf} \\
G_{b}(i \nu_{l}) &=& \Big[ i\nu_{l} -\lambda -\Sigma_{b}(i\nu_l)
\Big]^{-1} , \label{Dyson_Gb}
\end{eqnarray}
where $\omega_l=(2 l+1) \pi T$ is for fermions and $\nu_{l} = 2 l
\pi T$ is for bosons.
\subsection{Asymptotic behavior at zero temperature }
At quantum criticality, power-law scaling solutions are
expected. Indeed, if the second term is neglected in Eq.
(\ref{Sigma_F_NCA_GF}), Eqs. (\ref{Sigma_F_NCA_GF}) and
(\ref{Sigma_B_NCA_GF}) are reduced to those of the multi-channel
Kondo effect in the non-crossing approximation \cite{Hewson_Book}.
Power-law solutions are well known in the regime of $1/T_K \ll
\tau \ll \beta=1/T \rightarrow \infty$, where $T_{K} =
D[\Gamma_{c}/\pi D]^{1/M} \exp[\pi E_{d}/M \Gamma_{c}]$ is an
effective Kondo temperature \cite{Tien_Kim} with the conduction
bandwidth $D$ and effective hybridization $\Gamma_{c} = \pi
\rho_{c} \frac{V^{2}}{M}$. In the presence of the RKKY interaction
[the second term in Eq. (\ref{Sigma_F_NCA_GF})], the effective
hybridization will be reduced, where $\Gamma_{c}$ is replaced with
$\Gamma_{c}^{J} \approx \pi \rho_{c} (\frac{V^{2}}{M} - J^{2})$.
Our power-law ansatz is as follows
\begin{eqnarray}
G_{c} &=& \frac{A_c}{\tau^{\Delta_c}} , \\
G_{f} &=& \frac{A_f}{\tau^{\Delta_f}} , \\
G_{b} &=& \frac{A_b}{\tau^{\Delta_b}} ,
\end{eqnarray} where $A_{c}$, $A_{f}$, and $A_{b}$ are positive
numerical constants. In the frequency space these are
\begin{eqnarray}
G_{c}(\omega) &=& A_c C_{\Delta_{c}-1} \omega^{\Delta_c-1}, \label{Dyson_W_Gc} \\
G_{f}(\omega) &=& A_f C_{\Delta_{f}-1} \omega^{\Delta_f-1}, \label{Dyson_W_Gf} \\
G_{b}(\omega) &=& A_b C_{\Delta_{b}-1} \omega^{\Delta_b-1},
\label{Dyson_W_Gb}
\end{eqnarray}
where $C_{\Delta_{c,f,b}} = \int_{-\infty}^{\infty} d x \frac{e^{i
x}}{x^{\Delta_{c,f,b}+1}}.$
Inserting Eqs. (\ref{Dyson_W_Gc})-(\ref{Dyson_W_Gb}) into Eqs.
(\ref{Sigma_C_NCA_GF})-(\ref{Sigma_B_NCA_GF}), we obtain scaling
exponents $\Delta_{c}$, $\Delta_{f}$, and $\Delta_{b}$. In
appendix C-1 we show in detail how to find such critical
exponents. Two fixed points are allowed. One coincides with the
multi-channel Kondo effect, given by $\Delta_{c} = 1$,
$\Delta_{f} = \frac{M}{M+1}$, and $\Delta_{b} = \frac{1}{M+1}$ with $M
= 2$, where contributions from spin fluctuations to self-energy
corrections are irrelevant compared with holon fluctuations. The
other is $\Delta_{c} = 1$ and $\Delta_{f} = \Delta_{b} =
\frac{1}{2}$, where spin correlations are as critical as
holon fluctuations.
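The exponent counting behind these two fixed points can be sketched
as follows; this is a heuristic reconstruction of the analysis
presented in appendix C-1. If $G_{\alpha}(\tau) \propto
\tau^{-\Delta_{\alpha}}$ and $\Sigma_{\alpha}(\tau) \propto
\tau^{-\Delta_{\Sigma_{\alpha}}}$, then $G_{\alpha}(\omega) \propto
\omega^{\Delta_{\alpha}-1}$ and $\Sigma_{\alpha}(\omega) \propto
\omega^{\Delta_{\Sigma_{\alpha}}-1}$, and matching the singular parts
of the Dyson equations, $G_{\alpha}(\omega) \sim
1/\Sigma_{\alpha}(\omega)$, gives
\begin{eqnarray}
\Delta_{\alpha} + \Delta_{\Sigma_{\alpha}} = 2 . \nonumber
\end{eqnarray}
Eq. (\ref{Sigma_B_NCA_GF}) yields $\Delta_{\Sigma_{b}} = \Delta_{c} +
\Delta_{f}$, hence $\Delta_{b} + \Delta_{c} + \Delta_{f} = 2$, i.e.,
$\Delta_{f} + \Delta_{b} = 1$ for $\Delta_{c} = 1$. When the holon
term dominates Eq. (\ref{Sigma_F_NCA_GF}), $\Delta_{\Sigma_{f}} =
\Delta_{c} + \Delta_{b}$ reproduces the same relation, which is
satisfied by the multi-channel values $\Delta_{f} = \frac{M}{M+1}$
and $\Delta_{b} = \frac{1}{M+1}$; when the spin term is equally
relevant, $\Delta_{\Sigma_{f}} = 3 \Delta_{f}$ forces $\Delta_{f} =
\frac{1}{2}$ and thus $\Delta_{b} = \frac{1}{2}$.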
One can understand the critical exponent $\Delta_{f} = 1/2$ as a
proximity effect of the spin liquid physics \cite{Sachdev_SG}.
Considering the $V \rightarrow 0$ limit, we obtain the scaling
exponents $\Delta_c = 1$ and $\Delta_f = 1/2$ from the scaling
equations (\ref{92}) and (\ref{93}). Thus, $G_{c}(\omega) \sim
\mbox{sgn}(\omega)$ and $G_{f}(\omega) \sim 1/\sqrt{\omega}$
result for $\omega \rightarrow 0$. In this respect both spin
fluctuations and holon excitations are critical with equal strength
at this quantum critical point.
\subsection{Finite temperature scaling behavior}
We solve Eqs. (\ref{Sigma_C_NCA_GF})-(\ref{Sigma_B_NCA_GF}) in the
regime $\tau, \beta \gg 1/T_K$ with arbitrary $\tau/\beta$, where
the scaling ansatz at zero temperature is generalized as follows
\begin{eqnarray}
G_{c}(\tau) &=& A_{c} \beta^{-\Delta_{c}}
g_{c}\Big(\frac{\tau}{\beta} \Big) , \label{Dyson_T_Gc} \\
G_{f}(\tau) &=& A_{f} \beta^{-\Delta_{f}}
g_{f}\Big(\frac{\tau}{\beta} \Big) , \label{Dyson_T_Gf} \\
G_{b}(\tau) &=& A_{b} \beta^{-\Delta_{b}}
g_{b}\Big(\frac{\tau}{\beta} \Big) . \label{Dyson_T_Gb}
\end{eqnarray}
Here
\begin{eqnarray}
g_{\alpha}(x) = \bigg(\frac{\pi}{\sin(\pi
x)}\bigg)^{\Delta_\alpha} \label{T_Scaling}
\end{eqnarray}
with $\alpha=c,f,b$ is the scaling function at finite
temperatures. In the frequency space we obtain
\begin{eqnarray}
G_{c}(i\omega_l) &=& A_c \beta^{1-\Delta_c}
\Phi_c(i\bar{\omega}_l) , \label{Dyson_TW_Gc} \\
G_{f}(i\omega_l) &=& A_f \beta^{1-\Delta_f}
\Phi_f(i\bar{\omega}_l) , \label{Dyson_TW_Gf} \\
G_{b}(i\nu_l) &=& A_b \beta^{1-\Delta_b} \Phi_b(i\bar{\nu}_l) ,
\label{Dyson_TW_Gb}
\end{eqnarray}
where $\bar{\omega}_l=(2 l+1) \pi$, $\bar{\nu}_l= 2 l \pi$, and
\begin{eqnarray}
\Phi_{\alpha}(i\bar{x}) = \int_{0}^{1} d t e^{i \bar{x} t}
g_{\alpha}(t) . \label{Phi_alpha}
\end{eqnarray}
Inserting Eqs. (\ref{Dyson_TW_Gc})-(\ref{Dyson_TW_Gb}) into Eqs.
(\ref{Sigma_C_NCA_GF})-(\ref{Sigma_B_NCA_GF}), we find two fixed
points, essentially the same as in the $T = 0$ case. However, the
scaling functions $\Phi_c(i\bar{\omega}_l)$, $\Phi_f(i\bar{\omega}_l)$,
and $\Phi_b(i\bar{\nu}_l)$ are somewhat complicated. All
scaling functions are derived in appendix C-2.
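For $\Delta_{\alpha} < 1$ the integral \eqref{Phi_alpha} can also be
evaluated by direct quadrature, which provides a quick numerical
check of the derived scaling functions. The Python sketch below
(ours, purely illustrative) evaluates $\Phi_{\alpha}$ at the lowest
Matsubara points for $\Delta_{\alpha} = 1/2$; for $\Delta_{\alpha} =
1$ the endpoint singularity is not integrable and this naive
quadrature does not apply.
\begin{verbatim}
import numpy as np

def Phi(x_bar, Delta, n=200000):
    # Phi_alpha(i x_bar) = int_0^1 dt e^{i x_bar t} (pi/sin(pi t))^Delta
    # Midpoint rule avoids t = 0, 1; the endpoint singularity
    # t^(-Delta) is integrable for Delta < 1, so the sum converges.
    t = (np.arange(n) + 0.5) / n
    g = (np.pi / np.sin(np.pi * t)) ** Delta
    return np.sum(np.exp(1j * x_bar * t) * g) / n

# lowest fermionic and bosonic points for Delta = 1/2
print(Phi(np.pi, 0.5), Phi(2.0 * np.pi, 0.5))
\end{verbatim}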
\subsection{Spin susceptibility}
We evaluate the local spin susceptibility, given by
\begin{eqnarray}
\chi(\tau) &=& G_{f}(\tau) G_{f}(-\tau) , \nonumber \\
&=& A_f^2 \beta^{-2 \Delta_f} \bigg(\frac{\pi}{\sin(\pi
\tau/\beta)} \bigg)^{2\Delta_f} . \label{126}
\end{eqnarray}
The imaginary part of the spin susceptibility
$\chi^{''}(\omega)={\rm Im} \; \chi(\omega+ i0^{+})$ can be found
from
\begin{eqnarray}
\chi(\tau) = \int \frac{d \omega}{\pi} \frac{e^{-\tau
\omega}}{1-e^{-\beta \omega}} \chi^{''}(\omega) . \label{127}
\end{eqnarray}
Inserting the scaling ansatz
\begin{eqnarray}
\chi^{''}(\omega) = A_f^2 \beta^{1-2\Delta_f}
\phi\Big(\frac{\omega}{T}\Big) \label{128}
\end{eqnarray}
into Eq. (\ref{127}) with Eq. (\ref{126}), we obtain
\begin{eqnarray}
\int \frac{d x}{\pi} \frac{e^{-x \tau/\beta}}{1-e^{-x}} \phi(x) =
\bigg(\frac{\pi}{\sin(\pi \tau/\beta)} \bigg)^{2\Delta_f} .
\end{eqnarray}
Changing the variable $t=i(\tau/\beta -1/2)$, we obtain
\begin{eqnarray}
\int \frac{d x}{\pi} e^{i x t} \frac{\phi(x)}{e^{x}-e^{-x}} =
\bigg(\frac{\pi}{\cosh(\pi t)} \bigg)^{2\Delta_f} .
\end{eqnarray}
As a result, we find the scaling function
\begin{eqnarray}
\phi(x) = 2 (2\pi)^{2 \Delta_f-1} \sinh\Big(\frac{x}{2}\Big)
\frac{\Gamma(\Delta_f+i x/2 \pi)\Gamma(\Delta_f - i
x/2\pi)}{\Gamma(2\Delta_f)} . \nn
\end{eqnarray}
This coincides with the spin spectrum of the spin liquid state
when $V = 0$ \cite{Olivier}.
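At the second fixed point, where $2\Delta_f = 1$, this scaling
function reduces to a simple closed form: using the reflection
identity $\Gamma(1/2+iy)\Gamma(1/2-iy)=\pi/\cosh(\pi y)$ together
with $(2\pi)^{2\Delta_f-1}=\Gamma(2\Delta_f)=1$, we obtain
\begin{eqnarray}
\phi(x) = 2 \sinh\Big(\frac{x}{2}\Big) \frac{\pi}{\cosh(x/2)}
= 2\pi \tanh\Big(\frac{x}{2}\Big) , \nonumber
\end{eqnarray}
which is linear in $x = \omega/T$ for $|x| \ll 1$ and saturates for
$|x| \gg 1$, making the $\omega/T$ scaling explicit.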
\subsection{Discussion : Deconfined local quantum criticality}
The local quantum critical point characterized by $\Delta_{c} = 1$
and $\Delta_{f} = \Delta_{b} = 1/2$ is the genuine critical point
in the spin-liquid to local Fermi-liquid transition because such a
fixed point can be connected to the spin liquid state ($\Delta_{c}
= 1$ and $\Delta_{f} = 1/2$) naturally. This fixed point results
from the fact that the spinon self-energy correction from RKKY
spin fluctuations is of exactly the same order as that from critical
holon excitations. It is straightforward to see that the critical
exponent of the local spin susceptibility is exactly the same as
that of the local charge susceptibility ($2\Delta_{f} =
2\Delta_{b} = 1$), both proportional to $1/\tau$. Since the spinon
spin-density operator differs from the holon charge-density
operator with respect to symmetry at the lattice scale, the same
critical exponent implies enhancement of the original symmetry at
low energies. The symmetry enhancement sometimes allows a
topological term, which assigns a nontrivial quantum number to a
topological soliton, identified with an excitation of quantum
number fractionalization. This mathematical structure is actually
realized in an antiferromagnetic spin chain \cite{Tsvelik_Book},
and has been generalized to the two-dimensional case
\cite{Senthil_DQCP,Tanaka_SO5}.
We propose the following local field theory in terms of physically
observable fields \bqa Z_{eff} &=& \int D
\boldsymbol{\Psi}^{a}(\tau)
\delta\Bigl(|\boldsymbol{\Psi}^{a}(\tau)|^{2} - 1\Bigr) e^{-
\mathcal{S}_{eff}} , \nn \mathcal{S}_{eff} &=& - \frac{g^{2}}{2M}
\int_{0}^{\beta} d \tau \int_{0}^{\beta} d \tau'
\boldsymbol{\Psi}^{a T}(\tau)
\boldsymbol{\Upsilon}^{ab}(\tau-\tau')
\boldsymbol{\Psi}^{b}(\tau') \nn &+& \mathcal{S}_{top} ,
\label{O4_Sigma_Model} \eqa where \bqa &&
\boldsymbol{\Psi}^{a}(\tau) = \left(
\begin{array}{c} \boldsymbol{S}^{a}(\tau) \\ \rho^{a}(\tau)
\end{array} \right) \eqa represents an $O(4)$ vector, satisfying
the constraint of the delta function.
$\boldsymbol{\Upsilon}^{ab}(\tau-\tau')$ determines dynamics of
the $O(4)$ vector, resulting from spin and holon dynamics in
principle. However, it is extremely difficult to derive Eq.
(\ref{O4_Sigma_Model}) from Eq. (\ref{DMFT_Action}) because the
density part for the holon field in Eq. (\ref{O4_Sigma_Model})
cannot result from Eq. (\ref{DMFT_Action}) in a standard way. What
we have shown is that the renormalized dynamics for the O(4)
vector field follows $1/\tau$ asymptotically, where $\tau$ is the
imaginary time. This information should be introduced in
$\boldsymbol{\Upsilon}^{ab}(\tau-\tau')$. $g \propto V/J$ is an
effective coupling constant, and $\mathcal{S}_{top}$ is a possible
topological term.
One can represent the O(4) vector generally as follows
\begin{widetext} \bqa \boldsymbol{\Psi}^{a} : \tau \longrightarrow
\Bigl( \sin \theta^{a}(\tau) \sin \phi^{a}(\tau) \cos
\varphi^{a}(\tau) , \sin \theta^{a}(\tau) \sin \phi^{a}(\tau) \sin
\varphi^{a}(\tau) , \sin \theta^{a}(\tau) \cos \phi^{a}(\tau) ,
\cos \theta^{a}(\tau) \Bigr) , \label{O4_Vector} \eqa
\end{widetext} where $\theta^{a}(\tau), \phi^{a}(\tau),
\varphi^{a}(\tau)$ are three angle coordinates for the O(4)
vector. It is essential to observe that the target manifold for
the O(4) vector is not simply a sphere, but more complicated,
because the last component of the O(4) vector is the charge
density field: the three spin components lie in $- 1 \leq
S^{a}_{x}(\tau), S^{a}_{y}(\tau), S^{a}_{z}(\tau) \leq 1$, while
the charge density must be positive, $0 \leq \rho^{a}(\tau) \leq
1$. This leads us to identify the lower half sphere with the upper
half sphere. Since $\sin\theta^{a}(\tau)$ can be folded about
$\pi/2$, we construct our target manifold with the periodicity
$\boldsymbol{\Psi}^{a}(\theta^{a},\phi^{a},\varphi^{a}) =
\boldsymbol{\Psi}^{a}(\pi - \theta^{a},\phi^{a},\varphi^{a})$.
This folded space allows a nontrivial topological excitation.
Consider a configuration with boundary values
$\boldsymbol{\Psi}^{a}(0,\phi^{a},\varphi^{a}; \tau = 0)$ and
$\boldsymbol{\Psi}^{a}(\pi,\phi^{a},\varphi^{a}; \tau = \beta)$,
connected by $\boldsymbol{\Psi}^{a}(\pi/2,\phi^{a},\varphi^{a}; 0
< \tau < \beta)$. Interestingly, this configuration is {\it
topologically} distinguishable from the configuration of
$\boldsymbol{\Psi}^{a}(0,\phi^{a},\varphi^{a}; \tau = 0)$ and
$\boldsymbol{\Psi}^{a}(0,\phi^{a},\varphi^{a}; \tau = \beta)$ with
$\boldsymbol{\Psi}^{a}(\pi/2,\phi^{a},\varphi^{a}; 0 < \tau <
\beta)$ because of the folded structure. The second configuration
shrinks to a point while the first cannot; the first is therefore
identified with a topologically nontrivial excitation. This topological
excitation carries a spin quantum number $1/2$ in its core, given
by $\boldsymbol{\Psi}^{a}(\pi/2,\phi^{a},\varphi^{a}; 0 < \tau <
\beta) = \Bigl( \sin \phi^{a}(\tau) \cos \varphi^{a}(\tau) , \sin
\phi^{a}(\tau) \sin \varphi^{a}(\tau) , \cos \phi^{a}(\tau) , 0
\Bigr)$. This is the spinon excitation, described by an O(3)
nonlinear $\sigma$ model with the nontrivial spin correlation
function $\boldsymbol{\Upsilon}^{ab}(\tau-\tau')$, where the
topological term is reduced to the single spin Berry phase term in
the instanton core.
In this local impurity picture the local Fermi liquid phase is
described by gapping of instantons while the spin liquid state is
characterized by condensation of instantons. Of course, the low
dimensionality does not allow condensation, resulting in critical
dynamics for spinons. This scenario clarifies the
Landau-Ginzburg-Wilson forbidden duality between the Kondo singlet
and the critical local moment for the impurity state, allowed by
the presence of the topological term.
If the symmetry enhancement does not occur, the effective local
field theory will be given by \bqa Z_{eff} &=& \int
D\boldsymbol{S}^{a}(\tau) D \rho^{a}(\tau) e^{- \mathcal{S}_{eff}}
, \nn \mathcal{S}_{eff} &=& - \int_{0}^{\beta} d \tau
\int_{0}^{\beta} d \tau' \Bigl\{ \frac{V^{2}}{2M} \rho^{a}(\tau)
\chi^{ab}(\tau-\tau') \rho^{b}(\tau') \nn &+& \frac{J^{2}}{2M}
\boldsymbol{S}^{a}(\tau) R^{ab} (\tau-\tau')
\boldsymbol{S}^{b}(\tau') \Bigr\} + \mathcal{S}_{B} \eqa with the
single-spin Berry phase term \bqa \mathcal{S}_{B} = - 2 \pi i S
\int_{0}^{1} d u \int_{0}^{\beta} d \tau \frac{1}{4\pi}
\boldsymbol{S}^{a}(u,\tau)
\partial_{u} \boldsymbol{S}^{a}(u,\tau) \times
\partial_{\tau} \boldsymbol{S}^{a}(u,\tau) , \nonumber \eqa where charge
dynamics $\chi^{ab}(\tau-\tau')$ will be different from spin
dynamics $R^{ab} (\tau-\tau')$. This does not allow spin
fractionalization for the critical impurity dynamics, since the
instanton construction is not realized in the absence of the
symmetry enhancement.
\section{Summary}
In this paper we have studied the Anderson lattice model with
strong randomness in both hybridization and RKKY interactions,
where their average values are zero. In the absence of random
hybridization, quantum fluctuations in the spin dynamics render the
spin glass phase unstable at finite temperatures, giving rise to the
spin liquid state, characterized by the $\omega/T$-scaling spin
spectrum consistent with the marginal Fermi-liquid phenomenology
\cite{Sachdev_SG}. In the absence of random RKKY interactions the
Kondo effect arises \cite{Kondo_Disorder}, but it differs from
that in the clean case. The dirty ``heavy fermion'' phase at
strongly disordered Kondo coupling is characterized by a finite
density of holons instead of the holon condensation. Nevertheless,
effective hybridization does exist, causing the Kondo resonance
peak in the spectral function. As long as the variation of the
effective Kondo temperature is not too large, this disordered Kondo
phase is identified with the local Fermi liquid state because the
essential physics results from single-impurity dynamics,
differentiated from the clean lattice model.
Taking into account both random hybridization and RKKY
interactions, we find the quantum phase transition from the spin
liquid state to the local Fermi liquid phase at the critical
$(V_{c}, J_{c})$. Each phase turns out to be adiabatically
connected with the corresponding limit, i.e., the spin liquid phase
when $V = 0$ and the local Fermi liquid phase when $J = 0$,
respectively. We have verified this by examining the local spin
susceptibility and the spectral function for localized electrons.
In order to investigate quantum critical physics, we introduce
quantum corrections from critical holon fluctuations in the
non-crossing approximation beyond the slave-boson mean-field
analysis. We find two kinds of power-law scaling solutions for
self-energy corrections of conduction electrons, spinons, and
holons. The first solution turns out to coincide with that of the
multi-channel Kondo effect, where effects of spin fluctuations are
sub-leading compared with critical holon fluctuations. In this
respect this quantum critical point is characterized by breakdown
of the Kondo effect while spin fluctuations can be neglected. On
the other hand, the second scaling solution shows that holon
excitations and spinon fluctuations are critical with equal
strength, reflected in the fact that the density-density
correlation function of holons has exactly the same critical
exponent as the local spin-spin correlation function of spinons.
We argued that the second quantum critical point implies an
enhanced emergent symmetry from O(3)$\times$O(2)
(spin$\otimes$charge) to O(4) at low energies, forcing us to
construct an O(4) nonlinear $\sigma$ model on the folded target
manifold as an effective field theory for this disorder-driven
local quantum critical point. Our effective local field theory
identifies spinons with instantons, describing the local
Fermi-liquid to spin-liquid transition as a condensation
transition of instantons, although the instanton dynamics remains
critical in the spin liquid state instead of condensing, due to the
low dimensionality. This construction completes a novel duality
between the Kondo and critical local moment phases in the strongly
disordered Anderson lattice model.
We explicitly checked that a similar result can be found in the
extended DMFT for the clean Kondo lattice model, where two
fixed-point solutions are allowed \cite{EDMFT_Spin,EDMFT_NCA}. One is
the same as the multi-channel Kondo effect and the other is
essentially the same as the second solution in this paper. In this
respect we believe that the present scenario works in the extended
DMFT framework, although it is applicable only to two spatial
dimensions \cite{EDMFT}.
One may question the applicability of the DMFT framework to this
disorder problem. However, the hybridization term turns out to be
exactly local in the case of strong randomness, while the RKKY term
is safely approximated as local for the spin liquid state, which is
expected to be stable against the spin glass phase in the case of
quantum spins. This situation should be distinguished from the
clean case, where the DMFT approximation causes several problems,
such as the stability of the spin liquid state \cite{EDMFT_Rosch}
and the strong dependence of spin dynamics on dimensionality
\cite{EDMFT}.
\section*{Acknowledgement}
This work was supported by the National Research Foundation of
Korea (NRF) grant funded by the Korea government (MEST) (No.
2010-0074542). M.-T. was also supported by the Vietnamese
NAFOSTED.
\section{Introduction}
The moment problem is a classical question in analysis, well studied because of its
importance and variety of applications. A simple example is the (univariate) Hamburger
moment problem: when does a given sequence of real numbers represent the successive
moments $\int\! x^n\, d\mu(x)$ of a positive Borel measure $\mu$ on $\mathbb R$?
Equivalently, which linear functionals $L$ on univariate real polynomials are
integration with respect to some $\mu$? By Haviland's theorem \cite{Hav}
this is the case if and only if $L$ is nonnegative on all polynomials nonnegative on
$\mathbb R$. Thus Haviland's theorem relates the moment problem to positive polynomials. It
holds in several variables and also if we are interested in restricting the support of
$\mu$. For details we refer the reader to one of the many beautiful expositions of this
classical branch of functional analysis, e.g.~\cite{Akh,KN,ST}.
Since Schm\"udgen's celebrated solution of the moment problem
on compact basic closed semialgebraic sets \cite{Smu},
the moment problem has played a prominent role in real algebra,
exploiting this duality between positive polynomials and the
moment problem, cf.~\cite{KM,PS,Put,PV}.
The survey of Laurent \cite{laurent2} gives a nice presentation of
up-to-date results and applications;
see also \cite{Mar,PD} for more on positive polynomials.
Our main motivation comes from trace-positive polynomials in non-commuting
variables. A polynomial is called \emph{trace-positive} if all
its matrix evaluations (of \emph{all} sizes) have nonnegative trace.
Trace-positive polynomials have been employed to investigate
problems on
operator algebras (Connes' embedding conjecture \cite{connes,ksconnes})
and mathematical physics (the Bessis-Moussa-Villani conjecture
\cite{bmv,ksbmv}), so a good understanding of this set is desired.
By duality this leads us to consider the tracial moment problem
introduced below.
We mention that the free non-commutative moment problem
has been studied and solved by
McCullough \cite{McC} and Helton \cite{helton}.
Hadwin \cite{had} considered
moments involving traces on von Neumann algebras.
This paper is organized as follows. The short Section \ref{sec:basic}
fixes notation and terminology involving non-commuting variables used in the sequel.
Section \ref{sec:ttmp} introduces
tracial moment sequences,
tracial moment matrices,
the tracial moment problem, and their truncated counterparts.
Our main results in this section relate the truncated tracial moment problem
to flat extensions of tracial moment matrices and resemble the
results of Curto and Fialkow \cite{cffinite,cfflat} on the (classical)
truncated moment problem. For example,
we prove
that a tracial sequence can be represented with tracial moments of
matrices
if its corresponding tracial moment matrix is positive semidefinite and of finite
rank (Theorem \ref{thm:finiterank}).
A truncated tracial sequence allows for such a representation
if and only if one of its extensions admits a flat extension (Corollary
\ref{cor:flatt}).
Finally, in Section \ref{sec:poly} we
explore the duality
between the tracial moment problem and trace-positivity of polynomials.
Throughout the paper several examples are given
to illustrate the theory.
\section{Basic notions}\label{sec:basic}
Let $\mathbb R\ax$ denote the unital associative $\mathbb R$-algebra freely generated
by $\ushort X=(X_1,\dots,X_n)$. The elements of $\mathbb R\ax$ are polynomials in the non-commuting
variables $X_1,\dots,X_n$ with coefficients in $\mathbb R$.
An element $w$ of the monoid $\ax$, freely generated by $\ushort X$,
is called a \textit{word}. An element of the form $aw$, where $0\neq a\in\mathbb R$
and $w\in\ax$, is called a \textit{monomial} and $a$ its \textit{coefficient}.
We endow $\mathbb R\ax$ with the \textit{involution} $p\mapsto p^*$ fixing $\mathbb R\cup\{\ushort X\}$
pointwise. Hence for each word $w\in\ax$, $w^*$ is its reverse. As an example, we have
$(X_1X_2^2-X_2X_1)^*=X_2^2X_1-X_1X_2$.
For $f\in\mathbb R\ax$ we will substitute symmetric matrices
$\ushort A=(A_1,\dots A_n)$ of the same size for the variables $\ushort X$
and obtain a matrix $f(\ushort A)$. Since $f(\ushort A)$ is
not well-defined if the $A_i$ do not have the
same size, we will assume this condition implicitly without further mention in the sequel.
Let $\sym \mathbb R\ax$ denote the set of \emph{symmetric elements} in $\mathbb R\ax$, i.e.,
$$\sym \mathbb R\ax=\{f\in \mathbb R\ax\mid f^*=f\}.$$
Similarly, we use $\sym \mathbb R^{t\times t}$ to denote the set of all symmetric $t\times t$ matrices.
In this paper we will mostly consider the \emph{normalized} trace $\Tr$,
i.e.,
$$\Tr(A)=\frac 1t\tr(A)\quad\text{for } A\in\mathbb R^{t\times t}.$$
The invariance of the trace under cyclic permutations motivates the
following definition of cyclic equivalence \cite[p.~1817]{ksconnes}.
\begin{dfn}
Two polynomials $f,g\in \mathbb R\ax$ are \emph{cyclically equivalent}
if $f-g$ is a sum of commutators:
$$f-g=\sum_{i=1}^k(p_iq_i-q_ip_i) \text{ for some } k\in\mathbb N
\text{ and } p_i,q_i \in \mathbb R\ax.$$
\end{dfn}
\begin{remark}\label{rem:csim}
\mbox{}\par
\begin{enumerate}[(a)]
\item Two words $v,w\in\ax$ are cyclically equivalent if and only if $w$
is a cyclic permutation of $v$.
Equivalently: there exist $u_1,u_2\in\ax$ such that
$v=u_1u_2$ and $w=u_2u_1$.
\item If $f\stackrel{\mathrm{cyc}}{\thicksim} g$ then $\Tr(f(\ushort A))=\Tr(g(\ushort A))$ for all tuples
$\ushort A$ of symmetric matrices.
Less obvious is the converse: if $\Tr(f(\ushort A))=\Tr(g(\ushort A))$
for all $\ushort A$ and $f-g\in\sym\mathbb R\ax$, then $f\stackrel{\mathrm{cyc}}{\thicksim} g$ \cite[Theorem 2.1]{ksconnes}.
\item Although $f\stackrel{\mathrm{cyc}}{\nsim} f^*$ in general, we still have
$$\Tr(f(\ushort A))=\Tr(f^*(\ushort A))$$
for all $f\in\mathbb R \ax$ and all $\ushort A\in (\sym\mathbb R^{t\times t})^n$.
\end{enumerate}
\end{remark}
The length of the longest word in a polynomial $f\in\mathbb R\ax$ is the
\textit{degree} of $f$ and is denoted by $\deg f$.
We write $\mathbb R\ax_{\leq k}$ for the set of all polynomials of degree $\leq k$.
\section{The truncated tracial moment problem}\label{sec:ttmp}
In this section we define tracial (moment) sequences,
tracial moment matrices,
the tracial moment problem, and their truncated analogs.
After a few motivating examples we proceed to show that the
kernel of a tracial moment matrix has some real-radical-like
properties (Proposition \ref{prop:radical}).
We then prove that a tracial moment matrix of finite
rank has a tracial moment representation, i.e., the tracial moment problem
for the associated tracial sequence is solvable (Theorem \ref{thm:finiterank}).
Finally, we give the solution of
the truncated tracial moment problem: a truncated tracial sequence has
a tracial representation if and only if one of its extensions has a tracial moment matrix that
admits a flat extension (Corollary \ref{cor:flatt}).
For an overview of the classical (commutative) moment problem in several
variables we refer
the reader to Akhiezer \cite{Akh} (for the analytic theory) and
to the survey of Laurent \cite{laurent} and references therein for a more
algebraic approach.
The standard references on the truncated moment problems are
\cite{cffinite,cfflat}.
For the non-commutative moment problem with \emph{free} (i.e.,
unconstrained) moments see
\cite{McC,helton}.
\begin{dfn}
A sequence of real numbers $(y_w)$ indexed by words $w\in \ax$ satisfying
\begin{equation}
y_w=y_u \text{ whenever } w\stackrel{\mathrm{cyc}}{\thicksim} u, \label{cyc}
\end{equation}
\begin{equation}
y_w=y_{w^*} \text{ for all } w, \label{cycstar}
\end{equation}
and $y_\emptyset=1$, is called a (normalized) \emph{tracial sequence}.
\end{dfn}
\begin{example}
Given $t\in\mathbb N$ and symmetric matrices $A_1,\dots,A_n\in \sym \mathbb R^{t\times t}$,
the sequence given by $$y_w:= \Tr(w(A_1,\dots,A_n))=\frac 1t \tr(w(A_1,\dots,A_n))$$
is a tracial sequence since by Remark \ref{rem:csim}, the traces of cyclically
equivalent words coincide.
\end{example}
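This example is easy to reproduce numerically. The following Python
sketch (ours, purely illustrative) builds $y_w$ from random symmetric
matrices and verifies the defining properties \eqref{cyc} and
\eqref{cycstar} for all words up to a fixed degree; the matrix size,
number of variables, and degree bound are arbitrary choices.
\begin{verbatim}
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
t, n, k = 3, 2, 3        # matrix size, number of variables, max degree
A = [(B + B.T) / 2 for B in rng.standard_normal((n, t, t))]

def y(word):
    # y_w = (1/t) tr(w(A)); a word is encoded as a tuple of indices
    W = np.eye(t)
    for i in word:
        W = W @ A[i]
    return np.trace(W) / t

for d in range(1, k + 1):
    for w in product(range(n), repeat=d):
        assert np.isclose(y(w), y(w[1:] + w[:1]))  # cyclic shift
        assert np.isclose(y(w), y(w[::-1]))        # reversal w -> w*
print("cyclic and involution invariance verified up to degree", k)
\end{verbatim}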
We are interested in the converse of this example (the \emph{tracial moment problem}):
\emph{For which sequences $(y_w)$ do there exist $N\in \mathbb N$, $t\in \mathbb N$,
$\lambda_i\in \mathbb R_{\geq0}$ with $\sum_{i=1}^N \lambda_i=1$ and
vectors $\ushort A^{(i)}=(A_1^{(i)},\dots,A_n^{(i)})\in (\sym \mathbb R^{t\times t})^n$, such that
\begin{equation}
y_w=\sum_{i=1}^N \lambda_i \Tr(w(\ushort A^{(i)}))\,? \label{rep}
\end{equation}}
We then say that $(y_w)$ has a \emph{tracial moment representation}
and call it a \emph{tracial moment sequence}.
The \emph{truncated tracial moment problem} is the study of (finite) tracial sequences
$(y_w)_{\leq k}$
where $w$ is constrained by $\deg w\leq k$ for some $k\in\mathbb N$,
and properties \eqref{cyc} and \eqref{cycstar} hold for these $w$.
For instance, which sequences $(y_w)_{\leq k}$ have a tracial moment
representation, i.e., when does there
exist a representation of the values $y_w$ as in \eqref{rep} for $\deg w\leq k$?
If this is the case, then
the sequence $(y_w)_{\leq k}$ is called a \emph{truncated tracial moment sequence}.
\begin{remark}
\mbox{}\par
\begin{enumerate}[(a)]
\item
To keep a perfect analogy with the classical moment problem,
one would need to consider the existence of a positive
Borel measure $\mu$ on $(\sym \mathbb R^{t\times t})^n$ (for some
$t\in\mathbb N$) satisfying
\begin{equation}\label{eq:gewidmetmarkus}
y_w = \int \! w(\ushort A) \, d\mu(\ushort A).
\end{equation}
As we shall mostly focus on the \emph{truncated}
tracial moment problem in the sequel, the
finitary representations \eqref{rep} seem to be the
proper setting.
We look forward to studying the more general representations
\eqref{eq:gewidmetmarkus} in the future.
\item
Another natural extension of our tracial moment problem
with respect to matrices would be to consider moments obtained by
traces in finite \emph{von Neumann algebras} as
done by Hadwin \cite{had}.
However, our
primary motivation were trace-positive polynomials
defined via traces of matrices (see Definition \ref{def:trpos}),
a theme we expand upon in Section \ref{sec:poly}. Understanding these
is one of the approaches to Connes' embedding conjecture \cite{ksconnes}.
The notion dual to that of trace-positive polynomials is
the tracial moment problem as defined above.
\item The tracial moment problem
is a natural extension of the classical quadrature problem
dealing with
representability via atomic positive measures in
the commutative case. Taking $\ushort a^{(i)}$
consisting of $1\times 1$ matrices $a_j^{(i)}\in\mathbb R$
for the $\ushort A^{(i)}$
in \eqref{rep}, we have
$$y_w=\sum_i \lambda_i w(\ushort a^{(i)})= \int \!x^w \, d\mu(x),$$
where $x^w$ denotes the commutative collapse of $w\in\ax$.
The measure $\mu$ is the convex combination
$\sum \lambda_i\delta_{\ushort a^{(i)}}$
of the atomic measures $\delta_{\ushort a^{(i)}}$.
\end{enumerate}
\end{remark}
The next example shows that there are (truncated) tracial moment sequences $(y_w)$
which
cannot be written as $$y_w=\Tr(w(\ushort A)).$$
\begin{example}\label{exconv}
Let $X$ be a single free (non-commutative) variable.
We take the index set $J=(1,X,X^2,X^3,X^4)$ and $y=(1,1-\sqrt2,1,1-\sqrt2,1)$. Then
$$y_w=\frac{\sqrt2}{2}w(-1)+(1-\frac{\sqrt2}{2})w(1),$$ i.e.,
$\lambda_1=\frac{\sqrt2}{2}$, $\lambda_2=1-\lambda_1$ and $A^{(1)}=-1$, $A^{(2)}=1$.
But there is no symmetric matrix $A\in \mathbb R^{t\times t}$ for any $t\in\mathbb N$ such that
$y_w=\Tr(w(A))$ for all $w\in J$. The proof is given in the appendix.
\end{example}
The (infinite) \emph{tracial moment matrix} $M(y)$ of a tracial
sequence $y=(y_w)$ is defined by
$$M(y)=(y_{u^*v})_{u,v}.$$
This matrix is symmetric due to the condition \eqref{cycstar} in the
definition of a tracial sequence.
A necessary condition for $y$ to be a tracial moment sequence is positive
semidefiniteness of $M(y)$ which in general is not sufficient.
The tracial moment matrix of \emph{order $k$} is the tracial moment matrix $M_k(y)$
indexed by words $u,v$ with $\deg u,\deg v\leq k$.
If $y$ is a truncated tracial moment sequence, then $M_k(y)$ is positive
semidefinite. Here is an easy example showing the converse is false:
\begin{example}\label{expsd}
When dealing with two variables, we write $(X,Y)$ instead of $(X_1,X_2)$.
Taking the index set
$$(1,X,Y,X^2,XY,Y^2,X^3,X^2Y,XY^2,Y^3,X^4,X^3Y,X^2Y^2,XYXY,XY^3,Y^4)$$
the truncated moment sequence $$y=(1,0,0,1,1,1,0,0,0,0,4,0,2,1,0,4) $$ yields the
tracial moment matrix
$$M_2(y)=\left(\begin{smallmatrix}
1&0&0&1&1&1&1\\ 0&1&1&0&0&0&0\\ 0&1&1&0&0&0&0\\ 1&0&0&4&0&0&2\\
1&0&0&0&2&1&0\\ 1&0&0&0&1&2&0\\ 1&0&0&2&0&0&4
\end{smallmatrix}\right)$$
with respect to the basis $(1,X,Y,X^2,XY,YX,Y^2)$.
$M_2(y)$ is positive semidefinite but $y$ has no tracial representation.
Again, we postpone the proof until the appendix.
\end{example}
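The positive semidefiniteness asserted here is easily confirmed
numerically; the following Python check (ours, for illustration)
computes the eigenvalues of the matrix $M_2(y)$ of Example
\ref{expsd}.
\begin{verbatim}
import numpy as np

# M_2(y) in the basis (1, X, Y, X^2, XY, YX, Y^2)
M2 = np.array([
    [1, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 0, 0],
    [0, 1, 1, 0, 0, 0, 0],
    [1, 0, 0, 4, 0, 0, 2],
    [1, 0, 0, 0, 2, 1, 0],
    [1, 0, 0, 0, 1, 2, 0],
    [1, 0, 0, 2, 0, 0, 4],
], dtype=float)

print(np.linalg.eigvalsh(M2))        # all eigenvalues are nonnegative
\end{verbatim}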
For a given polynomial $p=\sum_{w\in \ax} p_w w\in \mathbb R \ax$ let $\vv p$ be the
(column) vector of coefficients $p_w$ in a given fixed order.
One can identify $\mathbb R \ax_{\leq k}$ with $\mathbb R^\eta$
for $\eta=\eta(k)=\dim\mathbb R\ax_{\leq k}<\infty$ by sending each $p\in \mathbb R \ax_{\leq k}$ to the vector
$\vv p$ of its entries with $\deg w\leq k$.
The tracial moment matrix $M(y)$ induces the linear map
$$\varphi_M:\mathbb R\ax\to \mathbb R^\mathbb N,\quad p\mapsto M\vv p.$$ The tracial moment matrices $M_k(y)$,
indexed by $w$ with $\deg w\leq k$, can be regarded as linear maps
$\varphi_{M_k}:\mathbb R^\eta\to \mathbb R^\eta$, $\vv p\mapsto M_k\vv p$.
\begin{lemma}\label{lem:mk}
Let $M=M(y)$ be a tracial moment matrix. Then the following holds:
\begin{enumerate}[\rm (1)]
\item $p(y):=\sum_w p_w y_w={\vv{1}}^*M\vv{p}$. In particular,
${\vv{1}}^*M\vv{p}={\vv{1}}^*M\vv{q}$ if $p\stackrel{\mathrm{cyc}}{\thicksim} q$;
\item ${\vv{p}}^*M\vv{q}={\vv{1}}^*M\vv{p^*q}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $p,q\in \mathbb R \ax$. For $k:=\max \{\deg p,\deg q\}$, we have
\begin{equation}
{\vv{p}}^*M(y)\vv{q}={\vv{p}}^*M_k(y)\vv{q}.
\end{equation}
Both statements now follow by direct calculation: writing
$p=\sum_u p_u u$ and $q=\sum_w q_w w$, we have
$${\vv{1}}^*M\vv{p}=\sum_u p_u y_{u}=p(y) \quad\text{and}\quad
{\vv{p}}^*M\vv{q}=\sum_{u,w} p_u q_w y_{u^*w}={\vv{1}}^*M\vv{p^*q},$$
and the second claim in (1) follows from \eqref{cyc}, since
$(fg-gf)(y)=0$ for all $f,g\in\mathbb R\ax$.
\end{proof}
We can identify the kernel of a tracial moment matrix $M$ with the subset of $\mathbb R \ax$
given by
\begin{equation}\label{eq:momKer}
I:=\{p\in \mathbb R \ax\mid M\vv p=0\}.
\end{equation}
\begin{prop}\label{lem:kerideal} Let $M\succeq0$ be a tracial moment matrix. Then
\begin{equation}\label{kerideal}
I=\{p\in \mathbb R \ax\mid \langle M\vv{p},\vv{p}\rangle=0\}.
\end{equation}
Further, $I$
is a two-sided ideal of $\mathbb R \ax$ invariant under the involution.
\end{prop}
\begin{proof}
Let $J:=\{p\in \mathbb R \ax\mid \langle M\vv{p},\vv{p}\rangle=0\}$. The inclusion
$I\subseteq J$ is obvious. Let $p\in J$ be given and $k=\deg p$.
Since $M$ and thus $M_k$ for each $k\in \mathbb N$ is positive semidefinite, the square root
$\sqrt{M_k}$ of $M_k$ exists. Then
$0=\langle M_k\vv{p},\vv p\rangle=\langle\sqrt{M_k}\vv{p}, \sqrt{M_k}\vv{p}\rangle$ implies
$\sqrt{M_k}\vv{p}=0$. This leads to $M_k\vv{p}=M\vv p=0$, thus $p\in I$.
To prove that $I$ is a two-sided ideal, it suffices to show that $I$ is a right-ideal
which is closed under *. To do this, consider the bilinear map
$$ \langle p,q\rangle_M:= \langle M\vv{p},\vv{q}\rangle$$ on $\mathbb R \ax$, which is a semi-scalar
product. By Lemma \ref{lem:mk}, we get that
$$\langle pq,pq\rangle_M=((pq)^*pq)(y)=(qq^*p^*p)(y)= \langle pqq^*,p\rangle_M.$$
Then by the Cauchy-Schwarz inequality it follows that for $p\in I$, we have
$$0\leq \langle pq,pq\rangle_M^2=\langle pqq^*,p\rangle_M^2\leq
\langle pqq^*,pqq^*\rangle_M\langle p,p\rangle_M=0.$$
Hence $pq\in I$, i.e., $I$ is a right-ideal.
Since $p^*p\stackrel{\mathrm{cyc}}{\thicksim} pp^*$, we obtain from Lemma \ref{lem:mk} that
$$\langle M\vv{p},\vv{p} \rangle=\langle p,p \rangle_M=(p^*p)(y)=(pp^*)(y)=\langle p^*,p^*
\rangle_M=
\langle M{\vv p}^*,{\vv p}^* \rangle.$$ Thus if $p\in I$ then also $p^*\in I$.
\end{proof}
In the \emph{commutative} context, the kernel of $M$ is a real radical ideal if $M$ is positive
semidefinite as observed by Scheiderer (cf.~\cite[p.~2974]{laurent2}).
The next proposition gives a description of
the kernel of $M$ in the non-commutative setting, and could be helpful in
defining a non-commutative real radical ideal.
\begin{prop}\label{prop:radical}
For the ideal $I$ in \eqref{eq:momKer} we have
$$I=\{f\in \mathbb R \ax\mid (f^*f)^k\in I \;\text{for some}\;k\in \mathbb N\}.$$
Further,
$$I=\{f\in \mathbb R \ax\mid (f^*f)^{2k}+\sum g_i^*g_i\in I \;\text{for some}\;k\in \mathbb N, g_i\in \mathbb R \ax\}.
$$
\end{prop}
\begin{proof}
If $f\in I$ then also $f^*f\in I$ since $I$ is an ideal. If $f^*f\in I$ we have
$M\vv{f^*f}=0$ which implies by Lemma \ref{lem:mk} that
$$0={\vv 1}^*M\vv{f^*f}={\vv f}^*M\vv{f}=\langle Mf,f\rangle.$$
Thus $f\in I$.
If $(f^*f)^k\in I$ then also $(f^*f)^{k+1}\in I$. So without loss of generality let $k$ be even.
From $(f^*f)^k\in I$ we obtain
$$0={\vv 1}^*M\vv{(f^*f)^k}={\vv{(f^*f)^{k/2}}}^*M\vv{(f^*f)^{k/2}},$$ implying
$(f^*f)^{k/2}\in I$. This leads to $f\in I$ by induction.
To show the second statement let $(f^*f)^{2k}+\sum g_i^*g_i\in I$. This leads to
$${\vv{(f^*f)^k}}^*M\vv{(f^*f)^k}+\sum_i {\vv{g_i}}^*M\vv{g_i}=0.$$ Since
$M(y)\succeq0$ we have ${\vv{(f^*f)^k}}^*M\vv{(f^*f)^k}\geq 0$ and
${\vv{g_i}}^*M\vv{g_i}\geq 0.$ Thus ${\vv{(f^*f)^k}}^*M\vv{(f^*f)^k}=0$
(and ${\vv{g_i}}^*M\vv{g_i}= 0$) which implies $f\in I$ as above.
\end{proof}
In the commutative setting one uses the Riesz representation theorem for
some set of continuous functions (vanishing at infinity or with compact support)
to show the existence of a representing measure. We will use the Riesz
representation theorem for positive linear functionals on a
finite-dimensional Hilbert space.
\begin{dfn}
Let $\mathcal A$ be an $\mathbb R$-algebra with involution. We call a linear map
$L:\mathcal A\to \mathbb R$ a \emph{state} if
$L(1)=1$, $L(a^*a)\geq0$ and $L(a^*)=L(a)$ for all $a\in\mathcal A$.
If all the commutators have value $0$, i.e., if $L(ab)=L(ba)$ for all
$a,b\in \mathcal A$, then $L$ is called a \emph{tracial state}.
\end{dfn}
With the aid of the Artin-Wedderburn theorem we shall
characterize tracial states on matrix $*$-algebras in Proposition
\ref{prop:convtrace}.
This will enable us to prove the existence of a tracial moment representation for
tracial sequences with a finite rank tracial moment matrix; see Theorem
\ref{thm:finiterank}.
\begin{remark}\label{rem:aw}
The only central simple algebras over $\mathbb R$ are full matrix
algebras over $\mathbb R$, $\mathbb C$ or $\mathbb H$ (combine the Frobenius theorem
\cite[(13.12)]{Lam} with the Artin-Wedderburn theorem \cite[(3.5)]{Lam}).
In order to understand ($\mathbb R$-linear) tracial states on these, we recall
some basic Galois theory.
Let
$$\Trd_{\mathbb C/\mathbb R}:\mathbb C\to\mathbb R, \quad z\mapsto\frac 12(z+\bar z) $$
denote the \emph{field trace} and
$$\Trd_{\mathbb H/\mathbb R}:\mathbb H\to\mathbb R,\quad z\mapsto\frac12(z+\bar z)$$
the \emph{reduced trace} \cite[p.~5]{boi}.
Here the Hamilton quaternions $\mathbb H$ are endowed with the \emph{standard
involution}
$$
z=a+\mathbbm i b+\mathbbm j c+\mathbbm k d \mapsto a-\mathbbm i b-\mathbbm j c-\mathbbm k d = \bar z
$$
for $a,b,c,d\in\mathbb R$.
We extend the canonical involution on $\mathbb C$ and $\mathbb H$ to the conjugate
transpose involution $*$ on matrices
over $\mathbb C$ and $\mathbb H$, respectively.
Composing the field trace and reduced trace, respectively, with the normalized
trace, yields an $\mathbb R$-linear map from $\mathbb C^{t\times t}$ and
$\mathbb H^{t\times t}$, respectively, to $\mathbb R$. We will denote it simply
by $\Tr$. A word of \emph{caution}:
$\Tr(A)$ does not denote the (normalized) matricial trace
over $\mathbb K$
if $A\in \mathbb K^{t\times t}$ and $\mathbb K\in\{\mathbb C,\mathbb H\}$.
\end{remark}
An alternative description of $\Tr$ is given by the following lemma:
\begin{lemma}\label{lem:convtrace}
Let $\mathbb K\in\{\mathbb R,\mathbb C,\mathbb H\}$. Then
the only $(\mathbb R$-linear$)$ tracial state on $\mathbb K^{t\times t}$ is $\Tr$.
\end{lemma}
\begin{proof}
An easy calculation shows that $\Tr$ is indeed a tracial state.
Let $L$ be a tracial state on $\mathbb R^{t\times t}$.
By the Riesz representation theorem there exists a positive
semidefinite matrix $B$ with $\Tr(B)=1$ such that $$L(A)=\Tr(BA)$$ for all
$A\in\mathbb R^{t\times t}$.
Write $B=\begin{pmatrix}b_{ij}\end{pmatrix}_{i,j=1}^{t}$.
Let
$i\neq j$.
Then $A=\lambda E_{ij}$ has zero trace for every
$\lambda\in \mathbb R$ and is thus a sum of commutators.
(Here $E_{ij}$ denotes the $t\times t$ \emph{matrix unit} with a one
in the $(i,j)$-position and zeros elsewhere.)
Hence
$$\lambda b_{ij} = L(A) = 0.$$
Since $\lambda\in\mathbb R$ was arbitrary, $b_{ij}=0$.
Now let $A=\lambda (E_{ii}-E_{jj})$. Clearly,
$\Tr(A)=0$ and hence $$\lambda(b_{ii}-b_{jj})= L(A)= 0.$$
As before, this gives $b_{ii}=b_{jj}$. So $B$ is scalar,
and $\Tr(B)=1$. Hence it is the
identity matrix. In particular, $L=\Tr$.
If $L$ is a tracial state on $\mathbb C^{t\times t}$,
then $L$ induces a tracial state on $\mathbb R^{t\times t}$,
so $L_0:=L|_{\mathbb R^{t\times t}}=\Tr$ by the above.
Extend $L_0$ to
$$L_1:\mathbb C^{t\times t} \to \mathbb R,
\quad A+\mathbbm i B\mapsto L_0(A)=\Tr(A) \quad\text{for } A,B\in\mathbb R^{t\times t}.
$$
$L_1$ is a tracial state on $\mathbb C^{t\times t}$ as a
straightforward computation
shows. As $\Tr(A)=\Tr(A+\mathbbm i B)$, all we need to show is that $L_1=L$.
Clearly, $L_1$ and $L$ agree on the vector space spanned
by all commutators in $\mathbb C^{t\times t}$. This space is (over $\mathbb R$)
of codimension $2$. By construction, $L_1(1)=L(1)=1$ and
$L_1(\mathbbm i)=0$. On the other hand,
$$L(\mathbbm i)=L(\mathbbm i^*)=-L(\mathbbm i)$$ implying $L(\mathbbm i)=0$.
This shows $L=L_1=\Tr$.
The remaining case of tracial states over $\mathbb H$ is dealt
with
similarly and is left as an exercise for the reader.
\end{proof}
\begin{remark}\label{rem:real}
Every complex number $z=a+\mathbbm i b$ can be represented
as a $2\times 2$ real matrix
$z'=\left(\begin{smallmatrix} a & b \\ -b & a\end{smallmatrix}\right)$.
This gives rise to
an $\mathbb R$-linear $*$-map
$\mathbb C^{t\times t}\to \mathbb R^{(2t)\times(2t)}$ that commutes with $\Tr$.
A similar property holds if quaternions
$a+\mathbbm i b+\mathbbm j c+\mathbbm k d$
are represented by the $4\times 4$ real matrix
$$\left(\begin{smallmatrix}
a & b & c & d \\
-b & a & -d & c \\
-c & d & a & -b \\
-d & -c & b & a
\end{smallmatrix}\right).$$
\end{remark}
\begin{prop}\label{prop:convtrace}
Let $\mathcal A$ be a $*$-subalgebra of $ \mathbb R^{t\times t}$ for some $t\in \mathbb N$ and
$L:\mathcal A\to \mathbb R$ a tracial state.
Then there exist
full matrix algebras $\mathcal A^{(i)}$ over $\mathbb R$, $\mathbb C$ or $\mathbb H$,
a $*$-isomorphism
\begin{equation}\label{eq:iso}
\mathcal A\to\bigoplus_{i=1}^N \mathcal A^{(i)},
\end{equation}
and $\lambda_1,\dots, \lambda_N\in \mathbb R_{\geq0}$ with $\sum_i \lambda_i=1$, such that for all
$A\in \mathcal A$,
$$L(A)=\sum_{i=1}^N \lambda_i\Tr(A^{(i)}).$$
Here, $\bigoplus_i A^{(i)} =\left(\begin{smallmatrix} A^{(1)} \\ & \ddots \\ & & A^{(N)}
\end{smallmatrix}\right)$ denotes the image of $A$ under the isomorphism
\eqref{eq:iso}. The size of $($the real representation of$)$ $\bigoplus_i A^{(i)}$ is
at most $t$.
\end{prop}
\begin{proof}
Since $L$ is tracial,
$L(U^*AU)=L(A)$ for all orthogonal $U\in\mathbb R^{t\times t}$.
Hence we can apply orthogonal transformations to $\mathcal A$
without changing the values of $L$.
So $\mathcal A$ can be transformed into block diagonal form
as in \eqref{eq:iso}
according to its invariant subspaces.
That is, each of the blocks $\mathcal A^{(i)}$
acts irreducibly on a subspace of $\mathbb R^t$ and is thus
a central
simple algebra (with involution) over $\mathbb R$.
The involution on $\mathcal A^{(i)}$ is induced by the
conjugate transpose involution. (Equivalently, by the
transpose on the real matrix representation in the complex
or quaternion case.)
Now $L$ induces (after a possible normalization) a tracial state on the block
$\mathcal A^{(i)}$ and hence by Lemma \ref{lem:convtrace}, we have
$L_i:=L|_{\mathcal A^{(i)}}=\lambda_i \Tr$ for some $\lambda_i\in\mathbb R_{\geq0}$.
Then
\[
L(A)=L\big(\bigoplus_i A^{(i)}\big)=\sum_i L_i\big(A^{(i)}\big)
= \sum_i \lambda_i \Tr\big(A^{(i)}\big)
\]
and
$1=L(1)=\sum_i \lambda_i$.
\end{proof}
The following theorem is the tracial version of the representation theorem
of Curto and Fialkow for moment matrices with finite rank \cite{cffinite}.
\begin{thm}\label{thm:finiterank}
Let $y=(y_w)$ be a tracial sequence with positive semidefinite
moment matrix $M(y)$ of finite rank $t$. Then $y$ is a tracial moment
sequence, i.e., there exist vectors
$\ushort A^{(i)}=(A_1^{(i)},\dots,A_n^{(i)})$ of symmetric matrices $A_j^{(i)}$
of size at most $t$ and $\lambda_i\in \mathbb R_{\geq0}$ with $\sum \lambda_i=1$
such that $$y_w=\sum \lambda_i \Tr(w(\ushort A^{(i)})).$$
\end{thm}
\begin{proof}
Let $M:=M(y)$. We equip $\mathbb R\ax$ with the bilinear form given by
$$\langle p,q\rangle_M:=\langle M\vv{p},\vv{q} \rangle={\vv{q}}^*M\vv p.$$ Let
$I=\{p\in \mathbb R\ax\mid \langle p,p\rangle_M=0\}.$ Then by Proposition \ref{lem:kerideal},
$I$ is an ideal of $\mathbb R \ax$. In particular, $I=\ker \varphi_M$ for
$$\varphi_M:\mathbb R \ax\to \ran M,\quad p\mapsto M\vv{p}.$$ Thus if we define
$E:=\mathbb R \ax/I$, the induced linear map
$$\overline\varphi_M:E\to \ran M,\quad \overline p\mapsto M\vv{p}$$
is an isomorphism and $$\dim E=\dim(\ran M)=\rank M=t<\infty.$$ Hence
$(E,\langle \cdot\,,\cdot \rangle_E)$ is a finite-dimensional
Hilbert space with
$\langle \bar p,\bar q\rangle_E={\vv{q}}^*M\vv{p}$.
Let $\hat X_i$ be the right multiplication with $X_i$ on $E$, i.e.,
$\hat X_i \overline p:=\overline{pX_i}$. Since
$I$ is a right ideal of $\mathbb R \ax$, the operator $\hat X_i$ is well defined.
Further, $\hat X_i$ is symmetric since
\begin{align*}
\langle \hat X_i \overline p,\overline q \rangle_E&=\langle M \vv{pX_i},\vv{q} \rangle
= (X_ip^*q)(y)\\
&=(p^*qX_i)(y)=\langle M \vv{p},\vv{qX_i} \rangle=\langle\overline p,\hat X_i\overline q \rangle_E.
\end{align*}
Thus each $\hat X_i$, acting on a $t$-dimensional vector space, has a representation matrix
$A_i\in \sym \mathbb R^{t\times t}$.
Let $\mathcal B=B(\hat X_1,\dots,\hat X_n)=B(A_1,\dots,A_n)$ be the algebra of
operators generated by $\hat X_1,\dots,\hat X_n$. These operators can be written
as $$\hat p=\sum_{w\in\ax} p_w \hat{w}$$ for some $p_w\in \mathbb R$,
where $\hat w=\hat X_{w_1}\cdots \hat X_{w_s}$ for $w=X_{w_1}\cdots X_{w_s}$.
Observe that $\hat{w}=w(A_1,\dots,A_n)$.
We define the linear functional $$L:\mathcal B\to\mathbb R,\quad
\hat p\mapsto {\vv{1}}^*M\vv p=p(y),$$
which is a state on $\mathcal B$.
Since $y_w=y_u$ for $w\stackrel{\mathrm{cyc}}{\thicksim} u$, it follows that $L$ is tracial. Thus by Proposition
\ref{prop:convtrace} (and Remark \ref{rem:real}), there exist
$\lambda_1,\dots, \lambda_N\in \mathbb R_{\geq0}$ with $\sum_i\lambda_i=1$ and real symmetric matrices $A_j^{(i)}$
$(i=1,\ldots,N)$
for each $A_j\in \sym \mathbb R^{t\times t}$, such that for all $w\in \ax$,
$$y_w=w(y)=L(\hat w)=\sum_i \lambda_i \Tr(w(\ushort A^{(i)})),$$
as desired.
\end{proof}
The sufficient conditions on $M(y)$ in Theorem \ref{thm:finiterank} are also
necessary for $y$ to be a tracial moment sequence. Thus we get our first
characterization of tracial moment sequences:
\begin{cor}\label{cor:finite}
Let $y=(y_ w)$ be a tracial sequence. Then $y$ is a tracial moment sequence
if and only if $M(y)$ is positive semidefinite and of finite rank.
\end{cor}
\begin{proof}
If $y_ w=\Tr( w(\ushort A))$ for some $\ushort A=(A_1,\dots,A_n)\in(\sym \mathbb R^{t\times t})^n$,
then $$L(p)=\sum_ w p_ w y_ w=\sum_ w p_ w \Tr( w(\ushort A))=
\Tr(p(\ushort A)).$$
Hence
\begin{align*}
{\vv p}^*M(y)\vv{p}&=L(p^*p)=\Tr(p^*(\ushort A)p(\ushort A))\geq0
\end{align*}
for all $p \in \mathbb R\ax$.
Further, the tracial moment matrix $M(y)$ has rank at most $t^2$.
This can be seen as follows:
$M$ induces a linear map
$$\Phi:\mathbb R \ax\rightarrow\mathbb R \ax^*,\quad p\mapsto\Big(q\mapsto \Tr\big((q^*p)(\ushort A)\big)\Big),$$
where $\mathbb R \ax^*$ is the dual space of $\mathbb R \ax$. This implies
$$\rank M=\dim (\ran\Phi)=\dim(\mathbb R \ax/\ker\Phi).$$
The kernel of the evaluation map
$\varepsilon_{\ushort A}:\mathbb R\ax\rightarrow\mathbb R^{t\times t}$, $p\mapsto p(\ushort A)$
is a subset of $\ker \Phi$. In particular,
\[\dim(\mathbb R\ax/\ker\Phi)\leq \dim(\mathbb R\ax/\ker\varepsilon_{\ushort A})=\dim(\ran \varepsilon_{\ushort A})\leq t^2. \]
The same holds true for each convex combination $y_w=\sum_i \lambda_i \Tr( w(\ushort A^{(i)}))$.
The converse is Theorem \ref{thm:finiterank}.
\end{proof}
\begin{dfn}\label{defflat}
Let $A\in \sym\mathbb R^{t\times t}$ be given. A (symmetric) extension of $A$ is a matrix
$\tilde A\in \sym\mathbb R^{(t+s)\times (t+s)}$ of the form
$$\tilde A=\begin{pmatrix} A &B \\ B^* & C\end{pmatrix} $$
for some $B\in \mathbb R^{t\times s}$ and $C\in \mathbb R^{s\times s}$.
Such an extension is \emph{flat} if $\rank A=\rank\tilde A$,
or, equivalently, if $B = AW$ and $C = W^*AW$ for some matrix $W$.
\end{dfn}
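The equivalent characterization in Definition \ref{defflat} is
convenient computationally. The following Python sketch (ours,
illustrative) builds a flat extension from an arbitrary positive
semidefinite $A$ and an arbitrary $W$, and confirms that the rank
does not grow; the sizes and the random seed are arbitrary.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
t, s, r = 4, 2, 2
L = rng.standard_normal((t, r))
A = L @ L.T                          # psd matrix of rank r
W = rng.standard_normal((t, s))
B, C = A @ W, W.T @ A @ W            # flat data: B = AW, C = W*AW
A_tilde = np.block([[A, B], [B.T, C]])

print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(A_tilde))  # equal
\end{verbatim}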
The kernel of a flat extension $M_k$ of a tracial moment matrix $M_{k-1}$
has some (truncated) \emph{ideal-like properties} as
shown in the following lemma.
\begin{lemma}\label{lem:flatrideal}
Let $f\in \mathbb R \ax$ with $\deg f\leq k-1$ and let $M_k$ be a flat extension of $M_{k-1}$.
If $f\in\ker M_k$ then $fX_i,X_if\in \ker M_k$.
\end{lemma}
\begin{proof}
Let $f=\sum_w f_w w$. Then for $v\in \ax_{k-1}$, we have
\begin{equation}\label{eqker}
(M_k\vv{fX_i})_v =\sum_w f_w y_{v^*wX_i}=
\sum_w f_w y_{(vX_i)^*w}=(M_k \vv f)_{vX_i}=0.
\end{equation}
The matrix $M_k$ is of the form $M_k=\left(\begin{smallmatrix} M_{k-1}&B\\B^*&C\end{smallmatrix}\right)$.
Since $M_k$ is a flat extension,
$\ker M_k=\ker \begin{pmatrix} M_{k-1}&B\end{pmatrix}$.
Thus by \eqref{eqker},
$fX_i\in \ker \begin{pmatrix} M_{k-1}&B\end{pmatrix}=\ker M_k$.
For $X_if$ we obtain analogously that
$$(M_k\vv{X_if})_v =\sum_w f_w y_{v^*X_iw}=
\sum_w f_w y_{(X_iv)^*w}=(M_k \vv f)_{X_iv}=0$$
for $v\in \ax_{k-1}$, which implies $X_if\in \ker M_k$.
\end{proof}
We are now ready to prove the tracial version of the flat extension theorem of
Curto and Fialkow \cite{cfflat}.
\begin{thm}\label{thm:flatextension}
Let $y=(y_w)_{\leq 2k}$ be a truncated tracial sequence of order $2k$. If
$\rank M_k(y)=\rank M_{k-1}(y)$, then there exists
a unique tracial extension $\tilde y=(\tilde y_w)_{\leq 2k+2}$ of $y$ such that
$M_{k+1}(\tilde y)$ is a flat extension of $M_k(y)$.
\end{thm}
\begin{proof}
Let $M_k:=M_k(y)$.
We will construct a flat extension $M_{k+1}:=\left(\begin{smallmatrix} M_k&B\\B^*&C\end{smallmatrix}\right)$
such that $M_{k+1}$ is a tracial moment matrix. Since
$M_k$ is a flat extension of $M_{k-1}(y)$ we can find a basis $b$ of
$\ran M_k$ consisting of columns of $M_k$ labeled by $w$ with $\deg w\leq k-1$.
Thus the range of $M_k$ is completely determined by the range of $M_k|_{\spann b}$,
i.e., for each $p\in \mathbb R \ax$ with $\deg p\leq k$ there exists a \emph{unique}
$r\in \spann b$ such that
$M_k\vv p=M_k \vv r$; equivalently, $p-r\in \ker M_k$.
Let $v\in\ax$, $\deg v=k+1$, $v=v'X_i$ for some $i\in \{1,\dots,n\}$ and $v'\in \ax$
with $\deg v'=k$.
For $v'$ there exists an $r\in \spann b$ such that $v'-r\in \ker M_k$.
\emph{If} there exists a flat extension $M_{k+1}$, then by Lemma \ref{lem:flatrideal},
from $v'-r\in \ker M_k\subseteq\ker M_{k+1}$ it
follows that $(v'-r)X_i\in \ker M_{k+1}$. Hence the desired flat extension
has to satisfy
\begin{equation}\label{eqflatcond}
M_{k+1}\vv{v}=M_{k+1}\vv{rX_i}=M_k\vv{rX_i}.
\end{equation}
Therefore we define
\begin{equation}\label{eq:sabinedefinesB}
B\vv{v}:=M_k\vv{rX_i}.
\end{equation}
More precisely, let $(w_1,\dots,w_\ell)$ be the
basis of $M_k$, i.e., $(M_k)_{i,j}=y_{w_i^*w_j}$. Let $r_{w_i}$
be the unique element in $\spann b$ with $ w_i-r_{ w_i}\in \ker M_k$.
Then $B=M_kW$ with
$W=(r_{ w_1X_{i_1}},\dots,r_{ w_\ell X_{i_\ell}})$ and we define
\begin{equation}\label{eq:sabinedefinesC}
C:=W^*M_kW.
\end{equation}
Since the $r_{ w_i}$ are uniquely determined,
\begin{equation}\label{eq:sabinedefinesMk+1}
M_{k+1}=\left(\begin{smallmatrix} M_k&B\\B^*&C\end{smallmatrix}\right)
\end{equation}
is well-defined. The constructed $M_{k+1}$ is a flat extension of
$M_k$, and
$M_{k+1}\succeq0$ if and only if $M_k\succeq0$, cf.~\cite[Proposition 2.1]{cfflat}.
Moreover, once $B$ is chosen, there is only one $C$ making
$M_{k+1}$ as in \eqref{eq:sabinedefinesMk+1} a flat extension of $M_k$.
This follows from general
linear algebra, see e.g.~\cite[p.~11]{cfflat}. Hence $M_{k+1}$ is the
\emph{only} candidate for a flat extension.
Therefore we are done if $M_{k+1}$ is a tracial moment matrix, i.e.,
\begin{equation}
(M_{k+1})_w=(M_{k+1})_v \;\text{ whenever}\; w\stackrel{\mathrm{cyc}}{\thicksim} v. \label{mm}
\end{equation}
To show this we prove that $(M_{k+1})_{X_iw}=(M_{k+1})_{wX_i}$. Then \eqref{mm}
follows recursively.
Let $w=u^*v$. If $\deg u,\deg vX_i\leq k$ there is nothing to show since
$M_k$ is a tracial moment matrix. If $\deg u\leq k$ and $\deg vX_i=k+1$ there exists
an $r\in \spann b$ such that $r-v\in \ker M_{k-1}$, and by Lemma \ref{lem:flatrideal},
also $vX_i-rX_i\in \ker M_k$. Then we get
\begin{align*}
(M_{k+1})_{u^*vX_i}&=\vv{u}^*M_{k+1}\vv{vX_i}=\vv{u}^*M_{k+1}\vv{rX_i}
=\vv{u}^*M_{k}\vv{rX_i}\\
&=(M_k)_{u^*rX_i}
=(M_k)_{X_iu^*r}
=(M_k)_{(uX_i)^*r}\\
&\overset{(\ast)}{=}{\vv{uX_i}}^*M_{k+1}\vv{v}=(M_{k+1})_{(uX_i)^*v}
=(M_{k+1})_{X_iw},
\end{align*}
where equality $(\ast)$ holds by \eqref{eqflatcond}, which holds by construction
(cf.~Lemma \ref{lem:flatrideal}).
If $\deg u=\deg vX_i=k+1$, write $u=X_ju'$. Further, there exist $s,r\in \spann b$ with
$u'-s\in \ker M_{k-1}$ and $r-v\in \ker M_{k-1}$. Then
\begin{align*}
(M_{k+1})_{u^*vX_i}&=\vv{X_ju'}^*M_{k+1}\vv{vX_i}=\vv{X_js}^*M_{k}\vv{rX_i}\\
&=(M_k)_{s^*X_jrX_i}=(M_k)_{(sX_i)^*(X_jr)}\\
&\overset{(*)}{=}\vv{uX_i}^*M_{k+1}\vv{X_jv}=(M_{k+1})_{(uX_i)^*X_jv}
=(M_{k+1})_{X_i w}.
\end{align*}
Finally, the construction of $\tilde y$ from $M_{k+1}$ is clear.
\end{proof}
\begin{cor}\label{cor:flat}
Let $y=(y_ w)_{\leq 2k}$ be a truncated tracial sequence. If
$M_k(y)$ is positive semidefinite
and $M_k(y)$ is a flat extension of $M_{k-1}(y)$, then $y$
is a truncated tracial moment sequence.
\end{cor}
\begin{proof}
By Theorem \ref{thm:flatextension} we can extend $M_k(y)$ inductively
to a positive semidefinite moment matrix $M(\tilde y)$ with
$\rank M(\tilde y)=\rank M_k(y)<\infty$. Thus $M(\tilde y)$ has finite
rank and by Theorem \ref{thm:finiterank}, there exists a tracial moment
representation
of $\tilde y$. Therefore $y$ is a truncated tracial moment sequence.
\end{proof}
The following two corollaries give characterizations of tracial
moment matrices coming from tracial moment sequences.
\begin{cor}\label{cor:flatall}
Let $y=(y_ w)$ be a tracial sequence. Then $y$
is a tracial moment sequence if and only if $M(y)$ is positive semidefinite and there
exists some $N\in \mathbb N$ such that $M_{k+1}(y)$ is a flat extension of
$M_{k}(y)$ for all $k\geq N$.
\end{cor}
\begin{proof}
If $y$ is a tracial moment sequence then by Corollary \ref{cor:finite},
$M(y)$ is positive semidefinite and has finite rank $t$. Thus there exists an
$N\in \mathbb N$ such that $t=\rank M_N(y)$.
In particular, $\rank M_k(y)=\rank M_{k+1}(y)=t$ for all $k\geq N$, i.e., $M_{k+1}(y)$
is a flat extension of $M_k(y)$ for all $k\geq N$.
For the converse, let $N$ be given such that $M_{k+1}(y)$ is a flat extension of
$M_{k}(y)$ for all $k\geq N$. By Theorem \ref{thm:flatextension}, the (iterated)
unique extension $\tilde y$ of $(y_w)_{\leq 2k}$ for $k\geq N$ is equal to $y$.
Otherwise there exists a flat extension $\tilde y$ of $(y_w)_{\leq 2\ell}$
for some $\ell\geq N$ such that $M_{\ell+1}(\tilde y)\succeq 0$ is a flat extension
of $M_\ell(y)$ and $M_{\ell+1}(\tilde y)\neq M_{\ell+1}(y)$ contradicting the
uniqueness of the extension in Theorem \ref{thm:flatextension}.
Thus $M(y)\succeq 0$ and $\rank M(y)=\rank M_N(y)<\infty$. Hence by Theorem \ref{thm:finiterank},
$y$ is a tracial moment sequence.
\end{proof}
\begin{cor}\label{cor:flatt}
Let $y=(y_ w)$ be a tracial sequence. Then $y$
has a tracial moment representation with matrices of size at most
$t:=\rank M(y)$ if
$M_N(y)$ is positive semidefinite and $M_{N+1}(y)$ is
a flat extension of $M_{N}(y)$ for some $N\in \mathbb N$ with $\rank M_N(y)=t$.
\end{cor}
\begin{proof}
Since $\rank M(y)=\rank M_N(y)=t,$
each $M_{k+1}(y)$ with $k\geq N$ is a flat extension of $M_k(y)$.
As $M_N(y)\succeq0$, all $M_k(y)$
are positive semidefinite.
Thus $M(y)$ is also positive semidefinite. Indeed, let
$p\in\mathbb R\ax$
and $\ell=\max\{\deg p,N\}$. Then
${\vv p}^*M(y)\vv p={\vv p}^*M_\ell(y)\vv p\geq0$.
Thus by Corollary \ref{cor:flatall}, $y$ is a tracial moment sequence. The
representing matrices can be chosen to be of size at most $\rank M(y)=t$.
\end{proof}
\section{Positive definite moment matrices and trace-positive polynomials}\label{sec:poly}
In this section we explain how the representability of \emph{positive definite}
tracial moment matrices relates
to sum of hermitian squares representations of
trace-positive polynomials. We start by introducing some terminology.
An element of the form $g^*g$ for some $g\in\mathbb R\ax$ is called a
\textit{hermitian square} and we denote the set of all sums of hermitian
squares by
$$\Sigma^2=\{f\in\mathbb R\ax\mid f=\sum g_i^*g_i \;\text{for some}\; g_i\in\mathbb R\ax\}.$$
A polynomial $f\in \mathbb R \ax$ is \emph{matrix-positive} if $f(\ushort A)$ is positive
semidefinite for all tuples $\ushort A$ of symmetric matrices
$A_i\in \sym \mathbb R^{t\times t}$, $t\in\mathbb N$. Helton \cite{helton} proved that $f\in\mathbb R\ax$ is
matrix-positive if and only if $f\in \Sigma^2$ by solving a non-commutative
moment problem; see also \cite{McC}.
We are interested in a different type of positivity induced by
the trace.
\begin{dfn}\label{def:trpos}
A polynomial $f\in \mathbb R \ax$ is called \emph{trace-positive} if
$$\Tr(f(\ushort A))\geq 0\;\text{ for all}\; \ushort A\in(\sym\mathbb R^{t\times t})^n,\; t\in\mathbb N.$$
\end{dfn}
Trace-positive polynomials are intimately connected to deep open
problems from
e.g.~operator algebras (Connes' embedding conjecture \cite{ksconnes})
and mathematical physics (the Bessis-Moussa-Villani conjecture
\cite{ksbmv}), so a good understanding of this set is needed.
A distinguished subset is formed by sums of hermitian squares and
commutators.
\begin{dfn}
Let $\Theta^2$ be the set of all polynomials which are cyclically
equivalent to a sum of hermitian squares, i.e.,
\begin{equation}\label{eq:defcycsohs}
\Theta^2=\{f\in \mathbb R\ax\mid f\stackrel{\mathrm{cyc}}{\thicksim}\sum g_i^*g_i\;\text{for some}\;g_i \in\mathbb R\ax\}.
\end{equation}
\end{dfn}
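To make the distinction between $\Sigma^2$ and $\Theta^2$ concrete, consider the word $X^2Y^2$. Since sums of hermitian squares are symmetric and $(X^2Y^2)^*=Y^2X^2\neq X^2Y^2$, we have $X^2Y^2\notin\Sigma^2$. Nevertheless,
$$X^2Y^2\stackrel{\mathrm{cyc}}{\thicksim} XY^2X=(YX)^*(YX)\in\Sigma^2,$$
hence $X^2Y^2\in\Theta^2$.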
Obviously, all $f\in \Theta^2$ are trace-positive. However, in contrast to
Helton's sum of squares theorem mentioned above, the following
non-commutative version of the well-known Motzkin polynomial \cite[p.~5]{Mar} shows that
a trace-positive polynomial need not be a member of $\Theta^2$ \cite{ksconnes}.
\begin{example}\label{motznc}
Let $$M_{\rm nc}=XY^4X+YX^4Y-3XY^2X+1\in\mathbb R\axy.$$ Then $M_{\rm nc}\notin \Theta^2$ since
the commutative Motzkin polynomial is not a (commutative) sum of squares \cite[p.~5]{Mar}.
The fact that $M_{\rm nc}(A,B)$ has nonnegative trace for all symmetric matrices $A,B$
has been shown by Schweighofer and the second author \cite[Example 4.4]{ksconnes} using
Putinar's
Positivstellensatz \cite{Put}.
\end{example}
Let $\Sigma_k^2:=\Sigma^2\cap \mathbb R \ax_{\leq 2k}$ and $\Theta_k^2:=\Theta^2\cap \mathbb R \ax_{\leq 2k}$.
These are convex cones in $\mathbb R \ax_{\leq 2k}$.
By duality there exists a connection
between $\Theta_k^2$ and positive semidefinite tracial moment matrices of order $k$.
If every tracial moment matrix $M_k(y)\succeq0$ of order $k$ has a tracial representation
then every trace-positive polynomial of degree at most $2k$ lies in $\Theta_k^2$.
In fact:
\begin{thm}\label{thm:posdefmm}
The following statements are equivalent:
\begin{enumerate}[\rm (i)]
\item all truncated tracial sequences $(y_ w)_{\leq 2k}$ with
{\rm{positive definite}} tracial moment matrix $M_k(y)$ have a tracial moment representation \eqref{rep};
\item all trace-positive polynomials of degree $\leq2k$ are elements of $\Theta^2_k$.
\end{enumerate}
\end{thm}
For the proof we need some preliminary work.
\begin{lemma}\label{lem:thetaclosed}
$\Theta_k^2$ is a closed convex cone in $\mathbb R \ax_{\leq 2k}$.
\end{lemma}
\begin{proof}
Endow $\mathbb R\ax_{\leq 2k}$ with a norm
$\|\cdot\|$ and the quotient space $\mathbb R \ax_{\leq 2k}/_{\stackrel{\mathrm{cyc}}{\thicksim}}$
with the quotient norm
\begin{equation}\label{eq:qnorm}
\| \pi(f) \| := \inf \big\{ \| f+h \| \mid h\stackrel{\mathrm{cyc}}{\thicksim} 0\big\}, \quad
f\in\mathbb R\ax_{\leq 2k}.
\end{equation}
Here $\pi:\mathbb R\ax_{\leq 2k}\to \mathbb R \ax_{\leq 2k}/_{\stackrel{\mathrm{cyc}}{\thicksim}}$ denotes
the quotient map. (Note: due to the finite-dimensionality of $\mathbb R\ax_{\leq 2k}$,
the infimum on the right-hand side of \eqref{eq:qnorm} is attained.)
Since $\Theta_k^2= \pi^{-1} \big( \pi(\Theta_k^2)\big)$, it suffices
to show that $\pi(\Theta_k^2)$ is closed.
Let $d_k=\dim \mathbb R \ax_{\leq 2k}$. Since by Carath\'eodory's theorem \cite[p.~10]{bar} each element
of the convex cone $\pi(\Sigma^2_k)$ can be written as a convex combination of $d_k+1$ images
$\pi(g_i^*g_i)$ of hermitian squares with $g_i\in\mathbb R \ax_{\leq k}$, the image of
\begin{align*}
\varphi:\left(\mathbb R \ax_{\leq k}\right)^{d_k+1}
&\to
\mathbb R \ax_{\leq 2k}/_{\stackrel{\mathrm{cyc}}{\thicksim}}\\
(g_i)_{i=0,\dots,d_k}
&\mapsto
\pi\big(\sum_{i=0}^{d_k}g_i^*g_i\big)
\end{align*}
equals $\pi(\Sigma^2_k)=\pi(\Theta_k^2)$. In $\left(\mathbb R \ax_{\leq k}\right)^{d_k+1}$ we define
$\mathcal S:=\{g=(g_i)\mid \|g\|=1\}$. Note that $\mathcal S$ is compact, thus
$V:=\varphi(\mathcal S)\subseteq \pi(\Theta_k^2)$ is compact as well.
Since $0\notin \mathcal S$,
and a sum of hermitian squares cannot be cyclically equivalent to $0$ by
\cite[Lemma 3.2 (b)]{ksbmv}, we see that
$0\notin V$.
Let $(f_\ell)_\ell$ be a sequence in $\pi(\Theta^2_k)$ which converges to $\pi(f)$
for some $f\in\mathbb R \ax_{\leq 2k}$.
Write $f_\ell=\lambda_\ell v_\ell$ for $\lambda_\ell\in\mathbb R_{\geq 0}$ and $v_\ell\in V$.
Since $V$ is compact there exists a subsequence $(v_{\ell_j})_j$ of $v_\ell$ converging
to $v\in V$. Then
$$\lambda_{\ell_j}=\frac{\|f_{\ell_j}\|}{\|v_{\ell_j}\|}\stackrel{j\rightarrow \infty}{\longrightarrow }\frac{\|\pi(f)\|}{\|v\|}.$$
Thus $f_{\ell_j}\rightarrow \pi(f)=\frac{\|\pi(f)\|}{\|v\|}v\in\pi(\Theta^2_k)$.
\end{proof}
\begin{dfn}
To a truncated tracial sequence $(y_ w)_{\leq k}$ we
associate
the \emph{$($tracial$)$ Riesz functional} $L_y:\mathbb R \ax_{\leq k}\to\mathbb R$ defined by
$$L_y(p):=\sum_ w p_ w y_ w\quad\text{for } p=\sum_ w p_ w w\in \mathbb R\ax_{\leq k}.$$
We say that $L_y$ is \emph{strictly positive} ($L_y>0$), if
$$L_y(p)>0 \text{ for all trace-positive } p\in\mathbb R \ax_{\leq k},\, p\stackrel{\mathrm{cyc}}{\nsim} 0.$$
If $L_y(p)\geq0$ for all trace-positive $p\in\mathbb R \ax_{\leq k}$, then
$L_y$ is \emph{positive} ($L_y\geq0$).
\end{dfn}
Equivalently, a tracial Riesz functional $L_y$
is positive (resp., strictly positive) if and only if the map
$\bar L_y$ it induces on $ \mathbb R \ax_{\leq 2k}/_{\stackrel{\mathrm{cyc}}{\thicksim}}$ is
nonnegative (resp., positive) on
the nonzero images of trace-positive polynomials in $ \mathbb R \ax_{\leq 2k}/_{\stackrel{\mathrm{cyc}}{\thicksim}}$.
We shall prove that strictly positive Riesz functionals lie in the interior of the cone
of positive Riesz functionals,
and that truncated tracial sequences $y$ with \emph{strictly}
positive $L_y$ are truncated tracial moment sequences (Theorem \ref{thm:Lrep} below).
These results are motivated by and resemble the
results of Fialkow and Nie
\cite[Section 2]{fnie} in the commutative context.
\begin{lemma}\label{lem:Linner}
If $L_y>0$ then there exists an $\varepsilon>0$ such that $L_{\tilde y}>0$ for all
$\tilde y$ with $\|y-\tilde y\|_1<\varepsilon$.
\end{lemma}
\begin{proof}
We equip $\mathbb R \ax_{\leq 2k}/_{\stackrel{\mathrm{cyc}}{\thicksim}}$ with a quotient norm as in \eqref{eq:qnorm}.
Then $$\mathcal S:=\{\pi(p)\in \mathbb R \ax_{\leq 2k}/_{\stackrel{\mathrm{cyc}}{\thicksim}}\mid p\in\mathcal C_k,\;\|\pi(p)\|=1\}$$ is compact.
By a scaling argument, it suffices to show that $\bar L_{\tilde y}>0$ on $\mathcal S$ for $\tilde y$ close to $y$.
The map $y\mapsto \bar L_y$ is linear between finite-dimensional vector spaces.
Thus
$$|\bar L_{y'}(\pi(p))-\bar L_{y''}(\pi(p))|\leq C \|y'-y''\|_1$$ for all $\pi(p)\in \mathcal S$,
truncated tracial sequences $y',y''$, and some constant $C\in\mathbb R_{>0}$.
Since $\bar L_y$ is continuous and strictly positive on $\mathcal S$,
there exists an $\varepsilon>0$ such
that $\bar L_y(\pi(p))\geq2\varepsilon$ for all $\pi(p)\in \mathcal S$.
Let $\tilde y$ satisfy $\|y-\tilde y\|_1<\frac {\varepsilon}C$.
Then
\[\bar L_{\tilde y}(\pi(p))\geq \bar L_y(\pi(p))-C \|y-\tilde y\|_1\geq\varepsilon>0. \hfill\qedhere \]
\end{proof}
\begin{thm}\label{thm:Lrep}
Let $y=(y_ w)_{\leq k}$ be a truncated tracial sequence of order $k$.
If $L_y>0$, then $y$ is a truncated tracial moment sequence.
\end{thm}
\begin{proof}
We show first that
$y\in \overline T$, where $\overline T$ is the closure of
$$T=\big\{(y_ w)_{\leq k}\mid \exists \ushort A^{(i)}\;\exists \lambda_i\in \mathbb R_{\geq0} :\; y_ w=\sum \lambda_i\Tr( w(\ushort A^{(i)}))\big\}.$$
Assume $L_y>0$ but $y\notin \overline T$. Since $\overline T$ is a closed
convex cone in $\mathbb R^\eta$ (for some $\eta\in \mathbb N$), by the Minkowski separation
theorem there exists a vector $\vv{p}\in \mathbb R^\eta$ such that $\vv{p}^*y<0$
and $\vv{p}^*w\geq 0$ for all $w\in \overline T$. The non-commutative
polynomial corresponding to $\vv{p}$ is
trace-positive since $\vv{p}^*z\geq 0$ for all $z\in \overline T$. Thus
$0<L_y(p)=\vv{p}^*y<0$, a contradiction.
By Lemma \ref{lem:Linner}, all $\tilde y$ in a neighborhood of $y$ satisfy $L_{\tilde y}>0$ and
hence, by the argument above, lie in $\overline T$; that is, $y\in\inte(\overline T)$. Thus
$y\in \inte (\overline T)\subseteq T$ \cite[Theorem 25.20]{ber}.
\end{proof}
We remark that assuming only non-strict positivity of $L_y$ in Theorem \ref{thm:Lrep}
would not suffice for the existence of a tracial moment representation \eqref{rep}
for $y$. This is a consequence of Example \ref{expsd}.
\begin{proof}[Proof $(\!$of Theorem {\rm\ref{thm:posdefmm}}$)$]
To show (i) $\Rightarrow$ (ii), assume $f=\sum_ w f_ w w
\in\mathbb R\ax_{\leq 2k}$ is
trace-positive but $f\notin \Theta^2_k$.
By Lemma \ref{lem:thetaclosed}, $\Theta_k^2$ is a closed convex cone in $\mathbb R\ax_{\leq 2k}$, thus
by the Minkowski separation theorem we find a hyperplane which
separates $f$ and $\Theta_k^2$. That is, there is a linear form
$L:\mathbb R\ax_{\leq 2k}\to\mathbb R$ such that $L(f)<0$ and $L(p)\geq0$
for $p\in \Theta_k^2$. In particular, $L(h)=0$ for all $h\stackrel{\mathrm{cyc}}{\thicksim} 0$, i.e.,
without loss of generality, $L$ is tracial.
Since there are tracial states strictly positive on $\Sigma^2_k\setminus\{0\}$, we may assume $L(p)>0$
for all $p\in \Theta_k^2$, $p\stackrel{\mathrm{cyc}}{\nsim} 0$.
Hence
the bilinear form given by $$(p,q)\mapsto L(q^*p)$$ can be written as
$ L(q^*p)={\vv q}^*M\vv{p}$ for some truncated tracial moment matrix $M\succ0$.
By assumption, the corresponding truncated tracial sequence
$y$ has a tracial moment representation $$y_ w=\sum \lambda_i \Tr( w(\ushort A^{(i)}))$$
for some tuples $\ushort A^{(i)}$ of symmetric matrices $A_j^{(i)}$ and $\lambda_i\in \mathbb R_{\geq0}$,
which implies the contradiction
$$0>L(f)=\sum \lambda_i \Tr(f(\ushort A^{(i)}))\geq 0.$$
Conversely, if (ii) holds,
then $L_y>0$ if and only if $M_k(y)\succ0$. Thus a positive definite moment matrix $M_k(y)$
defines a strictly positive functional $L_y$ which by Theorem \ref{thm:Lrep} has a tracial
representation.
\end{proof}
As mentioned above, the Motzkin polynomial $M_{\rm nc}$
is trace-positive but $M_{\rm nc}\notin \Theta^2$. Thus by Theorem \ref{thm:posdefmm}
there exists at least one truncated tracial moment matrix which is positive definite but has
no tracial representation.
\begin{example}
Taking the index set
$$(1,X,Y,X^2,XY,YX,Y^2,X^2Y,XY^2,YX^2,Y^2X,X^3,Y^3,XYX,YXY),$$
the
matrix
$$M_3(y):=\left(\begin{smallmatrix}
1 & 0 & 0 & \frac74 & 0 & 0 & \frac74 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & \frac74 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{19}{16} & 0 & \frac{19}{16} & \frac{21}4 & 0 & 0 & 0 \\
0 & 0 & \frac74 & 0 & 0 & 0 & 0 & \frac{19}{16} & 0 & \frac{19}{16} & 0 & 0 & \frac{21}4 & 0 & 0 \\
\frac74 & 0 & 0 & \frac{21}4 & 0 & 0 & \frac{19}{16} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & \frac{19}{16} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & \frac{19}{16} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\frac74 & 0 & 0 & \frac{19}{16} & 0 & 0 &\frac{21}4 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & \frac{19}{16} & 0 & 0 & 0 & 0 & \frac{9}8 & 0 & \frac{5}6 & 0 & 0 & \frac{9}8 & 0 & 0 \\
0 & \frac{19}{16} & 0 & 0 & 0 & 0 & 0 & 0 & \frac{9}8 & 0 & \frac{5}6 & \frac{9}8 & 0 & 0 & 0 \\
0 & 0 & \frac{19}{16} & 0 & 0 & 0 & 0 & \frac{5}6 & 0 & \frac{9}8 & 0 & 0 & \frac{9}8 & 0 & 0 \\
0 & \frac{19}{16} & 0 & 0 & 0 & 0 & 0 & 0 & \frac{5}6 & 0 & \frac{9}8 & \frac{9}8 & 0 & 0 & 0 \\
0 & \frac{21}4 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{9}8 & 0 & \frac{9}8 & 51 & 0 & 0 & 0 \\
0 & 0 & \frac{21}4 & 0 & 0 & 0 & 0 & \frac{9}8 & 0 & \frac{9}8 & 0 & 0 & 51 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{5}6 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{5}6
\end{smallmatrix} \right)$$
is a tracial moment matrix of degree 3 in 2 variables and is positive definite.
But $$L_y(M_{\rm nc})=M_{\rm nc}(y)=-\frac5{16}<0.$$ Thus $y$
is not a truncated tracial moment sequence,
since otherwise $L_y(p)\geq 0$ for all trace-positive polynomials $p\in \mathbb R\axy_{\leq 6}$.
On the other hand, the (free) non-commutative moment problem is always
solvable for positive definite moment matrices \cite[Theorem 2.1]{McC}.
In our example this means
there are symmetric matrices $A,B\in\mathbb R^{15\times 15}$ and a vector
$v\in\mathbb R^{15}$ such that
$$y_ w=\langle w(A,B)v,v\rangle$$
for all $ w\in\axy_{\leq 3}$.
\end{example}
\begin{remark}
A trace-positive polynomial $f\in \mathbb R \ax$ of degree $2k$ lies in $\Theta^2_k$ if
and only if $L_y(f)\geq 0$ for all truncated tracial sequences $(y_w)_{\leq 2k}$ with
$M_k(y)\succeq0$.
This condition is obviously satisfied if all truncated tracial sequences $(y_w)_{\leq 2k}$ with
$M_k(y)\succeq0$ have a tracial representation.
Using this we can prove that trace-positive binary quartics, i.e.,
homogeneous polynomials of degree $4$ in $\mathbb R \langle X,Y\rangle$, lie in $\Theta_2^2$.
Equivalently, truncated tracial sequences $(y_w)$ indexed by words of degree $4$ with a
positive definite tracial
moment matrix have a tracial moment representation.
Furthermore,
trace-positive binary biquadratic polynomials, i.e., polynomials $f\in \mathbb R \axy$ with
$\deg_X f, \deg_Y f\leq 2$,
are cyclically equivalent to a sum of hermitian squares.
Example \ref{expsd} then shows that a polynomial $f$ can satisfy $L_y(f)\geq 0$ although there
are truncated tracial sequences $(y_w)_{\leq 2k}$ with $M_k(y)\succeq0$ and no
tracial representation.
Studying extremal points of the convex cone $$\{(y_w)_{\leq 2k}\mid M_k(y)\succeq 0\}$$
of truncated tracial sequences with positive semidefinite tracial moment matrices, we are able
to impose a concrete block structure on the matrices needed in a tracial moment representation.
These statements and concrete sum of hermitian squares and commutators representations of trace-positive polynomials
of low degree will be published elsewhere \cite{sb}.
\end{remark}
\section{Introduction}
Some tasks, due to their complexity, cannot be carried out by single individuals; they require the concourse of sets of people composing teams. Teams provide a structure and means of bringing together people with a suitable mix of individual properties (such as competences or personality). This can encourage the exchange of ideas, creativity, motivation and job satisfaction, and can actually extend individual capabilities. In turn, a suitable team can improve overall productivity and the quality of the performed tasks. However, sometimes teams work less effectively than initially expected for several reasons: a bad balance of their capacities, incorrect team dynamics, lack of communication, or difficult social situations. Team composition is thus a problem that has attracted the interest of research groups all over the world, including the area of multiagent systems (MAS). MAS research has widely acknowledged competences as important for performing tasks of different nature \cite{Anagnostopoulos12onlineteam,Chen2015,Okimoto,Rangapuram2015}. However, the majority of the approaches represent the capabilities of agents in a Boolean way (i.e., an agent either has a required skill or not). This is a simplistic way to model an agent's set of capabilities, as it ignores any skill degree. In real life, capabilities are not binary since every individual (e.g. human or software) shows a different performance for each competence. Additionally, the MAS literature has typically disregarded significant organizational psychology findings (with the exception of several recent, preliminary attempts like \cite{FarhangianPPS15} or \cite{alberola2016artificial}). Numerous studies in organizational psychology \cite{Arnold,Mount,White} underline the importance of personality traits or \emph{types} for team composition. Other studies have focused on how team members should differ or converge in their characteristics, such as experience, personality, level of skill, or gender, among others \cite{West}, in order to increase performance.
In this paper, we focus on scenarios where a complex task requires the collaboration of individuals within a team. More precisely, we consider a scenario where there are \emph{multiple instances of the same complex task}. The task has a task type and a set of competence requests with the competence levels needed to solve the task. We have a pool of human agents characterized by gender, personality, and a set of competences with competence levels.
Our goal is to partition agents into teams so that within a task all competence requirements are covered (whenever possible) and team members work well together. That is, each resulting team is both \emph{proficient} (covers the required competences) and \emph{congenial} (balances gender and psychological traits). We refer to these teams as \emph{synergistic teams}. We define the \emph{synergistic value} of a team as its balance in terms of competence, personality and gender. Each synergistic team works on the very same task. This scenario is present in many real-life settings, for instance a classroom or a crowdsourcing task.
To this end, we design an algorithm that uses a greedy technique to match competences with the required ones and, at the same time, to balance the psychological traits of team members.
This paper makes the following contributions. To start with, we formalise the synergistic team formation problem as the problem of partitioning a group of individuals into teams with limited size.
We provide an approximate local algorithm to solve the team composition problem. We empirically evaluate the algorithm using real data. Preliminary results show that our algorithm predicts the performance of teams better than experts who know the students' social situation, background and competences.
\textbf{Outline.} The remainder of this paper is structured as follows. Section~\ref{related} opens with an overview of the related work. Section~\ref{pers} gives the personality background for our model. Section~\ref{sec:model} describes the synergistic team composition problem and Section~\ref{sec:TeamForm} presents our algorithm to solve the synergistic team composition problem. Then, Section~\ref{sec:results} presents the results of our algorithm in the context of team composition in the classroom. Finally, Section~\ref{sec:discuss} discusses our approach and future work.
\vspace{-2mm}
\section{Background} \label{related}
To the best of our knowledge, \cite{farhangian2015agent} is the only model that considers both personality and competences while composing teams. There, the influence of personality on different task allocation strategies (minimizing either undercompetence or overcompetence) is studied. Hence, this work is the most relevant to ours; however, there are substantial differences between our work and \cite{farhangian2015agent}. Firstly, the authors do not propose an algorithm to compose teams based on \emph{both} personality and competences. Secondly, gender balance is not considered in their setting. Finally, \cite{farhangian2015agent} does not provide an evaluation involving real data (only an agent-based simulation is presented).
The rest of the literature relevant to this article is divided into two categories as proposed in \cite{andrejczuk}: those that consider agent capacities (individual and social capabilities of agents) and those that deal with agent personality (individual behaviour models).
\textbf{Capacity.}
The capacity dimension has been exploited by numerous previous works \cite{Anagnostopoulos12onlineteam,Chalkiadakis2012,Chen2015,Crawford,Liemhetcharat2014,Okimoto,JAR2015,Rangapuram2015}. In contrast to our work, where the competences are graded, in the majority of works agents are assumed to have multiple binary skills (i.e., the agent either has a skill or not). For instance, \cite{Okimoto,Crawford} use agents' capabilities to compose one k-robust team for a single task. A team is $k$-robust if removing any $k$ members from the team does not affect the completion of the task. \cite{Anagnostopoulos12onlineteam} uses competences and communication cost in a context where tasks sequentially arrive and teams have to be composed to perform them. Each task requires a specific set of competences and the team composition algorithm is such that the workload per agent is fair across teams.
\textbf{Personality.}
In the team formation literature, the only two models that, to our knowledge, consider personality to compose teams are \cite{FarhangianPPS15} and \cite{alberola2016artificial}. \cite{alberola2016artificial} uses Belbin theory to obtain humans' predominant \emph{roles} (we discuss this method in Section \ref{pers}). Additionally, gender is not taken into account while composing heterogeneous teams, which we believe may be important for team congeniality. Regarding \cite{FarhangianPPS15}, Farhangian et al. use the classical MBTI personality test (this method is discussed in Section \ref{pers}). They look for the best possible team built around a selected leader. In other words, the \emph{best} team for a particular task is composed. Gender balance is not considered in this setting either. Finally, although \cite{FarhangianPPS15}'s team composition considered real data, the resulting teams' performance was not validated in any real setting (Bayesian theory was used to predict the probability of success under various team composition conditions).
\vspace{-3mm}
\section{Personality} \label{pers}
In this section, we discuss the most prominent approaches to measure human personality and we explain the details of the method we have decided to examine.
Personality determines people's behaviour, cognition and emotion. Different personality theorists present their own definitions of personality and different ways to measure it based on their theoretical positions.
The most popular approach is to determine personality through a set of questions. There have been several simplified schemes developed over the years to profile human personality. The most popular are:
\begin{enumerate}
\vspace{-1.5mm}
\item the Five Factor Model (aka FFM or ``Big Five''), which uses five broad dimensions to describe human personality \cite{Costa};
\vspace{-1.5mm}
\item Belbin theory \cite{belbin}, which provides a theory on how different role types influence teamwork; and
\vspace{-1.5mm}
\item the Myers-Briggs Type Indicator (MBTI) scheme designed to indicate psychological preferences in how people perceive the world and make decisions \cite{Myers}.
\end{enumerate}
\vspace{-1.5mm}
According to \cite{Poropat}, FFM personality instruments fail to detect significant sex differences in personality structures. It is also argued that the Big Five dimensions are too broad and heterogeneous, and lack the specificity to make accurate predictions in many real-life settings \cite{Boyle,johnson2004genetic}.
Regarding Belbin theory, the results of previous studies considering the correlation between team composition and team performance are ambiguous. Even though some research shows weak support or does not show support for this theory at all \cite{batenburg2013belbin,van2008belbin,partington1999belbin}, it remains popular.
Finally, the MBTI measure consists of four dimensions on a binary scale (e.g. either the person is Extrovert or Introvert). Within this approach, every person falls into one of the sixteen possible combinations of the four letter codes, one letter representing one dimension. This approach is easy to interpret by non-psychologists, though reliance on dichotomous preference scores rather than continuous scores excessively restricts the level of statistical analysis \cite{devito}.
Having considered the arguments above, we have decided to explore a novel method: the Post-Jungian Personality Theory, which is a modified version of the Myers-Briggs Type Indicator (MBTI) \cite{Myers}, the ``Step II'' version of Quenk, Hammer and Majors \cite{Wilde2013}. The questionnaire to determine personality is short: it contains only 20 quick questions (compared to the 93 MBTI questions). This is very convenient for both experts wanting to design teams and individuals doing the test, since completing the test takes just a few minutes (for details of the questionnaire, see \cite[p.21]{Wilde2013}). Douglass J. Wilde claims that it covers the same psychological territory as MBTI \cite{Wilde2009}. In contrast to the MBTI measure, which consists of four binary dimensions, the Post-Jungian Personality Theory uses the \emph{numerical} data collected using the questionnaire \cite{Wilde2011}. The results of this method seem promising, since within a decade this novel approach has tripled the fraction of Stanford teams awarded national prizes by the Lincoln Foundation \cite{Wilde2009}.
The test is based on the pioneering psychiatrist Carl Gustav Jung's cognitive-mode personality model \cite{PT}. It has two sets of variable pairs called psychological functions:
\vspace{-1.5mm}
\begin{itemize}
\item {\bf Sensing / Intuition (SN)} --- describes the way of approaching problems
\vspace{-1.5mm}
\item {\bf Thinking / Feeling (TF)} --- describes the way of making decisions
\end{itemize}
\vspace{-1.5mm}
and two sets of psychological attitudes:
\vspace{-1.5mm}
\begin{itemize}
\item {\bf Perception / Judgment (PJ)} --- describes the way of living
\vspace{-1.5mm}
\item {\bf Extroversion / Introversion (EI)} --- describes the way of interacting with the world
\end{itemize}
\vspace{-1.5mm}
For instance, for the Thinking-Feeling (TF) dimension, a value between $-1$ and $0$ means that a person is of the feeling type, and a value between $0$ and $1$ means she is of the thinking type. Psychological functions and psychological attitudes together compose a personality. Every dimension of a personality (EI, SN, TF, PJ) is tested by five multiple-choice true/false questions.
\vspace{-2mm}
\section{Team Composition Model}\label{sec:model}
In this section we introduce and formalise our team composition problem. First, Section \ref{ssec:basic} introduces the basic notions of agent, personality, competence, and team, upon which we formalise our problem. Next, we formalise the notion of task assignment for a single team and a single task, and we characterise different types of assignments. Sections \ref{ssec:proficiency} and \ref{ssec:congeniality} show how to evaluate the proficiency and congeniality degrees of a team. Based on these measures, in Section \ref{ssec:synergisticProblem} we formalise the \emph{synergistic team composition problem}.
\subsection{Basic definitions}
\label{ssec:basic}
In our model, we consider that each agent is a human. We characterise each agent by the following properties:
\begin{itemize}
\vspace{-1.5mm}
\item A unique \emph{identifier} that distinguishes an agent from others (e.g. ID card number, passport number, employee ID, or student ID).
\vspace{-1.5mm}
\item \emph{Gender.} Human agents are either a man or a woman.
\item A \emph{personality} represented by four personality traits. Each personality trait is a number between -1 and 1.
\item A \emph{set of competences}. A competence integrates knowledge, skills, personal values, and attitudes that enable an agent to act correctly in a job, task or situation \cite{roe2002competences}. Each agent is assumed to possess a set of competences with associated competence levels. This set may vary over time as an agent evolves.
\end{itemize}
\vspace{-1.5mm}
Next, we formalise the above-introduced concepts.
\vspace{-1.5mm}
\begin{mydef}
A \emph{personality profile} is a vector $\langle sn, \mathit{tf}, ei, pj \rangle \in [-1, 1]^4$, where each $sn, \mathit{tf}, ei, pj$ represents one personality trait.
\end{mydef}
We denote by $C = \{c_1, \dots , c_m\}$ the whole set of competences, where each element $c_i \in C$ stands for a competence.
\begin{mydef}
A \emph{human agent} is represented as a tuple $\langle id, g, \emph{{\bf p}}, l \rangle$ such that:
\begin{itemize}
\item $id$ is the agent's identifier;
\item $g \in \{man, {\mathit woman}\}$ stands for their gender;
\item $\emph{\bf{p}}$ is a personality profile vector $\langle sn, \mathit{tf}, ei, pj \rangle \in [-1, 1]^4$;
\item $l: C \to{[0,1]}$ is a function that assigns to each competence $c \in C$ the probability that the agent will successfully show competence $c$. We will refer to $l(c)$ as the \emph{competence level} of the agent for competence $c$. We assume that when an agent does not have a competence (or we do not know about it), the level of this competence is zero.
\end{itemize}
\end{mydef}
Henceforth, we will note the set of agents as $A =\{a_1,\ldots, \linebreak a_n\}$. Moreover, we will use superscripts to refer to agents' components. For instance, given an agent $a \in A$, $id^{a}$ will refer to the $id$ component of agent $a$. We will employ matrix $L \in [0,1]^{n \times m}$ to represent the competence levels for each agent and each competence.
\vspace{-2mm}
\begin{mydef}[Team] A \emph{team} is any non-empty subset of $A$ with at least two agents. We denote by ${\cal K}_A = (2^A \setminus \{\emptyset\})\setminus \{\{a_i\}\mid a_i \in A\}$ the set of all possible teams in $A$.
\end{mydef}
\vspace{-2mm}
We assume that agents in teams coordinate their activities for mutual benefit.
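For illustration purposes, this agent model can be encoded as the following Python data structure (a minimal sketch; the class and field names are purely illustrative and not taken from any existing implementation):
\begin{verbatim}
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class Agent:
    id: str
    gender: str                       # 'man' or 'woman'
    personality: Tuple[float, float, float, float]  # (sn, tf, ei, pj), each in [-1, 1]
    competences: Dict[str, float] = field(default_factory=dict)

    def level(self, c: str) -> float:
        """Competence level l(c); unknown competences default to 0."""
        return self.competences.get(c, 0.0)

a1 = Agent('id1', 'woman', (0.2, -0.5, 0.8, -0.1), {'c1': 0.9, 'c2': 0.5})
\end{verbatim}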
\subsection{The task assignment problem}
\label{ssec:assignment}
In this section we focus on how to assign a team to a task.
A task type determines the competence levels required for the task as well as the importance of each competence with respect to the others. For instance, some tasks may require a high level of creativity because they were never performed before (so there are no qualified agents in this matter). Others may require a highly skilled team with a high degree of coordination and teamwork (as it is the case for rescue teams). Therefore, we define a task type as:
\begin{mydef}
A task type $\tau$ is defined as a tuple \\ $\langle \lambda, \mu, {\{(c_{i},l_{i}, w_{i})\}_{i \in I_{\tau}}} \rangle$ such that:
\begin{itemize}
\item $\lambda \in [0,1]$ is the importance given to proficiency;
\item $\mu \in [-1,1]$ is the importance given to congeniality;
\item $c_{i} \in C$ is a competence required to perform the task;
\item $l_{i} \in [0,1]$ is the required competence level for competence $c_i$;
\vspace{-1.5mm}
\item $w_{i} \in [0,1]$ is the importance of competence $c_i$ for the success of task of type $\tau$; and
\vspace{-1.5mm}
\item $\sum_{i \in I_{\tau}} w_i = 1$.
\end{itemize}
\end{mydef}
We will discuss the meaning of $\lambda$ and $\mu$ further ahead when defining synergistic team composition (see subsection \ref{ssec:synergisticProblem}).
Then, we define a task as:
\vspace{-1.5mm}
\begin{mydef}A \emph{task} $t$ is a tuple $\langle \tau, m \rangle$ such that $\tau$ is a task type and $m$ is the required number of agents, where $m\geq 2$.
\end{mydef}
Henceforth, we denote by $T$ the set of tasks and by $\mathcal{T}$ the set of task types. Moreover, we will note as $C_{\tau} =\{c_{i} | i \in I_{\tau}\}$ the set of competences required by task type $\tau$.
Given a team and a task type, we must consider how to assign competences to team members (agents). Our first, weak notion of task assignment only requires that all competences in a task type are assigned to some agent(s) in the team.
\begin{mydef}Given a task type $\tau$ and a team $K \in {\cal K}_A$, an assignment is a function $\eta: K \to 2^{C_{\tau}}$ satisfying that
$C_{\tau} \subseteq \bigcup_{a \in K} \eta(a)$.
\end{mydef}
\subsection{Evaluating team proficiency} \label{ssec:prof}
\label{ssec:proficiency}
Given a task assignment for a team, next we will measure the \emph{degree of competence} of the team as a whole. This measure will combine both the degree of under-competence and the degree of over-competence, which we formally define first. Before that, we must formally identify the agents that are assigned to each competence as follows.
\vspace{-1.5mm}
\begin{mydef}
Given a task type $\tau$, a team $K$, and an assignment $\eta$, the set $\delta(c_{i}) = \{a \in K | c_{i} \in \eta(a)\}$ stands for the agents assigned to cover competence $c_{i}$.
\end{mydef}
\vspace{-1.5mm}
Now we are ready to define the degrees of undercompetence and overcompetence.
\vspace{-1.5mm}
\begin{mydef}[Degree of undercompetence] \item
\vspace{-1.6mm}
Given a task type $\tau$, a team $K$, and an assignment $\eta$, we define the degree of undercompetence of the team for the task as:
\vspace{-2.5mm}
\begin{equation*}
u(\eta)=
\sum_{i \in I_{\tau}} w_{i} \cdot \frac{\sum_{a \in \delta(c_{i})} |\min(l^{a}(c_{i}) - l_{i},0)|}{|\{a \in \delta(c_{i})|l^{a}(c_{i})-l_{i} < 0\}|}
\end{equation*}
\end{mydef}
\vspace{-2.5mm}
\begin{mydef}[Degree of overcompetence] \item
\vspace{-1.6mm}
Given a task type $\tau$, a team $K$, and an assignment $\eta$, we define the degree of overcompetence of the team for the task as:
\vspace{-2.5mm}
\begin{equation*}
o(\eta)=
\sum_{i \in I_{\tau}} w_i \cdot \frac{\sum_{a \in \delta(c_{i})} \max(l^{a}(c_{i}) - l_{i},0)}{|\{a \in \delta(c_{i})|l^{a}(c_{i})-l_{i} > 0\}|}
\end{equation*}
\end{mydef}
\vspace{-1.5mm}
Given a task assignment for a team, we can calculate its competence degree to perform the task by combining its overcompetence and undercompetence as follows.
\vspace{-1.5mm}
\begin{mydef}Given a task type $\tau$, a team $K$ and an assignment $\eta$, the competence degree of the team to perform the task is defined as:
\begin{equation}
\label{eq:uprof}
u_{\mathit{prof}}(\eta) = 1-(\upsilon \cdot u(\eta)+(1-\upsilon) \cdot o(\eta))
\end{equation}
where $\upsilon \in [0,1]$ is the penalty given to the undercompetence of team $K$.
\end{mydef}
\vspace{-1.5mm}
Notice that the larger the value of $\upsilon$, the more the undercompetence of team $K$ is penalized, while the lower the value of $\upsilon$, the more its overcompetence is penalized. The intuition here is that we might want to penalize the undercompetence of teams more, as some tasks strictly require teams to be at least as competent as specified by the task type.
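The following Python function computes $u_{\mathit{prof}}$ directly from the definitions above (a sketch; the data structures and names are illustrative, and empty averages are read as $0$):
\begin{verbatim}
def u_prof(levels, assignment, requirements, upsilon=0.5):
    """levels: {agent: {competence: level}};
    assignment: {agent: set of competences assigned to the agent};
    requirements: {competence: (required_level, weight)}, weights sum to 1."""
    under = over = 0.0
    for c, (req, w) in requirements.items():
        gaps = [levels[a].get(c, 0.0) - req
                for a, cs in assignment.items() if c in cs]
        neg = [-g for g in gaps if g < 0]
        pos = [g for g in gaps if g > 0]
        if neg:                                 # average undercompetence
            under += w * sum(neg) / len(neg)
        if pos:                                 # average overcompetence
            over += w * sum(pos) / len(pos)
    return 1 - (upsilon * under + (1 - upsilon) * over)
\end{verbatim}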
\vspace{-1.5mm}
\begin{proposition}
For any $\eta$, $u(\eta) + o(\eta) \in [0,1]$.
\label{prop1}
\end{proposition}
\begin{proof}
Fix $i \in I_{\tau}$ and $a \in \delta(c_{i})$. At most one of
$|\min(l^{a}(c_{i}) - l_{i},0)|$ and $\max(l^{a}(c_{i})-l_{i},0)$ is nonzero, and since
$l^{a}(c_{i}), l_{i} \in [0,1]$ we have $|\min(l^{a}(c_{i}) - l_{i},0)| \leq l_{i}$ as well as
$\max(l^{a}(c_{i})-l_{i},0) \leq 1-l_{i}$.
Hence, for each $i \in I_{\tau}$, the average of $|\min(l^{a}(c_{i}) - l_{i},0)|$ over the
undercompetent agents $\{a \in \delta(c_{i}) \mid l^{a}(c_{i})-l_{i} < 0\}$ is at most $l_{i}$,
and the average of $\max(l^{a}(c_{i})-l_{i},0)$ over the overcompetent agents
$\{a \in \delta(c_{i}) \mid l^{a}(c_{i})-l_{i} > 0\}$ is at most $1-l_{i}$ (an empty average
being read as $0$), so the sum of both averages lies in $[0,1]$. Since
$\sum_{i \in I_{\tau}} w_i = 1$, summing these bounds weighted by the $w_i$ yields
$u(\eta) + o(\eta) \in [0,1]$.
\end{proof}
\vspace{-1.5mm}
Function $u_{\mathit{prof}}$ is used to measure how proficient a team is for a given task assignment. However, counting on the required competences to perform a task does not guarantee that the team will succeed at performing it. Therefore, in the next subsection we present an evaluation function to measure \emph{congeniality} within teams. Unlike our measure for proficiency, which is based on considering a particular task assignment, our congeniality measure will solely rely on the personalities and genders of the members of a team.
\subsection{Evaluating team congeniality} \label{ssec:con}
\label{ssec:congeniality}
Inspired by the experiments of Douglass J. Wilde \cite{Wilde2009} we will define the team utility function for congeniality $u_{con}(K)$, such that:
\begin{itemize}
\vspace{-1.5mm}
\item it values more teams whose SN and TF personality dimensions are as diverse as possible;
\vspace{-1.5mm}
\item it prefers teams with at least one agent with positive EI and TF dimensions and negative PJ dimension, namely an extrovert, thinking and judging agent (called ETJ personality),
\vspace{-1.5mm}
\item it values more teams with at least one introvert agent;
\vspace{-2.5mm}
\item it values gender balance in a team.
\end{itemize}
Therefore, the higher the value of function $u_{con}(K)$, the more diverse the team is.
Formally, this team utility function is defined as follows:
\vspace{-1mm}
\begin{equation}
\label{eq:ucon}
\begin{aligned}
u_{con}(K) = & \sigma_{SN}(K) \cdot \sigma_{TF}(K) + \max_{a_i \in K}{((0,\alpha, \alpha, \alpha) \cdot {\bf p_i}, 0)} \\
& + {\max_{a_i \in K}{((0,0,-\beta,0) \cdot {\bf p_i}, 0)}} + \gamma \cdot \sin{(\pi \cdot g(K))}
\end{aligned}
\vspace{-2.5mm}
\end{equation}
where the different parameters are explained next.
\begin{itemize}
\vspace{-1.5mm}
\item $\sigma_{SN}(K)$ and $\sigma_{TF}(K)$: These variances are computed over the SN and TF personality dimensions of the members of team $K$. Since we want to maximise $u_{con}$, we want these variances to be as large as possible. The larger the values of $\sigma_{SN}$ and $\sigma_{TF}$ the larger their product will be, and hence the larger team diversity too.
\vspace{-4mm}
\item $\alpha$: The maximum variance of any distribution over an interval $[a,b]$ corresponds to a distribution with the elements evenly situated at the extremes of the interval. The variance will always be $\sigma^2 \le ((b-a)/2)^2$. In our case, with $b=1$ and $a=-1$, we have $\sigma \le 1$. Then, to make the four factors equally important and given that the maximum value for ${\bf p_i}$ (the personality profile vector of agent $a_i$) would be $(1, 1, 1, 1)$, a maximum value for $\alpha$ would be $3 \alpha = ((1-(-1))/2)^2 = 1$, as we have the factor $\sigma_{SN} \cdot \sigma_{TF}$, so $\alpha \le 0.33(3)$. For values situated in the middle of the interval the variance will be $\sigma^2 \le \frac{(b-a)^2}{12}$, hence a reasonable value for $\alpha$ would be $\alpha = \frac{\sqrt{(1-(-1))^2/12}}{3} \approx 0.19$.
\vspace{-1.5mm}
\item $\beta$: A similar reasoning shows that $\beta \le 1$.
\vspace{-1.5mm}
\item $\gamma$: This parameter weighs the importance of gender balance, and $g(K) = \frac{w(K)}{w(K) + m(K)}$, where $w(K)$ and $m(K)$ denote the number of women and men in team $K$, respectively. Notice that for a perfectly gender-balanced team with $w(K) = m(K)$ we have that
$\sin{(\pi \cdot g(K))} = 1$. The higher the value of $\gamma$, the more important gender balance is in $u_{con}$. By a reasoning similar to that for $\alpha$ and $\beta$, we assess $\gamma \leq 1$. In order to make this factor less important than the others in the equation, we experimentally assessed that $\gamma = 0.1$ is a good compromise.
\end{itemize}
\vspace{-1.5mm}
In summary, we will use a utility function $u_{con}$ such that: $\alpha = \frac{\sigma_{SN}(K) \cdot \sigma_{TF}(K)}{3}$, $\beta = 3 \cdot \alpha$ and $\gamma = 0.1$.
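The following Python sketch computes $u_{con}$ with these parameter choices (we read $\sigma_{SN}$ and $\sigma_{TF}$ as population variances, as described above; the function and variable names are illustrative):
\begin{verbatim}
import math
from statistics import pvariance

def u_con(team, gamma=0.1):
    """team: list of tuples (sn, tf, ei, pj, gender)."""
    var_sn = pvariance([m[0] for m in team])     # diversity in SN
    var_tf = pvariance([m[1] for m in team])     # diversity in TF
    alpha = var_sn * var_tf / 3
    beta = 3 * alpha
    etj = max(max(alpha * (m[1] + m[2] + m[3]) for m in team), 0)
    intro = max(max(-beta * m[2] for m in team), 0)   # rewards an introvert
    women = sum(1 for m in team if m[4] == 'woman')
    balance = math.sin(math.pi * women / len(team))   # gender balance term
    return var_sn * var_tf + etj + intro + gamma * balance
\end{verbatim}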
\subsection{Evaluating synergistic teams}
Depending on the task type, different degrees of importance should be given to congeniality and proficiency. For instance, creative tasks require a high level of communication and exchange of ideas; hence, teams require a certain level of congeniality. In contrast, repetitive tasks require good proficiency and less communication. The importance of proficiency ($\lambda$) and congeniality ($\mu$) is therefore a fundamental aspect of the task type. Now, given a team, we can combine its competence value (in equation \ref{eq:uprof}) with its congeniality value (in equation \ref{eq:ucon}) to measure its \emph{synergistic value}.
\vspace{-1.5mm}
\begin{mydef}
Given a team $K$, a task type $\tau = \linebreak \langle \lambda, \mu, {\{(c_{i},l_{i}, w_{i})\}_{i \in I_{\tau}}} \rangle$ and a task assignment $\eta: K \rightarrow 2^{C_{\tau}}$, the synergistic value of team $K$ is defined as:
\vspace{-1.5mm}
\begin{equation}
s(K,\eta) = \lambda \cdot u_{\mathit{prof}}(\eta) + \mu \cdot u_{con}(K)
\end{equation}
where $\lambda \in [0,1]$ is the grade to which the proficiency of team $K$ is important, and $\mu \in [-1,1]$ is the grade to which the task requires diverse personalities.
\end{mydef}
\begin{figure}
\caption{Values of congeniality and proficiency with respect to the task type.}
\begin{tikzpicture}
\begin{axis}[
axis line style={->},
x label style={at={(axis description cs:0.5,-0.1)},anchor=north},
y label style={at={(axis description cs:-0.1,.5)},anchor=south},
xlabel=Proficiency ($\lambda$),
ylabel=Congeniality ($\mu$),
xmin=0,
xmax=1,
ymin=-1,
ymax=1,
unit vector ratio=6 1,
]
\node[black] at (axis cs:0.25,0.5) {
\begin{tabular}{c}
Creative \\ General tasks
\end{tabular}};
\node[black] at (axis cs:0.25,-0.5) {\begin{tabular}{c}
Structured \\ General tasks
\end{tabular}};
\node[black] at (axis cs:0.75,0.5) {\begin{tabular}{c}
Creative \\ Specialized tasks
\end{tabular}};
\node[black] at (axis cs:0.75,-0.5) {\begin{tabular}{c}
Structured \\ Specialized tasks
\end{tabular}};
\draw [black, thick] (axis cs:0,-1) rectangle (axis cs:0.5,1);
\draw (0,0) -- (1,0);
\end{axis}
\end{tikzpicture}
\label{tbl:parameters}
\vspace{-6mm}
\end{figure}
Figure \ref{tbl:parameters} shows the relation between the parameters $\lambda$ and $\mu$.
In general, the higher the $\lambda$, the higher the importance given to the proficiency of a team. The higher the $\mu$, the more important personality diversity is. Notice that $\mu$ can be lower than zero. With a negative $\mu$, the congeniality value must be as low as possible to maximize $s(K,\eta)$, and so team homogeneity is preferred. This situation may happen when performing tasks in unconventional performance environments where failure has serious consequences. In order to quickly resolve issues, a team needs to be proficient and have team-mates who understand one another with minimum communication cost (which is associated with team homogeneity).
\subsection{The synergistic team composition problem}
\label{ssec:synergisticProblem}
In what follows we consider that there are multiple instances of the same task to perform. Given a set of agents $A$, our goal is to split them into teams so that each team, and the whole partition of agents into teams, is balanced in terms of competences, personality and gender.
We shall refer to these balanced teams as \emph{synergistic teams}, meaning that they are both congenial and proficient.
Therefore, we can regard our team composition problem as a particular type of set partition problem. We will refer to any partition of $A$ as a team partition. However, we are interested in a particular type of team partitions, namely those where teams are constrained by size $m$ as follows.
\begin{mydef}
Given a set of agents $A$, we say that a team partition $P_m$ of $A$ is constrained by size $m$ iff: (i) for every team $K_i \in P_m$, $K_i \in {\cal K}_A$ and $\max(m-1, 2) \leq |K_i| \leq m+1$ hold; and (ii) for every pair of teams $K_i, K_j \in P_m$, $||K_i| - |K_j|| \le 1$.
\end{mydef}
As $|A| / m$ is not necessarily a natural number, we may need to allow for some flexibility in team size within a partition. This is why we introduced above the condition $\max(m-1, 2) \leq |K_i| \leq m+1$. In practical terms, in a partition we may have teams differing by one agent. We note by ${\cal P}_m(A)$ the set of all team partitions of $A$ constrained by size $m$. Henceforth, we will focus on team partitions constrained by some size. Since our goal is to find the most competence-balanced and psychologically-balanced team partition, we need a way to measure the synergistic value of a team partition, which we define as follows:
\begin{mydef}
Given a task $t = \langle \tau, m \rangle$, a team partition $P_m$ and an assignment $\eta_i$ for each team $K_i \in P_m$, the synergistic value of $P_m$ is computed by:
\vspace{-1.5mm}
\begin{equation}
u(P_m,\bm{\eta}) = \prod_{i =1}^{|P_m|} s(K_i,\eta_i)
\end{equation}
\vspace{-1.5mm}
where $\bm{\eta}$ stands for the vector of task assignments $\eta_1,\ldots, \linebreak \eta_{|P_m|}$.
\end{mydef}
Notice that the use of a Bernoulli-Nash function over the synergistic values of teams will favour team partitions whose synergistic values are balanced.
Now we are ready to cast the synergistic team composition problem as the following optimisation problem:
\begin{mydef}
Given a task $t = \langle \tau, m \rangle$ and a set of agents $A$, the \textbf{synergistic team formation problem (STFP)} is the problem of finding a team partition constrained by size $m$, together with a competence assignment for each of its teams, whose synergistic value is maximal. Formally, the STFP is the problem of finding a partition $P_m \in \mathcal{P}_m(A)$ and task assignments $\bm{\eta}$ for the teams in $P_m$ that maximise $u(P_m,\bm{\eta})$.
\end{mydef}
\vspace{-2mm}
\section{Solving STFP}\label{sec:TeamForm}
In this section we detail an algorithm, the so-called \emph{SynTeam}, which solves the synergistic team formation problem described above. We start by describing how to split agents into a partition (see subsection \ref{ssec:dist}). Next, we move on to the problem of assigning the competences in a task to team members (see subsection \ref{ssec:asg}) so that the synergistic utility is maximal. Finally, we explain \emph{SynTeam}, a greedy algorithm that quickly finds a first, local solution and subsequently improves it, aiming to reach a global optimum.
\subsection{How do we split agents?} \label{ssec:dist}
We note by $n = |A|$ the number of agents in $A$, by $m \in \mathbb{N}$ the target number of agents in each team, and by $b$ the minimum total number of teams, $b = \left\lfloor n/m\right\rfloor$. We define the quantity distribution of agents in teams of a partition, noted $T: \mathbb{N} \times \mathbb{N} \to \mathbb{N} \times \mathbb{N} \cup (\mathbb{N} \times \mathbb{N})^2 $ as:
\vspace{-2mm}
\begin{equation}
\begin{multlined}
T(n,m) = \\
\begin{cases}
\{(b, m)\} & \text{if } n \geq m \textit{ and } n \bmod m = 0
\\
\{(n \bmod m,m + 1), \\(b - (n \bmod m),m)\}
& \text{if } n \geq m \textit{ and } n \bmod m \le b
\\
\{(b, m),(1, n \bmod m)\} & \text{if } n \geq m \textit{ and } n \bmod m > b
\\
\{(0,m)\} & \text{otherwise}
\end{cases}
\end{multlined}
\end{equation}
Note that depending on the cardinality of $A$ and the desired team size, the number of agents in each team may vary by one individual (for instance, if there are $n=7$ agents in $A$ and we want to compose duets ($m=2$), we split the agents into two duets and one triplet).
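A direct Python transcription of $T$ reads as follows (a sketch; the function name is illustrative):
\begin{verbatim}
def team_size_distribution(n, m):
    """Return a list of (number_of_teams, team_size) pairs."""
    if n < m:
        return [(0, m)]
    b, r = divmod(n, m)            # b = floor(n/m), r = n mod m
    if r == 0:
        return [(b, m)]
    if r <= b:                     # r teams absorb one extra agent each
        return [(r, m + 1), (b - r, m)]
    return [(b, m), (1, r)]        # leftover agents form one smaller team

print(team_size_distribution(7, 2))   # [(1, 3), (2, 2)]: one triplet, two duets
\end{verbatim}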
\subsection{Solving an Assignment} \label{ssec:asg}
There are different methods to build an assignment. We have decided to solve our assignment problem by using the minimum cost flow model \cite{ahuja1993network}. This is one of the most fundamental problems within network flow theory and it can be efficiently solved. For instance, in \cite{orlin1993faster} it was proven that the minimum cost flow problem can be solved in $O(m \cdot \log(n) \cdot (m + n \cdot \log(n)))$ time with $n$ nodes and $m$ arcs.
Our problem is as follows:
There are a number of agents in team $K$ and a number of competence requests in task $t$. Any agent can be assigned to any competence, incurring a cost that varies depending on the agent's competence level in the assigned competence. We want each competence assigned to at least one agent and each agent assigned to at least one competence, in such a way that the total cost (accounting for both undercompetence and overcompetence) of the assignment is minimal over all such assignments.
Formally, let $G = (N, E)$ be a directed network defined by a set $N$ of $n$ nodes and a set $E$ of $e$ directed arcs. There are four types of nodes: (1) one source node; (2) $|K|$ nodes that represent agents in team $K$; (3) $|C_{\tau}|$ nodes that represent the competence requests of task type $\tau$; and (4) one sink node. Each arc $(i, j) \in E$ has an associated cost $p_{ij} \in \mathbb{R}^+$ that denotes the cost per unit flow on that arc. We also associate with each arc $(i, j) \in E$ a capacity $u_{ij} \in \mathbb{R}^+$ that denotes the maximum amount that can flow on the arc. In particular, we have three kinds of edges: (1) Supply arcs. These edges connect the source to agent nodes. Each of these arcs has zero cost and a positive capacity $u_{ij}$ which defines how many competences at most can be assigned to each agent. (2) Transportation arcs. These are used to ship supplies. Every transportation edge $(i, j) \in E$ is associated with a shipment cost $p_{ij}$ that is equal to:
\begin{equation}
p_{ij} =
\begin{cases}
(l^{a_i}(c_{\mathit{j}}) - l_{\mathit{j}}) \cdot (1-\upsilon) \cdot w_{\mathit{j}} & \text{if } l^{a_i}(c_{\mathit{j}}) - l_{\mathit{j}} > 0\\
-(l^{a_i}(c_{\mathit{j}}) - l_{\mathit{j}}) \cdot \upsilon \cdot w_{\mathit{j}} & \text{if } l^{a_i}(c_{\mathit{j}}) - l_{\mathit{j}} < 0\\
0 & \text{otherwise}
\end{cases}
\label{costeq}
\end{equation}
\noindent
where $\upsilon \in [0,1]$ is the penalty given to the undercompetence of team $K$ (see subsection \ref{ssec:prof} for the definition).
(3) Demand arcs. These arcs connect the competence request nodes to the sink node. These arcs have zero costs and positive capacities $u_{ij}$ which equal the demand for each competence.
Thus, a network is denoted by $(G, p, u, b)$. We associate with each node $i \in N$ an integer number $b(i)$ representing its supply. If $b(i) > 0$ then $i$ is a source node; if $b(i) < 0$ then $i$ is a sink node. In order to solve a task assignment problem, we use the implementation of \cite{goldberg1990finding} provided in the or-tools library.\footnote{\url{https://github.com/google/or-tools/blob/master/src/graph/min_cost_flow.h}}
\vspace{-2mm}
\begin{figure}
\includegraphics[max size={\textwidth}{10.35cm}]{attach/asg.png}
\caption{An example of an assignment graph $G(N,E)$}\label{asg}
\vspace{-6mm}
\end{figure}
\paragraph{Example} Let us consider a team of three agents $K = \{a_1, a_2, a_3\}$:
\begin{itemize}
\vspace{-1.5mm}
\item $a_1 = \langle id_1, `woman', p_1, [l(c_1) = 0.9, l(c_2) = 0.5]\rangle$
\vspace{-1.5mm}
\item $a_2 = \langle id_2, `man', p_2, [l(c_2) = 0.2, l(c_3) = 0.8]\rangle$
\vspace{-1.5mm}
\item $a_3 = \langle id_3, `man', p_3, [l(c_2) = 0.4, l(c_4) = 0.6]\rangle$
\end{itemize}
and task type $\tau$ containing four competence requests \\ $\{(c_{1},0.8, 0.25), (c_{2}, 0.6, 0.25), (c_{3},0.6, 0.25),(c_{4},0.6, 0.25)\}$. \\ The penalty given to undercompetence is equal to $\upsilon=0.6$.
Our goal is to assign agents to competence requests so that: (1) every agent is responsible for at least one competence, (2) every competence is covered by at least one agent, and (3) the overall ``cost'' is minimal.
As shown in Figure~\ref{asg}, we build a graph out of $n = 9$ nodes, that is: one source node ($N_0$), three agent nodes ($N_1 - N_3$), four competence nodes ($N_4 - N_7$) and a sink node ($N_8$). Next, we add edges: (1) between the source node $N_0$ and all agent nodes $N_1 - N_3$, with cost $p_{si} = 0$ and capacity $u_{si} = 2$ for all $i$, as the maximum number of competences assigned to one agent cannot be bigger than two if we want to make sure that all agents are assigned at least one competence; (2) between agent nodes $N_1 - N_3$ and competence nodes $N_4 - N_7$, where each capacity $u_{ij} = 1$ and we calculate costs according to equation~\ref{costeq}. For instance, the cost between $N_1$ and $N_4$ is equal to $(0.9 - 0.8) \cdot (1-0.6) \cdot 0.25 = 0.01$. We multiply all costs by $1000$ to meet the requirements of the solver (edge costs need to be integers); hence, the final cost is $p_{14}=10$; (3) between competence nodes $N_4 - N_7$ and the sink node $N_8$, with costs $p_{jw} = 0$ and capacities $u_{jw} = 1$ to impose that each competence is assigned.
Once the graph is built, we pass it to the solver to get the assignment, and we get $c_1$ and $c_2$ assigned to $a_1$, $c_3$ assigned to $a_2$ and $c_4$ assigned to $a_3$.
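To make the construction concrete, the following Python sketch encodes this example with the \texttt{SimpleMinCostFlow} solver of the or-tools library cited above (classic \texttt{pywrapgraph} interface; newer releases expose the same solver as \texttt{ortools.graph.python.min\_cost\_flow}). The arc costs are the ones we obtain from equation~\ref{costeq} scaled by $1000$, and we only connect an agent to the competences for which she declares a level; the variable names and output are our own illustration, not the exact SynTeam implementation.
\begin{verbatim}
from ortools.graph import pywrapgraph

smcf = pywrapgraph.SimpleMinCostFlow()
source, sink = 0, 8
agents, competences = [1, 2, 3], [4, 5, 6, 7]

for a in agents:                 # supply arcs: zero cost, capacity 2
    smcf.AddArcWithCapacityAndUnitCost(source, a, 2, 0)
for c in competences:            # demand arcs: zero cost, capacity 1
    smcf.AddArcWithCapacityAndUnitCost(c, sink, 1, 0)

# transportation arcs: costs from the cost equation above, scaled by
# 1000; a zero cost encodes an exact competence match (e.g. a3 on c4)
costs = {(1, 4): 10, (1, 5): 15, (2, 5): 60, (2, 6): 20,
         (3, 5): 30, (3, 7): 0}
for (a, c), p in costs.items():
    smcf.AddArcWithCapacityAndUnitCost(a, c, 1, p)

smcf.SetNodeSupply(source, 4)    # four competence requests to cover
smcf.SetNodeSupply(sink, -4)

if smcf.Solve() == smcf.OPTIMAL:
    for arc in range(smcf.NumArcs()):
        if smcf.Flow(arc) > 0 and smcf.Tail(arc) in agents:
            print("agent", smcf.Tail(arc), "covers", smcf.Head(arc))
\end{verbatim}
With these values the optimal flow recovers exactly the assignment stated above, at a total (scaled) cost of $45$.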
\subsection{SynTeam algorithm} \label{ssec:SynTeam}
Algorithm \ref{alg:teamDistribution} shows the SynTeam pseudocode; it is divided into two parts:
{\bf 1. \textsl{Find a first team partition}}. This part of the algorithm simply builds a partition by randomly assigning agents to teams of the required sizes. It goes as follows. Given a list of agents $A$, we start by shuffling the list so that the order of agents in the list is random (line~1). Next, we determine the quantitative distribution of individuals among teams of size $m$ using function $T(|A|,m)$, as defined in section \ref{ssec:dist} (line~2). We start from the top of the shuffled list of agents (line~3). For each number of teams (line~4), we define a temporary set $team$ to store the current team (line~5). We add to $team$ the subsequent $size$ agents from the shuffled list of agents (line~7). We add the newly created team to the team partition $P_{\mathit{best}}$ that we intend to build (line~10). When reaching line~14, $P_{\mathit{best}}$ contains a first team partition, i.e., a set of disjoint teams covering $A$.
{\bf 2. \textsl{Improve the current best team partition}}. The second part of the algorithm consists in improving the current best team partition. The idea is to obtain a better team partition by performing crossovers of two randomly selected teams to yield two better teams. In this part, we took inspiration from simulated annealing methods, where the algorithm might accept swaps that actually decrease the solution quality with a certain probability. The probability of accepting worse solutions slowly decreases as the algorithm explores the solution space (as the number of iterations increases). The annealing schedule is defined by the $\mathit{cooling\_rate}$ parameter. We have modified this method to store the partition with the highest synergistic evaluation found so far.
In detail, the second part works as follows. First, we select two random teams, $K_1$ and $K_2$, in the current team partition (line~15). Then we compute all team partitions of size $m$ with agents in $K_1 \cup K_2$ (line~19), and we select the best candidate team partition, named $P_{\mathit{bestCandidate}}$ (lines~19~to~26). If the best candidate synergistic utility is larger than the utility contribution of $K_1$ and $K_2$ to the current best partition $P_{\mathit{best}}$ (line~27), then we replace teams $K_1$ and $K_2$ by the teams in the best candidate team partition (line~28). If the best candidate team partition utility is lower, we check instead whether the probability of accepting a worse solution is higher than a value sampled uniformly from $[0,1]$ (line~29); one standard choice for this probability is sketched after the pseudocode. If so, we replace teams $K_1$ and $K_2$ by the teams in the best candidate team partition (line~30). At the end of every iteration we lower $\mathit{heat}$ by the cooling rate (line~36) and store the best partition found so far (line~34), to make sure we do not end up with a worse solution. This part of the algorithm continues until the value of $\mathit{heat}$ reaches $1$ (line~14). Finally, we return the best partition found, $P_{\mathit{bestEver}}$, as well as the assignment $\eta$ for each of its teams.
\begin{algorithm}[h]
\small
\caption{\quad SynTeam}
\label{alg:teamDistribution}
\begin{algorithmic}[1]
\Require $A$ \Comment{The list of agents}
\Require $T(|A|,m)$ \Comment{Quantitative team distribution}
\Require $P_{\mathit{best}} = \emptyset$ \Comment{Initialize best partition}
\Require $\mathit{heat=10}$ \Comment{Initial temperature for second step}
\Require $\mathit{Cooling\_rate}$ \Comment{Temperature decrease per iteration}
\Ensure $(P, \bm{\eta})$ \Comment{Best partition found and best assignments}
\State $\mathit{random.shuffle(A)}$
\If {$T(|A|,m) \ne (0,m)$}
\State $\mathit{index} = 0$ \Comment{Used to iterate over the agent list}
\ForAll{$(\mathit{numberOfTeams}, \mathit{size}) \in T(|A|,m)$}
\State $team = \emptyset$
\For {$i \in (0,\dots,\mathit{size}-1)$}
\State $team = team \cup A[\mathit{index}]$
\State $\mathit{index}=\mathit{index} + 1$
\EndFor
\State $P_{\mathit{best}} = P_{\mathit{best}} \cup \{team\}$
\EndFor
\State $\bm{ \eta_{\mathit{best}}} = \mathit{assign\_agents}(P_{\mathit{best}})$ \Comment{see Subsection \ref{ssec:asg}}
\State $(P_{\mathit{bestEver}}, \mathit{bestValueEver}) = (P_{\mathit{best}},u(P_{\mathit{best}},\bm{ \eta_{\mathit{best}}}))$
\While{$\mathit{heat} > 1$}
\State $(K_1,K_2) = \mathit{selectRandomTeams}(P_{\mathit{best}})$
\State $(\eta_1,\eta_2) = \mathit{assign\_agents}(\{K_1,K_2\})$
\State $\mathit{contrValue} = u(\{K_1,K_2\},(\eta_1,\eta_2))$
\State $(P_{\mathit{bestCandidate}}, \mathit{bestCandidateValue}) = (\emptyset,0)$
\ForAll {$P_{\mathit{candidate}} \in P_m(K_1 \cup K_2) \setminus \{K_1,K_2\}$}
\State $(\eta_1,\eta_2) = assign\_agents(P_{\mathit{candidate}})$
\State $\mathit{candidateValue} = u(P_{\mathit{candidate}},(\eta_1,\eta_2))$
\If{$\mathit{candidateValue} > \mathit{bestCandidateValue}$}
\State $P_{\mathit{bestCandidate}} = P_{\mathit{candidate}}$
\State $\mathit{bestCandidateValue} = \mathit{candidateValue}$
\EndIf
\EndFor
\If{$\mathit{bestCandidateValue} > \mathit{contrValue}$}
\State $P_{\mathit{best}} = replace(\{K_1,K_2\},P_{\mathit{bestCandidate}}, P_{\mathit{best}})$
\ElsIf{$\mathbb{P}(\mathit{bestCandidateValue}, \mathit{contrValue}, heat)$ \StatexIndent[2] $\geq \mathit{random}(0, 1)$}
\State $P_{\mathit{best}} = replace(\{K_1,K_2\},P_{\mathit{bestCandidate}},P_{\mathit{best}})$
\EndIf
\State $\bm{ \eta_{\mathit{best}}} = \mathit{assign\_agents}(P_{\mathit{best}})$
\If {$\mathit{bestValueEver} < u(P_{\mathit{best}},\bm{ \eta_{\mathit{best}}})$}
\State $P_{\mathit{bestEver}} = P_{\mathit{best}}$
\EndIf
\State $heat$ = $heat-\mathit{Cooling\_rate}$
\EndWhile
\State \Return $(P_{\mathit{bestEver}}, \mathit{assign\_agents}(P_{\mathit{bestEver}}))$
\EndIf
\end{algorithmic}
\end{algorithm}
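The acceptance probability $\mathbb{P}(\mathit{bestCandidateValue}, \mathit{contrValue}, \mathit{heat})$ in line~29 is left unspecified above; a standard Metropolis-style choice, which we sketch here purely for illustration (the function name is ours), always accepts improvements and accepts worse candidates with a probability that decays with the loss and grows with the temperature.
\begin{verbatim}
import math

# Metropolis-style acceptance probability (illustrative choice):
# always accept improvements; accept a worse candidate with probability
# exp(loss / heat), which shrinks as the temperature cools down.
def accept_probability(candidate_value, current_value, heat):
    if candidate_value > current_value:
        return 1.0
    return math.exp((candidate_value - current_value) / heat)
\end{verbatim}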
\vspace{-4mm}
\section{Experimental Results} \label{sec:results}
\subsection{Experimental Setting}
``Institut Torras i Bages'' is a state school near Barcelona. Collaborative work has been implemented there for the last 5 years in the final assignment (``Treball de S\'{\i}ntesi''), with a steady and significant increase in the scores and quality of the final product that students are asked to deliver. This assignment takes one week and is designed to check whether, and to what extent, students have achieved the objectives set in the various curricular areas. It encourages teamwork and research, and tests the students' relationship with their environment. Students work in teams and at the end of every activity present their work in front of a panel of teachers that assesses the content, presentation and cooperation between team members. This is a creative task, although it requires a high level of competence.
\subsection{Data Collection}
In current school practice, teachers group students according to their own manual method, based on their knowledge of the students, their competences, background and social situation. This year we have used our grouping system based only on personality (SynTeam\ with $\lambda = 0, \mu = 1$) on two groups of students: `3r ESO A' (24 students) and `3r ESO C' (24 students). Using computers and/or mobile phones, students answered the questionnaire (described in section \ref{pers}), which allowed us to divide them into teams of size three for each class. Tutors evaluated each team in each partition by giving an integer value $v \in \{1,\dots,10\}$ expressing their expectation of the team's performance.
Each student team was asked to undertake the set of interdisciplinary activities (``Treball de S\'{\i}ntesi'') described above. We collected each student's final mark for ``Treball de S\'{\i}ntesi'' as well as the final marks obtained for all subjects, that is: Catalan, Spanish, English, Nature, Physics and Chemistry, Social Science, Math, Physical Education, Plastic Arts, and Technology. We used a matrix provided by the tutors to relate each subject to the different kinds of intelligence (which in education are understood as competences) needed for that subject. There are eight types of human intelligence \cite{gardner1987theory}, each representing a different way of processing information: Naturalist, Interpersonal, Logical/Mathematical, Visual/Spatial, Body/Kinaesthetic, Musical, Intrapersonal and Verbal/Linguistic. The matrix relating each subject to each intelligence is shown in Figure \ref{matrix}.
\begin{figure}[h]
\centering
$\begin{bmatrix}
0 & 1 & 0 & 0 & 0 & 0 & 1 & 1
\\0 & 1 & 0 & 1 & 0 & 1 & 1 & 1
\\0 & 1 & 0 & 0 & 0 & 1 & 1 & 1
\\1 & 1 & 0 & 1 & 1 & 0 & 1 & 1
\\1 & 1 & 1 & 1 & 0 & 0 & 1 & 1
\\1 & 1 & 0 & 0 & 0 & 0 & 1 & 1
\\0 & 1 & 1 & 1 & 0 & 0 & 1 & 1
\\0 & 1 & 0 & 1 & 1 & 0 & 1 & 1
\\0 & 1 & 0 & 1 & 1 & 0 & 1 & 0
\\1 & 1 & 1 & 0 & 1 & 0 & 1 & 1
\end{bmatrix}$
\caption{Matrix matching intelligences with subjects (each row corresponds to a subject, each column to an intelligence)}
\label{matrix}
\end{figure}
\noindent Subjects are represented by rows and intelligences by columns of the matrix, in the order given above. Based on this matrix we calculate the value of each intelligence for every student by averaging the marks she obtained in the subjects relevant for that intelligence. For instance, for Body/Kinaesthetic intelligence, we calculate the average of the student's marks in Nature, Physical Education, Plastic Arts and Technology. An alternative way to measure students' competence levels is to calculate collective assessments of each competence (as proposed by \cite{andrejczukCompetences}).
Finally, having the competences (intelligences), personality and actual performance of all students, we are able to calculate synergistic values for each team. We also calculate the average of the marks obtained by the students in a team to get the team's performance value. The sketch below illustrates the intelligence computation.
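For illustration, the following Python sketch computes the eight intelligence values of a single student from her ten subject marks and the matrix of Figure~\ref{matrix}; the variable names and the example marks are our own.
\begin{verbatim}
import numpy as np

# 10 x 8 subject-intelligence matrix of the figure above (rows: the ten
# subjects in the order listed; columns: the eight intelligences).
M = np.array([[0,1,0,0,0,0,1,1],
              [0,1,0,1,0,1,1,1],
              [0,1,0,0,0,1,1,1],
              [1,1,0,1,1,0,1,1],
              [1,1,1,1,0,0,1,1],
              [1,1,0,0,0,0,1,1],
              [0,1,1,1,0,0,1,1],
              [0,1,0,1,1,0,1,1],
              [0,1,0,1,1,0,1,0],
              [1,1,1,0,1,0,1,1]])

marks = np.array([7, 6, 8, 5, 6, 7, 9, 8, 6, 7])  # hypothetical marks

# average of the marks in the subjects relevant to each intelligence
intelligences = (M.T @ marks) / M.sum(axis=0)
\end{verbatim}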
\subsection{Results}
\noindent
Given several team composition methods, we are interested in comparing them to know which method better predicts team performance. Hence, we generate several team rankings using the evaluation values obtained through the different methods. First, we generate a ranking based on actual team performance, which serves as the baseline against which the other rankings are compared. Second, we generate a ranking based on the expert evaluations. Finally, we generate several rankings based on the calculated synergistic values, with varying importance of congeniality and proficiency. Since ``Treball de S\'{\i}ntesi'' is a creative task, we want to examine the evaluation function with parameters $\mu > 0$ and $\lambda = 1-\mu$. In particular, we want to observe how the rankings change as the importance of competences increases.
Notice that the teacher and actual-performance rankings may include ties, since the pool of possible marks is discrete (ties are highly improbable in the case of SynTeam\ rankings). Therefore, before generating rankings based on synergistic values, we round them to two digits to discretize the evaluation space. An ordering with ties is also known as a \emph{partial ranking}.
Next, we compare the teacher and SynTeam\ rankings with the actual performance ranking using the standardized Kendall Tau distance. For implementation details, refer to the work by Fagin et al. \cite{Fagin:2004:CAR,fagin2006comparing}, which also provides sound mathematical principles to compare partial rankings; a sketch of such a tie-aware distance is given below. The results of the comparison are shown in Figure \ref{fig:kendall}. Notice that the lower the value of Kendall Tau, the more similar the rankings. We observe that the SynTeam\ ranking improves as the importance of competences increases, and it is best at predicting students' performance for $\lambda = 0.8$ and $\mu = 0.2$ (Kendall Tau equal to $0.15$). The standardized Kendall Tau distance for the teacher ranking is equal to $0.28$, which shows that SynTeam\ predicts performance better than the teachers when competences are included ($\lambda > 0.2$). We also calculate the values of Kendall Tau for random ($0.42$) and reversed ($0.9$) rankings to benchmark the teacher and SynTeam\ grouping methods. The results show that both teachers and SynTeam\ are better at predicting students' performance than the random method.
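For completeness, the following Python sketch shows one way to compute a standardized Kendall Tau distance between partial rankings, with penalty $p=1/2$ for pairs tied in exactly one ranking, in the spirit of the $K^{(p)}$ measure of Fagin et al.; the function and its input format (dictionaries mapping teams to ranks) are our own illustration, not the exact implementation used for the experiments.
\begin{verbatim}
from itertools import combinations

def kendall_tau_distance(r1, r2, p=0.5):
    """r1, r2: dicts mapping each team to its rank (ties allowed)."""
    items, penalty = list(r1), 0.0
    for i, j in combinations(items, 2):
        a, b = r1[i] - r1[j], r2[i] - r2[j]
        if a * b < 0:                 # pair ordered oppositely
            penalty += 1.0
        elif (a == 0) != (b == 0):    # tied in exactly one ranking
            penalty += p
    n = len(items)
    return penalty / (n * (n - 1) / 2)  # standardized to [0, 1]
\end{verbatim}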
\begin{figure}
\includegraphics[max size={\textwidth}{10.35cm}]{attach/KendallTauComparison.png}
\caption{Comparison of Kendall Tau distances between the different methods.}\vspace{-2mm}
\label{fig:kendall}
\vspace{-2mm}
\end{figure}
\section{Discussion} \label{sec:discuss}
In this paper we introduced SynTeam, an algorithm for partitioning groups of humans into competent, gender-balanced and psychologically balanced teams.
To our knowledge, SynTeam\ is the first computational model to build synergistic teams that not only work well together, but are also competent enough to perform an assignment requiring particular expertise.
We have decided to evaluate our algorithm in the context of a classroom. Besides the obvious advantages of observing students work in person, this scenario gave us an opportunity to compare our results with real-life, currently used practice. The results show that SynTeam\ is able to predict team performance better than the experts who know the students, their social background, competences, and cognitive capabilities.
The algorithm is potentially useful for any organisation that faces the need to optimise their problem solving teams (e.g. a classroom, a company, a research unit). The algorithm composes teams in a purely automatic way without consulting experts, which is a huge advantage for environments where there is a lack of experts.
Regarding future work, we would like to investigate how to determine quality guarantees for the algorithm.
Additionally, there is a need to consider richer and more sophisticated models to capture the various factors that influence the team composition process in the real world. We will consider how our problem relates to the constrained coalition formation framework \cite{Rahwan}. This may help add constraints and preferences coming from experts that cannot be established by any algorithm, e.g.\ Anna cannot be in the same team as Jos\'e, as they used to have a romantic relationship.
\newpage
\bibliographystyle{plain}
| {'timestamp': '2017-02-28T02:11:13', 'yymm': '1702', 'arxiv_id': '1702.08222', 'language': 'en', 'url': 'https://arxiv.org/abs/1702.08222'} |
\section{Introduction}
For more than three decades, understanding the mechanism of superconductivity observed at high critical temperature (HTC) in
strongly correlated cuprates~\cite{LaCuO2_Bednorz_86} has been the ``holy grail''
of many theoretical and experimental condensed matter researchers.
In this context, the observation of superconductivity in
nickelates $Ln$NiO$_2$, $Ln$=\{La, Nd and Pr\} ~\cite{li_superconductivity_2019,osada_superconducting_2020,osada_nickelate_2021} upon doping with holes is remarkable.
These superconducting nickelates are isostructural as well as isoelectronic to
HTC cuprate superconductors and thus enable the comparison of
the essential physical features that may be playing a crucial role in the mechanism driving superconductivity.
The $Ln$NiO$_2$ family of compounds is synthesized in the so-called infinite-layer structure, where NiO$_2$ and $Ln$ layers are stacked alternately~\cite{li_superconductivity_2019}.
The NiO$_2$ planes are identical to the CuO$_2$ planes in HTC cuprates which host much of the physics leading to superconductivity~\cite{keimer_quantum_2015}.
A simple valence counting for these nickelates reveals a {1+} oxidation state for Ni ({2-} for O and {3+} for $Ln$), with 9 electrons in the $3d$ manifold.
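Explicitly, charge neutrality of the $Ln$NiO$_2$ formula unit fixes the Ni oxidation state,
\begin{equation*}
q_{Ln} + q_{\mathrm{Ni}} + 2\, q_{\mathrm{O}} = 0
\;\;\Rightarrow\;\;
q_{\mathrm{Ni}} = -(+3) - 2 \times (-2) = +1 ,
\end{equation*}
and the Ni$^{1+}$ ion, having lost its $4s$ electron upon ionization, is left in the [Ar]$3d^9$ configuration.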
In the cuprates, the Cu$^{2+}$ oxidation state gives rise to the same $3d^9$ electronic configuration.
Contrary to many nickel oxides where the Ni atom sits in an octahedral cage of oxygens, in the infinite-layered structure, square planar NiO$_4$ plaques are formed without the apical oxygens.
The crystal field due to square-planar oxygen coordination stabilizes the $d_{z^2}$ orbital of the $e_g$ manifold, making its energy close to the $t_{2g}$ orbitals (the $3d$ orbitals split to 3-fold $t_{2g}$ and 2-fold $e_g$ sub-shells in an octahedral environment). With $d^9$ occupation, a half-filled $d_{x^2-y^2}$-orbital system is realized as in cuprates.
In fact, recent resonant inelastic X-ray scattering (RIXS) experiments~\cite{rossi2020orbital} as well as the {\it ab initio} correlated multiplet calculations~\cite{katukuri_electronic_2020} confirm that the Ni$^{1+}$ $d$-$d$ excitations in NdNiO$_2$\ are similar to the Cu$ ^{2+} $ ions in cuprates~\cite{moretti_sala_energy_2011}.
Several electronic structure calculations based on density-functional theory (DFT) have shown that in monovalent nickelates the Ni 3$d_{x^2-y^2}$ states sit at the Fermi level~\cite{lee_infinite-layer_2004,liu_electronic_njpqm_2020,zhang_effective_prl_2020}.
These calculations further show that the nickelates are closer to the Mott-Hubbard insulating limit, with a decreased Ni $3d$--O $2p$ hybridization compared to cuprates.
The latter are considered to be charge-transfer insulators~\cite{zsa_mott_charge_transfer_1985}, where excitations across the electronic band gap involve O $2p$ to Cu $3d$ electron transfer.
Correlated wavefunction-based calculations~\cite{katukuri_electronic_2020} indeed find that the contribution from the O $2p$ hole configuration to the ground state wavefunction in NdNiO$_2$\ is four times smaller than in the cuprate analogue CaCuO$_2$.
X-ray absorption and photoemission spectroscopy experiments~\cite{hepting2020a,goodge-a} confirm the Mott behavior of nickelates.
In the cuprate charge-transfer insulators, the strong hybridization of the Cu 3$d_{x^2-y^2}$\ and O $2p$ orbitals result in O $2p$ dominated bonding and Cu 3$d_{x^2-y^2}$\ -like antibonding orbitals. As a consequence, the doped holes primarily reside on the bonding O $2p$ orbitals, making them singly occupied.
The unpaired electrons on the Cu $d_{x^2-y^2}$\ and the O $2p$ are coupled antiferromagnetically resulting in the famous Zhang-Rice (ZR) spin singlet state~\cite{zhang_effective_1988}.
In the monovalent nickelates, it is unclear where the doped holes reside. Do they form a ZR singlet as in cuprates? If the holes instead reside on the Ni site, do they form a local high-spin triplet, with two singly occupied Ni $3d$ orbitals aligned ferromagnetically, or a low-spin singlet, with either both holes residing in the Ni 3$d_{x^2-y^2}$ orbital or two singly occupied Ni $3d$ orbitals aligned antiparallel?
While Ni L-edge XAS and RIXS measurements~\cite{rossi2020orbital} conclude that an orbitally polarized singlet state is predominant, where doped holes reside on the Ni 3$d_{x^2-y^2}$\ orbital, O K-edge electron energy loss spectroscopy~\cite{goodge-a} reveal that some of the holes also reside on the O $2p$ orbitals.
On the other hand, calculations based on multi-band $d-p$ Hubbard models show that the fate of the doped holes is determined by a subtle interplay of Ni onsite ($U_{dd}$), Ni $d$ - O $2p$ inter-site ($U_{dp}$) Coulomb interactions and the Hund's coupling along with the charge transfer gap~\cite{jiang_critical_prl_2020,Plienbumrung_condmat_2021}.
However, with the lack of extensive experimental data, it is difficult to identify the appropriate interaction parameters for a model Hamiltonian study, let alone identifying the model that best describes the physics of superconducting nickelates.
Despite the efforts to discern the similarities and differences between the monovalent nickelates and superconducting cuprates, there is no clear understanding on the nature of doped holes in NdNiO$_2$.
Particularly, there is no reliable parameter-free \textit{ab initio} analysis of the hole-doped situation.
In this work, we investigate the hole-doped ground state in NdNiO$_2$\ and draw parallels to the hole doped ground state of cuprate analogue CaCuO$_2$.
We use fully {\it ab initio} many-body wavefunction-based quantum chemistry methodology
to compute the ground state wavefunctions for the hole doped NdNiO$_2$\ and CaCuO$_2$.
We find that the doped hole in NdNiO$_2$ mainly localizes on the Ni 3$d_{x^2-y^2}$\ orbital to form a closed-shell singlet, and this singlet configuration contributes to $\sim$40\% of the wavefunction.
In contrast, in CaCuO$_2$ the Zhang-Rice singlet configurations contribute to $\sim$65\% of the wavefunction.
The persistent dynamic radial-type correlations within the Ni $d$ manifold result in stronger $d^8$ multiplet effects than in CaCuO$_2$,
and consequently the additional hole footprint is more three-dimensional in NdNiO$_2$.
Our analysis shows that the three-band Hubbard model most commonly used to describe the doped scenario in cuprates represents $\sim$90\% of the $d^8$ wavefunction for CaCuO$_2$, but grossly approximates the $d^8$ wavefunction for NdNiO$_2$, as it stands for only $\sim$60\% of the wavefunction.
In what follows, we first describe the computational methodology we employ in this work where we highlight the novel features of the methods and provide all the computational details.
We then present the results of our calculations and conclude with a discussion.
\section{The wavefunction quantum chemistry method}
{\it Ab initio} configuration interaction (CI) wavefunction-based quantum chemistry methods, particularly
the post Hartree-Fock (HF) complete active space self-consistent field (CASSCF) and the multireference perturbation theory (MRPT), are employed.
These methods not only facilitate the systematic inclusion of electron correlations, but also make it possible to quantify different types of correlation, static vs.\ dynamic~\cite{helgaker_molecular_2000}.
Unlike other many-body methods, these calculations do not use any \textit{ad hoc} parameters to incorporate electron-electron interactions; instead, all interactions are computed fully {\it ab initio} from the kinetic and Coulomb integrals.
Such \textit{ab initio} calculations provide techniques to systematically analyze electron correlation effects and offer insights into the electronic structure of correlated solids that go substantially beyond standard DFT approaches, e.g., see Ref.~\cite{Munoz_afm_htc_qc_prl_2000,CuO2_dd_hozoi11,book_Liviu_Fulde,Bogdanov_Ti_12,katukuri_electronic_2020} for the $ 3d $ TM oxides and Ref.~\cite{katukuri_PRB_2012,Os227_bogdanov_12,213_rixs_gretarsson_2012,Katukuri_ba214_prx_2014,Katukuri_njp_2014} for $ 5d $ compounds.
\subsection{Embedded cluster approach}
Since strong electronic correlations are short-ranged in nature \cite{fulde_new_book}, a local approach for the calculation of the $N$- and $(N\pm1)$-electron wavefunctions is a very attractive option for transition metal compounds.
In the embedded cluster approach, a finite set of atoms, we call quantum cluster (QC), is cut out from the infinite solid and many-body quantum chemistry methods are used to calculate the electronic structure of the atoms within the QC.
The cluster is ``embedded'' in a potential that accounts for the part of the crystal that is not treated explicitly.
In this work, we represent the embedding potential with an array of point charges (PCs) at the lattice positions that are fitted to reproduce the Madelung crystal field in the cluster region~\cite{ewald}.
Such procedure enables the use of quantum chemistry calculations for solids involving transition-metal or lanthanide ions, see Refs.~\cite{katukuri_ab_2012,katukuri_electronic_2014,babkevich_magnetic_2016}.
\subsection{Complete active space self-consistent field}
The CASSCF method~\cite{book_QC_00} is a specific type of multi-configurational (MC) self-consistent field technique in which the CI wavefunction is expanded in a complete set of Slater determinants or configuration state functions (CSFs) defined in a constrained orbital space, called the active space.
In the CASSCF(n,m) approach, a subset of $n$ active electrons are
fully correlated among an active set of $m$ orbitals, leading to a highly multi-configurational (CAS) reference wavefunction.
CASSCF method with a properly chosen active space guarantees a qualitatively correct wavefunction for strongly correlated systems where static correlation~\cite{book_QC_00} effects are taken into account.
%
We consider active spaces as large as CAS(24,30) in this work.
Because the conventional CASSCF implementations based on deterministic CI space (the Hilbert space of all possible configurations within in the active space) solvers are limited to active spaces of 18 active electrons in 18 orbitals,
we use the full configuration interaction quantum Monte Carlo (FCIQMC)~\cite{booth_fermion_2009,cleland_survival_2010,guther_neci_2020} and density matrix renormalization group (DMRG) theory~\cite{chan_density_2011,sharma_spin-adapted_2012} algorithms to solve the eigenvalue problem defined within the active space.
\subsection{Multireference perturbation theory}
While the CASSCF calculation provides a qualitatively correct wavefunction, for a quantitative description of a strongly correlated system, dynamic correlations~\cite{book_QC_00} (contributions to the wavefunction from those configurations related to excitations from inactive to active and virtual, and active to virtual orbitals) are also important and must be accounted for.
A natural choice is variational multireference CI (MRCI) approach where the CI wavefunction is extended with excitations involving orbitals that are doubly occupied and empty in the reference CASSCF wavefunction \cite{book_QC_00}.
An alternative and computationally less demanding approach to take into account dynamic correlations is based on perturbation theory in second- and higher-orders.
In multireference perturbation theory (MRPT), an MC zeroth-order wavefunction is employed, and excitations to the virtual space are accounted for by means of perturbation theory.
If the initial choice of the MC wavefunction is good enough to capture the large part of the correlation energy, then the perturbation corrections are typically small.
The most common variants of MRPT are the complete active space second-order perturbation theory (CASPT2)~\cite{anderson_caspt2_1992} and the $n$-electron valence second-order perturbation theory (NEVPT2)~\cite{angeli_nevpt2_2001}, which differ in the type of zeroth-order Hamiltonian $H_0$ employed.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.450\textwidth]{fig1.pdf}
\caption{Quantum cluster of five NiO$_4$ (a) and CuO$_4$ (b) plaques considered in our calculations. The point-charge embedding is not shown.
The symmetry-adapted localized 3$d_{x^2-y^2}$\ and oxygen Zhang-Rice-like 2$p$ orbitals, the basis in which the wavefunction in Table~\ref{wfn} is presented, are shown in yellow and green. }
\label{fig1}
\end{center}
\end{figure}
\section{The {\em ab initio} model}
Before we describe the {\em ab initio} model we consider, let us summarize the widely used and prominent model Hamiltonian to study the nature of doped hole in HTC cuprates and also employed for monovalent nickelates lately.
It is the three-band Hubbard model~\cite{emery_3b_hubbard_prl_1987} with
three orbital degrees of freedom (bands) which include the $d$ orbital of Cu with $x^2-y^2$ symmetry and the in-plane oxygen $p$ orbitals aligned in the direction of the nearest Cu neighbours.
These belong to the $b_1$ irreducible representation (irrep) of the $D_{4h}$ point group symmetry realized at the Cu site of the CuO$_4$ plaque, the other Cu $d$ orbitals belong to $a_1$ ($d_{z^2}$), $b_2$ ($d_{xy}$) and $e$ ($d_{xz,yz}$) irreps.
The parameters in this Hamiltonian include the most relevant hopping and Coulomb interactions within this set of orbitals.
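Schematically, and in our own condensed notation (individual studies differ in which terms they retain), the three-band Hamiltonian has the form
\begin{equation*}
H = \varepsilon_d \sum_{i\sigma} n^{d}_{i\sigma}
+ \varepsilon_p \sum_{j\sigma} n^{p}_{j\sigma}
+ t_{pd} \sum_{\langle ij \rangle \sigma} \big( d^{\dagger}_{i\sigma} p^{\phantom{\dagger}}_{j\sigma} + \mathrm{h.c.} \big)
+ U_{dd} \sum_{i} n^{d}_{i\uparrow} n^{d}_{i\downarrow}
+ U_{pp} \sum_{j} n^{p}_{j\uparrow} n^{p}_{j\downarrow}
+ U_{dp} \sum_{\langle ij \rangle} n^{d}_{i} n^{p}_{j} ,
\end{equation*}
where $d^{\dagger}_{i\sigma}$ and $p^{\dagger}_{j\sigma}$ create carriers in the Cu $d_{x^2-y^2}$ and O $2p$ orbitals, respectively; an O--O hopping $t_{pp}$ is often added as well.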
More recently, the role of the Cu $3d$ multiplet structure on the hole doped ground state is also studied~\cite{jiang_cuprates_prb_2020}.
While this model explains certain experimental observations, there is still considerable debate on what the minimal model describing the low-energy physics of doped cuprates is.
Nevertheless, this model has also been employed to investigate the character of the doped hole in monovalent nickelates~\cite{jiang_critical_prl_2020,Plienbumrung_condmat_2021, Plienbumrung_prb_2021}.
Within the embedded cluster approach described earlier,
we consider a QC of five NiO$_4$ (CuO$_4$) plaques, which includes five Ni (Cu) atoms, 16 oxygens and 8 Nd (Ca) atoms. The 10 Ni (Cu) ions neighbouring the cluster are also included in the QC; however, these are described by total ion potentials (TIPs).
The QC is embedded in point charges that reproduce the electrostatic field of the solid environment.
We used the crystal structure parameters for the thin film samples reported in Ref.~\cite{li_superconductivity_2019,hayward_synthesis_2003,kobayashi_compounds_1997,karpinski_single_1994}.
We used all-electron atomic natural orbital (ANO)-L basis sets of triple-$\zeta$ quality with additional polarization functions -- [$7s6p4d2f1g$] for Ni (Cu)~\cite{roos_new_2005}
and [$4s3p2d1f$] for oxygens~\cite{roos_main_2004}.
For the eight Nd (Ca) atoms large core effective potentials~\cite{dolg_energy-adjusted_1989,dolg_combination_1993,kaupp_pseudopotential_1991} and associated [$3s2p2d$] basis functions were used.
In the case of Nd, the $f$-electrons were incorporated in the core.
Cu$^{1+}$ (Zn$^{2+}$) total ion potentials (TIPs) with [$2s1p$] functions were used for the 10 Ni$^{1+}$ (Cu$^{2+}$) ions neighbouring the QC~\cite{ingelmann_thesis}.\footnote{Energy-consistent pseudopotentials of the Stuttgart/Cologne group, \url{http://www.tc.uni-koeln.de/cgi-bin/pp.pl?language=en,format=molpro,element=Zn,job=getecp,ecp=ECP28SDF}, [Accessed: 15-Sept-2021]}
\begin{table}[!t]
\caption{The different active spaces (CAS) considered in this work.
NEL is number of active electrons and NORB is the number of active orbitals.
The numbers in parentheses indicate the orbital numbers in Fig.~\ref{activespace_orb}.
}
\label{activespaces}
\begin{center}
\begin{tabular}{lcc}
\hline
\hline\\
CAS & NEL & NORB \\
\hline\\
CAS-1 & 18 & 24 (1-24) \\
CAS-2 & 24 & 30 (1-30) \\
CAS-3\footnote{The four neighbouring Ni$^{1+}$ (Cu$^{2+}$) ions in the quantum cluster are treated as closed shell Cu$^{1+}$ (Zn$^{2+}$) ions.}
& 12 & 14 (1, 6, 11, 16 and 21-30) \\
\hline
\hline
\end{tabular}
\end{center}
\end{table}
To investigate the role of different interactions in the $d^8$ ground state,
two different active spaces were considered.
In the first active space, CAS-1 in Table~\ref{activespaces}, only the orbitals in the $b_1$ and $a_1$ irreps are active.
These are $d_{x^2-y^2}$ and $d_{z^2}$-like orbitals respectively, and the corresponding double-shell $4d$ orbitals of each of the five Ni (Cu) atoms.
CAS-1 also contains the symmetry-adapted ZR-like composite O 2$p$ and the double-shell 3$p$-like orbitals, numbers 1-20 and 21-24 in Fig.~\ref{activespace_orb}.
At the mean-field HF level of theory, there are 18 electrons within this set of orbitals, resulting in the CAS(18,24) active space of Table~\ref{activespaces}.
In the second active space, CAS-2, orbitals of $b_2$ and the $e$ irreps from the central Ni (Cu) $d$ manifold are also included.
These are the 3$d_{xy}$, 3$d_{xz,yz}$-like orbitals and the corresponding $4d$ orbitals and the six electrons, numbers 25-30 in Fig.~\ref{activespace_orb}, resulting in a CAS(24,30) active space.
The latter active space takes into account the $d^8$ multiplet effects within the $3d$ manifold explicitly.
The two active spaces considered in this work not only describe all the physical effects included in the above mentioned three-band Hubbard model but go beyond.
More importantly, we do not have any \textit{ad hoc} input parameters in the calculation, as
all the physical interactions are implicitly included in the {\it ab initio} Hamiltonian describing the actual scenario in the real materials.
We employed {\sc OpenMolcas}~\cite{fdez_galvan_openmolcas_2019} quantum chemistry package for all the calculations.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.480\textwidth]{cas_orbitals.pdf}
\caption{Active orbital basis used in the CASSCF calculations,
plotted using Jmol~\cite{jmol}.
}
\label{activespace_orb}
\end{center}
\end{figure}
\section{Results}
\subsection{Ground state of the \boldmath${d^8}$ configuration}
Starting from the electronic structure of the parent compounds, where each Ni (Cu) is in the $d^9$ configuration, we compute the electron-removal (in the photoemission terminology) $d^8$ state to investigate the hole-doped quasiparticle state.
Since the parent compounds in the $d^9$ configuration have strong nearest-neighbour antiferromagnetic (AF) correlations~\cite{katukuri_electronic_2020}, the total spin of our QC in the undoped case, with five Ni (Cu) sites, is $S_{QC}=3/2$ in the AF ground state.
By introducing an additional hole (or removing an electron) from the central Ni (Cu) in our QC, the $S_{QC}$ values range from 0 to 3.
To simplify the analysis of the distribution of the additional hole, we keep the spins on the four neighbouring Ni (Cu) sites aligned parallel in all our calculations, and from now on we only specify the spin multiplicity of the central Ni (Cu)O$_4$ plaque.
The multiplet structure of the $d^8$ configuration thus consists of only spin singlet and triplet states, spanned by the four irreps of the $3d$ manifold.
The active spaces we consider in this work allow us to compute accurately the excitations only within the $b_1$ and $a_1$ irreps
\footnote{For an accurate quantitative description of the multiplet structure spanned by the other two irreps, $b_2$ and $e$, one would need to extend the active space and include the $3d$ and $4d$ manifolds of the four neighbouring Ni (Cu) atoms as well as the O 2$p$ orbitals of the same symmetry, resulting in a gigantic active space of 68 electrons in 74 orbitals.}
and we address the full multiplet structure elsewhere.
When computing the local excitations, a local singlet state on the central Ni (Cu) corresponds to a total spin on the cluster $S_{QC}=2$.
However, a local triplet state, with the central spin aligned parallel to the neighbouring spins, corresponds to $S_{QC}=3$ and does not satisfy the AF correlations.
To avoid the spin coupling between the central $d^8$ Ni (Cu) and the neighbouring $d^9$ Ni (Cu) ions, we replace the latter with closed-shell Cu (Zn) $d^{10}$ ions and freeze them at the mean-field HF level.
Such a simplification is justified, as the local excitation energy we compute is an order of magnitude larger than the exchange interaction~\cite{katukuri_electronic_2020}.
%
In Table \ref{d8-excit}, the relative energies of the lowest local spin singlets $^1\!A_{1g}$, $^1\!B_{1g}$ and spin triplet $^3\!B_{1g}$ states are shown.
These are obtained from CASSCF + CASPT2 calculations with the CAS(12,14) active space (CAS-3 in Table~\ref{activespaces}), which includes the 3$d$ and $4d$ orbitals of the central Ni (Cu) ion and the in-plane O 2$p$ and 3$p$ orbitals in the $b_1$ irrep.
In the CASPT2 calculation, the remaining doubly occupied O $2p$, the central Ni (Cu) $3s$ and $3p$ orbitals and all the unoccupied virtual orbitals are correlated.
\begin{table}[!t]
\caption{Relative energies (in eV) of the electron removal $d^8$ states in NdNiO$_2$\ and the iso-structural CaCuO$_2$\ obtained from CAS(12,14)SCF and CASSCF+CASPT2 calculations.
}
\label{d8-excit}
\begin{center}
\begin{tabular}{lccccl}
\hline
\hline
State & \multicolumn{2}{c}{NdNiO$ _{2} $} & \multicolumn{2}{c}{CaCuO$ _{2} $} \\
& CASSCF & +CASPT2 & CASSCF & +CASPT2 \\
\hline
$^1\!A_{1g}$ & 0.00 & 0.00 & 0.00 & 0.00 \\
$^3\!B_{1g}$ & 1.35 & 1.88 & 2.26 & 2.50 \\
$^1\!B_{1g}$ & 2.98 & 3.24 & 3.21 & 3.33 \\
\hline
\hline
\end{tabular}
\end{center}
\end{table}
It can be seen that the ground state is of $^1\!A_{1g}$ symmetry and the lowest triplet excited state, with $^3\!B_{1g}$ symmetry, is around 1.88 eV and 2.5 eV for NdNiO$_2$\ and CaCuO$_2$\ respectively.
The AF magnetic exchange in these two compounds is 76 meV and 208 meV, respectively~\cite{katukuri_electronic_2020}; we thus expect that our simplification of making the neighbouring $d^9$ ions closed-shell does not over- or underestimate the excitation energies.
At the CASSCF level, the $^1\!A_{1g}$-$^3\!B_{1g}$ excitation energy is 1.35 eV in NdNiO$_2$\ while it is 2.26 eV in CaCuO$_2$.
Interestingly, upon inclusion of dynamical correlations via the CASPT2 calculation, the $^1\!A_{1g}$ state in NdNiO$_2$\ is stabilized by 0.53 eV relative to the $^3\!B_{1g}$ state.
However, in CaCuO$_2$, the $^1\!A_{1g}$ state is stabilized by only 0.24 eV.
This indicates that the dynamical correlations are more active in the $^1\!A_{1g}$ state in NdNiO$_2$\ than in CaCuO$_2$.
We note that the hole excitations within the $3d$ orbitals in the irreps $b_2$ and $e$, calculated with this limited active space (CAS-3), result in energies lower than those of the $^3\!B_{1g}$ and $^1\!B_{1g}$ states.
However, an accurate description of those states requires an enlarged active space that includes not only the same symmetry oxygen 2$p$ and $3p$ orbitals from the central NiO$_4$ plaque but also the 3$d$, 4$d$ manifold of the neighbouring Ni (Cu) ions, making the active space prohibitively large.
Here, we concentrate on the analysis of the $^1\!A_{1g}$ ground state and address the complete $d^8$ multiplet spectrum elsewhere.
\begin{table}[!b]
\caption{
Ni and Cu $3d^8$ $^1\!A_{1g}$ ground state wavefunction: Weights (\%) of the leading configurations
in the wavefunction computed for NdNiO$_2$\ and CaCuO$_2$\ with active spaces CAS-1 and CAS-2 (see Table~\ref{activespaces}).
$d_{b_1}$ and $p_{b_1}$ are the localized Ni (Cu) $3d_{x^2-y^2}$ and the oxygen $2p$ ZR-like orbitals (see Fig.~\ref{fig1}) in the $b_1$ irrep respectively.
Arrows in the superscript indicate the spin of the electrons and a $\square$ indicates two holes.
}
\begin{center}
\begin{tabular}{l llll}
\hline
\hline\\[-0.30cm]
& \multicolumn{2}{c}{NdNiO$ _{2} $} & \multicolumn{2}{c}{CaCuO$ _{2} $} \\
$^1\!A_{1g}$ & CAS-1 & CAS-2 & CAS-1 & CAS-2 \\
\hline
\\[-0.20cm]
$|d_{b_{1}}^\square p_{b_{1}}^{\uparrow \downarrow} \rangle$ & 51.87 & 42.40 & 4.20 & 20.25 \\[0.3cm]
$|d_{b_{1}}^{\uparrow}p_{b_{1}}^{\downarrow} \rangle$ & 8.27 & 10.48 & 42.58 & 38.52 \\[0.3cm]
$|d_{b_{1}}^{\downarrow}p_{b_{1}}^{\uparrow} \rangle$ & 6.07 & 7.60 & 25.00 & 25.60 \\[0.3cm]
$|d_{b_{1}}^{\uparrow \downarrow}p_{b_{1}}^\square \rangle$ & 0.09 & 0.23 & 21.56 & 5.14 \\[0.3cm]
\hline
\hline
\end{tabular}
\end{center}
\label{wfn}
\end{table}
\subsection{Wavefunction of the electron-removal \boldmath$d^8$ ground state}
The $^1\!A_{1g}$ ground wavefunction in terms of
the weights of the four leading configurations (in the case of CaCuO$_2$) is shown in Table~\ref{wfn}.
The wavefunctions corresponding to the CASSCF calculations with the active spaces CAS-1 and CAS-2 are shown.
The basis in which the wavefunctions are represented is constructed in two steps:
1) A set of natural orbitals are generated by diagonalising the CASSCF one-body reduced density matrix.
2) To obtain a set of atomic-like symmetry-adapted localized orbital basis, we localize the Ni (Cu) $3d$ and O $2p$ orbitals on the central NiO$_4$ (CuO$_4$) plaque through a unitary transformation.
Such partial localization within the active space keeps the total energy unchanged.
The resulting 3$d_{x^2-y^2}$\ and the ZR-like oxygen 2$p$ orbital basis is shown in Fig~\ref{fig1}.
FCIQMC calculation was performed in this partial localized basis to obtain the wavefunction as a linear combination of Slater determinants.
10 million walkers were used to converge the FCIQMC energy to within 0.1 mHartree.
From Table~\ref{wfn} it can be seen that the electron-removal $d^8$ ground state wavefunction for the two compounds is mostly described by the four configurations spanned by the localized 3$d_{x^2-y^2}$\ ($d_{b_1}$) and the symmetry-adapted ZR-like oxygen 2$p$ ($p_{b_1}$) orbitals that are shown in Fig.~\ref{fig1}.
Let us first discuss the wavefunction obtained with the CAS-1 active space.
For NdNiO$_2$, the dominant configuration involves two holes on 3$d_{x^2-y^2}$, $|d_{b_{1}}^\square p_{b_{1}}^{\uparrow \downarrow} \rangle$, and contributes to $\sim$52\% of the wavefunction,
while the configurations that make up the ZR singlet, $|d_{b_{1}}^{\uparrow}p_{b_{1}}^{\downarrow} \rangle$ and $|d_{b_{1}}^{\downarrow}p_{b_{1}}^{\uparrow} \rangle$, contributes to only $\sim$14\%.
On the other hand, the $d^8$ $^1\!A_{1g}$ state in CaCuO$_2$\ is predominantly the ZR singlet with $\sim$68\% weight.
In the CASSCF calculation with CAS-2 active space, where all the electrons in the 3$d$ manifold are explicitly correlated,
we find that the character of the wavefunction remains unchanged in NdNiO$_2$, although the weight of the dominant configurations is slightly reduced.
On the other hand, in CaCuO$_2$, while the contribution from the ZR singlet is slightly reduced, the contribution from $|d_{b_{1}}^\square p_{b_{1}}^{\uparrow \downarrow} \rangle$ configuration is dramatically increased at the expense of the weight on
$|d_{b_{1}}^{\uparrow \downarrow}p_{b_{1}}^\square \rangle$.
This demonstrates that the additional freedom provided by the $d_{xy}$ and $d_{xz/yz}$ orbitals for the electron correlation helps to accommodate the additional hole on the Cu ion.
We note that the four configurations shown in Table~\ref{wfn} encompass almost 90\% of the $d^8$ wavefunction (with CAS-2 active space) in CaCuO$_2$.
Thus, the use of a three-band Hubbard model~\cite{emery_3b_hubbard_prl_1987,jiang_cuprates_prb_2020} to investigate the role of doped holes in CuO$_2$ planes is a reasonable choice.
However, for NdNiO$_2$\ these configurations cover only 60\% of the $d^8$ wavefunction, hence a three-band Hubbard model is too simple to describe the hole-doped monovalent nickelates.
A more intuitive and visual understanding of the distribution of the additional hole can be obtained by plotting the difference of the $d^8$ and the $d^9$ ground state electron densities as shown in Fig.~\ref{fig2}.
The electron density of a multi-configurational state can be computed as a sum of the natural-orbital densities weighted by the corresponding (well-defined) occupation numbers.
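In formulas, denoting by $n_p$ the occupation number of natural orbital $\phi_p$,
\begin{equation*}
\rho(\mathbf{r}) = \sum_p n_p \, |\phi_p(\mathbf{r})|^2 ,
\qquad
\Delta\rho(\mathbf{r}) = \rho_{d^8}(\mathbf{r}) - \rho_{d^9}(\mathbf{r}) .
\end{equation*}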
We used the Multiwfn program \cite{Multiwfn} to perform this summation.
The negative values of the heat map of the electron density difference (blue color) and the positive values (in red) represent respectively the extra hole density and additional electron density in $d^8$ state compared to the $d^9$ state.
From Fig.~\ref{fig2}(a)/(c) that show the density difference in the NiO$_2$/CuO$_2$ planes (xy-plane), we conclude the following:
\begin{enumerate}
\item The hole density is concentrated on the Ni site (darker blue) with $b_1$ ($d_{x^2-y^2}$) symmetry in NdNiO$_2$\ whereas
it is distributed evenly on the four oxygen and the central Cu ions with $b_1$ symmetry in CaCuO$_2$, a result consistent with the wavefunction reported in Table~\ref{wfn}.
\item In NdNiO$_2$, the hole density is spread out around the Ni ion with larger radius, and otherwise in CaCuO$_2$.
This demonstrates that the $3d$ manifold in Cu is much more localized than in Ni and therefore the onsite Coulomb repulsion $U$ is comparatively smaller for Ni.
\item The darker red regions around the Ni site in NdNiO$_2$\ indicate stronger $d^8$ multiplet effects that result in rearrangement of electron density compared to $d^9$ configuration.
\item In CaCuO$_2$, we see darker red regions on the oxygen ions instead, which shows that the significant presence of a hole on these ions results in noticeable electron redistribution.
\end{enumerate}
The electron density difference in the xz-plane (which is perpendicular to the NiO$_2$/CuO$_2$ planes) is quite different in the two compounds.
The hole density in NdNiO$_2$\ is spread out up to 2\,\AA\ in the $z$-direction, unlike in CaCuO$_2$, where it is confined to within 1\,\AA.
We attribute this to the strong radial-type correlations in NdNiO$_2$.
With the creation of additional hole on the 3$d_{x^2-y^2}$\ orbital, the electron density which is spread out in the $d_{z^2}$\ symmetry via the dynamical correlation between 3$d_{z^2}$\ and 4$d_{z^2}$\ orbitals~\cite{katukuri_electronic_2020}, becomes more compact in the $d_{z^2}$\ symmetry through the reverse breathing.
Thus, we see a strong red region with 3$d_{z^2}$\ profile and a blue region with expanded 4$d_{z^2}$\ profile.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.48\textwidth]{Density_difference_2.pdf}
\caption{Electron density difference of the $d^8$ and $d^9$ ground states ($\rho(d^8) - \rho(d^9)$) for NdNiO$_2$\ in the xy-plane (a) and xz-plane (b), and for CaCuO$_2$\ xy-plane (c) and xz-plane (d).
The coordinates of the central Ni (Cu) $d^8$ ion are set to (0,0). The scale of the heat-bar is logarithmic between $\pm$0.001 to $\pm$1.0 and is linear between 0 and $\pm$0.001.
(e) Electron density difference integrated over a sphere centered on the central Ni (Cu) atom (full curves) as a function of the radius $r$ shown in (a).
The result of an additional radial integration (dashed curves) as a function of the upper integration limit.}
\label{fig2}
\end{center}
\end{figure}
To obtain a quantitative understanding of the charge density differences for the two compounds, in Fig.~\ref{fig2}(e) we plot the electron density difference integrated over a sphere centered on the central Ni(Cu) atom as a function of the radius $r$ shown in Fig.~\ref{fig2}(a).
Four features, which we marked A-D, clearly demonstrate the contrast in the charge density differences in the two compounds.
From the feature A at $r$ close to Ni (Cu), it is evident that
the extent of hole density around Ni in NdNiO$_2$\ is larger than around Cu in CaCuO$_2$.
The features B and C that are on either side of the position of oxygen ions show that the hole density is significantly larger on oxygen atoms in CaCuO$_2$\ than in the NdNiO$_2$.
It is interesting to note that we see a jump (feature D) in the electron density above zero at $r$ close to the position of Nd ions in NdNiO$_2$, while in CaCuO$_2$\ the curve is flat in the region of Ca ions.
This shows that there is some electron redistribution happening around the Nd ions.
The hole density within a solid sphere (SS) around the central Ni (Cu) atom obtained by additional integration over the radius $r$ is also shown in Fig.~\ref{fig2}(e) with dashed curves.
It can be seen that the total hole density within the SS of $r\sim$4\,\AA, where the neighboring Ni (Cu) ions are located, is only $\sim$0.5 in both compounds, with slight differences related to feature D.
This is due to the screening of the hole with the electron density pulled in from the farther surroundings.
As one would expect, for a SS with $r$ of the size of the cluster the total hole density is one in both compounds.
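As an aside, the solid-sphere profiles of Fig.~\ref{fig2}(e) can be reproduced with a few lines of Python, assuming the density difference has been exported on a uniform Cartesian grid; the grid and variable names below are our own illustration, not the exact post-processing we used.
\begin{verbatim}
import numpy as np

def shell_profile(drho, dx, center, r_max=8.0, dr=0.05):
    """drho: rho(d8)-rho(d9) on a uniform 3D grid with spacing dx
    (Angstrom); center: coordinates of the central Ni (Cu) ion."""
    axes = [np.arange(n) * dx - c for n, c in zip(drho.shape, center)]
    X, Y, Z = np.meshgrid(*axes, indexing="ij")
    r = np.sqrt(X**2 + Y**2 + Z**2)
    bins = np.arange(0.0, r_max + dr, dr)
    # charge in each spherical shell: density times volume element
    shell, _ = np.histogram(r, bins=bins, weights=drho * dx**3)
    radii = 0.5 * (bins[1:] + bins[:-1])
    return radii, shell, np.cumsum(shell)  # cumsum: solid-sphere integral
\end{verbatim}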
\begin{figure}[!b]
\begin{center}
\includegraphics[width=0.480\textwidth]{entropy_4.pdf}
\caption{Single orbital entanglement entropy, $s(1)_i$, (dots) and mutual orbital entanglement entropy, $I_{i,j}$, (colored lines) of the orbital basis used to expand the $d^8$ wavefunction in Table~\ref{wfn} for NdNiO$_2$\ (a) and CaCuO$_2$\ (b).
Entanglement entropy of the orbitals centred on the central NiO$_4$/CuO$_4$ plaque are only shown.
The irrep to which the orbitals belong to are also shown.
The green and magenta colors represent the two different sets of orbitals: occupied (at the HF level) and the corresponding double-shell (virtual) orbitals, respectively.
The thickness of the black, blue and green lines denote the strength of $I_{i,j}$, and the size of the dots is proportional to $s(1)_i$.
}
\label{entanglement}
\end{center}
\end{figure}
\subsection{Orbital entanglement entropy}
To analyse the different types of correlation active in the two compounds in the $d^8$ configuration, we compute entanglement entropies~\cite{boguslawski_entanglement_2012,boguslawski_orbital_2013,boguslawski_orbital_2015}.
While the single orbital entropy, $s(1)_i $, quantifies the correlation between $i$-th orbital and the remaining set of orbitals,
the mutual information, $I_{i,j}$, is the two-orbital entropy between orbitals $i$ and $j$~\cite{legeza_optimizing_2003,rissler_measuring_2006}, and illustrates the correlation of one orbital with another, embedded in the environment comprising all other orbitals.
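For reference, in the standard formulation of the works cited above (written here in our own condensed notation), these quantities follow from the eigenvalues $w_{\alpha,i}$ of the one-orbital reduced density matrix and the analogous two-orbital entropy $s(2)_{i,j}$:
\begin{equation*}
s(1)_i = -\sum_{\alpha} w_{\alpha,i} \ln w_{\alpha,i} ,
\qquad
I_{i,j} = \frac{1}{2} \left[ s(1)_i + s(1)_j - s(2)_{i,j} \right] \left( 1 - \delta_{ij} \right) .
\end{equation*}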
We used {\sc QCMaquis}~\cite{keller_an_2015} embedded in {\sc OpenMolcas}~\cite{fdez_galvan_openmolcas_2019} package to compute the entropies.
In Figure~\ref{entanglement}, $s(1)_i$ and $I_{i,j}$ extracted from CASSCF calculations with CAS-2 active space for NdNiO$_2$\ and CaCuO$_2$\ are shown.
The orbital basis for which the entropy is computed is the same as the basis in which the wavefunction presented in Table~\ref{wfn} is expanded.
As mentioned previously, this orbital basis is obtained from partial localization of the natural orbitals in a way that only the 3$d_{x^2-y^2}$\ and the O 2$p$ ZR-like orbitals are localized.
Since a large part of electron correlation is compressed in natural orbitals, we see a tiny $s(1)_i$ for all orbitals except for the localized 3$d_{x^2-y^2}$\ and the O 2$p$ ZR-like orbitals where it is significant. This is consistent with the wavefunction in Table~\ref{wfn}.
The mutual orbital entanglement between pairs of orbitals shows strong entanglement between the 3$d_{x^2-y^2}$\ and the O 2$p$ ZR-like orbitals for both NdNiO$_2$\ and CaCuO$_2$, a consequence of the dominant weight of the configurations spanned by these two orbitals in the wavefunction.
The next strongest entanglement is between the Ni/Cu 3$d$ valence and their double-shell $4d$ orbitals.
Such strong entanglement, also observed for the undoped $d^9$ ground state~\cite{katukuri_electronic_2020}, is a result of dynamical radial correlation \cite{helgaker_molecular_2000} and orbital breathing effects~\cite{gunnarsson_density-functional_1989,bogdanov_natphys_2021}.
Interestingly, the entanglement entropy in the range 0.001-0.01 (green lines) is quite similar in the two compounds, although one sees more entanglement connections in NdNiO$_2$.
A comparison of the entropy information between NdNiO$_2$\ and CaCuO$_2$\ reveals that the Ni 3$d$ and 4$d$-like orbitals contribute rather significantly (thicker blue lines) to the total entropy, in contrast to the Cu 3$d$ and 4$d$-like orbitals, something that is also seen in the undoped compounds~\cite{katukuri_electronic_2020}.
\section{Conclusions and discussion}
In conclusion,
our {\it ab initio} many-body quantum chemistry calculations for the electron removal ($d^8$) states find a low-spin closed-shell singlet ground state in NdNiO$_2$\ and that the additional hole is mainly localized on the Ni 3$d_{x^2-y^2}$\ orbital, unlike in CaCuO$_2$, where a Zhang-Rice singlet is predominant.
We emphasise that the $d^8$ wavefunction is highly multi-configurational, with the dominant closed-shell singlet configuration contributing only $\sim$42\%.
This result is consistent with the experimental evidence~\cite{rossi2020orbital,goodge-a} of orbitally polarized singlet state as well as the presence of holes on the O $2p$ orbitals.
Importantly, the persistent dynamic radial-type correlations within the Ni $d$ manifold result in stronger $d^8$ multiplet effects in NdNiO$_2$, and consequently the additional hole footprint is more three-dimensional.
In CaCuO$_2$, we find that the electron correlations within the $d_{xy}$ and $d_{xz/yz}$ orbitals changes the hole-doped wavefunction significantly. Specifically, the double hole occupation of Cu $d_{x^2-y^2}$\ is significantly increased and this can influence the transport properties.
It was recently proposed that nickelates could be a legitimate realization of the single-band Hubbard model~\cite{kitatani_nickelate_2020}.
However, our analysis shows that even the three-band Hubbard model~\cite{eskes1991a}, which successfully describes the hole-doped scenario in cuprates, falls short of describing hole-doped nickelates, and additional orbital degrees of freedom are indeed necessary to capture the strong multiplet effects we find.
Much has been discussed about the importance of rare-earth atoms for the electronic structure of superconducting nickelates, e.g. see~\cite{nomura2021superconductivity}.
The three-dimensional nature of the hole density we find in NdNiO$_2$\ might also be hinting at the importance of out-of-plane Nd ions.
It would be interesting to compare the hole density of NdNiO$_2$\ with other iso-structural nickelates such as LaNiO$_2$\ where La $5d$ states are far from the Fermi energy.
Since the infinite-layered monovalent nickelates are thin films and often grown on substrates, one could ask the question of how the electronic structure of the undoped and doped compounds changes with varying Ni-O bond length. Would this influence the role of electronic correlations in $d^9$ nickelates? We will address these in the near future.
\section*{Conflict of Interest Statement}
The authors declare no conflict of interest.
\section*{Author Contributions}
VMK and AA designed the project. VMK and NAB performed the calculations. All the authors analysed the data. VMK wrote the paper with inputs from NAB and AA.
\section*{Funding}
We gratefully acknowledge the Max Planck Society for financial support.
\section*{Acknowledgments}
VMK would like to acknowledge Giovanni Li Manni and Oskar Weser for fruitful discussions.
| {'timestamp': '2022-01-17T02:18:00', 'yymm': '2201', 'arxiv_id': '2201.05495', 'language': 'en', 'url': 'https://arxiv.org/abs/2201.05495'} |
\section{Introduction}
If $C$ is a general curve of genus $g$, equipped with a general map
$f \colon C \to \pp^3$ of degree $d$,
it is natural to ask
whether the intersection $f(C) \cap Q$
of its image with a general quadric $Q$
is a general collection of $2d$ points on $Q$.
Interest in this question historically developed as a result of the
work of Hirschowitz \cite{mrat} on the maximal rank conjecture
for rational space curves, and the later extension of this method by Ballico
and Ellia \cite{ball} to nonspecial space curves: The
heart of these arguments revolves precisely around understanding the intersection
of a general curve with a general quadric.
In hopes of both simplifying and extending these results,
Ellingsrud and Hirschowitz \cite{eh}, and later Perrin \cite{perrin},
using the technique of liaison,
gave partial results on the generality of this intersection.
However, a complete analysis has so far remained conjectural.
To state the problem precisely, we make the following definition:
\begin{defi}
We say a stable map $f \colon C \to \pp^r$ from a curve $C$ to $\pp^r$
(with $r \geq 2$)
is a \emph{Weak Brill-Noether curve (WBN-curve)} if it
corresponds to a point in a component of
$\bar{M}_g(\pp^r, d)$ which both
dominates $\bar{M}_g$,
and whose generic member is a map
from a smooth curve, which is an immersion if $r \geq 3$,
and birational onto its image if $r = 2$;
and which is either
nonspecial or nondegenerate.
In the latter case, we refer to it as a \emph{Brill-Noether curve} (\emph{BN-curve}).
\end{defi}
\noindent
The celebrated Brill-Noether theorem
then asserts that BN-curves of degree~$d$ and genus~$g$ to~$\pp^r$ exist if and only if
\[\rho(d, g, r) := (r + 1)d - rg - r(r + 1) \geq 0.\]
Moreover, for $\rho(d, g, r) \geq 0$, the parameter space
of BN-curves is irreducible. (In particular, it makes sense
to talk about a ``general BN-curve''.)
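For instance, for space curves ($r = 3$) we have $\rho(d, g, 3) = 4d - 3g - 12$; in the first exceptional case $(d, g) = (4, 1)$ of Theorem~\ref{main-3} below, $\rho(4, 1, 3) = 16 - 3 - 12 = 1 \geq 0$, so BN-curves of this degree and genus exist and form an irreducible family.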
\medskip
In this paper, we give a complete answer to the question posed above:
For $f \colon C \to \pp^3$
a general BN-curve of degree $d$ and genus $g$
(with, of course, $\rho(d, g, 3) \geq 0$),
we show the intersection $f(C) \cap Q$ is a general collection of $2d$ points on $Q$
except in exactly six cases. Furthermore, in these six cases, we compute precisely
what the intersection is.
A natural generalization of this problem is to study the intersection of
a general BN-curve $f \colon C \to \pp^r$ (for $r \geq 2$) with a hypersurface $H$
of degree $n \geq 1$: In particular, we ask when this intersection consists
of a general collection of $dn$ points on $H$ (in all but finitely many cases).
For $r = 2$, the divisor $f(C) \cap H$ on $H$ is linearly equivalent
to $\oo_H(d)$; in particular, it can only be general if $H$ is rational, i.e.\ if $n = 1$ or $n = 2$.
In general, we note that
in order for the intersection to be general, it is evidently necessary for
\[(r + 1)d - (r - 3)g \sim (r + 1)d - (r - 3)(g - 1) = \dim \bar{M}_g(\pp^r, d)^\circ \geq (r - 1) \cdot dn.\]
(Here $\bar{M}_g(\pp^r, d)^\circ$ denotes the component of $\bar{M}_g(\pp^r, d)$
corresponding to the BN-curves, and $A \sim B$ denotes that $A$ differs from $B$ by a quantity bounded by
a function of $r$ alone.)
If the genus of $C$ is as large as possible (subject to the constraint
that $\rho(d, g, r) \geq 0$), i.e.\ if
\[g \sim \frac{r + 1}{r} \cdot d,\]
then the intersection can only be general when
\[(r + 1) \cdot d - (r - 3) \cdot \left(\frac{r + 1}{r} \cdot d \right) \gtrsim (r - 1) n \cdot d;\]
or equivalently if
\[(r + 1) - (r - 3) \cdot \frac{r + 1}{r} \geq (r - 1) n \quad \Leftrightarrow \quad n \leq \frac{3r + 3}{r^2 - r}.\]
For $r = 3$, this implies $n = 1$ or $n = 2$; for $r = 4$, this implies $n = 1$; and
for $r \geq 5$, this is impossible.
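Concretely, the bound $(3r + 3)/(r^2 - r)$ evaluates to $12/6 = 2$ for $r = 3$, to $15/12 = 5/4$ for $r = 4$, and to $18/20 < 1$ for $r = 5$; since $n \geq 1$ is an integer, this yields the assertions above.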
\medskip
To summarize, there are only
five pairs $(r, n)$ where this intersection could be, with the exception of finitely many
$(d, g)$ pairs,
a collection of $dn$ general points on $H$: The intersection of a plane curve with a line,
the intersection of a plane curve with a conic, the intersection of a space curve with a quadric,
the intersection of a space curve with a plane, and the intersection of a curve to $\pp^4$
with a hyperplane.
Our three main theorems (five, counting the first two cases, which are trivial)
give a complete description of this intersection
in these cases:
\begin{thm} \label{main-2}
Let $f \colon C \to \pp^2$ be a general BN-curve of degree~$d$ and genus~$g$. Then
the intersection $f(C) \cap Q$, of $C$ with a general conic $Q$, consists
of a general collection of $2d$ points on~$Q$.
\end{thm}
\begin{thm} \label{main-2-1}
Let $f \colon C \to \pp^2$ be a general BN-curve of degree~$d$ and genus~$g$. Then
the intersection $f(C) \cap L$, of $C$ with a general line $L$, consists
of a general collection of $d$ points on~$L$.
\end{thm}
\begin{thm} \label{main-3}
Let $f \colon C \to \pp^3$ be a general BN-curve of degree~$d$ and genus~$g$. Then
the intersection $f(C) \cap Q$, of $C$ with a general quadric $Q$, consists
of a general collection of $2d$ points on $Q$, unless
\[(d, g) \in \{(4, 1), (5, 2), (6, 2), (6, 4), (7, 5), (8, 6)\}.\]
And conversely, in the above cases, we may describe the intersection
$f(C) \cap Q \subset Q \simeq \pp^1 \times \pp^1$ in terms of
the intrinsic geometry of $Q \simeq \pp^1 \times \pp^1$ as follows:
\begin{itemize}
\item If $(d, g) = (4, 1)$, then $f(C) \cap Q$ is the intersection of two general curves
of bidegree $(2, 2)$.
\item If $(d, g) = (5, 2)$, then $f(C) \cap Q$ is a general collection of $10$ points
on a curve of bidegree~$(2, 2)$.
\item If $(d, g) = (6, 2)$, then $f(C) \cap Q$ is a general collection of $12$ points
$p_1, \ldots, p_{12}$ lying on a curve $D$ which satisfy:
\begin{itemize}
\item The curve $D$ is of bidegree $(3, 3)$ (and so is in particular of arithmetic genus $4$).
\item The curve $D$ has two nodes (and so is in particular of geometric genus $2$).
\item The divisors $\oo_D(2,2)$ and $p_1 + \cdots + p_{12}$ are linearly equivalent
when pulled back to the normalization of $D$.
\end{itemize}
\item If $(d, g) = (6, 4)$, then $f(C) \cap Q$ is the intersection of
two general curves
of bidegrees $(2, 2)$ and $(3,3)$ respectively.
\item If $(d, g) = (7, 5)$, then $f(C) \cap Q$ is a general collection of $14$ points
$p_1, \ldots, p_{14}$ lying on a curve $D$ which satisfy:
\begin{itemize}
\item The curve $D$ is of bidegree $(3, 3)$.
\item The divisor $p_1 + \cdots + p_{14} - \oo_D(2, 2)$ on $D$
is effective.
\end{itemize}
\item If $(d, g) = (8, 6)$, then $f(C) \cap Q$ is a general collection of $16$
points on a curve of bidegree~$(3,3)$.
\end{itemize}
In particular, the above descriptions show $f(C) \cap Q$ is not a general collection
of $2d$ points on~$Q$.
\end{thm}
\begin{thm} \label{main-3-1}
Let $f \colon C \to \pp^3$ be a general BN-curve of degree~$d$ and genus~$g$. Then
the intersection $f(C) \cap H$, of $C$ with a general plane $H$, consists
of a general collection of $d$ points on $H$, unless
\[(d, g) = (6, 4).\]
And conversely, for $(d, g) = (6, 4)$, the intersection $f(C) \cap H$
is a general collection of $6$ points on a conic in $H \simeq \pp^2$; in particular,
it is not a general collection of $d = 6$ points.
\end{thm}
\begin{thm} \label{main-4}
Let $f \colon C \to \pp^4$ be a general BN-curve of degree~$d$ and genus~$g$. Then
the intersection $f(C) \cap H$, of $C$ with a general hyperplane $H$, consists
of a general collection of $d$ points on $H$, unless
\[(d, g) \in \{(8, 5), (9, 6), (10, 7)\}.\]
And conversely, in the above cases, we may describe the intersection
$f(C) \cap H \subset H \simeq \pp^3$ in terms of
the intrinsic geometry of $H \simeq \pp^3$ as follows:
\begin{itemize}
\item If $(d, g) = (8, 5)$, then $f(C) \cap H$ is the intersection of three general quadrics.
\item If $(d, g) = (9, 6)$, then $f(C) \cap H$ is a general collection of $9$ points
on a curve $E \subset \pp^3$ of degree~$4$ and genus~$1$.
\item If $(d, g) = (10, 7)$, then $f(C) \cap H$ is a general collection of $10$ points
on a quadric.
\end{itemize}
\end{thm}
The above theorems can be proven by studying the normal bundle of
the general BN-curve $f \colon C \to \pp^r$: For any hypersurface $S$ of degree $n$,
and unramified map $f \colon C \to \pp^r$ dimensionally transverse to $S$,
basic deformation theory implies that the map
\[f \mapsto (f(C) \cap S)\]
(from the corresponding Kontsevich space of stable maps, to the
corresponding symmetric power of $S$)
is smooth at $[f]$ if and only if
\[H^1(N_f(-n)) = 0.\]
Here, $N_f(-n) = N_f \otimes f^* \oo_{\pp^r}(-n)$
denotes the twist of the normal bundle $N_f$ of the map $f \colon C \to \pp^r$;
this is the vector bundle on the domain $C$ of $f$ defined via
\[N_f = \ker(f^* \Omega_{\pp^r} \to \Omega_C)^\vee.\]
Since a map between reduced irreducible varieties is dominant
if and only if it is generically smooth, the map $f \mapsto (f(C) \cap S)$ is therefore dominant if and only if
$H^1(N_f(-n)) = 0$ for $[f]$ general.
This last condition being visibly open, our problem is thus to prove
the existence of an unramified BN-curve $f \colon C \to \pp^r$ of specified degree and genus,
for which $H^1(N_f(-n)) = 0$.
For this, we will use a variety of techniques, most crucially specialization
to a map from a reducible curve $X \cup_\Gamma Y \to \pp^r$.
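For later reference, we record the Euler characteristic controlling these vanishing statements; this is a routine computation, included here for the reader's convenience. From the Euler sequence and the defining sequence of $N_f$, the bundle $N_f$ has rank $r - 1$ and degree $(r + 1)d + 2g - 2$, so Riemann--Roch gives
\[\chi(N_f(-n)) = (r + 1)d + 2g - 2 - n(r - 1)d + (r - 1)(1 - g) = (r + 1)d - (r - 3)(g - 1) - n(r - 1)d.\]
In particular, taking $n = 0$ recovers the dimension count for $\bar{M}_g(\pp^r, d)^\circ$ used above; and for $r = 3$ and $n = 2$ we obtain $\chi(N_f(-2)) = 0$, an equality which recurs several times below.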
We begin, in
Section~\ref{sec:reducible}, by giving several tools
for studying the normal bundle of a map from a reducible curve.
Then in
Section~\ref{sec:inter}, we review results on the closely-related
\emph{interpolation problem} (c.f.\ \cite{firstpaper}).
In Section~\ref{sec:rbn}, we review results about when certain maps from reducible
curves, of the type we shall use, are BN-curves.
Using these techniques, we then concentrate our attention in Section~\ref{sec:indarg} on
maps from reducible curves $X \cup_\Gamma Y \to \pp^r$ where $Y$ is a line or canonical curve.
Consideration of these curves enables us to make an inductive argument
that reduces our main theorems to finite casework.
This finite casework is then taken care of in three steps:
First, in Sections~\ref{sec:hir}--\ref{sec:hir-3}, we again use degeneration
to a map from a reducible curve, considering the special case when $Y \to \pp^r$ factors through a
hyperplane.
Second, in Section~\ref{sec:in-surfaces},
we specialize to immersions of smooth curves contained in Del Pezzo surfaces, and study
the normal bundle of our curve using the
normal bundle exact sequence for a curve in a surface.
Lastly, in Section~\ref{sec:51} we use the geometry of the cubic scroll in $\pp^4$ to
construct an example of an immersion of a smooth curve $f \colon C \hookrightarrow \pp^3$ of degree $5$
and genus $1$ with $H^1(N_f(-2)) = 0$.
Finally, in Section~\ref{sec:converses}, we examine each of the cases
in our above theorems where the intersection is not general. In each
of these cases, we work out precisely what the intersection is
(and show that it is not general).
\subsection*{Conventions}
In this paper we make the following conventions:
\begin{itemize}
\item We work over an algebraically closed field of characteristic zero.
\item
A \emph{curve} shall refer to a nodal curve, which is assumed to be connected unless otherwise specified.
\end{itemize}
\subsection*{Acknowledgements}
The author would like to thank Joe Harris for
his guidance throughout this research.
The author would also like to thank Gavril Farkas, Isabel Vogt, and
members of the Harvard and MIT mathematics departments
for helpful conversations;
and to acknowledge the generous
support both of the Fannie and John Hertz Foundation,
and of the Department of Defense
(NDSEG fellowship).
\section{Normal Bundles of Maps from Reducible Curves \label{sec:reducible}}
In order to describe the normal bundle of a map from a reducible curve,
it will be helpful to introduce some notions concerning modifications
of vector bundles.
The interested reader is encouraged to consult \cite{firstpaper} (Sections~2, 3, and~5),
where these notions are developed in full; we include here only a brief summary, which will
suffice for our purposes.
\begin{defi}
If $f \colon X \to \pp^r$ is a map from a scheme $X$ to $\pp^r$,
and $p \in X$ is a point, we write $[T_p X] \subset \pp^r$
for the \emph{projective realization of the tangent space} --- i.e.\ for the
linear subspace $L \subset \pp^r$ containing $f(p)$ and satisfying
$T_{f(p)} L = f_*(T_p X)$.
\end{defi}
\begin{defi} Let $\Lambda \subset \pp^r$ be a linear subspace, and $f \colon C \to \pp^r$
be an unramified map from a curve.
Write $U_{f, \Lambda} \subset C$ for the open subset of points $p \in C$ so that
the projective realization of the tangent space $[T_p C]$ does not meet $\Lambda$. Suppose that $U_{f, \Lambda}$
is nonempty, and contains the singular locus of $C$. Define
\[N_{f \to \Lambda}|_{U_{f, \Lambda}} \subset N_f|_{U_{f, \Lambda}}\]
as the kernel of the differential of the projection from $\Lambda$
(which is regular on a neighborhood of $f(U_{f, \Lambda})$).
We then let $N_{f \to \Lambda}$ be the unique extension of $N_{f \to \Lambda}|_{U_{f, \Lambda}}$
to a sub-vector-bundle (i.e.\ a subsheaf with locally free quotient) of $N_f$ on $C$.
For a more thorough discussion of this construction (written for $f$ an immersion
but which readily generalizes),
see Section~5 of \cite{firstpaper}.
\end{defi}
\begin{defi} Given a subbundle $\mathcal{F} \subset \mathcal{E}$ of a vector bundle on a scheme $X$,
and a Cartier divisor $D$ on $X$, we define
\[\mathcal{E}[D \to \mathcal{F}]\]
as the kernel of the natural map
\[\mathcal{E} \to (\mathcal{E} / \mathcal{F})|_D.\]
Note that $\mathcal{E}[D \to \mathcal{F}]$ is naturally isomorphic to $\mathcal{E}$
on $X \smallsetminus D$. Additionally, note that $\mathcal{E}[D \to \mathcal{F}]$
depends only on $\mathcal{F}|_D$.
For a more thorough discussion of this construction, see Sections~2 and~3 of \cite{firstpaper}.
\end{defi}
\begin{defi}
Given a subspace $\Lambda \subset \pp^r$, an unramified map $f \colon C \to \pp^r$ from a curve, and a Cartier divisor $D$ on $C$,
we define
\[N_f[D \to \Lambda] := N_f[D \to N_{f \to \Lambda}].\]
\end{defi}
We note that these constructions can be iterated on a smooth curve: Given subbundles $\mathcal{F}_1, \mathcal{F}_2 \subset \mathcal{E}$
of a vector bundle on a smooth curve,
there is a unique subbundle $\mathcal{F}_2' \subset \mathcal{E}[D_1 \to \mathcal{F}_1]$
which agrees with $\mathcal{F}_2$ away from $D_1$ (c.f.\ Proposition~3.1 of \cite{firstpaper}).
We may then define:
\[\mathcal{E}[D_1 \to \mathcal{F}_1][D_2 \to \mathcal{F}_2] := \mathcal{E}[D_1 \to \mathcal{F}_1][D_2 \to \mathcal{F}_2'].\]
Basic properties of this construction (as well as precise conditions when such iterated modifications
make sense for higher-dimensional
varieties) are investigated in \cite{firstpaper} (Sections~2 and~3).
For example, we have natural isomorphisms $\mathcal{E}[D_1 \to \mathcal{F}_1][D_2 \to \mathcal{F}_2] \simeq \mathcal{E}[D_2 \to \mathcal{F}_2][D_1 \to \mathcal{F}_1]$
in several cases, including when $\mathcal{F}_1 \subseteq \mathcal{F}_2$.
Using these constructions, we may give a partial characterization of the
normal bundle $N_f$ of an unramified map from a reducible curve $f \colon X \cup_\Gamma Y \to \pp^r$:
\begin{prop}[Hartshorne-Hirschowitz]
Let $f \colon X \cup_\Gamma Y \to \pp^r$ be an unramified map from a reducible curve.
Write $\Gamma = \{p_1, p_2, \ldots, p_n\}$,
and for each $i$ let $q_i \neq f(p_i)$ be a point on the projective realization
$[T_{p_i} Y]$ of the tangent space to $Y$ at $p_i$. Then we have
\[N_f|_X = N_{f|_X}(\Gamma)[p_1 \to q_1][p_2 \to q_2] \cdots [p_n \to q_n].\]
\end{prop}
\begin{proof}
This is Corollary~3.2 of \cite{hh}, re-expressed in the above
language. (Hartshorne and Hirschowitz state this only for $r = 3$
and $f$ an immersion; but the argument they give works for $r$ arbitrary.)
\end{proof}
Our basic strategy to study the normal bundle of an unramified map from a
reducible curve $f \colon C \cup_\Gamma D \to \pp^r$
is given by the following lemma:
\begin{lm} \label{glue}
Let $f \colon C \cup_\Gamma D \to \pp^r$ be an unramified map from a reducible curve,
and let $E$ and $F$ be
divisors supported on $C \smallsetminus \Gamma$ and $D \smallsetminus \Gamma$
respectively.
Suppose that the natural map
\[\alpha \colon H^0(N_{f|_D}(-F)) \to \bigoplus_{p \in \Gamma} \left(\frac{T_p (\pp^r)}{f_* (T_p (C \cup_\Gamma D))}\right)\]
is surjective (respectively injective), and that
\begin{gather*}
H^1(N_f|_D (-F)) = 0 \quad \text{(respectively } H^0(N_f|_D (-F)) = H^0(N_{f|_D} (-F))\text{)} \\
H^1(N_{f|_C} (-E)) = 0 \quad \text{(respectively } H^0(N_{f|_C} (-E)) = 0\text{)}.
\end{gather*}
Then we have
\[H^1(N_f(-E-F)) = 0 \quad \text{(respectively } H^0(N_f(-E-F)) = 0\text{)}.\]
\end{lm}
\begin{proof}
Write $\mathcal{K}$ for the sheaf supported along $\Gamma$ whose
stalk at $p \in \Gamma$ is the quotient of tangent spaces:
\[\mathcal{K}_p = \frac{T_p(\pp^r)}{f_*(T_p(C \cup_\Gamma D))}.\]
Additionally, write $\mathcal{N}$ for the (not locally-free) subsheaf of $N_f$
``corresponding to deformations which do not smooth the nodes $\Gamma$''; or in
symbols, as the kernel of the natural map
\[N_f \to T^1_\Gamma,\]
where $T^1$ is the Lichtenbaum-Schlessinger $T^1$-functor.
We have the following exact sequences of sheaves:
\[\begin{CD}
0 @>>> \mathcal{N} @>>> N_f @>>> T^1_\Gamma @>>> 0 \\
@. @VVV @VVV @| @. \\
0 @>>> N_{f|_D} @>>> N_f|_D @>>> T^1_\Gamma @>>> 0 \\
@. @. @. @. @. \\
0 @>>> \mathcal{N} @>>> N_{f|_C} \oplus N_{f|_D} @>>> \mathcal{K} @>>> 0. \\
\end{CD}\]
The first sequence above is just the definition of $\mathcal{N}$.
Restriction of the first sequence to~$D$ yields the second sequence
(we have $\mathcal{N}|_D \simeq N_{f|_D}$);
the map between them being of course the restriction map.
The final sequence expresses $\mathcal{N}$ as the gluing of $\mathcal{N}|_C \simeq N_{f|_C}$
to $\mathcal{N}|_D \simeq N_{f|_D}$ along $\mathcal{N}|_\Gamma \simeq \mathcal{K}$.
Twisting everything in sight by $-E-F$, we obtain new sequences:
\[\begin{CD}
0 @>>> \mathcal{N}(-E-F) @>>> N_f(-E-F) @>>> T^1_\Gamma @>>> 0 \\
@. @VVV @VVV @| @. \\
0 @>>> N_{f|_D}(-F) @>>> N_f|_D(-F) @>>> T^1_\Gamma @>>> 0 \\
@. @. @. @. @. \\
0 @>>> \mathcal{N}(-E-F) @>>> N_{f|_C}(-E) \oplus N_{f|_D}(-F) @>>> \mathcal{K} @>>> 0. \\
\end{CD}\]
The commutativity of the rightmost square in the first diagram implies that
the image of $H^0(N_f(-E-F)) \to H^0(T^1_\Gamma)$
is contained in the image of $H^0(N_f|_D(-F)) \to H^0(T^1_\Gamma)$.
Consequently, we have
\begin{align}
\dim H^0(N_f(-E-F)) &= \dim H^0(\mathcal{N}(-E-F)) + \dim \im\left(H^0(N_f(-E-F)) \to H^0(T^1_\Gamma)\right) \nonumber \\
&\leq \dim H^0(\mathcal{N}(-E-F)) + \dim \im\left(H^0(N_f|_D(-F)) \to H^0(T^1_\Gamma)\right) \nonumber \\
&= \dim H^0(\mathcal{N}(-E-F)) + \dim H^0(N_f|_D(-F)) - \dim H^0(N_{f|_D}(-F)). \label{glue-dim}
\end{align}
Next, our assumption that $H^0(N_{f|_D}(-F)) \to H^0(\mathcal{K})$ is surjective
(respectively our assumptions that $H^0(N_{f|_C}(-E)) = 0$ and $H^0(N_{f|_D}(-F)) \to H^0(\mathcal{K})$ is injective) implies
in particular that $H^0(N_{f|_C}(-E) \oplus N_{f|_D}(-F)) \to H^0(\mathcal{K})$ is surjective (respectively injective).
In the ``respectively'' case, this yields $H^0(\mathcal{N}(-E-F)) = 0$, which combined with \eqref{glue-dim}
and our assumption that $H^0(N_f|_D(-F)) = H^0(N_{f|_D}(-F))$ implies $H^0(N_f(-E-F)) = 0$ as desired.
In the other case, we have a bit more work to do; the surjectivity of
$H^0(N_{f|_D}(-F)) \to H^0(\mathcal{K})$ yields
\[\dim H^0(\mathcal{N}(-E-F)) = \dim H^0(N_{f|_C}(-E) \oplus N_{f|_D}(-F)) - \dim H^0(\mathcal{K});\]
or upon rearrangement,
\begin{align*}
\dim H^0(\mathcal{N}(-E-F)) - \dim H^0(N_{f|_D}(-F)) &= \dim H^0(N_{f|_C}(-E)) - \dim H^0(\mathcal{K}) \\
&= \chi(N_{f|_C}(-E)) - \chi(\mathcal{K}).
\end{align*}
(For the last equality, $\dim H^0(N_{f|_C}(-E)) = \chi(N_{f|_C}(-E)) + \dim H^1(N_{f|_C}(-E)) = \chi(N_{f|_C}(-E))$
because $H^1(N_{f|_C}(-E)) = 0$ by assumption. Additionally,
$\dim H^0(\mathcal{K}) = \chi(\mathcal{K})$
because $\mathcal{K}$ is punctual.)
Substituting this into \eqref{glue-dim}, and noting that
$\dim H^0(N_f|_D(-F)) = \chi(N_f|_D(-F))$ because
$H^1(N_f|_D(-F)) = 0$ by assumption, we obtain:
\begin{align}
\dim H^0(N_f(-E-F)) &\leq \dim H^0(N_f|_D(-F)) + \dim H^0(\mathcal{N}(-E-F)) - \dim H^0(N_{f|_D}(-F)) \nonumber \\
&= \chi(N_f|_D(-F)) + \chi(N_{f|_C}(-E)) - \chi(\mathcal{K}) \nonumber \\
&= \chi(N_f|_D(-F)) + \chi(N_f|_C(-E - \Gamma)) \nonumber \\
&= \chi(N_f(-E - F)). \label{glue-done}
\end{align}
For the final two equalities, we have used the exact sequences of sheaves
\begin{gather*}
0 \to N_f|_C(-E - \Gamma) \to N_{f|_C}(-E) \to \mathcal{K} \to 0 \\[1ex]
0 \to N_f|_C(-E - \Gamma) \to N_f(-E-F) \to N_f|_D(-F) \to 0;
\end{gather*}
which are just twists by $-E-F$ of the exact sequences:
\begin{gather*}
0 \to N_f|_C(-\Gamma) \to N_{f|_C} \to \mathcal{K} \to 0 \\[1ex]
0 \to N_f|_C(-\Gamma) \to N_f \to N_f|_D \to 0.
\end{gather*}
\noindent
To finish, we note that, by \eqref{glue-done},
\[\dim H^1(N_f(-E-F)) = \dim H^0(N_f(-E-F)) - \chi(N_f(-E - F)) \leq 0,\]
and so
$H^1(N_f(-E-F)) = 0$ as desired.
\end{proof}
In the case where $f|_D$ factors through a hyperplane,
the hypotheses of Lemma~\ref{glue} become easier to check:
\begin{lm} \label{hyp-glue}
Let $f \colon C \cup_\Gamma D \to \pp^r$ be an unramified map from a reducible curve,
such that $f|_D$ factors as a composition of $f_D \colon D \to H$ with the inclusion of a hyperplane $\iota \colon H \subset \pp^r$,
while $f|_C$ is transverse to $H$ along $\Gamma$.
Let $E$ and $F$ be
divisors supported on $C \smallsetminus \Gamma$ and $D \smallsetminus \Gamma$
respectively.
Suppose that, for some $i \in \{0, 1\}$,
\[H^i(N_{f_D}(-\Gamma-F)) = H^i(\oo_D(1)(\Gamma-F)) = H^i(N_{f|_C} (-E)) = 0.\]
Then we have
\[H^i(N_f(-E-F)) = 0.\]
\end{lm}
\begin{proof}
If $i = 0$, we note that $H^0(\oo_D(1)(\Gamma - F)) = 0$ implies
$H^0(\oo_D(1)(-F)) = 0$. In particular, using
the exact sequences
\[\begin{CD}
0 @>>> N_{f_D}(-F) @>>> N_{f|_D}(-F) @>>> \oo_D(1)(-F) @>>> 0 \\
@. @| @VVV @VVV @. \\
0 @>>> N_{f_D}(-F) @>>> N_f|_D(-F) @>>> \oo_D(1)(\Gamma - F) @>>> 0,
\end{CD}\]
we conclude from the first sequence that
$H^0(N_{f_D}(-F)) \to H^0(N_{f|_D}(-F))$ is an isomorphism, and
from the $5$-lemma applied to the corresponding map
between long exact sequences that $H^0(N_{f|_D}(-F)) = H^0(N_f|_D(-F))$.
Similarly, when $i = 1$, we note that
$H^1(N_{f_D}(-\Gamma-F)) = 0$ implies $H^1(N_{f_D}(-F)) = 0$;
we thus conclude from the second sequence that $H^1(N_f|_D(-F)) = 0$.
It thus remains to check that the map $\alpha$ in Lemma~\ref{glue}
is injective if $i = 0$ and surjective if $i = 1$. For this we use
the commutative diagram
\[\begin{CD}
\displaystyle H^0(N_{f_D}(-F)) @>\beta>> N_{f_D}|_\Gamma \simeq \displaystyle \bigoplus_{p \in \Gamma} \left(\frac{T_p H}{f_*(T_p D)}\right) \\
@VgVV @VV{\iota_*}V \\
\displaystyle H^0(N_{f|_D}(-F)) @>\alpha>> \displaystyle \bigoplus_{p \in \Gamma} \left(\frac{T_p (\pp^r)}{f_*(T_p (C \cup_\Gamma D))}\right).
\end{CD}\]
Since $f|_C$ is transverse to $H$ along $\Gamma$, the
map $\iota_*$ above is an isomorphism. In particular,
since $g$ is an isomorphism when $i = 0$, it suffices to check
that $\beta$ is injective if $i = 0$ and surjective if $i = 1$.
But using the exact sequence
\[0 \to N_{f_D}(-\Gamma-F) \to N_{f_D}(-F) \to N_{f_D}|_\Gamma \to 0,\]
this follows from our assumption that $H^i(N_{f_D}(-\Gamma-F)) = 0$.
\end{proof}
\section{Interpolation \label{sec:inter}}
If we generalize $N_f(-n)$ to $N_f(-D)$, where $D$ is a general effective divisor,
we get the problem of ``interpolation.'' Geometrically, this corresponds to
asking if there is a curve of degree $d$ and genus $g$ which passes through a
collection of points which are general in $\pp^r$
(as opposed to general in a hypersurface $S$).
This condition is analogous in some sense to the conditions
of semistability and section-semistability
(see Section~3 of~\cite{nasko}), as well as to the
Raynaud condition (property $\star$ of \cite{raynaud});
although we shall not make use of these analogies here.
\begin{defi} \label{def:inter} We say a vector bundle $\mathcal{E}$ on a curve $C$ \emph{satisfies interpolation}
if it is nonspecial, and for a general effective divisor $D$ of any degree,
\[H^0(\mathcal{E}(-D)) = 0 \tor H^1(\mathcal{E}(-D)) = 0.\]
\end{defi}
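Unwinding Definition~\ref{def:inter}: since $\chi(\mathcal{E}(-D)) = \chi(\mathcal{E}) - (\deg D)(\rk \mathcal{E})$, interpolation says precisely that for a general effective divisor $D$ of degree $m$,
\[\dim H^0(\mathcal{E}(-D)) = \max\big(0, \, \chi(\mathcal{E}) - m \cdot \rk \mathcal{E}\big);\]
that is, general points impose independent conditions on sections of $\mathcal{E}$ until no sections remain.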
We have the following results on interpolation from \cite{firstpaper}.
To rephrase them in our current language,
note that if $f \colon C \to \pp^r$ is a general BN-curve for $r \geq 3$, then $f$ is an immersion,
so $N_f$ coincides with the normal bundle $N_{f(C)/\pp^r}$ of the image.
Note also that, from Brill-Noether theory,
a general BN-curve $f \colon C \to \pp^r$ of degree $d$ and genus $g$
is nonspecial (i.e.\ satisfies $H^1(f^* \oo_{\pp^r}(1)) = 0$) if and only if
$d \geq g + r$.
\begin{prop}[Theorem~1.3 of~\cite{firstpaper}] \label{inter}
Let $f \colon C \to \pp^r$ (for $r \geq 3$) be a general BN-curve of degree $d$ and genus $g$, where
\[d \geq g + r.\]
Then $N_f$ satisfies interpolation, unless
\[(d, g,r) \in \{(5,2,3), (6,2,4), (7,2,5)\}.\]
\end{prop}
\begin{prop}[Proposition~4.12 of~\cite{firstpaper}] \label{twist}
Let $\mathcal{E}$ be a vector bundle on a curve $C$, and $D$ be a divisor on $C$.
If $\mathcal{E}$ satisfies interpolation and
\[\chi(\mathcal{E}(-D)) \geq (\rk \mathcal{E}) \cdot (\operatorname{genus} C),\]
then $\mathcal{E}(-D)$ satisfies interpolation. In particular,
\[H^1(\mathcal{E}(-D)) = 0.\]
\end{prop}
\begin{lm} \label{g2} Let $f \colon C \to \pp^r$ (for $r \in \{3, 4, 5\}$)
be a general BN-curve of degree $r + 2$ and genus $2$.
Then $H^1(N_f(-1)) = 0$.
\end{lm}
\begin{proof}
We will show that there exists
an immersion $C \hookrightarrow \pp^r$, which is a BN-curve of degree $r + 2$ and genus $2$, and whose image
meets a hyperplane $H$ transversely
in a general collection of $r + 2$ points. For this, we first find a rational normal
curve $R \subset H$ passing through $r + 2$ general points, which is possible
by Corollary~1.4 of~\cite{firstpaper}.
This rational
normal curve is then the hyperplane section of some rational surface scroll $S \subset \pp^r$
(and we can freely choose the projective equivalence class of $S$).
It thus suffices to prove that there exists a smooth curve $C \subset S$,
for which $C \subset S \subset \pp^r$ is a BN-curve of degree $r + 2$ and genus $2$,
such that $C \cap (H \cap S)$ is a set of $r + 2$ general points on $H \cap S$;
or alternatively such that the map
\[C \mapsto (C \cap (H \cap S)),\]
from the Hilbert scheme of curves on $S$, to the Hilbert scheme of points
on $H \cap S$,
is smooth at $[C]$; this in turn would follow from
$H^1(N_{C/S}(-1)) = 0$.
But by Corollary~13.3 of \cite{firstpaper}, the general BN-curve $C' \subset \pp^r$
(which is an immersion since $r \geq 3$) of degree $r + 2$ and genus $2$
in $\pp^r$ is contained in some rational surface
scroll $S'$, and satisfies $\chi(N_{C'/S'}) = 11$. Since we can choose $S$ projectively
equivalent to $S'$,
we may thus find a BN-curve $C \subset S$ of degree~$r + 2$
and genus~$2$ with $\chi(N_{C/S}) = 11$. But then,
\[\chi(N_{C/S}(-1)) = 11 - d \geq g \quad \Rightarrow \quad H^1(N_{C/S}(-1)) = 0. \qedhere\]
\end{proof}
\noindent
Combining these results, we obtain:
\begin{lm} \label{from-inter} Let $f \colon C \to \pp^r$ (for $r \geq 3$)
be a general BN-curve of degree $d$ and genus $g$.
Suppose that $d \geq g + r$.
\begin{itemize}
\item If $r = 3$ and $g = 0$, then $H^1(N_f(-2)) = 0$. In fact, $N_f(-2)$ satisfies interpolation.
\item If $r = 3$, then $H^1(N_f(-1)) = 0$. In fact, $N_f(-1)$ satisfies interpolation
except when $(d, g) = (5, 2)$.
\item If $r = 4$ and $d \geq 2g$, then $H^1(N_f(-1)) = 0$. In fact, $N_f(-1)$ satisfies interpolation
except when $(d, g) = (6, 2)$.
\end{itemize}
\end{lm}
\begin{proof}
When $(d, g, r) \in \{(5, 2, 3), (6, 2, 4)\}$, the desired result follows from Lemma~\ref{g2}.
Otherwise,
from Proposition~\ref{inter}, we know that $N_f$ satisfies interpolation.
Hence, the desired conclusion follows by applying
Proposition~\ref{twist}: If $r = 3$, then
\begin{align*}
\chi(N_f(-1)) &= 2d \geq 2g = (r - 1) g\\
\chi(N_f(-2)) &= 0 = (r - 1)g;
\end{align*}
and if $r = 4$ and $d \geq 2g$, then
\[\chi(N_f(-1)) = 2d - g + 1 \geq 3g = (r - 1)g. \qedhere \]
\end{proof}
\begin{lm} \label{addone-raw}
Suppose $f \colon C \cup_u L \to \pp^3$ is an unramified map
from a reducible curve, with $L \simeq \pp^1$, and $u$ a single point,
and $f|_L$ of degree~$1$.
Write $v \neq f(u)$ for some other point on $f(L)$. If
\[H^1(N_{f|_C}(-2)(u)[2u \to v]) = 0,\]
then we have
\[H^1(N_f(-2)) = 0.\]
\end{lm}
\begin{proof}
We apply Lemma~8.5 of \cite{firstpaper} (which is stated for $f$ an immersion,
in which case $N_f = N_{C \cup L}$ and $N_{f|_C} = N_C$, but the same proof works
whenever $f$ is unramified); we take $N_C' = N_{f|_C}(-2)$
and $\Lambda_1 = \Lambda_2 = \emptyset$. This implies $N_f(-2)$ satisfies
interpolation (c.f.\ Definition~\ref{def:inter}) provided that $N_{f|_C}(-2)(u)[u \to v][u \to v]$ satisfies interpolation.
But we have
\[\chi(N_f(-2)) = \chi(N_{f|_C}(-2)(u)[u \to v][u \to v]) = 0;\]
so both of these interpolation statements are equivalent to the vanishing of $H^1$.
That is, we have $H^1(N_f(-2)) = 0$, provided that
\[H^1(N_{f|_C}(-2)(u)[u \to v][u \to v]) = H^1(N_{f|_C}(-2)(u)[2u \to v]) = 0,\]
as desired.
\end{proof}
We finish this section with the following proposition,
which immediately implies Theorems~\ref{main-2} and~\ref{main-2-1}:
\begin{prop} \label{p2}
Let $f \colon C \to \pp^2$ be an unramified map from a curve. Then $N_f(-2)$ satisfies interpolation.
In particular $H^1(N_f(-2)) = H^1(N_f(-1)) = 0$.
\end{prop}
\begin{proof}
By adjunction,
\[N_f \simeq K_C \otimes f^* K_{\pp^2}^{-1} \simeq K_C(3) \imp N_f(-2) \simeq K_C(1).\]
By Serre duality,
\[H^1(K_C(1)) \simeq H^0(\oo_C(-1))^\vee = 0;\]
which since $K_C(1)$ is a line bundle implies it satisfies interpolation.
\end{proof}
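Concretely, $\deg K_C(1) = (2g - 2) + d$, so $\chi(K_C(1)) = g + d - 1$; interpolation for $N_f(-2) \simeq K_C(1)$ thus asserts that for a general effective divisor $D$ of degree $m$ on $C$, we have $\dim H^0(K_C(1)(-D)) = \max(0, g + d - 1 - m)$.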
\section{Reducible BN-Curves \label{sec:rbn}}
\begin{defi} Let $\Gamma \subset \pp^r$ be a finite set of $n$ points. A pair
$(f \colon C \to \pp^r, \Delta \subset C_{\text{sm}})$,
where $C$ is a curve, $f$ is a map from $C$ to $\pp^r$, and $\Delta$ is a subset of $n$ points on the smooth locus $C_{\text{sm}}$,
shall be called a \emph{marked curve (respectively marked BN-curve, respectively marked WBN-curve) passing through $\Gamma$}
if $f \colon C \to \pp^r$ is a map from a curve (respectively a BN-curve, respectively a WBN-curve) and $f(\Delta) = \Gamma$.
Given a marked curve $(f \colon C \to \pp^r, \Delta)$ passing through $\Gamma$,
we realize $\Gamma$ as a subset of $C$ via
$\Gamma \simeq \Delta \subset C$.
For $p \in \Gamma$,
we then define the \emph{tangent line $T_p (f, \Gamma)$ at $p$} to be the unique line $\ell \subset \pp^r$ through $p$
with $T_p \ell = f_* T_p C$.
\end{defi}
Let $\Gamma \subset \pp^r$ be a finite set of $n$ general points,
and $(f_i \colon C_i \to \pp^r, \Gamma_i)$ be marked WBN-curves passing through $\Gamma$.
We then write $C_1 \cup_\Gamma C_2$ for the curve obtained
from $C_1$ and $C_2$ by gluing
$\Gamma_1$ to $\Gamma_2$ via the isomorphism $\Gamma_1 \simeq \Gamma \simeq \Gamma_2$.
The maps $f_i$ give rise to a map $f \colon C_1 \cup_\Gamma C_2 \to \pp^r$
from a reducible curve.
Then we have the following result:
\begin{prop}[Theorem~1.3 of \cite{rbn}] \label{prop:glue}
Suppose that, for at least one $i \in \{1, 2\}$, we have
\[(r + 1) d_i - r g_i + r \geq rn.\]
Then
$f \colon C_1 \cup_\Gamma C_2 \to \pp^r$ is a WBN-curve.
\end{prop}
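For example, in the gluings with canonical curves carried out in Lemmas~\ref{add-can-3} and~\ref{add-can-4} below, this hypothesis may be checked (with equality) on the canonical component: in $\pp^3$ we have $(d_2, g_2, n) = (6, 4, 5)$ and $4 \cdot 6 - 3 \cdot 4 + 3 = 15 = 3 \cdot 5$, while in $\pp^4$ we have $(d_2, g_2, n) = (8, 5, 6)$ and $5 \cdot 8 - 4 \cdot 5 + 4 = 24 = 4 \cdot 6$.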
\begin{prop} \label{prop:interior}
In Proposition~\ref{prop:glue}, suppose that $[f_1, \Gamma_1]$ is general in some component
of the space of marked WBN-curves passing through $\Gamma$,
and that $H^1(N_{f_2}) = 0$. Then $H^1(N_f) = 0$.
\end{prop}
\begin{proof}
This follows from combining Lemmas~3.2 and~3.4 of~\cite{rbn}.
\end{proof}
The following lemmas give information about the spaces of marked BN-curves
passing through small numbers of points.
\begin{lm} \label{small-irred}
Let $\Gamma \subset \pp^r$ be a general set of $n \leq r + 2$ points,
and $d$ and $g$ be integers with $\rho(d, g, r) \geq 0$.
Then the space of marked BN-curves of degree $d$ and genus $g$ to $\pp^r$
passing through $\Gamma$ is irreducible.
\end{lm}
\begin{proof}
First note that, since $n \leq r + 2$, any $n$ points in linear general position
are related by an automorphism of $\pp^r$. Fix some ordering on $\Gamma$.
The space of BN-curves of degree $d$ and genus $g$
is irreducible, and the source of the generic BN-curve is irreducible;
consequently the space of such BN-curves with an ordered collection of $n$
marked points is irreducible, as is the open subset thereof where the images
of the marked points are in linear general position.
It follows that the space of such marked curves endowed with an automorphism
bringing the images of the ordered marked points to~$\Gamma$ (respecting our fixed ordering on $\Gamma$)
is also irreducible.
But by applying the automorphism to the curve and forgetting the order of the marked points,
this latter
space dominates the space of such BN-curves passing through~$\Gamma$;
the space of such BN-curves passing through~$\Gamma$ is thus irreducible.
\end{proof}
\begin{lm} \label{gen-tang-rat}
Let $\Gamma \subset \pp^r$ be a general set of $n \leq r + 2$ points, and
$\{\ell_p : p \in \Gamma\}$ be a set of lines with $p \in \ell_p$.
Then the general marked rational normal curve
passing through $\Gamma$ has tangent lines at each point $p \in \Gamma$ distinct from $\ell_p$.
\end{lm}
\begin{proof}
Since the intersection of dense opens is a dense open, it suffices to show
the general marked rational normal curve $(f \colon C \to \pp^r, \Delta)$ passing through $\Gamma$
has tangent line at $p$ distinct from $\ell_p$
for any one $p \in \Gamma$.
For this we consider the map, from the space of such marked rational normal curves, to the space
of lines through $p$, which associates to the curve its tangent line at $p$.
Basic deformation theory implies this map is smooth (and thus nonconstant) at $(f, \Delta)$
so long as $H^1(N_f(-\Delta)(-q)) = 0$, where $q \in \Delta$ is the point sent to $p$ under $f$,
which follows from combining Propositions~\ref{inter} and~\ref{twist}.
\end{proof}
\begin{lm} \label{contains-rat} A general BN-curve $f \colon C \to \pp^r$ can be specialized to an unramified map from a
reducible curve $f^\circ \colon X \cup_\Gamma Y \to \pp^r$,
where $f^\circ|_X$ is a rational normal curve.
\end{lm}
\begin{proof}
Write $d$ and $g$ for the degree and genus of $f$.
We first note it suffices to produce a marked WBN-curve $(f^\circ_2 \colon Y \to \pp^r, \Gamma_2)$ of degree $d - r$
and genus $g' \geq g - r - 1$, passing through a set
$\Gamma$ of $g + 1 - g'$ general points.
Indeed, $g + 1 - g' \leq g + 1 - (g - r - 1) = r + 2$ by assumption;
by Lemma~\ref{gen-tang-rat}, there is a marked rational normal curve $(f^\circ_1 \colon X \to \pp^r, \Gamma_1)$ passing through $\Gamma$,
whose tangent lines at $\Gamma$ are distinct from the tangent lines of $(f_2^\circ, \Gamma_2)$ at~$\Gamma$.
Then $f^\circ \colon X \cup_\Gamma Y \to \pp^r$ is unramified (as promised by our conventions)
and gives the required specialization by
Proposition~\ref{prop:glue}.
It remains to construct $(f_2^\circ \colon Y \to \pp^r, \Gamma_2)$. If $g \leq r$, then we note that since
$d$ and $g$ are integers,
\[d \geq d - \frac{\rho(d, g, r)}{r + 1} = g + r - \frac{g}{r + 1} \imp d \geq g + r \quad \Leftrightarrow \quad g + 1 \leq (d - r) + 1.\]
Consequently, by inspection,
there is a marked rational curve $(f_2^\circ \colon Y \to \pp^r, \Gamma_2)$ of degree $d - r$ passing through a set $\Gamma$ of $g + 1$ general points.
On the other hand, if $g \geq r + 1$, then we
note that
\[\rho(d - r, g - r - 1, r) = (r + 1)(d - r) - r(g - r - 1) - r(r + 1) = (r + 1)d - rg - r(r + 1) = \rho(d, g, r) \geq 0.\]
We may therefore let $(f_2^\circ \colon Y \to \pp^r, \Gamma_2)$ be a marked BN-curve of degree $d - r$ and genus $g - r - 1$
passing through a set $\Gamma$ of $r + 2$ general points.
\end{proof}
\begin{lm} \label{gen-tang}
Let $\Gamma \subset \pp^r$ be a general set of $n \leq r + 2$ points,
$\{\ell_p : p \in \Gamma\}$ be a set of lines with $p \in \ell_p$,
and $d$ and $g$ be integers with $\rho(d, g, r) \geq 0$.
Then the general marked BN-curve $(f \colon C \to \pp^r, \Delta)$ of degree $d$ and genus $g$
passing through $\Gamma$
has tangent lines at every $p \in \Gamma$ which are distinct from $\ell_p$.
\end{lm}
\begin{proof}
By Lemma~\ref{contains-rat}, we may specialize $f \colon C \to \pp^r$
to $f^\circ \colon X \cup_\Gamma Y \to \pp^r$ where $f^\circ|_X$ is a rational
normal curve. Specializing the marked points $\Delta$ to lie on $X$
(which can be done since a marked rational normal curve can pass through $n \leq r + 2$ general points
by Proposition~\ref{inter}),
it suffices to consider the case when $f$ is a rational
normal curve.
But this case was already considered in Lemma~\ref{gen-tang-rat}.
\end{proof}
\begin{lm} \label{contains-rat-sp}
Lemma~\ref{contains-rat} remains true
even if we instead ask
$f^\circ|_X$ to be an arbitrary nondegenerate specialization
of a rational normal curve.
\end{lm}
\begin{proof}
We employ the construction used in the proof of Lemma~\ref{contains-rat},
but flipping the order in which we construct $X$ and $Y$:
First we fix $(f_1^\circ \colon X \to \pp^r, \Gamma_1)$; then we construct $(f_2^\circ \colon Y \to \pp^r, \Gamma_2)$
passing through $\Gamma$,
whose tangent lines at
$\Gamma$ are distinct from the tangent lines of $(f_1^\circ, \Gamma_1)$ at $\Gamma$
thanks to Lemma~\ref{gen-tang}.
\end{proof}
\section{Inductive Arguments \label{sec:indarg}}
Let $f \colon C \cup_u L \to \pp^r$ be an unramified map from a reducible curve,
with $L \simeq \pp^1$, and $u$ a single point,
and $f|_L$ of degree~$1$.
By Proposition~\ref{prop:glue}, these
curves are BN-curves.
\begin{lm} \label{p4-add-line} If $H^1(N_{f|_C}(-1)) = 0$,
then $H^1(N_f(-1)) = 0$.
\end{lm}
\begin{proof}
This is immediate from Lemma~\ref{glue} (taking $D = L$).
\end{proof}
\begin{lm} \label{p3-add-line} If $H^1(N_{f|_C}(-2)) = 0$,
and $f$ is a general map of the above type extending $f|_C$, then $H^1(N_f(-2)) = 0$.
\end{lm}
\begin{proof}
By Lemma~\ref{addone-raw}, it suffices to prove that
for $(u, v) \in C \times \pp^3$ general,
\[H^1(N_{f|_C}(-2)(u)[2u \to v]) = 0.\]
Since $H^1(N_{f|_C}(-2)) = 0$, we also have $H^1(N_{f|_C}(-2)(u)) = 0$;
in particular, Riemann-Roch implies
\begin{align*}
\dim H^0(N_{f|_C}(-2)(u)) &= \chi(N_{f|_C}(-2)(u)) = 2 \\
\dim H^0(N_{f|_C}(-2)) &= \chi(N_{f|_C}(-2)) = 0.
\end{align*}
The above dimension
estimates imply there is a unique section $s \in \pp H^0(N_{f|_C}(-2)(u))$
with $s|_u \in N_{f|_C \to v}|_u$; it remains to show that for $(u, v)$
general, $\langle s|_{2u} \rangle \neq N_{f|_C \to v}|_{2u}$.
For this, it suffices to verify that if $v_1$ and $v_2$
are points with $\{v_1, v_2, f(2u)\}$ coplanar --- but
neither $\{v_1, v_2, f(u)\}$, nor $\{v_1, f(2u)\}$, nor $\{v_2, f(2u)\}$
collinear; and $\{v_1, v_2, f(3u)\}$ not coplanar --- then
$N_{f|_C \to v_1}|_{2u} \neq N_{f|_C \to v_2}|_{2u}$.
To show this, we choose a local coordinate $t$ on $C$,
and coordinates on an appropriate affine open $\aa^3 \subset \pp^3$, so that:
\begin{align*}
f(t) &= (t, t^2 + O(t^3), O(t^3)) \\
v_1 &= (1 , 0 , 1) \\
v_2 &= (-1 , 0 , 1).
\end{align*}
It remains to check that the vectors
$f(t) - v_1$, $f(t) - v_2$, and $\frac{d}{dt} f(t)$
are linearly independent at first order in $t$. That is,
we want to check that the determinant
\[\left|\begin{array}{ccc}
t - 1 & t^2 + O(t^3) & O(t^3) - 1 \\
t + 1 & t^2 + O(t^3) & O(t^3) - 1 \\
1 & 2t + O(t^2) & O(t^2)
\end{array}\right|\not\equiv 0 \mod t^2.\]
Or, reducing the entries of the left-hand side modulo $t^2$, that
\[-4t = \left|\begin{array}{ccc}
t - 1 & 0 & - 1 \\
t + 1 & 0 & - 1 \\
1 & 2t & 0
\end{array}\right|\not\equiv 0 \mod t^2,\]
which is clear.
\end{proof}
\begin{lm} \label{add-can-3}
Let $\Gamma \subset \pp^3$ be a set of $5$ general points,
$(f_1 \colon C \to \pp^3, \Gamma_1)$ be a general marked BN-curve
passing through $\Gamma$, and
$(f_2 \colon D \to \pp^3, \Gamma_2)$
be a general marked canonical curve
passing through $\Gamma$.
If $H^1(N_{f_1}(-2)) = 0$,
then $f \colon C \cup_\Gamma D \to \pp^3$ satisfies $H^1(N_f(-2)) = 0$.
\end{lm}
\begin{rem}
By Lemma~\ref{small-irred}, it makes sense to speak of a
``general marked BN-curve (respectively general marked canonical curve)
passing through $\Gamma$'';
by Lemma~\ref{gen-tang}, the resulting curve $f$ is unramified.
\end{rem}
\begin{proof}
By Lemma~\ref{glue}, our problem reduces
to showing that the natural map
\[H^0(N_{f_2} (-2)) \to \bigoplus_{p \in \Gamma} \left(\frac{T_p (\pp^r)}{f_* (T_p (C \cup_\Gamma D))}\right)\]
is surjective, and that
\[H^1(N_f|_D (-2)) = 0.\]
These conditions both being open, we may invoke
Lemma~\ref{contains-rat} to specialize
$(f_1 \colon C \to \pp^3, \Gamma_1)$ to a marked BN-curve with reducible source
$(f_1^\circ \colon C_1 \cup_\Delta C_2 \to \pp^3, \Gamma_1^\circ)$,
with $f_1^\circ|_{C_1}$ a rational
normal curve and $\Gamma_1^\circ \subset C_1$.
It thus suffices to prove the above statements in the case when $f_1 = f_1^\circ$
is a rational normal curve.
For this, we first observe that $f(C) \cap f(D) = \Gamma$:
Since there is a unique rational normal curve through any $6$ points,
and a $1$-dimensional family of possible sixth points on $D$
once $D$ and $\Gamma$ are fixed --- but there is a $2$-dimensional family
of rational normal curves through $5$ points
in linear general position ---
dimension counting shows $f_1(C)$ and $f_2(D)$ cannot meet at a sixth point
for $([f_1, \Gamma_1], [f_2, \Gamma_2])$ general.
In particular, $f$ is an immersion.
Next, we observe that $f(D)$ is contained in a $5$-dimensional space
of cubics. Since it is one linear condition for a cubic that vanishes on $f(D)$
to be tangent to $f(C)$ at a point of $\Gamma$, there is necessarily a cubic
surface $S$ containing $f(D)$ which is tangent to $f(C)$ at four points of $\Gamma$.
Write $Q$ for the unique quadric containing the canonical curve $f(D)$.
If $S$ were a multiple of $Q$, say $Q \cdot H$ where $H$ is a hyperplane, then
since $f(C)$ is transverse to $Q$, it would follow that $H$ contains four points of $\Gamma$.
But any $4$ points on $f(C)$ are in linear general position. Consequently, $S$ is not
a multiple of $Q$. Or equivalently, $f(D) = Q \cap S$ gives a presentation of $f(D)$
as a complete intersection.
If $S$ were tangent to $f(C)$ at all five points of $\Gamma$, then restricting the
equation of $S$ to $f(C)$ would give a section of $\oo_C(3) \simeq \oo_{\pp^1}(9)$
which vanished with multiplicity two at five points. Since the only such section
is the zero section, we would conclude that $f(C) \subset S$.
But then $f(C)$ would meet $f(D)$ at all $6$ points of $f(C) \cap Q$,
which we already ruled out above.
Thus, $S$ is tangent to $f(C)$ at precisely four points of $\Gamma$.
Write $\Delta$ for the divisor on $D$ defined by these four points,
and $p$ for the fifth point. Note that for $q \neq p$ in the tangent line to $(f_1, \Delta \cup \{p\})$
at $p$,
\begin{align*}
N_f|_D &\simeq \big(N_{f(D)/S}(\Delta + p) \oplus N_{f(D)/Q}(p)\big)[p \to q] \\
&\simeq \big(\oo_D(2)(\Delta + p) \oplus \oo_D(3)(p)\big)[p \to q] \\
\Rightarrow \ N_f|_D(-2) &\simeq \big(\oo_D(\Delta + p) \oplus \oo_D(1)(p)\big)[p \to q] \\
&\simeq \big(\oo_D(\Delta + p) \oplus K_D(p)\big)[p \to q].
\end{align*}
By Riemann-Roch, $\dim H^0(K_D(p)) = 4 = \dim H^0(K_D)$; so every section
of $K_D(p)$ vanishes at $p$. Consequently,
the fiber of every section of $\oo_D(\Delta + p) \oplus K_D(p)$
at $p$ lies in the fiber of the first factor. Since the fiber $N_{f_2 \to q}|_p$
does not lie in the fiber of the first factor, we have an isomorphism
\[H^0(N_f|_D(-2)) \simeq H^0\Big(\big(\oo_D(\Delta + p) \oplus K_D(p)\big)(-p)\Big) \simeq H^0(\oo_D(\Delta)) \oplus H^0(K_D).\]
Consequently,
\[\dim H^0(N_f|_D(-2)) = \dim H^0(\oo_D(\Delta)) + \dim H^0(K_D) = 1 + 4 = 5 = \chi(N_f|_D(-2)),\]
which implies
\[H^1(N_f|_D(-2)) = 0.\]
\noindent
Next, we prove the surjectivity of the evaluation map
\[\text{ev} \colon H^0(N_{f_2}(-2)) \to \bigoplus_{x \in \Gamma} \left(\frac{T_x (\pp^r)}{f_* (T_x (C \cup_\Gamma D))}\right)\]
For this, we use the isomorphism
\[N_{f_2}(-2) \simeq N_{f(D)/\pp^3}(-2) \simeq N_{f(D)/S}(-2) \oplus N_{f(D)/Q}(-2) \simeq \oo_D \oplus K_D.\]
The restriction of $\text{ev}$ to $H^0(N_{f(D)/S}(-2) \simeq \oo_D)$
maps trivially into the quotient $\frac{T_x (\pp^r)}{f_*(T_x (C \cup_\Gamma D))}$
for $x \in \Delta$, since $S$ is tangent to $f(C)$ along $\Delta$.
Because $S$ is not tangent to $f(C)$ at $p$,
the restriction of $\text{ev}$ to $H^0(N_{f(D)/S}(-2) \simeq \oo_D)$ thus
maps isomorphically onto the factor $\frac{T_p (\pp^r)}{f_*(T_p (C \cup_\Gamma D))}$.
It is therefore sufficient to show that the evaluation map
\[H^0(N_{f(D)/Q}(-2) \simeq K_D) \to \bigoplus_{x \in \Delta} \left(\frac{T_x (\pp^r)}{f_*(T_x (C \cup_\Gamma D))}\right)\]
is surjective. Or equivalently, since $Q$ is not tangent to $f(C)$ at any $x \in \Delta$,
that the evaluation map
\[H^0(K_D) \to K_D|_\Delta\]
is surjective. But this is clear since $\dim H^0(K_D) = 4 = \# \Delta$
and $\Delta$ is a general effective divisor of degree~$4$ on $D$.
\end{proof}
\begin{lm} \label{to-3-skew}
Let $f \colon C \to \pp^4$ be a general BN-curve in $\pp^4$, of arbitrary degree and genus.
Then we can specialize $f$ to an unramified map from a reducible curve
$f^\circ \colon C' \cup L_1 \cup L_2 \cup L_3 \to \pp^4$,
so that each $L_i$ is rational, $f^\circ|_{L_i}$ is of degree~$1$,
and the images of the $L_i$ under $f^\circ$ are in linear general position.
\end{lm}
\begin{proof}
By Lemma~\ref{contains-rat-sp},
our problem reduces to the case where $f \colon C \to \pp^4$ is a rational normal curve.
In this case, we begin by taking three general lines in $\pp^4$.
The locus of lines meeting
each of our lines has class $\sigma_2$ in the Chow ring of
the Grassmannian $\mathbb{G}(1, 4)$ of lines in $\pp^4$.
By the standard calculus of Schubert cycles,
we have $\sigma_2^3 = \sigma_{3,3} \neq 0$
in the Chow ring of $\mathbb{G}(1, 4)$.
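Explicitly, Pieri's formula (with partitions truncated to the $2 \times 3$ box of $\mathbb{G}(1, 4)$) gives
\[\sigma_2 \cdot \sigma_2 = \sigma_{3,1} + \sigma_{2,2}, \qquad \sigma_2 \cdot \sigma_{3,1} = \sigma_{3,3}, \qquad \sigma_2 \cdot \sigma_{2,2} = 0,\]
so $\sigma_2^3 = \sigma_{3,3}$ is the class of a point; in fact there is exactly one such line.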
Thus, there exists a line meeting each of our three given lines.
The (immersion of the)
union of these four lines is then a specialization of a rational
normal curve.
\end{proof}
\begin{lm} \label{add-can-4}
Let $\Gamma \subset \pp^4$ be a set of $6$ points in linear general position;
$(f_1 \colon C \to \pp^4, \Gamma_1)$ be either a general marked
immersion of three disjoint lines,
or a general marked BN-curve in $\pp^4$, passing through $\Gamma$;
and $(f_2 \colon D \to \pp^4, \Gamma_2)$ be a general marked canonical curve
passing through~$\Gamma$.
If $H^1(N_{f_1}(-1)) = 0$, then
$f \colon C \cup_\Gamma D \to \pp^4$
satisfies $H^1(N_f(-1)) = 0$.
\end{lm}
\begin{proof}
By Lemma~\ref{glue}, it suffices to prove that the natural map
\[H^0(N_{f_2}(-1)) \to \bigoplus_{p \in \Gamma} \left(\frac{T_p(\pp^r)}{f_*(T_p(C \cup_\Gamma D))}\right)\]
is surjective, and that
\[H^1(N_f|_D(-1)) = 0.\]
These conditions both being open,
we may apply Lemma~\ref{to-3-skew}
to specialize $(f_1, \Gamma_1)$ to a marked curve
with reducible source
$(f_1^\circ \colon C_1 \cup C_2 \to \pp^r, \Gamma_1^\circ)$,
with $C_1 = L_1 \cup L_2 \cup L_3$ a union of $3$ disjoint lines,
and $\Gamma_1^\circ \subset C_1$ with $2$ points on each line.
It thus suffices to prove the above statements in the case when $C = C_1 = L_1 \cup L_2 \cup L_3$
is the union of $3$ general lines.
Write $\Gamma = \Gamma_1 \cup \Gamma_2 \cup \Gamma_3$, where $\Gamma_i \subset L_i$.
It is well known that every canonical curve in $\pp^4$ is the complete intersection of three quadrics;
write $V$ for the vector space of quadrics vanishing along $f(D)$.
For any $2$-secant line $L$ to $f(D)$, it is evident that it is one linear condition
on quadrics in $V$ to contain $L$; and moreover, that general lines impose independent
conditions unless there is a quadric which contains all $2$-secant lines.
Now the projection from a general line in $\pp^4$ of $f(D)$ yields a nodal plane curve
of degree $8$ and geometric genus $5$, which in particular must have
\[\binom{8 - 1}{2} - 5 = 16\]
nodes.
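(Here we use that an irreducible nodal plane curve of degree $e$ and geometric genus $h$ has exactly $\binom{e - 1}{2} - h$ nodes; in our case $e = 8$ and $h = 5$.)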
Consequently, the secant variety to $f(D)$
is a hypersurface of degree $16$; and is thus not contained in a quadric.
Thus, vanishing on general lines imposes independent
conditions on~$V$. As $f(L_1)$, $f(L_2)$, and $f(L_3)$ are general,
we may thus choose a basis $V = \langle Q_1, Q_2, Q_3 \rangle$
so that $Q_i$ contains $L_j$ if and only if $i \neq j$
(where the $Q_i$ are uniquely defined up to scaling).
By construction, $f(D)$ is the complete intersection $Q_1 \cap Q_2 \cap Q_3$.
We now consider the direct sum decomposition
\[N_{f_2} \simeq N_{f(D)/\pp^4} \simeq N_{f(D)/(Q_1 \cap Q_2)} \oplus N_{f(D)/(Q_2 \cap Q_3)} \oplus N_{f(D)/(Q_3 \cap Q_1)},\]
which induces a direct sum decomposition
\[N_f|_D \simeq N_{f(D)/(Q_1 \cap Q_2)}(\Gamma_3) \oplus N_{f(D)/(Q_2 \cap Q_3)}(\Gamma_1) \oplus N_{f(D)/(Q_3 \cap Q_1)}(\Gamma_2).\]
To show that $H^1(N_f|_D(-1)) = 0$, it is sufficient
by symmetry to show that
\[H^1(N_{f(D)/(Q_1 \cap Q_2)}(\Gamma_3)(-1)) = 0.\]
But we have
\[N_{f(D)/(Q_1 \cap Q_2)}(\Gamma_3)(-1) \simeq \oo_D(2)(\Gamma_3)(-1) \simeq \oo_D(1)(\Gamma_3) = K_D(\Gamma_3);\]
so by Serre duality,
\[H^1(N_{f(D)/(Q_1 \cap Q_2)}(\Gamma_3)(-1)) \simeq H^0(\oo_D(-\Gamma_3))^\vee = 0.\]
\noindent
Next, we examine the evaluation map
\[H^0(N_{f_2}(-1)) \to \bigoplus_{p \in \Gamma} \left(\frac{T_p(\pp^r)}{f_*(T_p(C \cup_\Gamma D))}\right).\]
For this, we use the direct sum decomposition
\[N_{f_2}(-1) \simeq N_{f(D)/\pp^4}(-1) \simeq N_{f(D)/(Q_1 \cap Q_2)}(-1) \oplus N_{f(D)/(Q_2 \cap Q_3)}(-1) \oplus N_{f(D)/(Q_3 \cap Q_1)}(-1),\]
together with the decomposition (for $p \in \Gamma_i$):
\[\frac{T_p (\pp^r)}{f_*(T_p(C \cup_{\Gamma_i} L_i))} \simeq \bigoplus_{j \neq i} N_{f(D)/(Q_i \cap Q_j)}|_p.\]
This reduces our problem to showing (by symmetry) the surjectivity of
\[H^0(N_{f(D)/(Q_1 \cap Q_2)}(-1)) \to \bigoplus_{p \in \Gamma_1 \cup \Gamma_2} N_{f(D)/(Q_1 \cap Q_2)}|_p.\]
But for this, it is sufficient to note that
$\Gamma_1 \cup \Gamma_2$ is a general collection of $4$ points
on $D$, and
\[N_{f(D)/(Q_1 \cap Q_2)}(-1) \simeq \oo_D(2)(-1) = \oo_D(1) \simeq K_D.\]
It thus remains to show
\[H^0(K_D) \to K_D|_{\Gamma_1 \cup \Gamma_2}\]
is surjective, where $\Gamma_1 \cup \Gamma_2$ is a general collection of $4$ points
on $D$. But this is clear because $K_D$ is a line bundle
and $\dim H^0(K_D) = 5 \geq 4$.
\end{proof}
\begin{cor} \label{finite} To prove the main theorems (excluding the ``conversely\ldots'' part),
it suffices to verify them in the following special cases:
\begin{enumerate}
\item For Theorem~\ref{main-3}, it suffices to consider the cases where $(d, g)$ is one of:
\begin{gather*}
(5, 1), \quad (7, 2), \quad (6, 3), \quad (7, 4), \quad (8, 5), \quad (9, 6), \quad (9, 7), \\
(10, 9), \quad (11, 10), \quad (12, 12), \quad (13, 13), \quad (14, 14).
\end{gather*}
\item For Theorem~\ref{main-3-1}, it suffices to consider the cases where $(d, g)$ is one of:
\[(7, 5), \quad (8, 6).\]
\item For Theorem~\ref{main-4}, it suffices to consider the cases where $(d, g)$ is one of:
\[(9, 5), \quad (10, 6), \quad (11, 7), \quad (12, 9), \quad (16, 15), \quad (17, 16), \quad (18, 17).\]
\end{enumerate}
In proving the theorems in each of these cases, we may suppose the
corresponding theorem holds for curves of smaller genus.
\end{cor}
\begin{proof}
For Theorem~\ref{main-3}, note that by Lemma~\ref{p3-add-line}
and Proposition~\ref{prop:glue}, it suffices to show Theorem~\ref{main-3} for
each pair $(d, g)$, where $d$ is minimal (i.e.,\ where $\rho(d, g) = \rho(d, g, r = 3) \geq 0$
and $(d, g)$ is not in our list of counterexamples; but either $\rho(d - 1, g) < 0$,
or $(d - 1, g)$ is in our list of counterexamples).
If $\rho(d, g) \geq 0$ and $g \geq 15$, then $(d - 6, g - 8)$ is not in our list of counterexamples,
and $\rho(d - 6, g - 8) = \rho(d, g) \geq 0$. By induction, we know $H^1(N_f(-2)) = 0$
for $f$ a general BN-curve of degree $d - 6$ and genus $g - 8$.
Applying Lemma~\ref{add-can-3} (and Proposition~\ref{prop:glue}), we conclude the desired result.
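(For the degree and genus bookkeeping: attaching a canonical curve of degree $6$ and genus $4$ along $n = 5$ points increases the degree by $6$ and the arithmetic genus by $4 + 5 - 1 = 8$, so this step passes from $(d - 6, g - 8)$ to $(d, g)$.)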
If $\rho(d, g) \geq 0$ and $g \leq 14$, and $d$ is minimal as above,
then either $(d, g)$ is in our above list, or
$(d, g) \in \{(3, 0), (9, 8), (12, 11)\}$. The case of $(d, g) = (3, 0)$ follows from
Lemma~\ref{from-inter}.
But in these last two cases,
Lemma~\ref{add-can-3} again implies the desired result (using Theorem~\ref{main-3}
for $(d', g') = (d - 6, g - 8)$ as our inductive hypotheses).
For Theorem~\ref{main-3-1}, we note that if $H^1(N_f(-2)) = 0$, then
it follows that $H^1(N_f(-1)) = 0$. It therefore suffices to check
the list of counterexamples appearing in Theorem~\ref{main-3}
besides the counterexample $(d, g) = (6, 4)$ listed in Theorem~\ref{main-3-1}.
The cases $(d, g) \in \{(4, 1), (5, 2), (6, 2)\}$ follow from Lemma~\ref{from-inter},
so we only have to consider the remaining cases (which form the given list).
Finally, for Theorem~\ref{main-4}, Lemma~\ref{p4-add-line} implies it suffices
to show Theorem~\ref{main-4} for each pair $(d, g)$ with $d$ minimal.
If $\rho(d, g) \geq 0$ and $g \geq 18$, then $(d - 8, g - 10)$ is not in our list of counterexamples,
and $\rho(d - 8, g - 10) = \rho(d, g) \geq 0$. By induction, we know $H^1(N_f(-1)) = 0$
for $f$ a general BN-curve of degree $d - 8$ and genus $g - 10$.
Applying Lemma~\ref{add-can-4}, we conclude the desired result.
If $\rho(d, g) \geq 0$ and $g \leq 17$, and $d$ is minimal as above,
then either $(d, g)$ is in our above list, or
\[(d, g) \in \{(4, 0), (5, 1), (6, 2), (7, 3), (8, 4)\},\]
or
\[(d, g) \in \{(11, 8), (12, 10), (13, 11), (14, 12), (15, 13), (16, 14)\}.\]
In the first set of cases above, Lemma~\ref{from-inter} implies the desired
result. But in the last set of cases,
Lemma~\ref{add-can-4} again implies the desired result. Here, for $(d, g) = (11, 8)$,
our inductive hypothesis is that
$H^1(N_f(-1)) = 0$ for $f \colon L_1 \cup L_2 \cup L_3 \to \pp^4$
an immersion of three skew lines.
In the remaining cases, we use Theorem~\ref{main-4}
for $(d', g') = (d - 8, g - 10)$ as our inductive hypothesis.
\end{proof}
\section{Adding Curves in a Hyperplane \label{sec:hir}}
In this section, we explain an inductive strategy involving adding
curves contained in hyperplanes, which will help resolve many of our
remaining cases.
\begin{lm} \label{smoothable} Let $H \subset \pp^r$ (for $r \geq 3$) be a hyperplane,
and let $(f_1 \colon C \to \pp^r, \Gamma_1)$ and
\mbox{$(f_2 \colon D \to H, \Gamma_2)$} be marked curves,
both passing through a set $\Gamma \subset H \subset \pp^r$ of $n \geq 1$ points.
Assume that $f_2$ is a general BN-curve of degree $d$ and genus $g$ to $H$,
that $\Gamma_2$ is a general collection of $n$ points on $D$, and that $f_1$ is transverse
to $H$ along $\Gamma$. If
\[H^1(N_{f_1}(-\Gamma)) = 0 \quad \text{and} \quad n \geq g - d + r,\]
then $f \colon C \cup_\Gamma D \to \pp^r$
satisfies $H^1(N_f) = 0$
and is a limit of unramified maps from smooth curves.
If in addition $f_1$ is an immersion,
$f(C) \cap f(D)$ is exactly equal to $\Gamma$, and
$\oo_D(1)(\Gamma)$ is very ample away from $\Gamma$ --- i.e.\ if
$\dim H^0(\oo_D(1)(\Gamma)(-\Delta)) = \dim H^0(\oo_D(1)(\Gamma)) - 2$
for any effective divisor $\Delta$ of degree $2$ supported on $D \smallsetminus \Gamma$ --- then
$f$ is a limit of immersions of smooth curves.
\end{lm}
\begin{rem} \label{very-ample-away}
The condition that $\oo_D(1)(\Gamma)$ is very ample away from $\Gamma$
is immediate when $\oo_D(1)$ is very ample (which in particular happens for $r \geq 4$).
It is also immediate when $n \geq g$, in which case $\oo_D(1)(\Gamma)$ is a general line bundle
of degree $d + n \geq g + r \geq g + 3$ and is thus very ample.
\end{rem}
\begin{proof}
Note that $N_{f_1}$ is a subsheaf of $N_f|_C$ with punctual quotient
(supported at $\Gamma$). Twisting down by $\Gamma$, we obtain a short exact sequence
\[0 \to N_{f_1}(-\Gamma) \to N_f|_C(-\Gamma) \to * \to 0,\]
where $*$ denotes a punctual sheaf, which in particular has vanishing $H^1$.
Since $H^1(N_{f_1}(-\Gamma)) = 0$ by assumption,
we conclude that $H^1(N_f|_C(-\Gamma)) = 0$ too.
Since $f_2$ is a general BN-curve, $H^1(N_{f_2}) = 0$.
The exact sequences
\begin{gather*}
0 \to N_f|_C(-\Gamma) \to N_f \to N_f|_D \to 0 \\
0 \to N_{f_2} \to N_f|_D \to N_H|_D(\Gamma) \simeq \oo_D(1)(\Gamma) \to 0
\end{gather*}
then imply that, to check $H^1(N_f) = 0$, it suffices to check $H^1(\oo_D(1)(\Gamma)) = 0$.
They moreover imply that
every section of $N_H|_D(\Gamma) \simeq \oo_D(1)(\Gamma)$ lifts to a section
of $N_f$, which, as $H^1(N_f) = 0$, lifts to a global deformation
of $f$.
To check $f$
is a limit of unramified maps from smooth curves, it remains to see that the
generic section of $N_H|_D(\Gamma) \simeq \oo_D(1)(\Gamma)$ corresponds
to a first-order deformation which smoothes the nodes $\Gamma$ --- or equivalently does not vanish at $\Gamma$.
Since by assumption $f_1$ is an immersion
and there are no other nodes where $f(C)$ and $f(D)$ meet besides $\Gamma$,
to see that $f$
is a limit of immersions of smooth curves, it remains to note in addition that
the generic section of $N_H|_D(\Gamma) \simeq \oo_D(1)(\Gamma)$
separates the points of $D$ identified under $f_2$ --- which is true by assumption that $\oo_D(1)(\Gamma)$ is very ample
away from $\Gamma$.
To finish the proof, it thus suffices to check $H^1(\oo_D(1)(\Gamma)) = 0$,
and that the generic section of $\oo_D(1)(\Gamma)$ does not vanish at any point $p \in \Gamma$.
Equivalently, it suffices to check $H^1(\oo_D(1)(\Gamma)(-p)) = 0$ for $p \in \Gamma$.
Since $f_2$ is a general BN-curve, we obtain
\[\dim H^1(\oo_D(1)) = \max(0, g - d + (r - 1)) \leq n - 1.\]
Twisting by $\Gamma \smallsetminus \{p\}$, which is a set of $n - 1$ general points, we therefore obtain
\[H^1(\oo_D(1)(\Gamma \smallsetminus \{p\})) = 0,\]
as desired.
\end{proof}
\begin{lm} \label{lm:hir}
Let $k \geq 1$ be an integer, $\iota \colon H \hookrightarrow \pp^r$ ($r \geq 3$) be a hyperplane,
and $(f_1 \colon C \to \pp^r, \Gamma_1)$ and
\mbox{$(f_2 \colon D \to H, \Gamma_2)$} be marked curves,
both passing through a set $\Gamma \subset H \subset \pp^r$ of $n \geq 1$ points.
Assume that $f_2$ is a general BN-curve of degree $d$ and genus $g$ to $H$,
that $\Gamma_2$ is a general collection of $n$ points on $D$, and that $f_1$ is transverse
to $H$ along $\Gamma$.
Suppose moreover that:
\begin{enumerate}
\item The bundle $N_{f_2}(-k)$ satisfies interpolation.
\item We have
$H^1(N_{f_1}(-k)) = 0$.
\item We have
\[(r - 2) n \leq rd - (r - 4)(g - 1) - k \cdot (r - 2) d.\]
\item We have
\[n \geq \begin{cases}
g & \text{if $k = 1$;} \\
g - 1 + (k - 1)d & \text{if $k > 1$.}
\end{cases}\]
\end{enumerate}
Then $f \colon C \cup_\Gamma D \to \pp^r$ satisfies
\[H^1(N_f(-k)) = 0.\]
\end{lm}
\begin{proof}
Since $N_{f_2}(-k)$ satisfies interpolation by assumption and
\[(r - 2) n \leq \chi(N_{f_2}(-k)) = rd - (r - 4)(g - 1) - k \cdot (r - 2) d,\]
we conclude that $H^1(N_{f_2}(-k)(-\Gamma)) = 0$.
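Here the Euler characteristic may be computed directly: since $f_2$ maps to $H \simeq \pp^{r-1}$, the bundle $N_{f_2}$ has rank $r - 2$ and determinant $f_2^* \oo_{\pp^{r-1}}(r) \otimes \omega_D$, so Riemann--Roch gives
\[\chi(N_{f_2}(-k)) = (rd + 2g - 2) - k(r - 2)d + (r - 2)(1 - g) = rd - (r - 4)(g - 1) - k \cdot (r - 2) d.\]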
Since $H^1(N_{f_1} (-k)) = 0$ by assumption,
to apply Lemma~\ref{hyp-glue} it remains to check
\[H^1(\oo_D(1 - k)(\Gamma)) = 0.\]
Since $\Gamma$ is a collection of $n$ general points, it is therefore sufficient that
\[n = \#\Gamma \geq \dim H^1(\oo_D(1 - k)) = \begin{cases}
g & \text{if $k = 1$;} \\
g - 1 + (k - 1)d & \text{if $k > 1$.}
\end{cases}\]
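(The two values of $\dim H^1(\oo_D(1 - k))$ follow from Riemann--Roch: for $k = 1$ the bundle $\oo_D(1 - k) \simeq \oo_D$ is trivial, so $h^1 = g$; for $k > 1$ it has negative degree $(1 - k)d$, so $h^0 = 0$ and $h^1 = g - 1 + (k - 1)d$.)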
But this is precisely our final assumption.
\end{proof}
\section{Curves of Large Genus \label{sec:hir-2}}
In this section, we will deal with a number of our special
cases of larger genus. Taking care of these cases separately
is helpful --- since in the remaining cases, we will not
have to worry about whether our curve is a BN-curve, thanks to
results of~\cite{iliev} and~\cite{keem} on the irreducibility
of the Hilbert scheme of curves.
\begin{lm} \label{bn3}
Let $H \subset \pp^3$ be a plane, $\Gamma \subset H \subset \pp^3$ a set of $6$ general points,
$(f_1 \colon C \to \pp^3, \Gamma_1)$ a general marked BN-curve passing through $\Gamma$
of degree and genus one of
\[(d, g) \in \{(6, 1), (7, 2), (8, 4), (9, 5), (10, 6)\},\]
and $(f_2 \colon D \to H, \Gamma_2)$
a general marked canonical curve
passing through $\Gamma$.
Then $f \colon C \cup_\Gamma D \to \pp^3$ is a BN-curve which satisfies $H^1(N_f) = 0$.
\end{lm}
\begin{proof}
Note that the conclusion is an open condition; we may therefore freely specialize $(f_1, \Gamma_1)$.
Write $\Gamma = \{s, t, u, v, w, x\}$.
In the case $(d, g) = (6, 1)$, we specialize $(f_1, \Gamma_1)$
to
$(f_1^\circ \colon C^\circ = C_1 \cup_p C_2 \cup_{\{q, r\}} C_3 \to \pp^3, \Gamma_1^\circ)$,
where $f_1^\circ|_{C_1}$ is a conic, $f_1^\circ|_{C_2}$ is a line with $C_2$ joined to $C_1$
at one point $p$, and $f_1^\circ|_{C_3}$ is a rational normal curve with $C_3$ joined to $C_1$ at two points $\{q, r\}$;
note that $f_1^\circ$ is a BN-curve by (iterative application of) Proposition~\ref{prop:glue}.
We suppose that $(f_1^\circ|_{C_1}, \Gamma_1^\circ \cap C_1)$ passes through $\{s, t\}$,
while $(f_1^\circ|_{C_2}, \Gamma_1^\circ \cap C_2)$ passes through $u$,
and $(f_1^\circ|_{C_3}, \Gamma_1^\circ \cap C_3)$ passes through $\{v, w, x\}$;
it is clear this can be done so $\{s, t, u, v, w, x\}$ are general.
Writing
\[f^\circ \colon C^\circ \cup_\Gamma D = C_2 \cup_{\{p, u\}} C_3 \cup_{\{q, r, v, w, x\}} (C_1 \cup_{\{s, t\}} D) \to \pp^3,\]
it suffices by Propositions~\ref{prop:glue} and~\ref{prop:interior} to
show that $f^\circ|_{C_1 \cup D}$ is a BN-curve which satisfies $H^1(N_{f^\circ|_{C_1 \cup D}}) = 0$.
For $(d, g) = (8, 4)$, we specialize $(f_1, \Gamma_1)$ to
$(f_1^\circ \colon C^\circ = C_1 \cup_{\{p, q, r\}} C_2 \cup_{\{y, z, a\}} C_3 \to \pp^3, \Gamma_1^\circ)$,
where $f_1^\circ|_{C_1}$ is a conic, and
$f_1^\circ|_{C_2}$ and $f_1^\circ|_{C_3}$ are rational normal curves,
with both $C_2$ and $C_3$ joined to $C_1$ at $3$ points (at $\{p, q, r\}$ and $\{y, z, a\}$ respectively);
note that $f_1^\circ$ is a BN-curve by (iterative application of) Proposition~\ref{prop:glue}.
We suppose that $(f_1^\circ|_{C_1}, \Gamma_1^\circ \cap C_1)$ passes through $\{s, t\}$,
while $(f_1^\circ|_{C_2}, \Gamma_1^\circ \cap C_2)$ passes through $\{u, v\}$,
and $(f_1^\circ|_{C_3}, \Gamma_1^\circ \cap C_3)$ passes through $\{w, x\}$;
it is clear this can be done so $\{s, t, u, v, w, x\}$ are general.
Writing
\[f^\circ \colon C^\circ \cup_\Gamma D = C_2 \cup_{\{p, q, r, u, v\}} C_3 \cup_{\{w, x, y, z, a\}} (C_1 \cup_{\{s, t\}} D) \to \pp^3,\]
it again suffices by Propositions~\ref{prop:glue} and~\ref{prop:interior} to
show that $f^\circ|_{C_1 \cup D}$ is a BN-curve which satisfies $H^1(N_{f^\circ|_{C_1 \cup D}}) = 0$.
For this, we first note that $f^\circ|_{C_1 \cup D}$ is a curve of degree $6$ and genus $4$,
and that the moduli space of smooth curves of degree $6$ and genus $4$ in $\pp^3$ is
irreducible (they are all canonical curves).
Moreover, by Lemma~\ref{smoothable} (c.f.\ Remark~\ref{very-ample-away} and note that
$\oo_D(1) \simeq K_D$ is very ample),
$f^\circ|_{C_1 \cup D}$ is a limit
of immersions of smooth curves, and satisfies
$H^1(N_{f^\circ|_{C_1 \cup D}}) = 0$; this completes the proof.
\end{proof}
\begin{lm} \label{bn4}
Let $H \subset \pp^4$ be a hyperplane, $\Gamma \subset H \subset \pp^4$ a set of $7$ general points,
\mbox{$(f_1 \colon C \to \pp^4, \Gamma_1)$} a general marked BN-curve passing through $\Gamma$
of degree and genus one of
\[(d, g) \in \{(7, 3), (8, 4), (9, 5)\},\]
and $(f_2 \colon D \to H, \Gamma_2)$
a general marked BN-curve of degree~$9$ and genus~$6$
passing through $\Gamma$.
Then $f \colon C \cup_\Gamma D \to \pp^4$ is a BN-curve which satisfies $H^1(N_f) = 0$.
\end{lm}
\begin{proof}
Again, we note that the conclusion is an open statement; we may therefore freely
specialize $(f_1, \Gamma_1)$. Write $\Gamma = \{t, u, v, w, x, y, z\}$.
First, we claim it suffices to consider the case $(d, g) = (7, 3)$.
Indeed, suppose $(f_1, \Gamma_1)$ is a marked BN-curve of degree $7$ and genus $3$ passing through $\Gamma$.
Then $f_1' \colon C \cup_{\{p, q\}} L \to \pp^4$ and $f_1'' \colon C \cup_{\{p, q\}} L \cup_{\{r, s\}} L' \to \pp^4$
(where $f_1'|_L$ and $f_1''|_L$ and $f_1''|_{L'}$ are lines with $L$ and $L'$ joined to $C$ at two points)
are BN-curves by Proposition~\ref{prop:glue}, of degree and genus
$(8, 4)$ and $(9, 5)$ respectively. If $f \colon C \cup_\Gamma D \to \pp^4$ is a BN-curve
with $H^1(N_f) = 0$, then invoking Propositions~\ref{prop:glue}
and~\ref{prop:interior}, both
\begin{gather*}
f' \colon (C \cup_{\{p, q\}} L) \cup_\Gamma D = (C \cup_\Gamma D) \cup_{\{p, q\}} L \to \pp^4 \\
\text{and} \quad f'' \colon (C \cup_{\{p, q\}} L \cup_{\{r, s\}} L') \cup_\Gamma D = (C \cup_\Gamma D) \cup_{\{p, q\}} L \cup_{\{r, s\}} L' \to \pp^4
\end{gather*}
are BN-curves, which satisfy
$H^1(N_{f'}) = H^1(N_{f''}) = 0$.
So it remains to consider the case $(d, g) = (7, 3)$.
In this case, we begin by specializing $(f_1, \Gamma_1)$ to
$(f_1^\circ \colon C^\circ = C' \cup_{\{p, q\}} L \to \pp^4, \Gamma_1^\circ)$,
where $f_1^\circ|_{C'}$ is a general BN-curve of degree $6$ and genus $2$, and $f_1^\circ|_L$ is a line with $L$
joined to $C'$ at two points $\{p, q\}$.
We suppose that $(f_1^\circ|_L, \Gamma_1^\circ \cap L)$ passes through $t$,
while $(f_1^\circ|_{C'}, \Gamma_1^\circ \cap C')$ passes through $\{u, v, w, x, y, z\}$;
we must check this can be done so $\{t, u, v, w, x, y, z\}$ are general.
To see this, it suffices to show
that the intersection $f_1^\circ(C') \cap H$ and the points $\{f_1^\circ(p), f_1^\circ(q)\}$
are independently general. In other words,
we are claiming that the map
\[\{(f_1^\circ|_{C'} \colon C' \to \pp^4, p, q) : p, q \in C'\} \mapsto (f_1^\circ|_{C'}(C') \cap H, f_1^\circ|_{C'}(p), f_1^\circ|_{C'}(q))\]
is dominant; equivalently, that it is smooth at a generic point $(f_1^\circ|_{C'}, p, q)$.
But the obstruction to smoothness lies in
$H^1(N_{f_1^\circ|_{C'}}(-1)(-p-q))$, which vanishes
because $N_{f_1^\circ|_{C'}}(-1)$ satisfies interpolation by Lemma~\ref{from-inter}.
We next specialize $(f_2, \Gamma_2)$ to $(f_2^\circ \colon D^\circ = D' \cup_\Delta D_1 \to H, \Gamma_2^\circ)$,
where $f_2^\circ|_{D'}$ is a general BN-curve of degree
$6$ and genus $3$, and $f_2^\circ|_{D_1}$ is a rational normal curve with $D_1$
joined to $D'$ at a set $\Delta$ of $4$ points;
note that $f_2^\circ$ is a BN-curve by Proposition~\ref{prop:glue}.
We suppose that $(f_2^\circ|_{D_1}, \Gamma_2^\circ \cap D_1)$ passes through $t$,
while $(f_2^\circ|_{D'}, \Gamma_2^\circ \cap D')$ passes through $\{u, v, w, x, y, z\}$;
this can be done so $\{t, u, v, w, x, y, z\}$ are still general,
since $f_2^\circ|_{D'}$ (marked at general points of the source) can pass through $6$ general points,
while $f_2^\circ|_{D_1}$ (again marked at general points of the source) can pass through $5$ general points,
both by Corollary~1.4 of~\cite{firstpaper}.
In addition, $(f_2^\circ|_{D_1}, (\hat{t} = \Gamma_2^\circ \cap D_1) \cup \Delta)$ has a general tangent line at $t$;
to see this, note that we are asserting that the map sending $(f_2^\circ|_{D_1}, \hat{t} \cup \Delta)$
to its tangent line at $t$ is dominant;
equivalently, that it is smooth at a generic point of the source.
But the obstruction to smoothness lies in
$H^1(N_{f_2^\circ|_{D_1}}(-\Delta - 2\hat{t} \, ))$, which vanishes because
$N_{f_2^\circ|_{D_1}}(-2\hat{t} \, )$ satisfies interpolation by combining
Propositions~\ref{inter} and~\ref{twist}.
As $\{p, q\} \subset C'$ is general, we thus know that the tangent lines
to $(f_2^\circ|_{D_1}, \hat{t} \cup \Delta)$ at $t$, and to $(f_1^\circ|_{C'}, \{p, q\})$ at $f_1^\circ(p)$ and $f_1^\circ(q)$,
together span all of $\pp^4$; write $\bar{t}$, $\bar{p}$, and $\bar{q}$ for points on each of these tangent lines
distinct from $t$, $f_1^\circ(p)$, and $f_1^\circ(q)$ respectively.
We then use the exact sequences
\begin{gather*}
0 \to N_{f^\circ}|_L(-\hat{t} - p - q) \to N_{f^\circ} \to N_{f^\circ}|_{C' \cup D^\circ} \to 0 \\
0 \to N_{f^\circ|_{C' \cup D^\circ}} \to N_{f^\circ}|_{C' \cup D^\circ} \to * \to 0,
\end{gather*}
where $*$ is a punctual sheaf (which in particular has vanishing $H^1$).
Write $H_t$ for the hyperplane spanned by $f_1^\circ(L)$, $\bar{p}$, and $\bar{q}$;
and $H_p$ for the hyperplane spanned by $f_1^\circ(L)$, $\bar{t}$, and $\bar{q}$;
and $H_q$ for the hyperplane spanned by $f_1^\circ(L)$, $\bar{t}$, and $\bar{p}$.
Then $f_1^\circ(L)$ is the complete intersection $H_t \cap H_p \cap H_q$, and so we get a decomposition
\[N_{f^\circ}|_L \simeq N_{f_1^\circ(L) / H_t}(\hat{t} \, ) \oplus N_{f_1^\circ(L) / H_p}(p) \oplus N_{f_1^\circ(L) / H_q}(q),\]
which upon twisting becomes
\[N_{f^\circ}|_L(-\hat{t} - p - q) \simeq N_{f_1^\circ(L) / H_t}(-p-q) \oplus N_{f_1^\circ(L) / H_p}(-\hat{t}-q) \oplus N_{f_1^\circ(L) / H_q}(-\hat{t} - p).\]
Note that $N_{f_1^\circ(L) / H_t}(-p-q) \simeq \oo_L(-1)$ has vanishing $H^1$, and similarly for the other factors;
consequently, $H^1(N_{f^\circ}|_L(-\hat{t} - p - q)) = 0$. We conclude that
$H^1(N_{f^\circ}) = 0$ provided that $H^1(N_{f^\circ|_{C' \cup D^\circ}}) = 0$.
Moreover,
writing $C' \cup_{\{u, v, w, x, y, z\}} D^\circ = D_1 \cup_\Delta (D' \cup_{\{u, v, w, x, y, z\}} C')$
and applying Proposition~\ref{prop:interior}, we know that $H^1(N_{f^\circ|_{C' \cup D^\circ}}) = 0$ provided that
$H^1(N_{f^\circ|_{C' \cup D'}}) = 0$.
And if $f^\circ|_{C' \cup D'}$ is a BN-curve,
then
$f^\circ \colon (C' \cup_{\{u, v, w, x, y, z\}} D') \cup_{\Delta \cup \{p, q\}} (D_1 \cup_t L) \to \pp^4$
is a BN-curve too by Proposition~\ref{prop:glue}.
Putting this all together, it is sufficient to show that
$f^\circ|_{C' \cup D'}$ is a BN-curve which satisfies $H^1(N_{f^\circ|_{C' \cup D'}}) = 0$.
Our next step is to specialize $(f_1^\circ|_{C'}, \Gamma_1^\circ \cap C')$ to
$(f_1^{\circ\circ} \colon C^{\circ\circ} = C'' \cup_{\{r, s\}} L' \to \pp^4, \Gamma_1^{\circ\circ})$,
where $f_1^{\circ\circ}|_{C''}$
is a general BN-curve of degree~$5$ and genus~$1$, and $f_1^{\circ\circ}|_{L'}$ is a line
with $L'$ joined to $C''$ at two points $\{r, s\}$.
We suppose that $(f_1^{\circ\circ}|_{L'}, \Gamma_1^{\circ\circ} \cap L')$ passes through $u$,
while $(f_1^{\circ\circ}|_{C''}, \Gamma_1^{\circ\circ} \cap C'')$ passes through $\{v, w, x, y, z\}$;
as before this can be done so $\{u, v, w, x, y, z\}$ are general.
We also specialize $(f_2^\circ|_{D'}, \Gamma_2^\circ \cap D')$ to
$(f_2^{\circ\circ} \colon D'' \cup_\Delta D_2 \to \pp^4, \Gamma_2^{\circ\circ})$,
where $f_2^{\circ\circ}|_{D''}$ and $f_2^{\circ\circ}|_{D_2}$ are both rational normal curves
with $D''$ and $D_2$ joined at a set $\Delta$ of $4$ general points.
We suppose that $(f_2^{\circ\circ}|_{D_2}, \Gamma_2^{\circ\circ} \cap D_2)$
passes through $u$,
while $(f_2^{\circ\circ}|_{D''}, \Gamma_2^{\circ\circ} \cap D'')$
passes through $\{v, w, x, y, z\}$;
as before this can be done so $\{u, v, w, x, y, z\}$ are general.
The same argument as above, mutatis mutandis, then implies it is sufficient to show that
$f^{\circ\circ}|_{C'' \cup D''} \colon C'' \cup_{\{v, w, x, y, z\}} D'' \to \pp^4$ is a BN-curve which satisfies
$H^1(N_{f^{\circ\circ}|_{C'' \cup D''}}) = 0$.
For this, we first note that $f^{\circ\circ}|_{C'' \cup D''}$ is a curve of degree $8$ and genus $5$,
and that the moduli space of smooth curves of degree $8$ and genus $5$ in $\pp^4$ is
irreducible (they are all canonical curves).
To finish the proof, it suffices to note by Lemma~\ref{smoothable} that
$f^{\circ\circ}|_{C'' \cup D''}$ is a limit of immersions of smooth curves and satisfies
$H^1(N_{f^{\circ\circ}|_{C'' \cup D''}}) = 0$.
\end{proof}
\begin{cor} \label{smooth-enough} To prove the main theorems (excluding the ``conversely\ldots'' part),
it suffices to show the existence of (nondegenerate immersions of) smooth curves, of the following degrees
and genera, which satisfy the conclusions:
\begin{enumerate}
\item For Theorem~\ref{main-3}, it suffices to show the existence of smooth curves, satisfying
the conclusions, where $(d, g)$ is one of:
\[(5, 1), \quad (7, 2), \quad (6, 3), \quad (7, 4), \quad (8, 5), \quad (9, 6), \quad (9, 7).\]
\item For Theorem~\ref{main-3-1}, it suffices to show the existence of smooth curves, satisfying
the conclusions, where $(d, g)$ is one of:
\[(7, 5), \quad (8, 6).\]
\item For Theorem~\ref{main-4}, it suffices to show the existence of smooth curves, satisfying
the conclusions, where $(d, g)$ is one of:
\[(9, 5), \quad (10, 6), \quad (11, 7), \quad (12, 9).\]
\end{enumerate}
(And in constructing the above smooth curves, we may suppose the
corresponding theorem holds for curves of smaller genus.)
\end{cor}
\begin{proof}
By Lemmas~\ref{bn3} and~\ref{lm:hir}, and Proposition~\ref{p2}, we know that Theorem~\ref{main-3}
holds for $(d, g)$ one of
\[(10, 9), \quad (11, 10), \quad (12, 12), \quad (13, 13), \quad (14, 14).\]
Similarly, by Lemmas~\ref{bn4}, \ref{lm:hir}, and~\ref{from-inter}, we know that Theorem~\ref{main-4}
holds for $(d, g)$ one of
\[(16, 15), \quad (17, 16), \quad (18, 17).\]
Eliminating these cases from the lists in Corollary~\ref{finite},
we obtain the given lists of pairs $(d, g)$.
Moreover --- in each of the cases appearing in the statement
of this corollary --- results of \cite{keem} (for $r = 3$) and \cite{iliev} (for $r = 4$)
state that the Hilbert scheme of curves of degree $d$ and genus $g$ in $\pp^r$
has a \emph{unique} component whose points represent smooth irreducible nondegenerate curves.
The condition that our curve be a BN-curve may thus be replaced
with the condition that our curve be smooth irreducible nondegenerate.
\end{proof}
\section{More Curves in a Hyperplane \label{sec:hir-3}}
In this section, we give several more applications
of the technique developed in the previous two sections. Note that from
Corollary~\ref{smooth-enough},
it suffices to show the existence of curves satisfying
the desired conclusions which are limits of immersions of smooth curves;
it is not necessary to check that these
curves are BN-curves.
\begin{lm} \label{lm:ind:3} Suppose $N_f(-2)$ satisfies interpolation, where $f \colon C \to \pp^3$ is a general BN-curve
of degree $d$ and genus $g$ to $\pp^3$. Then the same is true for some smooth curve of
degree and genus:
\begin{enumerate}
\item \label{33} $(d + 3, g + 3)$ (provided $d \geq 3$);
\item \label{42} $(d + 4, g + 2)$ (provided $d \geq 3$);
\item \label{46} $(d + 4, g + 6)$ (provided $d \geq 5$).
\end{enumerate}
\end{lm}
\begin{proof}
We apply Lemma~\ref{lm:hir} for $f_2$ a curve of degree up to $4$ (and note that
$N_{f_2}(-2)$ satisfies interpolation by Proposition~\ref{p2}), namely:
\begin{enumerate}
\item $(d_2, g_2) = (3, 1)$ and $n = 3$;
\item $(d_2, g_2) = (4, 0)$ and $n = 3$;
\item $(d_2, g_2) = (4, 2)$ and $n = 5$.
\end{enumerate}
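For concreteness, in case (1), with $r = 3$, $k = 2$, and $(d_2, g_2) = (3, 1)$, the numerical hypotheses of Lemma~\ref{lm:hir} read
\[(r - 2) n = 3 \leq r d_2 - (r - 4)(g_2 - 1) - k (r - 2) d_2 = 9 - 0 - 6 = 3 \quad \text{and} \quad n = 3 \geq g_2 - 1 + (k - 1) d_2 = 3,\]
while the glued curve has degree $d + d_2 = d + 3$ and genus $g + g_2 + n - 1 = g + 3$; the other two cases are checked in the same way.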
Finally, we note that $C \cup_\Gamma D \to \pp^r$ as above is a limit
of immersions of smooth curves by Lemma~\ref{smoothable}.
\end{proof}
\begin{cor} Suppose that Theorem~\ref{main-3} holds for $(d, g) = (5, 1)$. Then
Theorem~\ref{main-3} holds for $(d, g)$ one of:
\[(7, 2), \quad (6, 3), \quad (9, 6), \quad (9, 7).\]
\end{cor}
\begin{proof}
For $(d, g) = (7, 2)$, we apply Lemma~\ref{lm:ind:3}, part~\ref{42}
(taking as our inductive hypothesis the truth of Theorem~\ref{main-3} for $(d', g') = (3, 0)$).
Similarly, for $(d, g) = (6, 3)$ and $(d, g) = (9, 6)$, we apply
Lemma~\ref{lm:ind:3}, part~\ref{33}
(taking as our inductive hypothesis the truth of Theorem~\ref{main-3} for $(d', g') = (3, 0)$,
and the just-established $(d', g') = (6, 3)$, respectively).
Finally, for $(d, g) = (9, 7)$, we apply Lemma~\ref{lm:ind:3}, part~\ref{46}
(taking as our inductive hypothesis the yet-to-be-established truth of Theorem~\ref{main-3}
for $(d', g') = (5, 1)$).
\end{proof}
\begin{lm} Suppose that Theorem~\ref{main-3-1} holds for $(d, g) = (7, 5)$.
Then Theorem~\ref{main-3-1} holds for $(d, g) = (8, 6)$.
\end{lm}
\begin{proof}
We simply apply Lemma~\ref{glue} with $f\colon C \cup_\Gamma D \to \pp^3$
such that $f|_C$ is a general BN-curve of degree $7$ and genus $5$,
and $f|_D$ is a line, with $C$ joined to $D$ at a set $\Gamma$ of two points.
\end{proof}
\begin{lm} \label{lm:ind:4} Suppose $N_f(-1)$ satisfies interpolation, where $f$ is a general BN-curve
of degree $d$ and genus $g$ in $\pp^4$. Then the same is true for some smooth curve of
degree $d + 6$ and genus $g + 6$, provided $d \geq 4$.
\end{lm}
\begin{proof}
We apply Lemmas~\ref{lm:hir} and~\ref{smoothable}
for $f_2$ a curve of degree $6$ and genus $3$ to $\pp^3$,
with $n = 4$.
Note that
$N_{f_2}(-1)$ satisfies interpolation by Propositions~\ref{inter} and~\ref{twist}.
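Concretely, with $r = 4$, $k = 1$, and $(d_2, g_2) = (6, 3)$, the numerical hypotheses of Lemma~\ref{lm:hir} read $(r - 2) n = 8 \leq r d_2 - (r - 4)(g_2 - 1) - k (r - 2) d_2 = 24 - 12 = 12$ and $n = 4 \geq g_2 = 3$, while the glued curve has degree $d + 6$ and genus $g + g_2 + n - 1 = g + 6$.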
\end{proof}
\begin{lm} Theorem~\ref{main-4} holds for $(d, g)$ one of:
\[(10, 6), \quad (11, 7), \quad (12, 9).\]
\end{lm}
\begin{proof}
We simply apply Lemma~\ref{lm:ind:4}
(taking as our inductive hypothesis the truth of Theorem~\ref{main-4} for
$(d', g') = (d - 6, g - 6)$).
\end{proof}
To prove the main theorems (excluding the ``conversely\ldots'' part),
it thus remains to produce five smooth curves:
\begin{enumerate}
\item For Theorem~\ref{main-3}, it suffices to find smooth curves, satisfying
the conclusions, of degrees and genera $(5, 1)$, $(7, 4)$, and $(8, 5)$.
\item For Theorem~\ref{main-3-1}, it suffices to find a smooth curve, satisfying
the conclusions, of degree $7$ and genus $5$.
\item For Theorem~\ref{main-4}, it suffices to find a smooth curve, satisfying
the conclusions, of degree $9$ and genus $5$.
\end{enumerate}
\section{Curves in Del Pezzo Surfaces \label{sec:in-surfaces}}
In this section, we analyze the normal bundles of certain curves
by specializing to immersions $f \colon C \hookrightarrow \pp^r$
of smooth curves whose images are contained in Del Pezzo
surfaces $S \subset \pp^r$ (where the Del Pezzo surface is embedded by
its complete anticanonical series).
Since $f$ will be an immersion, we shall identify $C = f(C)$ with its image,
in which case the normal bundle $N_f$ becomes the normal bundle $N_C$ of the image.
Our basic method in this section will be to use the normal bundle exact
sequence associated to $C \subset S \subset \pp^r$:
\begin{equation} \label{nb-exact}
0 \to N_{C/S} \to N_C \to N_S|_C \to 0.
\end{equation}
Since $S$ is a Del Pezzo surface, we have by adjunction an isomorphism
\begin{equation} \label{ncs}
N_{C/S} \simeq K_C \otimes K_S^\vee \simeq K_C(1).
\end{equation}
\begin{defi} \label{pic-res}
Let $S \subset \pp^r$ be a Del Pezzo surface, $k$ be an integer with $H^1(N_S(-k)) = 0$,
and $\theta \in \pic S$ be any divisor class.
Let $F$ be a general hypersurface of degree $k$.
We consider the moduli space $\mathcal{M}$ of pairs $(S', \theta')$,
with $S'$ a Del Pezzo surface containing $S \cap F$, and $\theta' \in \pic S'$.
Define $V_{\theta, k} \subseteq \pic(S \cap F)$
to be the subvariety obtained by restricting $\theta'$ to $S \cap F \subseteq S'$,
as $(S', \theta')$ varies over the component of $\mathcal{M}$ containing $(S, \theta)$.
Note that there is a unique such component, since $\mathcal{M}$ is smooth at $[(S, \theta)]$
thanks to our assumption that $H^1(N_S(-k)) = 0$.
\end{defi}
Our essential tool is given by the following lemma,
which uses the above normal bundle sequence together with the varieties
$V_{\theta, k}$ to analyze $N_C$.
\begin{lm} \label{del-pezzo}
Let $C \subset S \subset \pp^r$ be a general curve (of any fixed class)
in a general Del Pezzo surface $S \subset \pp^r$,
and $k$ be a natural number with $H^1(N_S(-k)) = 0$. Suppose that (for $F$ a general hypersurface of degree $k$):
\[\dim V_{[C], k} = \dim H^0(\oo_C(k - 1)) \quad \text{and} \quad H^1(N_S|_C(-k)) = 0,\]
and that the natural map
\[H^0(N_S(-k)) \to H^0(N_S|_C(-k))\]
is an isomorphism.
Then,
\[H^1(N_{C}(-k)) = 0.\]
\end{lm}
\begin{proof}
Twisting our earlier normal bundle exact sequence \eqref{nb-exact},
and using the isomorphism \eqref{ncs}, we obtain the exact sequence:
\[0 \to K_C(1-k) \to N_C(-k) \to N_S|_C(-k) \to 0.\]
This gives rise to a long exact sequence in cohomology:
\[\cdots \to H^0(N_C(-k)) \to H^0(N_S|_C(-k)) \to H^1(K_C(1 - k)) \to H^1(N_C(-k)) \to H^1(N_S|_C(-k)) \to \cdots.\]
Since $H^1(N_S|_C(-k)) = 0$ by assumption,
it suffices to show that the image of the natural map
$H^0(N_C(-k)) \to H^0(N_S|_C(-k))$ has codimension
\[\dim H^1(K_C(1 - k)) = \dim H^0(\oo_C(k - 1)) = \dim V_{[C], k}.\]
Because the natural map $H^0(N_S(-k)) \to H^0(N_S|_C(-k))$
is an isomorphism, we may interpret sections
of $N_S|_C(-k)$ as first-order deformations of the Del Pezzo surface $S$
fixing $S \cap F$.
So it remains to show that the space of such deformations
coming from a deformation of $C$ fixing $C \cap F$ has codimension
$\dim V_{[C], k}$.
The key point here is that deforming
$C$ on $S$ does not change its class $[C] \in \pic(S)$,
and every deformation of $S$
comes naturally with a deformation of the element $[C] \in \pic(S)$.
It thus suffices to prove that
the space of first-order deformations of $S$ which leave invariant
the restriction $[C]|_{S \cap F} \in \pic(S \cap F)$
has codimension $\dim V_{[C], k}$.
But since the map $\mathcal{M} \to V_{[C], k}$
is smooth at $(S, [C])$, the vertical tangent space has codimension
in the full tangent space
equal to the dimension of the image.
\end{proof}
In applying Lemma~\ref{del-pezzo},
we will first consider the case where $S \subset \pp^3$ is a general cubic surface,
which is isomorphic to the blowup $\bl_\Gamma \pp^2$ of $\pp^2$ along a set
\[\Gamma = \{p_1, \ldots, p_6\} \subset \pp^2\]
of six general points. Recall that this is a Del Pezzo surface,
which is to say that the embedding $\bl_\Gamma \pp^2 \simeq S \hookrightarrow \pp^3$
as a cubic surface is via the complete linear
system for the inverse of the canonical bundle:
\[-K_{\bl_\Gamma \pp^2} = 3L - E_1 - \cdots - E_6,\]
where $L$ is the class of a line in $\pp^2$ and $E_i$ is the exceptional divisor
in the blowup over $p_i$. Note that by construction,
\[N_S \simeq \oo_S(3).\]
In particular, $H^1(N_S(-1)) = H^1(N_S(-2)) = 0$ by Kodaira vanishing.
\begin{lm} \label{cubclass} Let $C \subset \bl_\Gamma \pp^2 \simeq S \subset \pp^3$ be a general curve of class either:
\begin{enumerate}
\item \label{74} $5L - 2E_1 - 2E_2 - E_3 - E_4 - E_5 - E_6$;
\item \label{85} $5L - 2E_1 - E_2 - E_3 - E_4 - E_5 - E_6$;
\item \label{86} $6L - E_1 - E_2 - 2E_3 - 2E_4 - 2E_5 - 2E_6$;
\item \label{75} $6L - E_1 - 2E_2 - 2E_3 - 2E_4 - 2E_5 - 2E_6$;
\end{enumerate}
Then $C$ is smooth and irreducible.
In the first two cases, $H^1(\oo_C(1)) = 0$.
\end{lm}
\begin{proof}
We first show the above linear series are basepoint-free.
To do this, we write each as a sum of terms which are evidently
basepoint-free:
\begin{align*}
5L - 2E_1 - 2E_2 - E_3 - E_4 - E_5 - E_6 &= (3L - E_1 - E_2 - E_3 - E_4 - E_5 - E_6) \\
&\qquad + (L - E_1) + (L - E_2) \\
5L - 2E_1 - E_2 - E_3 - E_4 - E_5 - E_6 &= (3L - E_1 - E_2 - E_3 - E_4 - E_5 - E_6) + (L - E_1) \\
6L - E_1 - E_2 - 2E_3 - 2E_4 - 2E_5 - 2E_6 &= (3L - E_1 - E_2 - E_3 - E_4 - E_5 - E_6) \\
&\qquad + L + (2L - E_3 - E_4 - E_5 - E_6) \\
6L - E_1 - 2E_2 - 2E_3 - 2E_4 - 2E_5 - 2E_6 &= (3L - E_1 - E_2 - E_3 - E_4 - E_5 - E_6) \\
&\qquad + (L - E_2) + (2L - E_3 - E_4 - E_5 - E_6).
\end{align*}
Since all our linear series are basepoint-free, the
Bertini theorem implies that $C$ is smooth. Moreover, by basepoint-freeness,
we know that $C$ does not contain any of our exceptional divisors.
We conclude that $C$ is the proper transform in the blowup of
a curve $C_0 \subset \pp^2$. This curve satisfies:
\begin{itemize}
\item In case~\ref{74}, $C_0$ has exactly two nodes, at $p_1$ and $p_2$, and is otherwise smooth.
In particular, $C_0$ (and thus $C$) must be irreducible, since otherwise (by B\'ezout's theorem) it would have
at least $4$ nodes (where the components meet).
\item In case~\ref{85}, $C_0$ has exactly one node, at $p_1$, and is otherwise smooth.
As above, $C_0$ (and thus $C$) must be irreducible.
\item In case~\ref{86}, $C_0$ has exactly four nodes, at $\{p_3, p_4, p_5, p_6\}$, and is otherwise smooth.
As above, $C_0$ (and thus $C$) must be irreducible.
\item In case~\ref{75}, $C_0$ has exactly $5$ nodes, at $\{p_2, p_3, p_4, p_5, p_6\}$, and is otherwise smooth.
As above, $C_0$ must either be irreducible, or the union of a
line and a quintic. (Otherwise, it would have at least $8$ nodes.)
But in the second case, all $5$ nodes must be collinear,
contradicting our assumption that $\{p_2, p_3, p_4, p_5, p_6\}$ are general.
Consequently, $C_0$ (and thus $C$) must be irreducible.
\end{itemize}
We now turn to showing $H^1(\oo_C(1)) = 0$ in the first two cases.
In the first case, we note that $\Gamma$ contains $4 = \operatorname{genus}(C)$ general points $\{p_3, p_4, p_5, p_6\}$
on $C$; consequently, $E_3 + E_4 + E_5 + E_6$ --- and therefore
$\oo_C(1) = (3L - E_1 - E_2) - (E_3 + E_4 + E_5 + E_6)$ --- is a general line bundle of degree $7$,
which implies $H^1(\oo_C(1)) = 0$.
Similarly, in the second case,
we note that $\Gamma$ contains $5 = \operatorname{genus}(C)$
general points $\{p_2, p_3, p_4, p_5, p_6\}$ on $C$.
As in the first case, this implies $H^1(\oo_C(1)) = 0$, as desired.
\end{proof}
\begin{lm} \label{foo}
Let $C \subset \pp^3$ be a general BN-curve of degree and genus $(7, 4)$ or $(8, 5)$.
Then we have $H^1(N_C(-2)) = 0$.
\end{lm}
\begin{proof}
We take $C \subset S$, as constructed in Lemma~\ref{cubclass}, parts~\ref{74} and~\ref{85}
respectively.
These curves have degrees and genera $(7, 4)$ and $(8, 5)$ respectively, which can be seen by calculating the
intersection product with the hyperplane class and using adjunction.
For example, for the curve in part~\ref{74} of class
$5L - 2E_1 - 2E_2 - E_3 - E_4 - E_5 - E_6$, we calculate
\[\deg C = (5L - 2E_1 - 2E_2 - E_3 - E_4 - E_5 - E_6) \cdot (3L - E_1 - E_2 - E_3 - E_4 - E_5 - E_6) = 7,\]
and
\[\operatorname{genus} C = 1 + \frac{K_S \cdot C + C^2}{2} = 1 + \frac{-\deg C + C^2}{2} = 1 + \frac{-7 + 13}{2} = 4.\]
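The computation for the curve in part~\ref{85}, of class $5L - 2E_1 - E_2 - E_3 - E_4 - E_5 - E_6$, is identical: there $\deg C = 15 - 2 - 5 = 8$ and $C^2 = 16$, so $\operatorname{genus} C = 1 + \frac{-8 + 16}{2} = 5$.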
Because $N_S \simeq \oo_S(3)$, we have
\[H^1(N_S|_C(-2)) = H^1(\oo_C(1)) = 0.\]
Moreover, $\oo_S(1)(-C)$ is either
$-2L + E_1 + E_2$ or $-2L + E_1$ respectively;
in either case we have $H^0(\oo_S(1)(-C)) = 0$. Consequently, the restriction map
\[H^0(\oo_S(1)) \to H^0(\oo_C(1))\]
is injective. Since
\[\dim H^0(\oo_S(1)) = 4 = \dim H^0(\oo_C(1)),\]
the above restriction map is therefore an isomorphism.
Applying
Lemma~\ref{del-pezzo}, it thus suffices to show that
\[\dim V_{[C], 2} = \dim H^0(\oo_C(1)) = 4.\]
To do this, we first observe that $[C]$ is always a linear combination $aH + bL_1 + cL_2$ of the
hyperplane class $H$, and two nonintersecting lines $L_1$ and $L_2$, such that both $b$ and $c$
are nonzero. Indeed:
\begin{align*}
5L - 2E_1 - 2E_2 - E_3 - E_4 - E_5 - E_6 &= 3(3L - E_1 - E_2 - E_3 - E_4 - E_5 - E_6) \\
&\quad - (2L - E_1 - E_3 - E_4 - E_5 - E_6) \\
&\quad - (2L - E_2 - E_3 - E_4 - E_5 - E_6) \\
5L - 2E_1 - E_2 - E_3 - E_4 - E_5 - E_6 &= 3(3L - E_1 - E_2 - E_3 - E_4 - E_5 - E_6) + E_1 \\
&\quad - 2(2L - E_2 - E_3 - E_4 - E_5 - E_6).
\end{align*}
Writing $F$ for a general quadric hypersurface, and $D = F \cap S$,
we observe that $\pic(D)$ is $4$-dimensional.
It is therefore sufficient to prove that for a general class $\theta \in \pic^{6a + 2b + 2c}(D)$,
there exists a smooth cubic surface $S$ containing $D$ and a pair $(L_1, L_2)$ of disjoint lines on $S$,
such that the restriction $(aH + bL_1 + cL_2)|_D = \theta$.
Since $H|_D = \oo_D(1)$ is independent of $S$ and the choice of $(L_1, L_2)$,
we may replace $\theta$ by $\theta(-a)$ and set $a = 0$.
We thus seek to show that for $b, c \neq 0$ and $\theta \in \pic^{2b + 2c}(D)$ general,
there exists a smooth cubic surface $S$ containing $D$, and a pair $(L_1, L_2)$ of disjoint lines on $S$,
with $(bL_1 + cL_2)|_D = \theta$.
Equivalently, we want to show the map
\[\{(S, E_1, E_2) : E_1, E_2 \subset S \supset D\} \mapsto \{(E_1, E_2)\},\]
from the space of smooth cubic surfaces $S$ containing $D$ with a choice
of pair of disjoint lines $(E_1, E_2)$,
to the space of pairs of $2$-secant lines to $D$, is dominant.
For this, it suffices to check the vanishing of
$H^1(N_S(-D -E_1 - E_2))$,
for any smooth cubic $S$ containing $D$ and disjoint lines $(E_1, E_2)$ on $S$,
in which lies the obstruction to smoothness of this map.
But $N_S(-D -E_1 - E_2) = 3L - 2E_1 - 2E_2 - E_3 - E_4 - E_5 - E_6$
has no higher cohomology by Kawamata-Viehweg vanishing.
\end{proof}
\begin{lm}
Let $C \subset \pp^3$ be a general BN-curve of degree $7$ and genus $5$.
Then we have $H^1(N_C(-1)) = 0$.
\end{lm}
\begin{proof}
We take $C \subset S$, as constructed in Lemma~\ref{cubclass}, part~\ref{75}.
Because $N_S \simeq \oo_S(3)$, we have
\[H^1(N_S|_C(-1)) = H^1(\oo_C(2)) = 0.\]
Moreover, $\oo_S(2)(-C) \simeq \oo_S(-E_1)$ has no sections.
Consequently, the restriction map
\[H^0(\oo_S(2)) \to H^0(\oo_C(2))\]
is injective. Since
\[\dim H^0(\oo_S(2)) = 10 = \dim H^0(\oo_C(2)),\]
the above restriction map is therefore an isomorphism.
Applying
Lemma~\ref{del-pezzo}, it thus suffices to show that
\[\dim V_{[C], 1} = \dim H^0(\oo_C) = 1.\]
Writing $F$ for a general hyperplane, and $D = F \cap S$,
we observe that $\pic(D)$ is $1$-dimensional.
Since $[C] = 2H + E_1$,
it is therefore sufficient to prove that for a general class $\theta \in \pic^7(D)$,
there exists a cubic surface $S$ containing $D$ and a line $L$ on $S$,
such that the restriction $(2H + L)|_D = \theta$.
Since $H|_D = \oo_D(1)$ is independent of $S$ and the choice of $L$,
we may replace $\theta$ by $\theta(-1)$ and look instead for
$L|_D = \theta \in \pic^1(D)$.
Equivalently, we want to show the map
\[\{(S, E_1) : E_1 \subset S \supset D\} \mapsto \{E_1\},\]
from the space of smooth cubic surfaces $S$ containing $D$ with a choice
of line $E_1$,
to the space of $1$-secant lines to $D$, is dominant;
it suffices to check the vanishing of
$H^1(N_S(-D-E_1))$,
for any smooth cubic $S$ containing $D$ and line $E_1$ on $S$,
in which lies the obstruction to smoothness of this map.
But $N_S(-D-E_1) = 6L - 3E_1 - 2E_2 - 2E_3 - 2E_4 - 2E_5 - 2E_6$
has no higher cohomology by Kodaira vanishing.
\end{proof}
Next, we consider the case where $S \subset \pp^4$ is the intersection
of two quadrics, which is isomorphic to the blowup $\bl_\Gamma \pp^2$
of $\pp^2$ along a set
\[\Gamma = \{p_1, \ldots, p_5\}\]
of five general points. Recall that this is a Del Pezzo surface,
which is to say that the embedding
$\bl_\Gamma \pp^2 \simeq S \hookrightarrow \pp^4$ as the intersection
of two quadrics is via the complete linear
system for the inverse of the canonical bundle:
\[-K_{\bl_\Gamma \pp^2} = 3L - E_1 - \cdots - E_5,\]
where $L$ is the class of a line in $\pp^2$ and $E_i$ is the exceptional divisor
in the blowup over $p_i$. Note that by construction,
\[N_S \simeq \oo_S(2) \oplus \oo_S(2).\]
In particular, $H^1(N_S(-1)) = 0$ by Kodaira vanishing.
\begin{lm} \label{qclass} Let $C \subset \bl_\Gamma \pp^2 \simeq S \subset \pp^4$ be a general curve of class either:
\begin{enumerate}
\item $5L - 2E_1 - E_2 - E_3 - E_4 - E_5$;
\item $6L - E_1 - 2E_2 - 2E_3 - 2E_4 - 2E_5$.
\end{enumerate}
Then $C$ is smooth and irreducible. In the first case, $H^1(\oo_C(1)) = 0$.
\end{lm}
\begin{proof}
We first show the above linear series
are basepoint-free.
To do this, we write them as a sum of terms which are evidently
basepoint-free:
\begin{align*}
5L - 2E_1 - E_2 - E_3 - E_4 - E_5 &= (3L - E_1 - E_2 - E_3 - E_4 - E_5) + (L - E_1) + L \\
6L - E_1 - 2E_2 - 2E_3 - 2E_4 - 2E_5 &= (3L - E_1 - E_2 - E_3 - E_4 - E_5) \\
&\qquad + (2L - E_2 - E_3 - E_4 - E_5) + L
\end{align*}
As in Lemma~\ref{cubclass}, we conclude that $C$ is smooth and
irreducible. In the first case, we have
$\deg \oo_C(1) = 9 > 8 = 2g - 2$, which implies
$H^1(\oo_C(1)) = 0$ as desired.
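(Here the degrees and genera are computed as in the proof of Lemma~\ref{foo}: both classes meet the hyperplane class $3L - E_1 - \cdots - E_5$ in degree $9$, and since $C^2 = 17$ and $C^2 = 19$ respectively, adjunction gives genera $1 + \frac{-9 + 17}{2} = 5$ and $1 + \frac{-9 + 19}{2} = 6$.)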
\end{proof}
\begin{lm}
Let $C \subset \pp^4$ be a general BN-curve of degree $9$ and genus $5$.
Then we have $H^1(N_C(-1)) = 0$.
\end{lm}
\begin{proof}
We take $C \subset S$, as constructed in Lemma~\ref{qclass}.
Because $N_S \simeq \oo_S(2) \oplus \oo_S(2)$, we have
\[H^1(N_S|_C(-1)) = H^1(\oo_C(1) \oplus \oo_C(1)) = 0.\]
Moreover, $\oo_S(1)(-C) \simeq \oo_S(-2L + E_1)$ has no sections.
Consequently, the restriction map
\[H^0(\oo_S(1) \oplus \oo_S(1)) \to H^0(\oo_C(1) \oplus \oo_C(1))\]
is injective. Since
\[\dim H^0(\oo_S(1) \oplus \oo_S(1)) = 10 = \dim H^0(\oo_C(1) \oplus \oo_C(1)),\]
the above restriction map is therefore an isomorphism.
Applying
Lemma~\ref{del-pezzo}, it thus suffices to show that
\[\dim V_{[C], 1} = \dim H^0(\oo_C) = 1.\]
Writing $F$ for a general hyperplane, and $D = F \cap S$, we observe that $\pic(D)$ is $1$-dimensional.
Since $[C] = 3(3L - E_1 - E_2 - E_3 - E_4 - E_5) - 2(2L - E_1 - E_2 - E_3 - E_4 - E_5) - E_1$,
it is therefore sufficient to prove that for a general class $\theta \in \pic^9(D)$,
there exists a quartic Del Pezzo surface $S$ containing $D$, and a pair $\{L_1, L_2\}$ of
intersecting lines on $S$,
such that the restriction $(3H - 2L_1 - L_2)|_D = \theta$.
Since $H|_D = \oo_D(1)$ is independent of $S$ and the choice of $L$,
we may replace $\theta$ by $\theta^{-1}(3)$ and look instead for
$(2L_1 + L_2)|_D = \theta \in \pic^3(D)$.
For this, it suffices to show the map
\[\{(S, L_1, L_2) : L_1, L_2 \subset S \supset D\} \mapsto \{(L_1, L_2)\},\]
from the space of smooth quartic Del Pezzo surfaces $S$
containing $D$ with a choice
of pair of intersecting lines $(L_1, L_2)$,
to the space of pairs of intersecting $1$-secant lines to $D$, is dominant.
Taking $[L_1] = E_1$ and $[L_2] = L - E_1 - E_2$,
it suffices to check the vanishing of the first cohomology of the vector bundle
$N_S(-D - E_1 - (L - E_1 - E_2))$ --- which is isomorphic to a direct
sum of two copies of the line bundle $2L - E_1 - E_3 - E_4 - E_5$ --- for
any smooth quartic Del Pezzo surface $S$ containing $D$,
in which lies the obstruction to smoothness of this map.
But $2L - E_1 - E_3 - E_4 - E_5$ has no higher cohomology by Kodaira vanishing.
\end{proof}
To prove the main theorems (excluding the ``conversely\ldots'' part),
it thus remains to produce a smooth curve $C \subset \pp^3$ of degree $5$
and genus $1$, with $H^1(N_C(-2)) = 0$.
\section{\boldmath Elliptic Curves of Degree $5$ in $\pp^3$ \label{sec:51}}
In this section, we construct an immersion $f \colon C \hookrightarrow \pp^3$
of degree~$5$ from a smooth elliptic curve,
with $H^1(N_f(-2)) = 0$.
As in the previous section,
we shall identify $C = f(C)$ with its image,
in which case the normal bundle $N_f$ becomes the normal bundle $N_C$ of the image.
Our basic method in this section will be to use the geometry of the cubic scroll $S \subset \pp^4$.
Recall that
the cubic scroll can be constructed
in two different ways:
\begin{enumerate}
\item Let $Q \subset \pp^4$ and $M \subset \pp^4$ be a plane conic,
and a line disjoint from the span of $Q$, respectively. As abstract varieties,
$Q \simeq \pp^1 \simeq M$.
Then $S$ is the ruled surface swept out by lines joining pairs of points
identified under some choice of above isomorphism.
\item Let $x \in \pp^2$ be a point, and consider the blowup $\bl_x \pp^2$
of $\pp^2$ at the point $\{x\}$. Then, $S$ is the image of $f \colon \bl_x \pp^2 \hookrightarrow \pp^4$
under the complete linear series attached to the line bundle
\[2L - E,\]
where $L$ is the class of a line in $\pp^2$, and $E$ is the exceptional divisor
in the blowup.
\end{enumerate}
To relate these two constructions, we fix a line $L \subset \pp^2$ not meeting $x$ in the second
construction, and consider the isomorphism $L \simeq \pp^1 \simeq E$
defined by sending $p \in L$ to the intersection with $E$ of the proper transform
of the line joining $p$ and $x$.
Then $f(L)$ and $f(E)$ are the conic $Q$ and the line $M$ of the first construction, respectively;
the proper transforms of lines through $x$ are the lines of the ruling.
\medskip
Now take two points $p, q \in L$. Since $f(L)$ is a plane conic,
the tangent lines to $f(L)$ at $p$ and $q$ intersect; we let $y$
be their point of intersection.
From the first description of $S$, it is clear that any line through
$y$ intersects $S$ quasi-transversely --- except for the lines joining $y$ to $p$ and $q$,
each of which meets $S$ in a degree~$2$ subscheme of $f(L)$.
Write $\bar{S}$ for the image of $S$ under projection from $y$; by construction,
the projection $\pi \colon S \to \bar{S} \subseteq \pp^3$ is unramified away from $\{p, q\}$,
an immersion away from $f(L)$, and when restricted to $f(L)$ is a double cover of its image
with ramification exactly at $\{p, q\}$.
At $\{p, q\}$, the differential drops rank transversely,
with kernel the tangent
space to $f(L)$. (By ``drops rank transversely'', we mean that the section $d\pi$ of
$\hom(T_S, \pi^* T_{\pp^3})$ is transverse to the subvariety
of $\hom(T_S, \pi^* T_{\pp^3})$ of maps with less-than-maximal rank.)
If $C \subset \bl_x \pp^2 \simeq S$ is a curve passing through $p$ and $q$,
but transverse to $L$ at each of these points, then any line through $y$ intersects
$C$ quasi-transversely. In particular, if $C$ meets $L$ in at most one point outside of $\{p, q\}$,
the image $\bar{C}$ of $C$ under projection from $y$
is smooth. Moreover, the above analysis of $d\pi$ on $S$ implies that the natural map
\[N_{C/S} \to N_{\bar{C}/\pp^3}\]
induced by $\pi$ is fiberwise injective away from $\{p, q\}$, and has a simple
zero at both $p$ and $q$. That is, we have an exact sequence
\begin{equation} \label{51}
0 \to N_{C/S}(p + q) \to N_{\bar{C}/\pp^3} \to \mathcal{Q} \to 0,
\end{equation}
with $\mathcal{Q}$ a vector bundle.
\medskip
We now specialize to the case where $C$ is the proper transform of a plane cubic, passing through
$\{x, p, q\}$, and transverse to $L$ at $\{p, q\}$. By inspection,
$\bar{C}$ is an elliptic curve of degree $5$ in $\pp^3$; it thus suffices to show
$H^1(N_{\bar{C}/\pp^3}(-2)) = 0$.
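(Indeed, $C$ has class $3L - E$, so its degree under the embedding of $S$ by $2L - E$ is $(3L - E) \cdot (2L - E) = 6 - 1 = 5$; projection from $y$, which does not lie on $C$, preserves this degree, and $\bar{C} \simeq C$ has genus $1$.)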
\begin{lm} In this case,
\begin{align*}
N_{C/S}(p + q) &\simeq \oo_C(3L - E + p + q) \\
\mathcal{Q} &\simeq \oo_C(5L - 3E - p - q).
\end{align*}
\end{lm}
\begin{proof}
We first note that
\[N_{C/S} \simeq N_{C/\pp^2}(-E) \simeq \oo_C(3L)(-E) \quad \Rightarrow \quad N_{C/S}(p + q) \simeq \oo_C(3L - E + p + q).\]
Next, the Euler exact sequence
\[0 \to \oo_{\bar{C}} \to \oo_{\bar{C}}(1)^4 \to T_{\pp^3}|_{\bar{C}} \to 0\]
implies
\[\wedge^3 (T_{\pp^3}|_{\bar{C}}) \simeq \oo_C(4).\]
Combined with the normal bundle exact sequence
\[0 \to T_C \to T_{\pp^3}|_{\bar{C}} \to N_{\bar{C}/\pp^3} \to 0,\]
and the fact that $C$ is of genus $1$, so $T_C \simeq \oo_C$, we conclude that
\[\wedge^2(N_{\bar{C}/\pp^3}) \simeq \oo_C(4) \otimes T_C^\vee \simeq \oo_C(4) = \oo_C(4(2L - E)) = \oo_C(8L - 4E).\]
The exact sequence \eqref{51} then implies
\[\mathcal{Q} \simeq \wedge^2(N_{\bar{C}/\pp^3}) \otimes (N_{C/S}(p + q))^\vee \simeq \oo_C(8L - 4E)(-3L + E - p - q) = \oo_C(5L - 3E - p - q),\]
as desired.
\end{proof}
\noindent
Twisting by $\oo_C(-2) \simeq \oo_C(-4L + 2E)$, we obtain isomorphisms:
\begin{align*}
N_{C/S}(p + q) &\simeq \oo_C(-L + E + p + q) \\
\mathcal{Q} &\simeq \oo_C(L - E - p - q).
\end{align*}
We thus have an exact sequence
\[0 \to \oo_C(-L + E + p + q) \to N_{\bar{C}/\pp^3}(-2) \to \oo_C(L - E - p - q) \to 0.\]
Since $\oo_C(-L + E + p + q)$ and $\oo_C(L - E - p - q)$ are both general line bundles
of degree zero on a curve of genus $1$, we have
\[H^1(\oo_C(-L + E + p + q)) = H^1(\oo_C(L - E - p - q)) = 0,\]
which implies
\[H^1(N_{\bar{C}/\pp^3}(-2)) = 0.\]
This completes the proof of the main theorems, except for the ``conversely\ldots'' parts.
\section{The Converses \label{sec:converses}}
In this section, we show that the intersections appearing in our main theorems
fail to be general in all listed exceptional cases.
We actually go further, describing precisely the intersection with $Q$ or $H$ of a general BN-curve $f \colon C \to \pp^r$
in terms of the intrinsic geometry of $Q \simeq \pp^1 \times \pp^1$, $H \simeq \pp^2$,
and $H \simeq \pp^3$ respectively.
Since the general BN-curve $f \colon C \to \pp^r$ is an immersion, we can
identify $C = f(C)$ with its image as in the previous two sections, in which case
the normal bundle $N_f$ becomes the normal bundle $N_C$ of its image.
There are two basic phenomena which explain the majority of our exceptional
cases: cases where $C$ is a complete intersection, and cases where $C$ lies
on a surface of low degree. The first two subsections will be devoted
to the exceptional cases that arise for these two reasons respectively.
In the final subsection, we will consider the two remaining exceptional
cases.
\subsection{Complete Intersections}
We begin by dealing with those exceptional cases which
are complete intersections.
\begin{prop}
Let $C \subset \pp^3$ be a general BN-curve of degree $4$ and genus $1$.
Then the intersection $C \cap Q$ is the intersection of two general curves
of bidegree $(2, 2)$ on $Q \simeq \pp^1 \times \pp^1$. In particular,
it is not a collection of $8$ general points.
\end{prop}
\begin{proof}
It is easy to see that $C$ is the complete intersection of two general quadrics.
Restricting these quadrics to $Q \simeq \pp^1 \times \pp^1$,
we see that $C \cap Q$ is the intersection of two general curves
of bidegree $(2, 2)$.
Since general points impose independent conditions on the $9$-dimensional
space of curves of bidegree $(2, 2)$, a general collection of $8$ points
will lie on only one curve of bidegree $(2, 2)$.
The intersection of two general curves of bidegree $(2, 2)$
is therefore not a collection of $8$ general points.
\end{proof}
\begin{prop} \label{64-to-Q}
Let $C \subset \pp^3$ be a general BN-curve of degree $6$ and genus $4$.
Then the intersection $C \cap Q$ is the intersection of two general curves
of bidegrees $(2, 2)$ and $(3,3)$ respectively on $Q \simeq \pp^1 \times \pp^1$. In particular,
it is not a collection of $12$ general points.
\end{prop}
\begin{proof}
It is easy to see that $C$ is the complete intersection of a
general quadric and cubic.
Restricting these to $Q \simeq \pp^1 \times \pp^1$,
we see that $C \cap Q$ is the intersection of two general curves
of bidegrees $(2, 2)$ and $(3,3)$ respectively.
Since general points impose independent conditions on the $9$-dimensional
space of curves of bidegree $(2, 2)$, a general collection of $12$ points
will not lie on any curve of bidegree $(2,2)$, and in particular will not be
such an intersection.
\end{proof}
\begin{prop}
Let $C \subset \pp^3$ be a general BN-curve of degree $6$ and genus $4$.
Then the intersection $C \cap H$ is a general collection of $6$ points
lying on a conic. In particular,
it is not a collection of $6$ general points.
\end{prop}
\begin{proof}
As in Proposition~\ref{64-to-Q},
we see that $C \cap H$ is the intersection of general
conic and cubic curves.
In particular, $C \cap H$ lies on a conic. Conversely, any $6$ points
lying on a conic are the complete intersection of a conic and a cubic by Theorem~\ref{main-2}
(with $(d, g) = (3, 1)$).
Since general points impose independent conditions on the $6$-dimensional
space of plane conics,
a general collection of $6$ points
will not lie on a conic. We thus see our intersection
is not a collection of $6$ general points.
\end{proof}
\begin{prop}
Let $C \subset \pp^4$ be a general BN-curve of degree $8$ and genus $5$.
Then the intersection $C \cap H$ is the intersection of three general quadrics
in $H \simeq \pp^3$. In particular,
it is not a collection of $8$ general points.
\end{prop}
\begin{proof}
It is easy to see that $C$ is the complete intersection of three general quadrics.
Restricting these quadrics to $H \simeq \pp^3$,
we see that $C \cap H$ is the intersection of three general quadrics.
Since general points impose independent conditions on the $10$-dimensional
space of quadrics, a general collection of $8$ points
will lie on only a two-dimensional vector space of quadrics.
The intersection of three general quadrics
is therefore not a collection of $8$ general points.
\end{proof}
\subsection{Curves on Surfaces}
Next, we analyze those cases
which are exceptional because $C$ lies on a surface $S$
of small degree. To show the intersection is general subject to
the constraint imposed by $C \subset S$, it will be useful to have the following lemma:
\begin{lm} \label{pic-res-enough}
Let $D$ be an irreducible curve of genus $g$ on a surface $S$, and $p_1, p_2, \ldots, p_n$
be a collection of $n$ distinct points on $D$. Suppose that $n \geq g$, and that
$p_1, p_2, \ldots, p_g$ are general.
Let $\theta \in \pic(S)$, with $\theta|_D \sim p_1 + p_2 + \cdots + p_n$. Suppose that
\[\dim H^0(\theta) - \dim H^0(\theta(-D)) \geq n - g + 1.\]
Then some curve $C \subset S$ of class $\theta$ meets $D$ transversely at $p_1, p_2, \ldots, p_n$.
\end{lm}
\begin{proof}
Since $p_1, p_2, \ldots, p_g$ are general, and $\theta|_D = p_1 + p_2 + \cdots + p_n$,
it suffices to show there is a curve of class $\theta$ meeting $D$ dimensionally-transversely
and passing through $p_{g + 1}, p_{g + 2}, \ldots, p_n$; the remaining $g$ points
of intersection are then forced to be $p_1, p_2, \ldots, p_g$.
For this, we note there is an at least $(\dim H^0(\theta) - (n - g))$-dimensional
space of sections of $\theta$ which vanish at $p_{g + 1}, \ldots, p_n$; by hypothesis, this dimension exceeds $\dim H^0(\theta(-D))$.
In particular, there is some section which does not vanish along $D$.
Its zero locus then gives the required curve $C$. (The curve $C$ meets $D$ dimensionally-transversely,
because $C$ does not contain $D$ and $D$ is irreducible.)
\end{proof}
\begin{prop}
Let $C \subset \pp^3$ be a general BN-curve of degree $5$ and genus $2$.
Then the intersection $C \cap Q$ is a collection of $10$ general points
lying on a curve of bidegree $(2, 2)$ on $Q \simeq \pp^1 \times \pp^1$. In particular,
it is not a collection of $10$ general points.
\end{prop}
\begin{proof}
Since $\dim H^0(\oo_C(2)) = 9$ and $\dim H^0(\oo_{\pp^3}(2)) = 10$,
we conclude that $C$ lies on a quadric.
Restricting to $Q$, we see that $C \cap Q$ lies on a curve
of bidegree $(2,2)$.
Conversely, given $10$ points $p_1, p_2, \ldots, p_{10}$ lying on a curve $D$ of bidegree $(2, 2)$,
we may first find a pair of points $\{x, y\} \subset D$ so that
$x + y + 2H \sim p_1 + \cdots + p_{10}$. We then claim there is a smooth quadric containing
$D$ and the general $2$-secant line $\overline{xy}$ to $D$.
Equivalently, we want to show the map
\[\{(S, L) : L \subset S \supset D\} \mapsto \{L\},\]
from the space of smooth quadric surfaces $S$ containing $D$ with a choice
of line $L$,
to the space of $2$-secant lines to $D$, is dominant;
it suffices to check the vanishing of
$H^1(N_S(-D-L))$,
for any smooth quadric $S$ containing $D$ and line $L$ on $S$,
in which lies the obstruction to smoothness of this map.
But $N_S(-D-L) = \oo_S(0, -1)$
has no higher cohomology by Kodaira vanishing.
Writing $L \in \pic(S)$ for the class of the line $\overline{xy}$,
we see that $(L + 2H)|_D \sim p_1 + \cdots + p_{10}$ as divisor classes.
Applying Lemma~\ref{pic-res-enough}, and noting that
$\dim H^0(\oo_{S}(2H + L)) = 12$ while
$\dim H^0(\oo_{S}(L)) = 2$, there is a curve $C$ of class
$2H + L$ meeting $D$ transversely at $p_1, \ldots, p_{10}$.
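(Here Lemma~\ref{pic-res-enough} applies with $\theta = 2H + L$, $n = 10$, and $g = 1$ the genus of $D$: since $\theta(-D) = \oo_S(L)$, the required inequality reads $12 - 2 = 10 \geq n - g + 1 = 10$.)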
Since $\oo_{S}(2H + L)$ is very ample by inspection, $C$
is smooth (for $p_1, \ldots, p_{10}$ general). By results of \cite{keem},
this implies $C$ is a BN-curve.
Since general points impose independent conditions on the $9$-dimensional
space of curves of bidegree $(2, 2)$, a general collection of $10$ points
does not lie on a curve of bidegree $(2, 2)$.
A collection of $10$ general points on a general curve of bidegree $(2,2)$
is therefore not a collection of $10$ general points.
\end{proof}
\begin{prop}
Let $C \subset \pp^3$ be a general BN-curve of degree $7$ and genus $5$.
Then the intersection $C \cap Q$ is a collection of $14$
points lying on a curve $D \subset Q \simeq \pp^1 \times \pp^1$,
which is general subject to the following conditions:
\begin{enumerate}
\item The curve $D$ is of bidegree $(3, 3)$.
\item The divisor $C \cap Q - 2H$ on $D$ (where $H$ is the hyperplane class)
is effective.
\end{enumerate}
In particular, it is not a collection of $14$ general points.
\end{prop}
\begin{proof}
First we claim the general such curve $C$ lies on a smooth cubic surface $S$, on which it has class
$2H + E_1 = 6L - E_1 - 2E_2 - 2E_3 - 2E_4 - 2E_5 - 2E_6$.
Indeed, by Lemma~\ref{cubclass} part~\ref{75}, a general curve of this class is smooth and irreducible;
such a curve has degree~$7$ and genus~$5$, and in particular is a BN-curve by results of \cite{keem}.
It remains to see there are no obstructions to lifting a deformation
of $C$ to a deformation of the pair $(S, C)$,
i.e.\ that $H^1(N_S(-C)) = 0$. But $N_S(-C) = 3L - 2E_1 - E_2 - E_3 - E_4 - E_5 - E_6$,
which has no higher cohomology by Kodaira vanishing.
Thus, $C \cap Q - 2H$ is the restriction to $D$
of the class of a line on $S$; in particular, $C \cap Q - 2H$
is an effective divisor on $D$.
Conversely, suppose that $p_1, p_2, \ldots, p_{14}$
are a general collection of $14$ points lying on a curve $D$ of bidegree $(3,3)$
with $p_1 + \cdots + p_{14} - 2H \sim x + y$ effective.
We then claim there is a smooth cubic containing
$D$ and the general $2$-secant line $\overline{xy}$ to $D$.
Equivalently, we want to show the map
\[\{(S, L) : L \subset S \supset D\} \mapsto \{L\},\]
from the space of smooth cubic surfaces $S$ containing $D$ with a choice
of line $L$,
to the space of $2$-secant lines to $D$, is dominant;
for this it suffices to check the vanishing of
$H^1(N_S(-D-L))$.
But $N_S(-D-L) = 3L - 2E_1 - E_2 - E_3 - E_4 - E_5 - E_6$,
which has no higher cohomology by Kodaira vanishing.
Choosing an isomorphism $S \simeq \bl_\Gamma \pp^2$ where $\Gamma = \{q_1, q_2, \ldots, q_6\}$,
so that the line $\overline{xy} = E_1$ is the exceptional
divisor over $q_1$,
we now look for a curve $C \subset S$ of class
\[[C] = 6L - E_1 - 2E_2 - 2E_3 - 2E_4 - 2E_5 - 2E_6.\]
Again by Lemma~\ref{cubclass}, the general such curve is smooth and irreducible;
such a curve has degree~$7$ and genus~$5$, and in particular is a BN-curve by results of \cite{keem}.
Note that
\[\dim H^0(\oo_S(6L - E_1 - 2E_2 - 2E_3 - 2E_4 - 2E_5 - 2E_6)) = 12 \quad \text{and} \quad \dim H^0(\oo_S(E_1)) = 1.\]
Applying Lemma~\ref{pic-res-enough},
we conclude that some curve of our given class meets $D$ transversely
at $p_1, p_2, \ldots, p_{14}$, as desired.
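(Here $D$ has genus $4$ and $\theta(-D) = \oo_S(E_1)$, so the hypothesis of Lemma~\ref{pic-res-enough} reads $12 - 1 = 11 \geq 14 - 4 + 1 = 11$.)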
It remains to see from this description that
$C \cap Q$ is not a general collection of $14$ points.
For this, first note that there is a $15$-dimensional space
of such curves $D$ (as $\dim H^0(\oo_Q(3,3)) = 16$).
On each each curve, there is a $2$-dimensional family of effective
divisors $\Delta$; and for fixed $\Delta$, a $10$-dimensional family of divisors
linearly equivalent to $2H + \Delta$ (because $\dim H^0(\oo_D(2H + \Delta)) = 11$
by Riemann-Roch). Putting this together,
there is an (at most) $15 + 2 + 10 = 27$-dimensional family of such collections
of points.
But $\sym^{14}(Q)$ has dimension $28$. In particular, collections of such
points cannot be general.
\end{proof}
\begin{prop}
Let $C \subset \pp^3$ be a general BN-curve of degree $8$ and genus $6$.
Then the intersection $C \cap Q$ is a general collection of $16$ points
on a curve of bidegree $(3,3)$ on $Q \simeq \pp^1 \times \pp^1$. In particular,
it is not a collection of $16$ general points.
\end{prop}
\begin{proof}
Since $\dim H^0(\oo_C(3)) = 19$ and $\dim H^0(\oo_{\pp^3}(3)) = 20$,
we conclude that $C$ lies on a cubic surface. Restricting this cubic
to $Q$, we see that $C \cap Q$ lies on a curve of bidegree $(3,3)$.
Conversely, take a general collection $p_1, \ldots, p_{16}$ of $16$ points on a curve
$D$ of bidegree $(3,3)$. The divisor $p_1 + \cdots + p_{16} - 2H$ is of degree $4$
on a curve $D$ of genus $4$; it is therefore effective, say
\[p_1 + \cdots + p_{16} - 2H \sim x + y + z + w.\]
We then claim there is a smooth cubic containing
$D$ and the general $2$-secant lines $\overline{xy}$ and $\overline{zw}$ to $D$.
Equivalently, we want to show the map
\[\{(S, E_1, E_2) : E_1, E_2 \subset S \supset D\} \mapsto \{(E_1, E_2)\},\]
from the space of smooth cubic surfaces $S$ containing $D$ with a choice
of pair of disjoint lines $(E_1, E_2)$,
to the space of pairs of $2$-secant lines to $D$, is dominant;
for this it suffices to check the vanishing of
$H^1(N_S(-D-E_1 - E_2))$.
But $N_S(-D-E_1 - E_2) = 3L - 2E_1 - 2E_2 - E_3 - E_4 - E_5 - E_6$,
which has no higher cohomology by Kawamata-Viehweg vanishing.
We now look for a curve $C \subset S$ of class
\[[C] = 6L - E_1 - E_2 - 2E_3 - 2E_4 - 2E_5 - 2E_6,\]
which is of degree $8$ and genus $6$.
By Lemma~\ref{cubclass}, we conclude that $C$ is smooth and irreducible;
by results of \cite{keem}, this implies the general curve of this class is a BN-curve.
Note that
\[\dim H^0(\oo_S(6L - E_1 - E_2 - 2E_3 - 2E_4 - 2E_5 - 2E_6)) = 14 \quad \text{and} \quad \dim H^0(\oo_S(E_1 + E_2)) = 1.\]
Applying Lemma~\ref{pic-res-enough},
we conclude that some curve of our given class meets $D$ transversely
at $p_1, p_2, \ldots, p_{16}$, as desired.
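(This time $\theta(-D) = \oo_S(E_1 + E_2)$, so the hypothesis of Lemma~\ref{pic-res-enough} reads $14 - 1 = 13 \geq 16 - 4 + 1 = 13$.)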
Since general points impose independent conditions on the $16$-dimensional
space of curves of bidegree $(3, 3)$, a general collection of $16$ points
will not lie on any curve of bidegree $(3,3)$. Our collection of points
is therefore not general.
\end{proof}
\begin{prop}
Let $C \subset \pp^4$ be a general BN-curve of degree $9$ and genus $6$.
Then the intersection $C \cap H$ is a general collection of $9$ points
on an elliptic normal curve
in $H \simeq \pp^3$. In particular,
it is not a collection of $9$ general points.
\end{prop}
\begin{proof}
Since $\dim H^0(\oo_C(2)) = 13$ and $\dim H^0(\oo_{\pp^4}(2)) = 15$,
we conclude that $C$ lies on the intersection of two quadrics.
Restricting these quadrics to $H \simeq \pp^3$,
we see that $C \cap H$ lies on the intersection of two quadrics,
which is an elliptic normal curve.
Conversely, let $p_1, p_2, \ldots, p_9$ be a collection of $9$ points
lying on an elliptic normal curve $D \subset \pp^3$.
Since $D$ is an elliptic curve, there exists (a unique) $x \in D$
with
\[\oo_D(p_1 + \cdots + p_9)(-2) \simeq \oo_D(x).\]
Let $M$ be a general line through $x$.
We then claim there is a quartic Del Pezzo surface containing
$D$ and the general $1$-secant line $M$.
Equivalently, we want to show the map
\[\{(S, E_1) : E_1 \subset S \supset D\} \mapsto \{E_1\},\]
from the space of smooth Del Pezzo surfaces $S$ containing $D$ with a choice
of line $E_1$,
to the space of $1$-secant lines to $D$, is dominant;
for this it suffices to check the vanishing of
$H^1(N_S(-D-E_1))$.
But $N_S(-D-E_1)$ is a direct sum of two copies of the line bundle
$3L - 2E_1 - E_2 - E_3 - E_4 - E_5$,
which has no higher cohomology by Kodaira vanishing.
We now consider curves $C \subset S$ of class
\[[C] = 6L - E_1 - 2E_2 - 2E_3 - 2E_4 - 2E_5,\]
which are of degree $9$ and genus $6$.
By Lemma~\ref{qclass}, we conclude that $C$ is smooth and irreducible;
by results of \cite{iliev}, this implies the general curve of this class is a BN-curve.
Note that
\[\dim H^0(\oo_S(6L - E_1 - 2E_2 - 2E_3 - 2E_4 - 2E_5)) = 15 \quad \text{and} \quad \dim H^0(\oo_S(3L - E_2 - E_3 - E_4 - E_5)) = 6.\]
Applying Lemma~\ref{pic-res-enough},
we conclude that some curve of our given class meets $D$ transversely
at $p_1, p_2, \ldots, p_9$, as desired.
By Corollary~1.4 of \cite{firstpaper}, there does not exist an elliptic
normal curve in $\pp^3$ passing through $9$ general points.
\end{proof}
\subsection{The Final Two Exceptional Cases}
We have exactly two remaining exceptional cases: The intersection
of a general BN-curve of degree $6$ and genus $2$ in $\pp^3$ with a quadric,
and the intersection of a general BN-curve of degree $10$ and genus $7$ in $\pp^4$
with a hyperplane. We will show in the first case that the intersection fails
to be general since $C$ is the projection of a curve $\tilde{C} \subset \pp^4$,
where $\tilde{C}$ lies on a surface of small degree (a cubic scroll).
In the second case, the intersection fails to be general since $C$
is contained in a quadric hypersurface.
\begin{prop}
Let $C \subset \pp^3$ be a general BN-curve of degree $6$ and genus $2$.
Then the intersection $C \cap Q$ is a collection of $12$ points
lying on a curve $D \subset Q \simeq \pp^1 \times \pp^1$, which is general subject
to the following conditions:
\begin{enumerate}
\item The curve $D$ is of bidegree $(3, 3)$ (and so is in particular of arithmetic genus $4$).
\item The curve $D$ has two nodes (and so is in particular of geometric genus $2$).
\item The divisors $\oo_D(2,2)$ and $C \cap D$ are linearly equivalent
when pulled back to the normalization of $D$.
\end{enumerate}
In particular, it is not a collection of $12$ general points.
\end{prop}
\begin{proof}
We first observe that $\dim H^0(\oo_C(1)) = 5$, so $C$ is the projection from a point $p \in \pp^4$
of a curve $\tilde{C} \subset \pp^4$ of degree $6$ and genus $2$.
Write $\pi \colon \pp^4 \dashrightarrow \pp^3$ for the map of projection
from $p$, and define the quadric hypersurface $\tilde{Q} = \pi^{-1}(Q)$.
Let $S \subset \pp^4$ be the surface swept out by joining pairs
of points on $\tilde{C}$ conjugate under the hyperelliptic involution.
By Corollary~13.3 of \cite{firstpaper}, $S$ is a cubic surface;
in particular, since $S$ has a ruling, $S$ is a cubic scroll.
Write $H$ for the hyperplane section on $S$, and $F$ for the class
of a line of the ruling.
The curve $\tilde{D} = \tilde{Q} \cap S$
(which for $C$ general is smooth by Kleiman transversality) is of degree $6$ and genus $2$.
By construction, the intersection $C \cap Q$ lies on $D = \pi(\tilde{D})$. Since $D = \pi(S) \cap Q$,
it is evidently a curve of bidegree $(3, 3)$ on $Q \simeq \pp^1 \times \pp^1$.
Moreover, since $\tilde{D}$ has genus $2$, the geometric genus of $D$ is $2$.
In particular, $D$ has two nodes.
Next, we note that on $S$, the curve $\tilde{C}$ has class $2H$. Indeed, if $[\tilde{C}] = a \cdot H + b \cdot F$,
then $a = \tilde{C} \cdot F = 2$ and $3a + b = \tilde{C} \cdot H = 6$; solving for $a$ and $b$, we obtain
$a = 2$ and $b = 0$.
Consequently, $\tilde{C} \cap \tilde{D}$ has class $2H$ on $\tilde{D}$.
Equivalently, $C \cap D = \pi(\tilde{C} \cap \tilde{D})$ has class
equal to $\oo_D(2) = \oo_D(2,2)$ when pulled back to the normalization.
Conversely, take $12$ points on $D$ satisfying our assumptions. Write
$\tilde{D}$ for the normalization of $D$, and $p_1, p_2, \ldots, p_{12}$
for the preimages of our points in $\tilde{D}$.
We begin by noting that $\dim H^0(\oo_{\tilde{D}}(1)) = 5$,
so $D$ is the projection from a point $p \in \pp^4$
of $\tilde{D} \subset \pp^4$ of degree $6$ and genus $2$.
As before, write $\pi \colon \pp^4 \dashrightarrow \pp^3$ for the map of projection
from $p$, and define the quadric hypersurface $\tilde{Q} = \pi^{-1}(Q)$.
Again, we let $S \subset \pp^4$ be the surface swept out by joining pairs
of points on $\tilde{D}$ conjugate under the hyperelliptic involution.
As before, $S$ is a cubic scroll;
write $H$ for the hyperplane section on $S$, and $F$ for the class
of a line of the ruling.
Note that $\tilde{D} \subseteq \tilde{Q} \cap S$; and since both
sides are curves of degree $6$, we have $\tilde{D} = \tilde{Q} \cap S$.
It now suffices to find a curve $\tilde{C} \subset S$ of class $2H$,
meeting $\tilde{D}$ transversely
in $p_1, \ldots, p_{12}$.
For this, note that
\[\dim H^0(\oo_S(2H)) = 12 \quad \text{and} \quad \dim H^0(\oo_S) = 1.\]
Applying Lemma~\ref{pic-res-enough} yields the desired conclusion.
It remains to see from this description that
$C \cap Q$ is not a general collection of $12$ points.
For this, we first note that such a curve $D \subset \pp^1 \times \pp^1$
is the same as specifying an abstract curve of genus $2$, two line bundles
of degree $3$ (corresponding to the pullbacks of $\oo_{\pp^1}(1)$ from each factor),
and bases-up-to-scaling for their spaces of sections (giving us the two maps $D \to \pp^1$).
Since there is a $3$-dimensional moduli space of abstract curves $D$ of genus $2$,
and $\dim \pic^3(D) = 2$, and there is a $3$-dimensional family of bases-up-to-scaling
of a $2$-dimensional vector space, the dimension of the space
of such curves $D$ is $3 + 2 + 2 + 3 + 3 = 13$.
Our condition $p_1 + \cdots + p_{12} \sim 2H$ then implies
collections of such points on a fixed $D$ are in bijection with
elements of $\pp H^0(\oo_D(2H)) \simeq \pp^{10}$. Putting this together,
there is an (at most) $13 + 10 = 23$ dimensional family of such collections of points.
But $\sym^{12}(Q)$ has dimension $24$. In particular, collections of such
points cannot be general.
\end{proof}
\begin{prop}
Let $C \subset \pp^4$ be a general BN-curve of degree $10$ and genus $7$.
Then the intersection $C \cap H$ is a general collection of $10$ points
on a quadric in $H \simeq \pp^3$. In particular,
it is not a collection of $10$ general points.
\end{prop}
\begin{proof}
Since $\dim H^0(\oo_C(2)) = 14$ and $\dim H^0(\oo_{\pp^4}(2)) = 15$,
we conclude that $C$ lies on a quadric.
Restricting this quadric to $H \simeq \pp^3$,
we see that $C \cap H$ lies on a quadric.
For the converse, we take general points $p_1, \ldots, p_{10}$
lying on a general (thus smooth) quadric~$Q$.
Since $\dim H^0(\oo_Q(3,3)) = 16$, we may find a curve $D \subset Q$
of type $(3,3)$ passing through $p_1, \ldots, p_{10}$.
As divisor classes on $D$, we may write (the divisor $p_1 + \cdots + p_{10} - H$ being of degree $4$ on a curve of genus $4$, hence effective)
\[p_1 + p_2 + \cdots + p_{10} - H \sim x + y + z + w.\]
We now pick a general (quartic) rational normal curve $R \subset \pp^4$
whose hyperplane section is $\{x, y, z, w\}$.
We then claim there is a smooth sextic K3 surface $S \subset \pp^4$
containing $D$ and the general $4$-secant rational normal curve $R$ to $D$.
Equivalently, we want to show the map
\[\{(S, R) : R \subset S\} \to \{(R, D)\},\]
from the space of pairs of a smooth sextic K3 surface $S$ and a rational normal curve $R \subset S$,
to the space of pairs $(R, D)$ where $R$ is a rational normal curve
meeting the canonical curve $D = S \cap H$ in four points, is dominant;
for this it suffices to check the vanishing of
$H^1(N_S(-H-R))$ at any smooth sextic K3 containing a rational normal curve $R$
(where $H = [D]$ is the hyperplane class on $S$).
We first note that a sextic K3 surface $S$ containing a rational normal curve $R$
exists, by Theorem~1.1 of~\cite{knutsen}.
On this K3 surface, our vector bundle $N_S(-H-R)$ is the direct sum of the line bundles $H - R$ and $2H - R$;
consequently, it suffices to show $H^1(\oo_S(n)(-R)) = 0$ for $n \geq 1$.
For this we use the exact sequence
\[0 \to \oo_S(n)(-R) \to \oo_S(n) \to \oo_S(n)|_R = \oo_R(n) \to 0,\]
and note that $H^1(\oo_S(n)) = 0$ by Kodaira vanishing,
while $H^0(\oo_S(n)) \to H^0(\oo_R(n))$ is surjective since $R$ is projectively normal.
This shows the existence of the desired K3 surface $S$ containing
$D$ and the general $4$-secant rational normal curve $R$.
Next, we claim that the linear series $H + R$ on $S$ is basepoint-free.
To see this, we first note that $H$ is basepoint free, so any basepoints
must lie on the curve $R$. Now the short exact sequence of sheaves
\[0 \to \oo_S(H) \to \oo_S(H + R) \to \oo_S(H + R)|_R \to 0\]
gives a long exact sequence in cohomology
\[\cdots \to H^0(\oo_S(H + R)) \to H^0(\oo_S(H + R)|_R) \to H^1(\oo_S(H)) \to \cdots.\]
Since the complete linear series
attached to $\oo_S(H + R)|_R \simeq \oo_{\pp^1}(2)$ is basepoint-free,
it suffices to show that
$H^0(\oo_S(H + R)) \to H^0(\oo_S(H + R)|_R)$ is surjective. For this,
it suffices to note that $H^1(\oo_S(H)) = 0$ by Kodaira vanishing.
Thus, $H + R$ is basepoint-free. In particular, the Bertini
theorem implies the general curve of class $H + R$ is smooth.
Such a curve is of degree~$10$ and genus~$7$;
in particular it is a BN-curve by results
of \cite{iliev}.
So it suffices to find a curve of class $H + R$ on $S$
passing through $p_1, p_2, \ldots, p_{10}$.
By construction, as divisors on $D$, we have
\[p_1 + p_2 + \cdots + p_{10} \sim H + R.\]
By Lemma~\ref{pic-res-enough}, it suffices to show
$\dim H^0(\oo_S(H + R)) = 8$ and $\dim H^0(\oo_S(R)) = 1$.
More generally,
for any smooth curve $X \subset S$
of genus $g$,
we claim $\dim H^0(\oo_S(X)) = 1 + g$. To see this, we use the exact sequence
\[0 \to \oo_S \to \oo_S(X) \to \oo_S(X)|_X \to 0,\]
which gives rise to a long exact sequence in cohomology
\[0 \to H^0(\oo_S) \to H^0(\oo_S(X)) \to H^0(\oo_S(X)|_X) \to H^1(\oo_S) \to \cdots.\]
Because $H^1(\oo_S) = 0$, we thus have
\begin{align*}
\dim H^0(\oo_S(X)) &= \dim H^0(\oo_S(X)|_X) + \dim H^0(\oo_S) \\
&= \dim H^0(K_S(X)|_X) + 1 \\
&= \dim H^0(K_X) + 1 \\
&= g + 1.
\end{align*}
In particular, $\dim H^0(\oo_S(H + R)) = 8$ and $\dim H^0(\oo_S(R)) = 1$,
as desired.
Since general points impose independent conditions on the $10$-dimensional
space of quadrics, a general collection of $10$ points
will not lie on a quadric. In particular, our hyperplane
section here is not a general collection of $10$ points.
\end{proof}
\chapter*{Preface}
\holmes{The scribes didn't have a large enough set from which to determine patterns.}{Brandon Sanderson}{The Hero of Ages}
\bigskip\noindent
This partial solution manual to our book {\em Introducing Monte Carlo Methods with R},
published by Springer Verlag in the {\sf Use R!} series in December 2009, has been compiled
both from our own solutions and from homeworks
written by the following Paris-Dauphine students in the 2009-2010 Master in Statistical Information Processing (TSI):
Thomas Bredillet, Anne Sabourin, and Jiazi Tang. Whenever appropriate, the \R code
of those students has been identified by a \verb=# (C.) Name= in the text.
We are grateful to those students for allowing us to use their solutions.
A few solutions in Chapter 4 are also taken {\em verbatim} from
the solution manual to {\em Monte Carlo Statistical Methods} compiled by Roberto Casarin from the University of Brescia
(and only available to instructors from Springer Verlag).
We have also incorporated in this manual indications about some typos found in the first printing that came
to our attention while composing it. Following the new ``print on demand''
strategy of Springer Verlag, these typos will not be found in the versions of the book purchased in the coming months and should
thus be ignored. (Christian Robert's book webpage at Universit\'e Paris-Dauphine \verb+www.ceremade.dauphine.fr/~xian/books.html+
is a better reference for the ``complete" list of typos.)
Reproducing the warning Jean-Michel Marin and Christian P.~Robert
wrote at the start of the solution manual to {\em Bayesian Core}, let us stress here that
some self-study readers of {\em Introducing Monte Carlo Methods with {\sf R}} may come to the realisation that the solutions provided
here are too sketchy for them because the way we wrote those solutions assumes some minimal familiarity with the
mathematics, the probability theory, and the statistics behind the arguments. There is unfortunately a limit to the time and
to the efforts we can put in this solution manual and studying {\em Introducing Monte Carlo Methods with {\sf R}}
requires some prerequisites in maths
(such as matrix algebra and Riemann integrals), in probability theory (such as the use of joint and conditional densities)
and some bases of statistics (such as the notions of inference, sufficiency and confidence sets) that we cannot cover here.
Casella and Berger (2001) is a good reference in case a reader is lost with the ``basic" concepts or sketchy math derivations.
We obviously welcome solutions, comments and questions on possibly erroneous or ambiguous solutions, as well as suggestions for
more elegant or more complete solutions: since this manual is distributed both freely and independently
from the book, it can be updated and corrected [almost] in real time! Note however that the {\sf R} codes given in the following
pages are not optimised because we prefer to use simple and understandable codes, rather than condensed and
efficient codes, both for time constraints and for pedagogical purposes: some codes were written by our students.
Therefore, if you find better [meaning, more efficient/faster] codes than those provided along those pages, we would be
glad to hear from you, but that does not mean that we will automatically substitute your {\sf R} code for the current one,
because readability is also an important factor.
A final request: this manual comes in two versions, one corresponding to the odd-numbered exercises and
freely available to everyone, and another one corresponding to a larger collection of exercises and with restricted access
to instructors only. Duplication and dissemination of the more extensive ``instructors only" version are obviously prohibited since,
if the solutions to most exercises become freely available, the appeal of using our book as a textbook will be severely
reduced. Therefore, if you happen to possess an extended version of the manual, please refrain from distributing
it and from reproducing it.
\bigskip\noindent
{\bf Sceaux and Gainesville\hfil Christian P.~Robert~and~George Casella\break
\today\hfill}
\chapter{Gibbs Samplers}
\subsection{Exercise \ref{exo:margikov}}
The density $g_{t}$ of $(X_{t},Y_{t})$ in Algorithm \ref{al:TSGibbs} is decomposed as
\begin{align*}
g_{t}(X_{t},Y_{t}|X_{t-1},&\dots X_{0},Y_{t-1},\dots Y_{0})
= g_{t,X|Y}(X_{t}|Y_{t},X_{t-1},\dots X_{0},Y_{t-1},\dots Y_{0})\\
&\times g_{t,Y}(Y_{t}|X_{t-1},\dots X_{0},Y_{t-1},\dots Y_{0})
\end{align*}
with
$$
g_{t,Y}(Y_{t}|X_{t-1},\dots X_{0},Y_{t-1},\dots Y_{0})=f_{Y|X}(Y_{t}|X_{t-1})
$$
which only depends on $X_{t-1},\dots X_{0},Y_{t-1},\dots Y_{0}$ through
$X_{t-1}$, according to Step 1. of Algorithm \ref{al:TSGibbs}. Moreover,
$$
g_{t,X|Y}(X_{t}|Y_{t},X_{t-1},\dots X_{0},Y_{t-1},\dots Y_{0})=f_{X|Y}(X_{t}|Y_{t})
$$
only depends on $X_{t-1},\dots X_{0},Y_{t},\dots Y_{0}$ through $Y_{t}$.
Therefore,
$$
g_{t}(X_{t},Y_{t}|X_{t-1},\dots X_{0},Y_{t-1},\dots Y_{0})=g_{t}(X_{t},Y_{t}|X_{t-1})\,,
$$
which shows that this is truly a homogeneous Markov chain.
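As a concrete illustration (for an assumed bivariate normal target with correlation $\rho=0.7$, a choice of ours rather than part of the exercise), the two steps of Algorithm \ref{al:TSGibbs} can be sketched in \R as
\begin{verbatim}
# assumed bivariate normal target with correlation rho:
# iteration t only ever reads X[t-1]
rho=0.7;T=10^3
X=Y=rep(0,T)
for (t in 2:T){
Y[t]=rnorm(1,rho*X[t-1],sqrt(1-rho^2)) #Step 1.
X[t]=rnorm(1,rho*Y[t],sqrt(1-rho^2))   #Step 2.
}
\end{verbatim}
which makes the dependence of $(X_t,Y_t)$ on the past through $X_{t-1}$ alone transparent.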
\subsection{Exercise \ref{pb:multiAR}}
\begin{enumerate}
\renewcommand{\theenumi}{\alph{enumi}}
\item The (normal) full conditionals are defined in Example
\ref{ex:normgibbs2}. An \R program that implements this Gibbs
sampler is
\begin{verbatim}
# (C.) Anne Sabourin, 2009
T=500;p=5;r=0.25
X=cur=rnorm(p)
for (t in 1:T){
for (j in 1:p){
m=sum(cur[-j])/(p-1)
cur[j]=rnorm(1,(p-1)*r*m/(1+(p-2)*r),
sqrt((1+(p-2)*r-(p-1)*r^2)/(1+(p-2)*r)))
}
X=cbind(X,cur)
}
par(mfrow=c(1,5))
for (i in 1:p){
hist(X[i,],prob=TRUE,col="wheat2",xlab="",main="")
curve(dnorm(x),add=TRUE,col="sienna",lwd=2)}
\end{verbatim}
\item Using instead
\begin{verbatim}
library(mnormt)
J=matrix(1,ncol=5,nrow=5)
I=diag(c(1,1,1,1,1))
s=(1-r)*I+r*J
rmnorm(500,varcov=s)
\end{verbatim}
and checking the duration by \verb+system.time+ shows \verb=rmnorm= is about five times
faster (and exact!); a self-contained version of this timing check is sketched after this list.
\item If we consider the constraint
$$
\sum_{i=1}^{m} x_i^2 \le \sum_{i=m+1}^{p} x_i^2
$$
it imposes a truncated normal full conditional on {\em all} components. Indeed, for $1\le i\le m$,
$$
x^2_i \le \sum_{j=m+1}^{p} x_j^2 - \sum_{j=1,j\ne i}^{m} x_j^2\,,
$$
while, for $i>m$,
$$
x^2_i \ge \sum_{j=1}^{m} x_j^2 - \sum_{j=m+1,j\ne i}^{p} x_j^2\,.
$$
Note that the upper bound on $x_i^2$ when $i\le m$ {\em cannot be negative} if we start the Markov chain under the constraint.
The \verb#cur[j]=rnorm(...# line in the above \R program thus needs to be modified into a truncated normal distribution.
An alternative is to use a hybrid solution (see Section \ref{sec:MwithinG} for the validation):
we keep generating the $x_i$'s from the same plain normal full conditionals as before and we only
change the components for which the constraint remains valid, i.e.
\begin{verbatim}
for (j in 1:m){
mea=sum(cur[-j])/(p-1)
prop=rnorm(1,(p-1)*r*mea/(1+(p-2)*r),
sqrt((1+(p-2)*r-(p-1)*r^2)/(1+(p-2)*r)))
if (sum(cur[(1:m)[-j]]^2+prop^2)<sum(cur[(m+1):p]^2))
cur[j]=prop
}
for (j in (m+1):p){
mea=sum(cur[-j])/(p-1)
prop=rnorm(1,(p-1)*r*mea/(1+(p-2)*r),
sqrt((1+(p-2)*r-(p-1)*r^2)/(1+(p-2)*r)))
if (sum(cur[(1:m)]^2)<sum(cur[((m+1):p)[-j]]^2+prop^2))
cur[j]=prop
}
\end{verbatim}
Comparing the histograms with the normal $\mathcal{N}(0,1)$ shows that the marginals are no longer
normal.
\end{enumerate}
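For completeness, here is a self-contained sketch of the timing comparison invoked in question (b); wrapping the Gibbs loop inside \verb+system.time+ is one arrangement among others:
\begin{verbatim}
library(mnormt)
p=5;r=0.25
J=matrix(1,ncol=p,nrow=p)
s=(1-r)*diag(p)+r*J
system.time({
cur=rnorm(p);X=NULL
for (t in 1:500){
for (j in 1:p){
m=sum(cur[-j])/(p-1)
cur[j]=rnorm(1,(p-1)*r*m/(1+(p-2)*r),
sqrt((1+(p-2)*r-(p-1)*r^2)/(1+(p-2)*r)))
}
X=rbind(X,cur)
}
})
system.time(X2<-rmnorm(500,varcov=s))
\end{verbatim}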
\subsection{Exercise \ref{pb:censoredGibbs}}
{\bf Warning: There is a typo in Example \ref{ex:censoredGibbs}, namely that the likelihood function involves
$\Phi(\theta-a)^{n-m}$ in front of the product of normal densities... For coherence with Examples
\ref{ex:7.4.3.1} and \ref{ex:EMCensored2}, in both Example \ref{ex:censoredGibbs} and Exercise \ref{pb:censoredGibbs},
$x$ should be written $\by$, $z$ should be $\bz$, $\bar x$ should be $\bar y$, and $x_i$ should be $y_i$.}
\begin{enumerate}
\renewcommand{\theenumi}{\alph{enumi}}
\item The complete data likelihood is associated with the distribution of the uncensored data
$$
(y_1,\ldots,y_m,z_{m+1},\ldots,z_n)\,,
$$
which constitutes an iid sample of size $n$. In that case, a sufficient statistic is $\{m\bar y+
(n-m)\bar z\}/n$, which is distributed as $\mathcal{N}(\theta,1/n)$, i.e.~associated with the likelihood
$$
\exp\left\{ \dfrac{-n}{2}\,\left( \dfrac{m \bar y +(n-m) \bar z}{n} - \theta \right)^2 \right\}/\sqrt{n}\,.
$$
In this sense, the likelihood is proportional to the density of $\theta\sim{\mathcal N}(\{m \bar y +(n-m) \bar z\}/n,1/n )$.
(We acknowledge a certain vagueness in the wording of this question!)
\item The full \R code for the Gibbs sampler is
\begin{verbatim}
xdata=c(3.64,2.78,2.91,2.85,2.54,2.62,3.16,2.21,4.05,2.19,
2.97,4.32,3.56,3.39,3.59,4.13,4.21,1.68,3.88,4.33)
m=length(xdata)
n=30;a=3.5 #1/3 missing data
nsim=10^4
xbar=mean(xdata)
that=array(xbar,dim=c(nsim,1))
zbar=array(a,dim=c(nsim,1))
for (i in 2:nsim){
temp=runif(n-m,min=pnorm(a,mean=that[i-1],sd=1),max=1)
zbar[i]=mean(qnorm(temp,mean=that[i-1],sd=1))
that[i]=rnorm(1,mean=(m*xbar+(n-m)*zbar[i])/n,
sd=sqrt(1/n))
}
par(mfrow=c(1,2),mar=c(5,5,2,1))
hist(that[500:nsim],col="grey",breaks=25,
xlab=expression(theta),main="",freq=FALSE)
curve(dnorm(x,mean(that),sd=sd(that)),add=T,lwd=2)
hist(zbar[500:nsim],col="grey",breaks=25,
main="",xlab= expression(bar(Z)),freq=FALSE)
curve(dnorm(x,mean(zbar),sd=sd(zbar)),add=T,lwd=2)
\end{verbatim}
(We added the normal density curves to check how close to a normal distribution the posteriors are.)
\end{enumerate}
\subsection{Exercise \ref{pb:blood}}
\begin{enumerate}
\renewcommand{\theenumi}{\alph{enumi}}
\item Given the information provided in Table \ref{tab:GibbsBlood}, since we can reasonably assume independence
between the individuals, the distribution of the blood groups is a multinomial distribution whose density is
clearly proportional to
$$
(p_A^2+2p_A p_O)^{n_A} (p_B^2+2p_B p_O)^{n_B}(p_A p_B)^{n_{AB}}(p_O^2)^{n_O}\,,
$$
the proportionality coefficient being the multinomial coefficient
$$
{n \choose n_A\;\, n_B\;\, n_{AB}\;\, n_O}\,.
$$
\item If we break $n_A$ into $Z_A$ individuals with genotype \verb+AA+ and $n_A-Z_A$ with genotype \verb+AO+, and
similarly, $n_B$ into $Z_B$ individuals with genotype \verb+BB+ and $n_B-Z_B$ with genotype \verb+BO+, the complete
data likelihood corresponds to the extended multinomial model with likelihood proportional to
$$
(p_A^2)^{Z_A}(2p_A p_O)^{n_A-Z_A} (p_B^2)^{Z_B}(2p_B p_O)^{n_B-Z_B}(p_A p_B)^{n_{AB}}(p_O^2)^{n_O}\,.
$$
\item The Gibbs sampler we used to estimate this model is
\begin{verbatim}
nsim=5000;nA=186;nB=38;nAB=13;nO=284;
pA=array(.25,dim=c(nsim,1));pB=array(.05,dim=c(nsim,1));
for (i in 2:nsim){
pO=1-pA[i-1]-pB[i-1]
ZA=rbinom(1,nA,pA[i-1]^2/(pA[i-1]^2+2*pA[i-1]*pO));
ZB=rbinom(1,nB,pB[i-1]^2/(pB[i-1]^2+2*pB[i-1]*pO));
temp=rdirichlet(1,c(nA+nAB+ZA+1,nB+nAB+ZB+1,
nA-ZA+nB-ZB+2*nO+1));
pA[i]=temp[1];pB[i]=temp[2];
}
par(mfrow=c(1,3),mar=c(4,4,2,1))
hist(pA,main=expression(p[A]),freq=F,col="wheat2")
hist(pB,main=expression(p[B]),freq=F,col="wheat2")
hist(1-pA-pB,main=expression(p[O]),freq=F,col="wheat2")
\end{verbatim}
It uses the Dirichlet generator \verb+rdirichlet+ found in the \verb+mcsm+ package (a fallback version is sketched after this list).
\end{enumerate}
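In case \verb+mcsm+ is not at hand, a standard gamma-based Dirichlet generator (our own fallback, not the package version) can be substituted:
\begin{verbatim}
rdirichlet=function(n,par){
k=length(par)
mat=matrix(rgamma(n*k,rep(par,each=n)),ncol=k)
mat/apply(mat,1,sum)
}
\end{verbatim}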
\subsection{Exercise \ref{pb:slice}}
\begin{enumerate}
\renewcommand{\theenumi}{\alph{enumi}}
\item For the target density $f_{X}(x)=\frac{1}{2}e^{-\sqrt{x}}$, a slice sampling algorithm is
based on the full conditionals
\begin{enumerate}
\item $U^{(t+1)}\sim\mathcal{U}_{[0,f_{X}(x^{(t)})]}$
\item $X^{(t+1)}\sim\mathcal{U}_{A^{(t+1)}}$ with $A^{(t+1)}=\{ x,f(x)\geq u^{(t+1)}\}$
\end{enumerate}
Therefore, $U|x\sim\mathcal{U}(0,\frac{1}{2}e^{-\sqrt{x}})$ and, since
$A=\{ x,\frac{1}{2}e^{-\sqrt{x}}\geq u\}$,
i.e.~$A=\{ x,0\leq x\leq(\log(2u))^2\}$, we also deduce that $X|u\sim\mathcal{U}(0,(\log(2u))^2)$.
The corresponding \R code is
\begin{verbatim}
T=5000
f=function(x){
1/2*exp(-sqrt(x))}
X=c(runif(1)) ;U=c(runif(1))
for (t in 1:T){
U=c(U,runif(1,0,f(X[t])))
X=c(X,runif(1,0,(log(2*U[t+1]))^2))
}
par(mfrow=c(1,2))
hist(X,prob=TRUE,col="wheat2",xlab="",main="")
acf(X)
\end{verbatim}
\item If we define $Y=\sqrt{X}$, then
\begin{align*}
P(Y \leq y) &= P(X\leq y^{2})\\
&=\int_{0}^{y^2}\frac{1}{2}e^{-\sqrt{x}}\,\text{d}x\,.
\end{align*}
Differentiating with respect to $y$, we get the density
$$
f_{Y}(y)=y\exp(-y)
$$
which implies that $Y\sim\mathcal{G}a(2,1)$.
Simulating $X$ then follows from $X=Y^2$, as implemented after this list.
This method is obviously faster and more accurate since the sample points
are then independent.
\end{enumerate}
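The direct simulation announced in question (b) is then a two-liner, plus a graphical check against the target density:
\begin{verbatim}
Y=rgamma(5000,shape=2,rate=1)
X=Y^2
hist(X,prob=TRUE,col="wheat2",xlab="",main="")
curve(exp(-sqrt(x))/2,add=TRUE,col="sienna",lwd=2)
\end{verbatim}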
\subsection{Exercise \ref{pb:normacf}}
\begin{enumerate}
\renewcommand{\theenumi}{\alph{enumi}}
\item The linear combinations $X+Y$ and $X-Y$ are also normal, with zero mean and variances
$2(1+\rho)$ and $2(1-\rho)$, respectively. The vector $(X+Y,X-Y)$ is itself jointly normal.
Moreover,
$$
\text{cov}(X+Y,X-Y)=\mathbb{E}((X+Y)(X-Y))=\mathbb{E}(X^{2}-Y^{2})=1-1=0
$$
implies that $X+Y$ and $X-Y$ are independent.
\item If, instead,
$$
(X,Y)\sim\mathcal{N}(0,\left(\begin{array}{cc}
\sigma_{x}^{2} & \rho\sigma_{x}\sigma_{y}\\
\rho\sigma_{x}\sigma_{y} & \sigma_{y}^{2}\end{array}\right))
$$
then $\sigma_{x}^{2}\neq\sigma_{y}^{2}$ implies that
$(X+Y)$ and $(X-Y)$ are dependent since $\mathbb{E}((X+Y)(X-Y))=\sigma_{x}^{2}-\sigma_{y}^{2}$.
In this case, $X|Y=y\sim\mathcal{N}(\rho\frac{\sigma_{x}}{\sigma_{y}}y,\sigma_{x}^{2}(1-\rho^{2}))$.
We can simulate $(X,Y)$ by the following Gibbs algorithm
\begin{verbatim}
T=5000;r=0.8;sx=50;sy=100
X=rnorm(1);Y=rnorm(1)
for (t in 1:T){
Yn=rnorm(1,r*sqrt(sy/sx)*X[t],sqrt(sy*(1-r^2)))
Xn=rnorm(1,r*sqrt(sx/sy)*Yn,sqrt(sx*(1-r^2)))
X=c(X,Xn)
Y=c(Y,Yn)
}
par(mfrow=c(3,2),oma=c(0,0,5,0))
hist(X,prob=TRUE,main="",col="wheat2")
hist(Y,prob=TRUE,main="",col="wheat2")
acf(X);acf(Y);plot(X,Y);plot(X+Y,X-Y)
\end{verbatim}
\item If $\sigma_{x}\neq\sigma_{y}$, let us find $a\in\mathbb{R}$ such that $X+aY$ and $Y$ are independent.
We have $\mathbb{E}[(X+aY)(Y)]=0$ if and only if $\rho\sigma_{x}\sigma_{y}+a\sigma_{y}^{2}=0$,
i.e.~$a=-\rho\sigma_{x}/\sigma_{y}$. Therefore, $X-(\rho\sigma_{x}/\sigma_{y})Y$ and $Y$ are independent (a numerical check is sketched after this list).
\end{enumerate}
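The decorrelation derived in question (c) can be checked numerically, for instance with the arbitrary values $\rho=0.8$, $\sigma_x=2$ and $\sigma_y=3$:
\begin{verbatim}
library(MASS)
rho=.8;sx=2;sy=3
Sig=matrix(c(sx^2,rho*sx*sy,rho*sx*sy,sy^2),ncol=2)
xy=mvrnorm(10^4,mu=c(0,0),Sigma=Sig)
a=-rho*sx/sy
cor(xy[,1]+a*xy[,2],xy[,2]) #essentially zero
\end{verbatim}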
\subsection{Exercise \ref{pb:7.1}}
\begin{enumerate}
\renewcommand{\theenumi}{\alph{enumi}}
\item The likelihood function naturally involves the tail of the Poisson distribution
for those observations larger than $4$. The full conditional distributions of the observations larger than $4$
are obviously truncated Poisson distributions and the full conditional distribution of the parameter is the
Gamma distribution associated with a standard Poisson sample. Hence the Gibbs sampler.
\item The \R code we used to produce Figure \ref{fig:PoissonRB} is
\begin{verbatim}
nsim=10^3
lam=RB=rep(313/360,nsim)
z=rep(0,13)
for (j in 2:nsim){
top=round(lam[j-1]+6*sqrt(lam[j-1]))
prob=dpois(c(4:top),lam[j-1])
cprob=cumsum(prob/sum(prob))
for(i in 1:13) z[i] = 4+sum(cprob<runif(1))
RB[j]=(313+sum(z))/360
lam[j]=rgamma(1,360*RB[j],scale=1/360);
}
par(mfrow=c(1,3),mar=c(4,4,2,1))
hist(lam,col="grey",breaks=25,xlab="",
main="Empirical average")
plot(cumsum(lam)/1:nsim,ylim=c(1,1.05),type="l",
lwd=1.5,ylab="")
lines(cumsum(RB)/1:nsim,col="sienna",lwd=1.5)
hist(RB,col="sienna",breaks=62,xlab="",
main="Rao-Blackwell",xlim=c(1,1.05))
\end{verbatim}
\item When checking the execution time of both programs with \verb+system.time+,
the first one is almost ten times faster, while remaining completely correct. A natural way
to pick \verb+prob+ is
\begin{verbatim}
> qpois(.9999,lam[j-1])
[1] 6
\end{verbatim}
\end{enumerate}
\subsection{Exercise \ref{pb:Exp-Improper}}
\begin{enumerate}
\renewcommand{\theenumi}{\alph{enumi}}
\item The \R program that produced Figure \ref{fig:Exp-Improper} is
\begin{verbatim}
nsim=10^3
X=Y=rep(0,nsim)
X[1]=rexp(1) #initialize the chain
Y[1]=rexp(1) #initialize the chain
for(i in 2:nsim){
X[i]=rexp(1,rate=Y[i-1])
Y[i]=rexp(1,rate=X[i])
}
st=0.1*nsim
par(mfrow=c(1,2),mar=c(4,4,2,1))
hist(X,col="grey",breaks=25,xlab="",main="")
plot(cumsum(X)[(st+1):nsim]/(1:(nsim-st)),type="l",ylab="")
\end{verbatim}
\item Using the Hammersley--Clifford Theorem {\em per se} means using $f(y|x)/f(x|y)=x/y$ which is {\em not integrable}.
If we omit this major problem, we have
$$
f(x,y) = \frac{x\,\exp\{-xy\}}{x\, {\displaystyle \int \dfrac{\text{d}y}{y}}} \propto \exp\{-xy\}
$$
(except that the proportionality term is infinity!).
\item If we constrain both conditionals to $(0,B)$, the Hammersley--Clifford Theorem gives
\begin{align*}
f(x,y) &= \frac{\exp\{-xy\}/(1-e^{-xB})}{{\displaystyle \int \dfrac{1-e^{-yB}}{y(1-e^{-xB})}\,\text{d}y}}\\
&= \frac{\exp\{-xy\}}{{\displaystyle \int \dfrac{1-e^{-yB}}{y}\,\text{d}y}}\\
&\propto \exp\{-xy\}\,,
\end{align*}
since the conditional exponential distributions are truncated. This joint distribution is then well-defined on
$(0,B)^2$. A Gibbs sampler simulating from this joint distribution is for instance
\begin{verbatim}
B=10
X=Y=rep(0,nsim)
X[1]=rexp(1) #initialize the chain
Y[1]=rexp(1) #initialize the chain
for(i in 2:nsim){ #inversion method
X[i]=-log(1-runif(1)*(1-exp(-B*Y[i-1])))/Y[i-1]
Y[i]=-log(1-runif(1)*(1-exp(-B*X[i])))/X[i]
}
st=0.1*nsim
marge=function(x){ (1-exp(-B*x))/x}
nmarge=function(x){
marge(x)/integrate(marge,low=0,up=B)$val}
par(mfrow=c(1,2),mar=c(4,4,2,1))
hist(X,col="wheat2",breaks=25,xlab="",main="",prob=TRUE)
curve(nmarge,add=T,lwd=2,col="sienna")
plot(cumsum(X)[(st+1):nsim]/c(1:(nsim-st)),type="l",
lwd=1.5,ylab="")
\end{verbatim}
where the simulation of the truncated exponential is done by inverting the cdf (and where the
true marginal is represented against the histogram).
\end{enumerate}
\subsection{Exercise \ref{pb:firsthier}}
Let us define
\begin{eqnarray*}
f(x) & = & \frac{b^{a}x^{a-1}e^{-bx}}{\Gamma(a)}\,,\\
g(x) & = & \frac{1}{x}=y\,,\end{eqnarray*}
then we have
\begin{eqnarray*}
f_{Y}(y) & = & f_{X}\left(g^{-1}(y)\right)\left|\frac{\text{d}}{\text{d}y}g^{-1}(y)\right|\\
& = & \frac{b^{a}}{\Gamma(a)}\left({1}/{y}\right)^{a-1}\exp\left(-{b}/{y}\right)\frac{1}{y^{2}}\\
& = & \frac{b^{a}}{\Gamma(a)}\left({1}/{y}\right)^{a+1}\exp\left(-{b}/{y}\right)\,,
\end{eqnarray*}
which is the ${\cal IG}(a,b)$ density.
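This change of variable is easily validated by simulation, e.g.~with the arbitrary values $a=3$ and $b=2$:
\begin{verbatim}
a=3;b=2
y=1/rgamma(10^4,shape=a,rate=b)
hist(y,prob=TRUE,breaks=50,col="wheat2",xlab="y",main="")
curve(b^a/gamma(a)*x^(-a-1)*exp(-b/x),add=TRUE,
col="sienna",lwd=2)
\end{verbatim}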
\subsection{Exercise \ref{pb:truncnorm}}
{\bf Warning: The function \verb+rtnorm+ requires a predefined \verb+sigma+ that should be part
of the arguments, as in\\
\verb+rtnorm=function(n=1,mu=0,lo=-Inf,up=Inf,sigma=1)+.}\\
Since the \verb+rtnorm+ function is exact (within the precision of the \verb+qnorm+ and \verb+pnorm+
functions), the implementation in \R is straightforward:
\begin{verbatim}
h1=rtnorm(10^4,lo=-1,up=1)
h2=rtnorm(10^4,up=1)
h3=rtnorm(10^4,lo=3)
par(mfrow=c(1,3),mar=c(4,4,2,1))
hist(h1,freq=FALSE,xlab="x",xlim=c(-1,1),col="wheat2")
dnormt=function(x){ dnorm(x)/(pnorm(1)-pnorm(-1))}
curve(dnormt,add=T,col="sienna")
hist(h2,freq=FALSE,xlab="x",xlim=c(-4,1),col="wheat2")
dnormt=function(x){ dnorm(x)/pnorm(1)}
curve(dnormt,add=T,col="sienna")
hist(h3,freq=FALSE,xlab="x",xlim=c(3,5),col="wheat2")
dnormt=function(x){ dnorm(x)/pnorm(-3)}
curve(dnormt,add=T,col="sienna")
\end{verbatim}
\subsection{Exercise \ref{pb:freq_2}}
\begin{enumerate}
\renewcommand{\theenumi}{\alph{enumi}}
\item Since, for $j=1,2$,
$$
(1-\theta_1-\theta_2)^{x_5+\alpha_3-1} = \sum_{i=0}^{x_5+\alpha_3-1}
{x_5+\alpha_3-1\choose i} (1-\theta_j)^i(-\theta_{3-j})^{x_5+\alpha_3-1-i}\,,
$$
when $\alpha_3$ is an integer, it is clearly possible to express $\pi(\theta_1,\theta_2|x)$ as
a sum of terms that are products of a polynomial function of $\theta_1$ and of a polynomial
function of $\theta_2$. It is therefore straightforward to integrate those terms in either $\theta_1$
or $\theta_2$.
\item For the same reason as above, rewriting $\pi(\theta_1,\theta_2|x)$ as a density in $(\theta_1,\xi)$
leads to a product of polynomials in $\theta_1$, all of which can be expanded and integrated in $\theta_1$,
producing in the end a sum of functions of the form
$$
\xi^{\delta}\big/(1+\xi)^{x_1+x_2+x_5+\alpha_1+\alpha_3-2}\,,
$$
namely a mixture of $F$ densities.
\item The Gibbs sampler based on (\ref{eq:tannerFull}) is available in the \verb+mcsm+ package.
\end{enumerate}
\subsection{Exercise \ref{pb:RBall}}
{\bf Warning: There is a typo in Example 7.3, \verb+sigma+ should be defined as \verb+sigma2+
and \verb+sigma2{1}+ should be \verb+sigma2[1]+...}\\
\begin{enumerate}
\renewcommand{\theenumi}{\alph{enumi}}
\item In Example \ref{ex:betabi}, since $\theta|x\sim {\cal B}e(x+a,n-x+b)$, we have clearly $\BE[\theta \vert x] = (x+a)/(n+a+b)$ (with a missing
parenthesis). The comparison between the empirical average and of the Rao--Blackwellization version is of the form
\begin{verbatim}
plot(cumsum(T)/(1:Nsim),type="l",col="grey50",
xlab="iterations",ylab="",main="Example 7.2")
lines(cumsum((X+a))/((1:Nsim)*(n+a+b)),col="sienna")
\end{verbatim}
All comparisons are gathered in Figure \ref{fig:allrb's}.
\item In Example \ref{ex:Metab-1}, equation (\ref{eq:firstposterior}) defines two standard distributions as full
conditionals. Since $\pi(\theta|\bx,\sigma^2)$ is a normal distribution with mean and variance provided two lines
below, we obviously have
$$
\BE[\theta | \bx,\sigma^2] = \frac{\sigma^2}{\sigma^2+n \tau^2}\;\theta_0 + \frac{n\tau^2}{\sigma^2+n \tau^2} \;\bar x\,.
$$
The modification in the \R program follows
\begin{verbatim}
plot(cumsum(theta)/(1:Nsim),type="l",col="grey50",
xlab="iterations",ylab="",main="Example 7.3")
ylab="",main="Example 7.3")
lines(cumsum(B*theta0+(1-B)*xbar)/(1:Nsim)),col="sienna")
\end{verbatim}
\item The full conditionals of Example \ref{ex:Metab-2} given in Equation (\ref{eq:onewayfull})
are more numerous but similarly standard, therefore
$$
\BE[\theta_i | \bar X_i ,\sigma^2] = \frac{\sigma^2 }{\sigma^2+n_i \tau^2} \mu+\frac{n_i \tau^2 }{\sigma^2+n_i \tau^2}\bar X_i
$$
follows from this decomposition, with the \R lines added to the \verb+mcsm+ \verb+randomeff+ function
\begin{verbatim}
plot(cumsum(theta1)/(1:nsim),type="l",col="grey50",
xlab="iterations",ylab="",main="Example 7.5")
lines(cumsum((mu*sigma2+n1*tau2*x1bar)/(sigma2+n1*tau2))/
(1:nsim),col="sienna")
\end{verbatim}
\item In Example \ref{ex:censoredGibbs}, the complete-data model is a standard normal model with
variance one, hence $\BE[\theta \vert x, z ] = \dfrac{m \bar x +(n-m) \bar z}{n}$. The additional lines
in the \R code are
\begin{verbatim}
plot(cumsum(that)/(1:Nsim),type="l",col="grey50",
xlab="iterations",ylab="",main="Example 7.6")
lines(cumsum((m/n)*xbar+(1-m/n)*zbar)/(1:Nsim),
col="sienna")
\end{verbatim}
\item In Example \ref{ex:5.7}, the full conditional on $\lambda_i$ is
$\lambda_i|\beta,t_i,x_i \sim \CG (x_i+\alpha,t_i+\beta)$ and hence
$\BE[\lambda_i|\beta,t_i,x_i] = (x_i+\alpha)/(t_i+\beta)$. The corresponding addition
in the \R code is
\begin{verbatim}
plot(cumsum(lambda[,1])/(1:Nsim),type="l",col="grey50",
xlab="iterations",ylab="",main="Example 7.12")
lines(cumsum((xdata[1]+alpha)/(Time[1]+beta))/(1:Nsim),
col="sienna")
\end{verbatim}
\end{enumerate}
\begin{figure}
\centerline{\includegraphics[width=\textwidth]{Exercise715.jpg}}
\caption{\label{fig:allrb's}
Comparison of the convergences of the plain average with its Rao-Blackwellized counterpart for
five different examples. The Rao--Blackwellized version is plotted in {\sf sienna} red and is always more
stable than the original version.}
\end{figure}
\chapter{Metropolis-Hastings Algorithms}
\subsection{Exercise \ref{exo:AR}}
A simple \R program to simulate this chain is
\begin{verbatim}
# (C.) Jiazi Tang, 2009
x=1:10^4
x[1]=rnorm(1)
r=0.9
for (i in 2:10^4){
x[i]=r*x[i-1]+rnorm(1) }
hist(x,freq=F,col="wheat2",main="")
curve(dnorm(x,sd=1/sqrt(1-r^2)),add=T,col="tomato")
\end{verbatim}
\subsection{Exercise \ref{exo:rho}}
When $q(y|x)=g(y)$, we have
\begin{align*}
\rho(x,y) &= \min\left(\frac{f(y)}{f(x)} \frac{q(x|y)}{q(y|x)},1\right)\\
&= \min\left(\frac{f(y)}{f(x)} \frac{g(x)}{g(y)},1\right)\,.
\end{align*}
Since the acceptance probability satisfies
$$
\frac{f(y)}{f(x)} \frac{g(x)}{g(y)} \ge \frac{f(y)/g(y)}{\max_x f(x)/g(x)}
$$
it is larger for Metropolis--Hastings than for accept-reject.
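As a numerical illustration, take the ${\cal B}e(2.7,6.3)$ target and uniform candidate of Exercise \ref{pb:beta}: the average accept-reject acceptance probability is $1/M$ with $M=\max_x f(x)/g(x)$,
\begin{verbatim}
a=2.7;b=6.3
M=optimize(function(x) dbeta(x,a,b),
interval=c(0,1),maximum=TRUE)$objective
1/M #approximately 0.37
\end{verbatim}
which indeed falls below the Metropolis--Hastings acceptance rate of about $0.46$ found in Exercise \ref{pb:beta}.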
\subsection{Exercise \ref{exo:mocho}}
\begin{enumerate}
\renewcommand{\theenumi}{\alph{enumi}}
\item The first property follows
from a standard property of the normal distribution, namely that the linear transform of a normal
is again normal. The second one is a consequence of the decomposition $y = X\beta + \epsilon$, when
$\epsilon\sim\mathcal{N}_n(0,\sigma^2 I_n)$ is independent from $X\beta$.
\item This derivation is detailed in Marin and Robert (2007, Chapter 3, Exercise 3.9).
Since
$$
\by|\sigma^2,X\sim\mathcal{N}_n(X\tilde\beta,\sigma^2(I_n+n X(X^\text{T} X)^{-1}X^\text{T} ))\,,
$$
integrating in $\sigma^2$ with $\pi(\sigma^2)=1/\sigma^2$ yields
\begin{eqnarray*}
f(\by|X) & = & (n+1)^{-(k+1)/2}\pi^{-n/2}\Gamma(n/2)\left[\by^\text{T} \by
-\frac{n}{n+1}\by^\text{T} X(X^\text{T} X)^{-1}X^\text{T} \by\right. \\
&&\qquad -\left.\frac{1}{n+1}\tilde\beta^\text{T} X^\text{T} X\tilde\beta\right]^{-n/2}.
\end{eqnarray*}
Using the \R function \verb+dmt(mnormt)+, we obtain the marginal density for the swiss dataset:
\begin{verbatim}
> y=log(as.vector(swiss[,1]))
> X=as.matrix(swiss[,2:6])
> library(mnormt)
> dmt(y,S=diag(length(y))+
[1] 2.096078e-63
\end{verbatim}
with the prior value $\tilde\beta=0$.
\end{enumerate}
\subsection{Exercise \ref{pb:beta}}
\begin{enumerate}
\renewcommand{\theenumi}{\alph{enumi}}
\item We generate a Metropolis--Hastings sample from the ${\cal B}e(2.7,6.3)$ density using uniform simulations:
\begin{verbatim}
# (C.) Thomas Bredillet, 2009
Nsim=10^4
a=2.7;b=6.3
X=runif(Nsim)
last=X[1]
for (i in 1:Nsim) {
cand=rbeta(1,1,1)
alpha=(dbeta(cand,a,b)/dbeta(last,a,b))/
(dbeta(cand,1,1)/dbeta(last,1,1))
if (runif(1)<alpha)
last=cand
X[i]=last
}
hist(X,prob=TRUE,col="wheat2",xlab="",ylab="",
main="Beta(2.7,6.3) simulation")
curve(dbeta(x,a,b),add=T,lwd=2,col="sienna2")
\end{verbatim}
The acceptance rate is estimated by
\begin{verbatim}
> length(unique(X))/Nsim
[1] 0.458
\end{verbatim}
If instead we use a ${\cal B}e(20,60)$ proposal, the modified lines in the \R program are
\begin{verbatim}
cand=rbeta(1,20,60)
alpha=(dbeta(cand,a,b)/dbeta(last,a,b))/
(dbeta(cand,20,60)/dbeta(last,20,60))
\end{verbatim}
and the acceptance rate drops to zero!
\item In the case of a truncated beta, the following \R program
\begin{verbatim}
Nsim=5000
a=2.7;b=6.3;c=0.25;d=0.75
X=rep(runif(1),Nsim)
test2=function(){
last=X[1]
for (i in 1:Nsim){
cand=rbeta(1,2,6)
alpha=(dbeta(cand,a,b)/dbeta(last,a,b))/
(dbeta(cand,2,6)/dbeta(last,2,6))
if ((runif(1)<alpha)&&(cand<d)&&(c<cand))
last=cand
X[i]=last}
}
test1=function(){
last=X[1]
for (i in 1:Nsim){
cand=runif(1,c,d)
alpha=(dbeta(cand,a,b)/dbeta(last,a,b))
if ((runif(1)<alpha)&&(cand<d)&&(c<cand))
last=cand
X[i]=last
}
}
system.time(test1());system.time(test2())
\end{verbatim}
shows very similar running times but more efficiency for the beta proposal, since the
acceptance rates are approximated by $0.51$ and $0.72$ for \verb+test1+ and \verb+test2+,
respectively. When changing to $c=0.25$, $d=0.75$, \verb+test1+ is more efficient than \verb=test2=,
with acceptances rates of approximately $0.58$ and $0.41$, respectively.
\end{enumerate}
\subsection{Exercise \ref{pb:met_compare}}
\begin{enumerate}
\renewcommand{\theenumi}{\alph{enumi}}
\item The Accept-Reject~algorithm with a Gamma $\CG(4,7)$ candidate can be implemented as follows
\begin{verbatim}
# (C.) Jiazi Tang, 2009
g47=rgamma(5000,4,7)
u=runif(5000,max=dgamma(g47,4,7))
x=g47[u<dgamma(g47,4.3,6.2)]
par(mfrow=c(1,3),mar=c(4,4,1,1))
hist(x,freq=FALSE,xlab="",ylab="",col="wheat2",
main="Accept-Reject with Ga(4.7) proposal")
curve(dgamma(x,4.3,6.2),lwd=2,col="sienna",add=T)
\end{verbatim}
The efficiency of the simulation is given by
\begin{verbatim}
> length(x)/5000
[1] 0.8374
\end{verbatim}
\item The Metropolis-Hastings ~algorithm with a Gamma $\CG(4,7)$ candidate can be implemented as follows
\begin{verbatim}
# (C.) Jiazi Tang, 2009
X=rep(0,5000)
X[1]=rgamma(1,4.3,6.2)
for (t in 2:5000){
rho=(dgamma(X[t-1],4,7)*dgamma(g47[t],4.3,6.2))/
(dgamma(g47[t],4,7)*dgamma(X[t-1],4.3,6.2))
X[t]=X[t-1]+(g47[t]-X[t-1])*(runif(1)<rho)
}
hist(X,freq=FALSE,xlab="",ylab="",col="wheat2",
main="Metropolis-Hastings with Ga(4,7) proposal")
curve(dgamma(x,4.3,6.2),lwd=2,col="sienna",add=T)
\end{verbatim}
Its efficiency is
\begin{verbatim}
> length(unique(X))/5000
[1] 0.79
\end{verbatim}
\item The Metropolis-Hastings~algorithm with a Gamma $\CG(5,6)$ candidate can be implemented as follows
\begin{verbatim}
# (C.) Jiazi Tang, 2009
g56=rgamma(5000,5,6)
X[1]=rgamma(1,4.3,6.2)
for (t in 2:5000){
rho=(dgamma(X[t-1],5,6)*dgamma(g56[t],4.3,6.2))/
(dgamma(g56[t],5,6)*dgamma(X[t-1],4.3,6.2))
X[t]=X[t-1]+(g56[t]-X[t-1])*(runif(1)<rho)
}
hist(X,freq=FALSE,xlab="",ylab="",col="wheat2",
main="Metropolis-Hastings with Ga(5,6) proposal")
curve(dgamma(x,4.3,6.2),lwd=2,col="sienna",add=T)
\end{verbatim}
Its efficiency is
\begin{verbatim}
> length(unique(X))/5000
[1] 0.7678
\end{verbatim}
which is therefore quite similar to the previous proposal.
\end{enumerate}
\subsection{Exercise \ref{exo:brakin}}
\begin{enumerate}
\renewcommand{\theenumi}{\arabic{enumi}.}
\item Using the candidate given in Example \ref{ex:braking} means using the \verb+Braking+ \R program of
our package \verb+mcsm+. In the earlier version, there is a missing link in the \R function which must
then be corrected by changing
\begin{verbatim}
data=read.table("BrakingData.txt",sep = "",header=T)
x=data[,1]
y=data[,2]
\end{verbatim}
into
\begin{verbatim}
x=cars[,1]
y=cars[,2]
\end{verbatim}
In addition, since the original \verb$Braking$ function does not return the simulated chains, a final line
\begin{verbatim}
list(a=b1hat,b=b2hat,c=b3hat,sig=s2hat)
\end{verbatim}
must be added into the function.
\item If we save the chains as \verb+mcmc=Braking()+ (note that we use $10^3$ simulations instead of $500$),
the graphs assessing convergence can be plotted by
\begin{verbatim}
par(mfrow=c(3,3),mar=c(4,4,2,1))
plot(mcmc$a,type="l",xlab="",ylab="a");acf(mcmc$a)
hist(mcmc$a,prob=T,main="",yla="",xla="a",col="wheat2")
plot(mcmc$b,type="l",xlab="",ylab="b");acf(mcmc$b)
hist(mcmc$b,prob=T,main="",yla="",xla="b",col="wheat2")
plot(mcmc$c,type="l",xlab="",ylab="c");acf(mcmc$c)
hist(mcmc$c,prob=T,main="",yla="",xla="c",col="wheat2")
\end{verbatim}
Autocorrelation graphs provided by \verb+acf+ show a strong correlation across iterations, while the raw plots
of the sequences show poor acceptance rates. The histograms are clearly unstable as well. These $10^3$ iterations
do not appear to be sufficient in this case.
\item Using
\begin{verbatim}
> quantile(mcmc$a,c(.025,.975))
2.5% 97.5%
-6.462483 12.511916
\end{verbatim}
and the same for $b$ and $c$ provides approximate confidence intervals on the three parameters.
\end{enumerate}
\subsection{Exercise \ref{ex:challenger2}}
{\bf Warning: There is a typo in question b in that the candidate must also be a double-exponential for $\alpha$, since
there is no reason for $\alpha$ to be positive...}
\begin{enumerate}
\renewcommand{\theenumi}{\arabic{enumi}}
\item The dataset {\tt challenger} is provided with the \verb+mcsm+ package, thus available as
\begin{verbatim}
> library(mcsm)
> data(challenger)
\end{verbatim}
Running a regular logistic regression is a simple call to \verb+glm+:
\begin{verbatim}
> temper=challenger[,2]
> failur=challenger[,1]
> summary(glm(failur~temper, family = binomial))
Deviance Residuals:
Min 1Q Median 3Q Max
-1.0611 -0.7613 -0.3783 0.4524 2.2175
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 15.0429 7.3786 2.039 0.0415 *
temper -0.2322 0.1082 -2.145 0.0320 *
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 28.267 on 22 degrees of freedom
Residual deviance: 20.315 on 21 degrees of freedom
AIC: 24.315
\end{verbatim}
The MLE's and the associated covariance matrix are given by
\begin{verbatim}
> challe=summary(glm(failur~temper, family = binomial))
> beta=as.vector(challe$coef[,1])
> challe$cov.unscaled
(Intercept) temper
(Intercept) 54.4441826 -0.79638547
temper -0.7963855 0.01171512
\end{verbatim}
The result of this estimation can be checked by
\begin{verbatim}
plot(temper,failur,pch=19,col="red4",
xlab="temperatures",ylab="failures")
curve(1/(1+exp(-beta[1]-beta[2]*x)),add=TRUE,col="gold2",lwd=2)
\end{verbatim}
and the curve shows a very clear impact of the temperature.
\item The Metropolis--Hastings resolution is based on the \verb+challenge(mcsm)+ function, using the same
prior on the coefficients, $\alpha\sim\mathcal{N}(0,25)$, $\beta\sim\mathcal{N}(0,25/s^2_x)$, where $s^2_x$
is the empirical variance of the temperatures.
\begin{verbatim}
Nsim=10^4
x=temper
y=failur
sigmaa=5
sigmab=5/sd(x)
lpost=function(a,b){
sum(y*(a+b*x)-log(1+exp(a+b*x)))+
dnorm(a,sd=sigmaa,log=TRUE)+dnorm(b,sd=sigmab,log=TRUE)
}
a=b=rep(0,Nsim)
a[1]=beta[1]
b[1]=beta[2]
#scale for the proposals
scala=sqrt(challe$cov.un[1,1])
scalb=sqrt(challe$cov.un[2,2])
for (t in 2:Nsim){
propa=a[t-1]+sample(c(-1,1),1)*rexp(1)*scala
if (log(runif(1))<lpost(propa,b[t-1])-
lpost(a[t-1],b[t-1])) a[t]=propa
else a[t]=a[t-1]
propb=b[t-1]+sample(c(-1,1),1)*rexp(1)*scalb
if (log(runif(1))<lpost(a[t],propb)-
lpost(a[t],b[t-1])) b[t]=propb
else b[t]=b[t-1]
}
\end{verbatim}
The acceptance rate is low
\begin{verbatim}
> length(unique(a))/Nsim
[1] 0.1031
> length(unique(b))/Nsim
[1] 0.1006
\end{verbatim}
but still acceptable.
\item Exploring the output can be done via graphs as follows
\begin{verbatim}
par(mfrow=c(3,3),mar=c(4,4,2,1))
plot(a,type="l",xlab="iterations",ylab=expression(alpha))
hist(a,prob=TRUE,col="wheat2",xlab=expression(alpha),main="")
acf(a,ylab=expression(alpha))
plot(b,type="l",xlab="iterations",ylab=expression(beta))
hist(b,prob=TRUE,col="wheat2",xlab=expression(beta),main="")
acf(b,ylab=expression(beta))
plot(a,b,type="l",xlab=expression(alpha),ylab=expression(beta))
plot(temper,failur,pch=19,col="red4",
xlab="temperatures",ylab="failures")
for (t in seq(100,Nsim,le=100)) curve(1/(1+exp(-a[t]-b[t]*x)),
add=TRUE,col="grey65",lwd=2)
curve(1/(1+exp(-mean(a)-mean(b)*x)),add=TRUE,col="gold2",lwd=2.5)
postal=rep(0,1000);i=1
for (t in seq(100,Nsim,le=1000)){ postal[i]=lpost(a[t],b[t]);i=i+1}
plot(seq(100,Nsim,le=1000),postal,type="l",
xlab="iterations",ylab="log-posterior")
abline(h=lpost(a[1],b[1]),col="sienna",lty=2)
\end{verbatim}
which shows a slow convergence of the algorithm (see the \verb+acf+ graphs in Figure \ref{fig:mhuttle}!)
\item The predictions of failure are given by
\begin{verbatim}
> mean(1/(1+exp(-a-b*50)))
[1] 0.6898612
> mean(1/(1+exp(-a-b*60)))
[1] 0.4892585
> mean(1/(1+exp(-a-b*70)))
[1] 0.265691
\end{verbatim}
\end{enumerate}
\begin{figure}
\centerline{\includegraphics[width=\textwidth]{mhuttle.jpg}}
\caption{\label{fig:mhuttle}
Graphical checks of the convergence of the Metropolis--Hastings algorithm associated with
the {\sf challenger} dataset and a logistic regression model.}
\end{figure}
\subsection{Exercise \ref{pb:Norm-DE}}
{\bf Warning: There is a typo in question c, which should involve $\mathcal{N}(0,\omega)$
candidates instead of $\mathcal{L}(0,\omega)$...}\\
\begin{enumerate}
\renewcommand{\theenumi}{\alph{enumi}}
\item An \R program to produce the three evaluations is
\begin{verbatim}
# (C.) Thomas Bredillet, 2009
Nsim=5000
A=B=runif(Nsim)
alpha=1;alpha2=3
last=A[1]
a=0;b=1
cand=ifelse(runif(Nsim)>0.5,1,-1) * rexp(Nsim)/alpha
for (i in 1:Nsim){
rate=(dnorm(cand[i],a,b^2)/dnorm(last,a,b^2))/
(exp(-alpha*abs(cand[i]))/exp(-alpha*abs(last)))
if (runif(1)<rate) last=cand[i]
A[i]=last
}
cand=ifelse(runif(Nsim)>0.5,1,-1) * rexp(Nsim)/alpha2
for (i in 1:Nsim) {
rate=(dnorm(cand[i],a,b^2)/dnorm(last,a,b^2))/
(exp(-alpha2*abs(cand[i]))/exp(-alpha2*abs(last)))
if (runif(1)<rate) last=cand[i]
B[i]=last
}
par (mfrow=c(1,3),mar=c(4,4,2,1))
est1=cumsum(A)/(1:Nsim)
est2=cumsum(B)/(1:Nsim)
plot(est1,type="l",xlab="iterations",ylab="",lwd=2)
lines(est2,lwd="2",col="gold2")
acf(A)
acf(B)
\end{verbatim}
\item The acceptance rate is given by \verb+length(unique(B))/Nsim+, equal to
$0.49$ in the current simulation. A plot of the acceptance rates can be done
via the \R program
\begin{verbatim}
alf=seq(1,10,le=50)
cand0=ifelse(runif(Nsim)>0.5,1,-1) * rexp(Nsim)
acce=rep(0,50)
for (j in 1:50){
cand=cand0/alf[j]
last=A[1]
for (i in 2:Nsim){
rate=(dnorm(cand[i],a,b^2)/dnorm(last,a,b^2))/
(exp(-alf[j]*abs(cand[i]))/exp(-alf[j]*abs(last)))
if (runif(1)<rate) last=cand[i]
A[i]=last
}
acce[j]=length(unique(A))/Nsim
}
par(mfrow=c(1,3),mar=c(4,4,2,1))
plot(alf,acce,xlab="",ylab="",type="l",main="Laplace iid")
\end{verbatim}
The highest acceptance rate is obtained for the smallest value of $\alpha$.
\item The equivalent of the above \R program is
\begin{verbatim}
ome=sqrt(seq(.01,10,le=50))
cand0=rnorm(Nsim)
acce=rep(0,50)
for (j in 1:50){
cand=cand0*ome[j]
last=A[1]
for (i in 2:Nsim){
rate=(dnorm(cand[i],a,b^2)/dnorm(last,a,b^2))/
(dnorm(cand[i],sd=ome[j])/dnorm(last,sd=ome[j]))
if (runif(1)<rate) last=cand[i]
A[i]=last
}
acce[j]=length(unique(A))/Nsim
}
plot(ome^2,acce,xlab="",ylab="",type="l",main="Normal iid")
\end{verbatim}
The highest acceptance rate is (unsurprisingly) obtained for $\omega$ close to $1$.
\item The equivalent of the above \R program is
\begin{verbatim}
alf=seq(.1,10,le=50)
cand0=ifelse(runif(Nsim)>0.5,1,-1) * rexp(Nsim)
acce=rep(0,50)
for (j in 1:50){
eps=cand0/alf[j]
last=A[1]
for (i in 2:Nsim){
cand[i]=last+eps[i]
rate=dnorm(cand[i],a,b^2)/dnorm(last,a,b^2)
if (runif(1)<rate) last=cand[i]
A[i]=last
}
acce[j]=length(unique(A))/Nsim
}
plot(alf,acce,xlab="",ylab="",type="l",main="Laplace random walk")
\end{verbatim}
Unsurprisingly, as $\alpha$ increases, so does the acceptance rate. However, given that
this is a random walk proposal, higher acceptance rates do not mean better performances
(see Section 6.5).
\end{enumerate}
\chapter{Convergence Monitoring for MCMC Algorithms}
\subsection{Exercise \ref{exo:lem:6.1}}
{\bf Warning: Strictly speaking, we need to assume that the Markov chain $(x^{(t)})$ has a
finite variance for the $h$ transform, since the assumption that $\mathbb{E}_f[h^2(X)]$ exists is not
sufficient (see \citealp{meyn:tweedie:1993}).}
This result was established by \cite{maceachern:berliner:1994}.
We have the proof detailed as Lemma 12.2 in \cite{robert:casella:2004} (with the same
additional assumption on the convergence of the Markov chain missing!).
Define $\delta_k^1,\ldots,\delta_k^{k-1}$
as the shifted versions of $\delta_k = \delta_k^0$; that is,
$$
\delta_k^i = {1 \over T} \; \sum_{t=1}^{T} \; h(\theta^{(tk-i)}) ,
\qquad\qquad i=0,1,\ldots,k-1 \;.
$$
The estimator $\delta_1$ can then be written as $\delta_1 =
{1 \over k} \; \sum_{i=0}^{k-1} \; \delta_k^i $, and hence
\begin{eqnarray*}
{\mathrm {var}}(\delta_1) &=& \displaystyle{ {\mathrm {var}}\left({1 \over k} \;
\sum_{i=0}^{k-1} \; \delta_k^i \right) } \\
&=& \displaystyle{ {\mathrm {var}}(\delta_k^0)/k + \sum_{i\neq j} \;
{\mathrm {cov}}(\delta_k^i,\delta_k^j) / k^2 } \\
&\leq& \displaystyle{ {\mathrm {var}}(\delta_k^0)/k + \sum_{i\neq j} \;
{\mathrm {var}}(\delta_k^0) / k^2 } \\
&=& \displaystyle{ {\mathrm {var}}(\delta_k)\;, }
\end{eqnarray*}
where the inequality follows from the Cauchy--Schwarz inequality
$$
|\text{cov}(\delta_k^i, \delta_k^j)| \leq \text{var}(\delta_k^0).
$$
\subsection{Exercise \ref{exo:misgim}}
This is a direct application of the Ergodic Theorem (see Section \ref{sec:dumdum}).
If the chain $(x^{(t)})$ is ergodic, then the empirical average above converges (almost
surely) to $\mathbb{E}_f[\varphi(X) \big/ \tilde f(X)]=1/C$. This assumes that the support of
$\varphi$ is {\em small enough} (see Exercise \ref{pb:ratio_csts3}). For the variance of the
estimator to be finite, a necessary condition is that
$$
\mathbb{E}_f[\varphi^2(X) \big/ \tilde f^2(X)] \propto \int \dfrac{\varphi^2(x)}{f(x)}\,\text{d}x < \infty\,.
$$
As in Exercise \ref{exo:lem:6.1}, we need to assume that the convergence of the Markov chain
is regular enough to ensure a finite variance.
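A minimal sketch of this estimator (with an assumed unnormalized target $\tilde f(x)=e^{-x^2/2}$, so that $C=\sqrt{2\pi}$, with $\varphi$ the uniform density on $(-1,1)$, and with iid draws standing in for the ergodic chain) is
\begin{verbatim}
ftilde=function(x) exp(-x^2/2)
phi=function(x) dunif(x,-1,1)
x=rnorm(10^5)
mean(phi(x)/ftilde(x)) #converges to 1/C
1/sqrt(2*pi)
\end{verbatim}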
\subsection{Exercise \ref{exo:patchtrap}}
The modified \R program using bootstrap is
\begin{verbatim}
ranoo=matrix(0,ncol=2,nrow=25)
for (j in 1:25){
batch=matrix(sample(beta,100*Ts[j],rep=TRUE),ncol=100)
sigmoo=2*sd(apply(batch,2,mean))
ranoo[j,]=mean(beta[1:Ts[j]])+c(-sigmoo,+sigmoo)
}
polygon(c(Ts,rev(Ts)),c(ranoo[,1],rev(ranoo[,2])),col="grey")
lines(cumsum(beta)/(1:T),col="sienna",lwd=2)
\end{verbatim}
and the output of the comparison is provided in Figure \ref{fig:bootband}.
\begin{figure}
\centerline{\includegraphics[width=\textwidth]{bootband.jpg}}
\caption{\label{fig:bootband}
Comparison of two evaluations of the variance of the MCMC estimate of the mean of $\beta$ for
the pump failure model of Example \ref{ex:patchump}.}
\end{figure}
\subsection{Exercise \ref{exo:baseball}}
{\bf Warning: Example \ref{ex:baseball} contains several typos, namely $Y_k\sim\CN(\theta_i,\sigma^2)$
instead of $Y_i\sim\CN(\theta_i,\sigma^2)$, {\sf the $\mu_i$'s being also
iid normal} instead of {\sf the $\theta_i$'s being also iid normal}...}\\
{\bf Warning: Exercise \ref{exo:baseball} also contains a typo in that the posterior distribution on $\mu$
cannot be obtained in a closed form. It should read}
\begin{rema}\noindent
Show that the posterior distribution on $\alpha$ in Example \ref{ex:baseball} can be obtained in a closed form.
\end{rema}
Since
\begin{align*}
\mathbf{\theta} | \by,\mu,\alpha &\sim \pi(\mathbf{\theta}|\by,\mu,\alpha)\\
&\propto \alpha^{-9} \exp\dfrac{-1}{2}\left\{ \sum_{i=1}^{18} \left[
\sigma^{-2} (y_i-\theta_i)^2 + \alpha^{-1}(\theta_i-\mu)^2 \right] \right\}\\
&\propto \exp\dfrac{-1}{2}\left(\sum_{i=1}^{18} \left\{
(\sigma^{-2} + \alpha^{-1})\left[\theta_i-(\sigma^{-2} + \alpha^{-1})^{-1}(\sigma^{-2} y_i
+\alpha^{-1} \mu)\right]^2\right.\right.\\
&\left.\left.\quad + (\alpha+\sigma^2)^{-1}\sum_{i=1}^{18} (y_i-\mu)^2 \right\}\right)
\end{align*}
(which is also a direct consequence of the marginalization $Y_i\sim\CN(\mu,\alpha+\sigma^2)$), we have
\begin{align*}
\pi(\alpha,\mu|\by) &\propto \dfrac{\alpha^{-3}}{(\alpha+\sigma^2)^{9}}\, \exp\left\{-\dfrac{1}{2(\alpha+\sigma^2)}
\sum_{i=1}^{18} (y_i-\mu)^2 -\dfrac{\mu^2}{2}-\dfrac{2}{\alpha} \right\}\\
&\propto \dfrac{\alpha^{-3}}{(\alpha+\sigma^2)^{9}}\, \exp\bigg\{-\dfrac{2}{\alpha} \\
&\quad-\dfrac{1+n(\alpha+\sigma^2)^{-1}}{2}\left[
\mu-(\alpha+\sigma^2)^{-1}\sum_{i=1}^{18} y_i\big/(1+n(\alpha+\sigma^2)^{-1})\right]^2\\
&\quad\left.-\dfrac{1}{2(\alpha+\sigma^2)}\sum_{i=1}^{18} y_i^2 + \dfrac{(\alpha+\sigma^2)^{-2}}{2(1+n(\alpha+\sigma^2)^{-1})}
\left(\sum_{i=1}^{18} y_i\right)^2 \right\}
\end{align*}
and thus
\begin{align*}
\pi(\alpha|\by)
&\propto \dfrac{\alpha^{-3}(1+n(\alpha+\sigma^2)^{-1})^{-1/2}}{(\alpha+\sigma^2)^{9}}\,
\exp\bigg\{-\dfrac{2}{\alpha} \\
&\quad\left.-\dfrac{1}{\alpha+\sigma^2}\sum_{i=1}^{18} y_i^2 + \dfrac{(\alpha+\sigma^2)^{-2}}{1+n(\alpha+\sigma^2)^{-1}}
\left(\sum_{i=1}^{18} y_i\right)^2 \right\}
\end{align*}
Therefore the marginal posterior distribution on $\alpha$ has a closed (albeit complex) form. (It is also
obvious from $\pi(\alpha,\mu|\by)$ above that the marginal posterior on $\mu$ does not have a closed form.)
The baseball dataset can be found in the \verb+amcmc+ package in the \verb+baseball.c+ program and rewritten as
\begin{verbatim}
baseball=c(0.395,0.375,0.355,0.334,0.313,0.313,0.291,
0.269,0.247,0.247,0.224,0.224,0.224,0.224,0.224,0.200,
0.175,0.148)
\end{verbatim}
The standard Gibbs sampler is implemented by simulating
\begin{align*}
\theta_i|y_i,\mu,\alpha &\sim \mathcal{N}\left(\dfrac{\alpha^{-1}\mu+\sigma^{-2}y_i}{\alpha^{-1}+\sigma^{-2}},
(\alpha^{-1}+\sigma^{-2})^{-1} \right)\,,\\
\mu|\mathbf{\theta},\alpha &\sim \mathcal{N}\left(\dfrac{\alpha^{-1}\sum_{i=1}^{18}\theta_i}{1+n\alpha^{-1}},
(n\alpha^{-1}+1)^{-1} \right)\,,\\
\alpha|\mathbf{\theta},\mu&\sim\mathcal{IG}\left(11,2+\sum_{i=1}^{18} (\theta_i-\mu)^2/2 \right)
\end{align*}
which means using an \R loop like
\begin{verbatim}
Nsim=10^4
sigma2=0.00434;sigmam=1/sigma2;n=18
theta=rnorm(18)
mu=rep(rnorm(1),Nsim)
alpha=rep(rexp(1),Nsim)
for (t in 2:Nsim){
theta=rnorm(18,mean=(mu[t-1]/alpha[t-1]+sigmam*baseball)/
(1/alpha[t-1]+sigmam),sd=1/sqrt(1/alpha[t-1]+sigmam))
mu[t]=rnorm(1,mean=sum(theta)/(alpha[t-1]+n),
sd=1/sqrt(1+n/alpha[t-1]))
alpha[t]=(2+0.5*sum((theta-mu[t])^2))/rgamma(1,11)
}
\end{verbatim}
The result of both \verb+coda+ diagnostics on $\alpha$ is
\begin{verbatim}
> heidel.diag(mcmc(alpha))
Stationarity start p-value
test iteration
var1 passed 1 0.261
Halfwidth Mean Halfwidth
test
var1 passed 0.226 0.00163
> geweke.diag(mcmc(alpha))
Fraction in 1st window = 0.1
Fraction in 2nd window = 0.5
var1
-0.7505
\end{verbatim}
If we reproduce the Kolmogorov--Smirnov analysis
\begin{verbatim}
ks=NULL
M=10
for (t in seq(Nsim/10,Nsim,le=100)){
alpha1=alpha[1:(t/2)]
alpha2=alpha[(t/2)+(1:(t/2))]
alpha1=alpha1[seq(1,t/2,by=M)]
alpha2=alpha2[seq(1,t/2,by=M)]
ks=c(ks,ks.test(alpha1,alpha2)$p)
}
\end{verbatim}
Plotting the vector \verb+ks+ by \verb+plot(ks,pch=19)+
shows no visible pattern that would indicate a lack of uniformity.
Comparing the output with the true target in $\alpha$ follows from the definition
\begin{verbatim}
marge=function(alpha){
(alpha^(-3)/(sqrt(1+18*(alpha+sigma2)^(-1))*(alpha+sigma2)^9))*
exp(-(2/alpha) - (.5/(alpha+sigma2))*sum(baseball^2) +
.5*(alpha+sigma2)^(-2)*sum(baseball)^2/(1+n*(alpha+sigma2)^(-1)))
}
\end{verbatim}
Figure \ref{fig:dafit} shows the fit of the simulated histogram to the above function (when normalized
by \verb+integrate+).
\begin{figure}
\centerline{\includegraphics[width=0.7\textwidth]{dafit.jpg}}
\caption{\label{fig:dafit}
Histogram of the $(\alpha^{(t)})$ chain produced by the Gibbs sampler of Example \ref{ex:baseball}
and fit of the exact marginal $\pi(\alpha|\by)$, based on $10^4$ simulations.}
\end{figure}
\subsection{Exercise \ref{pb:the_far_side}}
\begin{enumerate}
\renewcommand{\theenumi}{\alph{enumi}}
\item We simply need to check that this transition kernel $K$ satisfies the
detailed balance condition \eqref{eq:db}, $f(x)K(y|x) = f(y) K(x|y)$ when $f$
is the ${\cal B}e(\alpha,1)$ density: when $x\ne y$,
\begin{align*}
f(x)K(x,y) &= \alpha x^{\alpha-1}\,x\,(\alpha+1)\,y^{\alpha}\\
&= \alpha (\alpha+1) (xy)^\alpha\\
&= f(y)K(y,x)
\end{align*}
so the ${\cal B}e(\alpha,1)$ distribution is indeed stationary.
\item Simulating the Markov chain is straightforward:
\begin{verbatim}
alpha=.2
Nsim=10^4
x=rep(runif(1),Nsim)
y=rbeta(Nsim,alpha+1,1)
for (t in 2:Nsim){
if (runif(1)<x[t-1]) x[t]=y[t]
else x[t]=x[t-1]
}
\end{verbatim}
and it exhibits a nice fit to the beta ${\cal B}e(\alpha,1)$ target. However,
running \verb+cumuplot+ shows a lack of concentration of the distribution, while
the two standard stationarity diagnoses are
\begin{verbatim}
> heidel.diag(mcmc(x))
Stationarity start p-value
test iteration
var1 passed 1001 0.169
Halfwidth Mean Halfwidth
test
var1 failed 0.225 0.0366
> geweke.diag(mcmc(x))
Fraction in 1st window = 0.1
Fraction in 2nd window = 0.5
var1
3.277
\end{verbatim}
are giving dissonant signals. The effective sample size \verb+effectiveSize(mcmc(x))+ is then equal to $329$.
Moving to $10^6$ simulations does not modify the picture (but may cause your system to crash!).
\item The corresponding Metropolis--Hastings version is
\begin{verbatim}
alpha=.2
Nsim=10^4
x=rep(runif(1),Nsim)
y=rbeta(Nsim,alpha+1,1)
for (t in 2:Nsim){
if (runif(1)<x[t-1]/y[t]) x[t]=y[t]
else x[t]=x[t-1]
}
\end{verbatim}
It also provides a good fit but likewise fails the tests:
\begin{verbatim}
> heidel.diag(mcmc(x))
Stationarity start p-value
test iteration
var1 passed 1001 0.0569
Halfwidth Mean Halfwidth
test
var1 failed 0.204 0.0268
> geweke.diag(mcmc(x))
Fraction in 1st window = 0.1
Fraction in 2nd window = 0.5
var1
1.736
\end{verbatim}
\end{enumerate}
\subsection{Exercise \ref{pb:proberge}}
\begin{enumerate}
\renewcommand{\theenumi}{\alph{enumi}}
\item A possible \R definition of the posterior is
\begin{verbatim}
postit=function(beta,sigma2){
prod(pnorm(r[d==1]*beta/sigma2))*prod(pnorm(-r[d==0]*beta/sigma2))*
dnorm(beta,sd=5)*dgamma(1/sigma2,2,1)}
\end{verbatim}
and a possible \R program is
\begin{verbatim}
library(MASS)    #provides the Pima.tr dataset
Nsim=10^4        #simulation budget (assumed)
r=Pima.tr$ped
d=as.numeric(Pima.tr$type)-1
mod=summary(glm(d~r-1,family="binomial"))
beta=rep(mod$coef[1],Nsim)
sigma2=rep(1/runif(1),Nsim)
for (t in 2:Nsim){
prop=beta[t-1]+rnorm(1,sd=sqrt(sigma2[t-1]*mod$cov.unscaled))
if (runif(1)<postit(prop,sigma2[t-1])/postit(beta[t-1],
sigma2[t-1])) beta[t]=prop
else beta[t]=beta[t-1]
prop=exp(log(sigma2[t-1])+rnorm(1))
if (runif(1)<sigma2[t-1]*postit(beta[t],prop)/(prop*
postit(beta[t], sigma2[t-1]))) sigma2[t]=prop
else sigma2[t]=sigma2[t-1]
}
\end{verbatim}
(Note the Jacobian $1/\sigma^2$ in the acceptance probability.)
\item Running $5$ chains in parallel is easily programmed with an additional loop
in the above. Running \verb+gelman.diag+ on those five chains then produces a
convergence assessment:
\begin{verbatim}
> gelman.diag(mcmc.list(mcmc(beta1),mcmc(beta2),mcmc(beta3),
+ mcmc(beta4),mcmc(beta5)))
Potential scale reduction factors:
          Point est. 97.5% quantile
[1,] 1.02 1.03
\end{verbatim}
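For completeness, a minimal sketch of this additional loop is as follows
(it assumes \verb+postit+, \verb+mod+ and \verb+Nsim+ defined as above; the
dispersion of the starting values is an arbitrary choice):
\begin{verbatim}
betas=matrix(0,nrow=Nsim,ncol=5)
for (k in 1:5){
beta=rep(rnorm(1,mean=mod$coef[1],sd=.5),Nsim)  #dispersed start
sigma2=rep(1/runif(1),Nsim)
for (t in 2:Nsim){
prop=beta[t-1]+rnorm(1,sd=sqrt(sigma2[t-1]*mod$cov.unscaled))
if (runif(1)<postit(prop,sigma2[t-1])/postit(beta[t-1],
sigma2[t-1])) beta[t]=prop
else beta[t]=beta[t-1]
prop=exp(log(sigma2[t-1])+rnorm(1))
if (runif(1)<sigma2[t-1]*postit(beta[t],prop)/(prop*
postit(beta[t], sigma2[t-1]))) sigma2[t]=prop
else sigma2[t]=sigma2[t-1]
}
betas[,k]=beta
}
beta1=betas[,1];beta2=betas[,2];beta3=betas[,3]
beta4=betas[,4];beta5=betas[,5]
\end{verbatim}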
Note also the good mixing behavior of the chain:
\begin{verbatim}
> effectiveSize(mcmc.list(mcmc(beta1),mcmc(beta2),
+ mcmc(beta3),mcmc(beta4),mcmc(beta5)))
var1
954.0543
\end{verbatim}
\item The implementation of the traditional Gibbs sampler with completion is
detailed in \cite{marin:robert:2007}, along with the appropriate \R program.
The only modification that is needed for this problem is the introduction of
the non-identifiable scale factor $\sigma^2$.
\end{enumerate}
\subsection{Exercise \ref{pb:thin_ks}}
In the \verb+kscheck.R+ program available in \verb+mcsm+, you can modify $G$ by
changing the variable \verb+M+ in
\begin{verbatim}
subbeta=beta[seq(1,T,by=M)]
subold=oldbeta[seq(1,T,by=M)]
ks=NULL
for (t in seq((T/(10*M)),(T/M),le=100))
ks=c(ks,ks.test(subbeta[1:t],subold[1:t])$p)
\end{verbatim}
(As noted by a reader, the syntax \verb+ks=c(ks,res)+ is very inefficient in
terms of system time, since it copies the whole vector at each call; you can check this for yourself.)
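A minimal timing comparison illustrating this point (the vector size being arbitrary) is
\begin{verbatim}
Nrep=10^4
system.time({ks=NULL
  for (t in 1:Nrep) ks=c(ks,runif(1))})  #growing vector
system.time({ks=rep(0,Nrep)
  for (t in 1:Nrep) ks[t]=runif(1)})     #preallocated vector
\end{verbatim}
where the second version modifies the vector in place.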
\subsection{Exercise \ref{pb:tan_cvg}}
Since the Markov chain $(\theta^{(t)})$ is converging to the posterior distribution
(in distribution), the density at time $t$, $\pi_t$, is also converging (pointwise)
to the posterior density $\pi(\theta|x)$, therefore $\omega_t$ is converging to
$$
\dfrac{f(x|\theta^{(\infty)}) \pi(\theta^{(\infty)})}{ \pi(\theta^{(\infty)}|x)} = m(x)\,,
$$
for all values of $\theta^{(\infty)}$. (This is connected with Chib's (\citeyear{chib:1995})
method, discussed in Exercise \ref{exo:chibmarge}.)
\subsection{Exercise \ref{pb:essPress}}
If we get back to Example \ref{ex:6.1}, the sequence \verb+beta+ can be checked in terms of
effective sample via an \R program like
\begin{verbatim}
ess=rep(1,T/10)
for (t in 1:(T/10)) ess[t]=effectiveSize(beta[1:(10*t)])
\end{verbatim}
where the subsampling is justified by the computational time required by \verb&effectiveSize&.
The same principle can be applied to any chain produced by an MCMC algorithm.
Figure \ref{esscomp} compares the results of this evaluation over the first three examples of
this chapter. None of them is strongly conclusive about convergence...
\begin{figure}
\centerline{\includegraphics[width=\textwidth]{esscomp.jpg}}
\caption{\label{esscomp}
Evolution of the effective sample size across iterations for the first three examples of
Chapter 8.}
\end{figure}
\chapter{Controlling and Accelerating Convergence}
\subsection{Exercise \ref{pb:ratio_csts}}
\begin{enumerate}
\renewcommand{\theenumi}{\alph{enumi}}
\item Since
$$
\pi_1(\theta|x) = \tilde\pi_1(\theta)/c_1
\mbox{ and }\pi_2(\theta|x) =\tilde\pi_2(\theta)/c_2\,,
$$
where only $\tilde\pi_1$ and $\tilde\pi_2$ are known and where $c_1$ and $c_2$ correspond to
the marginal likelihoods, $m_1(x)$ and $m_2(x)$ (the dependence on $x$ is removed for simplification purposes),
we have that
$$
\varrho=\dfrac{m_1(x)}{m_2(x)}
=\dfrac{\int_{\Theta_1} \pi_1(\theta) f_1(x|\theta)\,\text{d}\theta}{\int_{\Theta_2} \pi_2(\theta) f_2(x|\theta)\,\text{d}\theta}
=\int_{\Theta_2} \dfrac{\pi_1(\theta) f_1(x|\theta)}{\tilde\pi_2(\theta)}\,\frac{\tilde\pi_2(\theta)}{m_2(x)}\,\text{d}\theta
$$
and therefore $\tilde\pi_1(\theta)/\tilde\pi_2(\theta)$ is an unbiased estimator of $\varrho$ when $\theta\sim\pi_2(\theta|x)$.
\item Quite similarly,
$$
\dfrac{\int \tilde\pi_1(\theta) \alpha(\theta) \pi_2(\theta|x) \text{d}\theta }{
\int \tilde\pi_2(\theta) \alpha(\theta) \pi_1(\theta|x) \text{d}\theta} =
\dfrac{\int \tilde\pi_1(\theta) \alpha(\theta) \tilde\pi_2(\theta)/c_2 \text{d}\theta }{
\int \tilde\pi_2(\theta) \alpha(\theta) \tilde\pi_1(\theta)/c_1 \text{d}\theta} = \frac{c_1}{c_2} = \varrho\,.
$$
\end{enumerate}
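Question a is easily checked by simulation; for instance, with the arbitrary
choices $\tilde\pi_1(\theta)=\exp(-\theta^2/2)$ (so $c_1=\sqrt{2\pi}$) and
$\tilde\pi_2(\theta)=\exp(-|\theta|)$ (so $c_2=2$, a double-exponential distribution):
\begin{verbatim}
tilde1=function(t){exp(-t^2/2)}
tilde2=function(t){exp(-abs(t))}
theta=sample(c(-1,1),10^5,rep=TRUE)*rexp(10^5)  #theta~pi2
mean(tilde1(theta)/tilde2(theta))  #compare with
sqrt(2*pi)/2                       #c1/c2=1.2533
\end{verbatim}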
\subsection{Exercise \ref{exo:ESSin}}
We have
\begin{align*}
\text{ESS}_{n} &=1\bigg/\sum_{i=1}^{n}\underline{w}_{i}^{2}
=1\bigg/\sum_{i=1}^{n}\left(w_{i}\bigg/\sum_{j=1}^{n}w_{j}\right)^{2}\\
&=\dfrac{\left(\sum_{i=1}^{n}w_{i}\right)^{2}}{\sum_{i=1}^{n}w_{i}^{2}}
=\dfrac{\sum_{i=1}^{n}w_{i}^2+\sum_{i\neq j}w_{i}w_{j}}{\sum_{i=1}^{n}w_{i}^{2}}
\le n
\end{align*}
(This is also a consequence of Jensen's inequality when considering that the $\underline{w}_{i}$ sum up to one.)
Moreover, the last equality shows that
\[
\text{ESS}_{n}=1+\frac{\sum_{i\neq j}w_{i}w_{j}}{\sum_{i=1}^{n}w_{i}^{2}}\ge 1\,,
\]
with equality if and only if a single $w_i$ is different from zero.
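Both bounds are easily checked numerically, as in the following sketch where
the weight vectors are arbitrary:
\begin{verbatim}
ess=function(w){1/sum((w/sum(w))^2)}
ess(rep(1,100))      #equal weights: ESS=n=100
ess(c(1,rep(0,99)))  #a single positive weight: ESS=1
ess(runif(100))      #intermediate case, between 1 and n
\end{verbatim}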
\subsection{Exercise \ref{exo:simerin}}
{\bf Warning: There is a slight typo in the above in that $\bar {\mathbf X}_k$ should not be in bold. It should thus read}
\begin{rema}
\noindent Establish that
$$
\text{cov}(\bar {X}_k,\bar { X}_{k^\prime}) = {\sigma^2}\big/{\max\{k, k^\prime\}}.
$$
\end{rema}
Since the $X_{i}$'s are iid, for $k'<k$, we have
\begin{align*}
\text{cov}(\overline{X}_{k},\overline{X}_{k'})
& = \text{cov}\left(\frac{1}{k}\sum_{i=1}^{k}X_{i},\frac{1}{k'}\sum_{i=1}^{k'}X_{i}\right)\\
& = \text{cov}\left(\frac{1}{k}\sum_{i=1}^{k'}X_{i},\frac{1}{k'}\sum_{i=1}^{k'}X_{i}\right)\\
& = \frac{1}{kk'}\text{cov}\left(\sum_{i=1}^{k'}X_{i},\sum_{i=1}^{k'}X_{i}\right)\\
& = \frac{1}{kk'}k'\text{cov}\left(X_{i},X_{i}\right)\\
& = \sigma^{2}/k\\
& = \sigma^{2}/\max\{k,k'\}\,.
\end{align*}
\subsection{Exercise \ref{pb:t_RB}}
{\bf Warning: There is a missing variance term in this exercise, which should read}
\begin{rema}
\noindent Show that
\begin{eqnarray*}
\mathbb{E} \left[\exp\{-X^2\}|y\right] &=& \frac{1}{\sqrt{2 \pi \sigma^2/y}}
\int\exp\{-x^2\}\,\exp\{-(x-\mu)^2y/2\sigma^2\}\,\text{d}x \\
&=& \frac{1}{\sqrt{2 \sigma^2/y+1}} \exp \left\{ -\frac{\mu^2}{1+2\sigma^2/y}\right\}
\end{eqnarray*}
by completing the square in the exponent to evaluate the integral.
\end{rema}
Since the overall exponent in the integrand is $-\left\{2x^2+(x-\mu)^2y\sigma^{-2}\right\}\big/2$, we have
\begin{align*}
2x^2+(x-\mu)^2y\sigma^{-2} &= x^2(2+y\sigma^{-2}) -2x\mu y\sigma^{-2} + \mu^2 y\sigma^{-2}\\
&= (2+y\sigma^{-2})\left[x-\mu y\sigma^{-2}/(2+y\sigma^{-2})\right]^2+ \\
&\qquad\mu^2 \left[y\sigma^{-2}- y^2\sigma^{-4}/(2+y\sigma^{-2})\right]\\
&= (2+y\sigma^{-2})\left[x-\mu /(1+2\sigma^{2}/y)\right]^2+ 2\mu^2 /(1+2\sigma^{2}/y)
\end{align*}
and thus
\begin{align*}
\int \exp\{-x^2\}\,&\exp\{-(x-\mu)^2y/2\sigma^2\}\,\dfrac{\text{d}x}{\sqrt{2\pi\sigma^2/y}}\\
&= \exp\left\{ -\frac{\mu^2}{1+2\sigma^2/y}\right\}\\
&\quad\times\int \exp\left\{-(2+y\sigma^{-2})\left[x-\mu /(1+2\sigma^{2}/y)\right]^2/2\right\}
\,\dfrac{\text{d}x}{\sqrt{2\pi\sigma^2/y}}\\
&= \exp \left\{ -\frac{\mu^2}{1+2\sigma^2/y}\right\}\,\dfrac{\sqrt{y\sigma^{-2}}}{\sqrt{2+y\sigma^{-2}}}\\
&= \exp \left\{ -\frac{\mu^2}{1+2\sigma^2/y}\right\}\,\dfrac{1}{\sqrt{1+2\sigma^{2}/y}}
\end{align*}
\subsection{Exercise \ref{exo:motown}}
Since $H(U)$ and $H(1-U)$ take opposite values when $H$ is monotone, i.e.~one is large when the other is small, those
two random variables are negatively correlated.
\subsection{Exercise \ref{pb:ratio_csts3} }
{\bf Warning: Another reference problem in this exercise: Exercise \ref{pb:ratio_csts2} should be Exercise \ref{pb:ratio_csts}.}
\begin{enumerate}
\renewcommand{\theenumi}{\alph{enumi}}
\item The ratio \eqref{eq:bridge} is a ratio of convergent estimators of the numerator and the denominator in question b of
Exercise \ref{pb:ratio_csts} when $\theta_{1i}\sim \pi_1(\theta|x)$ and $\theta_{2i} \sim \pi_2(\theta|x)$. (Note that the wording
of this question is vague in that it does not indicate the dependence on $x$.)
\item If we consider the special choice $\alpha(\theta) = 1 / \tilde\pi_1(\theta) \tilde\pi_2(\theta)$ in the representation of
question b of Exercise \ref{pb:ratio_csts}, we do obtain $\varrho = \BE^{\pi_2} [ \tilde\pi_2(\theta) ^{-1} ] / \BE^{\pi_1}
[ \tilde\pi_1(\theta) ^{-1} ]$, assuming both expectations exist. Given that $(i=1,2)$
$$
\BE^{\pi_i} [ \tilde\pi_i(\theta) ^{-1} ] = \int_{\Theta} \dfrac{1}{\tilde\pi_i(\theta)}\,\dfrac{\tilde\pi_i(\theta)}{m_i(x)}\,
\text{d}\theta\,,
$$
this implies that the space $\Theta$ must have a finite measure. If $\text{d}\theta$ represents the dominating measure, $\Theta$
is necessarily compact.
\end{enumerate}
\subsection{Exercise \ref{pb:RBMix}}
Each of those \R programs compares the range of the Monte Carlo estimates with and without Rao--Blackwellization:
\begin{enumerate}
\renewcommand{\theenumi}{\alph{enumi}}
\item For the negative binomial mean, $\BE_f(X)=a/b$ since $X\sim\mathcal{N}eg(a,b/(b+1))$.
\begin{verbatim}
y=matrix(rgamma(100*Nsim,a)/b,ncol=100)
x=matrix(rpois(100*Nsim,y),ncol=100)
matplot(apply(x,2,cumsum)/(1:Nsim),type="l",col="grey80",
lty=1,ylim=c(.4*a/b,2*a/b), xlab="",ylab="")
matplot(apply(y,2,cumsum)/(1:Nsim),type="l",col="grey40",
lty=1,add=T,xlab="",ylab="")
abline(h=a/b,col="gold",lty=2,lwd=2)
\end{verbatim}
\item For the generalized $t$ variable, $\BE_f(X)=\BE_f[X|Y]=0$. So the improvement is obvious. To make a
more sensible comparison, we consider instead $\BE_f[X^2]=\BE[Y]=a/b$.
\begin{verbatim}
y=matrix(rgamma(100*Nsim,a)/b,ncol=100)
x=matrix(rnorm(100*Nsim,sd=sqrt(y)),ncol=100)
matplot(apply(x^2,2,cumsum)/(1:Nsim),type="l",col="grey80",
lty=1,ylim=(a/b)*c(.2,2), xlab="",ylab="")
matplot(apply(y,2,cumsum)/(1:Nsim),type="l",col="grey40",
lty=1,add=T,xlab="",ylab="")
abline(h=a/b,col="gold",lty=2,lwd=2)
\end{verbatim}
\item {\bf Warning: There is a typo in this question with a missing $n$ in the $\mathcal{B}in(y)$
distribution... It should be}
\begin{rema}
c. $X|y \sim \mathcal{B}in(n,y)$, $Y \sim \mathcal{B}e(a,b)$ ($X$ is beta-binomial).
\end{rema}
In this case, $\BE_f[X]=n\BE_f[Y]=na/(a+b)$.
\begin{verbatim}
y=1/matrix(1+rgamma(100*Nsim,b)/rgamma(100*Nsim,a),ncol=100)
x=matrix(rbinom(100*Nsim,n,prob=y),ncol=100)
matplot(apply(x,2,cumsum)/(1:Nsim),type="l",col="grey80",lty=1,
ylim=(n*a/(a+b))*c(.2,2), xlab="",ylab="")
matplot(n*apply(y,2,cumsum)/(1:Nsim),type="l",col="grey40",lty=1,add=T,
xlab="",ylab="")
abline(h=n*a/(a+b),col="gold",lty=2,lwd=2)
\end{verbatim}
\end{enumerate}
\subsection{Exercise \ref{pb:bandaid}}
It should be clear from display (\ref{eq:sigmatri}) that we only need to delete the
term $n^2$ ($k^2$ in the current notation), replace it with $2 k^2$, and add the
last row and column as in (\ref{eq:sigmatri}).
\subsection{Exercise \ref{pb:term_is}}
\begin{enumerate}
\renewcommand{\theenumi}{\alph{enumi}}
\item For the accept-reject algorithm,
\begin{eqnarray*}
&&\left(X_{1},\ldots,X_{m}\right)\sim f(x)\\
&&\left(U_{1},\ldots,U_{N}\right)\stackrel{\hbox{i.i.d.}}{\sim} \mathcal{U}_{[0,1]}\\
&&\left(Y_{1},\ldots,Y_{N}\right)\stackrel{\hbox{i.i.d.}}{\sim} g(y)\\
\end{eqnarray*}
and the acceptance weights are the $w_{j}=\frac{f\left(Y_{j}\right)}
{Mg\left(Y_{j}\right)}$. $N$ is the stopping time associated with these variables,
that is, $Y_{N}=X_{m}$. We have
\begin{eqnarray*}
\rho_{i}&=&P\left(U_{i}\leq w_{i}|N=n, Y_{1},\ldots,Y_{n}\right)\\
&=&\frac{P\left(U_{i}\leq w_{i},N=n,
Y_{1},\ldots,Y_{n}\right)}{P\left(N=n, Y_{1},\ldots,Y_{n}\right)}
\end{eqnarray*}
where the numerator is the probability that $Y_{N}$ is accepted as
$X_m$, $Y_{i}$ is accepted as one $X_j$ and there are $(m-2)$ $X_j$'s that
are chosen from the remaining $(n-2)$ $Y_\ell$'s. Since
\begin{equation*}
P\left(\hbox{$Y_{j}$ is accepted}\right)=P\left(U_{j}\leq w_{j}\right)=w_{j}\,,
\end{equation*}
the numerator is
\begin{equation*}
w_{i}\sum_{(i_{1},\ldots,i_{m-2})}\prod_{j=1}^{m-2}w_{i_{j}}\prod_{j=m-1}^{n-2}(1-w_{i_{j}})
\end{equation*}
where
\begin{enumerate}
\renewcommand{\theenumii}{\roman{enumii}}
\item $\prod_{j=1}^{m-2}w_{i_{j}}$ is the probability that among the
$N$ $Y_{j}$'s, in addition to both $Y_{N}$ and $Y_{i}$ being accepted,
there are $(m-2)$ other $Y_{j}$'s accepted as $X_\ell$'s;
\item $\prod_{j=m-1}^{n-2}(1-w_{i_{j}})$ is the probability
that there are $(n-m)$ rejected $Y_{j}$'s, given that
$Y_{i}$ and $Y_{N}$ are accepted;
\item the sum is over all subsets of
$(1,\ldots,i-1,i+1,\ldots,n)$ since, except for $Y_{i}$ and $Y_{N}$,
other $(m-2)$ $Y_{j}$'s are chosen uniformly from $(n-2)$ $Y_{j}$'s.
\end{enumerate}
\par\noindent Similarly the denominator
$$
P\left(N=n,Y_{1},\ldots,Y_{n}\right)
=w_{i}\sum_{(i_{1},\ldots,i_{m-1})}\prod_{j=1}^{m-1}w_{i_{j}}\prod_{j=m}^{n-1}(1-w_{i_{j}})
$$
is the probability that $Y_{N}$ is accepted as $X_{m}$ and
$(m-1)$ other $X_{j}$'s are chosen from $(n-1)$ $Y_{\ell}$'s. Thus
\begin{eqnarray*}
\rho_{i}&=&P\left(U_{i}\leq w_{i}|N=n,Y_{1},\ldots,Y_{n}\right)\\
&=&w_{i}\frac{\sum_{(i_{1},\ldots,i_{m-2})}\prod_{j=1}^{m-2}w_{i_{j}}\prod_{j=m-1}^{n-2}(1-w_{i_{j}})}
{\sum_{(i_{1},\ldots,i_{m-1})}\prod_{j=1}^{m-1}w_{i_{j}}\prod_{j=m}^{n-1}(1-w_{i_{j}})}
\end{eqnarray*}
\item We have
\begin{eqnarray*}
\delta_{1}&=&\frac{1}{m}\sum_{i=1}^{m}h\left(X_{i}\right)=
\frac{1}{m}\sum_{j=1}^{N}h\left(Y_{j}\right)\mathbb{I}_{U_{j}\leq
w_{j}}\\
\delta_{2}&=&\frac{1}{m}\sum_{j=1}^{N}\mathbb{E}\left(\mathbb{I}_{U_{j}\leq
w_{j}}|N,Y_{1},\ldots,Y_{N}\right)h\left(Y_{j}\right)=
\frac{1}{m}\sum_{i=1}^{N}\rho_{i}h\left(Y_{i}\right)
\end{eqnarray*}
\par\noindent Since
$\mathbb{E}\left(\mathbb{E}\left(X|Y\right)\right)=\mathbb{E}\left(X\right)$,
\begin{eqnarray*}
\mathbb{E}\left(\delta_{2}\right)&=&\mathbb{E}\left(\frac{1}{m}\sum_{j=1}^{N}
\mathbb{E}\left(\mathbb{I}_{U_{j}\leq
w_{j}}|N,Y_{1},\ldots,Y_{N}\right)h\left(Y_{j}\right)\right)\\
&=&\mathbb{E}\left(\frac{1}{m}\sum_{j=1}^{N}h\left(Y_{j}\right)\mathbb{I}_{U_{j}\leq
w_{j}}\right)=\mathbb{E}\left(\delta_{1}\right)
\end{eqnarray*}
Under quadratic loss, the risks of $\delta_{1}$ and $\delta_{2}$ are:
\begin{eqnarray*}
R\left(\delta_{1}\right)&=&\mathbb{E}\left(\delta_{1}-\mathbb{E}h\left(X\right)\right)^{2}\\
&=&\mathbb{E}\left(\delta_{1}^{2}\right)+\mathbb{E}\left(\mathbb{E}(h(X))\right)^{2}-
2\mathbb{E}\left(\delta_{1}\mathbb{E}(h(X))\right)\\
&=&\mathrm{var}\left(\delta_{1}\right)+\left(\mathbb{E}(\delta_{1})\right)^{2}+
\mathbb{E}\left(\mathbb{E}\left(h(X)\right)\right)^{2}-
2\mathbb{E}\left(\delta_{1}\mathbb{E}\left(h(X)\right)\right)
\end{eqnarray*}
and
\begin{eqnarray*}
R\left(\delta_{2}\right)&=&\mathbb{E}\left(\delta_{2}-\mathbb{E}h\left(X\right)\right)^{2}\\
&=&\mathbb{E}\left(\delta_{2}^{2}\right)+\mathbb{E}\left(\mathbb{E}(h(X))\right)^{2}-
2\mathbb{E}\left(\delta_{2}\mathbb{E}(h(X))\right)\\
&=&\mathrm{var}\left(\delta_{2}\right)+\left(\mathbb{E}(\delta_{2})\right)^{2}+
\mathbb{E}\left(\mathbb{E}\left(h(X)\right)\right)^{2}-
2\mathbb{E}\left(\delta_{2}\mathbb{E}\left(h(X)\right)\right)
\end{eqnarray*}
Since
$\mathbb{E}\left(\delta_{1}\right)=\mathbb{E}\left(\delta_{2}\right)$,
we only need to compare $\mathrm{var}\left(\delta_{1}\right)$ and
$\mathrm{var}\left(\delta_{2}\right)$. From the definition of
$\delta_{1}$ and $\delta_{2}$, we have
\begin{equation*}
\delta_{2}=\mathbb{E}\left(\delta_{1}|N,Y_{1},\ldots,Y_{N}\right)
\end{equation*}
so
\begin{equation*}
\mathrm{var}\left(\mathbb{E}\left(\delta_{1}|N,Y_{1},\ldots,Y_{N}\right)\right)=\mathrm{var}\left(\delta_{2}\right)
\leq \mathrm{var}\left(\delta_{1}\right)\,.
\end{equation*}
\end{enumerate}
\subsection{Exercise \ref{pb:rob}}
\begin{enumerate}
\renewcommand{\theenumi}{\alph{enumi}}
\item Let us transform $\mathfrak{I}$ into $\mathfrak{I}=\int{\frac{h(y)f(y)}{m(y)}m(y)dy}$,
where $m$ is the marginal density of $Y_1$. We have
\begin{eqnarray*}
\mathfrak{I} &=& \sum_{n\in\N}{P(N=n)\int{\frac{h(y)f(y)}{m(y)}\,m(y|N=n)\,\text{d}y}}\\
&=& \mathbb{E}_N\left[\mathbb{E}\left[\frac{h(Y)f(Y)}{m(Y)}\Big|N\right]\right].
\end{eqnarray*}
\item As $\beta$ is constant, for every function $c$,
$$\mathfrak{I}=\beta\mathbb{E}[c(Y)]+\mathbb{E}\left[\frac{h(Y)f(Y)}{m(Y)}-\beta c(Y)\right].$$
\item The variance associated with an empirical mean of the
$$
\frac{h(Y_i)f(Y_i)}{m(Y_i)}-\beta c(Y_i)
$$
is
\begin{eqnarray*}
\mathrm{var}(\widehat{\mathfrak{I}}) &=& \beta^2\mathrm{var}(c(Y))+\mathrm{var}
\left(\frac{h(Y)f(Y)}{m(Y)}\right)-2\beta \mathrm{cov}\left[\frac{h(Y)
f(Y)}{m(Y)},c(Y)\right]\\
&=& \beta^2\mathrm{var}(c(Y))-2\beta
\mathrm{cov}[d(Y),c(Y)]+\mathrm{var}(d(Y)).
\end{eqnarray*}
Thus, the optimal choice of $\beta$ is such that
$$\frac{\partial \mathrm{var}(\widehat{\mathfrak{I}})}{\partial \beta}=0$$
and is given by
$$\beta^*=\frac{\mathrm{cov}[d(Y),c(Y)]}{\mathrm{var}(c(Y))}.$$
\item The first choice of $c$ is $c(y)={\mathbb{I}}_{\{y>y_0\}}$,
which is interesting when $p=P(Y>y_0)$ is known. In this case, writing $d(y)=h(y)f(y)/m(y)$,
$$\beta^*=\frac{\mathrm{cov}[d(Y),c(Y)]}{\mathrm{var}(c(Y))}
=\frac{\int_{y>y_0}{hf}-\mathfrak{I}\,p}{p(1-p)}\,.$$
Thus, $\beta^*$ can be estimated using the Accept-Reject sample. A
second choice of $c$ is $c(y)=y$, which involves the first two
moments of $Y$. When those two moments $m_1$ and $m_2$ are known
or can be well approximated, the optimal choice of $\beta$ is
$$\beta^*=\frac{\int{yh(y)f(y)dy}-\mathfrak{I}m_1}{m_2-m_1^2}$$
and it can be estimated using the same sample or another instrumental
density, namely when $\mathfrak{I}'=\int{yh(y)f(y)dy}$ is simple to
compute, compared to $\mathfrak{I}$. (A small numerical sketch is given
right after this list.)
\end{enumerate}
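Here is the small numerical sketch announced in question d, based on the
arbitrary choices $d(y)=\exp(y)$, $c(y)=y$, and $Y\sim\mathcal{N}(0,1)$, so
that $m_1=0$ and $m_2=1$ are known:
\begin{verbatim}
Nsim=10^4
y=rnorm(Nsim)
d=exp(y)
betastar=cov(d,y)/var(y)  #empirical version of beta*
mean(d)                   #plain Monte Carlo estimate of E[exp(Y)]
mean(d-betastar*y)        #control variate estimate
var(d);var(d-betastar*y)  #variance reduction check
\end{verbatim}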
\chapter{Basic R programming}
\subsection{Exercise \ref{exo:baby}}
Self-explanatory.
\subsection{Exercise \ref{exo:helpme}}
Self-explanatory.
\subsection{Exercise \ref{exo:seq}}
One problem lies with the precedence of operators in \R. So
\begin{verbatim}
> n=10
> 1:n
\end{verbatim}
produces
\begin{verbatim}
1 2 3 4 5 6 7 8 9 10
\end{verbatim}
but
\begin{verbatim}
> n=10
> 1:n-1
\end{verbatim}
produces
\begin{verbatim}
0 1 2 3 4 5 6 7 8 9
\end{verbatim}
since the \verb+1:10+ command is executed first, then $1$ is subtracted.
The command \verb+seq(1,n-1,by=1)+ operates just as \verb+1:(n-1)+.
If $n$ is less than $1$ we can use something like \verb@seq(1,.05,by=-.01)@.
Try it, and try some other variations.
\subsection{Exercise \ref{pb:boot1}}
\begin{enumerate}
\renewcommand{\theenumi}{\alph{enumi}}
\item To bootstrap the data you can use the code
\begin{verbatim}
nBoot=2500
B=array(0,dim=c(nBoot, 1))
for (i in 1:nBoot){
ystar=sample(y,replace=T)
B[i]=mean(ystar)
}
\end{verbatim}
The quantile can be estimated with \verb+sort(B)[.95*nBoot]+, which for our sample is $5.8478$.
\item To get a confidence interval requires a double bootstrap. That is, for each bootstrap sample we
can get a point estimate of the $95\%$ quantile. We can then draw a histogram of these quantiles
with \verb@hist@, and get {\em their} upper and lower quantiles for a confidence region.
\begin{verbatim}
nBoot1=1000
nBoot2=1000
B1=array(0,dim=c(nBoot1, 1))
B2=array(0,dim=c(nBoot2, 1))
for (i in 1:nBoot1){
ystar=sample(y,replace=T)
for (j in 1:nBoot2)
B2[j]=mean(sample(ystar,replace=T))
B1[i]=sort(B2)[.95*nBoot2]
}
\end{verbatim}
A $90\%$ confidence interval is given by
\begin{verbatim}
> c(sort(B1)[.05*nBoot1], sort(B1)[.95*nBoot1])
[1] 4.731 6.844
\end{verbatim}
or alternatively
\begin{verbatim}
> quantile(B1,c(.05,.95))
4.731 6.844
\end{verbatim}
for the data in the book. The command \verb@hist(B1)@ will give a histogram of the values.
\end{enumerate}
\subsection{Exercise \ref{exo:RvsC}}
If you type
\begin{verbatim}
> mean
function (x, ...)
UseMethod("mean")
<environment: namespace:base>
\end{verbatim}
you do not get any information about the function \verb+mean+ because it is not written in {\tt R}, while
\begin{verbatim}
> sd
function (x, na.rm = FALSE)
{
if (is.matrix(x))
apply(x, 2, sd, na.rm = na.rm)
else if (is.vector(x))
sqrt(var(x, na.rm = na.rm))
else if (is.data.frame(x))
sapply(x, sd, na.rm = na.rm)
else sqrt(var(as.vector(x), na.rm = na.rm))
}
\end{verbatim}
shows \verb+sd+ is written in {\tt R}. The same applies to \verb+var+ and \verb+cov+.
\subsection{Exercise \ref{exo:attach}}
When looking at the description of \verb+attach+, you can see that this command allows one to use
variables or functions that are in a database rather than in the current \verb=.RData=. Those
objects can be temporarily modified without altering their original format. (This is a fragile command
that we do not personally recommend!)
The function \verb+assign+ is also rather fragile, but it allows for the creation and assignment of
an arbitrary number of objects, as in the documentation example:
\begin{verbatim}
for(i in 1:6) { #-- Create objects 'r.1', 'r.2', ... 'r.6' --
nam <- paste("r",i, sep=".")
assign(nam, 1:i)
}
\end{verbatim}
which allows one to manipulate the \verb+r.1+, \verb+r.2+, ..., variables.
\subsection{Exercise \ref{exo:dump&sink}}
This is mostly self-explanatory. If you type the help on each of those functions,
you will see examples on how they work. The most recommended \R function for saving
\R objects is \verb+save+. Note that, when using \verb+write+, the description states
\begin{verbatim}
The data (usually a matrix) 'x' are written to file
'file'. If 'x' is a two-dimensional matrix you need
to transpose it to get the columns in 'file' the same
as those in the internal representation.
\end{verbatim}
Note also that \verb+dump+ and \verb+sink+ are fairly involved and should be used with caution.
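For instance, a minimal round trip with \verb+save+ and \verb+load+ is
\begin{verbatim}
x=rnorm(10)
save(x,file="x.RData")  #stores x in a binary file
rm(x)                   #deletes x from the session
load("x.RData")         #restores x from the file
\end{verbatim}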
\subsection{Exercise \ref{exo:match}}
Take, for example {\tt a=3;x=c(1,2,3,4,5)} to see that they are the same,
and, in fact, are the same as \verb|max(which(x == a))|. For
\verb|y=c(3,4,5,6,7,8)|, try \verb|match(x,y)| and \verb|match(y,x)| to
see the difference. In contrast, \verb+x %in% y+ returns a logical vector indicating, for each
entry of \verb+x+, whether or not it appears in \verb+y+ (it is equivalent to \verb+is.element(x,y)+).
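For instance,
\begin{verbatim}
> x=c(1,2,3,4,5); y=c(3,4,5,6,7,8)
> match(x,y)
[1] NA NA  1  2  3
> x %in% y
[1] FALSE FALSE  TRUE  TRUE  TRUE
\end{verbatim}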
\subsection{Exercise \ref{exo:timin}}
Running \verb=system.time= on the three sets of commands give
\begin{enumerate}
\item 0.004 0.000 0.07
\item 0 0
\item 0.000 0.000 0.00
\end{enumerate}
and the vectorial allocation is therefore the fastest\idxr{system.time@\verb+system.time+}.
\subsection{Exercise \ref{exo:unifix}}
The \R code is
\begin{verbatim}
> A=matrix(runif(4),ncol=2)
> A=A/apply(A,1,sum)
> apply(A,1,sum)
[1] 1 1
> B=A;for (t in 1:100) B=B%*%B
> apply(B,1,sum)
[1] Inf Inf
\end{verbatim}
and it shows that numerical inaccuracies in the product lead this property to
fail when the power is high enough.
\subsection{Exercise \ref{exo:orange}}
The function \verb=xyplot= is part of the \verb+lattice+ library. Then
\begin{verbatim}
> xyplot(age ~ circumference, data=Orange)
> barchart(age ~ circumference, data=Orange)
> bwplot(age ~ circumference, data=Orange)
> dotplot(age ~ circumference, data=Orange)
\end{verbatim}
produce different representations of the dataset. Fitting a linear model is
simply done by \verb+lm(age ~ circumference, data=Orange)+
and using the tree index as an extra covariate leads to
\begin{verbatim}
> summary(lm(age ~ circumference+Tree, data=Orange))
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -90.0596 55.5795 -1.620 0.116
circumference 8.7366 0.4354 20.066 < 2e-16 ***
Tree.L -348.8982 54.9975 -6.344 6.23e-07 ***
Tree.Q -22.0154 52.1881 -0.422 0.676
Tree.C 72.2267 52.3006 1.381 0.178
Tree^4 41.0233 52.2167 0.786 0.438
\end{verbatim}
meaning that, besides \verb=circumference=, only the linear contrast \verb=Tree.L= is significant.
\subsection{Exercise \ref{exo:sudoku}}
\begin{enumerate}
\item A plain representation is
\begin{verbatim}
> s
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9]
[1,] 0 0 0 0 0 6 0 4 0
[2,] 2 7 9 0 0 0 0 5 0
[3,] 0 5 0 8 0 0 0 0 2
[4,] 0 0 2 6 0 0 0 0 0
[5,] 0 0 0 0 0 0 0 0 0
[6,] 0 0 1 0 9 0 6 7 3
[7,] 8 0 5 2 0 0 4 0 0
[8,] 3 0 0 0 0 0 0 8 5
[9,] 6 0 0 0 0 0 9 0 1
\end{verbatim}
where empty slots are represented by zeros.
\item A simple cleaning of non-empty (i.e.~certain) slots is
\begin{verbatim}
pool=array(TRUE,dim=c(9,9,9)) #pool[i,j,u]: u still possible in (i,j)
for (i in 1:9)
for (j in 1:9){
  if (s[i,j]>0) pool[i,j,-s[i,j]]=FALSE
}
\end{verbatim}
\item In {\tt R}, matrices (and arrays) are also considered as vectors, stored by columns. Hence \verb+s[i]+ represents
the $((i-1)\,\text{mod}\,9+1,\,1+\lfloor (i-1)/9 \rfloor)$ entry of the grid.
\item This is self-explanatory. For instance,
\begin{verbatim}
> a=2;b=5
> boxa
[1] 1 2 3
> boxb
[1] 4 5 6
\end{verbatim}
\item The first loop checks whether or not, for each remaining possible integer, there exists
an identical entry in the same row, in the same column or in the same box. The second command
sets entries for which only one possible integer remains to this integer.
\item A plain \R program solving the grid is
\begin{verbatim}
while (sum(s==0)>0){
for (i in sample(1:81)){
if (s[i]==0){
a=((i-1)%%9)+1
b=trunc((i-1)/9)+1
boxa=3*trunc((a-1)/3)+1
boxa=boxa:(boxa+2)
boxb=3*trunc((b-1)/3)+1
boxb=boxb:(boxb+2)
for (u in (1:9)[pool[a,b,]]){
pool[a,b,u]=(sum(u==s[a,])+sum(u==s[,b])
+sum(u==s[boxa,boxb]))==0
}
if (sum(pool[a,b,])==1){
s[i]=(1:9)[pool[a,b,]]
}
if (sum(pool[a,b,])==0){
print("wrong sudoku")
break()
}
}
}
}
\end{verbatim}
and it stops with the outcome
\begin{verbatim}
> s
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9]
[1,] 1 3 8 5 2 6 7 4 9
[2,] 2 7 9 3 4 1 8 5 6
[3,] 4 5 6 8 7 9 3 1 2
[4,] 7 4 2 6 3 5 1 9 8
[5,] 9 6 3 1 8 7 5 2 4
[6,] 5 8 1 4 9 2 6 7 3
[7,] 8 9 5 2 1 3 4 6 7
[8,] 3 1 7 9 6 4 2 8 5
[9,] 6 2 4 7 5 8 9 3 1
\end{verbatim}
which is the solved Sudoku.
\end{enumerate}
\chapter{Monte Carlo Integration}
\subsection{Exercise \ref{pb:norm_cauchy}}
\begin{enumerate}
\item The plot of the integrands follows from a simple \R program:
\begin{verbatim}
f1=function(t){ t/(1+t*t)*exp(-(x-t)^2/2)}
f2=function(t){ 1/(1+t*t)*exp(-(x-t)^2/2)}
plot(f1,-3,3,col=1,ylim=c(-0.5,1),xlab="t",ylab="",ty="l")
plot(f2,-3,3,add=TRUE,col=2,ty="l")
legend("topright",c("f1=t.f2","f2"),lty=1,col=1:2)
\end{verbatim}
Both numerator and denominator are expectations under the Cauchy distribution. They can therefore
be approximated directly by
\begin{verbatim}
Niter=10^4
co=rcauchy(Niter)
I=mean(co*dnorm(co,mean=x))/mean(dnorm(co,mean=x))
\end{verbatim}
We thus get
\begin{verbatim}
> x=0
> mean(co*dnorm(co,mean=x))/mean(dnorm(co,mean=x))
[1] 0.01724
> x=2
> mean(co*dnorm(co,mean=x))/mean(dnorm(co,mean=x))
[1] 1.295652
> x=4
> mean(co*dnorm(co,mean=x))/mean(dnorm(co,mean=x))
[1] 3.107256
\end{verbatim}
\item Plotting the convergence of those integrands can be done via
\begin{verbatim}
# (C.) Anne Sabourin, 2009
x1=dnorm(co,mean=x)
estint2=cumsum(x1)/(1:Niter)
esterr2=sqrt(cumsum((x1-estint2)^2))/(1:Niter)
x1=co*x1
estint1=cumsum(x1)/(1:Niter)
esterr1=sqrt(cumsum((x1-estint1)^2))/(1:Niter)
par(mfrow=c(1,2))
plot(estint1,type="l",xlab="iteration",ylab="",col="gold")
lines(estint1-2*esterr1,lty=2,lwd=2)
lines(estint1+2*esterr1,lty=2,lwd=2)
plot(estint2,type="l",xlab="iteration",ylab="",col="gold")
lines(estint2-2*esterr2,lty=2,lwd=2)
lines(estint2+2*esterr2,lty=2,lwd=2)
\end{verbatim}
Because we have not yet discussed the evaluation of the error for a ratio of estimators, we consider
both terms of the ratio separately. The empirical variances $\hat\sigma^2$ are given by \verb+var(co*dnorm(co,m=x))+
and \verb+var(dnorm(co,m=x))+, and solving $2\hat\sigma/\sqrt{n}<10^{-3}$ leads to an evaluation of the number of
simulations necessary to get $3$ digits of accuracy.
\begin{verbatim}
> x=0;max(4*var(dnorm(co,m=x))*10^6,
+ 4*var(co*dnorm(co,m=x))*10^6)
[1] 97182.02
> x=2; 4*10^6*max(var(dnorm(co,m=x)),var(co*dnorm(co,m=x)))
[1] 220778.1
> x=4; 10^6*4*max(var(dnorm(co,m=x)),var(co*dnorm(co,m=x)))
[1] 306877.9
\end{verbatim}
\item A similar implementation applies for the normal simulation, replacing \verb=dnorm= with \verb=dcauchy= in the
above. The comparison is clear in that the required number of normal simulations when $x=4$ is $1398.22$, to compare
with the above $306878$.
\end{enumerate}
\subsection{Exercise \ref{exo:tailotwo}}
Due to the identity
$$
\mathbb{P}(X>20) = \int_{20}^{\infty}\dfrac{\exp(-\frac{x^2}{2})}{\sqrt{2\pi}}\text{d}x
= \int_{0}^{1/20}\frac{\exp(-\frac{1}{2u^2})}{20\, u^2 \sqrt{2\pi}}\,20\, \text{d}u\,,
$$
we can see this integral as an expectation under the $\mathcal{U}(0,1/20)$
distribution and thus use a Monte Carlo approximation to $\mathbb{P}(X>20)$.
The following \R code monitors the convergence of the corresponding approximation.
\begin{verbatim}
# (C.) Thomas Bredillet, 2009
h=function(x){ 1/(x^2*sqrt(2*pi)*exp(1/(2*x^2)))}
par(mfrow=c(2,1))
curve(h,from=0,to=1/20,xlab="x",ylab="h(x)",lwd="2")
I=1/20*h(runif(10^4)/20)
estint=cumsum(I)/(1:10^4)
esterr=sqrt(cumsum((I-estint)^2))/(1:10^4)
plot(estint,xlab="Iterations",ty="l",lwd=2,
ylim=mean(I)+20*c(-esterr[10^4],esterr[10^4]),ylab="")
lines(estint+2*esterr,col="gold",lwd=2)
lines(estint-2*esterr,col="gold",lwd=2)
\end{verbatim}
The estimated probability is $2.505\times 10^{-89}$ with an error of $\pm 3.61\times 10^{-90}$, compared with
\begin{verbatim}
> integrate(h,0,1/20)
2.759158e-89 with absolute error < 5.4e-89
> pnorm(-20)
[1] 2.753624e-89
\end{verbatim}
\subsection{Exercise \ref{exo:fperron+}}
{\bf Warning: due to the (late) inclusion of an extra-exercise in the book,
the ``above exercise" actually means Exercise \ref{exo:tailotwo}!!!}\\
When $Z\sim\mathcal{N}(0,1)$, with density $f$, the quantity of interest is $\mathbb{P}(Z>4.5)$,
i.e.~$\mathbb{E}^{f}[\mathbb{I}_{Z>4.5}]$. When $g$ is the density of
the exponential $\mathcal{E}xp(\lambda)$ distribution truncated at $4.5$,
$$
g(y)=\frac{\mathbb{I}_{y>4.5}\,\lambda\exp(-\lambda y)}{\int_{4.5}^{\infty}\lambda\exp(-\lambda y)\,\text{d}y}
=\lambda e^{-\lambda(y-4.5)}\mathbb{I}_{y>4.5}\,,
$$
simulating iid $Y^{(i)}$'s from $g$ is straightforward. Given that the indicator function
$\mathbb{I}_{Y>4.5}$ is then always equal to $1$, $\mathbb{P}(Z>4.5)$ is estimated by
$$
\hat{h}_{n}=\frac{1}{n}\sum_{i=1}^{n}\frac{f(Y^{(i)})}{g(Y^{(i)})}.
$$
A corresponding estimator of its variance is
$$
v_{n}=\frac{1}{n^{2}}\sum_{i=1}^{n}(1-\hat{h}_{n})^{2}{f(Y^{(i)})}\big/{g(Y^{(i)})}\,.
$$
The following \R code monitors the convergence of the estimator (with $\lambda=.5,5,50$)
\begin{verbatim}
# (C.) Anne Sabourin, 2009
Nsim=5*10^4
x=rexp(Nsim)
par(mfcol=c(1,3))
for (la in c(.5,5,50)){
y=(x/la)+4.5
weit=dnorm(y)/dexp(y-4.5,la)
est=cumsum(weit)/(1:Nsim)
varest=cumsum((1-est)^2*weit/(1:Nsim)^2)
plot(est,type="l",ylim=c(3e-6,4e-6),main="P(X>4.5) estimate",
sub=paste("based on E(",la,") simulations",sep=""),xlab="",ylab="")
abline(a=pnorm(-4.5),b=0,col="red")
}
\end{verbatim}
When evaluating the impact of $\lambda$ on the variance (and hence on the convergence) of the estimator,
similar graphs can be plotted for different values of $\lambda$. This experiment does not exhibit a clear
pattern, even though large values of $\lambda$, like $\lambda=20$ appear to slow down convergence very much.
Figure \ref{fig:nortail} shows the output of such a comparison. Picking $\lambda=5$ seems however to produce
a very stable approximation of the tail probability.
\begin{figure}
\begin{center}
\centerline{\includegraphics[width=.95\textwidth]{nortail.jpg}}
\caption{\label{fig:nortail}
Comparison of three importance sampling approximations to the normal tail probability $\mathbb{P}(Z>4.5)$ based
on a truncated $\mathcal{E}xp(\lambda)$ distribution with $\lambda=.5,5,50$. The straight red line is the true value.}
\end{center}
\end{figure}
\subsection{Exercise \ref{pb:some_jump}}
While the expectation of $\sqrt{x/(1-x)}$ is well defined for $\nu>1/2$, the integral of
$x/(1-x)$ against the $t$ density does not exist for any $\nu$. Using an importance sampling representation,
$$
\int \frac{x}{1-x}\,\frac{f^2(x)}{g(x)}\,\text{d}x = \infty
$$
if $g(1)$ is finite. The integral will be finite around $1$ when $1\big/(1-t)g(t)$ is integrable
near $1$, which means that $g(t)$ must diverge at $t=1$, at whatever rate. For instance, if $g(t)\approx(1-t)^{-\alpha}$ around
$1$, any $\alpha>0$ is acceptable.
\subsection{Exercise \ref{pb:bayesAR2}}
As in Exercise \ref{pb:norm_cauchy},
the quantity of interest is $\delta^{\pi}(x)=\mathbb{E}^{\pi}(\theta|x)=\int\theta\pi(\theta|x)\,\text{d}\theta$
where $x\sim\mathcal{N}(\theta,1)$ and $\theta\sim\mathcal{C}(0,1)$. The target
distribution is
$$
\pi(\theta|x)\propto{\pi(\theta)e^{-(x-\theta)^{2}/2}} = f_{x}(\theta)\,.
$$
A possible importance function is the prior distribution, $$g(\theta)=\frac{1}{\pi(1+\theta^{2})}$$
and for every $\theta\in\mathbb{R}$, $\frac{f_{x}(\theta)}{g(\theta)}\leq M$, when $M=\pi$.
Therefore, generating from the prior $g$ and accepting simulations according to the
Accept-Reject ratio provides a sample from
$\pi(\theta|x)$. The empirical mean of this sample is then a converging estimator of $\mathbb{E}^{\pi}(\theta|x).$
Furthermore, we directly deduce the estimation error for $\delta$.
A graphical evaluation of the convergence is given by the following \R program:
\begin{verbatim}
f=function(t){ exp(-(t-3)^2/2)/(1+t^2)}
M=pi
Nsim=2500
postdist=rep(0,Nsim)
for (i in 1:Nsim){
u=runif(1)*M
postdist[i]=rcauchy(1)
while(u>f(postdist[i])/dcauchy(postdist[i])){
u=runif(1)*M
postdist[i]=rcauchy(1)
}}
estdelta=cumsum(postdist)/(1:Nsim)
esterrd=sqrt(cumsum((postdist-estdelta)^2))/(1:Nsim)
par(mfrow=c(1,2))
C1=matrix(c(estdelta,estdelta+2*esterrd,estdelta-2*esterrd),ncol=3)
matplot(C1,ylim=c(1.5,3),type="l",xlab="Iterations",ylab="")
plot(esterrd,type="l",xlab="Iterations",ylab="")
\end{verbatim}
\subsection{Exercise \ref{pb:smalltail}}
\begin{enumerate}
\renewcommand{\theenumi}{\alph{enumi}}
\item If $X \sim \mathcal{E}xp(1)$ then for $x \ge a$,
$$
\mathbb{P}[a + X < x] = \int_{0}^{x-a} \exp(-t)\,\text{d}t
= \int_{a}^{x} \exp(-t+a)\,\text{d}t = \mathbb{P}(Y < x)
$$
when $Y \sim\mathcal{E}xp^{+}(a,1)$.
\item If $ X \sim \chi^{2}_{3}$, then
\begin{align*}
\mathbb{P}(X>25)
&= \int_{25}^{+\infty} \frac{2^{-3/2}}{\Gamma(\frac{3}{2})}\,x^{1/2}\exp(-x/2)\,\text{d}x\\
&= \int_{12.5}^{+\infty} \frac{\sqrt{x}\,\exp(-12.5)}{\Gamma(\frac{3}{2})}\exp(-x+12.5)\,\text{d}x\,.
\end{align*}
The corresponding \R code
\begin{verbatim}
# (C.) Thomas Bredillet, 2009
h=function(x){ exp(-x)*sqrt(x)/gamma(3/2)}
X = rexp(10^4,1) + 12.5
I=exp(-12.5)*sqrt(X)/gamma(3/2)
estint=cumsum(I)/(1:10^4)
esterr=sqrt(cumsum((I-estint)^2))/(1:10^4)
plot(estint,xlab="Iterations",ty="l",lwd=2,
ylim=mean(I)+20*c(-esterr[10^4],esterr[10^4]),ylab="")
lines(estint+2*esterr,col="gold",lwd=2)
lines(estint-2*esterr,col="gold",lwd=2)
\end{verbatim}
gives an evaluation of the probability as $1.543\times 10^{-5}$ with a $10^{-8}$ error, to compare
with
\begin{verbatim}
> integrate(h,12.5,Inf)
1.544033e-05 with absolute error < 3.4e-06
> pchisq(25,3,low=F)
[1] 1.544050e-05
\end{verbatim}
Similarly, when $X \sim t_{5} $, then
$$
\mathbb{P}(X>50) = \int_{50}^{\infty} \dfrac{\Gamma(3)}{\sqrt{5\pi}\,\Gamma(5/2)\,
(1+\frac{t^2}{5})^{3}\,\exp(-t+50)}\,\exp(-t+50)\,\text{d}t
$$
and a corresponding \R code
\begin{verbatim}
# (C.) Thomas Bredillet, 2009
h=function(x){ 1/sqrt(5*pi)*gamma(3)/gamma(2.5)*1/(1+x^2/5)^3}
integrate(h,50,Inf)
X = rexp(10^4,1) + 50
I=1/sqrt(5*pi)*gamma(3)/gamma(2.5)*1/(1+X^2/5)^3*1/exp(-X+50)
estint=cumsum(I)/(1:10^4)
esterr=sqrt(cumsum((I-estint)^2))/(1:10^4)
plot(estint,xlab="Mean and error range",type="l",lwd=2,
ylim=mean(I)+20*c(-esterr[10^4],esterr[10^4]),ylab="")
lines(estint+2*esterr,col="gold",lwd=2)
lines(estint-2*esterr,col="gold",lwd=2)
\end{verbatim}
As seen on the graph, this method induces jumps in the convergence patterns. Those jumps are indicative of
variance problems, as is to be expected since the estimator does not have a finite variance in this case. The value
returned by this approach differs from alternative evaluations:
\begin{verbatim}
> mean(I)
[1] 1.529655e-08
> sd(I)/10^2
[1] 9.328338e-10
> integrate(h,50,Inf)
3.023564e-08 with absolute error < 2e-08
> pt(50,5,low=F)
[1] 3.023879e-08
\end{verbatim}
and cannot be trusted.
\item {\bf Warning: There is a missing line in the text of this question, which should read:}
\begin{rema}
\noindent Explore the gain in efficiency from this method. Take $a=4.5$ in part (a) and run an
experiment to determine how many normal $\mathcal{N}(0,1)$ random variables would be needed to calculate $P(Z > 4.5)$
to the same accuracy obtained from using $100$ random variables in this importance sampler.
\end{rema}
If we use the representation
$$
\mathbb{P}(Z>4.5) = \int_{4.5}^\infty \varphi(z)\,\text{d}z = \int_0^\infty \varphi(x+4.5)
\exp(x)\exp(-x)\,\text{d}x\,,
$$
the approximation based on $100$ realisations from an $\mathcal{E}xp(1)$ distribution, $x_1,\ldots,x_{100}$, is
$$
\frac{1}{100}\,\sum_{i=1}^{100} \varphi(x_i+4.5) \exp(x_i)
$$
and the \R code
\begin{verbatim}
> x=rexp(100)
> mean(dnorm(x+4.5)*exp(x))
[1] 2.817864e-06
> var(dnorm(x+4.5)*exp(x))/100
[1] 1.544983e-13
\end{verbatim}
shows that the variance of the resulting estimator is about $10^{-13}$. A simple simulation of a normal sample of size $m$ and the
resulting accounting of the portion of the sample above $4.5$ leads to a binomial estimator with a variance of $\mathbb{P}(Z>4.5)
\mathbb{P}(Z<4.5)/m$, which results in a lower bound
$$
m \ge \mathbb{P}(Z>4.5)\, \mathbb{P}(Z<4.5) \big/ 1.5\times 10^{-13} \approx 2.3\times 10^{7}\,,
$$
i.e.~more than twenty million simulations.
\end{enumerate}
\subsection{Exercise \ref{pb:fitz}}
For the three choices, the importance weights are easily computed:
\begin{verbatim}
x1=sample(c(-1,1),10^4,rep=T)*rexp(10^4)
w1=exp(-sqrt(abs(x1)))*sin(x1)^2*(x1>0)/(.5*dexp(abs(x1)))
x2=rcauchy(10^4)*2
w2=exp(-sqrt(abs(x2)))*sin(x2)^2*(x2>0)/(dcauchy(x2/2)/2)
x3=rnorm(10^4)
w3=exp(-sqrt(abs(x3)))*sin(x3)^2*(x3>0)/dnorm(x3)
\end{verbatim}
They can be evaluated in many ways, from
\begin{verbatim}
boxplot(as.data.frame(cbind(w1,w2,w3)))
\end{verbatim}
to computing the effective sample size \verb=1/sum((w1/sum(w1))^2)= introduced in Example \ref{ex:probit}.
The preferable choice is then $g_1$. The estimated sizes are given by
\begin{verbatim}
> 4*10^6*var(x1*w1/sum(w1))/mean(x1*w1/sum(w1))^2
[1] 10332203
> 4*10^6*var(x2*w2/sum(w2))/mean(x2*w2/sum(w2))^2
[1] 43686697
> 4*10^6*var(x3*w3/sum(w3))/mean(x3*w3/sum(w3))^2
[1] 352952159
\end{verbatim}
again showing the appeal of using the double exponential proposal. (Note that efficiency could be
doubled by considering the absolute values of the simulations.)\\
\subsection{Exercise \ref{pb:top}}
\begin{enumerate}
\renewcommand{\theenumi}{\alph{enumi}}
\item With a positive density $g$ and the representation
$$
m(x) = \int_{\Theta}f(x|\theta)\dfrac{\pi(\theta)}{g(\theta)}g(\theta)\,\text{d}\theta\,,
$$
we can simulate $\theta_i$'s from $g$ to approximate $m(x)$ with
$$
\frac{1}{n}\sum_{i=1}^{n} \dfrac{f(x|\theta_{i})\pi(\theta_{i})}{g(\theta_{i})}\,.
$$
\item When $ g(\theta) = \pi(\theta|x) = f(x|\theta)\pi(\theta)/K $, then, for a sample $\theta_1,\ldots,\theta_n\sim g$,
$$
\frac{1}{n}\sum_{i=1}^{n} \dfrac{f(x|\theta_{i})\pi(\theta_{i})}{f(x|\theta_{i})\pi(\theta_{i})/K} = K
$$
and the estimator returns the normalising constant exactly. If the normalising constant is unknown, we
must use instead the self-normalising version \eqref{eq:Gby}.
\item Since
$$
\int_{\Theta} {\tau(\theta) \over f(x|\theta) \pi(\theta)} \pi(\theta|x) \text{d}\theta =
\int_{\Theta} {\tau(\theta) \over f(x|\theta) \pi(\theta)} \dfrac{f(x|\theta) \pi(\theta)}{m(x)}
\text{d}\theta = \dfrac{1}{m(x)}\,,
$$
we have an unbiased estimator of $1/m(x)$ based on simulations from the posterior,
$$
{1\over T} \sum_{t=1}^T {\tau(\theta_t^*) \over f(x|\theta_t^*) \pi(\theta_t^*)}
$$
and hence a converging (if biased) estimator of $m(x)$. This estimator of the marginal density can
then be seen as a harmonic mean estimator, but also as an importance sampling estimator \citep{robert:marin:2010}.
\end{enumerate}
\subsection{Exercise \ref{pb:margin}}
{\bf Warning: There is a typo in question b, which should read}
\begin{rema}
\noindent Let $X|Y=y \sim \CG (1,y)$ and $Y \sim \CE xp(1)$.
\end{rema}
\begin{enumerate}
\renewcommand{\theenumi}{\alph{enumi}}
\item If $(X_i, Y_i) \sim f_{XY}(x,y)$, the Strong Law of Large Numbers tells us that
\begin{displaymath}
\lim_n {1\over n}
\sum_{i=1}^n \frac{f_{XY}(x^\ast, y_i) w(x_i)}{f_{XY}(x_i, y_i)}
= \int \int \frac{f_{XY}(x^\ast, y) w(x)}{f_{XY}(x, y)} f_{XY}(x,y) \text{d}x \text{d}y.
\end{displaymath}
Now cancel $f_{XY}(x,y)$ and use the fact that $\int w(x)dx=1$ to show
$$
\int \int \frac{f_{XY}(x^\ast, y) w(x)}{f_{XY}(x, y)} f_{XY}(x,y) \text{d}x \text{d}y=\int f_{XY}(x^\ast, y) dy= f_X(x^\ast).
$$
\item The exact marginal is
$$
\int \left[y e^{-yx}\right] e^{-y} dy = \int y^{2-1} e^{-y(1+x)} dy = \frac{\Gamma(2)}{(1+x)^2} = \frac{1}{(1+x)^2}.
$$
We tried the following \R version of Monte Carlo marginalization:
\begin{verbatim}
nsim=10^4
X=rep(0,nsim)
Y=rep(0,nsim)
for (i in 1:nsim){
Y[i]=rexp(1)
X[i]=rgamma(1,1,rate=Y[i])
}
MCMarg=function(x,X,Y){
return(mean((dgamma(x,1,rate=Y)/dgamma(X,1,
rate=Y))*dgamma(X,7,rate=3)))
}
True=function(x)(1+x)^(-2)
\end{verbatim}
which uses a $\mathcal{G}a(7,3)$ distribution to marginalize. It works ok, as you
can check by looking at the plot
\begin{verbatim}
> xplot=seq(0,5,.05);plot(xplot,MCMarg(xplot,X,Y)-True(xplot))
\end{verbatim}
\item Choosing $w(x) = f_{X}(x)$ leads to the estimator
\begin{align*}
\dfrac{1}{n} \sum_{i=1}^n \dfrac{f_{XY}(x^\ast, y_i) f_X(x_i)}{f_{XY}(x_i, y_i)}
&=
\dfrac{1}{n} \sum_{i=1}^n \dfrac{f_X(x^\ast)f_{Y|X}(y_i|x^\ast) f_X(x_i)}{f_X(x_i)f_{Y|X}(y_i|x_i)}
\\&= f_X(x^\ast)\,
\dfrac{1}{n} \sum_{i=1}^n \dfrac{f_{Y|X}(y_i|x^\ast)}{f_{Y|X}(y_i|x_i)}
\end{align*}
which produces $f_X(x^\ast)$ modulo an estimate of $1$. If we decompose the variance of the estimator
in terms of
$$
\text{var}\left\{\mathbb{E}\left[\left.\dfrac{f_{XY}(x^\ast, y_i) w(x_i)}{f_{XY}(x_i, y_i)}\right|x_i\right]\right\}+
\mathbb{E}\left\{\text{var}\left[\left.\dfrac{f_{XY}(x^\ast, y_i) w(x_i)}{f_{XY}(x_i, y_i)}\right|x_i\right]\right\}\,,
$$
the first term is
\begin{align*}
\mathbb{E}\left[\left.\dfrac{f_{XY}(x^\ast, y_i) w(x_i)}{f_{XY}(x_i, y_i)}\right|x_i\right]&=
f_X(x^\ast)\mathbb{E}\left[\left.\dfrac{f_{Y|X}(y_i|x^\ast)}{f_{Y|X}(y_i|x_i)}\right|x_i\right]\,\dfrac{w(x_i)}{f_X(x_i)}\\
&= f_X(x^\ast)\dfrac{w(x_i)}{f_X(x_i)}
\end{align*}
which has zero variance if $w(x) = f_{X}(x)$. If we apply a calculus of variations argument to the whole quantity, we
end up with
$$
w(x) \propto f_X(x) \bigg/ \int \dfrac{f^2_{Y|X}(y|x^\ast)}{f_{Y|X}(y|x)}\,\text{d}y
$$
minimizing the variance of the resulting estimator. So it is likely $f_X$ is {\em not} optimal...
\end{enumerate}
\chapter{Monte Carlo Optimization}
\subsection{Exercise \ref{exo:smplmix}}
This is straightforward in \R
\begin{verbatim}
#mu1, mu2, lli and like() assumed already defined as in the example
par(mfrow=c(1,2),mar=c(4,4,1,1))
image(mu1,mu2,-lli,xlab=expression(mu[1]),ylab=expression(mu[2]))
contour(mu1,mu2,-lli,nle=100,add=T)
Nobs=400
da=rnorm(Nobs)+2.5*sample(0:1,Nobs,rep=T,prob=c(1,3))
for (i in 1:250)
for (j in 1:250)
lli[i,j]=like(c(mu1[i],mu2[j]))
image(mu1,mu2,-lli,xlab=expression(mu[1]),ylab=expression(mu[2]))
contour(mu1,mu2,-lli,nle=100,add=T)
\end{verbatim}
Figure \ref{fig:compamix} shows that the log-likelihood surfaces are quite comparable, despite being
based on different samples. Therefore the impact of allocating $100$ and $300$ points to both components,
respectively, instead of the random $79$ and $321$ in the current realisation, is inconsequential.
\begin{figure}
\centerline{\includegraphics[width=\textwidth,height=5truecm]{mixcomp.jpg}}
\caption{\label{fig:compamix}
Comparison of two log-likelihood surfaces for the mixture model \eqref{eq:maxmix}
when the data is simulated with a fixed $100/300$ ratio in both components {\em (left)}
and when the data is simulated with a binomial $\mathcal{B}(400,1/4)$ random number of points
on the first component.}
\end{figure}
\subsection{Exercise \ref{ex:simpleMCO}}
{\bf Warning: as written, this problem has no simple solution! The constraint should be replaced with}
\begin{rema}
$$
x^2(1+\sin(y/3)\cos(8x))+y^2(2+\cos(5x)\cos(8y)) \le 1\,,
$$
\end{rema}
We need to find a lower bound on the function of $(x,y)$. The coefficient of $y^2$ is obviously bounded
from below by $1$, while the coefficient of $x^2$ is positive. Since the function is then bounded from below by $y^2$,
the constraint implies that $y^2<1$, hence that $\sin(y/3)>\sin(-1/3)>-.33$ and that the coefficient of $x^2$ is at
least $1-.33=.67$. Therefore, a lower bound on the function is $0.67x^2+y^2$.
If we simulate uniformly over the ellipse $0.67x^2+y^2<1$, we can subsample the points that satisfy the constraint.
Simulating the uniform distribution on $0.67x^2+y^2<1$ amounts to simulating the uniform distribution over the unit
disk $z^2+y^2<1$ and rescaling $z$ into $x=z/\sqrt{0.67}$.
\begin{verbatim}
theta=runif(10^5)*2*pi
rho=sqrt(runif(10^5))   #sqrt for uniformity over the disk
xunif=rho*cos(theta)/sqrt(.67)
yunif=rho*sin(theta)
plot(xunif,yunif,pch=19,cex=.4,xlab="x",ylab="y")
const=(xunif^2*(1+sin(yunif/3)*cos(xunif*8))+
yunif^2*(2+cos(5*xunif)*cos(8*yunif))<1)
points(xunif[const],yunif[const],col="cornsilk2",pch=19,cex=.4)
\end{verbatim}
While the ellipse is larger than the region of interest, Figure \ref{fig:alien} shows that it is
reasonably efficient. The performance of the method is given by \verb+sum(const)/10^5+, the
empirical proportion of accepted points.
\begin{figure}
\centerline{\includegraphics[width=.75\textwidth]{alien.jpg}}
\caption{\label{fig:alien}
Simulation of a uniform distribution over a complex domain via uniform simulation over a
simpler encompassing domain, based on $10^5$ simulations.}
\end{figure}
\subsection{Exercise \ref{exo:stogramix}}
Since the log-likelihood of the mixture model in Example \ref{ex:maxmix} has been defined by
\begin{verbatim}
#minus the log-likelihood function
like=function(mu){
-sum(log((.25*dnorm(da-mu[1])+.75*dnorm(da-mu[2]))))
}
\end{verbatim}
in the {\sf mcsm} package, we can reproduce the \R program of Example \ref{ex:find_max3} with the
function $h$ now defined as \verb+like+. The difference with the function $h$ of Example \ref{ex:find_max3}
is that the mixture log-likelihood is more variable and thus the factors $\alpha_j$ and $\beta_j$ need to
be calibrated against divergent behaviours. The following figure shows the impact of the different choices
$(\alpha_j,\beta_j)=(.01/\log(j+1),1/\log(j+1)^{.5})$,
$(\alpha_j,\beta_j)=(.1/\log(j+1),1/\log(j+1)^{.5})$,
$(\alpha_j,\beta_j)=(.01/\log(j+1),1/\log(j+1)^{.1})$,
$(\alpha_j,\beta_j)=(.1/\log(j+1),1/\log(j+1)^{.1})$,
on the convergence of the gradient optimization. In particular, the second choice exhibits a particularly
striking behavior where the sequence of $(\mu_1,\mu_2)$ skirts the true mode of the likelihood in a circular
manner. (The stopping rule used in the \R program is \verb@(diff<10^(-5))@.)
\begin{figure}
\centerline{\includegraphics[width=.95\textwidth]{mixrad.jpg}}
\caption{\label{fig:mixrad}
Four stochastic gradient paths for four different choices
$(\alpha_j,\beta_j)=(.01/\log(j+1),1/\log(j+1)^{.5})$ (u.l.h.s.),
$(\alpha_j,\beta_j)=(.1/\log(j+1),1/\log(j+1)^{.5})$ (u.r.h.s.),
$(\alpha_j,\beta_j)=(.01/\log(j+1),1/\log(j+1)^{.1})$ (l.l.h.s.),
$(\alpha_j,\beta_j)=(.1/\log(j+1),1/\log(j+1)^{.1})$ (l.r.h.s.).}
\end{figure}
\subsection{Exercise \ref{exo:freak}}
The \R function \verb+SA+ provided in Example \ref{ex:mix_sa} can be used in the
following \R program to test whether or not the final value is closer to the main mode
or to the secondary mode:
\begin{verbatim}
modes=matrix(0,ncol=2,nrow=100)
prox=rep(0,100)
for (t in 1:100){
res=SA(mean(da)+rnorm(2))
modes[t,]=res$the[res$ite,]
diff=modes[t,]-c(0,2.5)
duff=modes[t,]-c(2.5,0)
prox[t]=sum(t(diff)%*%diff)<sum(t(duff)%*%duff)
}
\end{verbatim}
For each new temperature schedule, the function \verb0SA0 must be modified accordingly (for instance by
the on-line change \verb+SA=vi(SA)+). Figure \ref{fig:SAvabien} illustrates the output of an experiment
for four different schedules.
\begin{figure}
\centerline{\includegraphics[width=.95\textwidth]{SAva.jpg}}
\caption{\label{fig:SAvabien}
Four simulated annealing outcomes corresponding to the temperature schedules
$T_t=1/\log(1+t)$,
$T_t=1/10\log(1+t)$,
$T_t=1/10\sqrt{\log(1+t)}$,
and $T_t=(.95)^{1+t}$, based on $100$ replications. (The percentage of recoveries of the main mode
is indicated in the title of each graph.)}
\end{figure}
\subsection{Exercise 5.9}
In principle, $Q(\theta^\prime|\theta,\mathbf{x})$ should also involve the logarithms
of $1/4$ and $3/4$, multiplied by $\sum Z_i$ and $\sum (1-Z_i)$, respectively.
But, due to the logarithmic transform, this additional term does not involve the parameter
$\theta=(\mu_1,\mu_2)$ and can thus be removed from $Q(\theta^\prime|\theta,\mathbf{x})$
with no impact on the optimization problem.
\subsection{Exercise \ref{exo:tan1}}
{\bf Warning: there is a typo in Example \ref{ex:tan_wei}. The EM sequence should be
$$
\hat\theta_1 = \displaystyle{\left\{{\theta_0\,x_1\over 2+\theta_0}
+ x_4\right\}}\bigg/\displaystyle{\left\{{\theta_0\,x_1\over 2+\theta_0} +x_2+x_3+x_4\right\}} \;.
$$
instead of having $x_4$ in the denominator.}
Note first that some $1/4$ factors have been removed from every term as they were not
contributing to the likelihood maximisation. Given a starting point $\theta_0$, the
EM sequence will always be the same.
\begin{verbatim}
x=c(58,12,9,13)
n=sum(x)
start=EM=cur=diff=.1
while (diff>.001){ #stopping rule
EM=c(EM,((cur*x[1]/(2+cur))+x[4])/((cur*x[1]/(2+cur))+x[2]+x[3]+x[4]))
diff=abs(cur-EM[length(EM)])
cur=EM[length(EM)]
}
\end{verbatim}
The Monte Carlo EM version creates a sequence based on a binomial simulation:
\begin{verbatim}
M=10^2
MCEM=matrix(start,ncol=length(EM),nrow=500)
for (i in 2:length(EM)){
MCEM[,i]=1/(1+(x[2]+x[3])/(x[4]+rbinom(500,M*x[1],
prob=1/(1+2/MCEM[,i-1]))/M))
}
plot(EM,type="l",xlab="iterations",ylab="MCEM sequences")
upp=apply(MCEM,2,max);dow=apply(MCEM,2,min)
polygon(c(1:length(EM),length(EM):1),c(upp,rev(dow)),col="grey78")
lines(EM,col="gold",lty=2,lwd=2)
\end{verbatim}
and the associated graph shows a range of values that contains the true EM sequence. Increasing
\verb=M= in the above \R program obviously reduces the range.
\subsection{Exercise \ref{exo:maxmim}}
The \R function for plotting the (log-)likelihood surface associated
with \eqref{eq:maxmix} was provided in Example \ref{ex:maxmix}.
We thus simply need to apply this function to the new sample,
resulting in an output like Figure \ref{fig:fakmix}, with a single mode
instead of the usual two modes.
\begin{figure}
\centerline{\includegraphics[width=.8\textwidth]{fakmix.jpg}}
\caption{\label{fig:fakmix}
Log-likelihood surface of a mixture model applied to a five component mixture
sample of size $400$.}
\end{figure}
\subsection{Exercise \ref{pb:gyr_tre}}
{\bf Warning: there is a typo in question a where the formula should involve capital
$Z_i$'s, namely}
\begin{rema}
$$
P(Z_i=1) = 1 - P(Z_i=2) = { p \lambda \exp (-\lambda x_i) \over
p \lambda \exp (-\lambda x_i) +(1-p) \mu \exp (-\mu x_i)}.
$$
\end{rema}
\begin{enumerate}
\renewcommand{\theenumi}{\alph{enumi}}
\item The likelihood is
$$L(\theta|{\bf x})=\prod_{i=1}^{12}{[p\lambda e^{-\lambda x_i}+(1-p)\mu e^{-\mu x_i}]},$$
and the complete-data likelihood is
$$
L^c(\theta|{\bf x},{\bf z})=\prod_{i=1}^{12}{[p\lambda e^{-\lambda x_i}\mathbb{I}_{(z_i=1)}+(1-p)\mu e^{-\mu x_i}\mathbb{I}_{(z_i=2)}]},
$$
where $\theta=(p,\lambda,\mu)$ denotes the parameter, using the same arguments as in Exercise
\ref{pb:7.4.5.1}.
\item The EM algorithm relies on the optimization of the expected log-likelihood
\begin{align*}
Q(\theta|\hat\theta_{(j)},{\bf x})&=\sum_{i=1}^{12} \left[\log{(p\lambda e^{-\lambda x_i})}
P_{\hat\theta_{(j)}}(Z_i=1|x_i)\right.\\
&\left.\quad +\log{((1-p)\mu e^{-\mu x_i})}P_{\hat\theta_{(j)}}(Z_i=2|x_i)\right].
\end{align*}
The arguments of the maximization problem are
$$
\left\{%
\begin{array}{lll}
\hat p_{(j+1)}=\hat P/12\\
\hat\lambda_{(j+1)}=\hat S_1/\hat P\\
\hat\mu_{(j+1)}=\hat S_2/\hat P,
\end{array}%
\right.
$$
where
$$
\left\{%
\begin{array}{lll}
\hat P=\sum_{i=1}^{12}{P_{\hat\theta_{(j)}}(Z_i=1|x_i)}\\\\
\hat S_1=\sum_{i=1}^{12}{x_iP_{\hat\theta_{(j)}}(Z_i=1|x_i)}\\\\
\hat S_2=\sum_{i=1}^{12}{x_iP_{\hat\theta_{(j)}}(Z_i=2|x_i)}\\
\end{array}%
\right.
$$
with
$$
P_{\hat\theta_{(j)}}(Z_i=1|x_i)
=\frac{\hat p_{(j)}\hat\lambda_{(j)}e^{-\hat\lambda_{(j)}x_i}}{\hat
p_{(j)}\hat\lambda_{(j)}e^{-\hat\lambda_{(j)}x_i}+(1-\hat
p_{(j)})\hat\mu_{(j)}e^{-\hat\mu_{(j)}x_i}}\,.
$$
An \R implementation of the algorithm is then
\begin{verbatim}
x=c(0.12,0.17,0.32,0.56,0.98,1.03,1.10,1.18,1.23,1.67,1.68,2.33)
EM=cur=c(.5,jitter(mean(x),10),jitter(mean(x),10))
diff=1
while (diff*10^5>1){
probs=1/(1+(1-cur[1])*dexp(x,cur[3])/(cur[1]*dexp(x,cur[2])))
phat=sum(probs);S1=sum(x*probs);S2=sum(x*(1-probs))
EM=rbind(EM,c(phat/12,S1/phat,S2/phat))
diff=sum(abs(cur-EM[dim(EM)[1],]))
cur=EM[dim(EM)[1],]
}
\end{verbatim}
and it always produces a single component mixture.
\end{enumerate}
\subsection{Exercise \ref{pb:EMCensored}}
{\bf Warning: Given the notations of Example \ref{ex:EMCensored2}, the function
$\phi$ in question b should be written $\varphi$...}
\begin{enumerate}
\renewcommand{\theenumi}{\alph{enumi}}
\item The question is a bit vague in that the density of the missing data $(Z_{n-m+1},\ldots,Z_n)$ is a normal
${\cal N}(\theta, 1)$ density if we do not condition on $\by$. Conditional upon $\by$, the missing observations
$Z_i$ are truncated in $a$, i.e.~we know that they are larger than $a$. The conditional distribution of the $Z_i$'s
is therefore a normal ${\cal N}(\theta, 1)$ distribution truncated in $a$, with density
$$
f(z|\theta,y) = \dfrac{\exp\{-(z-\theta)^2/2\}}{\sqrt{2\pi}\,P_\theta(Z>a)}\,\mathbb{I}_{z\ge a}
= \dfrac{\varphi(z-\theta)}{1-\Phi(a-\theta)}\,\mathbb{I}_{z\ge a}\,,
$$
where $\varphi$ and $\Phi$ are the normal pdf and cdf, respectively.
\item We have
\begin{align*}
\BE_{\theta}[Z_i|Y_i] &= \int_a^\infty z\,\dfrac{\varphi(z-\theta)}{1-\Phi(a-\theta)}\,\text{d}z\\
&= \theta + \int_a^\infty (z-\theta)\,\dfrac{\varphi(z-\theta)}{1-\Phi(a-\theta)}\,\text{d}z\\
&= \theta + \int_{a-\theta}^\infty y\,\dfrac{\varphi(y)}{1-\Phi(a-\theta)}\,\text{d}y\\
&= \theta + \left[-\varphi(y)\right]_{a-\theta}^\infty\\
&= \theta + \frac{\varphi(a-\theta)}{1-\Phi(a-\theta)},
\end{align*}
since $\varphi^\prime(x)=-x\varphi(x)$.
\end{enumerate}
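A minimal EM sketch based on question b is as follows, with hypothetical data (a
$\mathcal{N}(1,1)$ sample of size $n=30$, right-censored at $a=1.5$) and assuming
the complete-data MLE is the average of the observed and imputed values:
\begin{verbatim}
set.seed(1)
a=1.5;n=30
z=rnorm(n,mean=1)
y=z[z<=a];m=length(y)  #observed (uncensored) values
theta=mean(y);diff=1
while (diff>10^(-6)){
  EZ=theta+dnorm(a-theta)/(1-pnorm(a-theta)) #E[Z|theta], question b
  thetanew=(m*mean(y)+(n-m)*EZ)/n            #complete-data average
  diff=abs(thetanew-theta);theta=thetanew
}
theta                  #EM estimate of theta
\end{verbatim}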
\subsection{Exercise \ref{pb:uniroot}}
Running \verb+uniroot+ on both intervals
\begin{verbatim}
> h=function(x){(x-3)*(x+6)*(1+sin(60*x))}
> uniroot(h,int=c(-2,10))
$root
[1] 2.999996
$f.root
[1] -6.853102e-06
> uniroot(h,int=c(-8,1))
$root
[1] -5.999977
$f.root
[1] -8.463209e-06
\end{verbatim}
misses all the solutions to $1+\sin(60x)=0$, i.e.~the points $x=(3+4k)\pi/120$ ($k\in\mathbb{Z}$), where $h$ also vanishes.
\subsection{Exercise \ref{pb:used_up1}}
{\bf Warning: this Exercise duplicates Exercise \ref{exo:tan1} and should not
have been included in the book!}\\
\chapter{Random Variable Generation}
\subsection{Exercise \ref{pb:discretePIT}}
For a random variable $X$ with cdf $F$, if
\[
F^{-}(u)=\inf\{ x;\,F(x)\geq u\},
\]
then, for $U\sim\mathcal{U}[0,1]$, for all $y \in \mathbb{R}$,
\begin{eqnarray*}
\mathbb{P}(F^{-}(U)\leq y)&=&\mathbb{P}(\inf\{ x;\,F(x)\geq U\}\leq y)\\
&=&\mathbb{P}(F(y)\geq U)\qquad\textrm{ as $F$ is non-decreasing}\\
&=&F(y)\qquad\qquad\textrm{ as $U$ is uniform.}
\end{eqnarray*}
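This representation also covers discrete distributions; for instance, a (crude)
generator of Poisson $\mathcal{P}(3)$ variates based on the generalised inverse is
\begin{verbatim}
lambda=3
cp=ppois(0:100,lambda)  #cdf over a large enough range
X=rep(0,10^4)
for (i in 1:10^4) X[i]=sum(cp<runif(1)) #smallest x with F(x)>=u
mean(X)                 #should be close to lambda
\end{verbatim}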
\subsection{Exercise \ref{pb:boxmuller}}
\begin{enumerate}
\renewcommand{\theenumi}{\alph{enumi}}
\item It is easy to see that $\BE[U_1]=0$, and a standard calculation shows that $\text{var}(U_1)= 1/12$, from which the result follows.
\item Histograms show that the tails of the sum of $12$ uniforms are not heavy enough. Consider the code
\begin{verbatim}
nsim=10000
u1=runif(nsim)
u2=runif(nsim)
X1=sqrt(-2*log(u1))*cos(2*pi*u2)
X2=sqrt(-2*log(u1))*sin(2*pi*u2)
U=array(0,dim=c(nsim,1))
for(i in 1:nsim)U[i]=sum(runif(12,-.5,.5))
par(mfrow=c(1,2))
hist(X1)
hist(U)
a=3
mean(X1>a)
mean(U>a)
mean(rnorm(nsim)>a)
1-pnorm(a)
\end{verbatim}
\item You should see the difference in the tails of the histogram. Also, the numerical output from the above is
\begin{verbatim}
[1] 0.0016
[1] 5e-04
[1] 0.0013
[1] 0.001349898
\end{verbatim}
where we see that the Box--Muller and \verb+rnorm+ generators are very good when compared with the exact \verb+pnorm+.
Try this calculation for a range of \verb+nsim+ and \verb+a+.
\end{enumerate}
\subsection{Exercise \ref{exo:acceP}}
For $U\sim\mathcal{U}_{[0,1]}$, $Y\sim g(y)$, and $X\sim f(x)$, such that
$f/g\leq M$, the acceptance condition in the Accept--Reject algorithm is that $U\leq f(Y)/(Mg(Y)).$
The probability of acceptance is thus
\begin{align*}
\mathbb{P}(U \leq f(Y)\big/ Mg(Y))&=\int_{-\infty}^{+\infty}\int_{0}^{\frac{f(y)}{Mg(y)}}\,\text{d}u\,g(y)\,\text{d}y\\
& =\int_{-\infty}^{+\infty}\frac{f(y)}{Mg(y)}g(y)\,\text{d}y\\
& =\frac{1}{M}\int_{-\infty}^{+\infty}f(y)\,\text{d}y\\
& =\frac{1}{M}\,.
\end{align*}
Assume $f/g$ is only known up to a normalising constant, i.e.
$f/g=k\,\tilde{f}/\tilde{g}$, with $\tilde{f}/\tilde{g}\leq\tilde{M}$,
$\tilde{M}$ being a well-defined upper bound different from $M$ because of the missing
normalising constants. Since $Y\sim g$,
\begin{align*}
\mathbb{P}(U \leq \tilde{f}(Y)\big/ \tilde{M}\tilde{g}(Y))
& =\int_{-\infty}^{+\infty}\int_{0}^{\frac{\tilde{f}(y)}{\tilde{M}\tilde{g}(y)}}\,\text{d}u\,g(y)\,\text{d}y\\
& =\int_{-\infty}^{+\infty}\frac{\tilde{f}(y)}{\tilde{M}\tilde{g}(y)}g(y)\,\text{d}y\\
& =\int_{-\infty}^{+\infty}\frac{f(y)}{k\tilde{M}g(y)}g(y)\,\text{d}y\\
& =\frac{1}{k\tilde{M}}\,.
\end{align*}
Therefore the missing constant is given by
$$
k=1\bigg/ \tilde{M}\,\mathbb{P}(U\leq\tilde{f}(Y)\big/ \tilde{M}\tilde{g}(Y))\,,
$$
which can be estimated from the empirical acceptance rate.
\subsection{Exercise \ref{exo:trueMax}}
The ratio is equal to
$$
\frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\,
\frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}\, x^{\alpha-a}\,(1-x)^{\beta-b}
$$
and it will not diverge at $x=0$ only if $a\le \alpha$ and at $x=1$ only if $b\le \beta$.
The maximum is attained for
$$
\frac{\alpha-a}{x^\star} = \frac{\beta-b}{1-x^\star}\,,
$$
i.e.~is
$$
M_{a,b}=\frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\,
\frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}\, \frac{(\alpha-a)^{\alpha-a}(\beta-b)^{\beta-b}}
{(\alpha-a+\beta-b)^{\alpha-a+\beta-b}}\,.
$$
The analytic study of this quantity as a function of $(a,b)$ is quite delicate but if we define
\begin{verbatim}
# log of M_{a,b}, up to the constant term in (alph,bet);
# alph and bet must be defined in the workspace
mab=function(a,b){
  lgamma(a)+lgamma(b)-lgamma(a+b)+(alph-a)*log(alph-a)+
  (bet-b)*log(bet-b)-(alph+bet-a-b)*log(alph+bet-a-b)}
\end{verbatim}
it is easy to see using \verb=contour= on a sequence of $a$'s and $b$'s that the maximum of
$M_{a,b}$ is achieved over integer values when $a=\lfloor \alpha \rfloor$ and
$b=\lfloor \beta \rfloor$.
\subsection{Exercise \ref{exo:inzabove}}
Given $\theta$, exiting the loop is driven by $X=x_0$, which indeed has a probability
$f(x_0|\theta)$ to occur. If $X$ is a discrete random variable, this is truly a probability, while,
if $X$ is a continuous random variable, this is zero. The distribution of the exiting $\theta$ is
then dependent on the event $X=x_0$ taking place, i.e.~is proportional to $\pi(\theta)f(x_0|\theta)$,
which is exactly $\pi(\theta|x_0)$.
\subsection{Exercise \ref{pb:hist}}
\begin{enumerate}
\renewcommand{\theenumi}{\alph{enumi}}
\item Try the \R code
\begin{verbatim}
nsim<-5000
n=25;p=.2;
cp=pbinom(c(0:n),n,p)
X=array(0,c(nsim,1))
for(i in 1:nsim){
u=runif(1)
X[i]=sum(cp<u)
}
hist(X,freq=F)
lines(1:n,dbinom(1:n,n,p),lwd=2)
\end{verbatim}
which produces a histogram and a mass function for the binomial $\mathcal{B}(25,.2)$.
To check timing, create the function
\begin{verbatim}
MYbinom<-function(s0,n0,p0){
cp=pbinom(c(0:n0),n0,p0)
X=array(0,c(s0,1))
for (i in 1:s0){
u=runif(1)
X[i]=sum(cp<u)
}
return(X)
}
\end{verbatim}
and use \verb+system.time(rbinom(5000,25,.2))+ and \verb+system.time(MYbinom(5000,25,.2))+
to see how much faster \R is.
\item Create the \R functions {\tt Wait} and {\tt Trans}:
\begin{verbatim}
Wait<-function(s0,alpha){
U=array(0,c(s0,1))
for (i in 1:s0){
u=runif(1)
while (u > alpha) u=runif(1)
U[i]=u
}
return(U)
}
Trans<-function(s0,alpha){
U=array(0,c(s0,1))
for (i in 1:s0) U[i]=alpha*runif(1)
return(U)
}
\end{verbatim}
Use \verb+hist(Wait(1000,.5))+ and \verb+hist(Trans(1000,.5))+ to see the
corresponding histograms. Vary $n$ and $\alpha$. Use the \verb+system.time+ command as in part a to see the timing.
In particular, \verb+Wait+ is very bad if $\alpha$ is small.
\end{enumerate}
\subsection{Exercise \ref{pb:pareto_gen}}
The cdf of the Pareto $\CP(\alpha)$ distribution is
$$
F(x)=1-x^{-\alpha}
$$
over $(1,\infty)$. Therefore, $F^{-1}(U)=(1-U)^{-1/\alpha}$, which is also
the $-1/\alpha$ power of a uniform variate.
\subsection{Exercise \ref{pb:specific}}
Define the \R functions
\begin{verbatim}
Pois1<-function(s0,lam0){
spread=3*sqrt(lam0)
t=round(seq(max(0,lam0-spread),lam0+spread,1))
prob=ppois(t,lam0)
X=rep(0,s0)
for (i in 1:s0){
u=runif(1)
X[i]=max(t[1],0)+sum(prob<u)-1
}
return(X)
}
\end{verbatim}
and
\begin{verbatim}
Pois2<-function(s0,lam0){
X=rep(0,s0)
for (i in 1:s0){
sum=0;k=1
sum=sum+rexp(1,lam0)
while (sum<1){ sum=sum+rexp(1,lam0);k=k+1}
X[i]=k
}
return(X)
}
\end{verbatim}
and then run the commands
\begin{verbatim}
> nsim=100
> lambda=3.4
> system.time(Pois1(nsim,lambda))
user system elapsed
0.004 0.000 0.005
> system.time(Pois2(nsim,lambda))
user system elapsed
0.004 0.000 0.004
> system.time(rpois(nsim,lambda))
user system elapsed
0 0 0
\end{verbatim}
for other values of \verb+nsim+ and \verb+lambda+. You will see that \verb@rpois@ is by far the best, with the exponential generator
(\verb#Pois2#) not being very good for large $\lambda$'s. Note also that \verb@Pois1@ is not appropriate for small $\lambda$'s since
it could then return negative values.
\subsection{Exercise \ref{pb:gammaAR}}
\begin{enumerate}
\renewcommand{\theenumi}{\alph{enumi}}
\item Since, if $X\sim \mathcal{G}a(\alpha,\beta)$, then $\beta X=\sum_{j=1}^{\alpha} \beta X_{j} \sim \mathcal{G}a(\alpha,1)$,
$\beta$ is the inverse of a scale parameter.
\item The Accept-Reject ratio is given by
$$
\dfrac{f(x)}{g(x)} \propto \dfrac{x^{n-1}\,e^{-x}}{\lambda\,e^{-\lambda x}}=\lambda^{-1} x^{n-1} e^{-(1-\lambda)x}\,.
$$
The maximum of this ratio is obtained for
$$
\dfrac{n-1}{x^\star} - (1-\lambda) = 0\,,\quad\text{i.e. for}\quad x^\star = \dfrac{n-1}{1-\lambda}\,.
$$
Therefore,
$$
M\propto \lambda^{-1} \left( \dfrac{n-1}{1-\lambda} \right)^{n-1} \,e^{-(n-1)}
$$
and this upper bound is minimised in $\lambda$ when $\lambda=1/n$ (an R sketch of the resulting algorithm is given at the end of this exercise).
\item If $g$ is the density of the $\mathcal{G}a(a,b)$ distribution and $f$ the
density of the $\mathcal{G}a(\alpha,1)$ distribution,
$$
g(x) = \frac{x^{a-1}e^{-bx}b^{a}}{\Gamma(a)} \quad\text{and}\quad f(x) = \frac{x^{\alpha-1}e^{-x}}{\Gamma(\alpha)}
$$
the Accept-Reject ratio is given by
$$
\dfrac{f(x)}{g(x)} = \dfrac{x^{\alpha-1}e^{-x} \Gamma(a)}{\Gamma(\alpha) b^{a} x^{a-1}e^{-bx}} \propto
b^{-a}x^{\alpha-a}e^{-x(1-b)} \,.
$$
Therefore,
$$
\dfrac{\partial}{\partial x} \dfrac{f}{g} = b^{-a} e^{-x(1-b)}x^{\alpha-a-1}\left\{(\alpha-a)-(1-b)x\right\}
$$
provides $x^\star = {\alpha-a}\big/{1-b} $ as the argument of the maximum of the ratio, since $\frac{f}{g} (0)= 0$.
The upper bound $M$ is thus given by
$$
M(a,b)=b^{-a}\left(\dfrac{\alpha-a}{1-b}\right)^{\alpha-a}e^{-\left(\frac{\alpha-a}{1-b}\right)(1-b)}
=b^{-a}\left(\frac{\alpha-a}{(1-b) e}\right)^{\alpha-a}\,.
$$
It obviously requires $b<1$ and $a<\alpha$.
\item {\bf Warning: there is a typo in the text of the first printing, it should be:}
\begin{rema}
Show that the maximum of $b^{-a}(1 - b)^{a-\alpha}$ is attained at $b = a/\alpha$, and hence the optimal choice of $b$
for simulating ${\cal{G}}a(\alpha,1)$ is $b=a/\alpha$, which gives the same mean for both ${\cal{G}}a(\alpha,1)$ and ${\cal{G}}a(a,b)$.
\end{rema}
With this modification, the maximum of $M(a,b)$ in $b$ is obtained by derivation, i.e.~for $b$ solution of
$$
\dfrac{a}{b}-\dfrac{\alpha-a}{1-b}=0\,,
$$
which leads to $b = a/\alpha$ as the optimal choice of $b$. Both ${\cal{G}}a(\alpha,1)$ and ${\cal{G}}a(a,a/\alpha)$ have the same mean $\alpha$.
\item Since
$$
M(a,a/\alpha) = (a/\alpha)^{-a}\left(\frac{\alpha-a}{(1-a/\alpha) e}\right)^{\alpha-a}
= (a/\alpha)^{-a} \left(\alpha/e\right)^{\alpha-a} = \alpha^\alpha e^{a-\alpha}\big/a^a\,,
$$
$M$ is decreasing in $a$ and the largest possible value is indeed $a=\lfloor \alpha \rfloor$.
\end{enumerate}
\subsection{Exercise \ref{pb:Norm-DEAR}}
The ratio $f/g$ is
$$
\dfrac{f(x)}{g(x)} = \dfrac{\exp\{-x^2/2\}/\sqrt{2\pi}}{\alpha\exp\{-\alpha|x|\}/2}
=\dfrac{\sqrt{2/\pi}}{\alpha}\,\exp\{\alpha|x|-x^2/2\}
$$
and it is maximal when $x=\pm\alpha$, so $M=\sqrt{2/\pi}\exp\{\alpha^2/2\}/\alpha$.
Taking the derivative of $\log M$ in $\alpha$ leads to the equation
$$
\alpha-\frac{1}{\alpha} =0\,,
$$
that is, indeed, to $\alpha=1$.
\subsection{Exercise \ref{pb:noncen_chi}}
{\bf Warning: There is a typo in this exercise, it should be:}
\begin{rema}
\begin{enumerate}
\renewcommand{\theenumi}{(\roman{enumi})}
\item a mixture representation (\ref{eq:mixture_def}), where
$g(x \vert y)$ is the density of $\chi_{p+2y}^{2}$ and $p(y)$ is the density of $\CP(\lambda/2)$, and
\item the sum of a $\chi_{p-1}^{2}$ random variable and the square of a ${\cal N}(\sqrt{\lambda},1)$.
\end{enumerate}
\begin{enumerate}
\renewcommand{\theenumi}{\alph{enumi}}
\item Show that both those representations hold.
\item Compare the corresponding algorithms that can be derived from these
representations among themselves and also with {\tt rchisq} for small and large values of $\lambda$.
\end{enumerate}
\end{rema}
If we use the definition of the noncentral chi squared distribution, $\chi_{p}^{2}(\lambda)$ as corresponding to
the distribution of the squared norm $||x||^2$ of a normal vector $x\sim\mathcal{N}_p(\theta,I_p)$
when $\lambda=||\theta||^2$, this distribution is invariant by rotation over the normal vector and it is therefore
the same as when $x\sim\mathcal{N}_p((0,\ldots,0,\sqrt{\lambda}),I_p)$, hence leading to the representation (ii),
i.e.~as a sum of a $\chi_{p-1}^{2}$ random variable and of the square of a ${\cal N}(||\theta||,1)$ variable.
Representation (i) holds by a regular mathematical argument based on the series expansion of the modified
Bessel function since the density of a non-central chi-squared distribution is
$$
f(x|\lambda) = {1\over 2} (x/\lambda)^{(p-2)/4} I_{(p-2)/2}(\sqrt{\lambda x}) e^{-(\lambda+x)/2}\,,
$$
where
$$
I_\nu (t) = \left({t\over 2}\right)^{\nu} \sum_{k=0}^\infty
{(t/2)^{2k} \over k! \Gamma(\nu+k+1)}.
$$
Since \verb=rchisq= includes an optional non-centrality parameter \verb=nc=, it can be used
to simulate directly a noncentral chi-squared distribution. The two scenarios (i) and (ii) lead
to the following \R codes.
\begin{verbatim}
> system.time({x=rchisq(10^6,df=5,ncp=3)})
user system elapsed
> system.time({x=rchisq(10^6,df=4)+rnorm(10^6,mean=sqrt(3))^2})
user system elapsed
1.700 0.056 1.757
> system.time({x=rchisq(10^6,df=5+2*rpois(10^6,3/2))})
user system elapsed
1.168 0.048 1.221
\end{verbatim}
Repeated experiments with other values of $p$ and $\lambda$ lead to the same conclusion that the
Poisson mixture representation is the fastest.\\
\subsection{Exercise \ref{pb:bayesAR}}
Since the ratio $\pi(\theta|{\mathbf x})/\pi(\theta)$ is the likelihood, it is obvious that
the optimal bound $M$ is the likelihood function evaluated at the MLE (assuming $\pi$ is a true
density and not an improper prior).
Simulating from the posterior can then be done via
\begin{verbatim}
theta0=3;n=100;N=10^4
x=rnorm(n)+theta0
lik=function(the){prod(dnorm(x,mean=the))}
M=optimise(f=function(the){prod(dnorm(x,mean=the))},
int=range(x),max=T)$obj
theta=rcauchy(N)
res=(M*runif(N)>apply(as.matrix(theta),1,lik));print(sum(res)/N)
while (sum(res)>0){le=sum(res);theta[res]=rcauchy(le)
res[res]=(M*runif(le)>apply(as.matrix(theta[res]),1,lik))}
\end{verbatim}
The rejection rate is given by $0.9785$, which means that the Cauchy proposal is quite inefficient.
An empirical confidence (or credible) interval at the level $95\%$ on $\theta$ is $(2.73,3.799)$.
Repeating the experiment with a larger sample size $n$ leads (after a while) to the narrower
interval $(2.994,3.321)$; there is therefore an improvement.
\section{Introduction}
Since it was initiated by the Brazilian Workers' Party~\cite{wainwright2003making} in the 1990s, participatory budgeting (PB)~\cite{cabannes2004participatory} has been gaining increased attention all over the world.
Essentially, the idea behind PB is a direct democracy approach in which the way to utilize a common budget (most usually a municipality budget) is being decided upon by the stakeholders themselves (most usually city residents).
In particular, given a set of proposed projects with their costs, and a designated total budget to be used, voters express their preferences over the projects and then an aggregation method takes the votes and decides upon a subset of the projects to be implemented.
As research on PB from the perspective of computational social choice is accordingly increasing (see, e.g., the survey of Aziz and Shah~\cite{aziz2020participatory}, as well as some specific recent papers on PB~\cite{talmon2019framework,pbsub,goel2015knapsack,aziz2017proportionally}), there is a need for publicly available datasets;
this is the goal behind the \emph{PArticipatory BUdgeting LIBrary} (in short, \emph{Pabulib}), which is available at \url{http://pabulib.org}.
The main aim of this document is to define a data format that is used in Pabulib.
\section{The \texttt{.pb} File Format}
The data concerning one instance of participatory budgeting is to be stored in a single UTF-8 text file with the extension \texttt{.pb}.
The content of the file is to be divided into three sections:
\begin{itemize}
\item \textbf{META} section with general metadata like the country, budget, number of votes.
\item \textbf{PROJECTS} section with project costs and possibly some other metadata regarding projects, like category, target, etc.
\item \textbf{VOTES} section with votes, each of which can be of one of four types: approval, ordinal, cumulative, or scoring; and optionally with metadata regarding voters, like age, sex, etc.
\end{itemize}
\section{A Simple Example}
\begin{Verbatim}[frame=single]
META
key; value
description; Municipal PB in Wieliczka
country; Poland
unit; Wieliczka
instance; 2020
num_projects; 5
num_votes; 10
budget; 2500
rule; greedy
vote_type; approval
min_length; 1
max_length; 3
PROJECTS
project_id; cost; category
1; 600; culture, education
2; 800; sport
4; 1400; culture
5; 1000; health, sport
7; 1200; education
VOTES
voter_id; age; sex; vote
1; 34; f; 1,2,4
2; 51; m; 1,2
3; 23; m; 2,4,5
4; 19; f; 5,7
5; 62; f; 1,4,7
6; 54; m; 1,7
7; 49; m; 5
8; 27; f; 4
9; 39; f; 2,4,5
10; 44; m; 4,5
\end{Verbatim}
\section{Detailed Description}
The \textbf{bold} part is obligatory.
\subsection{Section 1: META}
\begin{itemize}
\item \bftt{key}
\begin{itemize}
\item \bftt{description}
\item \bftt{country}
\item \bftt{unit} -- name of the municipality, region, organization, etc., holding the PB process
\item \texttt{subunit} -- name of the sub-jurisdiction or category within which the preferences are aggregated and funds are allocated
\begin{itemize}
\item \textit{Example}: in Paris, there are 21 PBs -- a city-wide budget and 20 district-wide budgets. For the city-wide budget, \texttt{unit} is Paris, and \texttt{subunit} is undefined, while for the district-wide budgets, \texttt{unit} is also Paris, and \texttt{subunit} is the name of the district (e.g., IIIe arrondissement).
\item \textit{Example}: before 2019, in Warsaw there have been district-wide and neighborhood-wide PBs. For all of them, \texttt{unit} is Warsaw, while \texttt{subunit} is the name of the district for district-wide budgets, and the name of the neighborhood for neighborhood-wide budgets. To associate neighborhoods with districts (if desired), an additional property \texttt{district} can be used.
\item \textit{Example}: assume that in a given city, there are distinct PBs for each of $n>1$ categories (environmental projects, transportation projects, etc.). For all of them, \texttt{unit} is the city name, while \texttt{subunit} is the name of the category.
\end{itemize}
\item \bftt{instance} -- a unique identifier of the specific edition of the PB process (year, edition number, etc.) used by the organizers to identify that edition; note that \texttt{instance} will not necessarily correspond to the year in which the vote is actually held, as some organizers identify the edition by the fiscal year in which the PB projects are to be carried out
\item \bftt{num\_projects}
\item \bftt{num\_votes}
\item \bftt{budget} -- the total amount of funds to be allocated
\item \bftt{vote\_type}
\begin{itemize}
\item \texttt{approval} -- each vote is a vector of Boolean values, $\mathbf{v} \in \mathbb{B}^{|P|}$, where $P$ is the set of all projects,
\item \texttt{ordinal} -- each vote is a permutation of a subset $S \subseteq P$ such that $|S| \in [\mathtt{min\_length}, \mathtt{max\_length}]$, corresponding to a strict preference ordering,
\item \texttt{cumulative} -- each vote is a vector $\mathbf{v} \in \mathbb{R}_{+}^{|P|}$ such that ${\lVert\mathbf{v}\rVert}_{1} \le \mathtt{max\_sum\_points} \in \mathbb{R}_{+}$,
\item \texttt{scoring} -- each vote is a vector $\mathbf{v} \in I^{|P|}$, where $I \subseteq \mathbb{R}$.
\end{itemize}
\item \bftt{rule}
\begin{itemize}
\item \texttt{greedy} -- projects are ordered decreasingly by the value of the aggregation function (i.e., the total score), and are funded until funds are exhausted or there are no more projects (a minimal sketch of one reading of this rule is given after the format description below)
\item other rules will be defined in future versions
\end{itemize}
\item \texttt{date\_begin} -- the date on which voting starts
\item \texttt{date\_end} -- the date on which voting ends
\item \texttt{language} -- language of the description texts (i.e., full project names)
\item \texttt{edition}
\item \texttt{district}
\item \texttt{comment}
\item if \texttt{vote\_type} = \texttt{approval}:
\begin{itemize}
\item \texttt{min\_length} [default: 1]
\item \texttt{max\_length} [default: num\_projects]
\item \texttt{min\_sum\_cost} [default: 0]
\item \texttt{max\_sum\_cost} [default: $\infty$]
\end{itemize}
\item if \texttt{vote\_type} = \texttt{ordinal}:
\begin{itemize}
\item \texttt{min\_length} [default: 1]
\item \texttt{max\_length} [default: num\_projects]
\item \texttt{scoring\_fn} [default: Borda]
\end{itemize}
\item if \texttt{vote\_type} = \texttt{cumulative}:
\begin{itemize}
\item \texttt{min\_length} [default: 1]
\item \texttt{max\_length} [default: num\_projects]
\item \texttt{min\_points} [default: 0]
\item \texttt{max\_points} [default: max\_sum\_points]
\item \texttt{min\_sum\_points} [default: 0]
\item \bftt{max\_sum\_points}
\end{itemize}
\item if \texttt{vote\_type} = \texttt{scoring}:
\begin{itemize}
\item \texttt{min\_length} [default: 1]
\item \texttt{max\_length} [default: num\_projects]
\item \texttt{min\_points} [default: $-\infty$]
\item \texttt{max\_points} [default: $\infty$]
\item \texttt{default\_score} [default: 0]
\end{itemize}
\item \texttt{non-standard fields}
\end{itemize}
\item \bftt{value}
\end{itemize}
\subsection{Section 2: PROJECTS}
\begin{itemize}
\item \bftt{project\_id}
\item \bftt{cost}
\item \texttt{name} -- full project name
\item \texttt{category} -- for example: education, sport, health, culture, environmental protection, public space, public transit and roads
\item \texttt{target} -- for example: adults, seniors, children, youth, people with disabilities, families with children, animals
\item \texttt{non-standard fields}
\end{itemize}
\subsection{Section 3: VOTES}
\begin{itemize}
\item \bftt{voter\_id}
\item \texttt{age}
\item \texttt{sex}
\item \texttt{voting\_method} (e.g., paper, Internet, mail)
\item if \texttt{vote\_type} = \texttt{approval}:
\begin{itemize}
\item \bftt{vote} -- ids of the approved projects, separated by commas.
\end{itemize}
\item if \texttt{vote\_type} = \texttt{ordinal}:
\begin{itemize}
\item \bftt{vote} -- ids of the selected projects, from the most preferred one to the least preferred one, separated by commas.
\end{itemize}
\item if \texttt{vote\_type} = \texttt{cumulative}:
\begin{itemize}
\item \bftt{vote} -- project ids, in the decreasing order induced by \texttt{points}, separated by commas; projects not listed are assumed to be awarded $0$ points.
\item \bftt{points} -- points assigned to the selected projects, listed in the same order as project ids in \bftt{vote}.
\end{itemize}
\item if \texttt{vote\_type} = \texttt{scoring}:
\begin{itemize}
\item \bftt{vote} -- project ids, in the decreasing order induced by \texttt{points}, separated by commas; projects not listed are assumed to be awarded \texttt{default\_score} points.
\item \bftt{points} -- points assigned to the selected projects, listed in the same order as project ids in \bftt{vote}.
\end{itemize}
\item \texttt{non-standard fields}
\end{itemize}
\section{Outlook}
We have introduced the PArticipatory BUdgeting LIBrary (Pabulib; available at \url{http://pabulib.org}), and have described the \texttt{.pb} file format that is used in it.
We hope that Pabulib will foster meaningful research on PB, in particular helping the computational social choice community offer better aggregation methods to be used in real-world instances of PB.
\section*{Acknowledgement}
Nimrod Talmon has been supported by the Israel Science Foundation (grant No. 630/19). Dariusz Stolicki and Stanis\l aw Szufa have been supported under the Polish Ministry of Science and Higher Education grant no. 0395/DLG/2018/10.
\bibliographystyle{plain}
\section{Introduction}
Any deformation of a Weyl or Clifford algebra can be realized
through a change of generators in the undeformed algebra
\cite{mathias,ducloux}. ``$q$-Deformations''
of Weyl or Clifford algebras that were covariant under the action of
a simple Lie algebra $\mbox{\bf g\,}$ are characterized by their being
covariant under the action of the quantum group $U_h\mbox{\bf g\,}$, where $q=e^h$.
Here we briefly summarize our systematic construction procedure
\cite{fiojmp,fiocmp} of
all the possible corresponding changes of generators, together with
the corresponding realizations of the $U_h\mbox{\bf g\,}$-action.
This paves the way \cite{fiojmp} for a physical
interpretation of deformed
generators as ``composite operators'', functions of the
undeformed ones. For instance, if the latter act as
creators and annihilators on
a bosonic or fermionic Fock space, then the former would act as
creators and annihilators of
some sort of ``dressed states'' in the same space.
Since there exists \cite{fiocmp} a
basis of $\mbox{\bf g\,}$-invariants that depend on the undeformed generators in a
non-polynomial way, but on the deformed ones in a polynomial way,
these changes of generators might be employed
to simplify the dynamics of some $\mbox{\bf g\,}$-covariant quantum physical systems
based on some complicated $\mbox{\bf g\,}$-invariant Hamiltonian.
Let us list the essential ingredients of our construction procedure:
\begin{enumerate}
\item \mbox{\bf g\,}, a simple Lie algebra.
\item The cocommutative Hopf algebra $H\equiv(U\mbox{\bf g\,},\cdot,\Delta,
\varepsilon,S)$ associated to
$U\mbox{\bf g\,}$; $\cdot,\Delta,\varepsilon,S$ denote the product,
coproduct, counit, antipode.
\item The quantum group \cite{dr2} $H_h\equiv(U_h\mbox{\bf g\,},\bullet,\Delta_h,
\varepsilon_h,S_h,\mbox{$\cal R$\,})$.
\item An algebra isomorphism\cite{dr3}
$\varphi_h:U_h\mbox{\bf g\,}\rightarrow U\mbox{\bf g\,}[[h]]$,
$\varphi_h\circ\bullet=\cdot\circ(\varphi_h\otimes \varphi_h)$.
\item A corresponding Drinfel'd twist\cite{dr3}
$\mbox{$\cal F$}\equiv\mbox{$\cal F$}^{(1)}\!\otimes\!\mbox{$\cal F$}^{(2)}\!=\!{\bf 1}^{\otimes^2}\!\!
+\!O(h)\in U\mbox{\bf g\,}\![[h]]^{\otimes^2}$:
\[
(\varepsilon\otimes \mbox{id})\mbox{$\cal F$}={\bf 1}=(\mbox{id}\otimes \varepsilon)\mbox{$\cal F$},
\qquad\: \:\Delta_h(a)=(\varphi_h^{-1}\otimes \varphi_h^{-1})\big
\{\mbox{$\cal F$}\Delta[\varphi_h(a)]\mbox{$\cal F$}^{-1}\big\}.
\]
\item $\gamma':=\mbox{$\cal F$}^{(2)}\cdot S\mbox{$\cal F$}^{(1)}$ and
$\gamma:=S\mbox{$\cal F$}^{-1(1)}\cdot \mbox{$\cal F$}^{-1(2)}$.
\item The generators $a^+_i,a^i$ of an ordinary Weyl
or Clifford algebra $\mbox{$\cal A$}$.
\item The action $\triangleright:U\mbox{\bf g\,}\times\mbox{$\cal A$}\rightarrow \mbox{$\cal A$}$;
$\mbox{$\cal A$}$ is a left module algebra under $\triangleright$.
\item The
representation $\rho$ of \mbox{\bf g\,} to which $a^+_i,a^i$ belong:
\[
x\triangleright a^+_i=\rho(x)^j_ia^+_j\qquad\qquad
x\triangleright a^i=\rho(Sx)_j^ia^j.
\]
\item The Jordan-Schwinger algebra homomorphism
$\sigma:U\mbox{\bf g\,}\rightarrow\mbox{$\cal A$}[[h]]$:
\[
\sigma(x):=
\rho(x)^i_ja^+_ia^j\qquad\mbox{if~}x\in\mbox{\bf g\,}\qquad\qquad\qquad
\sigma(yz)=\sigma(y)\sigma(z)
\]
\item The generators $\tilde A^+_i,\tilde A^i$ of a deformed Weyl
or Clifford algebra $\mbox{${\cal A}_h$}$.
\item The action $\triangleright_h:U_h\mbox{\bf g\,}\times\mbox{${\cal A}_h$}\rightarrow \mbox{${\cal A}_h$}$; $\mbox{${\cal A}_h$}$ is a
left module algebra under $\triangleright_h$.
\item The representation $\rho_h=\rho\circ \varphi_h$
of $U_h\mbox{\bf g\,}$ to which $\tilde A^+_i,\tilde A^i$ belong:
\[
X\triangleright_h \tilde A^+_i=\rho_h(X)^j_i\tilde A^+_j\qquad\qquad
X\triangleright_h \tilde A^i=\rho_h(S_h X)_j^i\tilde A^j.
\]
\item $*$-structures $*,*_h,\star,\star_h$ in $H,H_h,\mbox{$\cal A$},\mbox{${\cal A}_h$}$, if any.
\end{enumerate}
\section{Constructing the deformed generators}
\label{con}
\begin{prop}\cite{fiojmp}
One can realize the quantum group action $\triangleright_h$ on $\mbox{$\cal A$}[[h]]$ by setting
for any $X\in U_h\mbox{\bf g\,}$ and $\beta \in\mbox{$\cal A$}[[h]]$
(with $X_{(\bar 1)}\otimes X_{(\bar 2)}:=\Delta_h(X)$)
\begin{equation}
X\triangleright_h \beta := \sigma[\varphi_h(X_{(\bar 1)})]\,\beta\,
\sigma[\varphi_h(S_hX_{(\bar 2)})].
\end{equation}
\end{prop}
\begin{prop}\cite{fiojmp,fiocmp}
For any \mbox{\bf g\,}-invariants $u,v\in\mbox{$\cal A$}[[h]]$ the elements of $\mbox{$\cal A$}[[h]]$
\begin{equation}
\begin{array}{lll}
A_i^+ &:= & u\,\sigma(\mbox{$\cal F$}^{(1)})\,a_i^+\,
\sigma(S\mbox{$\cal F$}^{(2)}\gamma)u^{-1} \\
A^i &:= &v\,\sigma(\gamma'S\mbox{$\cal F$}^{-1(2)})\,a^i\,
\sigma(\mbox{$\cal F$}^{-1(1)})v^{-1}
\end{array}
\end{equation}
transform under $\triangleright_h$ as $\tilde A^+_i,\tilde A^i$.
\end{prop}
A suitable choice of $uv^{-1}$ may make $A^+_i,A^j$ fulfil also the
QCR of $\mbox{${\cal A}_h$}$ \cite{fiocmp}. In particular we have shown the
\begin{prop}\cite{fiocmp}
If $\rho$ is the defining representation of \mbox{\bf g\,},
$A^+_i,A^j$ fulfil the corresponding QCR provided
\begin{equation}
\begin{array}{llll}
uv^{-1} & = & \frac{\Gamma(n\!+\!1)}{\Gamma_{q^2}(n\!+\!1)}\qquad
\qquad & \mbox{\rm if ~}\mbox{\bf g\,}=sl(N) \cr
uv^{-1} & = & \frac{\Gamma[\frac 12(n\!+\!1\!+\!\frac N2-l)]
\Gamma[\frac 12(n\!+\!1\!+\!\frac N2\!+\!l)]}{\Gamma_{q^2}
[\frac 12(n\!+\!1\!+\!\frac N2\!+\!l)]
\Gamma_{q^2}[\frac 12(n\!+\!1\!+\!\frac N2-l)]}
\qquad\qquad & \mbox{\rm if ~}\mbox{\bf g\,}=so(N),
\end{array}
\nonumber
\end{equation}
where $\Gamma,\Gamma_{q^2}$ are Euler's gamma-function and its
$q$-deformation,
$n:=a^+_ia^i$, $l:=\sqrt{\sigma({\cal C}_{so(N)})}$, and
${\cal C}_{so(N)}$ is the quadratic Casimir of $so(N)$.
\end{prop}
If $A^+_i,A^j$ fulfil the QCR, then also
\begin{equation}
A^+_{i,\alpha}:=\alpha \,A^+_i\, \alpha^{-1}\qquad\qquad
A^{i,\alpha}:=\alpha \,A^i\, \alpha^{-1}
\end{equation}
will do, for any $\alpha\in\mbox{$\cal A$}[[h]]$ of the form $\alpha={\bf 1}+O(h)$.
By cohomological arguments one can prove
that there are no more elements in $\mbox{$\cal A$}[[h]]$ which do \cite{fiocmp}.
$A^+_{i,\alpha},A^{i,\alpha}$ transform as
$\tilde A^+_i,\tilde A^i$ under the following modified realization of
$\triangleright_h$:
\begin{equation}
X\triangleright_h^{\alpha} \beta := \alpha \sigma[\varphi_h(X_{(\bar 1)})]
\alpha^{-1}\,\beta\,
\alpha \sigma[\varphi_h(S_hX_{(\bar 2)})]\alpha^{-1}.
\end{equation}
The algebra homomorphism
$f_{\alpha}:\mbox{${\cal A}_h$}\rightarrow \mbox{$\cal A$}[[h]]$ such that
$f_{\alpha}(\tilde A^+_i)=A^+_{i,\alpha}$ and
$f_{\alpha}(\tilde A^i)=A^{i,\alpha}$ is what is usually
called a ``$q$-deforming map''.
For a compact section of $U\mbox{\bf g\,}$ one can choose a unitary \mbox{$\cal F$},
$\mbox{$\cal F$}^{*\otimes *}=\mbox{$\cal F$}^{-1}$. Then the $U\mbox{\bf g\,}$-covariant $*$-structure
$(a^i)^{\star}=a^+_i$ in $\mbox{$\cal A$}$
is also $U_h\mbox{\bf g\,}$-covariant in $\mbox{$\cal A$}[[h]]$ and
has the form
$(A^{i,\alpha})^{\star}=A^+_{i,\alpha}$, provided we choose
$u=v^{-1}$ and $\alpha^{\star}=\alpha^{-1}$. More formally,
under this assumption $\star\circ f_{\alpha}=f_{\alpha}\circ \star_h$,
with $\star_h$ defined by $(\tilde A^i)^{\star_h}=\tilde A^+_i$.
If $H_h$ is instead a {\it triangular} deformation
of $U\mbox{\bf g\,}$, the previous construction can be equally performed
and leads essentially to the same results \cite{fiojmp},
provided we choose
in the previous formulae $u\equiv v\equiv {\bf 1}$. This follows
from the triviality of the coassociator \cite{dr3}, that
characterizes triangular deformations $H_h$.
\section*{Acknowledgments}
It is a pleasure to thank J.\ Wess for
his stimulus, support and
warm hospitality at his Institute.
This work was supported through a TMR fellowship
granted by the European Commission, Dir. Gen. XII for Science,
Research and Development, under the contract ERBFMBICT960921.
\section*{References}
\section{Introduction}
For all terms related to digraphs which are not defined below, see Bang-Jensen and Gutin \cite{Bang_Jensen_Gutin}.
In this paper,
by a {\it directed graph} (or simply {\it digraph)}
$D$ we mean a pair $(V,A)$, where
$V=V(D)$ is the set of vertices and $A=A(D)\subseteq V\times V$ is the set of arcs.
For an arc $(u,v)$, the first vertex $u$ is called its {\it tail} and the second
vertex $v$ is called its {\it head}; we also denote such an arc by $u\to v$.
If $(u,v)$ is an arc, we call $v$ an {\it out-neighbor} of $u$, and $u$ an {\it in-neighbor} of $v$.
The number of out-neighbors of $u$ is called the {\it out-degree} of $u$, and the number of in-neighbors of $u$ --- the {\it in-degree} of $u$.
For an integer $k\ge 2$, a {\it walk} $W$ {\it from} $x_1$ {\it to} $x_k$ in $D$ is an alternating sequence
$W = x_1 a_1 x_2 a_2 x_3\dots x_{k-1}a_{k-1}x_k$ of vertices $x_i\in V$ and arcs $a_j\in A$
such that the tail of $a_i$ is $x_i$ and the head of $a_i$ is $x_{i+1}$ for every
$i$, $1\le i\le k-1$.
Whenever the labels of the arcs of a walk are not important, we use the notation
$x_1\to x_2 \to \dotsb \to x_k$ for the walk, and say that we have an $x_1x_k$-walk.
In a digraph $D$, a vertex $y$ is {\it reachable} from a vertex $x$ if $D$ has a walk from $x$ to $y$. In
particular, a vertex is reachable from itself. A digraph $D$ is {\it strongly connected}
(or, just {\it strong}) if, for every pair $x,y$ of distinct vertices in $D$,
$y$ is reachable from $x$ and $x$ is reachable from $y$.
A {\it strong component} of a digraph $D$ is a maximal induced subdigraph of $D$ that is strong.
If $x$ and $y$ are vertices of a digraph $D$, then the
{\it distance from x to y} in $D$, denoted $\dist(x,y)$, is the minimum length of
an $xy$-walk, if $y$ is reachable from $x$, and otherwise $\dist(x,y) = \infty$.
The {\it distance from a set $X$ to a set $Y$} of vertices in $D$ is
\[
\dist(X,Y) = \max
\{
\dist(x,y) \colon x\in X,y\in Y
\}.
\]
The {\it diameter} of $D$ is $\diam(D) = \dist(V,V)$.
Let $p$ be a prime, $e$ a positive integer, and $q = p^e$. Let
$\fq$ denote the finite field of $q$ elements, and $\fq^*=\fq\setminus\{0\}$.
Let $\fq^2$
denote the Cartesian product $\fq \times \fq$, and let
$f\colon\fq^2\to\fq$ be an arbitrary function. We define a digraph $D = D(q;f)$ as follows:
$V(D)=\fq^{2}$, and
there is an arc from a vertex ${\bf x} = (x_1,x_2)$ to a vertex
${\bf y} = (y_1,y_{2})$ if and only if
\[
x_2 + y_2 = f(x_1,y_1).
\]
If $({\bf x},{\bf y})$ is an arc in $D$, then ${\bf y}$ is uniquely determined by ${\bf x}$ and $y_1$, and ${\bf x}$ is uniquely determined by ${\bf y}$ and $x_1$.
Hence, each vertex of $D$ has both its in-degree and out-degree equal to $q$.
By Lagrange's interpolation,
$f$ can be uniquely represented by
a bivariate polynomial of degree at most $q-1$ in each of the variables. If ${f}(x,y) = x^m y^n$, $1\le m,n\le q-1$, we call $D$ a {\it monomial} digraph, and denote it also by $D(q;m,n)$. Digraph $D(3; 1,2)$ is depicted in Fig.\ $1.1$. It is clear, that ${\bf x}\to {\bf y}$ in $D(q;m,n)$ if and only if ${\bf y}\to {\bf x}$ in $D(q;n,m)$. Hence, one digraph is obtained from the other by reversing the direction of every arc. In general, these digraphs are not isomorphic, but if one of them is strong then so is the other and their diameters are equal. As this paper is concerned only with the diameter of $D(q;m,n)$, it is sufficient to assume that $1\le m\le n\le q-1$.
\begin{figure}
\begin{center}
\begin{tikzpicture}
\tikzset{vertex/.style = {shape=circle,draw,inner sep=2pt,minimum size=.5em, scale = 1.0},font=\sffamily\scriptsize\bfseries}
\tikzset{edge/.style = {->,> = triangle 45}}
\node[vertex] (a) at (0,0) {$(0,2)$};
\node[vertex] (b) at (4,0) {$(1,1)$};
\node[vertex] (c) at (8,0) {$(1,0)$};
\node[vertex] (d) at (0,-4) {$(0,1)$};
\node[vertex] (e) at (4,-4) {$(2,2)$};
\node[vertex] (f) at (8,-4) {$(2,0)$};
\node[vertex] (g) at (4,-1.5) {$(2,1)$};
\node[vertex] (h) at (4,-2.5) {$(1,2)$};
\node[vertex] (i) at (8,-2) {$(0,0)$};
\draw[edge] (a) to (b);
\draw[edge] (b) to (a);
\draw[edge] (a) to (d);
\draw[edge] (d) to (a);
\draw[edge] (b) to (c);
\draw[edge] (c) to (b);
\draw[edge] (g) to (b);
\draw[edge] (h) to (e);
\draw[edge] (d) to (e);
\draw[edge] (e) to (d);
\draw[edge] (e) to (f);
\draw[edge] (f) to (e);
\draw[edge] (c) to (i);
\draw[edge] (i) to (c);
\draw[edge] (f) to (i);
\draw[edge] (i) to (f);
\draw[edge] (g) to (a);
\draw[edge] (a) to (g);
\draw[edge] (c) to (g);
\draw[edge] (d) to (h);
\draw[edge] (h) to (d);
\draw[edge] (f) to (h);
\path
(g) edge [->,>={triangle 45[flex,sep=-1pt]},loop,out=330,in=300,looseness=8] node {} (g);
\path
(h) edge [->,>={triangle 45[flex,sep=-1pt]},loop,out=160,in=130,looseness=8] node {} (h);
\path
(i) edge [->,>={triangle 45[flex,sep=-1pt]},loop,out=210,in=170,looseness=8] node {} (i);
\end{tikzpicture}
\caption{The digraph $D(3;1,2)$: $x_2+y_2 = x_1y_1^2$.}
\end{center}
\end{figure}
The digraphs $D(q; {f})$
and $D(q;m,n)$ are directed analogues
of
some algebraically defined graphs, which have been studied extensively
and have many applications. See
Lazebnik and Woldar \cite{LazWol01} and references therein; for some
subsequent work see Viglione \cite{Viglione_thesis},
Lazebnik and Mubayi \cite{Lazebnik_Mubayi},
Lazebnik and Viglione \cite{Lazebnik_Viglione},
Lazebnik and Verstra\"ete \cite{Lazebnik_Verstraete},
Lazebnik and Thomason \cite{Lazebnik_Thomason},
Dmytrenko, Lazebnik and Viglione \cite{DLV05},
Dmytrenko, Lazebnik and Williford \cite{DLW07},
Ustimenko \cite{Ust07}, Viglione \cite{VigDiam08},
Terlep and Williford \cite{TerWil12}, Kronenthal \cite{Kron12},
Cioab\u{a}, Lazebnik and Li \cite{CLL14},
Kodess \cite{Kod14},
and Kodess and Lazebnik \cite{Kod_Laz_15}.
The questions of strong connectivity of digraphs $D(q;{f})$ and $D(q; m,n)$ and descriptions of their components were completely answered in
\cite{Kod_Laz_15}. Determining the diameter of a component of $D(q;{f})$ for an arbitrary prime power $q$ and an arbitrary $f$ seems to be out of reach, and most of our results below are concerned with some instances of this problem for strong monomial digraphs. The following theorems are the main results of this paper.
\begin{theorem}
\label{main}
Let $p$ be a prime, $e,m,n$ be positive integers, $q=p^e$, $1\le m\le n\le q-1$, and $D_q= D(q;m,n)$. Then the following statements hold.
\begin{enumerate}
\item\label{gen_lower_bound} If $D_q$ is strong, then $\diam (D_q)\ge 3$.
\item\label{gen_upper_bound}
If $D_q$ is strong, then
\begin{itemize}
\item for $e = 2$, $\diam(D_q)\le 96\sqrt{n+1}+1$;
\item for $e \ge 3$, $\diam(D_q)\le 60\sqrt{n+1}+1$.
\end{itemize}
\item\label{diam_le_4} If $\gcd(m,q-1)=1$ or $\gcd(n,q-1)=1$, then $\diam(D_q)\le 4$.
If $\gcd(m,q-1) = \gcd(n,q-1) = 1$, then $\diam(D_q) = 3$.
\item \label{main3} If $p$ does not divide $n$, and $q > (n^2-n+1)^2$,
then $\diam(D(q;1,n)) = 3$.
\item If $D_q$ is strong, then:
\begin{enumerate}
\item[(a)\label{bound_q_le25}]
If $q > n^2$, then $\diam(D_q) \le 49$.
\item[(b)\label{bound_q_m4n4}]
If $q > (m-1)^4$, then $\diam(D_q)\le 13$.
\item[(c)]\label{bound_q_le6} If $q > (n-1)^4$, then $\diam(D(q;n,n))\le 9$.
\end{enumerate}
\end{enumerate}
\end{theorem}
\begin{remark}
The converse to either of the statements in part (\ref{diam_le_4}) of Theorem \ref{main} is not true. Consider, for instance,
$D(9;2,2)$ of diameter $4$, or $D(29;7,12)$ of diameter $3$.
\end{remark}
\begin{remark}
The result of part \ref{bound_q_le25}a can hold for some $q\le m^2$.
\end{remark}
For prime $q$, some of the results of Theorem \ref{main} can be strengthened.
\begin{theorem}
\label{thm_diam_p}
Let $p$ be a prime, $1\le m \le n\le p-1$, and $D_p= D(p;m,n)$. Then $D_p$ is strong and the following statements hold.
\begin{enumerate}
\item\label{diam_bound_p}
$\diam (D_p) \le 2p-1$ with equality if
and only if
$m=n=p-1$.
\item\label{bound_p_sqrt60}
If $(m,n)\not\in\{((p-1)/2,(p-1)/2),((p-1)/2,p-1), (p-1,p-1)\}$,
then $\diam(D_p)\le 120\sqrt{m}+1$.
\item\label{bound_p_le10}
If $p > (m-1)^3$,
then $\diam(D_p) \le 19$.
\end{enumerate}
\end{theorem}
The paper is organized as follows. In section \ref{preres} we present all results which are needed for our proofs of Theorems \ref{main} and \ref{thm_diam_p} in sections \ref{proofs1} and \ref{proofs2}, respectively. Section \ref{open} contains concluding remarks and open problems.
\section{Preliminary results.}\label{preres}
We begin with a general result that gives necessary and sufficient conditions for a digraph $D(q;m,n)$ to be strong.
\begin{theorem} {\rm [\cite{Kod_Laz_15}, Theorem 2]}
\label{thm_conn}
$D(q;m,n)$ is strong if and only if $\gcd(q-1,m,n)$ is not divisible by any
$q_d = (q-1)/(p^{d}-1)$ for any positive divisor $d$ of $e$, $d < e$.
In particular, $D(p;m,n)$ is strong for any $m,n$.
\end{theorem}
Every walk of length $k$ in $D = D(q; m,n)$ originating at $(a,{b})$ is of the form
\begin{align}
(a, b) &\to (x_1,- b + a^m x_1^n)\nonumber\\
&\to (x_2, b - a^m x_1^n + x_1^m x_2^n)\nonumber\\
&\to \cdots \nonumber\\
&\to(x_k, x_{k-1}^m x_k^n- x_{k-2}^m x_{k-1}^n+\cdots +(-1)^{k-1} a^m x_1^n+(-1)^k b)\nonumber.
\end{align}
Therefore, in order to prove that $\diam(D)\le k$, one can show that for any choice of $a,b,u,v\in\fq$, there exists $(x_1,\dotso,x_k)\in\fq^k$ so that
\begin{equation}
\label{eqn:walk_length_k}
(u,v) = (x_k, x_{k-1}^m x_k^n- \cdots +(-1)^{k-1} a^m x_1^n+(-1)^k b).
\end{equation}
In order to show that $\diam(D)\ge l$, one can show that there exist $a,b,u,v\in~\fq$ such that
(\ref{eqn:walk_length_k}) has no solution in $\fq^k$ for any $k < l$.
\bigskip
\subsection{
Waring's Problem
}
In order to obtain an upper bound on $\diam(D(q; m,n))$ we will use some results concerning Waring's problem over finite fields.
Waring's number $\gamma(r,q)$ over $\fq$ is defined as the smallest positive integer $s$ (should it exist) such that the equation
\[
x_1^r + x_2^r + \dotsb + x_s^r = a
\]
has a solution $(x_1,\dotso,x_s)\in\fq^s$ for any $a\in\fq$.
Similarly, $\delta(r,q)$ is defined as the smallest positive integer $s$ (should it exist) such that
for any $a\in\fq$, there exists $(\epsilon_1,\dotso,\epsilon_s)$,
each $\epsilon_i\in\{-1,1\}\subseteq\mathbb{F}_q$,
for which the equation
\[
\epsilon_1 x_1^r + \epsilon_2 x_2^r + \dotsb + \epsilon_s x_s^r = a
\]
has a solution $(x_1,\dotso,x_s)\in\fq^s$.
It is easy to argue that $\delta(r,q)$ exists if and only if
$\gamma(r,q)$ exists, and in this case $\delta(r,q)\le \gamma(r,q)$.
A criterion on the existence of $\gamma(r,q)$ is the following theorem by Bhashkaran \cite{Bhashkaran_1966}.
\begin{theorem} {\rm [\cite{Bhashkaran_1966}, Theorem G]}
\label{thm:waring_exist}
Waring's number $\gamma(r,q)$ exists if and only if $r$ is not divisible by any $q_d
= (q-1)/(p^{d}-1)$ for any positive divisor $d$ of $e$, $d < e$.
\end{theorem}
The study of various bounds on $\gamma(r,q)$ has drawn considerable attention. We will use the following two upper bounds on Waring's number due to J.~Cipra \cite{Cipra_2009}.
\begin{theorem}{\rm [\cite{Cipra_2009}, Theorem 4]}
\label{thm:waring_bound}
If $e = 2$ and $\gamma(r,q)$ exists,
then $\gamma(r,q)\le 16\sqrt{r+1}$. Also, if
$e \ge 3$ and $\gamma(r,q)$ exists,
then $\gamma(r,q)\le 10\sqrt{r+1}$.
\end{theorem}
\begin{cor} {\rm [\cite{Cipra_2009}, Corollary 7]}
\label{thm:diam_le_8}
If $\gamma(r,q)$ exists and $r < \sqrt{q}$, then $\gamma(r,q)\le 8$.
\end{cor}
For the case $q = p$, the following bound will be of interest.
\begin{theorem}{\rm [Cochrane, Pinner \cite{Cochrane_Pinner_2008}, Corollary 10.3]}
\label{thm:Cochrane_Pinner}
If $|\{x^k\colon x\in\mathbb{F}_p^\ast\}|>2$, then $\delta(k,p)\le 20\sqrt{k}$.
\end{theorem}
The next two statements concerning very strong bounds on Waring's number in large fields follow from the work of Weil \cite{Weil}, and Hua and Vandiver \cite{Hua_Vandiver}.
\begin{theorem}{\rm [Small \cite{Small_1977}]}
\label{thm:waring_Small_estimates}
If $q > (k-1)^4$, then $\gamma(k,q) \le 2$.
\end{theorem}
\begin{theorem} {\rm [Cipra \cite{Cipra_thesis}, p.~4]}
\label{thm:waring_small_estimates}
If $ p > (k-1)^3$, then $\gamma(k,p)\le 3$.
\end{theorem}
For a survey on Waring's number over finite fields, see Castro and Rubio (Section 7.3.4, p.~211),
and Ostafe and Winterhof (Section 6.3.2.3, p.~175)
in Mullen and Panario \cite{Handbook2013}. See also Cipra \cite{Cipra_thesis}.
We will need the following technical lemma.
\begin{lemma}
\label{lemma:alt}
Let $\delta = \delta(r,q)$ exist, and $k \ge 2\delta$.
Then for every $a\in\fq$ the equation
\begin{equation}
\label{eqn:lemma_alt}
x_1^r - x_2^r + x_3^r - \dotsb + (-1)^{k+1} x_k^r = a
\end{equation}
has a solution $(x_1,\dotso,x_k)\in\fq^k$.
\end{lemma}
\begin{proof}
Let $a\in\fq$ be arbitrary. There exist $\varepsilon_1,\dotso,\varepsilon_\delta$, each
$\varepsilon_i\in\{-1,1\}\subseteq \fq$, such that
the equation
$\sum_{i=1}^{\delta} \varepsilon_i y_i^r = a$ has a solution
$(y_1,\dotso,y_{\delta})\in\fq^{\delta}$.
As $k \ge 2\delta$, the alternating sequence
$1,-1,1,\dotso,(-1)^{k+1}$ with $k$ terms contains the sequence
$\varepsilon_1,\dotso,\varepsilon_\delta$ as a subsequence.
Let the indices of this subsequence be
$j_1,j_2,\dotso,j_{\delta}$.
For each $l$, $1\le l\le k$, let
$x_l = 0$ if $l\neq j_i$ for any $i$, and
$x_l = y_i$ for $l = j_i$. Then $(x_1,\dotso,x_k)$ is a solution of
(\ref{eqn:lemma_alt}).
\end{proof}
\subsection{The Hasse-Weil bound}
In the next section we will use
the Hasse-Weil bound,
which provides
a bound on the number of $\fq$-points on a plane non-singular absolutely irreducible projective curve over a finite field $\fq$.
If the number of points on the curve $C$ of genus $g$ over the
finite field $\fq$ is $|C(\fq)|$, then
\begin{equation}
\label{hasse_weil_bound}
||C(\fq)| - q -1|
\le
2g\sqrt{q}.
\end{equation}
It is also known that for a non-singular curve
defined by a homogeneous polynomial of degree $k$, $g= (k-1)(k-2)/2$. Discussion of all related notions and a proof of this result can be found in
Hirschfeld, Korchm\'{a}ros, Torres \cite{Hirschfeld} (Theorem 9.18, p.~343) or in Sz\H{o}nyi \cite{Szonyi1997} (p.~197).
\section{Proof of Theorem \ref{main}} \label{proofs1}
\noindent {\bf (\ref{gen_lower_bound}).}
As there is a loop at $(0,0)$, and there are arcs between $(0,0)$ and $(x,0)$ in either direction, for every $x\in \fq^*$, the number of vertices in $D_q$ which are at distance at most 2 from $(0,0)$ is
at most $1+ (q-1)+(q-1)^2 < q^2$. Thus, there are vertices in $D_q$ which are at distance
at least 3 from $(0,0)$, and so $\diam(D_q)\ge 3$.
\bigskip
\noindent {\bf (\ref{gen_upper_bound}).}
As $D_q$ is strong, by Theorem \ref{thm_conn},
for any positive divisor $d$ of $e$, $d<e$,
$q_d\centernot\mid\gcd (p^e-1, m,n)$. As, clearly, $q_d\,|\,(p^e-1)$, either $q_d\centernot\mid m$ or $q_d\centernot\mid n$. This implies by Theorem \ref{thm:waring_exist} that either $\gamma(m,q)$ or $\gamma(n,q)$ exists.
Let $(a,b)$ and $(u,v)$ be arbitrary vertices of $D_q$. By (\ref{eqn:walk_length_k}), there exists a walk of length at most $k$ from $(a,b)$ to $(u,v)$ if the equation
\begin{equation}
\label{eqn:main}
v = x_{k-1}^m u^n- x_{k-2}^m x_{k-1}^n+\cdots +(-1)^{k-1} a^m
x_1^n+(-1)^k b
\end{equation}
has a solution $(x_1,\ldots, x_k)\in \fq^k$.
Assume first that $\gamma_m = \gamma(m,q)$ exists.
Taking $k=6\gamma_m + 1$,
and $x_i = 0$ for $i\equiv 1 \mod 3$, and $x_i = 1$ for $i\equiv 0\mod 3$, we have that (\ref{eqn:main}) is equivalent to
\[
-x_{k-2}^m + x_{k-5}^m -\cdots +(-1)^k x_5^m + (-1)^{k-1}x_2^m = v-(-1)^k b-u^n.
\]
As the number of terms on the left is $(k-1)/3 = 2 \gamma_m$, this equation has a solution in $\fq^{2\gamma_m}$ by Lemma \ref{lemma:alt}.
Hence, (\ref{eqn:main}) has a solution in $\fq^{k}$.
If $\gamma_n = \gamma(n,q)$ exists, then the argument is similar: take $k = 6\gamma_n+1$, $x_i = 0$ for $i\equiv 0 \mod 3$, and $x_i = 1$ for $i\equiv 1\mod 3$.
The result now follows from the bounds on $\gamma(r,q)$ in Theorem \ref{thm:waring_bound}.
\begin{remark}
As $m\le n$, if $\gamma(m,q)$ exists, the upper bounds in Theorem~\ref{main},
part {\bf (\ref{gen_upper_bound})}, can be improved by replacing $n$ by $m$. Also, if a better upper bound on $\delta(m,q)$ than $\gamma(m,q)$ (respectively, on $\delta(n,q)$ than $\gamma(n,q)$) is known,
the upper bounds in Theorem~\ref{main}, {\bf (\ref{gen_upper_bound})},
can be further improved: use $k = 6\delta(m,q)+1$ (respectively, $k = 6\delta(n,q)+1$) in the proof. Similar comments apply to other parts
of Theorem \ref{main} as well as Theorem \ref{thm_diam_p}.
\end{remark}
\bigskip
\noindent {\bf (\ref{diam_le_4}).}
Recall the basic fact $\gcd(r,q-1)=1 \Leftrightarrow \{x^r\colon x \in\fq\} = \fq$.
Let $k=4$. If $\gcd(m,q-1) = 1$, a solution to (\ref{eqn:walk_length_k}) of the form $(0,x_2,1,u)$ is seen to exist for any choice of $a,b,u,v\in\fq$. If $\gcd(n,q-1) = 1$, there exists a solution of the form $(1,x_2,0,u)$. Hence, $\diam (D_q) \le 4$.
Let $k=3$, and $\gcd(m,q-1) = \gcd(n,q-1) = 1$. If $a=0$, then a solution to (\ref{eqn:walk_length_k}) of the form $(x_1,1,u)$ exists. If $a\neq 0$, a solution of the form $(x_1,0,u)$ exists. Hence, $D_q$ is strong and $\diam (D_q) \le 3$. Using the lower bound from part {\bf (\ref{gen_lower_bound})}, we conclude that $\diam (D_q) = 3$.
\bigskip
\noindent {\bf (\ref{main3}).} As was shown in part \ref{diam_le_4}, for any $n$,
$\diam(D(q; 1,n))\le 4$. If, additionally, $\gcd(n,q-1) = 1$, then $\diam(D(q; 1,n)) = 3$.
It turns out that if $p$ does not divide $n$, then only for finitely many $q$ is the diameter of $D(q;1,n)$ actually 4.
For $k=3$, (\ref{eqn:walk_length_k}) is equivalent to
\begin{equation}
\label{eqn:proof_hasse}
(u,v) = (x_3,x_2 x_3^n-x_1 x_2^n + a x_1^n-b),
\end{equation}
which has solution $(x_1,x_2,x_3) = (0,u^{-n}(b+v),u)$, provided $u\neq 0$.
Suppose now that $u = 0$. Aside from the trivial case $a = 0$, the question of the existence of a solution to (\ref{eqn:proof_hasse}) shall be resolved if we prove that the equation
\begin{equation}
\label{eqn:surj}
a x^n - x y^n + c = 0
\end{equation}
has a solution for any $a, c\in\fq^*$ (for $c=0$, (\ref{eqn:surj}) has solutions).
The projective curve corresponding to this equation is the zero locus of the homogeneous polynomial
\[
F(X,Y,Z) = aX^n Z - X Y^n + c Z^{n+1}.
\]
It is easy to see that, provided $p$ does not divide $n$,
\[
F=F_X=F_Y=F_Z =0 \;\; \Leftrightarrow \;\; X=Y=Z=0,
\]
and thus the curve has no singularities and is absolutely irreducible.
Counting the two points $[1:0:0]$ and $[0:1:0]$ on the line at infinity $Z = 0$, we obtain from (\ref{hasse_weil_bound}) the inequality
$N\ge q-1-2g\sqrt{q}$, where $N=N(c)$ is the number of solutions of (\ref{eqn:surj}). As $g= n(n-1)/2$,
solving the inequality $q-1-n(n-1)\sqrt{q}>0$ for $q$, we obtain a lower bound on $q$ for which $N \ge 1$. Indeed, writing $t=\sqrt{q}$ and $t_0=n^2-n+1$, one has $t_0^2-n(n-1)t_0-1=t_0-1=n^2-n\ge 0$, so the inequality holds for every $t>t_0$, i.e.~$q > (n^2-n+1)^2$ suffices.
\bigskip
\noindent{\bf (\ref{bound_q_le25}a).}
The result follows from Corollary \ref{thm:diam_le_8} by an argument similar to that of the proof of part {\bf (\ref{gen_upper_bound})}.
\bigskip
\noindent {\bf (\ref{bound_q_m4n4}b).}
For $k=13$, (\ref{eqn:walk_length_k}) is equivalent to
\[
(u,v)
=
(x_{13},
-b + a^m x_1^n -x_1^m x_2^n + x_2^m x_3^n -\dotsb - x_{11}^m x_{12}^n + x_{12}^m x_{13}^n).
\]
If $q > (m-1)^4$, set $x_1 = x_4 = x_7 = x_{10} = 0$,
$x_3 = x_6 = x_9 = x_{12} = 1$. Then
$v - u^n + b = -x_{11}^m + x_8^m - x_5^m + x_2^m$, which has a solution $(x_2,x_5,x_8,x_{11})\in\fq^4$ by Theorem \ref{thm:waring_Small_estimates} and Lemma \ref{lemma:alt}.
\bigskip
\noindent {\bf (\ref{bound_q_le6}c).}
For $k=9$, (\ref{eqn:walk_length_k}) is equivalent to
\[
(u,v)
=
(x_9,
-b + a^n x_1^n -x_1^n x_2^n + x_2^n x_3^n -\dotsb - x_7^n x_8^n + x_8^n x_9^n).
\]
If $q > (n-1)^4$, set $x_1 = x_4 = x_5 = x_8 = 0$,
$x_3 = x_7 = 1$. Then
$v + b = x_2^n + x_6^n$, which has a solution $(x_2,x_6)\in\fq^2$ by Theorem \ref{thm:waring_Small_estimates}.
\bigskip
\section{Proofs of Theorem \ref{thm_diam_p}} \label{proofs2}
\begin{lemma}\label{AutoLemma}
Let $D=D(q;m,n)$. Then, for any $\lambda\in\mathbb{F}_q^*$, the function $\phi:V(D) \rightarrow V(D)$ given by $\phi((a,b)) = (\lambda a, \lambda^{m+n} b)$ is
a digraph automorphism of $D$.
\end{lemma}
The proof of the lemma is straightforward. It amounts to showing that $\phi$ is a bijection and that it preserves adjacency: ${\bf x} \to {\bf y}$ if and only if $\phi({\bf x}) \to \phi({\bf y})$. We omit the details. Due to Lemma \ref{AutoLemma}, any walk in $D$ initiated at a vertex $(a,b)$ corresponds to a walk initiated at a vertex $(0,b)$ if $a=0$, or at a vertex $(1,b')$, where $b'= a^{-m-n} b$, if $a\neq 0$. This implies that if we wish to show that $\diam (D_p) \le 2p-1$, it is sufficient to show that the distance from any vertex $(0,b)$ to any other vertex is at most $2p-1$, and that the distance from any vertex $(1,b)$ to any other vertex is at most $2p-1$.
First we note that by Theorem \ref{thm_conn}, $D_p = D(p;m,n)$ is strong for any choice of $m,n$.
For $a\in\mathbb{F}_p$, let integer $\overline{a}$, $0\le \overline{a} \le p-1$, be the representative of the residue class $a$.
It is easy to check that $\diam (D(2; 1,1)) = 3$.
Therefore, for the remainder of the proof, we may assume that $p$ is odd.
\bigskip
\noindent{\bf (\ref{diam_bound_p}).}
In order to show that diam$(D_p) \le 2p-1$, we use (\ref{eqn:walk_length_k}) with $k= 2p-1$, and prove that for any two vertices $(a,b)$ and $(u,v)$ of $D_p$ there
is always a solution $(x_1, \ldots, x_{2p-1})\in \fq^{2p-1}$ of
$$(u,v) = (x_{2p-1}, -b + a^mx_1^n - x_1^mx_2^n + x_2^mx_3^n - \dots -
x_{2p-3}^mx_{2p-2}^n + x_{2p-2}^mx_{2p-1}^n),
$$
or, equivalently, a solution ${\bf x} = (x_1, \ldots, x_{2p-2})\in \fq^{2p-2}$ of
\begin{equation} \label{eq:1}
a^mx_1^n - x_1^mx_2^n + x_2^mx_3^n - \dots -
x_{2p-3}^mx_{2p-2}^n + x_{2p-2}^mu^n = b+v.
\end{equation}
As the upper bound $2p-1$ on the diameter is exact and holds for all $p$, we need a more subtle argument compared to the ones we used before. The only way we can manage this is (unfortunately) by performing a case analysis on $\overline{b+v}$ with a nested case structure. In most of the cases we just exhibit a solution ${\bf x}$ of (\ref{eq:1}) by describing its components $x_i$.
It is always a straightforward verification that ${\bf x}$ satisfies (\ref{eq:1}), and we will suppress our comments as cases proceed.
Our first observation is that if $\overline{b+v} = 0$, then ${\bf x} = (0,\dots, 0)$ is a solution to (\ref{eq:1}).
We may assume now that $\overline{b+v}\ne 0$.\\
\noindent\underline{Case 1.1}: $\overline{b+v}\ge \frac{p-1}{2} + 2$
\noindent
We define the components of ${\bf x}$ as follows:
if $1\le i\le 4(p-(\overline{b+v}))$, then $x_i=0$ for $i\equiv 1,2 \mod{4}$, and $x_i=1$ for $i\equiv 0,3 \mod{4}$;
if $4(p-(\overline{b+v}))< i \le 2p-2$, then $x_i=0$.
Note that $x_i^mx_{i+1}^n = 0$ unless $i\equiv 3 \mod 4$,
in which case $x_i^mx_{i+1}^n = 1$. If we group the terms
in groups of four so that each group is of the form
\[
-x_i^mx_{i+1}^n+x_{i+1}^mx_{i+2}^n-x_{i+2}^mx_{i+3}^n+x_{i+3}^mx_{i+4}^n,
\]
where $i\equiv 1 \mod 4$, then assuming $i$, $i+1$, $i+2$, $i+3$, and $i+4$ are within the range of
$1\le i<i+4 \le 4(p-(\overline{b+v}))$, it is easily seen that each group contributes
$-1$ to
\[
a^mx_1^n - x_1^mx_2^n + x_2^mx_3^n - \dots - x_{2p-3}^mx_{2p-2}^n
+ x_{2p-2}^mx_{2p-1}^n.
\]
There are $\frac{4(p-(\overline{b+v}))}{4} = p-(\overline{b+v})$ such
groups, and so the solution provided adds $-1$ exactly
$p-(\overline{b+v})$ times.
Hence, ${\bf x}$ is a solution to (\ref{eq:1}).
\medskip
For the remainder of the proof, solutions to (\ref{eq:1}) will
be given without justification, as the justification is similar
to what has been done above.
\vspace{5mm}
\noindent\underline{Case 1.2}: $\overline{b+v}\le \frac{p-1}{2}$
\noindent We define the components of ${\bf x}$ as follows:
if $1\le i\le 4(\overline{b+v})-1$, then $x_i=0$ for $i\equiv 0,1 \mod{4}$, and $x_i=1$ for $i\equiv 2, 3 \mod{4}$;
if $4(\overline{b+v})-1< i \le 2p-2 $, then $x_i=0$.
\vspace{5mm}
\noindent\underline{Case 1.3}: $\overline{b+v}= \frac{p-1}{2}+1$
This case requires several nested subcases.
\vspace{3mm}
\underline{Case 1.3.1}: $u=x_{2p-1}=0$
Here, there is no need to restrict $x_{2p-2}$ to be
$0$. The components of a solution ${\bf x}$ of (\ref{eq:1}) are defined as:
if $1\le i \le 2p-2$, then $x_i=0$ for $i\equiv 1,2 \mod{4}$, and $x_i=1$ for $i\equiv 0,3 \mod{4}$.
\vspace{3mm}
\underline{Case 1.3.2}: $a=0$
Here, there is no need to restrict $x_1$ to be 0. Therefore, the components of a solution ${\bf x}$ of (\ref{eq:1})
are defined as:
if $1\le i\le 2p-2$, then $x_i=0$ for $i\equiv 0,3 \mod{4}$, and $x_i=1$ for $i\equiv 1, 2 \mod{4}$.
\vspace{5mm}
\underline{Case 1.3.3}: $u\ne 0$ and $a\ne 0$
Because of Lemma \ref{AutoLemma}, we may assume without loss of generality that $a=1$.
Let $x_{2p-2} = 1$, so that $x_{2p-2}^mu^n=u^n\ne 0$ and let $t=\overline{b+v-u^n}$. Note that
$t\ne\frac{p-1}{2}+1$.
\vspace{3mm}
\underline{Case 1.3.3.1}: $t=0$
The components of a solution ${\bf x}$ of (\ref{eq:1}) are defined as: $x_{2p-2} = 1$, and
if $1\le i < 2p-2 $, then $x_i=0$.
\vspace{3mm}
\underline{Case 1.3.3.2}: $0< t\le \frac{p-1}{2}$
The components of a solution ${\bf x}$ of (\ref{eq:1}) are defined as: $x_{2p-2} = 1$, and
if $1\le i\le 4(t-1)+1$, then $x_i=0$ for $i\equiv 2,3 \mod{4}$, and $x_i=1$ for $i\equiv 0,1 \mod{4}$;
if $4(t-1)+1< i < 2p-2 $, then $x_i=0$.
\vspace{3mm}
\underline{Case 1.3.3.3}: $t\ge \frac{p-1}{2}+2$
The components of a solution ${\bf x}$ of (\ref{eq:1}) are defined as: $x_{2p-2} = 1$, and
if $1\le i\le 4(p-t)$, then $x_i=0$ for $i\equiv 1,2 \mod{4}$, and $x_i=1$ for $i\equiv 0,3 \mod{4}$;
if $4(p-t)< i < 2p-2 $, then $x_i=0$.\\
The whole range of possible values of $\overline{b+v}$ has been checked. Hence, $\diam(D)\le 2p-1$.\\
\bigskip
We now show that if $\diam(D)=2p-1$, then $m=n=p-1$. To do so, we assume
that $m\ne p-1$ or $n\ne p-1$ and prove the contrapositive. Specifically, we show that $\diam(D)\le 2p-2<2p-1$ by
again using (\ref{eqn:walk_length_k}) but with $k= 2p-2$. We prove that for any two vertices $(a,b)$ and $(u,v)$ of $D_p$ there
is always a solution $(x_1, \ldots, x_{2p-2})\in \fq^{2p-2}$ of
$$(u,v) = (x_{2p-2}, b - a^mx_1^n + x_1^mx_2^n - \dots -
x_{2p-4}^mx_{2p-3}^n + x_{2p-3}^mx_{2p-2}^n),
$$
or, equivalently, a solution ${\bf x} = (x_1, \ldots, x_{2p-3})\in \fq^{2p-3}$ of
\begin{equation} \label{eq:2}
-a^mx_1^n + x_1^mx_2^n - x_2^mx_3^n + \dots -
x_{2p-4}^mx_{2p-3}^n + x_{2p-3}^mu^n = -b+v.
\end{equation}
We perform a case analysis on $\overline{-b+v}$.
\vspace{5mm}
Our first observation is that if $\overline{-b+v} = 0$, then ${\bf x} = (0,\dots, 0)$ is a solution to (\ref{eq:2}). We may
assume for the remainder of the proof that $\overline{-b+v}\ne 0$.
\vspace{3mm}
\noindent\underline{Case 2.1}: $\overline{-b+v}\le \frac{p-1}{2}-1$
\noindent We define the components of ${\bf x}$ as follows:
if $1\le i\le 4(\overline{-b+v})$, then $x_i=0$ for $i\equiv 1,2 \mod{4}$, and $x_i=1$ for $i\equiv 0, 3 \mod{4}$;
if $4(\overline{-b+v})< i \le 2p-3 $, then $x_i=0$.
\vspace{3mm}
\noindent\underline{Case 2.2}: $\overline{-b+v}\ge \frac{p-1}{2}+2$
\noindent We define the components of ${\bf x}$ as follows:
if $1\le i\le 4(p-(\overline{-b+v}))-1$, then $x_i=0$ for $i\equiv 0,1 \mod{4}$, and $x_i=1$ for $i\equiv 2, 3 \mod{4}$;
if $4(p-(\overline{-b+v}))-1< i \le 2p-3 $, then $x_i=0$.
\vspace{3mm}
\noindent\underline{Case 2.3}: $\overline{-b+v}= \frac{p-1}{2}$
\underline{Case 2.3.1}: $a=0$
We define the components of ${\bf x}$ as:
if $1\le i\le 2p-3$, then $x_i=0$ for $i\equiv 0,3 \mod{4}$, and $x_i=1$ for $i\equiv 1, 2 \mod{4}$.
\vspace{3mm}
\underline{Case 2.3.2}: $a\ne 0$
Here, we may assume without loss of generality that $a=1$ by Lemma \ref{AutoLemma}.
\vspace{3mm}
\underline{Case 2.3.2.1}: $n\ne p-1$
If $n\ne p-1$, then there exists $\beta\in\mathbb{F}_p^*$ such that $\beta^n\not\in\{0,1\}$. For such a $\beta$,
let $x_1=\beta$ and consider $t=\overline{-b+v+a^mx_1^n}=\overline{-b+v+\beta^n}\not\in\{\frac{p-1}{2}, \frac{p-1}{2}+1 \}$.
\vspace{3mm}
\underline{Case 2.3.2.1.1}: $t=0$
\noindent We define the components of ${\bf x}$ as: $x_1=\beta$ and
if $2\le i \le 2p-3 $, then $x_i=0$.
\vspace{3mm}
\underline{Case 2.3.2.1.2}: $t\le \frac{p-1}{2}-1$
\noindent We define the components of ${\bf x}$ as: $x_1=\beta$ and
if $2\le i\le 4t$, then $x_i=0$ for $i\equiv 1,2 \mod{4}$, and $x_i=1$ for $i\equiv 0, 3 \mod{4}$;
if $4t< i \le 2p-3 $, then $x_i=0$.
\vspace{3mm}
\underline{Case 2.3.2.1.3}: $t\ge \frac{p-1}{2}+2$
\noindent We define the components of ${\bf x}$ as: $x_1=\beta$ and
if $2\le i\le 4(p-t)+1$, then $x_i=0$ for $i\equiv 2,3 \mod{4}$, and $x_i=1$ for $i\equiv 0, 1 \mod{4}$;
if $4(p-t)+1< i \le 2p-3 $, then $x_i=0$.
\vspace{3mm}
\underline{Case 2.3.2.2}: $n=p-1$
\vspace{3mm}
\underline{Case 2.3.2.2.1}: $u\in\mathbb{F}_p^*$
Here, we have that $u^n=1$, so that the components of a solution ${\bf x}$ of (\ref{eq:2}) are defined as:
if $1\le i\le 2p-3$, then $x_i=0$ for $i\equiv 1,2 \mod{4}$, and $x_i=1$ for $i\equiv 0, 3 \mod{4}$.
\vspace{3mm}
\underline{Case 2.3.2.2.2}: $u=0$
Since $n=p-1$, it must be the case that $m\ne p-1$, so that there exists $\alpha\in\mathbb{F}_p^*$ such that $\alpha^m\not\in\{0,1\}$.
For such an $\alpha$, let $x_2=\alpha, x_3=1$ and consider $t=\overline{-b+v+x_2^mx_3^n}=\overline{-b+v+\alpha^m}
\not\in\{\frac{p-1}{2}, \frac{p-1}{2}+1 \}$.
\vspace{3mm}
\underline{Case 2.3.2.2.2.1}: $t=0$
\noindent We define the components of ${\bf x}$ as: $x_1=0, x_2=\alpha, x_3=1$ and
if $4 \le i \le 2p-3 $, then $x_i=0$.
\vspace{3mm}
\underline{Case 2.3.2.2.2.2}: $t\le \frac{p-1}{2}-1$
\noindent We define the components of ${\bf x}$ as: $x_1=0, x_2=\alpha, x_3=1$ and
if $4\le i\le 4t$, then $x_i=0$ for $i\equiv 1,2 \mod{4}$, and $x_i=1$ for $i\equiv 0, 3 \mod{4}$;
if $4t< i \le 2p-3 $, then $x_i=0$.
\vspace{3mm}
\underline{Case 2.3.2.2.2.3}: $t\ge \frac{p-1}{2}+2$
\noindent We define the components of ${\bf x}$ as: $x_1=0, x_2=\alpha, x_3=1$ and
if $4\le i\le 4(p-t)+3$, then $x_i=0$ for $i\equiv 0,1 \mod{4}$, and $x_i=1$ for $i\equiv 2, 3 \mod{4}$;
if $4(p-t)+3< i \le 2p-3 $, then $x_i=0$.
\vspace{3mm}
\noindent\underline{Case 2.4}: $\overline{-b+v}= \frac{p-1}{2}+1$
\vspace{3mm}
\underline{Case 2.4.1}: $u=0$
We define the components of ${\bf x}$ as:
if $1\le i\le 2p-3$, then $x_i=0$ for $i\equiv 0,1 \mod{4}$, and $x_i=1$ for $i\equiv 2, 3 \mod{4}$.
\vspace{3mm}
\underline{Case 2.4.2}: $u\ne 0$
Here, we may assume without loss of generality that $u=1$ by Lemma \ref{AutoLemma}.
\vspace{3mm}
\underline{Case 2.4.2.1}: $m\ne p-1$
If $m\ne p-1$, then there exists $\alpha\in\mathbb{F}_p^*$ such that $\alpha^m\not\in\{0,1\}$. For such an $\alpha$,
let $x_{2p-3}=\alpha$ and consider $t=\overline{-b+v-x_{2p-3}^mu^n}=\overline{-b+v-\alpha^m}\not\in\{\frac{p-1}{2}, \frac{p-1}{2}+1 \}$.
\vspace{3mm}
\underline{Case 2.4.2.1.1}: $t=0$
\noindent We define the components of ${\bf x}$ as: $x_{2p-3}=\alpha$ and
if $1 \le i \le 2p-4 $, then $x_i=0$.
\vspace{3mm}
\underline{Case 2.4.2.1.2}: $t\le \frac{p-1}{2}-1$
\noindent We define the components of ${\bf x}$ as: $x_{2p-3}=\alpha$ and
if $1\le i\le 4t$, then $x_i=0$ for $i\equiv 1,2 \mod{4}$, and $x_i=1$ for $i\equiv 0, 3 \mod{4}$;
if $4t< i \le 2p-4 $, then $x_i=0$.
\vspace{3mm}
\underline{Case 2.4.2.1.3}: $t\ge \frac{p-1}{2}+2$
\noindent We define the components of ${\bf x}$ as: $x_{2p-3}=\alpha$ and
if $1\le i\le 4(p-t)-1$, then $x_i=0$ for $i\equiv 0,1 \mod{4}$, and $x_i=1$ for $i\equiv 2, 3 \mod{4}$;
if $4(p-t)-1< i \le 2p-4 $, then $x_i=0$.
\vspace{3mm}
\underline{Case 2.4.2.2}: $m=p-1$
\vspace{3mm}
\underline{Case 2.4.2.2.1}: $a\in\mathbb{F}_p^*$
Here, we have that $a^m=1$, so that the components of a solution ${\bf x}$ of (\ref{eq:2}) are defined as:
if $1\le i\le 2p-5$, then $x_i=0$ for $i\equiv 2,3 \mod{4}$, and $x_i=1$ for $i\equiv 0, 1 \mod{4}$.
\vspace{3mm}
\underline{Case 2.4.2.2.2}: $a=0$
Since $m=p-1$, it must be the case that $n\ne p-1$, so that there exists $\beta\in\mathbb{F}_p^*$ such that $\beta^n\not\in\{0,1\}$.
For such a $\beta$, let $x_{2p-5}=1, x_{2p-4}=\beta$ and consider $t=\overline{-b+v-x_{2p-5}^mx_{2p-4}^n}=\overline{-b+v-\beta^n}
\not\in\{\frac{p-1}{2}, \frac{p-1}{2}+1 \}$.
\vspace{3mm}
\underline{Case 2.4.2.2.2.1}: $t=0$
\noindent We define the components of ${\bf x}$ as: $x_{2p-5}=1, x_{2p-4}=\beta, x_{2p-3}=0$ and
if $1\le i \le 2p-6 $, then $x_i=0$.
\vspace{3mm}
\underline{Case 2.4.2.2.2.2}: $t\le \frac{p-1}{2}-1$
\noindent We define the components of ${\bf x}$ as: $x_{2p-5}=1, x_{2p-4}=\beta, x_{2p-3}=0$ and
if $1\le i\le 4t-2$, then $x_i=0$ for $i\equiv 0,3 \mod{4}$, and $x_i=1$ for $i\equiv 1, 2 \mod{4}$;
if $4t-2< i \le 2p-6 $, then $x_i=0$.
\vspace{3mm}
\underline{Case 2.4.2.2.2.3}: $t\ge \frac{p-1}{2}+2$
\noindent We define the components of ${\bf x}$ as: $x_{2p-5}=1, x_{2p-4}=\beta, x_{2p-3}=0$ and
if $1\le i\le 4(p-t)-1$, then $x_i=0$ for $i\equiv 0,1 \mod{4}$, and $x_i=1$ for $i\equiv 2, 3 \mod{4}$;
if $4(p-t)-1< i \le 2p-6 $, then $x_i=0$.\\
All cases have been checked, so if $m\ne p-1$ or $n\ne p-1$, then $\diam(D) < 2p-1$.
\vspace{5mm}
We now prove that if $m=n=p-1$, then $d:= \diam (D(p;m,n))=2p-1$.
In order to do this, we explicitly describe the structure of the digraph $D(p;p-1,p-1)$,
from which the diameter becomes clear. In this description, we
look at sets of vertices of a given distance from the vertex $(0,0)$, and show that some of them are at distance $2p-1$.
We recall the following important general properties of our digraphs that will be used in the proof.
\begin{itemize}
\item Every out-neighbor $(u,v)$ of a vertex $(a,b)$ of $D(q;m,n)$ is completely determined by its first component $u$.
\item Every vertex of $D(q;m,n)$ has out-degree and in-degree equal to $q$.
\item In $D(q; m,m)$, ${\bf x}\to {\bf y}$ if and only if
${\bf y}\to {\bf x}$.
\end{itemize}
In $D(p;p-1,p-1)$, we have that $(x_1, y_1)\to
(x_2, y_2)$ if and only if
\[
y_1 + y_2 = x_1^{p-1}x_2^{p-1} = \begin{cases}
0 & \textrm{ if $x_1=0$ or $x_2=0$}, \\
1 & \textrm{ if $x_1$ and $x_2$ are non-zero}. \\
\end{cases}
\]
For notational convenience, we set
\[
(*, a) = \{(x, a): x\in\mathbb{F}_p^*\}
\]
and, for $1\le k\le d$, let
\[
N_k = \{v\in V(D(p;m,n)): \text{dist}((0,0), v) = k \}.
\]
We set $N_0=\{(0,0)\}$.
It is clear from this definition that these $d+1$ sets $N_k$ partition the vertex set of $D(p;p-1,p-1)$; for every $k$, $1\le k\le d-1$, every out-neighbor of a vertex from $N_k$ belongs to $N_{k-1}\cup N_k\cup N_{k+1}$, and $N_{k+1}$ is the set of all out-neighbors of all vertices from
$N_k$ which are not in $N_{k-1}\cup N_k$.
Thus we have $N_0=\{(0,0)\}$, $N_1= (*,0)$, $N_2=(*,1)$, $N_3=\{(0,-1)\}$. If $p>2$, $N_4=\{(0,1)\}$, $N_5=(*,-1)$. As there exist two (opposite) arcs between each vertex of $(*,x)$ and each vertex $(*,-x+1)$, these subsets of vertices induce the complete bipartite subdigraph $\overrightarrow{K}_{p-1,p-1}$ if $x\ne -x+1$, and the complete subdigraph $\overrightarrow{K}_{p-1}$ if $x =-x+1$. Note that our $\overrightarrow{K}_{p-1,p-1}$ has no loops, but $\overrightarrow{K}_{p-1}$ has a loop on every vertex.
The digraph $D(5;4,4)$ is depicted in Fig.~\ref{fig:D544}.
\begin{figure}
\begin{center}
\begin{tikzpicture}
\tikzset{vertex/.style = {shape=circle,draw,inner sep=2pt,minimum size=.5em, scale = 1.0},font=\sffamily\scriptsize\bfseries}
\tikzset{edge/.style = {->,> = stealth'},shorten >=1pt}
\node[vertex,label={[xshift=-0.2cm, yshift=0.0cm]$(0,0)$}] (a) at (0,0) {};
\node[vertex] (b1) at (1,1.5) {};
\node[vertex] (b2) at (1,.5) {};
\node[vertex] (b3) at (1,-.5) {};
\node[vertex,label={[xshift=0.0cm, yshift=-0.8cm]$(\ast,0)$}] (b4) at (1,-1.5) {};
\node[vertex] (c1) at (2,1.5) {};
\node[vertex] (c2) at (2,.5) {};
\node[vertex] (c3) at (2,-.5) {};
\node[vertex,label={[xshift=0.0cm, yshift=-0.8cm]$(\ast,1)$}] (c4) at (2,-1.5) {};
\node[vertex,label={[xshift=0.25cm, yshift=-0.8cm]$(0,-1)$}] (d) at (3,0) {};
\node[vertex,label={[xshift=-0.2cm, yshift=0.0cm]$(0,1)$}] (e) at (4,0) {};
\node[vertex] (f1) at (5,1.5) {};
\node[vertex] (f2) at (5,.5) {};
\node[vertex] (f3) at (5,-.5) {};
\node[vertex,label={[xshift=0.0cm, yshift=-0.8cm]$(\ast,-1)$}] (f4) at (5,-1.5) {};
\node[vertex] (g1) at (6,1.5) {};
\node[vertex] (g2) at (6,.5) {};
\node[vertex] (g3) at (6,-.5) {};
\node[vertex,label={[xshift=0.0cm, yshift=-0.8cm]$(\ast,2)$}] (g4) at (6,-1.5) {};
\node[vertex,label={[xshift=0.25cm, yshift=-0.8cm]$(0,-2)$}] (h) at (7,0) {};
\node[vertex,label={[xshift=-0.3cm, yshift=0.00cm]$(0,2)$}] (i) at (8,0) {};
\node[vertex] (j1) at (9,1.5) {};
\node[vertex] (j2) at (9,.5) {};
\node[vertex] (j3) at (9,-.5) {};
\node[vertex,label={[xshift=0.0cm, yshift=-0.8cm]$(\ast,-2)$}] (j4) at (9,-1.5) {};
\path
(a) edge [->,>={stealth'[flex,sep=-1pt]},loop,out=240,in=270, looseness = 50] node {} (a);
\foreach \x in {b1,b2,b3,b4}
{
\draw [edge] (a) to (\x);
\draw [edge] (\x) to (a);
}
\foreach \x in {b1,b2,b3,b4}
{
\foreach \y in {c1,c2,c3,c4}
{
\draw [edge] (\x) to (\y);
\draw [edge] (\y) to (\x);
}
}
\foreach \x in {c1,c2,c3,c4}
{
\draw [edge] (d) to (\x);
\draw [edge] (\x) to (d);
}
\draw [edge] (d) to (e);
\draw [edge] (e) to (d);
\foreach \x in {f1,f2,f3,f4}
{
\draw [edge] (e) to (\x);
\draw [edge] (\x) to (e);
}
\foreach \x in {f1,f2,f3,f4}
{
\foreach \y in {g1,g2,g3,g4}
{
\draw [edge] (\x) to (\y);
\draw [edge] (\y) to (\x);
}
}
\foreach \x in {g1,g2,g3,g4}
{
\draw [edge] (h) to (\x);
\draw [edge] (\x) to (h);
}
\draw [edge] (h) to (i);
\draw [edge] (i) to (h);
\foreach \x in {j1,j2,j3,j4}
{
\draw [edge] (i) to (\x);
\draw [edge] (\x) to (i);
}
\path
(j1) edge [->,>={stealth'[flex,sep=-1pt]},loop,out=30,in=-20, looseness = 35] node {} (j1);
\path
(j2) edge [->,>={stealth'[flex,sep=-1pt]},loop,out=30,in=-20, looseness = 35] node {} (j2);
\path
(j3) edge [->,>={stealth'[flex,sep=-1pt]},loop,out=30,in=-20, looseness = 35] node {} (j3);
\path
(j4) edge [->,>={stealth'[flex,sep=-1pt]},loop,out=30,in=-20, looseness = 35] node {} (j4);
\path
(j1) edge[bend right,<->,>=stealth'] node [left] {} (j2);
\path
(j1) edge[bend right = 60,<->,>=stealth'] node [left] {} (j3);
\path
(j1) edge[bend right = 320,<->,>=stealth'] node [left] {} (j4);
\path
(j2) edge[bend right,<->,>=stealth'] node [left] {} (j3);
\path
(j2) edge[bend right = 60,<->,>=stealth'] node [left] {} (j4);
\path
(j3) edge[bend right,<->,>=stealth'] node [left] {} (j4);
\end{tikzpicture}
\caption{The digraph $D(5;4,4)$: $x_2+y_2 = x_1^4y_1^4$.}\label{fig:D544}
\end{center}
\end{figure}
The structure of $D(p;p-1,p-1)$ for any other prime $p$ is similar. We can describe it as follows: for each $t\in \{0,1, \ldots, (p-1)/2\}$, let
$$
N_{4{\overline t}} = \{(0, t)\}, \;\;
N_{4{\overline t}+1} = (*, -t),
$$
and for each $t\in \{0,1, \ldots, (p-3)/2\}$, let
$$
N_{4{\overline t}+2} = (*, t+1), \;
N_{4{\overline t}+3} = \{(0, -t-1)\}.
$$
Note that for $0\le {\overline t}<(p-1)/2$, $N_{4{\overline t}+1}\neq N_{4{\overline t}+2}$, and for ${\overline t}=(p-1)/2$, $N_{2p-1}=(*,(p+1)/2)$. Therefore, for $p\ge 3$, $D(p;p-1,p-1)$ contains $(p-1)/2$ induced copies of
$\overrightarrow{K}_{p-1,p-1}$ with partitions $N_{4{\overline t}+1}$ and $N_{4{\overline t}+2}$, and a copy of $\overrightarrow{K}_{p-1}$ induced by $N_{2p-1}$. The proof is a trivial induction on $\overline{t}$. Hence, $\diam (D(p;p-1,p-1)) = 2p-1$. This ends the proof of Theorem~\ref{thm_diam_p}~(\ref{diam_bound_p}).
\bigskip
\noindent{\bf (\ref{bound_p_sqrt60}).}
We follow the argument of the proof of Theorem \ref{main}, part {\bf (\ref{gen_upper_bound})} and use Lemma \ref{lemma:alt}, with $k = 6\delta(m,p)+1$. We note, additionally, that if $m\not\in\{p,(p-1)/2\}$, then $\gcd(m,p-1) < (p-1)/2$, which implies $|\{ x^m \colon x\in\mathbb{F}_p^\ast \} | > 2$. The result then follows from Theorem \ref{thm:Cochrane_Pinner}.
\bigskip
\noindent{\bf (\ref{bound_p_le10}).}
We follow the argument of the proof of Theorem \ref{main}, part {\bf (\ref{bound_q_m4n4}b)} and use Lemma \ref{lemma:alt} and Theorem \ref{thm:waring_small_estimates}.
\medskip
This ends the proof of Theorem~\ref{thm_diam_p}.
\bigskip
\section{Concluding remarks.}\label{open}
Many results in this paper follow the same pattern: if Waring's number $\delta(r,q)$ exists and is bounded above by $\delta$, then one can show that $\diam(D(q;m,n))\le 6\delta + 1$. Determining the exact value of $\delta(r,q)$ is an open problem, and it is likely to be very hard. Also, the upper bound $6\delta +1$ is not exact in general. Out of all partial results concerning $\delta(r,q)$, we used only those which helped us deal with the cases of the diameter of $D(q; m,n)$ that we considered, especially where the diameter was small. We left out applications of all asymptotic bounds on $\delta(r,q)$. Our computer work demonstrates that some upper bounds on the diameter mentioned in this paper are still far from being tight. Here we wish to mention only a few strong patterns that we observed but have not been able to prove so far. We state them as problems.
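The kind of computer work referred to above is easy to reproduce. The following is a minimal sketch in Python (a brute-force computation, not an optimized implementation) that builds $D(p;m,n)$ for a prime $p$ from the arc condition $b+v=a^mu^n$ and computes its diameter by breadth-first search from every vertex; it returns \texttt{None} when the digraph is not strong.
\begin{verbatim}
from collections import deque

def diameter(p, m, n):
    # Vertices of D(p;m,n) are pairs (a,b) over F_p; there is an arc
    # (a,b) -> (u,v) if and only if b + v == a^m * u^n (mod p).
    verts = [(a, b) for a in range(p) for b in range(p)]
    out = {(a, b): [(u, (pow(a, m, p) * pow(u, n, p) - b) % p)
                    for u in range(p)]
           for (a, b) in verts}
    diam = 0
    for s in verts:                      # BFS from every vertex
        dist = {s: 0}
        queue = deque([s])
        while queue:
            x = queue.popleft()
            for y in out[x]:
                if y not in dist:
                    dist[y] = dist[x] + 1
                    queue.append(y)
        if len(dist) < len(verts):
            return None                  # the digraph is not strong
        diam = max(diam, max(dist.values()))
    return diam

# diameter(5, 4, 4) returns 9 = 2*5 - 1, in line with the result above.
\end{verbatim}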
\bigskip
\noindent{\bf Problem 1.}
Let $p$ be prime, $q=p^e$, $e \ge 2$, and suppose $D(q;m,n)$ is strong. Let
$r$ be the largest divisor of $q-1$
not divisible by any
$q_d = (p^e-1)/(q^d-1)$
where $d$ is a positive divisor of $e$ smaller than $e$. Is it true that
\[
\max_{1\le m\le n\le q-1}
\{
\diam(D(q;m,n))
\}
=
\diam(D(q;r,r))?
\]
Find an upper bound on $\diam(D(q;r,r))$ better than the one of
Theorem \ref{main}, part {\bf (\ref{bound_q_le6}c)}.
\bigskip
\noindent{\bf Problem 2.} Is it true that for every prime $p$ and $1\le m \le n$,
$(m,n)\neq (p-1,p-1)$, $\diam (D(p;m,n)) \le (p+3)/2$ with the equality if and only if $(m,n)=((p-1)/2, (p-1)/2)$ or $(m,n)=((p-1)/2, p-1)$?
\bigskip
\noindent{\bf Problem 3.} Is it true that for every prime $p$, $\diam (D(p;m,n))$ takes only one of two consecutive values which are completely determined by $\gcd(p-1, m, n)$?
\section{Acknowledgement}
The authors are thankful to the anonymous referee whose careful reading and thoughtful comments led to a number of significant improvements in the paper.
\section{Introduction}
In the last decade Machine Learning (ML) has been rapidly evolving due to the profound performance improvements that Deep Learning (DL) has ushered. Deep Learning has outperformed previous state-of-the-art methods in many fields of Machine Learning, such as Natural Language Processing (NLP)~\cite{deng2018feature}, image processing~\cite{larsson2018robust} and speech generation~\cite{van2016wavenet}. As the number of new methods incorporating Deep Learning in many scientific fields increases, the proposed solutions begin to span other disciplines where Machine Learning was previously used in a limited capacity. One such example is the quantitative analysis of stock markets, where Machine Learning is used to predict price movements, to forecast the volatility of future prices, or to detect anomalous events in the markets.
In the field of quantitative analysis, the mathematical modelling of the markets has been the de facto approach to model stock price dynamics for trading, market making, hedging, and risk management. By utilizing a time series of values, such as the price fluctuations of financial products being traded in the markets, one can construct statistical models which can assist in the extraction of useful information about the current state of the market and a set of probabilities for possible future states, such as price or volatility changes. Many models, such as the Black-Scholes-Merton model~\cite{black1973pricing}, attempted to mathematically deduce the price of options and can be used to provide useful indications of future price movements.
However, as more market participants started using the same model, the behaviour of prices changed to the point that the model could no longer be taken advantage of. Newer models, such as the stochastic modelling of limit order book dynamics \cite{cont2010stochastic}, the jump-diffusion processes for stock dynamics \cite{bandi2016price} and volatility estimation of market microstructure noise \cite{ait2009estimating}, are attempts to predict multiple aspects of the financial markets. However, such models are designed to be tractable, even at the cost of reliability and accuracy, and thus they do not necessarily fit empirical data very well.
The aforementioned properties put handcrafted models at a disadvantage, since the financial markets very frequently exhibit irrational behaviour, mainly due to the large influence of human activity, which frequently causes these models to fail. Combining Machine Learning models with handcrafted features usually improves the forecasting abilities of such models, by overcoming some of the aforementioned limitations, and improving predictions about various aspects of financial markets. This led many organizations that participate in the Financial Markets, such as Hedge Funds and investment firms, to increasingly use ML models, along with the conventional mathematical models, to make crucial decisions.
Furthermore, the introduction of electronic trading, that also led to the automation of trading operations, has magnified the volume of exchanges, producing a wealth of data. Deep Learning models are perfect candidates for analyzing such amounts of data, since they perform significantly better than conventional Machine Learning methodologies when a large amount of data is available. This is one of the reasons that Deep Learning is starting to have a role in analyzing the data coming from financial exchanges~\cite{kercheval2015modelling, tsantekidis2017using}.
The most detailed type of data that financial exchanges are gathering is the comprehensive logs of every submitted order and event that is happening within their internal matching engine. This log can be used to reconstruct the Limit Order Book (LOB), which is explained further in Section \ref{data-section}. A basic task that can arise from this data is the prediction of future price movements of an asset by examining the current and past supply and demand of Limit Orders. Such comprehensive logs are extremely large, and traditional Machine Learning techniques, such as Support Vector Machines (SVMs) \cite{vapnik1995support}, usually cannot be applied to them out-of-the-box.
Utilizing this kind of data directly with existing Deep Learning methods is also not possible due to their non-stationary nature. Prices fluctuate and suffer from stochastic drift, so in order for them to be effectively utilized by DL methods a preprocessing step is required to generate stationary features from them.
The main contribution of this work is the proposal of a set of stationary features that can be readily extracted from the Limit Order Book. The proposed features are thoroughly evaluated for predicting future mid price movements from large-scale high-frequency Limit Order data using several different Deep Learning models, ranging from simple Multilayer Perceptrons (MLPs) and CNNs to Recurrent Neural Networks (RNNs). We also propose a novel Deep Learning model that combines the feature extraction ability of Convolutional Neural Networks (CNNs) with the Long Short Term Memory (LSTM) networks' power to analyze time series.
In Section~2, related work employing ML models on financial data is briefly presented. Then, the dataset used is described in detail in Section~3. In Section~4 the proposed stationary feature extraction methodology is presented in detail, while in Section~5 the proposed Deep Learning methods are described. In Section~6 the experimental evaluation and comparisons are provided. Finally, conclusions are drawn and future work is discussed in Section~7.
\section{Related Work}
The task of regressing the future movements of financial assets has been the subject of many recent works such as \cite{kazem2013support, hsieh2011forecasting, lei2018wavelet}. Proven models such as GARCH are improved and augmented with machine learning components such as Artificial Neural Networks \cite{michell2018stock}. New hybrid models are employed along with Neural Networks to improve upon previous performance \cite{huang2012hybrid}.
One of the most volatile financial markets is FOREX, the currency market. In \cite{galeshchuk2016neural}, neural networks are used to predict the future exchange rate of major FOREX pairs such as USD/EUR. The model is tested with different prediction steps, ranging from daily to yearly, reaching the conclusion that shorter-term predictions tend to be more accurate. Other financial metrics, such as cash flow prediction, are very closely correlated to price prediction.
In \cite{heaton2016deep}, the authors propose the ``Deep Portfolio Theory'' which applies autoencoders in order to produce optimal portfolios. This approach outperforms several established benchmarks, such as the Biotechnology IBB Index. Likewise in \cite{takeuchi2013applying}, another type of autoencoders, known as Restricted Boltzmann Machine (RBM), is applied to encode the end-of-month prices of stocks. Then, the model is fine-tuned to predict whether the price will move more than the median change and the direction of such movement. This strategy is able to outperform a benchmark momentum strategy in terms of annualized returns.
Another approach is to include data sources outside the financial time series, e.g., \cite{xiong2015deep}, where phrases related to finance, such as ``mortgage'' and ``bankruptcy'' were monitored on the Google trends platform and included as an input to a recurrent neural network along with the daily S\&P 500 market fund prices. The training target is the prediction of the future volatility of the market fund's price. This approach can greatly outperform many benchmark methods, such as the autoregressive GARCH and Lasso techniques.
The surge of DL methods has dramatically improved the performance over many conventional machine learning methods on tasks such as speech recognition \cite{graves2013speech}, image captioning \cite{xu2015show, mao2014deep}, and question answering \cite{zhu2016visual7w}. The most important building blocks of DL are the Convolutional Neural Networks (CNN) \cite{lecun1995convolutional} and the Recurrent Neural Networks (RNNs). Also worth mentioning is the improvement of RNNs with the introduction of Long Short-Term Memory Units (LSTMs) \cite{hochreiter1997long}, which has made the analysis of time series using DL easier and more effective.
Unfortunately, DL methods are prone to overfitting, especially in tasks such as price regression, and many works exist that try to prevent such overfitting \cite{niu2012short, xi2014new}. Some might attribute overfitting to the lack of the huge amounts of data that other tasks, such as image and speech processing, have available to them. A very rich data source for financial forecasting is the Limit Order Book. One of the few applications of ML on high frequency Limit Order Book data is \cite{kercheval2015modelling}, where several handcrafted features are created, including price deltas, bid-ask spreads and price and volume derivatives. An SVM is then trained to predict the direction of future mid price movements using all the handcrafted features. In \cite{tran2017temporal}, a neural network architecture incorporating the idea of bilinear projection, augmented with a temporal attention mechanism, is used to predict the LOB mid price.
Similarly, \cite{ntakaris2018mid, tran2017tensor} utilize Limit Order Book data along with ML methods, such as multilinear methods and smart feature selection, to predict future price movements. In our previous work~\cite{tsantekidis2017forecasting, tsantekidis2017using, passalis2017time} we introduced a large-scale high-frequency Limit Order Book dataset, that is also used in this paper, and we employed three simple DL models, the Convolutional Neural Networks (CNN), the Long-Short Term Memory Recurrent Neural Networks (LSTM RNNs) and the Neural Bag-of-Features (N-BoF) model, to tackle the problem of forecasting the mid price movements. However, these approaches directly used the non-stationary raw Order Book data, making them vulnerable to distribution shifts and harming their ability to generalize on unseen data, as we also experimentally demonstrate in this paper.
To the best of our knowledge this is the first work that proposes a structured approach for extracting stationary price features from the Limit Order Book that can be effectively combined with Deep Learning models. We also provide an extensive evaluation of the proposed methods on a large-scale dataset with more than 4 million events. Also, a powerful model, that combines the CNN feature extraction properties with the LSTM's time series modelling capabilities, is proposed in order to improve the accuracy of predicting the price movement of stocks. The proposed combined model is also compared with the previously introduced methods using the proposed stationary price features.
\section{Limit Order Book Data}
\label{data-section}
In an order-driven financial market, a market participant can place two types of buy/sell orders. By posting a {\em limit order}, a trader promises to buy (sell) a certain amount of an asset at a specified price or less (more). The Limit Order Book comprises the valid limit orders that have not yet been executed or cancelled.
This Limit Order Book (LOB) contains all existing buy and sell orders that have been submitted and are awaiting to be executed. A limit order is placed on the queue at a given price level, where, in the case of standard limit orders, the execution priority at a given price level is dictated by the arrival time (first in, first out). A {\em market order} is an order to immediately buy/sell a certain quantity of the asset at the best available price in the limit order book. If the requested price of a limit order is far from the best prices, it may take a long time for the execution of the limit order, in which case the order can finally be cancelled by the trader. The orders are split between two sides, the bid (buy) and the ask (sell) side. Each side contains the orders sorted by their price, in descending order for the bid side and ascending order for the ask side.
Following the notation used in \cite{cont2010stochastic}, a price grid is defined as $\{\rho^{(1)}(t),\dots,\rho^{(n)}(t)\}$, where $\rho^{(j)}(t) > \rho^{(i)}(t)$ for all $j>i$. The price grid contains all possible prices and each consecutive price level is incremented by a single tick from the previous price level. The state of the order book is a continuous-time process $v(t) \equiv \left(v^{(1)}(t), v^{(2)}(t), \dots, v^{(n)}(t) \right)_{t \geq 0}$, where $|v^{(i)}(t)|$ is the number of outstanding limit orders at price $\rho^{(i)}(t)$, $1 \leq i \leq n$. If $v^{(i)}(t) < 0$, then there are $-v^{(i)}(t)$ bid orders at price $\rho^{(i)}(t)$; if $v^{(i)}(t)>0$, then there are $v^{(i)}(t)$ ask orders at price $\rho^{(i)}(t)$. That is, $v^{(i)}(t) > 0$ refers to ask orders and $v^{(i)}(t) < 0$ to bid orders.
The location of the best ask price in the price grid is defined by:
\[
i_a^{(1)}(t) = \inf\{i = 1, \dots, n\ ;\ v^{(i)}(t)>0 \},
\]
and, correspondingly, the location of the best bid price is defined by:
\[
i_b^{(1)}(t) = \sup\{i = 1, \dots, n\ ;\ v^{(i)}(t)<0 \}.
\]
For simplicity, we denote the best ask and bid prices as $p_a^{(1)}(t) \equiv \rho^{\left(i_a^{(1)}(t) \right)}(t)$ and $p_b^{(1)}(t) \equiv \rho^{\left(i_b^{(1)} (t)\right)}(t)$, respectively. Notice that if there are no ask (bid) orders in the book, the ask (bid) price is not defined.
More generally, given that the $k$th best ask and bid prices exist, their locations are denoted by $i_a^{(k)}(t) \equiv i_a^{(1)}(t) + k-1$ and $i_b^{(k)}(t) \equiv i_b^{(1)}(t) - k+1$, since the $k$th best bid lies below the best bid on the price grid. The $k$th best ask and bid prices are correspondingly denoted by $p_a^{(k)}(t) \equiv \rho^{\left(i_a^{(k)}(t) \right)}(t)$ and $p_b^{(k)}(t) \equiv \rho^{\left(i_b^{(k)}(t) \right)}(t)$, respectively. Correspondingly, we denote the number of outstanding limit orders at the $k$th best ask and bid levels by $\upnu_a^{(k)}(t) \equiv v^{\left(i_a^{(k)}(t)\right)}(t)$ and $\upnu_b^{(k)}(t) \equiv v^{\left(i_b^{(k)}(t)\right)}(t)$, respectively.
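To make the notation concrete, the following minimal NumPy sketch (with hypothetical prices and depths) recovers the best bid/ask locations and the $k$ best levels per side from a signed state vector:
\begin{verbatim}
import numpy as np

# Hypothetical snapshot: signed depth on an increasing price grid
# (negative entries are bids, positive entries are asks).
rho = np.array([99.97, 99.98, 99.99, 100.00, 100.01, 100.02])
v   = np.array([  -40,   -25,   -10,      0,     15,     30])

i_a = np.where(v > 0)[0].min()   # best ask location (inf)
i_b = np.where(v < 0)[0].max()   # best bid location (sup)

best_ask, best_bid = rho[i_a], rho[i_b]
mid_price = (best_ask + best_bid) / 2          # 100.00

k = 2                                          # two best levels per side
ask_prices = rho[i_a:i_a + k]                  # ascending from best ask
bid_prices = rho[i_b - k + 1:i_b + 1][::-1]    # descending from best bid
ask_sizes  = v[i_a:i_a + k]
bid_sizes  = -v[i_b - k + 1:i_b + 1][::-1]
\end{verbatim}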
Limit Order Book data can be used for a variety of tasks, such as the estimation of the future price trend or the regression of useful metrics, like the price volatility. Other possible tasks may include the early prediction of anomalous events, like extreme changes in price, which may indicate manipulation in the markets. These are a few of the many applications which can aid investors in protecting their capital when unfavourable conditions exist in the markets or, in other cases, in taking advantage of them to profit.
Most modern methods that utilize financial time series data employ subsampling techniques, such as the well-known OHLC (Open-High-Low-Close) candles \cite{yang2000drift}, in order to reduce the number of features for each time interval. Although the OHLC candles preserve useful information, such as the market trend and movement ranges within the specified intervals, they remove possibly important microstructure information. Since the LOB is constantly receiving new orders at inconsistent intervals, it is not possible to subsample time-interval features from it in a way that preserves all the information it contains. This problem can be addressed, to some extent, using recurrent neural network architectures, such as LSTMs, that are capable of natively handling inputs of varying size. This allows the data to be utilized fully and directly, without any time interval-based subsampling.
The LOB data used in this work is provided by Nasdaq Nordic and consists of 10 days' worth of LOB events for 5 different Finnish company stocks, namely Kesko Oyj, Outokumpu Oyj, Sampo, Rautaruukki and Wärtsilä Oyj \cite{ntakaris2017benchmark,siikanen2016limit}. The time period of the gathered data spans from the 1st of June 2010 to the 14th of June 2010. Also, note that trading only happens during business days.
The data consists of consecutive snapshots of the LOB state after each state-altering event takes place. Such an event might be an order insertion, execution or cancellation; after it interacts with the LOB and changes its state, a snapshot of the new state is taken. The LOB depth used is $10$ for each side of the Order Book, i.e., the 10 best active orders (each consisting of a price and a volume) per side, adding up to a total of $40$ values per LOB snapshot. In total, the dataset contains about $4.5$ million snapshots that can be used to train and evaluate the proposed models.
In this work the task we aim to accomplish is the prediction of price movements based on current and past changes occurring in the LOB. This problem is formally defined as follows: Let $\mathbf{x}(t) \in \mathbb{R}^q$ denote the feature vector that describes the condition of the LOB at time $t$ for a specific stock, where $q$ is the dimensionality of the corresponding feature vector. The direction of the mid-price of that stock is defined as $l_k(t) \in \{-1, 0, 1\}$, depending on whether the mid price decreased (-1), remained stationary (0) or increased (1) after $k$ LOB events occurred.
The number of orders $k$ is also called the \textit{prediction horizon}. We aim to learn a model $f_k(\mathbf{x}(t))$, where $f_k: \mathbb{R}^{q} \rightarrow \{-1, 0, 1\}$, that predicts the direction $l_{k}(t)$ of the mid-price after $k$ orders.
In the following section, the aforementioned features and labels, as well as the procedure used to calculate them, are explained in depth.
\section{Stationary Feature and Label Extraction}
The raw LOB data cannot be directly used for any ML task without some kind of preprocessing. The order volume values can be gathered for all stocks' LOBs and normalized together, since they are expected to follow the same distribution. However, this is not true for price values, since the value of a stock or asset may fluctuate and increase with time to never before seen levels. This means that the statistics of the price values can change significantly with time, rendering the price time series non-stationary.
Simply normalizing all the price values will not resolve the non-stationarity, since there will always be unseen data that may change the distribution of values to ranges that are not present in the current data. We present two solutions to this problem: one used in past work, where normalization is continuously applied using the statistics of past data, and a new approach that converts the price data to fully stationary values.
\subsection{Input Normalization}
\label{sec:input-normalization}
The most common normalization scheme is standardization (z-score):
\begin{equation}
x_{\text{norm}} = \dfrac{{x} - \bar{x}}{\sigma_{\bar{x}}}
\label{zscore-eq},
\end{equation}
where ${x}$ is a feature to be normalized, $\bar{x}$ is the mean and $\sigma_{\bar{x}}$ is the standard deviation across all samples. Such normalization is separately applied to the order size values and the price values. Using this kind of ``global'' normalization allows the preservation of the different scales between prices of different stocks, which we are trying to avoid. The solution presented in \cite{tsantekidis2017forecasting,tsantekidis2017using} is to use z-score to normalize each stock-day worth of data with the means and standard deviations calculated using the previous day's data of the same stock. This way, a major problem is avoided, namely the distribution shift in stock prices that can be caused by events such as stock splits or by the large changes in price that can happen over longer periods of time.
Unfortunately, this presents another important issue for learning. The differences between the price values at different LOB levels are almost always minuscule. Since all the price levels are normalized using z-score with the same statistics, extracting features at that scale is hard. In this work we propose a novel approach to remedy this problem. Instead of normalizing the raw values of the LOB depth, we modify the price values to be their percentage difference from the current mid price of the Order Book. This removes the non-stationarity from the price values, makes the feature extraction process easier and significantly improves the performance of ML models, as it is also experimentally demonstrated in Section~\ref{sec:experiments}. To compensate for the removal of the price value itself, we add an extra value to each LOB depth sample, which is the percentage change of the mid price since the previous event.
The mid-price is defined as the mid-point between the best bid and the best ask prices at time $t$ by
\begin{equation}
p_m (t) = \dfrac{p_a^{(1)}(t) + p_b^{(1)}(t)}{2}
\label{mid-price-def}.
\end{equation}
Let
\begin{align}
{p'}_a^{(i)}(t) =& \dfrac{p_a^{(i)}(t)}{p_m(t)} - 1, \label{stationary-price-a} \\
{p'}_b^{(i)}(t) =& \dfrac{p_b^{(i)}(t)}{p_m(t)} - 1, \label{stationary-price-b}
\end{align}
and
\begin{equation}
{p'}_m(t) = \dfrac{p_m(t)}{p_m(t-1)} - 1. \label{mid-price-change-def}
\end{equation}
Equations (\ref{stationary-price-a}) and (\ref{stationary-price-b}) serve as static features that represent the proportional difference between the $i$th price and the mid-price at time $t$. Equation (\ref{mid-price-change-def}), on the other hand, serves as a dynamic feature that captures the proportional mid-price movement over time (that is, it represents the asset's return in terms of mid-prices).
We also use the cumulative sum of the sizes of the price levels as a feature, also known as the Total Depth:
\begin{align}
\upnu'^{(k)}_a(t) =& \sum_{i=1}^k{\upnu_a^{(i)}(t)}
\vspace{0.1cm} \label{size-cumsum-a}\\
\upnu'^{(k)}_b(t) =& \sum_{i=1}^k{\upnu_b^{(i)}(t)}
\label{size-cumsum-b}
\end{align}
where $\upnu^{(i)}_a(t)$ is the number of outstanding limit orders at the $i$th best ask price level and $\upnu^{(i)}_b(t)$ is the number of outstanding limit orders at the $i$th best bid price level.
The proposed stationary features are briefly summarized in Table \ref{features-table}. After constructing these three types of stationary features, each of them is separately normalized using standardization (z-score), as described in (\ref{zscore-eq}), and concatenated into a single feature vector $\myvec{x}_t$, where $t$ denotes the time step.
The input used for the time-aware models, such as the CNN, LSTM and CNN-LSTM, is the sequence of vectors $\myvec{X} = \{\myvec{x}_0, \myvec{x}_1, \dots , \myvec{x}_w\}$, where $w$ is the total number of events, each one represented by a different time step of the input. For the models that need all the input in a single vector, such as the SVM and MLP models, the matrix $\myvec{X}$ is flattened into a single vector so it can be used as input to these models.
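For illustration, the construction of the three stationary feature types for a single LOB snapshot can be sketched as follows (a minimal NumPy sketch; it assumes the 10 best prices and sizes per side are already parsed into arrays, and the per-feature-type z-score normalization described above is applied afterwards over the training set):
\begin{verbatim}
import numpy as np

def stationary_features(ask_p, ask_v, bid_p, bid_v, prev_mid):
    """Stationary features for one snapshot of the 10 best LOB levels."""
    mid = (ask_p[0] + bid_p[0]) / 2.0

    # Price level difference: proportional distance of each level
    # to the current mid price.
    ask_diff = ask_p / mid - 1.0
    bid_diff = bid_p / mid - 1.0

    # Mid price change: proportional move since the previous event.
    mid_change = mid / prev_mid - 1.0

    # Depth size cumsum: total depth at each price level.
    ask_depth = np.cumsum(ask_v)
    bid_depth = np.cumsum(bid_v)

    x = np.concatenate([ask_diff, bid_diff, [mid_change],
                        ask_depth, bid_depth])
    return x, mid   # mid is carried over as prev_mid of the next event
\end{verbatim}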
\begin{table}[t]
\caption{Brief description of each proposed stationary feature}
\label{features-table}
\begin{center}
\begin{tabular}{ | c | c|}
\hline
\textbf{Feature} & \textbf{Description} \\
\hline\hline
Price level difference & \parbox[c]{10cm}{\vspace{0.2em}The difference of each price level to the current mid price, see Eq. (\ref{stationary-price-a}),(\ref{stationary-price-b})
\[{p'}^{(i)}(t) = \dfrac{p^{(i)}(t)}{p_m(t)} - 1 \]
} \\
\hline
Mid price change & \parbox[c]{10cm}{\vspace{0.2em} The change of the current mid price to the mid price of the previous time step, see Eq. (\ref{mid-price-change-def}) \\
\[
{p'}_m(t) = \dfrac{p_m(t)}{p_m(t-1)} - 1
\]
} \\
\hline
Depth size cumsum & \parbox[c]{10cm}{ \vspace{0.2em} Total depth at each price level, see Eq. (\ref{size-cumsum-a}), (\ref{size-cumsum-b})
\[
\upnu'^{(k)}(t) = \sum_{i=1}^k{\upnu^{(i)}(t)}
\]
} \\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Labels}
\label{sec:labels}
The proposed models aim to predict the future movements of the mid price. Therefore, the ground truth labels must be appropriately generated to reflect the future mid price movements. Note that the mid price is a ``virtual'' value and no order is guaranteed to be immediately executed if placed at that exact price. However, being able to predict its upward or downward movement provides a good estimate of the price of future orders. A set of discrete choices must be constructed from our data to use as targets for our classification models. The labels describing the movement are denoted by $l_t \in \{-1, 0, 1\}$, where $t$ denotes the time step.
Simply using $p_m(t + k) > p_m(t)$ to determine the upward direction of the mid price would introduce an unmanageable amount of noise, since the smallest change would be registered as an upward or downward movement. To remedy this, in our previous work \cite{tsantekidis2017forecasting, tsantekidis2017using} the noisy changes of the mid price were filtered by employing two averaging filters. One averaging filter was used on a window of size $k$ over the past values of the mid price and another was applied on a future window of size $k$:
\begin{align}
m_b(t) =& \dfrac{1}{k+1} \sum_{i=0}^k p_m(t-i) \label{m-b} \\
m_a(t) =& \dfrac{1}{k} \sum_{i=1}^k p_m(t+i) \label{m-a}
\end{align}
where $p_m(t)$ is the mid price as defined in Equation~(\ref{mid-price-def}).
The label $l_t$, that expresses the direction of price movement at time $t$, is extracted by comparing the previously defined quantities ($m_b$ and $m_a$). However, using the $m_b$ values to create labels for the samples, as in \cite{tsantekidis2017forecasting, tsantekidis2017using}, makes the problem significantly easier and more predictable, due to the slower adaptation of the mean filter values to sudden changes in price. In this work we remedy this issue by replacing $m_b$ with the mid price. Therefore, the labels are redefined as:
\begin{equation}
l_t =
\begin{cases}
\ \ 1, & \text{if } \dfrac{m_a(t)}{p_m(t)} > 1 + \alpha
\vspace{0.2cm}\\
-1, & \text{if } \dfrac{m_a(t)}{p_m(t)} < 1 - \alpha
\vspace{0.2cm}\\
\ \ 0, & \text{otherwise}
\end{cases}
\label{direction-eq}
\end{equation}
where $\alpha$ is the threshold that determines how significant a mid price change $m_a(t)$ must be in order to label the movement as upward or downward. Values that do not satisfy this inequality are considered insignificant and are labeled as having no price movement, or in other words as being ``stationary''. The resulting labels represent the trend to be predicted. This process is applied across all time steps of the dataset to produce labels for all the depth samples.
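A minimal sketch of this labeling step (in NumPy, assuming the mid prices of all events are collected in a single array) is given below:
\begin{verbatim}
import numpy as np

def make_labels(mid, k, alpha):
    """Compare the mean of the next k mid prices with the current one."""
    mid = np.asarray(mid, dtype=float)
    labels = np.zeros(len(mid) - k, dtype=int)
    for t in range(len(mid) - k):
        m_a = mid[t + 1:t + k + 1].mean()   # future window average
        r = m_a / mid[t]
        if r > 1 + alpha:
            labels[t] = 1                   # upward movement
        elif r < 1 - alpha:
            labels[t] = -1                  # downward movement
    return labels                           # 0 means stationary

# e.g. horizon k=100 with alpha=3e-4, as in the experiments below
\end{verbatim}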
\section{Machine Learning Models}
In this section we explain the particular inner workings of the CNN and LSTM models that are used and present how they are combined to form the proposed CNN-LSTM model. The technical details of each model are explained along with the employed optimization procedure.
\begin{figure}
\centering
\includegraphics[scale=0.4]{CNN}
\caption{A visual representation of the evaluated CNN model. Each layer includes the filter input size and the number of filters used.}
\label{fig:cnn-model}
\end{figure}
\subsection{Convolutional Neural Networks}
\label{sec:conv-nets}
Convolutional Neural Networks (CNNs) consist of the sequential application of convolutional and pooling layers, usually followed by some fully connected layers, as shown in Figure~\ref{fig:cnn-model}. Each convolutional layer $i$ is equipped with a set of filters $\mathbf{W}_i \in \mathbb{R} ^{S \times D \times N}$ that is convolved with an input tensor, where $S$ is the number of used filters, $D$ is the {filter size}, and $N$ is the number of the input channels. The input tensor $\mathbf{X} \in \mathbb{R}^{B \times T \times F}$ consists of the temporally ordered features described in Section \ref{sec:input-normalization}, where $B$ is the batch size, $T$ is the number of time steps and $F$ is the number of features per time step.
In this work we leverage the causal padding introduced in \cite{van2016wavenet} to avoid using future information to produce features for the current time step. Using a series of convolutional layers allows for capturing the fine temporal dynamics of the time series as well as correlating temporally distant features. After the last convolutional/pooling layer a set of fully connected layers are used to classify the input time series. The network's output expresses the categorical distribution for the three direction labels (upward, downward and stationary), as described in (\ref{direction-eq}), for each time-step.
We also employ a temporal batching technique, similar to the one used in LSTMs, to increase the computational efficiency and reduce the memory requirements of our experiments when training with CNNs. Given the above described input tensor $\myvec{X}$ and convolution filters $\myvec{W}_i$, the last convolution produces a tensor with dimensions $(B,T,S,N)$, which in most use cases is flattened to a tensor of size $(B, T \times S \times N)$ before being fed to a fully connected layer. Instead, we retain the temporal ordering by only reducing the tensor to dimension $(B, T, S \times N)$. An identical fully connected network with a softmax output is applied to each of the $T$ vectors of size $S \times N$, leading to $T$ different predictions.
Since we are using causal convolutions with ``full'' padding, all the convolutional layers produce the same number of time steps $T$, hence we do not need to worry about aligning labels to the correct time step. Also, the causal convolutions ensure that no information from the future leaks to past time step filters. This technique reduces the receptive field of the employed CNN, but this can be easily remedied by using a greater number of convolutional layers and/or a larger filter size $D$.
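The combination of causal convolutions and temporal batching can be sketched in Keras as follows (a minimal sketch with assumed layer sizes; ReLU is used here instead of PReLU for brevity):
\begin{verbatim}
from tensorflow import keras
from tensorflow.keras import layers

T, F = 300, 42    # assumed window length and features per time step

inp = keras.Input(shape=(T, F))
x = layers.Conv1D(16, 10, padding='causal', activation='relu')(inp)
x = layers.Conv1D(32, 8, padding='causal', activation='relu')(x)
x = layers.Conv1D(32, 6, padding='causal', activation='relu')(x)
# Temporal batching: keep the (T, channels) shape and apply the same
# fully connected classifier to every time step separately.
x = layers.TimeDistributed(layers.Dense(32, activation='relu'))(x)
out = layers.TimeDistributed(layers.Dense(3, activation='softmax'))(x)

model = keras.Model(inp, out)   # one prediction per time step
\end{verbatim}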
\subsection{Long Short Term Memory Recurrent Neural Networks}
One of the most appropriate Neural Network architectures to apply on time series is the Recurrent Neural Network (RNN) architecture. Although powerful in theory, this type of network suffers from the vanishing gradient problem, which makes gradient propagation through a large number of steps impossible. An architecture that was introduced to solve this problem is the Long Short Term Memory (LSTM) network~\cite{hochreiter1997long}. This architecture protects its hidden activation from the decay of unrelated inputs and gradients by using gated functions between its ``transaction'' points. The protected hidden activation is the ``cell state'', which is regulated by said gates in the following manner:
\begin{align}
\myvec{f}_t &= \sigma(\myvec{W}_{xf} \cdot \myvec{x}_t + \myvec{W}_{hf} \cdot \myvec{h}_{t-1} + \myvec{b}_f) \\
\myvec{i}_t &= \sigma(\myvec{W}_{xi} \cdot \myvec{x}_t + \myvec{W}_{hi} \cdot \myvec{h}_{t-1} + \myvec{b}_i) \\
\myvec{c}'_t &= \tanh(\myvec{W}_{hc} \cdot \myvec{h}_{t-1} + \myvec{W}_{xc} \cdot \myvec{x}_t + \myvec{b}_c) \\
\myvec{c}_t &= \myvec{f}_t \cdot \myvec{c}_{t-1} + \myvec{i}_t \cdot \myvec{c}'_t \\
\myvec{o}_t &= \sigma(\myvec{W}_{oc} \cdot \myvec{c}_t + \myvec{W}_{oh} \cdot \myvec{h}_{t-1} + \myvec{b}_o) \\
\myvec{h}_t &= \myvec{o}_t \cdot \sigma(\myvec{c}_t)
\end{align}
where $\myvec{f}_t$, $\myvec{i}_t$ and $\myvec{o}_t$ are the activations of the input, forget and output gates at time-step $t$, which control how much of the input and the previous state will be considered and how much of the cell state will be included in the hidden activation of the network. The protected cell activation at time-step $t$ is denoted by $\myvec{c}_t$, whereas $\myvec{h}_t$ is the activation that will be given to other components of the model. The matrices $\myvec{W}_{xf}, \myvec{W}_{hf}, \myvec{W}_{xi}, \myvec{W}_{hi}, \myvec{W}_{hc}, \myvec{W}_{xc}, \myvec{W}_{oc}, \myvec{W}_{oh}$ are used to denote the weights connecting each of the activations with the current time step inputs and the previous time step activations.
\subsection{Combination of models (CNN-LSTM)}
We also introduce a powerful combination of the two previously described models. The CNN model is applied identically as described in Section \ref{sec:conv-nets}, using causal convolutions and temporal batching to produce a set of features for each time step. In essence, the CNN acts as the feature extractor of the LOB depth time series, producing a new time series of features with the same length as the original one, whose time steps correspond one-to-one to those of the input.
An LSTM layer is then applied on the time series produced by the CNN, and in turn produces a label for each time step. This works in a very similar way to the fully connected layer described in Section~\ref{sec:conv-nets} for temporal batching, but, unlike the fully connected layer, the LSTM allows the model to incorporate the features from past steps. The model architecture is visualized in Figure~\ref{fig:cnnlstm}.
\subsection{Optimization}
\label{sec:optimization}
The parameters of the models are learned by minimizing the categorical cross entropy loss defined as:
\begin{equation}
\mathcal{L}(\myvec{W}) = -\sum_{i=1}^{L} y_i \cdot \log \hat{y}_i,
\end{equation}
where $L$ is the number of different labels and the notation $\myvec{W}$ is used to refer to the parameters of the models. The ground truth vector is denoted by $\mathbf{y}$, while $\hat{\mathbf{y}}$ is the predicted label distribution. The loss is summed over all samples in each batch. Due to the unavoidable class imbalance of this type of dataset, a weighted loss is employed to improve the mean recall and precision across all classes:
\begin{equation}
\label{eq:loss}
\mathcal{L}(\myvec{W}) = -\sum_{i=1}^{L} c_{y_i} \cdot y_i \cdot \log \hat{y}_i,
\end{equation}
where $c_{y_i}$ is the assigned weight for the class of $y_i$. The individual weight $c_i$ assigned to each class $i$ is calculated as:
\begin{equation}
c_i = \dfrac{|\mathcal{D}|}{n \cdot |\mathcal{D}_i|},
\end{equation}
where $ |\mathcal{D}| $ is the total number of samples in our dataset $\mathcal{D}$, $n$ is the total number of classes (which in our case is 3) and $\mathcal{D}_i$ is the set of samples from our dataset that have been labeled as belonging to class $i$.
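As a minimal sketch (assuming the labels $\{-1,0,1\}$ have already been mapped to the indices $\{0,1,2\}$), the weights can be computed as:
\begin{verbatim}
import numpy as np

def class_weights(labels, n_classes=3):
    """c_i = |D| / (n * |D_i|) for labels mapped to {0, ..., n-1}."""
    labels = np.asarray(labels)
    counts = np.bincount(labels, minlength=n_classes)
    return {i: len(labels) / (n_classes * counts[i])
            for i in range(n_classes)}

# For per-sample targets, Keras accepts the dictionary directly:
#   model.fit(X, y, class_weight=class_weights(train_labels))
# For per-time-step targets, the weights are instead expanded into a
# temporal sample_weight array.
\end{verbatim}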
The most commonly used method to minimize the loss function defined in (\ref{eq:loss}) and learn the parameters $\myvec{W}$ of the model is gradient descent \cite{werbos1990backpropagation}:
\begin{equation}
\myvec{W}' = \myvec{W} - \eta \cdot \dfrac{\partial \mathcal{L}}{\partial \myvec{W}}
\end{equation}
where $\myvec{W}'$ are the parameters of the model after each gradient descent step and $\eta$ is the learning rate. In this work we utilize the RMSProp optimizer \cite{tieleman2012lecture}, which is an adaptive learning rate method and has been shown to improve the training time and performance of DL models.
\begin{figure}
\centering
\includegraphics[scale=0.3]{CNNLSTM}
\caption{CNN-LSTM model}
\label{fig:cnnlstm}
\end{figure}
The LSTM, CNN and CNN-LSTM models along with all the training algorithms were developed using Keras \cite{chollet2015keras}, which is a framework built on top of the Tensorflow library \cite{tensorflow2015-whitepaper}.
\section{Experimental Evaluation}
\label{sec:experiments}
All the models were tested for step sizes $k = 10, 50, 100,$ and $200$ in (\ref{m-a}), where the $\alpha$ value for each was set at $2 \times 10^{-5},\ 9 \times 10^{-5},\ 3 \times 10^{-4}$ and $ 3.5 \times 10^{-4} $, respectively. The parameter $\alpha$ was chosen in conjunction with the prediction horizon, with the aim of having a relatively balanced distribution of labels across classes. In a real trading scenario it is not possible to have a profitable strategy that creates as many trade signals as ``no-trade'' signals, because it would accumulate enormous commission costs. For that reason, $\alpha$ is selected with the aim of obtaining a reasonable ratio of about 20\% long, 20\% short and 60\% stationary labels. The effect of varying the parameter $\alpha$ on the class distribution of labels is shown in Table \ref{alpha-table}. Note that increasing $\alpha$ allows for reducing the number of trade signals, which should be adjusted depending on the actual commission and slippage costs that are expected to occur.
\begin{table}
\caption{Example of sample distribution across classes depending on $\alpha$ for prediction horizon $k =100$}
\label{alpha-table}
\begin{center}
\begin{tabular}{ | c |c| c| c|}
\hline
\hspace{2em}$\alpha$\hspace{2em} & \hspace{1em}Down\hspace{1em} & Stationary & \hspace{1.5em}Up\hspace{1.5em} \\
\hline\hline
$1.0 \times 10^{-5}$ & $0.39$&$0.17$&$0.45$ \\ \hline
$2.0 \times 10^{-5}$ & $0.38$&$0.19$&$0.43$ \\ \hline
$5.0 \times 10^{-5}$ & $0.35$&$0.25$&$0.41$ \\ \hline
$1.0 \times 10^{-4}$ & $0.30$&$0.33$&$0.36$ \\ \hline
$2.0 \times 10^{-4}$ & $0.23$&$0.49$&$0.28$ \\ \hline
$3.0 \times 10^{-4}$ & $0.18$&$0.60$&$0.22$ \\ \hline
$3.5 \times 10^{-4}$ & $0.15$&$0.66$&$0.19$ \\ \hline
\end{tabular}
\end{center}
\end{table}
We tested the CNN and LSTM models using the raw features and the proposed stationary features separately and compared the results. The architecture of the three models that were tested is described below.
The proposed CNN model consists of the following sequential layers:
\begin{center}
\begin{minipage}{0.6\textwidth}
\begin{enumerate}
\item 1D Convolution with 16 filters of size $(10,42)$
\item 1D Convolution with 16 filters of size $(10,)$
\item 1D Convolution with 32 filters of size $(8,)$
\item 1D Convolution with 32 filters of size $(6,)$
\item 1D Convolution with 32 filters of size $(4,)$
\item Fully connected layer with 32 neurons
\item Fully connected layer with 3 neurons
\end{enumerate}
\end{minipage}
\end{center}
The activation function used for all the convolutional and fully connected layers of the CNN is the Parametric Rectifying Linear Unit (PRELU) \cite{he2015delving}. The last layer uses the softmax function for the prediction of the probability distribution between the different classes. Each convolutional layer is followed by a Batch Normalization (BN) layer.
The LSTM network uses 32 hidden neurons followed by a feed-forward layer with 64 neurons using Dropout and PRELU as activation function. We found experimentally that the hidden layer of the LSTM should contain 64 or fewer neurons to avoid over-fitting the model. Experimenting with a higher number of hidden neurons would be feasible if the dataset were even larger.
Finally, the CNN-LSTM model applies the convolutional feature extraction layers to the input and then feeds the resulting features in the correct temporal order to an LSTM model. The CNN component is comprised of the following layers:
\begin{center}
\begin{minipage}{0.6\textwidth}
\begin{enumerate}
\item 1D Convolution with 16 filters of size $(5,42)$
\item 1D Convolution with 16 filters of size $(5,)$
\item 1D Convolution with 32 filters of size $(5,)$
\item 1D Convolution with 32 filters of size $(5,)$
\end{enumerate}
\end{minipage}
\end{center}
Note that the receptive field of each convolutional filter in the CNN module is smaller than that of the standalone CNN, since the LSTM can capture most of the information from past time steps. The LSTM module has the exact same architecture as the standalone LSTM. A visual representation of this CNN-LSTM model is shown in Figure~\ref{fig:cnnlstm}. Likewise, PRELU is the activation function used for the CNN and the fully connected layers, while the softmax function is used for the output layer of the network to predict the probability distribution of the classes.
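A minimal Keras sketch of this architecture (with an assumed window length, a feature count matching the first filter size above, and an assumed dropout rate) could look as follows:
\begin{verbatim}
from tensorflow import keras
from tensorflow.keras import layers

T, F = 300, 42        # assumed window length, features per time step

inp = keras.Input(shape=(T, F))
x = inp
for filters in (16, 16, 32, 32):
    # causal padding: filters never see future time steps
    x = layers.Conv1D(filters, 5, padding='causal')(x)
    x = layers.PReLU()(x)

# LSTM module, as in the standalone LSTM: 32 hidden units followed
# by a 64-neuron fully connected layer with dropout (rate assumed).
x = layers.LSTM(32, return_sequences=True)(x)
x = layers.TimeDistributed(layers.Dense(64))(x)
x = layers.PReLU()(x)
x = layers.Dropout(0.5)(x)
out = layers.TimeDistributed(layers.Dense(3, activation='softmax'))(x)

model = keras.Model(inp, out)
model.compile(optimizer=keras.optimizers.RMSprop(),
              loss='categorical_crossentropy')
\end{verbatim}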
\begin{figure}
\centering
\includegraphics[scale=0.70]{lstm_cost_per_step}
\caption{Mean cost per recurrent step of the LSTM network}
\label{bad-step-score}
\end{figure}
\begin{table*}
\caption{Experimental results for different prediction horizons $k$. The values that are reported are the mean of each metric for the last 20 training epochs.}
\label{results-table}
\begin{center}
\footnotesize
\bgroup
\def\arraystretch{0.8}
\begin{tabular}{ |c|c|c|c|c|c| }
\hline
\multirow{1}{*}{\textbf{Feature Type}} &
\multicolumn{1}{c|}{\textbf{Model}} &
\multicolumn{1}{c|}{\textbf{Mean Recall}} &
\multicolumn{1}{c|}{\textbf{Mean Precision}} &
\multicolumn{1}{c|}{\textbf{Mean F1}} & \multicolumn{1}{c|}{\textbf{Cohen's} $\kappa$} \\ \cline{1-6}
\multicolumn{6}{|c|}{\multirow{2}{*}{Prediction Horizon $k=10$}} \\
\multicolumn{6}{|c|}{} \\ \cline{1-6}
\multirow{4}{*}{\textbf{Raw Values}}
& SVM & $0.35 $ & $0.43 $ & $0.33 $ & $0.04 $ \\ \cline{2-6}
& MLP & ${ 0.34 }$ & ${ 0.34 }$ & ${ 0.09 }$ & ${ 0.00 }$ \\ \cline{2-6}
& CNN & ${ 0.51 }$ & ${ 0.42 }$ & ${ 0.38 }$ & ${ 0.14 }$ \\ \cline{2-6}
& LSTM & ${ 0.49 }$ & ${ 0.41 }$ & ${ 0.35 }$ & ${ 0.12 }$ \\ \cline{1-6}
\multirow{5}{*}{\textbf{Stationary Features}}
& SVM & $0.33 $ & $\mathbf{0.46 }$ & $0.30 $ & $0.011 $ \\ \cline{2-6}
& MLP & ${ 0.34 }$ & ${ 0.35 }$ & ${ 0.09 }$ & ${ 0.00 }$ \\ \cline{2-6}
& CNN & ${ 0.54 }$ & ${ 0.44 }$ & ${ 0.43 }$ & ${ 0.19 }$ \\ \cline{2-6}
& LSTM & ${ 0.55 }$ & ${ 0.45 }$ & ${ 0.42 }$ & ${ 0.18 }$ \\ \cline{2-6}
& CNNLSTM & $\mathbf{ 0.56 }$ & ${ 0.45 }$ & $\mathbf{ 0.44 }$ & $\mathbf{ 0.21 }$ \\ \cline{1-6}
\multicolumn{6}{|c|}{\multirow{2}{*}{Prediction Horizon $k=50$}} \\
\multicolumn{6}{|c|}{} \\ \cline{1-6}
\multirow{4}{*}{\textbf{Raw Values}}
& SVM & $0.35 $ & $0.41 $ & $0.32 $ & $0.03 $ \\ \cline{2-6}
& MLP & ${ 0.41 }$ & ${ 0.38 }$ & ${ 0.21 }$ & ${ 0.04 }$ \\ \cline{2-6}
& CNN & ${ 0.50 }$ & ${ 0.42 }$ & ${ 0.37 }$ & ${ 0.13 }$ \\ \cline{2-6}
& LSTM & ${ 0.46 }$ & ${ 0.40 }$ & ${ 0.34 }$ & ${ 0.10 }$ \\ \cline{1-6}
\multirow{5}{*}{\textbf{Stationary Features}}
& SVM & $0.39 $ & $0.41 $ & $0.38 $ & $0.09 $ \\ \cline{2-6}
& MLP & $0.49 $ & $0.43 $ & $0.38 $ & $0.14 $ \\ \cline{2-6}
& CNN & $0.55 $ & $0.45 $ & $0.43 $ & $0.20 $ \\ \cline{2-6}
&LSTM & $\mathbf{0.56 } $ & $0.46 $ & $0.44 $ & $0.21 $ \\ \cline{2-6}
& CNNLSTM & $\mathbf{0.56 }$ & $\mathbf{0.47 }$ & $\mathbf{0.47 }$ & $\mathbf{0.24 } $ \\ \cline{1-6}
\multicolumn{6}{|c|}{\multirow{2}{*}{Prediction Horizon $k=100$}} \\
\multicolumn{6}{|c|}{} \\ \cline{1-6}
\multirow{4}{*}{\textbf{Raw Values}}
& SVM & $0.35 $ & $0.46 $ & $0.33 $ & $0.05 $ \\ \cline{2-6}
& MLP & ${ 0.45 }$ & ${ 0.39 }$ & ${ 0.26 }$ & ${ 0.06 }$ \\ \cline{2-6}
& CNN & ${ 0.49 }$ & ${ 0.42 }$ & ${ 0.37 }$ & ${ 0.12 }$ \\ \cline{2-6}
& LSTM & ${ 0.45 }$ & ${ 0.39 }$ & ${ 0.34 }$ & ${ 0.09 }$ \\ \cline{1-6}
\multirow{5}{*}{\textbf{Stationary Features}}
& SVM & $0.36 $ & $0.46 $ & $0.35 $ & $0.07 $ \\ \cline{2-6}
& MLP & ${ 0.50 }$ & ${ 0.43 }$ & ${ 0.39 }$ & ${ 0.14 }$ \\ \cline{2-6}
& CNN & ${ 0.54 }$ & ${ 0.46 }$ & ${ 0.44 }$ & ${ 0.21 }$ \\ \cline{2-6}
& LSTM & $\mathbf{ 0.56 }$ & ${ 0.46 }$ & ${ 0.44 }$ & ${ 0.20 }$ \\ \cline{2-6}
& CNNLSTM & ${ 0.55 }$ & $\mathbf{ 0.47 }$ & $\mathbf{ 0.48 }$ & $\mathbf{ 0.24 }$ \\ \cline{1-6}
\multicolumn{6}{|c|}{\multirow{2}{*}{Prediction Horizon $k=200$}} \\
\multicolumn{6}{|c|}{} \\ \cline{1-6}
\multirow{4}{*}{\textbf{Raw Values}}
& SVM & $0.35 $ & $0.44 $ & $0.31 $ & $0.04 $ \\ \cline{2-6}
& MLP & ${ 0.44 }$ & ${ 0.40 }$ & ${ 0.32 }$ & ${ 0.08 }$ \\ \cline{2-6}
& CNN & ${ 0.47 }$ & ${ 0.43 }$ & ${ 0.39 }$ & ${ 0.14 }$ \\ \cline{2-6}
& LSTM & ${ 0.42 }$ & ${ 0.39 }$ & ${ 0.36 }$ & ${ 0.08 }$ \\ \cline{1-6}
\multirow{5}{*}{\textbf{Stationary Features}}
& SVM & $0.38 $ & $0.46 $ & $0.36 $ & $0.10 $ \\ \cline{2-6}
& MLP & ${ 0.49 }$ & ${ 0.45 }$ & ${ 0.42 }$ & ${ 0.17 }$ \\ \cline{2-6}
& CNN & ${ 0.51 }$ & ${ 0.47 }$ & ${ 0.45 }$ & ${ 0.20 }$ \\ \cline{2-6}
& LSTM & ${ 0.52 }$ & ${ 0.47 }$ & ${ 0.46 }$ & ${ 0.22 }$ \\ \cline{2-6}
& CNNLSTM & $\mathbf{ 0.53 }$ & $\mathbf{ 0.48 }$ & $\mathbf{ 0.49 }$ & $\mathbf{ 0.25 }$ \\ \cline{1-6}
\end{tabular}
\egroup
\end{center}
\end{table*}
One recurring effect we observe when training LSTM networks on LOB data is that the predictions $y_i$ for the first steps of observation yield a larger cross-entropy cost, meaning worse performance on our metrics. We ran a set of experiments in which the LSTM was trained on all the steps of the input windows $T$; the resulting mean cost per time step can be observed in Figure \ref{bad-step-score}. Predicting the price movement from insufficient past information is evidently not possible and should be avoided, since it leads to noisy gradients. To avoid this, a ``burn-in'' input is initially used to let the model build its perception of the market before its decisions are evaluated. In essence, the first ``burn-in'' steps of the input are skipped by not allowing any gradient to alter the model until after the 100th time step. We apply the same method to the CNN-LSTM model.
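A sketch of how such a burn-in mask can be implemented as a per-step loss is shown below; the tensor shapes and the use of a custom Keras loss function are assumptions.
\begin{verbatim}
# Sketch of the "burn-in" trick: the per-step cross-entropy of the
# first 100 recurrent steps is masked, so no gradient flows from them.
import tensorflow as tf

BURN_IN = 100

def burn_in_loss(y_true, y_pred):
    # y_true, y_pred: (batch, T, n_classes)
    ce = tf.keras.losses.categorical_crossentropy(y_true, y_pred)  # (batch, T)
    steps = tf.shape(ce)[1]
    mask = tf.cast(tf.range(steps) >= BURN_IN, ce.dtype)  # 0 before step 100
    return tf.reduce_sum(ce * mask, axis=1) / tf.reduce_sum(mask)
\end{verbatim}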
\begin{figure*}
\centering
\includegraphics[width=1.02\linewidth]{all_training}
\caption{F1 and Cohen's $\kappa$ metrics during training for prediction horizon $k=100$. Plots are smoothed with a mean filter with window=3 to reduce fluctuations.
}
\label{fig:f1-kappa-training}
\end{figure*}
For training the models, the dataset is split as follows: the first 7 days of each stock are used to train the models, while the final 3 days are used as test data. The experiments were conducted for 4 different prediction horizons $k$, as defined in (\ref{m-a}) and (\ref{direction-eq}).
Performance is measured using Cohen's kappa \cite{cohen1960coefficient}, which evaluates the concordance between sets of given answers while taking into consideration the possibility of random agreements. The mean recall, mean precision and mean F1 score over all 3 classes are also reported. Recall is the number of true positive samples divided by the sum of true positives and false negatives, while precision is the number of true positives divided by the sum of true positives and false positives. The F1 score is the harmonic mean of precision and recall.
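These metrics are standard and can be computed, for instance, with scikit-learn; the label arrays in the sketch below are purely illustrative.
\begin{verbatim}
# Sketch: Cohen's kappa and macro-averaged recall/precision/F1.
import numpy as np
from sklearn.metrics import (cohen_kappa_score, precision_score,
                             recall_score, f1_score)

y_true = np.array([0, 1, 2, 1, 0])   # hypothetical class labels
y_pred = np.array([0, 1, 1, 1, 0])
kappa = cohen_kappa_score(y_true, y_pred)
rec   = recall_score(y_true, y_pred, average="macro")   # mean over 3 classes
prec  = precision_score(y_true, y_pred, average="macro")
f1    = f1_score(y_true, y_pred, average="macro")
\end{verbatim}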
The results of the experiments are shown in Table \ref{results-table}, comparing the models trained on the raw price features with the ones trained on the extracted stationary features. The results confirm that extracting stationary features from the data significantly improves the performance of Deep Learning models such as CNNs and LSTMs.
We also trained a linear SVM model and a simple MLP model and compared them to the DL models. The SVM model was trained using Stochastic Gradient Descent, since the size of the dataset is too large for a regular Quadratic Programming solver. The SVM implementation is provided by the sklearn library \cite{pedregosa2011scikit}. The MLP model consists of three fully connected layers with sizes 128, 64 and 32, with PRELU as activation for each layer. Dropout is also used to avoid overfitting, and the softmax activation function is used in the last layer.
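A minimal sketch of this SGD-trained linear SVM is given below; the hyper-parameters and the randomly generated data are placeholders standing in for the actual flattened windows described next.
\begin{verbatim}
# Sketch of the linear SVM trained by stochastic gradient descent.
import numpy as np
from sklearn.linear_model import SGDClassifier

X_train = np.random.randn(1000, 50 * 42)       # hypothetical flattened windows
y_train = np.random.randint(0, 3, size=1000)   # 3 classes
svm = SGDClassifier(loss="hinge", alpha=1e-4)  # hinge loss = linear SVM
svm.fit(X_train, y_train)
\end{verbatim}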
Since both the SVM and the MLP models cannot iterate over timesteps to gain the same amount of information as the CNN and LSTM-based models, a window of 50 depth events is flattened into a single sample. This process is applied in a rolling fashion over all the data to generate a dataset on which the two models can be trained. One important observation concerns the training fluctuations visible in Figure \ref{fig:f1-kappa-training}, which are caused by the large class imbalance. Similar issues were observed in initial experiments with the CNN and LSTM models, but using the weighted loss described in Sec.~\ref{sec:optimization} the fluctuations subsided.
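The rolling-window flattening can be sketched as follows (array names are illustrative):
\begin{verbatim}
# Sketch: stack 50 consecutive depth events into one flat sample,
# applied in a rolling fashion over the event stream.
import numpy as np

def flatten_windows(X, window=50):
    # X: (n_events, n_features) ->
    # (n_events - window + 1, window * n_features)
    n, d = X.shape
    idx = np.arange(window)[None, :] + np.arange(n - window + 1)[:, None]
    return X[idx].reshape(-1, window * d)
\end{verbatim}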
The proposed stationary price features significantly outperform the raw price features for all the tested models. This can be attributed to a great extent to the stationary nature of the proposed features. The employed price differences provide an intrinsically stationary and normalized price measure that can be used directly. This is in contrast with the raw price values, which require careful normalization to keep their values within a reasonable range and which suffer from significant non-stationarity issues when the price rises to levels not seen before. By converting the actual prices to their differences from the mid price and normalizing those, this important feature is amplified rather than being suppressed by the much larger price movements through time. The proposed combined CNN-LSTM model also outperforms its individual component models, as shown in Figure \ref{fig:f1-kappa-training} and Table \ref{results-table}, showing that it can better handle the LOB data and exploit the microstructure existing within the data to produce more accurate predictions.
\section{Conclusion}
In this paper we proposed a novel method for extracting stationary features from raw LOB data, suitable for use with different DL models. Using different ML models, i.e., SVMs, MLPs, CNNs and LSTMs, it was experimentally demonstrated that the proposed features significantly outperform the raw price features. The proposed stationary features achieve this by making the difference between the prices in the LOB depth the main measure instead of the price itself, which usually fluctuates much more through time than the price levels within the LOB. A novel combined CNN-LSTM model was also proposed for time-series predictions, and it was demonstrated that it exhibits more stable behaviour and leads to better results than the standalone CNN and LSTM models.
There are several interesting future research directions. As with all DL applications, more data would enable the use of bigger models without the risk of over-training that was observed in this work. An RNN-type network could also be used to perform a form of ``intelligent'' re-sampling, extracting useful features from a specific and limited time interval of depth events; this would avoid losing information and allow the later models to produce predictions for a certain time period rather than for a number of subsequent events. Another important addition would be an attention mechanism \cite{xu2015show}, \cite{cho2015describing}, which would allow the network to better observe the features, ignoring noisy parts of the data and using only the relevant information.
\section*{Acknowledgment}
The research leading to these results has received funding from the H2020 Project BigDataFinance MSCA-ITN-ETN 675044 (http://bigdatafinance.eu), Training for Big Data in Financial Research and Risk Management.
\bibliographystyle{elsarticle-num}
\section{Introduction}
Given a data set and a model with some unknown parameters, the inverse problem aims to find the values of the model parameters that best fit the data.
In this work, in which we focus on systems of interacting elements,
the inverse problem concerns the statistical inference
of the underlying interaction network and of its coupling coefficients from observed data on the dynamics of the system.
Versions of this problem are encountered in physics, biology (e.g., \cite{Balakrishnan11,Ekeberg13,Christoph14}), social sciences and finance (e.g., \cite{Mastromatteo12,yamanaka_15}), and neuroscience (e.g., \cite{Schneidman06,Roudi09a,tyrcha_13}), just to cite a few, and are becoming more and more important due to the increase in the amount of data available from these fields.\\
\indent
A standard approach used in statistical inference is to predict the interaction couplings by maximizing the likelihood function.
This technique, however, requires the evaluation of the
partition function which, in the most general case, involves a number of computations scaling exponentially with the system size.
Boltzmann machine learning uses Monte Carlo sampling to compute the gradients of the log-likelihood looking for stationary points \cite{Murphy12}, but this method is computationally manageable only for small systems. A series of faster approximations, such as naive mean-field, the independent-pair approximation \cite{Roudi09a, Roudi09b}, inversion of TAP equations \cite{Kappen98,Tanaka98}, small correlations expansion \cite{Sessak09}, adaptive TAP \cite{Opper01}, adaptive cluster expansion \cite{Cocco12} or Bethe approximations \cite{Ricci-Tersenghi12, Nguyen12}, have since been developed. These techniques take as input means and correlations of observed variables, and most of them assume a fully connected graph as the underlying connectivity network, or expand around it by perturbative dilution. In most cases, network reconstruction turns out to be inaccurate for small data sizes and/or when couplings are strong or, else, if the original interaction network is sparse.\\
\indent
A further method, substantially improving performances for small data sets, is the so-called Pseudo-Likelihood Method (PLM) \cite{Ravikumar10}. In Ref. \cite{Aurell12} Aurell and Ekeberg performed a comparison between PLM and some of the just mentioned mean-field-based algorithms on the pairwise interacting Ising-spin ($\sigma = \pm 1$) model, showing that PLM performs noticeably better, especially on sparse graphs and in the high-coupling limit, i.e., at low temperature.
In this work, we aim at performing statistical inference on a model whose interacting variables are continuous $XY$ spins, i.e., $\sigma \equiv \left(\cos \phi,\sin \phi\right)$ with $\phi \in [0, 2\pi )$. The developed tools can, actually, also be straightforwardly applied to the $p$-clock model \cite{Potts52}, where the phase $\phi$ takes discretely equispaced $p$ values in the $2 \pi$ interval, $\phi_a = a 2 \pi/p$, with $a= 0,1,\dots,p-1$. The $p$-clock model, also called vector Potts model, gives a hierarchy of discretizations of the $XY$ model as $p$ increases. For $p=2$ one recovers the Ising model, for $p=4$ the Ashkin-Teller model \cite{Ashkin43}, for $p=6$ the ice-type model \cite{Pauling35,Baxter82}, and for $p=8$ the eight-vertex model \cite{Sutherland70,Fan70,Baxter71}.
It turns out to be very useful also for numerical implementations of the continuous $XY$ model.
Recent analysis on the multi-body $XY$ model has shown that for a limited number of discrete phase values ($p\sim 16, 32$) the thermodynamic critical properties of the $p\to\infty$ $XY$ limit are promptly recovered \cite{Marruzzo15, Marruzzo16}.
Our main motivation to study statistical inference is that these kinds of models have recently turned out to be rather useful in describing the behavior of optical systems,
including standard mode-locking lasers \cite{Gordon02,Gat04,Angelani07,Marruzzo15} and random lasers \cite{Angelani06a,Leuzzi09a,Antenucci15a,Antenucci15b,Marruzzo16}.
In particular, the inverse problem on the pairwise XY model analyzed here might be of help in recovering images from light propagated through random media.
This paper is organized as follows: in Sec. \ref{sec:model} we introduce the general model and we discuss its derivation also as a model for light transmission through random scattering media.
In Sec. \ref{sec:plm} we introduce the PLM with $l_2$ regularization and with decimation, two variants of the PLM introduced in Refs. \cite{Wainwright06} and \cite{Aurell12}, respectively, for the inverse Ising problem.
Here, we analyze these techniques for continuous $XY$ spins and we test them on thermalized data generated by Exchange Monte Carlo numerical simulations of the original model dynamics. In Sec. \ref{sec:res_reg} we present the results related to the PLM-$l_2$. In Sec. \ref{sec:res_dec} the results related to the PLM with decimation are reported and its performances are compared to the PLM-$l_2$ and to a variational mean-field method analyzed in Ref. \cite{Tyagi15}. In Sec. \ref{sec:conc}, we outline conclusive remarks and perspectives.
\section{The leading $XY$ model}
\label{sec:model}
The leading model we are considering is defined, for a system of $N$ angular $XY$ variables, by the Hamiltonian
\begin{equation}
\mathcal{H} = - \sum_{ik}^{1,N} J_{ik} \cos{\left(\phi_i-\phi_k\right)}
\label{eq:HXY}
\end{equation}
The $XY$ model is well known in statistical mechanics, providing important physical
insights, starting from the Berezinskii-Kosterlitz-Thouless
transition in two dimensions \cite{Berezinskii70,Berezinskii71,Kosterlitz72} and moving to, e.g., the
transition of liquid helium to its superfluid state \cite{Brezin82} and the roughening transition of the interface of a crystal in equilibrium with its vapor \cite{Cardy96}. In the presence of disorder and frustration \cite{Villain77,Fradkin78} the model has been adopted to describe synchronization problems such as the Kuramoto model \cite{Kuramoto75} and in the theoretical modeling of Josephson junction arrays \cite{Teitel83a,Teitel83b} and arrays of coupled lasers \cite{Nixon13}.
Besides several derivations and implementations of the model in quantum and classical physics, equilibrium or out of equilibrium, ordered or fully frustrated systems, Eq. (\ref{eq:HXY}), in its generic form,
has found applications also in other fields, a rather fascinating example being the behavior of starling flocks \cite{Reynolds87,Deneubourg89,Huth90,Vicsek95, Cavagna13}.
Our interest on the $XY$ model resides, though, in optics. Phasor and phase models with pairwise and multi-body interaction terms can, indeed, describe the behavior of electromagnetic modes in both linear and nonlinear optical systems in the analysis of problems such as light propagation and lasing \cite{Gordon02, Antenucci15c, Antenucci15d}. As couplings are strongly frustrated, these models turn out to be especially useful to the study of optical properties in random media \cite{Antenucci15a,Antenucci15b}, as in the noticeable case of random lasers \cite{Wiersma08,Andreasen11,Antenucci15e} and they might as well be applied to linear scattering problems, e.g., propagation of waves in opaque systems or disordered fibers.
\subsection{A propagating wave model}
We briefly mention a derivation of the model as a proxy for the propagation of light through random linear media.
Scattering of light is responsible for obstructing our view and making objects opaque: light rays, once they enter the material, only exit after being scattered multiple times within it. In such a disordered medium, both the direction and the phase of the propagating waves are random. Transmitted light
yields a disordered interference pattern, typically having low intensity, random phase and almost no resolution, called a speckle. Nevertheless, in recent years it has been realized that disorder is rather a blessing in disguise \cite{Vellekoop07,Vellekoop08a,Vellekoop08b}. Several experiments have made it possible to control the behavior of light and other optical processes in a given random disordered medium,
by exploiting, e.g., the tools developed for wavefront shaping to control the propagation of light and to engineer the confinement of light \cite{Yilmaz13,Riboli14}.
\\
\indent
In a linear dielectric medium, light propagation can be described through a part of the scattering matrix, the transmission matrix $\mathbb{T}$, linking the outgoing to the incoming fields.
Consider the case in which there are $N_I$ incoming channels and $N_O$ outgoing ones; we can indicate with $E^{\rm in,out}_k$ the input/output electromagnetic field phasors of channel $k$. In the most general case, i.e., without making any particular assumptions on the field polarizations, each light mode and its polarization state can be represented by means of the $4$-dimensional Stokes vector. Each $ t_{ki}$ element of $\mathbb{T}$, thus, is a $4 \times 4$ M{\"u}ller matrix. If, on the other hand, we know that the source is polarized and the observation is made on the same polarization, one can use a scalar model and adopt Jones calculus \cite{Goodman85,Popoff10a,Akbulut11}:
\begin{eqnarray}
E^{\rm out}_k = \sum_{i=1}^{N_I} t_{ki} E^{\rm in}_i \qquad \forall~ k=1,\ldots,N_O
\label{eq:transm}
\end{eqnarray}
We recall that the elements of the transmission matrix are random complex coefficients\cite{Popoff10a}. For the case of completely unpolarized modes, we can also use a scalar model similar to Eq. \eqref{eq:transm}, but whose variables are the intensities of the outgoing/incoming fields, rather than the fields themselves.\\
In the following, for simplicity, we will consider Eq. (\ref{eq:transm}) as our starting point,
where $E^{\rm out}_k$, $E^{\rm in}_i$ and $t_{ki}$ are all complex scalars.
If Eq. \eqref{eq:transm} holds for any $k$, we can write:
\begin{eqnarray}
\int \prod_{k=1}^{N_O} dE^{\rm out}_k \prod_{k=1}^{N_O}\delta\left(E^{\rm out}_k - \sum_{j=1}^{N_I} t_{kj} E^{\rm in}_j \right) = 1
\nonumber
\\
\label{eq:deltas}
\end{eqnarray}
Observed data are a noisy representation of the true values of the fields. Therefore, in inference problems it is statistically more meaningful to take that noise into account in a probabilistic way,
rather than looking at the precise solutions of the exact equations (whose parameters are unknown).
To this aim we can introduce Gaussian distributions whose limit for zero variance are the Dirac deltas in Eq. (\ref{eq:deltas}).
Moreover, we consider the ensemble of all possible solutions of Eq. (\ref{eq:transm}) at given $\mathbb{T}$, looking at all configurations of input fields. We thus define the function:
\begin{eqnarray}
Z &\equiv &\int_{{\cal S}_{\rm in}} \prod_{j=1}^{N_I} dE^{\rm in}_j \int_{{\cal S}_{\rm out}}\prod_{k=1}^{N_O} dE^{\rm out}_k
\label{def:Z}
\\
\times
&&\prod_{k=1}^{N_O}
\frac{1}{\sqrt{2\pi \Delta^2}} \exp\left\{-\frac{1}{2 \Delta^2}\left|
E^{\rm out}_k -\sum_{j=1}^{N_I} t_{kj} E^{\rm in}_j\right|^2
\right\}
\nonumber
\end{eqnarray}
We stress that the integral of Eq. \eqref{def:Z} is not exactly a Gaussian integral. Indeed, starting from Eq. \eqref{eq:deltas}, two constraints on the electromagnetic field intensities must be taken into account.
The space of solutions is delimited by the total power ${\cal P}$ received by the system, i.e.,
${\cal S}_{\rm in}: \{E^{\rm in} |\sum_k I^{\rm in}_k = \mathcal{P}\}$, which also implies a constraint on the total amount of energy transmitted through the medium, i.e.,
${\cal S}_{\rm out}:\{E^{\rm out} |\sum_k I^{\rm out}_k=c\mathcal{P}\}$, where the attenuation factor $c<1$ accounts for total losses.
As we will see in more detail in the following, being interested in inferring the transmission matrix through the PLM, we can omit to explicitly include these terms in Eq. \eqref{eq:H_J}, since they do not depend on $\mathbb{T}$ and hence do not add any information to the gradients with respect to the elements of $\mathbb{T}$.
Taking the same number of incoming and outgoing channels, $N_I=N_O=N/2$, and ordering the input fields in the first $N/2$ mode indices and the output fields in the last $N/2$ indices, we can drop the ``in'' and ``out'' superscripts and formally write $Z$ as a partition function
\begin{eqnarray}
\label{eq:z}
&& Z =\int_{\mathcal S} \prod_{j=1}^{N} dE_j \left( \frac{1}{\sqrt{2\pi \Delta^2}} \right)^{N/2}
\hspace*{-.4cm} \exp\left\{
-\frac{ {\cal H} [\{E\};\mathbb{T}] }{2\Delta^2}
\right\}
\\
&&{\cal H} [\{E\};\mathbb{T}] =
- \sum_{k=1}^{N/2}\sum_{j=N/2+1}^{N} \left[E^*_j t_{jk} E_k + E_j t^*_{kj} E_k^*
\right]
\nonumber
\\
&&\qquad\qquad \qquad + \sum_{j=N/2+1}^{N} |E_j|^2+ \sum_{k,l}^{1,N/2}E_k
U_{kl} E_l^*
\nonumber
\\
\label{eq:H_J}
&&\hspace*{1.88cm } = - \sum_{nm}^{1,N} E_n J_{nm} E_m^*
\end{eqnarray}
where ${\cal H}$ is a real-valued function by construction. We have introduced the effective input-input coupling matrix
\begin{equation}
U_{kl} \equiv \sum_{j=N/2+1}^{N}t^*_{lj} t_{jk}
\label{def:U}
\end{equation}
and the whole interaction matrix reads (here $\mathbb{T} \equiv \{ t_{jk} \}$)
\begin{equation}
\label{def:J}
\mathbb J\equiv \left(\begin{array}{ccc|ccc}
\phantom{()}&\phantom{()}&\phantom{()}&\phantom{()}&\phantom{()}&\phantom{()}\\
\phantom{()}&-\mathbb{U} \phantom{()}&\phantom{()}&\phantom{()}&{\mathbb{T}}&\phantom{()}\\
\phantom{()}&\phantom{()}&\phantom{()}&\phantom{()}&\phantom{()}&\phantom{()}\\
\hline
\phantom{()}&\phantom{()}&\phantom{()}&\phantom{()}&\phantom{()}&\phantom{()}\\
\phantom{()}& \mathbb T^\dagger&\phantom{()}&\phantom{()}& - \mathbb{I} &\phantom{()}\\
\phantom{a}&\phantom{a}&\phantom{a}&\phantom{a}&\phantom{a}&\phantom{a}\\
\end{array}\right)
\end{equation}
Determining the electromagnetic complex amplitude configurations that minimize the {\em cost function} ${\cal H}$, Eq. (\ref{eq:H_J}), means maximizing the overall distribution peaked around the solutions of the transmission Eqs. (\ref{eq:transm}). As the variance $\Delta^2\to 0$, eventually, the initial set of Eqs. (\ref{eq:transm}) is recovered. The ${\cal H}$ function, thus, plays the role of a Hamiltonian and $\Delta^2$ that of a noise-inducing temperature. The exact numerical problem corresponds to the zero-temperature limit of the statistical mechanical problem. When working with real data, though, which are noisy, a finite ``temperature''
allows for a better representation of the ensemble of solutions to the sets of equations of continuous variables.
Now, we can express every phasor in Eq. \eqref{eq:z} as $E_k = A_k e^{\imath \phi_k}$. As a working hypothesis we will consider the intensities $A_k^2$ as either homogeneous or as \textit{quenched} with respect to phases.
The first condition occurs, for instance, to the input intensities $|E^{\rm in}_k|$ produced by a phase-only spatial light modulator (SLM) with homogeneous illumination \cite{Popoff11}.
With \textit{quenched} here we mean, instead, that the intensity of each mode is the same for every solution of Eq. \eqref{eq:transm} at fixed $\mathbb T$.
We stress that including intensities in the model does not preclude the inference analysis, but it is outside the focus of the present work and will be considered elsewhere.
If all intensities are uniform in input and in output, this amounts to a constant rescaling of each of the four sectors of the matrix $\mathbb J$ in Eq. (\ref{def:J}), which does not change the properties of the matrices.
For instance, if the original transmission matrix is unitary, so will be the rescaled one, and the matrix $\mathbb U$ will be diagonal.
Otherwise, if intensities are \textit{quenched}, i.e., if they can be considered as constants in Eq. (\ref{eq:transm}),
they are inhomogeneous with respect to phases. The generic Hamiltonian element will therefore rescale as
\begin{eqnarray}
E^*_n J_{nm} E_m = J_{nm} A_n A_m e^{\imath (\phi_n-\phi_m)} \to J_{nm} e^{\imath (\phi_n-\phi_m)}
\nonumber
\end{eqnarray}
and the properties of the original $J_{nm}$ components are not conserved in the rescaled one. In particular, we have no argument, anymore, to possibly set the rescaled $U_{nm}\propto \delta_{nm}$.
Eventually, we end up with the complex couplings $XY$ model, whose real-valued Hamiltonian is written as
\begin{eqnarray}
\mathcal{H}& = & - \frac{1}{2} \sum_{nm} J_{nm} e^{-\imath (\phi_n - \phi_m)} + \mbox{c.c.}
\label{eq:h_im}
\\ &=& - \frac{1}{2} \sum_{nm} \left[J^R_{nm} \cos(\phi_n - \phi_m)+
J^I_{nm}\sin (\phi_n - \phi_m)\right]
\nonumber
\end{eqnarray}
where $J_{nm}^R$ and $J_{nm}^I$ are the real and imaginary parts of $J_{nm}$. Being $\mathbb J$ Hermitian, $J^R_{nm}=J^R_{mn}$ is symmetric and $J_{nm}^I=-J_{mn}^I$ is skew-symmetric.
\section{Pseudolikelihood Maximization}
\label{sec:plm}
The inverse problem consists in the reconstruction of the parameters $J_{nm}$ of the Hamiltonian, Eq. (\ref{eq:h_im}).
Given a set of $M$ data configurations of $N$ spins
$\bm\sigma = \{ \cos \phi_i^{(\mu)},\sin \phi_i^{(\mu)} \}$, $i = 1,\dots,N$ and $\mu=1,\dots,M$, we want to \emph{infer} the couplings:
\begin{eqnarray}
\bm \sigma \rightarrow \mathbb{J}
\nonumber
\end{eqnarray}
With this purpose in mind,
in the rest of this section we implement the working equations for the techniques used.
In order to test our methods, we generate the input data, i.e., the configurations, by Monte-Carlo simulations of the model.
The joint probability distribution of the $N$ variables $\bm{\phi}\equiv\{\phi_1,\dots,\phi_N\}$, follows the Gibbs-Boltzmann distribution:
\begin{equation}\label{eq:p_xy}
P(\bm{\phi}) = \frac{1}{Z} e^{-\beta \mathcal{H}\left(\bm{\phi}\right)} \quad \mbox{ where } \quad Z = \int \prod_{k=1}^N d\phi_k e^{-\beta \mathcal{H}\left(\bm{\phi}\right)}
\end{equation}
where, with reference to the formalism of Eq. (\ref{def:Z}), we denote $\beta=\left( 2\Delta^2 \right)^{-1}$.
In order to stick to usual statistical inference notation, in the following we will rescale the couplings by a factor $\beta / 2$: $\beta J_{ij}/2 \rightarrow J_{ij}$.
The main idea of the PLM is to work with the conditional probability distribution of one variable $\phi_i$ given all other variables,
$\bm{\phi}_{\backslash i}$:
\begin{eqnarray}
\nonumber
P(\phi_i | \bm{\phi}_{\backslash i}) &=& \frac{1}{Z_i} \exp \left \{ {H_i^x (\bm{\phi}_{\backslash i})
\cos \phi_i + H_i^y (\bm{\phi}_{\backslash i}) \sin \phi_i } \right \}
\\
\label{eq:marginal_xy}
&=&\frac{e^{H_i(\bm{\phi}_{\backslash i}) \cos{\left(\phi_i-\alpha_i(\bm{\phi}_{\backslash i})\right)}}}{2 \pi I_0(H_i)}
\end{eqnarray}
where $H_i^x$ and $H_i^y$ are defined as
\begin{eqnarray}
H_i^x (\bm{\phi}_{\backslash i}) &=& \sum_{j (\neq i)} J^R_{ij} \cos \phi_j - \sum_{j (\neq i) } J_{ij}^{I} \sin \phi_j \label{eq:26} \\
H_i^y (\bm{\phi}_{\backslash i}) &=& \sum_{j (\neq i)} J^R_{ij} \sin \phi_j + \sum_{j (\neq i) } J_{ij}^{I} \cos \phi_j \label{eq:27}
\end{eqnarray}
and $H_i= \sqrt{(H_i^x)^2 + (H_i^y)^2}$, $\alpha_i = \arctan\left(H_i^y/H_i^x\right)$, and we introduced the modified Bessel function of the first kind:
\begin{equation}
\nonumber
I_k(x) = \frac{1}{2 \pi}\int_{0}^{2 \pi} d \phi e^{x \cos{ \phi}}\cos{k \phi}
\end{equation}
Given $M$ observation samples $\bm{\phi}^{(\mu)}=\{\phi^\mu_1,\ldots,\phi^\mu_N\}$, $\mu = 1,\dots, M$, the
pseudo-loglikelihood for the variable $i$ is given by the logarithm of Eq. (\ref{eq:marginal_xy}),
\begin{eqnarray}
\label{eq:L_i}
L_i &=& \frac{1}{M} \sum_{\mu = 1}^M \ln P(\phi_i^{(\mu)}|\bm{\phi}^{(\mu)}_{\backslash i})
\\
\nonumber
& =& \frac{1}{M} \sum_{\mu = 1}^M \left[ H_i^{(\mu)} \cos( \phi_i^{(\mu)} - \alpha_i^{(\mu)}) - \ln 2 \pi I_0\left(H_i^{(\mu)}\right)\right] \, .
\end{eqnarray}
The underlying idea of PLM is that an approximation of the true parameters of the model is obtained for values that maximize the functions $L_i$.
The specific maximization scheme differentiates the different techniques.
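For illustration, a minimal numerical sketch of the site-wise pseudo-log-likelihood of Eq. \eqref{eq:L_i} is reported below; the array names are illustrative, and the diagonal couplings are assumed to be zero. Its maximization over the $i$-th row of $\mathbb{J}$ can then be carried out with any gradient-based routine.
\begin{verbatim}
# Sketch of L_i for the complex-coupling XY model.
# phi: (M, N) sampled angles; JR, JI: real/imaginary coupling matrices.
import numpy as np
from scipy.special import i0e   # exp(-x) I_0(x), numerically stable

def pseudo_loglik(i, phi, JR, JI):
    c, s = np.cos(phi), np.sin(phi)
    Hx = c @ JR[i] - s @ JI[i]              # H_i^x, Eq. (eq:26)
    Hy = s @ JR[i] + c @ JI[i]              # H_i^y, Eq. (eq:27)
    H = np.hypot(Hx, Hy)
    # H_i cos(phi_i - alpha_i) = Hx cos(phi_i) + Hy sin(phi_i)
    logZ = np.log(2 * np.pi * i0e(H)) + H   # log(2 pi I_0(H))
    return np.mean(Hx * c[:, i] + Hy * s[:, i] - logZ)
\end{verbatim}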
\subsection{PLM with $l_2$ regularization}
Especially for the case of sparse graphs, it is useful to add a regularizer, which prevents the maximization routine from moving towards high values of
$J_{ij}$ without converging. We adopt an $l_2$ regularization, so that the Pseudolikelihood function (PLF) at site $i$ reads:
\begin{equation}\label{eq:plf_i}
{\cal L}_i = L_i
- \lambda \sum_{i \neq j} \left(J_{ij}^R\right)^2 - \lambda \sum_{i \neq j} \left(J_{ij}^I\right)^2
\end{equation}
with $\lambda>0$.
Note that the value of $\lambda$ has to be chosen with care: large enough to regularize, but not so large as to overcome the data term $L_i$.
The standard implementation of the PLM consists in maximizing each ${\cal L}_i$, for $i=1\dots N$, separately. The expected values of the couplings are then:
\begin{equation}
\{ J_{i j}^*\}_{j\in \partial i} := \mbox{arg max}_{ \{ J_{ij} \}}
\left[{\cal L}_i\right]
\end{equation}
In this way, we obtain two estimates for each coupling $J_{ij}$: one from the maximization of ${\cal L}_i$, say $J_{ij}^{(i)}$, and another one from ${\cal L}_j$, say $J_{ij}^{(j)}$.
Since the original Hamiltonian of the $XY$ model is Hermitian, we know that the real part of the couplings is symmetric while the imaginary part is skew-symmetric.
The final estimate for $J_{ij}$ can then be obtained averaging the two results:
\begin{equation}\label{eq:symm}
J_{ij}^{\rm inferred} = \frac{J_{ij}^{(i)} + \bar{J}_{ij}^{(j)}}{2}
\end{equation}
where with $\bar{J}$ we indicate the complex conjugate.
It is worth noting that the pseudolikelihood $L_i$, Eq. \eqref{eq:L_i}, is characterized by the
following properties: (i) the normalization term of Eq. \eqref{eq:marginal_xy} can be
computed analytically, at odds with the {\em full} likelihood case, which
in general requires a computational time scaling exponentially
with the size of the system; (ii) the $\ell_2$-regularized pseudolikelihood
defined in Eq. \eqref{eq:plf_i} is strictly concave (i.e., it has a single
maximizer) \cite{Ravikumar10}; (iii) it is consistent, i.e., if $M$ samples are
generated by a model $P(\phi | J^*)$, the maximizer tends to $J^*$
for $M\rightarrow\infty$ \cite{besag1975}. Note also that (iii) guarantees that
$|J^{(i)}_{ij}-J^{(j)}_{ij}| \rightarrow 0$ for $M\rightarrow \infty$.
In Secs. \ref{sec:res_reg}, \ref{sec:res_dec}
we report the results obtained and analyze the performances of the PLM, with the configurations taken from Monte Carlo simulations of models whose details are known.
\subsection{PLM with decimation}
Even though the PLM with $l_2$-regularization allows the inference to be pushed towards the low-temperature region and the low-sampling regime with better performances than mean-field methods, in some situations some couplings are overestimated and not at all symmetric. Moreover, the technique is intrinsically biased by the $l_2$ regularizer.
To overcome these problems, Decelle and Ricci-Tersenghi introduced a new method \cite{Decelle14}, known as PLM + decimation: the algorithm maximizes the sum of the $L_i$,
\begin{eqnarray}
{\cal L}\equiv \frac{1}{N}\sum_{i=1}^N \mbox{L}_i
\end{eqnarray}
and then recursively sets to zero the couplings which are estimated to be very small. We expect that, as long as we are setting to zero couplings that are unnecessary to fit the data, there should not be much change in ${\cal L}$. Continuing the decimation, a point is reached where ${\cal L}$ decreases abruptly, indicating that relevant couplings are being decimated and under-fitting is taking place.
Let us define by $x$ the fraction of non-decimated couplings. To have a quantitative measure for the halt criterion of the decimation process, a tilted ${\cal L}$ is defined as,
\begin{eqnarray}
\mathcal{L}_t &\equiv& \mathcal{L} - x \mathcal{L}_{\textup{max}} - (1-x) \mathcal{L}_{\textup{min}} \label{$t$PLF}
\end{eqnarray}
where
\begin{itemize}
\item $\mathcal{L}_{\textup{min}}$ is the pseudolikelihood of a model with independent variables. In the XY case: $\mathcal{L}_{\textup{min}}=-\ln{2 \pi}$.
\item
$\mathcal{L}_{\textup{max}}$ is the pseudolikelihood of the fully-connected model, maximized over all the $N(N-1)/2$ possible couplings.
\end{itemize}
At the first step, when $x=1$, $\mathcal{L}$ takes value $\mathcal{L}_{\rm max}$ and $\mathcal{L}_t=0$. On the last step, for an empty graph, i.e., $x=0$, $\mathcal{L}$ takes the value $\mathcal{L}_{\rm min}$ and, hence, again $\mathcal{L}_t =0$.
In the intermediate steps, during the decimation procedure, as $x$ decreases from $1$ to $0$, one observes that $\mathcal{L}_t$ first increases linearly and then displays an abrupt decrease, indicating that from this point on relevant couplings are being decimated \cite{Decelle14}. In Fig. \ref{Jor1-$t$PLF} we give an instance of this behavior for the 2D short-range XY model with ordered couplings. We notice that the maximum point of $\mathcal{L}_t$ coincides with the minimum point of the reconstruction error, the latter defined as
\begin{eqnarray}\label{eq:errj}
\mbox{err}_J \equiv \sqrt{\frac{\sum_{i<j} (J^{\rm inferred}_{ij} -J^{\rm true}_{ij})^2}{N(N-1)/2}} \label{err}
\end{eqnarray}
We stress that the ${\cal L}_t$ maximum is obtained ignoring the underlying graph, while the err$_J$ minimum can be evaluated once the true graph has been reconstructed.
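Schematically, the decimation procedure can be sketched as follows, where \texttt{maximize\_plm} denotes a hypothetical routine returning the couplings fitted on the currently active graph together with the corresponding mean pseudo-log-likelihood, and the fraction decimated per step is a free choice.
\begin{verbatim}
# Schematic decimation loop monitoring the tilted pseudo-likelihood.
import numpy as np

N = 64
L_min = -np.log(2 * np.pi)              # independent-variable limit
active = ~np.eye(N, dtype=bool)         # start fully connected
J, L_max = maximize_plm(active)         # hypothetical PLM fit
L_t = []
while active.any():
    J, L = maximize_plm(active)
    x = active.sum() / (N * (N - 1))    # fraction of kept couplings
    L_t.append(L - x * L_max - (1 - x) * L_min)
    thr = np.quantile(np.abs(J[active]), 0.1)  # drop weakest 10%
    active &= np.abs(J) > thr
# the most likely network corresponds to the maximum of L_t
\end{verbatim}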
\begin{figure}[t!]
\centering
\includegraphics[width=1\linewidth]{Jor1_dec_tPLF_new.eps}
\caption{The tilted pseudolikelihood ${\cal L}_t$ and the reconstruction error vs the number of decimated couplings for an ordered, real-valued $J$ on the 2D XY model with $N=64$ spins. The peak of ${\cal L}_t$ coincides with the dip of the error.}
\label{Jor1-$t$PLF}
\end{figure}
In the next sections we will show the results obtained on the $XY$ model analyzing the performances of the two methods and comparing them also with a mean-field method \cite{Tyagi15}.
\section{Inferred couplings with PLM-$l_2$}
\label{sec:res_reg}
\subsection{$XY$ model with real-valued couplings}
In order to obtain the vector of couplings $J_{ij}^{\rm inferred}$, the function $-\mathcal{L}_i$ is minimized using the vector of derivatives ${\partial \mathcal{L}_i}/\partial J_{ij}$. The process is repeated for all the couplings, obtaining a fully connected adjacency matrix. The results presented here are obtained with $\lambda = 0.01$.
For the minimization we have used the MATLAB routine \emph{minFunc\_2012}\cite{min_func}.
\begin{figure}[t!]
\centering
\includegraphics[width=1\linewidth]{Jor11_2D_l2_JR_soJR_TPJR}
\caption{Top panels: instances of single site coupling reconstruction for the case of $N=64$ XY spins on a 2D lattice with ordered $J$ (left column) and bimodal distributed $J$ (right column).
Bottom panels: sorted couplings.}
\label{PL-Jor1}
\end{figure}
To produce the data by means of numerical Monte Carlo simulations a system with $N=64$ spin variables is considered on a deterministic 2D lattice with periodic boundary conditions.
Each spin has then connectivity $4$, i.e., we expect to infer an adjacency matrix with $N c = 256$ couplings different from zero.
The dynamics of the simulated model is based on the Metropolis algorithm and parallel tempering\cite{earl05} is used to speed up the thermalization of the system.
The thermalization is tested looking at the average energy over logarithmic time windows and
the acquisition of independent configurations
starts only after the system is well thermalized.
For the values of the couplings we considered two cases: an ordered case, indicated in the figure as $J$ ordered (e.g., left column of Fig. \ref{PL-Jor1}) where the couplings can take values $J_{ij}=0,J$, with $J=1$,
and a quenched disordered case, indicated in the figures as $J$ disordered (e.g., right column of Fig. \ref{PL-Jor1})
where the couplings can take also negative values, i.e.,
$J_{ij}=0,J,-J$, with a certain probability. The results here presented were obtained with bimodal distributed $J$s:
$P(J_{ij}=J)=P(J_{ij}=-J)=1/2$. The performances of the PLM turn out not to depend on $P(J)$.
We recall that in Sec. \ref{sec:plm} we used the temperature-rescaled notation, i.e., $J_{ij}$ stands for $J_{ij}/T$.
To analyze the performances of the PLM, in Fig. \ref{PL-Jor1} the inferred couplings, $\mathbb{J}^R_{\rm inf}$, are shown on top of the original couplings, $\mathbb{J}^R_{\rm true}$.
The top figure in the left column shows $\mathbb{J}^R_{\rm inf}$ (black) and $\mathbb{J}^R_{\rm tru}$ (green) for a given spin
at temperature $T/J=0.7$ and number of samples $M=1024$. The PLM appears to reconstruct the correct couplings, though zero couplings are always given a small non-zero inferred value.
In the bottom panel of the left column of Fig. \ref{PL-Jor1}, both $\mathbb{J}^R_{\rm{inf}}$ and $\mathbb{J}^R_{\rm{tru}}$ are sorted in decreasing order and plotted on top of each other.
We can clearly see that $\mathbb{J}^R_{\rm inf}$ reproduces the expected step function. Even though the jump is smeared, the difference between inferred couplings corresponding to the set of non-zero couplings
and to the set of zero couplings can be clearly appreciated.
Similarly, the plots in the right column of Fig. \ref{PL-Jor1} show the results obtained for the case with bimodal disordered couplings, for the same working temperature and number of samples.
In particular, note that the algorithm infers half positive and half negative couplings, as expected.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{Jor11_2D_l2_errJ_varT_varM}
\caption{Reconstruction error $\mbox{err}_J$, cf. Eq. (\ref{eq:errj}), plotted as a function of temperature (left) for three values of the number of samples $M$ and as a function of $M$ (right) for three values of the temperature in the ordered system, i.e., $J_{ij}=0,1$.
The system size is $N=64$.}
\label{PL-err-Jor1}
\end{figure}
In order to analyze the effects of the number of samples and of the temperature regimes, we plot in Fig. \ref{PL-err-Jor1} the reconstruction error, Eq. (\ref{err}), as a function of temperature for three different sample sizes $M=64,128$ and $512$.
The error is seen to rise sharply at low temperature, incidentally, in the ordered case, for $T<T_c \sim 0.893$, which is the Kosterlitz-Thouless transition temperature of the 2D XY model \cite{Olsson92}.
However, we can see that if only $M=64$ samples are considered, $\mbox{err}_J$ remains high independently of the working temperature.
In the right plot of Fig. \ref{PL-err-Jor1}, $\mbox{err}_J$ is plotted as a function of $M$ for three different working temperatures $T/J=0.4,0.7$ and $1.3$. As we expect,
$\mbox{err}_J$ decreases as $M$ increases. This effect was observed also with mean-field inference techniques on the same model\cite{Tyagi15}.
To better understand the performances of the algorithms, in Fig. \ref{PL-varTP-Jor1} we show several True Positive (TP) curves obtained for various values of $M$ at three different temperatures $T$. When $M$ is large and/or the temperature is not too small, we are able to correctly reconstruct all the couplings present in the system (see bottom plots).
The True Positive curve displays how many times the inference method finds a true link of the original network as a function of the index of the vector of sorted absolute value of reconstructed couplings $J_{ij}^{\rm inf}$.
The index $n_{(ij)}$ labels the corresponding spin pairs $(ij)$. The TP curve is obtained as follows:
first the values $|J^{\rm inf}_{ij}|$ are sorted in descending order and the spin pairs $(ij)$ are ordered according to the sorting position of $|J^{\rm inf}_{ij}|$. Then,
a cycle over the ordered set of pairs $(ij)$, indexed by $n_{(ij)}$, is performed, comparing with the original network coupling $J^{\rm true}_{ij}$ and verifying whether it is zero or not. The true positive curve is computed as
\begin{equation}
\mbox{TP}[n_{(ij)}]= \frac{\mbox{TP}\left[n_{(ij)}-1\right] \left(n_{(ij)}-1\right)+ 1 -\delta_{J^{\rm true}_{ij},0}}{n_{(ij)}}
\end{equation}
As long as $J^{\rm true}_{ij} \neq 0$, TP$=1$. As soon as the true coupling of a given $(ij)$ pair in the sorted list is zero, the TP curve departs from one.
In our case, where the connectivity per spin of the original system is $c=4$ and there are $N=64$ spins, we know that there will be $256$ non-zero couplings.
If the inverse problem is successful, we hence expect a steep decrease of the TP curve once $n_{(ij)}=256$ is exceeded.
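Equivalently, the TP curve is the cumulative fraction of true links among the first $n_{(ij)}$ sorted pairs, as in the sketch below (matrix names are illustrative).
\begin{verbatim}
# Sketch of the TP curve over the upper-triangular coupling pairs.
import numpy as np

def tp_curve(J_inf, J_true):
    iu = np.triu_indices_from(J_inf, k=1)
    order = np.argsort(-np.abs(J_inf[iu]))          # descending |J^inf|
    hits = (J_true[iu][order] != 0).astype(float)
    return np.cumsum(hits) / np.arange(1, hits.size + 1)
\end{verbatim}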
In Fig. \ref{PL-varTP-Jor1}
it is shown that, almost independently of $T/J$, the TP score improves as $M$ increases. Results are plotted for three different temperatures, $T=0.4,1$ and $2.2$, with increasing number of samples $M = 64, 128,512$ and $1024$ (clockwise).
The role of temperature can be clearly appreciated when the size of the data set is not very large: for small $M$, $T=0.4$ performs better.
When $M$ is high enough (e.g., $M=1024$), instead, the TP curves do not appear to be strongly influenced by the temperature.
\begin{figure}[t!]
\centering
\includegraphics[width=1\linewidth]{Jor11_2D_l2_TPJR_varT_varM}
\caption{TP curves for 2D short-range ordered $XY$ model with $N=64$ spins at three different values of $T/J$ with increasing - clockwise from top - $M$.}
\label{PL-varTP-Jor1}
\end{figure}
\subsection{$XY$ model with complex-valued couplings}
For the complex $XY$ model we have to simultaneously infer two separate coupling matrices, $J^R_{i j}$ and $J^I_{i j}$. As before, a system of $N=64$ spins is considered on a 2D lattice.
For the couplings we have considered both ordered and bimodal disordered cases.
In Fig. \ref{PL-Jor3}, a single row of the matrix $J$ (top) and the whole sorted couplings (bottom) are displayed for the ordered model (same legend as in Fig. \ref{PL-Jor1}) for the real, $J^R$ (left column), and the imaginary part, $J^I$.
\begin{figure}[t!]
\centering
\includegraphics[width=1\linewidth]{Jor3_l2_JRJI_soJRJI_TPJRJI}
\caption{Results related to the ordered complex XY model with $N=64$ spins on a 2D lattice. Top: instances of single site reconstruction for the real, JR (left column), and
the imaginary, JI (right column), part of $J_{ij}$. Bottom: sorted values of JR (left) and JI (right).}
\label{PL-Jor3}
\end{figure}
\section{PLM with Decimation}
\label{sec:res_dec}
\begin{figure}[t!]
\centering
\includegraphics[width=1\linewidth]{Jor1_dec_tPLF_varT_varM}
\caption{Tilted pseudolikelihood, ${\cal L}_t$, plotted as a function of the number of decimated couplings. Top: ${\cal L}_t$ curves obtained for different values of $M$, plotted on top of each other; here $T=1.3$. The black line indicates the expected number of decimated couplings, $x^*=(N (N-1) - N c)/2=1888$. As $M$ increases, the maximum point of ${\cal L}_t$ approaches $x^*$. Bottom: ${\cal L}_t$ curves obtained for different values of $T$ with $M=2048$. With this value of $M$, no differences can be appreciated in the maximum points of the different ${\cal L}_t$ curves.}
\label{var-$t$PLF}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=1\linewidth]{Jor1_dec_tPLF_peak_statistics_varM_prob.eps}
\caption{Number of most likely decimated couplings, estimated by the maximum point of $\mathcal{L}_t$, as a function of the number of samples $M$. The maximum point of $\mathcal{L}_t$ clearly tends toward $x^*$, which is the expected number of zero couplings in the system.}
\label{PLF_peak_statistics}
\end{figure}
For the ordered real-valued XY model we show in Fig. \ref{var-$t$PLF}, top panel, the outcome of the progressive decimation on the tilted pseudolikelihood $\mathcal{L}_t$, Eq. \eqref{$t$PLF}: from a fully connected lattice down to an empty lattice. The figure shows the behaviour of $\mathcal{L}_t$ for three different data sizes $M$. A clear data-size dependence of the maximum point of $\mathcal{L}_t$, signalling the most likely value for decimation, is shown. For small $M$ the most likely number of couplings is overestimated, and for increasing $M$ it tends to the true value, as displayed in Fig. \ref{PLF_peak_statistics}. In the bottom panel of Fig. \ref{var-$t$PLF} we display instead different
$\mathcal{L}_t$ curves obtained for three different values of $T$.
Even though the values of $\mathcal{L}_t$ decrease with increasing temperature, the most likely number of decimated couplings appears to be quite independent of $T$ with $M=2048$ samples.
In Fig. \ref{fig:Lt_complex} we eventually display the tilted pseudolikelihood for a 2D network with complex-valued ordered couplings, where the decimation of the real and imaginary coupling matrices proceeds in parallel, that is,
when a real coupling is small enough to be decimated its imaginary part is also decimated, and vice versa.
One can see that, though the separate errors for the real and imaginary parts differ in absolute value, they display the same dip, to be compared with the maximum point of $\mathcal{L}_t$.
\begin{figure}[t!]
\centering
\includegraphics[width=1\linewidth]{Jor3_dec_tPLF_new}
\caption{Tilted pseudolikelihood, ${\cal L}_t$, plotted with the reconstruction errors for the XY model with $N=64$ spins on a 2D lattice. These results refer to the case of ordered, complex-valued couplings. The full (red) line indicates ${\cal L}_t$. The dashed (green)
and the dotted (blue) lines show the reconstruction errors (Eq. \eqref{eq:errj}) obtained for the real and the imaginary couplings, respectively. Both ${\rm err_{JR}}$ and ${\rm err_{JI}}$ have a minimum at $x^*$.}
\label{fig:Lt_complex}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=1\linewidth]{Jor1_dec_JR_soJR_TPJR}
\caption{XY model on a 2D lattice with $N=64$ sites and real valued couplings. The graphs show the inferred (dashed black lines) and true couplings (full green lines) plotted on top of each other. The left and right columns refer to the
cases of ordered and bimodal disordered couplings, respectively. Top figures: single site reconstruction, i.e., one row of the matrix $J$. Bottom figures: couplings are plotted sorted in descending order.}
\label{Jor1_dec}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=1\linewidth]{Jor3_dec_JRJI_soJRJI_TPJRJI}
\caption{XY model on a 2D lattice with $N=64$ sites and ordered complex-valued couplings.
The inferred and true couplings are plotted on top of each other. The left and right columns show the real and imaginary parts, respectively, of the couplings. Top figures refer to a single site reconstruction, i.e., one row of the matrix $J$. Bottom figures report the couplings sorted in descending order.}
\label{Jor3_dec}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=1\linewidth]{MF_PL_Jor1_2D_TPJR_varT}
\caption{True Positive curves obtained with the three techniques: PLM with decimation, (blue) dotted line, PLM with $l_2$ regularization, (green) dashed line, and mean-field, (red) full line. These results refer to real-valued ordered couplings with $N=64$ spins on a 2D lattice. The temperature is here $T=0.7$, while the four graphs refer to different sample sizes: $M$ increases clockwise.}
\label{MF_PL_TP}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=1\linewidth]{MF_PL_Jor1_2D_errJ_varT_varM}
\caption{Variation of the reconstruction error, ${\rm err_J}$, with temperature as obtained with the three different techniques, see Fig. \ref{MF_PL_TP}, for four different sample sizes: clockwise from top, $M=512, 1024, 2048$ and $4096$.}
\label{MF_PL_err}
\end{figure}
Once the most likely network has been identified through the decimation procedure, we repeat the analysis of the inferred couplings, displayed in Fig. \ref{Jor1_dec} for ordered and quenched disordered real-valued couplings
and in Fig. \ref{Jor3_dec} for complex-valued ordered couplings. In comparison to the results shown in Sec. \ref{sec:res_reg},
the PLM with decimation leads to rather cleaner results. In Figs. \ref{MF_PL_err} and \ref{MF_PL_TP} we compare the performance of the PLM with decimation with that of the PLM with $l_2$-regularization. These two techniques are also compared with a mean-field technique previously implemented on the same XY systems \cite{Tyagi15}.
For what concerns the network of connecting links, in Fig. \ref{MF_PL_TP} we compare the TP curves obtained with the three techniques. The results refer to the case of ordered and real valued couplings, but similar behaviours were obtained for the other cases analysed.
The four graphs are related to different sample sizes, with $M$ increasing clockwise. When $M$ is high enough, all techniques reproduce the true network.
However, for lower values of $M$ the performances of the PLM with $l_2$ regularization and with decimation drastically surpass those of the earlier mean-field technique.
In particular, for $M=256$ the PLM techniques still reproduce the original network, while the mean-field method fails to find more than half of the couplings.
When $M=128$, the network is clearly reconstructed only through the PLM with decimation, while the PLM with $l_2$ regularization underestimates the couplings.
Furthermore, we notice that the PLM with decimation is able to clearly infer the network of interactions even when $M=N$, signalling that it could be considered also in the under-sampling regime $M<N$.
In Fig. \ref{MF_PL_err} we compare the temperature behaviour of the reconstruction error.
It can be observed that for all temperatures and for all sample sizes the reconstruction error, ${\rm err_J}$ (plotted here in log scale), obtained with the PLM + decimation is always smaller than
that obtained with the other techniques. The temperature behaviour of ${\rm err_J}$ agrees with the one already observed for Ising spins in \cite{Nguyen12b} and for XY spins in \cite{Tyagi15} with a mean-field approach: ${\rm err_J}$ displays a minimum around $T\simeq 1$ and then increases at much lower $T$; however,
the error obtained with the PLM with decimation is several times smaller than the error estimated by the other methods.
\section{Conclusions}
\label{sec:conc}
Different statistical inference methods have been applied to the inverse problem of the XY model.
After a short review of techniques based on pseudo-likelihood and of their formal generalization to the model, we have tested their performances against data generated by means of Monte Carlo numerical simulations of known instances
with diluted, sparse interactions.
The main outcome is that the best performances are obtained by means of the pseudo-likelihood method combined with decimation. Setting to zero (i.e., decimating) very weak bonds, this technique turns out to be very precise for problems whose real underlying interaction network is sparse, i.e., where the number of couplings per variable does not scale with the number of variables.
The PLM + decimation method is compared to the PLM + regularization method, with $\ell_2$ regularization, and to a mean-field-based method. The quality of the network reconstruction is analyzed by looking at the overall sorted couplings and at the single-site couplings, comparing them with the real network, and at the true positive curves in all three approaches. In the PLM + decimation method, moreover, the identification of the number of decimated bonds at which the tilted pseudo-likelihood is maximal allows for a precise estimate of the total number of bonds. Concerning this technique, it is also shown that the network with the most likely number of bonds is also the one with the least reconstruction error, where not only the presence of a bond is predicted but also its value.
The behavior of the inference quality with temperature and with the size of the data samples is also investigated, basically confirming the low-$T$ behavior hinted at by Nguyen and Berg \cite{Nguyen12b} for the Ising model. In temperature, in particular, the reconstruction error curve displays a minimum at a low temperature, close to the critical point in those cases in which a critical behavior occurs, and a sharp increase as the temperature goes to zero. The decimation method, once again, appears to reduce this minimum of the reconstruction error by almost an order of magnitude with respect to the other methods.
The techniques displayed and the results obtained in this work can be of use in any of the many systems whose theoretical representation is given by Eq. \eqref{eq:HXY} or Eq. \eqref{eq:h_im}, some of which are recalled in Sec. \ref{sec:model}. In particular, a possible application can be the field of light waves propagation through random media and the corresponding problem of the reconstruction of an object seen through an opaque medium or a disordered optical fiber \cite{Vellekoop07,Vellekoop08a,Vellekoop08b, Popoff10a,Akbulut11,Popoff11,Yilmaz13,Riboli14}.
\section{Introduction}
Model Predictive Control~(MPC) is widely known as an advanced control technique for nonlinear systems that can handle time-varying references with preview information as well as constraints. In the vast body of literature, standard MPC formulations penalize deviations from a set point (typically the origin) or a (feasible) reference trajectory, while providing stability and recursive feasibility guarantees for such settings~\cite{Mayne2000,rawlings2009model,borrelli2017predictive,Grune2011}.
In practice, it is often difficult or cumbersome to pre-compute reference trajectories which are feasible, i.e., which both satisfy path constraints and evolve according to the system dynamics. It is therefore appealing to use reference trajectories (or reference paths) that are simple to compute and only satisfy path constraints, but not necessarily the system dynamics. However, using such trajectories introduces difficulties in providing stability guarantees~\cite{rawlings2012fundamentals}.
This paper aims at (partially) filling a gap between practice and theory that exists for MPC formulations with infeasible references. Previous work~\cite{Rawlings2008a} has considered infeasible set points, and shown that stabilization to the closest feasible set point can be guaranteed. However, in the case of time-varying references, this analysis does not directly apply. To that end, we propose to use an Input-to-State Stability~(ISS) approach instead.
Our contribution in this paper is twofold. First, we prove that MPC formulations using an infeasible reference can actually stabilize the system towards an optimal trajectory, subject to specific terminal conditions. However, selecting such terminal conditions in general requires that an optimization problem be solved beforehand to compute a feasible reference; if this step is performed, one no longer has a reason to provide an infeasible reference to the MPC controller. Therefore, in a second step, we extend this first result with an ISS analysis, showing that if sub-optimal terminal conditions are chosen, the controlled system in closed loop is stabilized to a neighborhood of the optimal trajectory.
This paper is structured as follows. In Section~\ref{sec:mpftc} we introduce the MPC tracking problem, while in Section~\ref{sec:economic_mpc} we prove that, while having an ideal setting in mind, one can design an MPC formulation that stabilizes the closed-loop system to an optimal feasible reference, even if an infeasible reference is used, at the price of defining suitable terminal conditions. Then, in Section~\ref{sec:iss} we further extend the results by proving ISS for practical settings, i.e., in case the terminal conditions are based on the infeasible reference. Finally, in Section~\ref{sec:simulations} we illustrate the derived theory with a numerical example, and draw conclusions in Section~\ref{sec:conclusions}.
\section{Preliminaries}\label{sec:mpftc}
Consider a discrete-time Linear Time-Varying~(LTV) system
\begin{equation}\label{eq:sys}
{\mathbf{x}}_{k+1}=f_k({\mathbf{x}}_k,\u_k)=A_k {\mathbf{x}}_k + B_k \u_k,
\end{equation}
where ${\mathbf{x}}_k\in\mathbb{R}^{n_{\mathbf{x}}}$ and $\u_k\in\mathbb{R}^{n_\u}$ are the state and input vectors at time $k$, and the matrices $A_k\in\mathbb{R}^{n_{\mathbf{x}}\times n_{\mathbf{x}}}$ and $B_k\in\mathbb{R}^{n_{\mathbf{x}}\times n_\u}$ are time-varying. While we only consider LTV systems in this paper, we comment in Remark~\ref{rem:nl_sys} on how the results could be extended to general nonlinear systems. The states and inputs are subject to constraints defined by the function $h:\mathbb{R}^{n_{\mathbf{x}}}\times\mathbb{R}^{n_\u}\rightarrow\mathbb{R}^{n_h}$, where the inequality $h({\mathbf{x}},\u)\leq 0$ is defined element-wise. The constraint $h({\mathbf{x}},\u)$ models, e.g., regions of the state space which should be avoided, and actuator limitations. Our objective is to control the system such that the state ${\mathbf{x}}_k$ tracks a user-provided parameterized reference trajectory $\r(t)=(\r^{\mathbf{x}}(t),\r^\u(t))$ as closely as possible. We assume that the reference trajectory is parameterized with time parameter $t$, with natural dynamics
\begin{equation}
\label{eq:tau_controlled}
t_{k+1} = t_k + t_\mathrm{s},
\end{equation}
where $t_\mathrm{s}=1$ for discrete-time systems, or the sampling time for sampled-data systems. Throughout the remainder of the paper, we will refer to any time dependence of the reference using notation $(\r^{\mathbf{x}}_k,\r^\u_k):=(\r^{\mathbf{x}}(t_k),\r^\u(t_k))$.
In order to track the reference $(\r^{\mathbf{x}}_k,\r^\u_k)$, we formulate the tracking MPC problem as
\begin{subequations}
\label{eq:nmpc}
\begin{align}
\begin{split}\hspace{-0.5em}V({\mathbf{x}}_k,t_k):=&\min_{\substack{{\x}},\substack{{\u}}} \sum_{n=k}^{k+N-1}
q_\r(\xb,\ub,t_n)\\
&\hspace{3.5em}+p_\r(\xb[k+N],t_{k+N})\hspace{-2em}
\end{split}\label{eq:nmpc_cost}\\
\text{s.t.}\ &\xb[k][k] = {\mathbf{x}}_{k},\label{eq:nmpcState} \\
&\xb[n+1] = f_n(\xb,\ub),\label{eq:nmpcDynamics} & \hspace{-1em}n\in \mathbb{I}_k^{k+N-1},\\
&h(\xb,\ub) \leq{} 0, \label{eq:nmpcInequality_known}& \hspace{-1em}n\in \mathbb{I}_k^{k+N-1},\\
&\xb[k+N] \in\mathcal{X}^\mathrm{f}_\r(t_{k+N})\label{eq:nmpcTerminal},
\end{align}
\end{subequations}
where $k$ is the current time instance and $N$ is the prediction horizon. In tracking MPC, typical choices for the stage and terminal costs are
\begin{align}
q_\r(\xb,\ub,t_n) &:= \matr{c}{\xb-\rx_n\\\ub-\ru_n}^\top{}\hspace{-0.7em}W\matr{c}{\xb-\rx_n\\\ub-\ru_n},\label{eq:stage_cost}\\
p_\r(\xb,t_n) &:= (\xb-\rx_{n})^\top{}P(\xb-\rx_{n}),\label{eq:terminal_cost}
\end{align}
where $W\in\mathbb{R}^{(n_{\mathbf{x}}+n_\u) \times (n_{\mathbf{x}}+n_\u)}$ and $P\in\mathbb{R}^{n_{\mathbf{x}}\times n_{\mathbf{x}}}$ are symmetric positive-definite matrices. In order to avoid further technicalities, we do not consider more general costs. The predicted states and controls at prediction time $n$, given the state at the current time $k$, are denoted by $\xb$ and $\ub$, respectively.
The initial condition is enforced by constraint \eqref{eq:nmpcState}, and constraint \eqref{eq:nmpcDynamics} enforces the system dynamics. Constraint \eqref{eq:nmpcInequality_known} denotes constraints, e.g., state and control limits, and constraint \eqref{eq:nmpcTerminal} defines a terminal set containing the reference $\r$. Note that, differently from standard formulations, the terminal constraint depends on the time parameter $t_{k+N}$ relative to the reference.
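For concreteness, the following sketch shows how one receding-horizon step of the tracking problem~\eqref{eq:nmpc} with the quadratic costs~\eqref{eq:stage_cost}--\eqref{eq:terminal_cost} could be set up numerically. It is only an illustration: the dynamics, weights, the box constraint standing in for $h$, the ellipsoidal terminal set, and the use of Python/CVXPY are our own assumptions, not the implementation used later in Section~\ref{sec:simulations}.
\begin{verbatim}
import numpy as np
import cvxpy as cp

nx, nu, N = 2, 1, 10
A = np.array([[1.0, 0.03], [0.0, 1.0]])   # A_k (kept constant for brevity)
B = np.array([[0.0], [0.03]])             # B_k
W = np.eye(nx + nu)                       # stage weight of q_r
P = 10.0 * np.eye(nx)                     # terminal weight of p_r

def mpc_step(x_now, r_x, r_u):
    # r_x: (N+1, nx) state reference, r_u: (N, nu) input reference.
    # The reference only needs to satisfy the path constraints,
    # not the system dynamics.
    x = cp.Variable((N + 1, nx))
    u = cp.Variable((N, nu))
    cost, con = 0, [x[0] == x_now]                   # initial condition
    for n in range(N):
        e = cp.hstack([x[n] - r_x[n], u[n] - r_u[n]])
        cost += cp.quad_form(e, W)                   # stage cost q_r
        con += [x[n + 1] == A @ x[n] + B @ u[n],     # dynamics
                cp.abs(u[n]) <= 5.0]                 # h(x, u) <= 0 (example)
    cost += cp.quad_form(x[N] - r_x[N], P)           # terminal cost p_r
    con += [cp.quad_form(x[N] - r_x[N], P) <= 50.0]  # terminal set
    cp.Problem(cp.Minimize(cost), con).solve()
    return u.value[0]   # first input, applied in receding horizon
\end{verbatim}
Only the first optimal input is applied; the problem is then re-solved at the next sampling instant with the shifted reference.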
In the following, we first recall the standard stability properties of tracking MPC. Then, in Sections~\ref{sec:economic_mpc} and~\ref{sec:iss} we will derive
Input-to-State Stability~(ISS) results when the parameterized reference trajectory is not feasible with respect to the system dynamics.
In order to prove stability, we introduce the following standard assumptions, see, e.g.,~\cite{rawlings2009model,Grune2011}.
\begin{Assumption}[System and cost regularity]\label{a:cont}
The system model $f$ is continuous, and the stage cost $q_\r:\mathbb{R}^{n_{\mathbf{x}}}\times\mathbb{R}^{n_\u}\times\mathbb{R}\rightarrow\mathbb{R}_{\geq{}0}$, and terminal cost $p_\r:\mathbb{R}^{n_{\mathbf{x}}}\times\mathbb{R}\rightarrow\mathbb{R}_{\geq{}0}$, are continuous at the origin and satisfy $q_\r(\rx_k,\ru_k,t_k)=0$, and $p_\r(\rx_k,t_k)=0$. Additionally, $q_\r({\x}_k,{\u}_k,t_k)\geq{}\alpha_1(\|{\x}_k-\rx_k\|)$ for all feasible ${\mathbf{x}}_k$, $\u_k$, and $p_\r({\x}_k,t_k)\leq\alpha_2(\|{\x}_k-\rx_k\|)$, where $\alpha_1$ and $\alpha_2$ are $\mathcal{K}_\infty$-functions.
\end{Assumption}
\begin{Assumption}[Reference feasibility] \label{a:rec_ref}
The reference trajectory satisfies the system dynamics~\eqref{eq:nmpcDynamics} and the system constraints~\eqref{eq:nmpcInequality_known}, i.e., $\r^{\mathbf{x}}_{k+1}=f_k(\r^{\mathbf{x}}_k,\r^\u_k)$ and $h(\r^{\mathbf{x}}_k,\r^\u_k) \leq{} 0$, $\forall{}k\in\mathbb{I}_0^\infty$.
\end{Assumption}
\begin{Assumption}[Stabilizing Terminal Conditions] \label{a:terminal}
There exists a parametric stabilizing terminal set $\mathcal{X}^\mathrm{f}_\r(t)$ and a terminal control law $\kappa^\mathrm{f}_\r({\mathbf{x}},t)$ yielding:
\begin{align*}
\mathbf{x}_{+}^\kappa=f_k(\mathbf{x}_k,\kappa^\mathrm{f}_\r({\mathbf{x}}_k,t)), && t_+ = t_k + t_\mathrm{s},
\end{align*}
such that
$p_\r({\mathbf{x}}_{+}^\kappa,t_{+}) - p_\r({\mathbf{x}}_k,t_k) \leq{} - q_\r({\mathbf{x}}_k,\kappa^\mathrm{f}_\r({\mathbf{x}}_k,t_k),t_k)$,
${\mathbf{x}}_k\in\mathcal{X}^\mathrm{f}_\r(t_k)\Rightarrow {\mathbf{x}}^\kappa_{+}\in\mathcal{X}^\mathrm{f}_\r(t_{+})$, and $h({\mathbf{x}}_k,\kappa^\mathrm{f}_\r({\mathbf{x}}_k,t_k)) \leq{} 0$ hold for all $k\in\mathbb{I}_0^\infty$.
\end{Assumption}
\begin{Proposition} [Nominal Asymptotic Stability]\label{prop:stab_feas}
{Suppose that Assumptions \ref{a:cont}, \ref{a:rec_ref}, and \ref{a:terminal} hold,
and that the initial state $({\mathbf{x}}_k,t_k)$ at time $k$ belongs to the feasible set of Problem \eqref{eq:nmpc}. Then the system \eqref{eq:sys} in closed-loop with the solution of~\eqref{eq:nmpc} applied in receding horizon is an asymptotically stable system.} \label{prop:stable}
\begin{proof}
See the standard proof in, e.g., \cite{rawlings2009model,borrelli2017predictive}.
\end{proof}
\end{Proposition}
Proposition~\ref{prop:stab_feas} recalls the known stability results from the existing literature, which apply to tracking MPC schemes. The resulting design procedure for asymptotically stable tracking MPC is indeed complicated by the task of precomputing a feasible reference trajectory $(\r^{\mathbf{x}}_k,\r^\u_k)$ that satisfies Assumption~\ref{a:rec_ref}. However, in practice, it may be convenient to use a reference trajectory that is infeasible w.r.t. the system dynamics, yet simpler to define. While in standard MPC settings the stability with respect to an unreachable set point has been studied in~\cite{Rawlings2008a}, the approach therein applies to time-invariant infeasible references. In order to overcome such a limitation, we consider a setting where the reference can be time-varying and does not need to satisfy Assumption~\ref{a:rec_ref}, and the terminal conditions \eqref{eq:nmpcTerminal} do not need to hold at the reference trajectory, but in a neighborhood. While the results proposed in this paper are developed for a standard MPC formulation, we point out that they hold in other settings as well, including Model Predictive path Following Control~(MPFC)~\cite{Faulwasser2016} or Model Predictive Flexible trajectory Tracking Control~(MPFTC)~\cite{batkovic2020safe}.
\section{Optimal Feasible Reference}
Consider the optimal state and input trajectories obtained as the solution of the optimal control problem (OCP)
\begin{subequations}
\label{eq:ocp}
\begin{align}\begin{split}
\hspace{-1em}({\x}^{\mathrm{r}},{\u}^{\mathrm{r}})\hspace{-.25em}:=\hspace{-.5em}\lim_{M\rightarrow\infty}\hspace{-0.3em}
\arg&\min_{{\boldsymbol{\xi}},{\boldsymbol{\nu}}} \sum_{n=0}^{M-1}
\hspace{-.3em}q_\r({\boldsymbol{\xi}}_n,{\boldsymbol{\nu}}_n,t_n)\hspace{-0.1em}+\hspace{-0.1em}p_\r({\boldsymbol{\xi}}_M,t_M)\label{eq:ocp_cost} \hspace{-20em}\end{split}\\
\text{s.t.}\ &{\boldsymbol{\xi}}_0={\mathbf{x}}_{0}, \label{eq:ocpState} &\\
&{\boldsymbol{\xi}}_{n+1} = f_n({\boldsymbol{\xi}}_n,{\boldsymbol{\nu}}_n),\label{eq:ocpDynamics} & n\in \mathbb{I}_{0}^{M-1},\\
&h({\boldsymbol{\xi}}_n,{\boldsymbol{\nu}}_n) \leq{} 0, \label{eq:ocpInequality_known}& \hspace{-1em}n\in \mathbb{I}_0^{M-1},
\end{align}
\end{subequations}
with the corresponding value function
\begin{equation}
V^\mathrm{O}({\mathbf{x}}_k,t_k) := \lim_{M\rightarrow\infty}\sum_{n=k}^{k+M-1} q_\r({\mathbf{x}}^{\mathrm{r}}_n,\u_n^{\mathrm{r}},t_n)+p_\r({\mathbf{x}}_{k+M}^{\mathrm{r}},t_{k+M}).
\end{equation}
The terminal cost in~\eqref{eq:ocp_cost} and the initial state constraint~\eqref{eq:ocpState} can in principle be omitted or formulated otherwise, e.g., the terminal cost can be replaced with a terminal constraint; we include them in the formulation since they often take this form. We use here the same stage cost as in~\eqref{eq:nmpc_cost} and assume it is positive-definite. We exclude positive semi-definite costs solely for the sake of simplicity.
We define the Lagrangian of the OCP~\eqref{eq:ocp} as
\begin{align*}
\mathcal{L}^\mathrm{O}({\boldsymbol{\xi}}, {\boldsymbol{\nu}}, {\boldsymbol{\lambda}},{\boldsymbol{\mu}},\mathbf{t}) &= {\boldsymbol{\lambda}}_0^\top ({\boldsymbol{\xi}}_0 - {\mathbf{x}}_{0}) +p_\r({\boldsymbol{\xi}}_M,t_M)\\
&+\lim_{M\rightarrow\infty}\sum_{n=0}^{M-1}
q_\r({\boldsymbol{\xi}}_n,{\boldsymbol{\nu}}_n,t_n) +{\boldsymbol{\mu}}_n^\top h({\boldsymbol{\xi}}_n,{\boldsymbol{\nu}}_n)\\
&+\lim_{M\rightarrow\infty}\sum_{n=0}^{M-1} {\boldsymbol{\lambda}}_{n+1}^\top ({\boldsymbol{\xi}}_{n+1} - f_n({\boldsymbol{\xi}}_n,{\boldsymbol{\nu}}_n)),
\end{align*}
and denote the optimal multipliers as ${\boldsymbol{\lambda}}^{\mathrm{r}},{\boldsymbol{\mu}}^{\mathrm{r}}$, and the solution of~\eqref{eq:ocp} as ${\mathbf{y}}^{\mathrm{r}}:=({\mathbf{x}}^{\mathrm{r}},\u^{\mathrm{r}})$.
Hereafter, we will refer to the reference ${\mathbf{y}}^{\mathrm{r}}$ as the \emph{feasible reference}, as it satisfies Assumption~\ref{a:rec_ref}.
\begin{Remark}
Note that Problem~\eqref{eq:ocp} is formulated as an infinite horizon OCP since a reference could be defined over an infinite time horizon. For instance, a stationary reference can be viewed as being infinitely long as it remains at the same point at all times.
\end{Remark}
In the following, we will prove the stability of \eqref{eq:sys} w.r.t. ${\mathbf{y}}^{\mathrm{r}}$ by relying on the trajectories~${\mathbf{y}}^{\mathrm{r}}$ and ${\boldsymbol{\lambda}}^{\mathrm{r}}$ from~\eqref{eq:ocp}, where ${\mathbf{y}}^{\mathrm{r}}$ is used as an auxiliary reference.
Our analysis will proceed as follows. We will first discuss an ideal case in which the terminal conditions are constructed based on ${\mathbf{y}}^{\mathrm{r}}$. By exploiting ideas from economic MPC we will prove that asymptotic stability can be obtained in that case. Since our objective is to avoid using any information on ${\mathbf{y}}^{\mathrm{r}}$, we will then turn to the realistic MPC formulation~\eqref{eq:nmpc}, and we will prove ISS.
\subsection{Ideal MPC and Asymptotic Stability} \label{sec:economic_mpc}
Our analysis builds on tools that are used in the stability analysis of economic MPC schemes. The interested reader is referred to the following most relevant publications related to our analysis~\cite{Diehl2011,Amrit2011a,Zanon2018a,Faulwasser2018}.
Economic and tracking MPC schemes differ in the cost function, which satisfies
\begin{align}
\begin{split}
\label{eq:tracking_cost}
q_\r({\mathbf{x}}^{\mathrm{r}}_k,\u^{\mathrm{r}}_k,t_k) =0,\ &q_\r({\mathbf{x}}_k,\u_k,t_k) >0,\\
&\forall \ {\mathbf{x}}_k\neq{}{\x}^{\mathrm{r}}_k,\ \u_k\neq{\u}^{\mathrm{r}}_k,
\end{split}
\end{align}
in tracking schemes but not in economic ones. Note that~\eqref{eq:tracking_cost} can only hold if $\r={\mathbf{y}}^{\mathrm{r}}$, that is, if Assumption~\ref{a:rec_ref} holds.
Consequently, even if the cost is positive-definite, any MPC scheme formulated with an infeasible reference $\r$ is an economic MPC.
We refer to~\cite{Zanon2018a,Faulwasser2018} for a detailed discussion on the topic.
On the contrary, if ${\mathbf{y}}^{\mathrm{r}}$ is used as reference, we obtain the tracking stage cost $q_{{\mathbf{y}}^{\mathrm{r}}}$. Since precomputing a feasible reference ${\mathbf{y}}^{\mathrm{r}}$ can be impractical or involved, we focus next on the case of \emph{infeasible references}.
In order to construct a tracking cost from the economic one, we use the Lagrange multipliers ${\boldsymbol{\lambda}}^{\mathrm{r}}$ of the OCP~\eqref{eq:ocp}, defined above, to construct a \emph{rotated} problem, which has the same constraints as the original MPC problem~\eqref{eq:nmpc} and the following \emph{rotated stage and terminal costs}
\begin{align*}
&\bar q_\r(\xb,\ub,t_n):=q_\r(\xb,\ub,t_n)-q_\r({\mathbf{x}}^{\mathrm{r}}_n,\u^{\mathrm{r}}_n,t_n)\\
&\hspace{1em}+ {\boldsymbol{\lambda}}_n^{\mathrm{r}\top}(\xb[n][k]-{\mathbf{x}}^{\mathrm{r}}_n)- {\boldsymbol{\lambda}}^{{\mathrm{r}}\top}_{n+1} (f_n(\xb[n][k],\ub[n][k])-f_n({\mathbf{x}}^{\mathrm{r}}_n,\u^{\mathrm{r}}_n)), \\
&\bar{p}_\r(\xb,t_n):= p_\r(\xb,t_n)-p_\r({\mathbf{x}}_{n}^{\mathrm{r}},t_n)+{\boldsymbol{\lambda}}^{{\mathrm{r}}\top}_{n}(\xb-{\mathbf{x}}^{\mathrm{r}}_{n}).
\end{align*}
As we prove in Lemma~\ref{lem:rot_ocp} below, adopting the rotated stage cost $\bar q_\r$ and terminal cost $\bar p_\r$ in the OCP~\eqref{eq:ocp} does not change its primal solution. This property of the rotated costs will be exploited next in the formulation of the \emph{ideal} MPC problem.
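To make the cancellation mechanism behind this claim explicit, note that along any trajectory satisfying the dynamics~\eqref{eq:nmpcDynamics}, i.e., $\xb[n+1]=f_n(\xb,\ub)$, the multiplier terms in $\bar q_\r$ telescope (the following display is a direct computation, shown here for intuition, e.g., over the MPC horizon):
\begin{align*}
\sum_{n=k}^{k+N-1}&\Big[{\boldsymbol{\lambda}}_n^{{\mathrm{r}}\top}(\xb-{\mathbf{x}}^{\mathrm{r}}_n)-{\boldsymbol{\lambda}}^{{\mathrm{r}}\top}_{n+1}\big(\xb[n+1]-{\mathbf{x}}^{\mathrm{r}}_{n+1}\big)\Big]\\
&={\boldsymbol{\lambda}}_k^{{\mathrm{r}}\top}(\xb[k]-{\mathbf{x}}^{\mathrm{r}}_k)-{\boldsymbol{\lambda}}^{{\mathrm{r}}\top}_{k+N}\big(\xb[k+N]-{\mathbf{x}}^{\mathrm{r}}_{k+N}\big),
\end{align*}
while the leftover terms ${\boldsymbol{\lambda}}^{{\mathrm{r}}\top}_{n+1}\big({\mathbf{x}}^{\mathrm{r}}_{n+1}-f_n({\mathbf{x}}^{\mathrm{r}}_n,\u^{\mathrm{r}}_n)\big)$ depend only on the reference and are therefore constant. The first boundary term is fixed by the initial condition, and the second one is cancelled by the multiplier term added in $\bar p_\r$.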
\begin{Lemma}
\label{lem:rot_ocp}
If OCP~\eqref{eq:ocp} is formulated using the rotated cost instead of the original one, then the Second Order Sufficient optimality Conditions (SOSC) are satisfied~\cite{Nocedal2006}, and the following claims hold:
\begin{enumerate}
\item[i)] the primal solution is unchanged;
\item[ii)] the rotated cost penalizes deviations from the optimal solution of Problem~\eqref{eq:ocp}, i.e.,
\begin{align*}
\bar q_\r({\mathbf{x}}_n^{\mathrm{r}},\u_n^{\mathrm{r}},t_n) =0,\ \bar q_\r({\mathbf{x}}_n,\u_n,t_n)>0,
\end{align*}
for all $({\mathbf{x}}_n,\u_n) \neq ({\x}_n^{\mathrm{r}},{\u}_n^{\mathrm{r}})$ satisfying $h({\mathbf{x}}_n,\u_n) \leq 0$.
\end{enumerate}
\end{Lemma}
\begin{proof}
First, we prove that if Problem~\eqref{eq:ocp} is formulated using stage cost $\bar q_\r$ and terminal cost $\bar p_\r$ instead of $q_\r$ and $p_\r$, the primal solution remains unchanged.
This is a known result from the literature on economic MPC and is based on the observation that all terms involving ${\boldsymbol{\lambda}}^\mathrm{r}$ in the rotated cost form a telescopic sum and cancel out, such that only ${{\boldsymbol{\lambda}}_0^\mathrm{r}}^\top ({\boldsymbol{\xi}}_0-{\mathbf{x}}_0^\mathrm{r})$ remains. Since the initial state is fixed, the cost only differs by a constant term and the primal solution is unchanged. The cost $\bar q_\r$ being nonnegative is a consequence of the fact that the stage cost Hessian is positive definite by Assumption \ref{a:cont}, the system dynamics are LTV, and the Lagrange multipliers $\bar {\boldsymbol{\lambda}}$ associated with Problem~\eqref{eq:ocp} using cost $\bar q_\r$ are $0$.
To prove the second claim, we define the Lagrangian of the rotated problem as
\begin{align*}
\mathcal{\bar L}^\mathrm{O}({\boldsymbol{\xi}}, {\boldsymbol{\nu}}, \bar {\boldsymbol{\lambda}},\bar {\boldsymbol{\mu}},\mathbf{t})
= \ & \bar{{\boldsymbol{\lambda}}}_0^\top ({\boldsymbol{\xi}}_0 - {\mathbf{x}}_{0}) + \bar p_\r ({\boldsymbol{\xi}}_M,t_M)\\
&\hspace{-2em}+\lim_{M\rightarrow\infty}\sum_{n=0}^{M-1}
\bar{q}_\mathrm{\r}({\boldsymbol{\xi}}_n,{\boldsymbol{\nu}}_n,t_n) + \bar {\boldsymbol{\mu}}_n^\top h({\boldsymbol{\xi}}_n,{\boldsymbol{\nu}}_n)\\
&\hspace{-2em}+\lim_{M\rightarrow\infty}\sum_{n=0}^{M-1} \bar{{\boldsymbol{\lambda}}}_{n+1}^\top ( {\boldsymbol{\xi}}_{n+1} - f_n({\boldsymbol{\xi}}_n,{\boldsymbol{\nu}}_n) ).
\end{align*}
For compactness we denote next $\nabla_n:=\nabla_{({\boldsymbol{\xi}}_n,{\boldsymbol{\nu}}_n)}$. Since by construction $\nabla_n \bar q_\mathrm{\r}=\nabla_n \mathcal{L}^\mathrm{O} - \nabla_n {\boldsymbol{\mu}}_n^{{\mathrm{r}}\top} h $, we obtain
\begin{align*}
\nabla_n \mathcal{\bar L}^\mathrm{O} &= \nabla_n \bar q_\mathrm{\r} + \matr{c}{\bar {\boldsymbol{\lambda}}_n \\ 0} - \nabla_n \bar {\boldsymbol{\lambda}}_{n+1}^\top f_n + \nabla_n \bar {\boldsymbol{\mu}}_n^\top h \\
&\hspace{-1.2em}= \nabla_n \mathcal{L}^\mathrm{O} + \matr{c}{\bar {\boldsymbol{\lambda}}_{n} \\ 0} - \nabla_n \bar {\boldsymbol{\lambda}}_{n+1}^\top f_n + \nabla_n (\bar {\boldsymbol{\mu}}_n-{\boldsymbol{\mu}}_n^\mathrm{r})^\top h.
\end{align*}
Therefore, the KKT conditions of the rotated problem are solved by the same primal variables as the original problem and $\bar {\boldsymbol{\mu}}_n = {\boldsymbol{\mu}}_n^\mathrm{r}$, $\bar {\boldsymbol{\lambda}}_n=0$. With similar steps we show that $\bar{{\boldsymbol{\lambda}}}_M=0$, since $\nabla_M\bar{p}_\r=\nabla_M \mathcal{L}^\mathrm{O}$.
Because the system dynamics are LTV and the stage cost quadratic, we have that
$\nabla^2_n \bar q_\mathrm{\r} = \nabla^2_n q_\mathrm{\r}\succ0$.
Moreover, since the solution satisfies the SOSC,
we directly have that $\bar q_\mathrm{\r}({\mathbf{x}}_n^\mathrm{r},\u_n^\mathrm{r},t_n) =0$ and $\bar q_\mathrm{\r}({\boldsymbol{\xi}}_n,{\boldsymbol{\nu}}_n,t_n) > 0$ for all $({\boldsymbol{\xi}}_n,{\boldsymbol{\nu}}_n)\neq({\mathbf{x}}_n^\mathrm{r},\u_n^\mathrm{r})$ s.t. $h({\boldsymbol{\xi}}_n,{\boldsymbol{\nu}}_n) \leq 0$.
\end{proof}
\begin{Remark}
\label{rem:nl_sys}
The only reason that limits our result to LTV systems is that this entails $\nabla^2_n \bar q_\mathrm{\r} = \nabla^2_n q_\mathrm{\r}\succ0$. It seems plausible that this limitation could be overcome by assuming that OCP~\eqref{eq:ocp} satisfies the SOSC for all initial states at all times. However, because further technicalities would be necessary to obtain the proof, we leave this investigation for future research.
\end{Remark}
\begin{Corollary} The rotated value function of OCP~\eqref{eq:ocp}, i.e.,
\begin{align*}
\bar V^\mathrm{O}({\mathbf{x}}_k,t_k) &=\ V^\mathrm{O}({\mathbf{x}}_k,t_k) + {\boldsymbol{\lambda}}^{{\mathrm{r}}^\top}_k ({\mathbf{x}}_k-{\mathbf{x}}^{\mathrm{r}}_k)\\
&-\lim_{M\rightarrow\infty}\sum_{n=k}^{k+M-1}q_\r({\mathbf{x}}^{\mathrm{r}}_n,\u^{\mathrm{r}}_n,t_n)-p_\r({\mathbf{x}}^{\mathrm{r}}_{k+M},t_{k+M}),
\end{align*}
is positive definite, and its minimum is $\bar V^\mathrm{O}({\x}_k^\mathrm{r},t_k)=0$.
\end{Corollary}
\begin{proof}
We note from the proof of Lemma~\ref{lem:rot_ocp} that the rotated stage and terminal costs are positive definite and that they are zero at the feasible reference $({\mathbf{x}}^{\mathrm{r}}_n,\u^{\mathrm{r}}_n)$; hence, the rotated value function is also positive definite, and zero at ${\mathbf{x}}^{\mathrm{r}}_k$.
\end{proof}
While Proposition~\ref{prop:stab_feas} proves the stability of system~\eqref{eq:sys} in closed-loop with the solution of~\eqref{eq:nmpc} under Assumptions~\ref{a:rec_ref} and~\ref{a:terminal}, in Theorem~\ref{thm:as_stab_0} we will prove stability in case the reference trajectory does not satisfy Assumption~\ref{a:rec_ref}. The stability proof in Theorem~\ref{thm:as_stab_0} builds on the following \emph{ideal} formulation
\begin{subequations}
\label{eq:ideal_nmpc}
\begin{align}
\begin{split}V^\mathrm{i}({\mathbf{x}}_k,t_k) = \min_{{\x},{\u}}&\sum_{n=k}^{k+N-1} q_\r(\xb,\ub,t_n) \\
&\hspace{2em}+p_{\tilde{\mathbf{y}}^{\mathrm{r}}}(\xb[k+N],t_{k+N})
\end{split} \\
\mathrm{s.t.} \ \ &\eqref{eq:nmpcState}-\eqref{eq:nmpcInequality_known}, \ \xb[k+N] \in\mathcal{X}^\mathrm{f}_{{\mathbf{y}}^{\mathrm{r}}}(t_{k+N}),\label{eq:ideal_nmpc_terminal}
\end{align}
\end{subequations}
where
\begin{align}\label{eq:minimizer_tilde_yr}
\tilde{\mathbf{y}}^{\mathrm{r}}_k &:= \arg\min_{{\mathbf{x}}} p_{{\mathbf{y}}^{\mathrm{r}}}({\mathbf{x}},t_k)-{\boldsymbol{\lambda}}_k^{{\mathrm{r}}\top}({\mathbf{x}}-{\mathbf{x}}^{\mathrm{r}}_k).
\end{align}
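Since the terminal cost~\eqref{eq:terminal_cost} is quadratic, the minimizer in~\eqref{eq:minimizer_tilde_yr} admits a closed form, which we report for intuition: setting the gradient to zero gives $2P({\mathbf{x}}-{\mathbf{x}}^{\mathrm{r}}_k)-{\boldsymbol{\lambda}}^{\mathrm{r}}_k=0$, i.e.,
\begin{equation*}
\tilde{\mathbf{x}}^{\mathrm{r}}_k = {\mathbf{x}}^{\mathrm{r}}_k + \tfrac{1}{2}P^{-1}{\boldsymbol{\lambda}}^{\mathrm{r}}_k,
\end{equation*}
so that $\tilde{\mathbf{y}}^{\mathrm{r}}$ is the feasible reference shifted by a multiplier-dependent offset, which vanishes whenever the multipliers of the dynamics are zero, e.g., when the reference $\r$ is itself feasible.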
The Problems~\eqref{eq:nmpc} and~\eqref{eq:ideal_nmpc} only differ in the terminal cost and constraint: in~\eqref{eq:ideal_nmpc} they are written with respect to the solution ${\mathbf{y}}^{\mathrm{r}}$ and ${\boldsymbol{\lambda}}^{\mathrm{r}}$ of~\eqref{eq:ocp} rather than~$\r$. In order to distinguish the solutions of~\eqref{eq:nmpc} and~\eqref{eq:ideal_nmpc}, we denote the solution of~\eqref{eq:nmpc} by ${\mathbf{x}}^\star$, $\u^\star$, and the solution of~\eqref{eq:ideal_nmpc} by ${\mathbf{x}}^\mathrm{i}$, $\u^\mathrm{i}$. In addition, when the stage cost $\bar{q}_\r$ and terminal cost $\bar p_{{\tilde\y}^{\mathrm{r}}}$ are used, we obtain the corresponding \emph{rotated} formulation of~\eqref{eq:ideal_nmpc}
\begin{align}
\label{eq:ideal_rot_nmpc}
\begin{split}\bar V^\mathrm{i}({\mathbf{x}}_k,t_k) = \min_{{\mathbf{x}},\u} &\sum_{n=k}^{k+N-1} \bar q_\r(\xb,\ub,t_n) \\
&\hspace{2em}+ \bar p_{\tilde{\mathbf{y}}^\mathrm{r}}(\xb[k+N],t_{k+N})
\end{split} \\
\mathrm{s.t.}\hspace{0em} \ \ &\eqref{eq:nmpcState}-\eqref{eq:nmpcInequality_known}, \ \xb[k+N] \in\mathcal{X}^\mathrm{f}_{{\mathbf{y}}^\mathrm{r}}(t_{k+N}),\nonumber
\end{align}
where the rotated terminal cost is defined as
\begin{align}\begin{split}\label{eq:rot_tilde_terminal_cost}
\bar{p}_{{\tilde\y}^{\mathrm{r}}}(\xb,t_n)&:= p_{{\tilde\y}^{\mathrm{r}}}(\xb,t_n)-p_{{\tilde\y}^{\mathrm{r}}}({\mathbf{x}}_{n}^{\mathrm{r}},t_n)\\
&+{\boldsymbol{\lambda}}^{{\mathrm{r}}\top}_{n}(\xb-{\mathbf{x}}^{\mathrm{r}}_{n}).\end{split}
\end{align}
Note that by Lemma~\ref{lem:rot_ocp}, the rotated cost $\bar q_\r$ penalizes deviations from ${\mathbf{y}}^{\mathrm{r}}$, i.e., the solution to \eqref{eq:ocp}. We will prove next that $\bar p_{{\tilde\y}^\r}$ also penalizes deviations from ${\mathbf{y}}^\r$, implying that \emph{the rotated ideal MPC formulation is of tracking type}.
\begin{Lemma}
\label{lem:rot_mpc}
Consider the \emph{rotated} \emph{ideal} MPC Problem~\eqref{eq:ideal_rot_nmpc}, formulated using the rotated costs $\bar q_\r$ and $\bar{p}_{{\tilde\y}^\r}$, and the terminal set $\mathcal{X}_{{\mathbf{y}}^\r}^\mathrm{f}$. Then, the primal solution of~\eqref{eq:ideal_rot_nmpc} coincides with the primal solution of the ideal MPC Problem~\eqref{eq:ideal_nmpc}.
\end{Lemma}
\begin{proof}
From~\eqref{eq:minimizer_tilde_yr} and~\eqref{eq:rot_tilde_terminal_cost} we have that $\bar p_{{\tilde\y}^\r}({\mathbf{x}}_k^{\mathrm{r}},t_k) =0$ and that
$\nabla \bar p_{{\tilde\y}^{\mathrm{r}}}({\mathbf{x}}^{\mathrm{r}}_k,t_k) = \nabla p_{{\tilde\y}^{\mathrm{r}}}({\mathbf{x}}^{\mathrm{r}}_k,t_k) + \nabla p_{{\mathbf{y}}^{\mathrm{r}}}({\tilde\y}^{\mathrm{r}}_k,t_k) = 0$, since the terminal costs are quadratic~\eqref{eq:terminal_cost}. The proof then follows along the same lines as Lemma~\ref{lem:rot_ocp} and~\cite{Diehl2011,Amrit2011a}.
\end{proof}
In order to prove Theorem~\ref{thm:as_stab_0}, we need that the terminal conditions of the rotated ideal formulation~\eqref{eq:ideal_rot_nmpc} satisfy Assumption~\ref{a:terminal}. To that end, we introduce the following assumption.
\begin{Assumption}\label{a:terminal_for_rotated}
There exists a parametric stabilizing terminal set $\mathcal{X}^\mathrm{f}_{{\mathbf{y}}^{\mathrm{r}}}(t)$ and a terminal control law $\kappa^\mathrm{f}_{{\mathbf{y}}^{\mathrm{r}}}({\mathbf{x}},t)$ yielding:
\begin{align*}
\mathbf{x}_{+}^\kappa=f_k(\mathbf{x}_k,\kappa^\mathrm{f}_{{\mathbf{y}}^{\mathrm{r}}}({\mathbf{x}}_k,t)), && t_+ = t_k + t_\mathrm{s},
\end{align*}
so that
$\bar p_{{\tilde\y}^{\mathrm{r}}}({\mathbf{x}}_{+}^\kappa,t_{+})- \bar p_{{\tilde\y}^{\mathrm{r}}}({\mathbf{x}}_k,t_k) \leq{} - \bar q_\r({\mathbf{x}}_k,\kappa^\mathrm{f}_{{\mathbf{y}}^{\mathrm{r}}}({\mathbf{x}}_k,t_k),t_k)$, ${\mathbf{x}}_k\in\mathcal{X}^\mathrm{f}_{{\mathbf{y}}^{\mathrm{r}}}(t_k)\Rightarrow {\mathbf{x}}^\kappa_{+}\in\mathcal{X}^\mathrm{f}_{{\mathbf{y}}^{\mathrm{r}}}(t_{+})$, and $h({\mathbf{x}}_k,\kappa^\mathrm{f}_{{\mathbf{y}}^{\mathrm{r}}}({\mathbf{x}}_k,t_k)) \leq{} 0$ hold for all $k\in\mathbb{I}_0^\infty$.
\end{Assumption}
Note that Assumption~\ref{a:terminal_for_rotated} only differs from Assumption~\ref{a:terminal} by the fact that the set and control law are centered on ${\mathbf{y}}^{\mathrm{r}}$ rather than $\r$, and that the costs are rotated.
\begin{Theorem}
\label{thm:as_stab_0}
Suppose that Assumptions \ref{a:cont} and~\ref{a:terminal_for_rotated} hold, and that Problem~\eqref{eq:ocp} is feasible for initial state $({\mathbf{x}}_k,t_k)$. Then, system~\eqref{eq:sys} in closed-loop with the ideal MPC~\eqref{eq:ideal_nmpc} is asymptotically stabilized to the optimal trajectory ${\x}^{\mathrm{r}}$.
\end{Theorem}
\begin{proof}
By Lemma~\ref{lem:rot_mpc}, the rotated ideal MPC problem has positive-definite stage and terminal costs penalizing deviations from the optimal trajectory ${\mathbf{y}}^{\mathrm{r}}$. Hence, the rotated ideal MPC problem is of tracking type.
Assumption~\ref{a:cont} directly entails a lower bounding by a $\mathcal{K}_\infty$ function, and can also be used to prove an upper bound~\cite[Theorem 2.19]{rawlings2009model}, such that the following holds
\begin{equation*}
\alpha_1(\|{\mathbf{x}}_k-{\mathbf{x}}^{\mathrm{r}}_k\|) \leq{} \bar V^\mathrm{i}({\mathbf{x}}_k,t_k)\leq{} \alpha_2(\|{\mathbf{x}}_k-{\mathbf{x}}^{\mathrm{r}}_k\|),
\end{equation*}
where $\alpha_1,\alpha_2\in\mathcal{K}_\infty$. Then, solving Problem~\eqref{eq:ideal_rot_nmpc}, we obtain $\bar V^{\mathrm{i}}({\mathbf{x}}_k,t_k)$ and the optimal trajectories $\{\xb[k]^\mathrm{i},...,\xb[k+N]^\mathrm{i}\}$ and $\{\ub[k]^\mathrm{i},...,\ub[k+N-1]^\mathrm{i}\}$. Since ${\mathbf{y}}^{\mathrm{r}}$ satisfies Assumption~\ref{a:rec_ref}, by relying on Assumption~\ref{a:terminal_for_rotated} and using the terminal control law $\kappa^\mathrm{f}_{{\mathbf{y}}^{\mathrm{r}}}$, we can construct the feasible sub-optimal trajectories $\{\xb[k+1]^\mathrm{i},...,\xb[k+N]^\mathrm{i},f_{k+N}(\xb[k+N]^\mathrm{i},\kappa^\mathrm{f}_{{\mathbf{y}}^\r})\}$ and $\{\ub[k+1]^\mathrm{i},...,\ub[k+N-1]^\mathrm{i},\kappa^\mathrm{f}_{{\mathbf{y}}^{\mathrm{r}}}\}$ at time $k+1$, which can be used to derive the decrease condition following standard arguments~\cite{rawlings2009model,borrelli2017predictive}:
$$\bar{V}^\mathrm{i}({\mathbf{x}}_{k+1},t_{k+1})-\bar{V}^\mathrm{i}({\mathbf{x}}_k,t_k)\leq{}-\alpha_3(\|{\mathbf{x}}_k-{\mathbf{x}}^{\mathrm{r}}_k\|).$$
This entails that the \emph{rotated} \emph{ideal} value function $\bar{V}^\mathrm{i}({\mathbf{x}}_k,t_k)$ is a Lyapunov function, and that the closed-loop system is asymptotically stabilized to ${\mathbf{x}}^{\mathrm{r}}$.
Finally, using Lemma~\ref{lem:rot_mpc} we establish asymptotic stability also for the \emph{ideal} MPC scheme~\eqref{eq:ideal_nmpc}, since the primal solutions of the two problems coincide.
\end{proof}
Theorem~\ref{thm:as_stab_0} establishes the first step towards the desired result:
an MPC problem can be formulated using an \emph{infeasible reference}, which stabilizes system~\eqref{eq:sys} to the optimal trajectory of Problem~\eqref{eq:ocp} provided that the appropriate terminal conditions are used.
At this stage, the main issue is that expressing the terminal constraint set as a positively invariant set containing ${\x}^{\mathrm{r}}$, and designing the terminal control law stabilizing the system to ${\x}^{\mathrm{r}}$, requires knowing the feasible reference trajectory~${\x}^{\mathrm{r}}$, i.e., solving Problem~\eqref{eq:ocp}. Since solving Problem~\eqref{eq:ocp} is not practical, we show in the next section that sub-optimal terminal conditions can be used instead, and we prove ISS for the resulting closed-loop system.
\subsection{Practical MPC and ISS}\label{sec:iss}
In this subsection, we analyze the case in which the terminal conditions are not enforced based on the feasible reference trajectory, but
rather based on an \emph{approximately feasible} reference (see Assumption~\ref{a:approx_feas}).
Since in that case asymptotic stability cannot be proven, we will prove ISS for the closed-loop system, where the input is some terminal reference ${\mathbf{y}}^{\mathrm{f}}$. In particular, we are interested in the practical approach ${\mathbf{y}}^{\mathrm{f}}=\r(t_{k+N})$ and the ideal setting ${\mathbf{y}}^{\mathrm{f}}={\mathbf{y}}^{\mathrm{r}}(t_{k+N})$.
To that end, we define the following closed-loop dynamics
\begin{align}\label{eq:iss_dynamics}
{\mathbf{x}}_{k+1}({\mathbf{y}}^\mathrm{f}) = f_k({\mathbf{x}}_{k},\u_\mathrm{MPC}({\mathbf{x}}_{k},{\mathbf{y}}^\mathrm{f})) = \bar f_k({\mathbf{x}}_{k},{\mathbf{y}}^\mathrm{f}),
\end{align}
where we stress that~$\u_\mathrm{MPC}$ is obtained as~$\ub[k]^\star$ solving problem~\eqref{eq:nmpc} (in case one uses ${\mathbf{y}}^\mathrm{f}=\r$ and terminal cost $p_\r$); or as~$\ub[k]^\mathrm{i}$ solving the ideal problem~\eqref{eq:ideal_nmpc} (in case one uses ${\mathbf{y}}^\mathrm{f}={\mathbf{y}}^{\mathrm{r}}$ and terminal cost $p_{\tilde{\mathbf{y}}^\r}$). In the following we will use the notation ${\mathbf{x}}_{k+1}({\mathbf{y}}^\mathrm{f})$ to stress that the terminal reference ${\mathbf{y}}^\mathrm{f}$ is used in the computation of the control yielding the next state. Additionally, we define
the following quantities
\begin{align*}
\bar J_{{\tilde\y}^\mathrm{r}}^{\star}({\mathbf{x}}_k,t_k) &:= \sum_{n=k}^{k+N-1} \bar q_\r(\xb^\star,\ub^\star,t_n) + \bar p_{{\tilde\y}^\mathrm{r}}(\xb[k+N]^\star,t_{k+N}), \\
\bar J_\r^{\mathrm{i}}({\mathbf{x}}_k,t_k) &:= \sum_{n=k}^{k+N-1} \bar q_\r(\xb^\mathrm{i},\ub^\mathrm{i},t_n) + \bar p_\r(\xb[k+N]^\mathrm{i},t_{k+N}),
\end{align*}
and we recall that
\begin{align*}
\bar V({\mathbf{x}}_k,t_k) &= \sum_{n=k}^{k+N-1} \bar q_\r(\xb^\star,\ub^\star,t_n) + \bar p_\r(\xb[k+N]^\star,t_{k+N}),\\
\bar V^\mathrm{i}({\mathbf{x}}_k,t_k) &= \sum_{n=k}^{k+N-1} \bar q_\r(\xb^\mathrm{i},\ub^\mathrm{i},t_n) + \bar p _{{\tilde\y}^\mathrm{r}}(\xb[k+N]^\mathrm{i},t_{k+N}).
\end{align*}
Before formulating the stability result in the next theorem, we need to introduce an additional assumption on the reference infeasibility.
\begin{Assumption}[Approximate feasibility of the reference]
\label{a:approx_feas}
The reference ${\mathbf{y}}^{\mathrm{f}}$ satisfies the constraints \eqref{eq:nmpcInequality_known}, i.e., $h({\x}^{\mathrm{f}}_n,{\u}^{\mathrm{f}}_n) \leq{} 0$, $n\in \mathbb{I}_k^{k+N-1}$, for all $k\in\mathbb{I}_0^\infty$. Additionally, recursive feasibility holds for both Problems~\eqref{eq:nmpc} and~\eqref{eq:ideal_nmpc} when the system is controlled in closed-loop using the feedback from Problem~\eqref{eq:nmpc}.
\end{Assumption}
\begin{Remark}
Assumption~\ref{a:approx_feas} essentially only requires that the reference used in the definition of the terminal conditions (constraint and cost) is feasible with respect to the system constraints, and not the system dynamics. However, recursive feasibility holds if the reference satisfies, e.g., $\|{\mathbf{x}}_{n+1}^\mathrm{f}-f_n({\mathbf{x}}_n^\mathrm{f},\u_n^\mathrm{f})\|\leq{}\epsilon$ for some small $\epsilon$, i.e., if the reference satisfies the system dynamics approximately.
Note that, if $\epsilon=0$, then Assumption~\ref{a:rec_ref} is satisfied and Assumption~\ref{a:approx_feas} is not needed anymore. Finally, the infeasibility due to $\epsilon\neq0$ could be formally accounted for so as to satisfy Assumption~\ref{a:approx_feas} by taking a robust MPC approach, see, e.g.,~\cite{Mayne2005,Chisci2001}.
\end{Remark}
From a practical standpoint, Assumption~\ref{a:approx_feas} sets a rather mild requirement. In fact, it is not uncommon to use references that are infeasible for the sake of simplicity, or that satisfy approximate system dynamics capturing only the most relevant dynamics of the system (keeping $\epsilon$ small).
{We are now ready to state the main result of the paper.}
\begin{Theorem}\label{thm:iss}
Suppose that Problem~\eqref{eq:ocp} is feasible and Assumptions~\ref{a:cont} and~\ref{a:terminal} hold for the reference ${\mathbf{y}}^{\mathrm{r}}$ with costs $\bar{q}_\r$ and $\bar{p}_{{\tilde\y}^\r}$ and terminal set $\mathcal{X}^\mathrm{f}_{{\mathbf{y}}^\r}$. Suppose moreover that Problems~\eqref{eq:nmpc} and~\eqref{eq:ideal_nmpc} are feasible at time $k$ with initial state $({\mathbf{x}}_k,t_k)$, and that the reference ${\mathbf{y}}^\mathrm{f}$, with terminal set $\mathcal{X}^\mathrm{f}_{{\mathbf{y}}^\mathrm{f}}$, satisfies Assumption~\ref{a:approx_feas}. Then, system~\eqref{eq:iss_dynamics} obtained from~\eqref{eq:sys} in closed-loop with the MPC formulation~\eqref{eq:nmpc} is ISS.
\end{Theorem}
\begin{proof}
We prove the result using the value function $\bar V^\mathrm{i}({\mathbf{x}}_k,t_k)$ of the rotated ideal Problem~\eqref{eq:ideal_rot_nmpc} as an ISS-Lyapunov function candidate \cite{jiang2001input}. From the prior analysis in Theorem \ref{thm:as_stab_0} we know that Assumption~\ref{a:rec_ref} holds for ${\mathbf{y}}^{\mathrm{r}}$ since Problem~\eqref{eq:ocp} is feasible, and that $\bar V^\mathrm{i}({\mathbf{x}}_k,t_k)$ is a Lyapunov function {when the \emph{ideal} terminal conditions} ${\mathbf{y}}^\mathrm{f}={\mathbf{y}}^{\mathrm{r}}$ are used. Hence, when {we apply the ideal control input $\ub[k][k]^\mathrm{i}$, i.e., use \eqref{eq:iss_dynamics} to obtain the next state ${\mathbf{x}}_{k+1}({\mathbf{y}}^{\mathrm{r}})=\bar{f}_k({\mathbf{x}}_k,{\mathbf{y}}^{\mathrm{r}})$}, we have the following relations
\begin{align*}
\alpha_1(\| {\mathbf{x}}_k-{\mathbf{x}}^{\mathrm{r}}_k \|) \leq \bar V^\mathrm{i}({\mathbf{x}}_k,t_k) \leq \alpha_2(\| {\mathbf{x}}_k-{\mathbf{x}}^{\mathrm{r}}_k \|),\\
\bar V^\mathrm{i}({\mathbf{x}}_{k+1}({\mathbf{y}}^{\mathrm{r}}),t_{k+1}) - \bar V^\mathrm{i}({\mathbf{x}}_{k},t_k) \leq -\alpha_3(\| {\mathbf{x}}_k-{\mathbf{x}}^{\mathrm{r}}_k \|),
\end{align*}
with $\alpha_i\in \mathcal{K}_\infty$, $i=1,2,3$.
We are left with proving ISS, i.e., that there exists $\sigma\in\mathcal{K}$ such that, when the reference ${\mathbf{y}}^\mathrm{f}$ is treated as an external input and the next state is given by ${\mathbf{x}}_{k+1}({\mathbf{y}}^\mathrm{f})=\bar f_k({\mathbf{x}}_k,{\mathbf{y}}^\mathrm{f})$, the following holds
\begin{align}\begin{split}
\label{eq:iss_decrease}
\bar V^\mathrm{i}({{\mathbf{x}}_{k+1}({\mathbf{y}}^\mathrm{f})},t_{k+1})- \bar V^\mathrm{i}({\mathbf{x}}_{k},t_k)\leq&\sigma( \| {\mathbf{y}}^\mathrm{f}-{\mathbf{y}}^{\mathrm{r}} \| )\\&-\alpha_3(\| {\mathbf{x}}_k-{\x}^{\mathrm{r}}_k \|).
\end{split}\end{align}
In order to bound $\bar V^\mathrm{i}({{\mathbf{x}}_{k+1}({\mathbf{y}}^\mathrm{f})},t_{k+1}) - \bar V^\mathrm{i}({\mathbf{x}}_{k},t_{k})$, we first derive an upper bound on $\bar J_\r^\mathrm{i}$ which depends on $\bar V^\mathrm{i}$.
To that end, we observe that the rotated cost of the ideal trajectory $\xb^\mathrm{i}$, $\ub^\mathrm{i}$ satisfies
\begin{align*}
\bar J_\r^\mathrm{i}({\mathbf{x}}_{k},t_k)&= \bar V^\mathrm{i}({\mathbf{x}}_{k},t_k)-\bar p_{{\tilde\y}^\mathrm{r}}(\xb[k+N]^\mathrm{i},t_{k+N})\\
&+\bar p_\r(\xb[k+N]^\mathrm{i},t_{k+N}).
\end{align*}
Defining
\begin{align*}
\phi({\mathbf{y}}^\mathrm{f})&:=\bar p_{{\mathbf{y}}^\mathrm{f}}(\xb[k+N]^\mathrm{i},t_{k+N})-\bar p_{{\tilde\y}^\mathrm{r}}(\xb[k+N]^\mathrm{i},t_{k+N}),
\end{align*}
there exists a $\sigma_1 \in \mathcal{K}$ such that $\phi({\mathbf{y}}^\mathrm{f}) \leq{} \sigma_1(\|{\mathbf{y}}^\mathrm{f}-{\mathbf{y}}^\r\|)$
since, by~\eqref{eq:terminal_cost}, $\phi({\mathbf{y}}^\mathrm{f})$ is a continuous function of ${\mathbf{y}}^\mathrm{f}$ and $\phi({\mathbf{y}}^\mathrm{r})=0$.
Then, the following upper bound is obtained
\begin{align*}
\bar J_\r^\mathrm{i}({\mathbf{x}}_{k},t_k)&\leq \bar V^\mathrm{i}({\mathbf{x}}_{k},t_k) + \sigma_1(\| {\mathbf{y}}^\mathrm{f}-{\mathbf{y}}^\mathrm{r} \| ).
\end{align*}
Upon solving Problem~\eqref{eq:nmpc}, we obtain $\bar V({\mathbf{x}}_{k},t_k)\leq\bar J_\r^\mathrm{i}({\mathbf{x}}_{k},t_k)$. Starting from the optimal solution ${\mathbf{x}}^\star$, and $\u^\star$, we will construct an upper bound on the decrease condition. To that end, we first need to evaluate the cost of this trajectory, i.e.,
\begin{align*}\begin{split}
\bar J_{{\tilde\y}^\mathrm{r}}^{\star}({\mathbf{x}}_{k},t_k)&=\bar V({\mathbf{x}}_{k},t_k)-\bar p_\r(\xb[k+N]^\star,t_{k+N})\\
&+\bar p_{{\tilde\y}^\mathrm{r}}(\xb[k+N]^\star,t_{k+N}).
\end{split}\end{align*}
Using the same reasoning as before, there exists $\sigma_2 \in \mathcal{K}$ such that
\begin{align*}
&\bar p_{{\tilde\y}^\mathrm{r}}(\xb[k+N]^\star,t_{k+N})-\bar p_\r(\xb[k+N]^\star,t_{k+N})\\
&\hspace{5em}\leq \sigma_2(\| {\mathbf{y}}^\mathrm{f}_{k+N}-{\mathbf{y}}^\mathrm{r}_{k+N} \| ).
\end{align*}
Then, we obtain
\begin{align}\label{eq:jbar}
\begin{split}
\bar J_{{\tilde\y}^\mathrm{r}}^{\star}({\mathbf{x}}_{k},t_k) &\leq \bar V({\mathbf{x}}_{k},t_k) + \sigma_2(\| {\mathbf{y}}^\mathrm{f}_{k+N}-{\mathbf{y}}_{k+N}^{\mathrm{r}} \| ) \\
&\leq \bar J_\r^\mathrm{i}({\mathbf{x}}_{k},t_k) + \sigma_2(\| {\mathbf{y}}^\mathrm{f}_{k+N}-{\mathbf{y}}_{k+N}^{\mathrm{r}} \| ) \\
& \leq \bar V^\mathrm{i}({\mathbf{x}}_{k},t_k) + \sigma(\| {\mathbf{y}}^\mathrm{f}_{k+N}-{\mathbf{y}}_{k+N}^{\mathrm{r}} \| ),
\end{split}
\end{align}
where we defined $\sigma:=\sigma_1+\sigma_2$.
Proceeding similarly to the proof of Proposition~\ref{prop:stable}, we apply the control input $\ub[k]^\star$ from~\eqref{eq:nmpc}, i.e., ${\mathbf{y}}^\mathrm{f}=\r$, to obtain $${\mathbf{x}}_{k+1}({\mathbf{y}}^\mathrm{f})=\bar{f}_k({\mathbf{x}}_k,{\mathbf{y}}^\mathrm{f}).$$
In order to be able to apply this procedure, we first assume that the obtained initial guess is feasible for the ideal problem~\eqref{eq:ideal_nmpc} and proceed as follows.
By Assumption~\ref{a:terminal_for_rotated}, we use the terminal control law $\kappa_{{\mathbf{y}}^\r}^\mathrm{f}({\mathbf{x}},t)$ to form a guess at the next time step and upper bound the \emph{ideal} rotated value function. By optimality
\begin{align}\label{eq:iss_value_decrease}
\bar{V}^\mathrm{i}&({\mathbf{x}}_{k+1}({\mathbf{y}}^\mathrm{f}),t_{k+1}) \leq{}\sum_{n=k+1}^{k+N-1}\bar{q}_\r(\xb^\star,\ub^\star,t_n)+\bar q_\r(\xb[k+N]^\star,\kappa_{{\mathbf{y}}^\r},t_{k+N})\\
&\nonumber\hspace{4em}+\bar p_{{\tilde\y}^\r}(\xb[k+N+1]^{\star,\kappa},t_{k+N+1})\\
&\nonumber=\bar{J}_{{\tilde\y}^\r}^\star({\mathbf{x}}_k,t_k)-\bar{q}_\r(\xb[k]^\star,\ub[k]^\star,t_k)-\bar{p}_{{\tilde\y}^\r}(\xb[k+N]^\star,t_{k+N})\\
&\nonumber+\bar{p}_{{\tilde\y}^\r}(\xb[k+N+1]^{\star,\kappa},t_{k+N+1})+\bar{q}_{\r}(\xb[k+N]^\star,\kappa_{{\mathbf{y}}^\r},t_{k+N}),
\end{align}
where we used
$$\xb[k+N+1]^{\star,\kappa}\hspace{-0.2em}:= \hspace{-0.2em}f_{k+N}(\xb[k+N],\kappa_{{\mathbf{y}}^\r}),\, \kappa_{{\mathbf{y}}^\r}\hspace{-0.2em}:=\hspace{-0.2em}\kappa_{{\mathbf{y}}^\r}(\xb[k+N]^\star,t_{k+N}),$$
and assumed that $\xb[k+N+1]^{\star,\kappa}\in\mathcal{X}_{{\mathbf{y}}^\r}(t_{k+N+1})$. Again, using Assumption~\ref{a:terminal_for_rotated} we can now upper bound the terms
\begin{align*} \bar{p}_{{\tilde\y}^\r}(\xb[k+N+1]^{\star,\kappa},t_{k+N+1})-\bar{p}_{{\tilde\y}^\r}(\xb[k+N]^\star,t_{k+N})\\
+\bar{q}_\r(\xb[k+N]^\star,\kappa_{{\mathbf{y}}^\r},t_{k+N})\leq{}0,
\end{align*}
so that~\eqref{eq:iss_value_decrease} can be written as
\begin{align}
\bar{V}^\mathrm{i}({\mathbf{x}}_{k+1}({\mathbf{y}}^\mathrm{f}),t_{k+1}) &\leq{}\bar{J}_{{\tilde\y}^\r}^\star({\mathbf{x}}_k,t_k)-\bar{q}_\r(\xb[k]^\star,\ub[k]^\star,t_{k}),\\
\bar{V}^\mathrm{i}({\mathbf{x}}_{k+1}({\mathbf{y}}^\mathrm{f}),t_{k+1}) &\leq{}\bar{J}_{{\tilde\y}^\r}^\star({\mathbf{x}}_k,t_k)-\alpha_3(\|{\mathbf{x}}_k-{\mathbf{x}}^\r_k\|),\label{eq:iss_bound_decr}
\end{align}
which, combined with~\eqref{eq:jbar}, proves~\eqref{eq:iss_decrease}.
In case ${\xb[k+N+1]^{\star,\kappa}\not\in\mathcal{X}^\mathrm{f}_{{\mathbf{y}}^\mathrm{r}}(t_{k+N+1})}$,
we resort to a relaxation of the terminal constraint with an exact penalty~\cite{Scokaert1999a,Fletcher1987} in order to compute an upper bound to the cost. This relaxation has the property that the solution of the relaxed formulation coincides with the one of the non-relaxed formulation whenever it exists. Then, by construction, the cost of an infeasible trajectory is higher than that of the feasible solution.
Finally, from Assumption~\ref{a:approx_feas} we know that Problems~\eqref{eq:nmpc} and~\eqref{eq:ideal_nmpc} remain feasible at time $k+1$, so that the value functions $\bar{V}({\mathbf{x}}_{k+1}({\mathbf{y}}^\mathrm{f}),t_{k+1})$ and $\bar V^\mathrm{i}({\mathbf{x}}_{k+1}({\mathbf{y}}^\mathrm{f}),t_{k+1})$ are well defined and bounded.
\end{proof}
\begin{figure*}[ht]
\centering
\includegraphics[width=\linewidth]{iss_closed.eps}
\caption{Closed-loop simulation with initial condition $(x_1,x_2)=(-4.69,-1.62,0,0)$ and initial time $k=167$. The gray trajectories show the infeasible reference $\r=(\r^{\mathbf{x}},\r^\u)$, while the black trajectories show the optimal reference ${\mathbf{y}}^{\mathrm{r}}=({\mathbf{x}}^{\mathrm{r}},\u^{\mathrm{r}})$ obtained from Problem~\eqref{eq:ocp}. The orange trajectories show the closed-loop behavior for the practical MPC Problem~\eqref{eq:nmpc}, while the blue trajectories show the closed-loop behavior for the \emph{ideal} MPC Problem~\eqref{eq:ideal_nmpc}.}
\label{fig:mpatc_1_states}
\end{figure*}
This theorem proves that one can use an infeasible reference, at the price of not converging exactly to the (unknown) optimal trajectory from OCP~\eqref{eq:ocp}, with an inaccuracy which depends on how inaccurate the terminal reference is. It is important to remark that, as proven in~\cite{Zanon2018a,Faulwasser2018}, since the MPC formulation has the turnpike property, the effect of the terminal conditions on the closed-loop trajectory decreases as the prediction horizon increases.
\begin{Remark}
We note that it may be possible to prove similar results for general nonlinear systems if there exists a storage function such that strict dissipativity holds for the rotated cost functions~\cite{muller2014necessity}. Future research will investigate ways to extend the results of Theorems~\ref{thm:as_stab_0} and~\ref{thm:iss} to general nonlinear systems.
\end{Remark}
\section{Simulations}\label{sec:simulations}
In this section we implement the robotic example in~\cite{Faulwasser2009} to illustrate the results of Theorems~\ref{thm:as_stab_0} and \ref{thm:iss}. We will use the quadratic stage and terminal costs in \eqref{eq:stage_cost}-\eqref{eq:terminal_cost}, i.e.,
\begin{gather*}
q_\r(\xb,\ub,t_n) := \matr{c}{\xb-\rx_n\\\ub-\ru_n}^\top{}W\matr{c}{\xb-\rx_n\\\ub-\ru_n},\\
p_\r(\xb,t_{n}) := (\xb-\rx_{n})^\top{}P(\xb-\rx_{n}).
\end{gather*}
We consider the system presented in~\cite{Faulwasser2009}, i.e., an actuated planar robot with two degrees of freedom with dynamics
\begin{align}
\matr{c}{\dot{x}_1\\\dot{x}_2} &= \matr{c}{ x_2\\B^{-1}(x_1)(u-C(x_1,x_2)x_2-g(x_1))},\label{eq:robot}
\end{align}
where $x_1=(q_1,q_2)$ are the joint angles, $x_2=(\dot{q}_1,\dot{q}_2)$ the joint velocities, and $B$, $C$, and $g$ are given by
\begin{subequations}\label{eq:modelparams}
\begin{align*}
B(x_1) &:= \matr{cc}{200+50\cos(q_2) & 23.5+25\cos(q_2)\\
23.5+25\cos(q_2) & 122.5},\\
C(x_1,x_2) &:= 25\sin(q_2)\matr{cc}{\dot{q}_1 & \dot{q}_1+\dot{q}_2\\
-\dot{q}_1 & 0},\\
g(x_1) &:= \matr{c}{784.8\cos(q_1)+245.3\cos(q_1+q_2)\\
245.3\cos(q_1+q_2)},
\end{align*}
\end{subequations}
and with following constraints on the state and control
\begin{align}\label{eq:box_constr}
\|x_2\|_\infty\leq{}\tfrac{3}{2}\pi, && \|u\|_\infty\leq{}4000.
\end{align}
By transforming the control input as
$$u = C(x_1,x_2)x_2+g(x_1)+B(x_1)v,$$
system~\eqref{eq:robot} can be rewritten into a linear system
\begin{align}
\matr{c}{\dot{x}_1\\\dot{x}_2} &= \matr{c}{ x_2\\v},\label{eq:robot_linear}
\end{align}
subject to the non-linear input constraint
\begin{equation}
\|C(x_1,x_2)x_2+g(x_1)+B(x_1)v\|_\infty\leq{}4000.
\end{equation}
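As a small numerical illustration (our own sketch, with the parameter values from the model above), the physical torque corresponding to a virtual input $v$ can be computed, and the transformed input constraint checked, as follows:
\begin{verbatim}
import numpy as np

def B_mat(q):
    return np.array([[200 + 50*np.cos(q[1]), 23.5 + 25*np.cos(q[1])],
                     [23.5 + 25*np.cos(q[1]), 122.5]])

def C_mat(q, dq):
    return 25*np.sin(q[1])*np.array([[dq[0], dq[0] + dq[1]],
                                     [-dq[0], 0.0]])

def g_vec(q):
    return np.array([784.8*np.cos(q[0]) + 245.3*np.cos(q[0] + q[1]),
                     245.3*np.cos(q[0] + q[1])])

def torque(q, dq, v):
    # u = C(x1,x2) x2 + g(x1) + B(x1) v, then check ||u||_inf <= 4000
    u = C_mat(q, dq) @ dq + g_vec(q) + B_mat(q) @ v
    assert np.all(np.abs(u) <= 4000.0), "input constraint violated"
    return u
\end{verbatim}
In closed loop, the controller thus optimizes over the virtual input $v$ of the linear system and applies the torque $u$ returned by this transformation.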
Similar to~\cite{Faulwasser2009}, we use
\begin{equation}\label{eq:path}
p(\theta)=\left (\theta-\frac{\pi}{3},\,5\sin\left (0.6 \left (\theta-\frac{\pi}{3}\right )\right )\right ),
\end{equation}
with $\theta\in[-5.3,0]$ as the desired path to be tracked, and define the timing law, with $t_0=0\ \mathrm{s}$, to be given by
\begin{align*}
\theta(t_0) = -5.3,\, \dot{\theta}(t) = \frac{v_\mathrm{ref}(t) }{\left \| \nabla_\theta p(\theta(t))\right \|_2},\, v_\mathrm{ref}(t) =\left \{
\begin{array}{@{}ll@{}}
1 & \hspace{-0.5em}\text{if } \theta<0\\
0 & \hspace{-0.5em}\text{if }\theta\geq{}0
\end{array}
\right . .
\end{align*}
This predefined path evolution implies that the norm of the reference trajectory for the joint velocities will be $1\ \mathrm{rad/s}$ for all $\theta<0$ and zero at the end of the path. Hence, we use the following reference trajectories
\begin{align*}
\r^{\mathbf{x}}(t) &= \matr{cc}{p(\theta(t)) &\frac{\partial{p}}{\partial\theta}\dot{\theta}(t)}^\top\hspace{-0.3em},\
\r^\u(t) = \matr{c}{ \frac{\partial^2 p}{\partial\theta^2}\dot{\theta}^2+\frac{\partial p}{\partial \theta}\ddot{\theta}}^\top\hspace{-0.3em},
\end{align*}
which have a discontinuity at $\theta=0$.
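The reference is thus generated purely from the path geometry and the timing law. A minimal sketch of this construction follows (our own illustration, using a forward-Euler integration of $\theta$; the input reference $\r^\u$ additionally requires $\ddot{\theta}$ and the second derivative of $p$, omitted here for brevity):
\begin{verbatim}
import numpy as np

def p(th):     # path
    return np.array([th - np.pi/3, 5.0*np.sin(0.6*(th - np.pi/3))])

def dp(th):    # dp/dtheta
    return np.array([1.0, 3.0*np.cos(0.6*(th - np.pi/3))])

ts, th, r_x = 0.03, -5.3, []
while th < 0.0:
    th_dot = 1.0/np.linalg.norm(dp(th))   # v_ref = 1 while theta < 0
    r_x.append(np.concatenate([p(th), dp(th)*th_dot]))
    th += ts*th_dot                       # forward-Euler step of theta
\end{verbatim}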
For the stage cost we use $W = \mathrm{blockdiag}(Q,R)$ with
\begin{align*}
Q=\mathrm{diag}(10,10,1,1),\
R=\mathrm{diag}(1,1).
\end{align*}
The terminal cost matrix is computed using an LQR controller with the cost defined by $Q$ and $R$ and is given by
$$ P = \matr{cc}{290.34\cdot{}\mathbf{I}_2 &105.42\cdot{}\mathbf{I}_2\\105.42\cdot{}\mathbf{I}_2&90.74\cdot{}\mathbf{I}_2}\in\mathbb{R}^{4\times4},$$
where $\mathbf{I}_2\in\mathbb{R}^{2\times2}$ is the identity matrix. The corresponding terminal set is then given by
\begin{equation*}
\mathcal{X}^\mathrm{f}_\r(t_n) =\{ {\mathbf{x}}\, |\, ({\mathbf{x}}-\r^{\mathbf{x}}_n)^\top P({\mathbf{x}}-\r^{\mathbf{x}}_n) \leq{} 61.39\}.
\end{equation*}
For detailed derivations of the terminal cost and terminal set, we refer the reader to the Appendix in~\cite{Faulwasser2016,batkovic2020safe}.
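A terminal weight of this LQR type can be reproduced along the following lines; this is a sketch under our own assumptions (exact zero-order-hold discretization of the double integrator~\eqref{eq:robot_linear} with $t_\mathrm{s}=0.03\ \mathrm{s}$), so the numerical values may differ in detail from the $P$ reported above.
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_discrete_are

ts = 0.03
A = np.block([[np.eye(2), ts*np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])
B = np.block([[0.5*ts**2*np.eye(2)],
              [ts*np.eye(2)]])
Q = np.diag([10.0, 10.0, 1.0, 1.0])
R = np.diag([1.0, 1.0])

P = solve_discrete_are(A, B, Q, R)                  # terminal weight
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # associated LQR gain
\end{verbatim}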
In order to obtain the feasible reference ${\mathbf{y}}^{\mathrm{r}}=({\mathbf{x}}^{\mathrm{r}},\u^{\mathrm{r}})$, we approximate the infinite horizon Problem~\eqref{eq:ocp} with a prediction horizon of $M=1200$ and sampling time $t_\mathrm{s}=0.03\ \mathrm{s}$. For the closed-loop simulations, we use the control input obtained from formulations~\eqref{eq:nmpc} and~\eqref{eq:ideal_nmpc} with horizon $N=10$ and sampling time $t_\mathrm{s}= 0.03\ \mathrm{s}$. Note that we used the linear system~\eqref{eq:robot_linear} with its corresponding state and input constraints for all problem formulations. Furthermore, all simulations ran on a laptop computer (i5 2GHz, 16GB RAM) and were implemented in Matlab using the CasADi~\cite{Andersson2019} software together with the IPOPT~\cite{wachter2006implementation} solver.
Figure \ref{fig:mpatc_1_states} shows the closed-loop trajectories for the initial condition $(x_1,x_2)=(-4.69,-1.62,0,0)$ and initial time $k=167$. The gray lines denote the infeasible reference $\r=(\r^{\mathbf{x}},\r^\u)$ for each state, while the black lines denote the optimal reference ${\mathbf{y}}^{\mathrm{r}}=({\mathbf{x}}^{\mathrm{r}},\u^{\mathrm{r}})$ from~\eqref{eq:ocp}. The orange lines show the closed-loop evolution for the practical MPC Problem~\eqref{eq:nmpc}, i.e., when the terminal conditions are based on the infeasible reference ${\mathbf{y}}^\mathrm{f}=\r$. The blue lines instead show the closed-loop evolution for the \emph{ideal} MPC Problem~\eqref{eq:ideal_nmpc}, where the terminal conditions are based on the optimal reference from Problem~\eqref{eq:ocp}, i.e., ${\mathbf{y}}^\mathrm{f}={\mathbf{y}}^{\mathrm{r}}$. The bottom right plot of Figure~\ref{fig:mpatc_1_states} shows that the closed-loop error for both the practical MPC (orange lines) and the \emph{ideal} MPC (blue lines) decreases towards the reference $\r$ for times $t\leq{}5\ \mathrm{s}$. Between $5\ \mathrm{s}\leq{}t\leq{}9\ \mathrm{s}$, we can see that the discontinuity of the reference trajectory $\r$ affects how the two formulations behave. The \emph{ideal} formulation manages to track the optimal reference ${\mathbf{y}}^{\mathrm{r}}$ (black trajectory), while the practical formulation instead tries to track the infeasible reference $\r$ and therefore deviates from the \emph{ideal} formulation. After the discontinuity, the rest of the reference trajectory is feasible and both formulations are asymptotically stable.
\section{Conclusions}\label{sec:conclusions}
The use of infeasible references in MPC formulations is of great interest due to their convenience and simplicity. In this paper, we have discussed how such references affect the tracking performance of MPC formulations. We have proved that MPC formulations can yield asymptotic stability to an optimal trajectory when the terminal conditions are suitably chosen. In addition, we have also proved that the stability results can be extended to sub-optimal terminal conditions, in which case the controlled system is stabilized around a neighborhood of the optimal trajectory. Future research will investigate ways to extend the stability results to general nonlinear systems.
\bibliographystyle{IEEEtran}
\section{Introduction}
The unexpected accelerated expansion of the universe, as predicted by a recent series of observations, is speculated by cosmologists to be a smooth transition from a decelerated era in the recent past \cite{Riess:1998cb,Perlmutter:1998np,Spergel:2003cb,Allen:2004cd,Riess:2004nr}. Cosmologists are divided in opinion about the cause of this transition: one group favours a modification of the gravity theory, while others favour introducing an exotic matter component. Due to two severe drawbacks \cite{RevModPhys.61.1} of the cosmological constant as a dark energy (DE) candidate, dynamical DE models, namely the quintessence field (canonical scalar field), the phantom field \cite{Caldwell:2003vq,Vikman:2004dc,Nojiri:2005sr,Saridakis:2009pj,Setare:2008mb} (ghost scalar field), or a unified model named quintom \cite{Feng:2004ad,Guo:2004fq,Feng:2004ff}, are popular in the literature.\par
However, a new cosmological problem arises due to the dynamical nature of DE: although the vacuum energy and dark matter (DM) scale independently during the cosmic evolution, their energy densities are nearly equal today. To resolve this coincidence problem, cosmologists introduce an interaction between DE and DM. As the choice of this interaction is purely phenomenological, various models appear to match the observational predictions. Although these models may resolve the above coincidence problem, a non-trivial, almost tuned sequence of cosmological eras \cite{Amendola:2006qi} appears as a result. Further, the interacting phantom DE models \cite{Chen:2008ft,Nunes:2004wn,Clifton:2007tn,Xu:2012jf,Zhang:2005jj,Fadragas:2014mra,Gonzalez:2007ht} deal with some special coupling forms, alleviating the coincidence problem.\par
Alternatively, cosmologists have put forward a special type of interaction between DE and DM where the DM particles have a variable mass, depending on the scalar field representing the DE \cite{Anderson:1997un}. Such an interacting model is physically more sound, as scalar-field-dependent varying-mass models appear in string theory or scalar-tensor theory \cite{PhysRevLett.64.123}. This type of interacting model in cosmology considers the mass variation as linear \cite{Farrar:2003uw,Anderson:1997un,Hoffman:2003ru}, power-law \cite{Zhang:2005rg} or exponential \cite{Berger:2006db,PhysRevD.66.043528,PhysRevD.67.103523,PhysRevD.75.083506,Amendola:1999er,Comelli:2003cv,PhysRevD.69.063517} in the scalar field. Among these, the exponential dependence is the most suitable, as it not only solves the coincidence problem but also gives a stable scaling behaviour.\par
In the present work, a varying-mass interacting DE/DM model is considered in the background of a homogeneous and isotropic space-time. Due to the highly coupled nonlinear nature of the Einstein field equations, it is not possible to obtain any analytic solution. So, by using suitable dimensionless variables, the field equations are converted to an autonomous system. The phase-space analysis of the non-hyperbolic equilibrium points has been done by the center manifold theory (CMT) for various choices of the mass functions and the scalar field potentials. The paper is organized as follows: Section \ref{BES} deals with the basic equations for the varying-mass interacting dark energy and dark matter cosmological model. The autonomous system is formed and the critical points are determined in Section \ref{FASC}; the stability analysis of all critical points for various choices of the parameters involved is also shown in this section. Possible bifurcation scenarios \cite{10.1140/epjc/s10052-019-6839-8, 1950261, 1812.01975} by the Poincar\'{e} index theory and the global cosmological evolution are examined in Section \ref{BAPGCE}. Finally, a brief discussion and the important concluding remarks of the present work are presented in Section \ref{conclusion}.
\section{Varying mass interacting dark energy and dark matter cosmological model : Basic Equations\label{BES}}
Throughout this paper, we assume a homogeneous and isotropic universe with the flat Friedmann-Lema\^{i}tre-Robertson-Walker (FLRW) metric as follows:
\begin{equation}
ds^2=-dt^2+a^2(t)~d{\Sigma}^2,
\end{equation}
where `$t$' is the comoving time; $a(t)$ is the scale factor; $d{\Sigma}^2$ is the 3D flat space line element.\\
The Friedmann equations in the background of flat FLRW metric can be expressed as
\begin{eqnarray}
3H^2&=&\kappa^2(\rho_\phi +\rho_{_{DM}}),\label{equn2}\\
2\dot{H}&=&-\kappa^2(\rho_\phi +p_\phi +\rho_{_{DM}}),\label{equn3}
\end{eqnarray}
where `$\cdot $' denotes the derivative with respect to $t$; $\kappa~(=\sqrt{8\pi G})$ is the gravitational coupling; $\{\rho_\phi,p_\phi\}$ are the energy density and thermodynamic pressure of the phantom scalar field $\phi$ (considered as DE) having expressions
\begin{align}
\begin{split}
\rho_{\phi}&=-\frac{1}{2}\dot{\phi}^2+V(\phi),\\
p_\phi&=-\frac{1}{2}\dot{\phi}^2-V(\phi),\label{equn4}
\end{split}
\end{align}
and $\rho_{_{DM}}$ is the energy density for the dark matter in the form of dust having expression
\begin{align}
\rho_{_{DM}}=M_{_{DM}}(\phi)n_{_{DM}},\label{equn5}
\end{align}
where $n_{_{DM}}$, the number density \cite{Leon:2009dt} for DM, satisfies the number conservation equation
\begin{align}
\dot{n}_{_{DM}}+3H n_{_{DM}}=0.\label{equn6}
\end{align}
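Equation (\ref{equn6}) can be integrated at once to give $n_{_{DM}}\propto a^{-3}$, which makes the varying-mass interpretation of (\ref{equn5}) explicit; the display below follows immediately from (\ref{equn5}) and (\ref{equn6}), with $n_0$ the number density at a reference scale factor $a_0$:
\begin{equation*}
n_{_{DM}}(a)=n_{0}\left(\frac{a_{0}}{a}\right)^{3},\qquad
\rho_{_{DM}}(a,\phi)=M_{_{DM}}(\phi)\,n_{0}\left(\frac{a_{0}}{a}\right)^{3},
\end{equation*}
i.e., the DM dilutes as ordinary dust up to the $\phi$-dependent mass factor.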
Now differentiating (\ref{equn5}) and using (\ref{equn6}) one has the DM conservation equation as
\begin{align}
\dot{\rho}_{_{DM}}+3H\rho_{_{DM}}=\frac{d}{d\phi}\left\{\ln M_{_{DM}}(\phi)\right\}\dot{\phi}\rho_{_{DM}},\label{equn7}
\end{align}
which shows that the varying-mass DM (in the form of dust) can be interpreted as a barotropic fluid with variable equation of state $\omega_{_{DM}}=\frac{d}{d\phi}\left\{\ln M_{_{DM}}(\phi)\right\}\dot{\phi}$. Now, due to the Bianchi identity, using the Einstein field equations (\ref{equn2}) and (\ref{equn3}), the conservation equation for DE takes the form
\begin{align}
\dot{\rho}_{\phi}+ 3H(\rho_{\phi}+p_{\phi})=-\frac{d}{d\phi}\left\{\ln M_{_{DM}}(\phi)~\right\}\dot{\phi}\rho_{_{DM}}.\label{equn8}
\end{align}
or using (\ref{equn4}) one has
\begin{align}
\ddot{\phi}+3H\dot{\phi}-\frac{\partial V}{\partial \phi}=\frac{d}{d\phi}\left\{\ln M_{_{DM}}(\phi)\right\}\rho_{_{DM}}.\label{equn9}
\end{align}
The combination of the conservation equations (\ref{equn7}) and (\ref{equn8}) for DM (dust) and phantom DE (scalar) shows that the interaction between these two matter components depends purely on the mass variation, i.e., $Q=\frac{d}{d\phi}\left\{\ln M_{_{DM}}(\phi)\right\}\rho_{_{DM}}$. So, if $M_{_{DM}}$ is an increasing function of $\phi$, i.e., $Q>0$, then energy is transferred from DE to DM, while the transfer is in the opposite direction if $M_{_{DM}}$ is a decreasing function of $\phi$. Further, combining equations (\ref{equn7}) and (\ref{equn8}), the total matter $\rho_{tot}=\rho_{DM}+\rho_{DE}$ satisfies
\begin{align}
\dot{\rho}_{tot}+3H(\rho_{tot}+p_{tot})=0
\end{align}
with
\begin{align}
\omega_{tot}=\frac{p_{\phi}}{\rho_{\phi}+\rho_{_{DM}}}=\omega_{\phi}\Omega_{\phi}.
\end{align}
Here $\omega_{\phi}=\frac{p_{\phi}}{\rho_{\phi}}$ is the equation of state parameter for the phantom field and $\Omega_{\phi}=\frac{\kappa^2\rho_{\phi}}{3H^2}$ is the density parameter for DE.
\section{Formation of Autonomous System : Critical point and stability analysis\label{FASC}}
In the present work the dimensionless variables are taken as \cite{Leon:2009dt}
\begin{eqnarray}
x:&=&\frac{\kappa\dot{\phi}}{\sqrt{6}H}, \\
y:&=&\frac{\kappa\sqrt{V(\phi)}}{\sqrt{3}H}, \\
z:&=&\frac{\sqrt{6}}{\kappa \phi}
\end{eqnarray}
together with $N=\ln a$. In terms of these variables the cosmological parameters can be written as
\begin{align}
\Omega_{\phi}\equiv \frac{{\kappa}^2\rho_{\phi}}{3H^2}&=-x^2+y^2,\label{eq4}
\end{align}
\begin{equation}
\omega_{\phi}= \frac{-x^2-y^2}{-x^2+y^2}
\end{equation}
and
\begin{equation}
\omega_{tot}=-x^2-y^2.
\end{equation}
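For later use, note that the Friedmann equation (\ref{equn2}) imposes the constraint
\begin{equation*}
\Omega_{_{DM}}\equiv\frac{\kappa^2\rho_{_{DM}}}{3H^2}=1-\Omega_{\phi}=1+x^2-y^2,
\end{equation*}
which is precisely the combination that appears repeatedly in the autonomous systems below.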
For the scalar field potential we consider two well-studied cases in the literature, namely the power-law
\begin{equation}
V(\phi)=V_0 \phi^{-\lambda}
\end{equation}
and the exponential dependence as
\begin{equation}
V(\phi)=V_1 e^{-\kappa\lambda \phi}.
\end{equation}
For the dark matter particle mass we also consider power-law
\begin{eqnarray}
M_{_{DM}}(\phi)&=& M_0 {\phi}^{-\mu}
\end{eqnarray}
and the exponential dependence as
\begin{eqnarray}
M_{_{DM}}(\phi)&=& M_1 e^{-\kappa\mu \phi},
\end{eqnarray}
where $V_0,V_1,M_0,M_1~(>0)$ and $\lambda,\mu$ are constant parameters. Here we carry out the dynamical analysis of this cosmological system for five possible models. In Model $1$ (\ref{M1}) we consider $V(\phi)=V_0\phi^{-\lambda},~M_{_{DM}}(\phi)=M_0\phi^{-\mu}$; in Model $2$ (\ref{M2}) we consider $V(\phi)=V_0\phi^{-\lambda},~M_{_{DM}}(\phi)=M_1e^{-\kappa\mu\phi}$; in Model $3$ (\ref{M3}) we consider $V(\phi)=V_1e^{-\kappa\lambda\phi},~M_{_{DM}}(\phi)=M_0\phi^{-\mu}$; in Model $4$ (\ref{M4}) we consider $V(\phi)=V_1 e ^{-\kappa\lambda\phi},~M_{_{DM}}(\phi)=M_1e^{-\kappa\mu\phi}$; and lastly in Model $5$ (\ref{M5}) we consider $V(\phi)=V_2\phi^{-\lambda} e ^{-\kappa\lambda\phi},~M_{_{DM}}(\phi)=M_2\phi^{-\mu}e^{-\kappa\mu\phi}$, where $V_2=V_0V_1$ and $M_2=M_0M_1$.
\subsection{Model 1: Power-law potential and power-law-dependent dark-matter particle mass \label{M1}}
For this choice the evolution equations of Section \ref{BES} can be converted into the following autonomous system:
\begin{eqnarray}
x'&=&-3x+\frac{3}{2}x(1-x^2-y^2)-\frac{\lambda y^2 z}{2}-\frac{\mu}{2}z(1+x^2-y^2),\label{eq9} \\
y'&=&\frac{3}{2}y(1-x^2-y^2)-\frac{\lambda xyz}{2},\label{eq10} \\
z'&=&-xz^2,\label{eq11}
\end{eqnarray}
where a prime (dash) over a variable denotes differentiation with respect to $N=\ln a$.\bigbreak
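For completeness, we indicate how this system arises: from (\ref{equn3}), since $\rho_\phi+p_\phi=-\dot{\phi}^2$ and, as noted above, $\kappa^2\rho_{_{DM}}=3H^2(1+x^2-y^2)$, one finds
\begin{equation*}
\frac{\dot{H}}{H^2}=-\frac{3}{2}\left(1-x^2-y^2\right),
\end{equation*}
which, together with the scalar field equation (\ref{equn9}) and the chain rule $\frac{d}{dN}=\frac{1}{H}\frac{d}{dt}$, generates the common factor $\frac{3}{2}(1-x^2-y^2)$ appearing in $x'$ and $y'$.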
To analyze the stability of the critical points of the autonomous system $(\ref{eq9}-\ref{eq11})$, we consider four possible choices of $\mu$ and $\lambda$:
$(i)$ $\mu\neq0$ and $\lambda\neq0$, $~~~(ii)$ $\mu\neq0$ and $\lambda=0$, $(iii)$ $\mu=0$ and $\lambda\neq0$, $(iv)$ $\mu=0$ and $\lambda=0$.
\subsubsection*{Case-(i)$~$\underline{$\mu\neq0$ and $\lambda\neq0$}}
In this case we have three real and physically meaningful critical points: $A_1(0, 0, 0)$, $A_2(0, 1, 0)$ and $A_3(0, -1, 0)$. First we determine the Jacobian matrix of the autonomous system $(\ref{eq9}-\ref{eq11})$ at these critical points. Then we find the eigenvalues and the corresponding eigenvectors of the Jacobian matrix, from which we obtain the nature of the vector field near each critical point. If the critical point is hyperbolic we use the Hartman-Grobman theorem, while if it is non-hyperbolic we use Center Manifold Theory \cite{Chakraborty:2020vkp}. For each critical point, the eigenvalues of the Jacobian matrix, the values of the cosmological parameters (including the deceleration parameter $q=-1-\frac{\dot{H}}{H^2}=\frac{1}{2}(1+3\omega_{tot})$) and the nature of the critical point are shown in Table \ref{TI}.
\begin{table}[h]
\caption{\label{TI}Table shows the eigenvalues, the cosmological parameters and the nature of each critical point $(A_1-A_3)$.}
\begin{tabular}{|c|c c c|c|c|c| c|c|}
\hline
\hline
\begin{tabular}{@{}c@{}}$~~$\\$~Critical~ Points$\\$~~$\end{tabular} ~~ & $ \lambda_1 $ ~~ & $\lambda_2$ ~~ & $\lambda_3$& $~\Omega_\phi~$&$~\omega_\phi~$ &$~\omega_{tot}~$& $~q~$ & $Nature~of~critical~points$ \\ \hline\hline
~ & ~ & ~& ~& ~ & ~ & ~ & ~ & ~\\
$A_1(0,0,0)$ & $-\frac{3}{2}$ & $\frac{3}{2}$ & 0 & 0 & Undetermined & 0 &$\frac{1}{2}$& Non-hyperbolic\\
~ & ~ & ~& ~& ~ & ~ & ~ & ~ & ~\\\hline
~ & ~ & ~& ~& ~ & ~ & ~ & ~ & ~\\
$A_2(0,1,0)$ & $-3$ & $-3$ & 0 & 1 & $-1$ & $-1$&$-1$& Non-hyperbolic\\
~ & ~ & ~& ~& ~ & ~ & ~ & ~& ~\\ \hline
~ & ~ & ~& ~& ~ & ~ & ~ & ~ & ~\\
$A_3(0,-1,0)$ & $-3$ & $-3$ & $0$ & $1$ & $-1$ & $-1$&$-1$& Non-hyperbolic\\
~ & ~ & ~& ~& ~ & ~ & ~ & ~ & ~\\ \hline
\end{tabular}
\end{table}
\begin{center}
$1.~Critical~Point~A_1$
\end{center}
The Jacobian matrix at the critical point $A_1$ can be put as
\begin{equation}\renewcommand{\arraystretch}{1.5}
J(A_1)=\begin{bmatrix}
-\frac{3}{2} & 0 & -\frac{\mu}{2}\\
~~0 & \frac{3}{2} & ~~0\\
~~0 & 0 & ~~0
\end{bmatrix}.\label{eq12}
\end{equation}
The eigenvalues of $J(A_1)$ are $-\frac{3}{2}$, $\frac{3}{2}$ and $0$, with corresponding eigenvectors $[1, 0, 0]^T$, $[0, 1, 0]^T$ and $\left[-\frac{\mu}{3}, 0, 1\right]^T$ respectively. Since the critical point $A_1$ is non-hyperbolic, we use Center Manifold Theory to analyze its stability. From the entries of the Jacobian matrix we see that there is a linear term in $z$ in eqn.~(\ref{eq9}) of the autonomous system $(\ref{eq9}-\ref{eq11})$, whereas the eigenvalue $0$ of the Jacobian matrix (\ref{eq12}) corresponds to (\ref{eq11}). So we have to introduce another coordinate system $(X,~Y,~Z)$ in terms of $(x,~y,~z)$. By using the eigenvectors of the Jacobian matrix (\ref{eq12}), we introduce the following coordinate system
\begin{equation}\renewcommand{\arraystretch}{1.5}
\begin{bmatrix}
X\\
Y\\
Z
\end{bmatrix}\renewcommand{\arraystretch}{1.5}
=\begin{bmatrix}
1 & 0 & \frac{\mu}{3} \\
0 & 1 & 0 \\
0 & 0 & 1
\end{bmatrix}\renewcommand{\arraystretch}{1.5}
\begin{bmatrix}
x\\
y\\
z
\end{bmatrix}\label{eq15}
\end{equation}
and in this new coordinate system the equations $(\ref{eq9}-\ref{eq11})$ are transformed into
\begin{equation}\renewcommand{\arraystretch}{1.5}
\begin{bmatrix}
X'\\
Y'\\
Z'
\end{bmatrix}\renewcommand{\arraystretch}{1.5}
=\begin{bmatrix}
-\frac{3}{2} & 0 & 0 \\
~~0 & \frac{3}{2} & 0 \\
~~0 & 0 & 0
\end{bmatrix}
\begin{bmatrix}
X\\
Y\\
Z
\end{bmatrix}
+\renewcommand{\arraystretch}{1.5}
\begin{bmatrix}
non\\
linear\\
terms
\end{bmatrix}.
\end{equation}
By Center Manifold Theory there exists a continuously differentiable function $h:\mathbb{R}\rightarrow\mathbb{R}^2$ such that
\begin{align}\renewcommand{\arraystretch}{1.5}
h(Z)=\begin{bmatrix}
X \\
Y \\
\end{bmatrix}
=\begin{bmatrix}
a_1Z^2+a_2Z^3+a_3Z^4 +\mathcal{O}(Z^5)\\
b_1Z^2+b_2Z^3+b_3Z^4 +\mathcal{O}(Z^5)
\end{bmatrix}.
\end{align}
Differentiating both sides with respect to $N$, we get
\begin{eqnarray}
X'&=&(2a_1Z+3a_2Z^2+4a_3Z^3)Z',\\
Y'&=&(2b_1Z+3b_2Z^2+4b_3Z^3)Z',
\end{eqnarray}
where $a_i$, $b_i$ $\in\mathbb{R}$. In CMT we are only concerned with the non-zero coefficients of the lowest-order terms, since the analysis is restricted to an arbitrarily small neighbourhood of the origin. Comparing the coefficients of like powers of $Z$, we get
$a_1=0$, $a_2=\frac{2\mu^2}{27}$, $a_3=0$ and $b_i=0$ for all $i$.
So, the center manifold is given by
\begin{eqnarray}
X&=&\frac{2\mu^2}{27}Z^3,\label{eq18}\\
Y&=&0\label{eq19}
\end{eqnarray}
and the flow on the Center manifold is determined by
\begin{eqnarray}
Z'&=&\frac{\mu}{3}Z^3+\mathcal{O}(Z^5).\label{eq20}
\end{eqnarray}
\begin{figure}[h]
\includegraphics[width=1\textwidth]{A11}
\caption{Vector field near the origin for the critical point $A_1$ in $XZ$-plane. L.H.S. figure is for $\mu>0$ and R.H.S. figure is for $\mu<0$. }
\label{A_1}
\end{figure}
The flow on the center manifold depends on the sign of $\mu$. If $\mu>0$ then $Z'>0$ for $Z>0$ and $Z'<0$ for $Z<0$; hence for $\mu>0$ the origin is a saddle node, unstable in nature (FIG.\ref{A_1}(a)). Again, if $\mu<0$ then $Z'<0$ for $Z>0$ and $Z'>0$ for $Z<0$; so for $\mu<0$ the origin is a stable node, i.e., stable in nature, within the $XZ$-plane shown in FIG.\ref{A_1}(b). (Note that, owing to the positive eigenvalue $\frac{3}{2}$ along the $Y$-direction, $A_1$ always possesses an unstable direction in the full three-dimensional phase space.) \bigbreak
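As an independent cross-check of the eigensystem of $J(A_1)$ quoted above, the following minimal SymPy sketch (ours; not part of the analytic treatment) reproduces the eigenvalues and eigenvectors:
\begin{verbatim}
import sympy as sp

mu = sp.symbols('mu')
# Jacobian of the autonomous system (eq9)-(eq11) at A_1(0,0,0)
J_A1 = sp.Matrix([[-sp.Rational(3, 2), 0, -mu/2],
                  [0, sp.Rational(3, 2), 0],
                  [0, 0, 0]])
# eigenvects() returns (eigenvalue, algebraic multiplicity, eigenspace basis)
for val, mult, vecs in J_A1.eigenvects():
    print(val, mult, [list(v) for v in vecs])
# -3/2 -> [1, 0, 0];  3/2 -> [0, 1, 0];  0 -> [-mu/3, 0, 1]
\end{verbatim}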
\begin{center}
$2.~Critical~Point~A_2$
\end{center}
The Jacobian matrix at $A_2$ can be put as
\begin{equation}\renewcommand{\arraystretch}{1.5}
J(A_2)=\begin{bmatrix}
-3 & ~~0 & -\frac{\lambda}{2}\\
~~0 & -3 & ~~0\\
~~0 & ~~0 & ~~0
\end{bmatrix}\label{eq21}.
\end{equation}
The eigenvalues of the above matrix are $-3$, $-3$ and $0$; $[1, 0, 0]^T$ and $[0, 1, 0]^T$ are eigenvectors corresponding to the eigenvalue $-3$, and $\left[-\frac{\lambda}{6}, 0, 1\right]^T$ is the eigenvector corresponding to the eigenvalue $0$. Since the critical point $A_2$ is non-hyperbolic, we use Center Manifold Theory to analyze its stability. We first transform the coordinates into a new system $x=X,~ y=Y+1,~ z=Z$, so that the critical point $A_2$ moves to the origin. By using the eigenvectors of the Jacobian matrix $J(A_2)$, we introduce another set of new coordinates $(u,~v,~w)$ in terms of $(X,~Y,~Z)$ as
\begin{equation}\renewcommand{\arraystretch}{1.5}
\begin{bmatrix}
u\\
v\\
w
\end{bmatrix}\renewcommand{\arraystretch}{1.5}
=\begin{bmatrix}
1 & 0 & \frac{\lambda}{6} \\
0 & 1 & 0 \\
0 & 0 & 1
\end{bmatrix}\renewcommand{\arraystretch}{1.5}
\begin{bmatrix}
X\\
Y\\
Z
\end{bmatrix}\label{eq24}
\end{equation}
and in these new coordinates the equations $(\ref{eq9}-\ref{eq11})$ are transformed into
\begin{equation} \renewcommand{\arraystretch}{1.5}
\begin{bmatrix}
u'\\
v'\\
w'
\end{bmatrix}
=\begin{bmatrix}
-3 & ~~0 & 0 \\
~~0 & -3 & 0 \\
~~0 & ~~0 & 0
\end{bmatrix}
\begin{bmatrix}
u\\
v\\
w
\end{bmatrix}
+
\begin{bmatrix}
non\\
linear\\
terms
\end{bmatrix}.
\end{equation}
By Center Manifold Theory there exists a continuously differentiable function $h:\mathbb{R}\rightarrow\mathbb{R}^2$ such that
\begin{align}\renewcommand{\arraystretch}{1.5}
h(w)=\begin{bmatrix}
u \\
v \\
\end{bmatrix}
=\begin{bmatrix}
a_1w^2+a_2w^3 +\mathcal{O}(w^4)\\
b_1w^2+b_2w^3 +\mathcal{O}(w^4)
\end{bmatrix}.
\end{align}
Differentiating both sides with respect to $N$, we get
\begin{eqnarray}
u'&=&(2a_1w+3a_2w^2)w'+\mathcal{O}(w^3)\label{eq25}\\
v'&=&(2b_1w+3b_2w^2)w'+\mathcal{O}(w^3)\label{eq26}
\end{eqnarray}
where $a_i$, $b_i$ $\in\mathbb{R}$. In CMT we are only concerned with the non-zero coefficients of the lowest-order terms, since the analysis is restricted to an arbitrarily small neighbourhood of the origin. Comparing the coefficients of like powers of $w$ on both sides of (\ref{eq25}) and (\ref{eq26}), we get
$a_1=0$, $a_2=\frac{\lambda^2}{108}$ and $b_1=\frac{\lambda^2}{72}$, $b_2=0$. So the center manifold can be written as
\begin{eqnarray}
u&=&\frac{\lambda^2}{108}w^3,\label{eqn27}\\
v&=&\frac{\lambda^2}{72}w^2\label{eqn28}
\end{eqnarray}
\begin{figure}
\includegraphics[width=1\textwidth]{A12}
\caption{Vector field near the origin for the critical point $A_2$ in the $(uw)$-plane. L.H.S. figure is for $\lambda>0$ and R.H.S. figure is for $\lambda<0$.}
\label{19}
\end{figure}
\begin{figure}
\includegraphics[width=1\textwidth]{A22}
\caption{Vector field near the origin for the critical point $A_2$ in $(vw)$-plane. L.H.S. figure is for $\lambda>0$ and R.H.S. figure is for $\lambda<0$.}
\label{20}
\end{figure}
and the flow on the center manifold is determined by
\begin{eqnarray}
w'&=&\frac{\lambda}{6}w^3+\mathcal{O}(w^4) .\label{eq29}
\end{eqnarray}
Here we see that the center manifold and the flow on it are exactly the same as those determined in \cite{1111.6247}, and the stability of the vector field near the origin depends on the sign of $\lambda$. If $\lambda<0$ then $w'<0$ for $w>0$ and $w'>0$ for $w<0$; so for $\lambda<0$ the origin is a stable node, i.e., stable in nature. Again, if $\lambda>0$ then $w'>0$ for $w>0$ and $w'<0$ for $w<0$; so for $\lambda>0$ the origin is a saddle node, i.e., unstable in nature.
The vector field near the origin is shown in FIG.\ref{19} and FIG.\ref{20} for the $(wu)$-plane and the $(wv)$-plane respectively. As the new coordinate system $(u,~v,~w)$ is topologically equivalent to the old one, the origin in the new coordinate system, i.e., the critical point $A_2$ in the old coordinate system $(x,~y,~z)$, is a stable node for $\lambda<0$ and a saddle node for $\lambda>0$.
\begin{center}
$3.~Critical~Point~A_3$
\end{center}
The Jacobian matrix at the critical point $A_3$ is the same as (\ref{eq21}), so the eigenvalues and the corresponding eigenvectors are also the same as above. Now we transform the coordinates into a new system $x=X,~ y=Y-1,~ z=Z$, so that the critical point is at the origin. Then, using the matrix transformation (\ref{eq24}) and arguing as above, the expressions of the center manifold can be written as
\begin{eqnarray}
u&=&-\frac{\lambda^2}{108}w^3\label{eqn30},\\
v&=&-\frac{\lambda^2}{72}w^2\label{eqn31}
\end{eqnarray}
and the flow on the center manifold is determined by
\begin{eqnarray}
w'&=&\frac{\lambda}{6}w^3+\mathcal{O}(w^4) .\label{eqn32}
\end{eqnarray}
Here also the stability of the vector field near the origin depends on the sign of $\lambda$, and the expression of the flow on the center manifold is the same as (\ref{eq29}). So we can conclude, as above, that for $\lambda<0$ the origin is a stable node, i.e., stable in nature, while for $\lambda>0$ the origin is unstable due to its saddle nature. The vector fields near the origin in the $uw$-plane and the $vw$-plane are shown in FIG.\ref{24} and FIG.\ref{25} respectively. Hence, the critical point $A_3$ is a stable node for $\lambda<0$ and a saddle node for $\lambda>0$.\bigbreak
\begin{figure}[h]
\centering
\includegraphics[width=1\textwidth]{A21}
\caption{Vector field near the origin for the critical point $A_3$ in the $(uw)$-plane. L.H.S. figure is for $\lambda>0$ and R.H.S. figure is for $\lambda<0$.}
\label{24}
\end{figure}
\begin{figure}[h]
\includegraphics[width=1\textwidth]{A31}
\caption{Vector field near the origin for the critical point $A_3$ in the $(vw)$-plane. L.H.S. figure is for $\lambda>0$ and R.H.S. figure is for $\lambda<0$.}
\label{25}
\end{figure}
\newpage
\subsubsection*{Case-(ii)$~$\underline{$\mu\neq0$ and $\lambda=0$}}
In this case the autonomous system $(\ref{eq9}-\ref{eq11})$ changes into
\begin{eqnarray}
x'&=&-3x+\frac{3}{2}x(1-x^2-y^2)-\frac{\mu}{2}z(1+x^2-y^2),\label{eq33} \\
y'&=&\frac{3}{2}y(1-x^2-y^2),\label{eq34} \\
z'&=&-xz^2.\label{eq35}
\end{eqnarray}
We again have three critical points corresponding to the above autonomous system, two of which are actually lines of critical points. The critical points are $C_1(0, 0, 0)$, $C_2(0,1, z_c)$ and $C_3(0,-1,z_c)$, where $z_c$ is any real number. For the critical points $C_1$, $C_2$ and $C_3$ the eigenvalues of the Jacobian matrix, the values of the cosmological parameters and the nature of the critical points are the same as for $A_1$, $A_2$ and $A_3$ respectively.
\begin{center}
$1.~Critical~Point~C_1$
\end{center}
The Jacobian matrix $J(C_1)$ of the autonomous system $(\ref{eq33}-\ref{eq35})$ at this critical point is the same as (\ref{eq12}), so all the eigenvalues and the corresponding eigenvectors are the same as those of $J(A_1)$. Arguing as in the stability analysis of the critical point $A_1$, the center manifold can be expressed as $(\ref{eq18}-\ref{eq19})$ and the flow on it is determined by $(\ref{eq20})$. So the stability of the vector field near the origin is the same as for the critical point $A_1$.
\begin{center}
$2.~Critical~Point~C_2$
\end{center}
The Jacobian matrix at the critical point $C_2$ can be put as
\begin{equation}\renewcommand{\arraystretch}{1.5}
J(C_2)=\begin{bmatrix}
-3 & ~~\mu z_c & 0\\
~~0 & -3 & 0\\
-z_c^2 & ~~0 & 0
\end{bmatrix}.\label{eq36}
\end{equation}
The eigenvalues of the above matrix are $-3$, $-3$ and $0$; $\left[1, 0, \frac{z_c^2}{3}\right]^T$ and $[0, 1, 0]^T$ are eigenvectors corresponding to the eigenvalue $-3$, and $[0, 0, 1]^T$ is the eigenvector corresponding to the eigenvalue $0$. To apply CMT for a fixed $z_c$, we first transform the coordinates into a new system $x=X,~ y=Y+1,~ z=Z+z_c$, so that the critical point is at the origin; arguing as above, the center manifold can then be written as
\begin{eqnarray}
X&=&0,\label{eq37}\\
Y&=&0\label{eq38}
\end{eqnarray}
and the flow on the center manifold is determined by
\begin{eqnarray}
Z'&=&0.\label{eq39}
\end{eqnarray}
So the center manifold lies on the $Z$-axis, and the flow along it given by (\ref{eq39}) is trivial, since every point of the line of critical points is itself an equilibrium. Now, if we project the vector field on a plane parallel to the $XY$-plane, i.e., a plane $Z=constant$ (say), then the vector field is as shown in FIG.\ref{z_c}. So every point on the $Z$-axis is a stable star.
\begin{center}
$3.~Critical~Point~C_3$
\end{center}
Arguing as above to obtain the center manifold and the flow on it, we get the same center manifold $(\ref{eq37}-\ref{eq38})$, with the flow determined by (\ref{eq39}). In this case also we get the same vector field as in FIG.\ref{z_c}.\bigbreak
From the above discussion we have seen that the lines of critical points $C_2$ and $C_3$ are non-hyperbolic in nature, but by using CMT we could not determine the vector field near those critical points, nor the flow on the center manifold; in this case the last eqn.~(\ref{eq35}) of the autonomous system $(\ref{eq33}-\ref{eq35})$ does not provide any special behaviour. For this reason, and since the expressions of $\Omega_\phi$, $\omega_\phi$ and $\omega_{tot}$ depend only on the $x$ and $y$ coordinates, we take only the first two equations of the autonomous system $(\ref{eq33}-\ref{eq35})$ and analyze the stability of the critical points lying on a plane parallel to the $xy$-plane, i.e., the plane $z=constant=c$ (say). In the $z=c$ plane the first two equations in $(\ref{eq33}-\ref{eq35})$ can be written as
\begin{eqnarray}
x'&=&-3x+\frac{3}{2}x(1-x^2-y^2)-\frac{\mu}{2}c(1+x^2-y^2),\label{eqn40} \\
y'&=&\frac{3}{2}y(1-x^2-y^2).\label{eqn41}
\end{eqnarray}
In this case we have five critical points corresponding to the autonomous system $(\ref{eqn40}-\ref{eqn41})$. The set of critical points, their existence and the values of the cosmological parameters are shown in Table \ref{T3}, and the eigenvalues and the nature of the critical points are shown in Table \ref{T4}.
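As a consistency check, at $E_3\left(-\frac{\mu c}{3},~0\right)$ equation (\ref{eqn40}) gives
\begin{equation*}
-3\left(-\frac{\mu c}{3}\right)+\frac{3}{2}\left(-\frac{\mu c}{3}\right)\left(1-\frac{\mu^2c^2}{9}\right)-\frac{\mu c}{2}\left(1+\frac{\mu^2c^2}{9}\right)=\mu c-\frac{\mu c}{2}+\frac{\mu^3c^3}{18}-\frac{\mu c}{2}-\frac{\mu^3c^3}{18}=0,
\end{equation*}
while (\ref{eqn41}) vanishes trivially at $y=0$; hence $E_3$ is an exact critical point for all $\mu$ and $c$.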
\begin{figure}[h]
\centering
\includegraphics[width=0.6\textwidth]{stable_z_c}
\caption{Vector field near every point on the $Z$-axis for the critical points $C_2$ and $C_3$.}
\label{z_c}
\end{figure}
\begin{table}[!]
\caption{\label{T3}Table shows the set of critical points, their existence and the values of the cosmological parameters for the autonomous system $(\ref{eqn40}-\ref{eqn41})$.}
\begin{tabular}{|c|c|c c |c c c c|}
\hline
\hline
\begin{tabular}{@{}c@{}}$~~$\\$ CPs $\\$~$\end{tabular} ~~ & $ Existence $ ~~ & ~~$x$ ~~&~~ $y$& $\Omega_\phi$&~~$\omega_{\phi}$~~ &$\omega_{tot}$ & $~~~~q$ \\ \hline\hline
\begin{tabular}{@{}c@{}}$~~$\\$ E_1 $\\$~$\end{tabular} ~~ & $For~all~\mu~and~c $&$0$&$~~~1$&$1$&$-1$&$-1$&$~~-1$ \\ \hline
\begin{tabular}{@{}c@{}}$~~$\\$ E_2 $\\$~$\end{tabular} ~~ & $For~all~\mu ~and~c $ ~~ & ~~$0$ &$~~-1$~~&$1$& $~-1$ ~~& $-1$ & $~~-1$\\ \hline
\begin{tabular}{@{}c@{}}$~~$\\$ E_3 $\\$~$\end{tabular} ~~ & $For~all ~~\mu ~and~c $ ~~ & ~~$-\frac{\mu c}{3}$ ~~&~~$~~0$~&$-\frac{\mu^2 c^2}{9}$ & $~~-1$ ~~&~~ $-\frac{\mu^2 c^2}{9}$ &~~ $\frac{1}{2}\left(1-\frac{\mu^2 c^2}{3}\right)$\\ \hline
\begin{tabular}{@{}c@{}}$~~$\\$ E_4 $\\$~$\end{tabular} ~~ & \begin{tabular}{@{}c@{}}$ For~c\neq 0~ and~$\\$for~all~\mu\in \left(-\infty,-\frac{3}{c}\right]\cup\left[\frac{3}{c},\infty\right)$ \end{tabular} ~~ & ~~$-\frac{3}{\mu c}$ ~~&~~ $\sqrt{1-\frac{9}{\mu^2 c^2}}$~~&~~$\left(1-\frac{18}{\mu^2 c^2}\right)$&$~~~~\frac{\mu^2 c^2}{18-\mu^2c^2}$~~&~~$-1$ ~~& ~~$~-1$\\ \hline
\begin{tabular}{@{}c@{}}$~~$\\$ E_5 $\\$~$\end{tabular} ~~ & \begin{tabular}{@{}c@{}}$ For~c\neq 0~ and~$\\$for~all~\mu\in \left(-\infty,-\frac{3}{c}\right]\cup\left[\frac{3}{c},\infty\right)$ \end{tabular} ~~ & ~~$-\frac{3}{\mu c}$ ~~&~~ $-\sqrt{1-\frac{9}{\mu^2 c^2}}$~~&$\left(1-\frac{18}{\mu^2 c^2}\right)$ &~~$~~\frac{\mu^2 c^2}{18-\mu^2c^2}$~~&~~$-1$ ~~& ~~$~-1$\\ \hline
\end{tabular}
\end{table}
\begin{table}[!]
\caption{\label{T4}Table shows the eigenvalues $(\lambda_1, \lambda_2)$ of the Jacobian matrix corresponding to the critical points and the nature of all critical points $(E_1-E_5)$.}
\begin{tabular}{|c|c c|c|}
\hline
\hline
\begin{tabular}{@{}c@{}}$~~$\\$ Critical~Points $\\$~$\end{tabular} &$ ~~\lambda_1 $ & $~~\lambda_2$ & $ Nature~~ of~~ Critical~~ points$ \\ \hline\hline
\begin{tabular}{@{}c@{}}$~~$\\ $E_1$ \\$~$\end{tabular} & $-3$ & $ -3 $&Hyperbolic\\ \hline
\begin{tabular}{@{}c@{}}$~~$\\$ E_2 $\\$~$\end{tabular} & $-3$ & $ -3 $& Hyperbolic\\ \hline
\begin{tabular}{@{}c@{}}$~~$\\$ E_3 $\\$~$\end{tabular} & $-\frac{3}{2}\left(1+\frac{\mu^2c^2}{9}\right)$ & $\frac{3}{2}\left(1-\frac{\mu^2c^2}{9}\right)$& \begin{tabular}{@{}c@{}}$~~$\\Non-hyperbolic for $\mu c=\pm3$\\and\\hyperbolic for $\mu c\neq\pm3$\\$~~$\end{tabular}\\ \hline
\begin{tabular}{@{}c@{}}$~~$\\$ E_4 $\\$~$\end{tabular} & \begin{tabular}{@{}c@{}}$~~$\\$\frac{-3+\sqrt{45-\frac{324}{\mu^2 c^2}}}{2}$ \\$~~$\end{tabular}&\begin{tabular}{@{}c@{}}$~~$\\ $\frac{-3-\sqrt{45-\frac{324}{\mu^2 c^2}}}{2}$\\$~~$\end{tabular} &\begin{tabular}{@{}c@{}}$~~$\\Non-hyperbolic for $\mu c=\pm3$\\and\\hyperbolic for $\mu c\neq\pm3$\\$~~$\end{tabular}\\ \hline
\begin{tabular}{@{}c@{}}$~~$\\$ E_5 $\\$~$\end{tabular} & \begin{tabular}{@{}c@{}}$~~$\\$\frac{-3+\sqrt{45-\frac{324}{\mu^2 c^2}}}{2}$ \\$~~$\end{tabular}&\begin{tabular}{@{}c@{}}$~~$\\ $\frac{-3-\sqrt{45-\frac{324}{\mu^2 c^2}}}{2}$\\$~~$\end{tabular} &\begin{tabular}{@{}c@{}}Non-hyperbolic for $\mu c=\pm3$\\and\\hyperbolic for $\mu c\neq\pm3$\end{tabular}\\ \hline
\end{tabular}
\end{table}
\newpage
To avoid repeating the arguments used in the stability analysis of the above critical points, we only state the stability of the critical points $(E_1-E_5)$, together with the reasoning behind it, in Table \ref{T_stability}.
\begin{table}[h]
\caption{\label{T_stability}Table shows the stability and the reason behind the stability of the critical points $(E_1-E_5)$.}
\begin{tabular}{|c|c|c|}
\hline
\hline
\begin{tabular}{@{}c@{}}$~~$\\$ CPs $\\$~$\end{tabular} &$Stability$& $Reason~behind~the~stability$ \\ \hline\hline
$E_1,~E_2$& Both are stable stars & \begin{tabular}{@{}c@{}}$~~$\\As both eigenvalues $\lambda_1$ and $\lambda_2$ are negative and equal, by the Hartman-\\Grobman theorem we can conclude that the critical points $E_1$ and \\$E_2$ are both stable stars.\\$~~$\end{tabular}\\ \hline
$E_3$&\begin{tabular}{@{}c@{}}$~~$\\ Stable node for $\mu c=-3$,\\saddle node for $\mu c=3$,\\ stable node for $\mu c>3~or,~<-3$,\\saddle point for $-3<\mu c<3$ \\$~~$\end{tabular}&\begin{tabular}{@{}c@{}}$~~$\\For $\mu c=-3:$\\After shifting this critical point to the origin by taking the\\ transformation $x= X-\frac{\mu c}{3}$, $y= Y$ and by using CMT, the CM \\is given by $X=Y^2+\mathcal{O}(Y^4) $ and the flow on the CM is determined \\by $ Y'=-\frac{3}{2}Y^3+\mathcal{O}(Y^5)$. $Y'<0$ for $Y>0$ and $Y'>0$ for $Y<0$.\\ So, the critical point $E_3$ is a stable node (FIG.\ref{mu_c_3}(a)).\\$~~$\\ For $\mu c=3:$\\ The center manifold is given by $X=-Y^2+\mathcal{O}(Y^4) $ and the flow on\\ the center manifold is determined by $ Y'=\frac{3}{2}Y^3+\mathcal{O}(Y^5)$. $Y'<0$\\ for $Y<0$ and $Y'>0$ for $Y>0$. So, the critical point $E_3$ is a\\ saddle node (FIG.\ref{mu_c_3}(b)).\\$~~$\\For $\mu c>3~or,~\mu c<-3$:\\ Both eigenvalues $\lambda_1$ and $\lambda_2$ are negative and unequal. So by the\\ Hartman-Grobman theorem the critical point $E_3$ is a stable node.\\ $~~$\\ For $-3<\mu c<3:$\\$\lambda_1$ is negative and $\lambda_2$ is positive. So by the Hartman-Grobman theorem\\ the critical point $E_3$ is a saddle point, i.e., unstable.\\$~~$\end{tabular} \\ \hline
$E_4,~E_5$ &\begin{tabular}{@{}c@{}}$~~$\\Both are stable node for $\mu c=-3$,\\ saddle node for $\mu c=3$,\\ saddle point for $\mu c>3~or,~<-3$\\$~~$\end{tabular}& \begin{tabular}{@{}c@{}}$~~$\\For $\mu c=3$ and $\mu c=-3$:\\ The expressions of the center manifold and of the flow on the center\\ manifold are the same as those of the $\mu c=3$ and $\mu c=-3$\\ cases respectively for $E_3$.\\ $~~$\\ For $\mu c>3,~or~<-3$:\\ $\lambda_1$ is positive and $\lambda_2$ is negative.\\ Hence, by the Hartman-Grobman theorem we conclude that the critical\\ points $E_4$ and $E_5$ are both saddle points, i.e., unstable in nature.\\$~~$ \end{tabular}\\ \hline
\end{tabular}
\end{table}
Note that $\mu c\geq3$ and $\mu c\leq-3$ constitute the domain of existence of the critical points $E_4$ and $E_5$; for this reason we did not carry out the stability analysis of $E_4$ and $E_5$ for $\mu c\in (-3,3)$.
\begin{figure}[!]
\centering
\includegraphics[width=1\textwidth]{mu_c_3}
\caption{Vector field near the origin for the critical point $E_3$. L.H.S. for $\mu c=3$ and R.H.S. for $\mu c=-3$.}
\label{mu_c_3}
\end{figure}
\newpage
\subsubsection*{Case-(iii)$~$\underline{$\mu=0$ and $\lambda\neq 0$}}
In this case the autonomous system $(\ref{eq9}-\ref{eq11})$ changes into
\begin{eqnarray}
x'&=&-3x+\frac{3}{2}x(1-x^2-y^2)-\frac{\lambda y^2 z}{2},\label{eq40} \\
y'&=&\frac{3}{2}y(1-x^2-y^2)-\frac{\lambda xyz}{2},\label{eq41} \\
z'&=&-xz^2. \label{eq42}
\end{eqnarray}
Corresponding to the above autonomous system we have one line of critical points $P_1(0,0,z_c)$, where $z_c$ is any real number, together with the critical points $P_2(0,1,0)$ and $P_3(0,-1,0)$. The values of the cosmological parameters, the eigenvalues of the Jacobian matrix of the autonomous system $(\ref{eq40}-\ref{eq42})$ at these critical points, and the nature of the critical points $P_1$, $P_2$ and $P_3$ are the same as for the critical points $A_1$, $A_2$ and $A_3$ respectively, as shown in Table \ref{TI}. \newpage
\begin{center}
$1.~Critical~Point~P_1$
\end{center}
The Jacobian matrix at the critical point $P_1$ can be put as
\begin{equation}
\renewcommand{\arraystretch}{1.5}
J(P_1)=\begin{bmatrix}
-\frac{3}{2} & 0 & 0\\
~~0 & \frac{3}{2}& 0\\
-z_c^2 & 0 & 0
\end{bmatrix}.\label{eq45}
\end{equation}
The eigenvalues of the above matrix are $-\frac{3}{2}$, $\frac{3}{2}$ and $0$, and $\left[1, 0, \frac{2}{3}z_c^2\right]^T$, $[0, 1, 0]^T$ and $[0, 0, 1]^T$ are the corresponding eigenvectors respectively.
For a fixed $z_c$, we first shift the critical point $P_1$ to the origin by the coordinate transformation $x=X$, $y=Y$, $z=Z+z_c$, and argue as above for non-hyperbolic critical points. The center manifold can then be written as $(\ref{eq37}-\ref{eq38})$ and the flow on it is determined by (\ref{eq39}). As in the discussion of the stability of the critical point $C_2$, we conclude that the center manifold for the critical point $P_1$ also lies on the $Z$-axis, while the flow along it is trivial. Now, if we project the vector field on a plane parallel to the $XY$-plane, i.e., a plane $Z=constant$ (say), then the vector field is as shown in FIG.\ref{saddle_z_c}. So every point on the $Z$-axis is a saddle node.\bigbreak
\begin{figure}[h]
\centering
\includegraphics[width=0.7\textwidth]{saddle_z_c}
\caption{Vector field near every point on the $Z$-axis for the critical point $P_1$.}
\label{saddle_z_c}
\end{figure}
Again, to obtain the stability of the critical points in a plane parallel to the $xy$-plane, i.e., $z=constant=c$ (say), we take only the first two equations (\ref{eq40}) and (\ref{eq41}) of the autonomous system $(\ref{eq40}-\ref{eq42})$ and replace $z$ by $c$ in them. We then see that there exist three real and physically meaningful hyperbolic critical points $B_1(0,0)$, $B_2\left(-\frac{\lambda c}{6}, \sqrt{1+\frac{\lambda^2 c^2}{36}}\right)$ and $B_3\left(-\frac{\lambda c}{6}, -\sqrt{1+\frac{\lambda^2 c^2}{36}}\right)$. Obtaining the eigenvalues of the Jacobian matrix at these critical points and using the Hartman-Grobman theorem, we state their stability, together with the values of the cosmological parameters, in Table \ref{TB}.\bigbreak
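As a similar consistency check, at $B_2$ and $B_3$ one has $x=-\frac{\lambda c}{6}$ and $y^2=1+\frac{\lambda^2 c^2}{36}$, so that $1-x^2-y^2=-\frac{\lambda^2 c^2}{18}$ and the $y$-equation becomes
\begin{equation*}
\frac{3}{2}y\left(1-x^2-y^2\right)-\frac{\lambda c}{2}xy=-\frac{\lambda^2c^2}{12}\,y+\frac{\lambda^2c^2}{12}\,y=0,
\end{equation*}
the $x$-equation vanishing in the same way; hence $B_2$ and $B_3$ are exact critical points for all $\lambda$ and $c$.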
For the critical points $P_2$ and $P_3$ we have the same Jacobian matrix (\ref{eq21}); taking the same transformations (shifting and matrix) and using the same arguments as for $A_2$ and $A_3$ respectively, we conclude that the stability of $P_2$ and $P_3$ is the same as that of $A_2$ and $A_3$ respectively.
\begin{table}[!]
\caption{\label{TB}Table shows the eigenvalues $(\lambda_1, \lambda_2)$ of the Jacobian matrix, the stability and the values of the cosmological parameters for the critical points $(B_1-B_3)$.}
\begin{tabular}{|c|c c|c|c c c c|}
\hline
\hline
\begin{tabular}{@{}c@{}}$~~$\\$ Critical~Points $\\$~$\end{tabular} &$ ~~\lambda_1 $ & $~~\lambda_2$ & $ Stability$&$~\Omega_\phi~$& ~~$\omega_{\phi}$~~ &$\omega_{tot}$ & ~~$q$ \\ \hline\hline
\begin{tabular}{@{}c@{}}$~~$\\$B_1$\\$~$\end{tabular} &$-\frac{3}{2}$ & $\frac{3}{2}$&Saddle point (unstable)&$0$&Undetermined&$0$& $\frac{1}{2}$\\ \hline
\begin{tabular}{@{}c@{}}$~~$\\$B_2,~B_3$\\$~$\end{tabular}&$-3\left(1+\frac{\lambda^2 c^2}{18}\right)$&$-3\left(1+\frac{\lambda^2 c^2}{36}\right)$& \begin{tabular}{@{}c@{}}$~~$\\Stable star for $\lambda c=0$\\and\\stable node for $\lambda c\neq 0$\\$~~$\end{tabular}& $1$&$-\left(1+\frac{\lambda^2 c^2}{18}\right)$&$-\left(1+\frac{\lambda^2 c^2}{18}\right)$&$-\left(1+\frac{\lambda^2 c^2}{12}\right)$\\ \hline
\end{tabular}
\end{table}
\newpage
\subsubsection*{Case-(iv)$~$\underline{$\mu=0$ and $\lambda=0$}}
In this case the autonomous system $(\ref{eq9}-\ref{eq11})$ changes into
\begin{eqnarray}
x'&=&-3x+\frac{3}{2}x(1-x^2-y^2),\label{eq49} \\
y'&=&\frac{3}{2}y(1-x^2-y^2),\label{eq50} \\
z'&=&-xz^2\label{eq51}.
\end{eqnarray}
Corresponding to the above autonomous system we have three lines of critical points $S_1(0,0,z_c)$, $S_2(0,1,z_c)$ and $S_3(0,-1,z_c)$, where $z_c$ is any real number, which are analogous to $C_1$, $C_2$ and $C_3$. In this case also all the critical points are non-hyperbolic in nature. Taking the appropriate shifting transformations (for $S_1$, $(x=X,y=Y,z=Z+z_c)$; for $S_2$, $(x=X,y=Y+1,z=Z+z_c)$; and for $S_3$, $(x=X,y=Y-1,z=Z+z_c)$) as above, we conclude that for all the critical points the center manifold is given by $(\ref{eq37}-\ref{eq38})$ and the flow on it is determined by (\ref{eq39}), i.e., for all the critical points the center manifold lies on the $Z$-axis. Again, if we plot the vector field in a $Z=constant$ plane, we see that for the critical point $S_1$ every point on the $Z$-axis is a saddle node (as in FIG.\ref{saddle_z_c}), while for $S_2$ and $S_3$ every point on the $Z$-axis is a stable star (as in FIG.\ref{z_c}).
\subsection{Model 2: Power-law potential and
exponentially-dependent dark-matter particle mass \label{M2}}
For this choice the evolution equations of Section \ref{BES} can be converted into the following autonomous system:
\begin{eqnarray}
x'&=&-3x+\frac{3}{2}x(1-x^2-y^2)-\frac{\lambda y^2 z}{2}-\sqrt{\frac{3}{2}}\mu(1+x^2-y^2),\label{eq54} \\
y'&=&\frac{3}{2}y(1-x^2-y^2)-\frac{\lambda xyz}{2},\label{eq55} \\
z'&=&-xz^2.\label{eq56}
\end{eqnarray}
We have five critical points $L_1$, $L_2$, $L_3$, $L_4$ and $L_5$ corresponding to the above autonomous system. The set of critical points, their existence and the values of the cosmological parameters at these critical points are shown in Table \ref{TPLE}, and the eigenvalues of the Jacobian matrix of the autonomous system $(\ref{eq54}-\ref{eq56})$ at these critical points, together with their nature, are shown in Table \ref{TNE}.\par
Here we are only concerned with the stability of the critical points for $\mu\neq 0$ and $\lambda\neq 0$, because for the other possible cases we obtain results of the same type as those for Model $1$.
\begin{table}[h]
\caption{\label{TPLE}Table shows the set of critical points, their existence, and the values of the cosmological parameters at these critical points.}
\begin{tabular}{|c|c|c c c|c|c|c| c|}
\hline
\hline
\begin{tabular}{@{}c@{}}$~~$\\$~Critical ~Points$\\$~~$\end{tabular} &$Existence$&$x$&$y$&$z~~$& $~\Omega_\phi~$&$~\omega_\phi~$ &$~\omega_{tot}~$& $~q~$ \\ \hline\hline
\begin{tabular}{@{}c@{}}$~~$\\ $L_1$\\$~~$\end{tabular}& For all $\mu$ and $\lambda$&0&1&0 & 1 & $-1$ & $-1$&$-1$\\ \hline
\begin{tabular}{@{}c@{}}$~~$\\ $L_2$\\$~~$\end{tabular}& For all $\mu$ and $\lambda$&0&$-1$&0 & 1 & $-1$ & $-1$&$-1$\\ \hline
$L_3$ & \begin{tabular}{@{}c@{}}$~~$\\For all \\$\mu\in\left(-\infty,-\sqrt{\frac{3}{2}}\right]\cup\left[\sqrt{\frac{3}{2}},\infty\right)$\\and all $\lambda$\\$~~$\end{tabular}&$-\frac{1}{\mu}\sqrt{\frac{3}{2}}$&$\sqrt{1-\frac{3}{2\mu^2}}$&0&$1-\frac{3}{\mu^2}$&$\frac{\mu^2}{3-\mu^2}$&$-1$&$-1$\\ \hline
$L_4$ & \begin{tabular}{@{}c@{}}$~~$\\For all \\$\mu\in\left(-\infty,-\sqrt{\frac{3}{2}}\right]\cup\left[\sqrt{\frac{3}{2}},\infty\right)$\\and all $\lambda$\\$~~$\end{tabular}&$-\frac{1}{\mu}\sqrt{\frac{3}{2}}$&$-\sqrt{1-\frac{3}{2\mu^2}}$&0&$1-\frac{3}{\mu^2}$&$\frac{\mu^2}{3-\mu^2}$&$-1$&$-1$\\ \hline
\begin{tabular}{@{}c@{}}$~~$\\ $L_5$\\$~~$\end{tabular}& For all $\mu$ and $\lambda$&$-\sqrt{\frac{2}{3}}\mu$&$0$&$0$&$-\frac{2}{3}\mu^2$ & $1$&$-\frac{2}{3}\mu^2$&$\frac{1}{2}\left(1-2\mu^2\right)$ \\ \hline
\end{tabular}
\end{table}
\begin{table}[h]
\caption{\label{TNE}The eigenvalues $(\lambda_1,\lambda_2,\lambda_3)$ of the Jacobian matrix corresponding to the autonomous system $(\ref{eq54}-\ref{eq56})$ at those critical points $(L_1-L_5)$ and the nature of the critical points}
\begin{tabular}{|c|c c c|c|}
\hline
\hline
\begin{tabular}{@{}c@{}}$~~$\\$~Critical ~Points$\\$~~$\end{tabular} &$\lambda_1$&$\lambda_2$&$\lambda_3$&$Nature~ of~ critical~ Points$ \\ \hline\hline
\begin{tabular}{@{}c@{}}$~~$\\$L_1$\\$~~$\end{tabular}&$-3$&$-3$&$0$&Non-hyperbolic\\ \hline
\begin{tabular}{@{}c@{}}$~~$\\$L_2$\\$~~$\end{tabular}&$-3$&$-3$&$0$&Non-hyperbolic\\ \hline
\begin{tabular}{@{}c@{}}$~~$\\$L_3$\\$~~$\end{tabular}&$-\frac{3}{2}\left(1+\frac{1}{\mu}\sqrt{-6+5\mu^2}\right)$&$-\frac{3}{2}\left(1-\frac{1}{\mu}\sqrt{-6+5\mu^2}\right)$&$0$&Non-hyperbolic\\ \hline
\begin{tabular}{@{}c@{}}$~~$\\$L_4$\\$~~$\end{tabular}&$-\frac{3}{2}\left(1+\frac{1}{\mu}\sqrt{-6+5\mu^2}\right)$&$-\frac{3}{2}\left(1-\frac{1}{\mu}\sqrt{-6+5\mu^2}\right)$&$0$&Non-hyperbolic\\ \hline
\begin{tabular}{@{}c@{}}$~~$\\$L_5$\\$~~$\end{tabular}&$-\frac{3}{2}$&$\frac{3}{2}$&$0$&Non-hyperbolic \\ \hline
\end{tabular}
\end{table}
\begin{center}
$1.~Critical~Point~L_1$
\end{center}
The Jacobian matrix corresponding to the autonomous system $(\ref{eq54}-\ref{eq56})$ at the critical point $L_1$ can be put as
\begin{equation}
\renewcommand{\arraystretch}{1.5}
J(L_1)=\begin{bmatrix}
-3&\sqrt{6}\mu&-\frac{\lambda}{2}\\
~~0&-3&~~0\\
~~0 & ~~0& ~~0
\end{bmatrix}.
\end{equation}
The eigenvalues of $J(L_1)$ are $-3$, $-3$ and $0$, and $[1,0,0]^T$, $\left[-\frac{\lambda}{6}, 0,1\right]^T$ are the eigenvectors corresponding to the eigenvalues $-3$ and $0$ respectively. The algebraic multiplicity of the eigenvalue $-3$ is $2$ but the dimension of the corresponding eigenspace is $1$, i.e., the algebraic and geometric multiplicities of the eigenvalue $-3$ are not equal; so the Jacobian matrix $J(L_1)$ is not diagonalizable. In determining the center manifold for this critical point, the only obstacle is the presence of the nonzero element at the top of the third column of the Jacobian matrix. First we take the coordinate transformation $x=X,~y=Y+1,~z=Z$, which shifts the critical point $L_1$ to the origin. Next we introduce another coordinate system that removes the term at the top of the third column. Since there are only two linearly independent eigenvectors, we need one more linearly independent column vector to construct the new coordinate system, and $[0,1,0]^T$ is a column vector linearly independent of the eigenvectors of $J(L_1)$. The new coordinate system $(u,v,w)$ can then be written in terms of $(X,Y,Z)$ as in (\ref{eq24})
and in this new coordinate system the equations $(\ref{eq54}-\ref{eq56})$ are transformed into
\begin{equation}\renewcommand{\arraystretch}{1.5}
\begin{bmatrix}
u'\\
v'\\
w'
\end{bmatrix}\renewcommand{\arraystretch}{1.5}
=\begin{bmatrix}
-3&\sqrt{6}\mu&0\\
~~0&-3&~~0\\
~~0 & ~~0& ~~0
\end{bmatrix}
\begin{bmatrix}
u\\
v\\
w
\end{bmatrix}
+\renewcommand{\arraystretch}{1.5}
\begin{bmatrix}
non\\
linear\\
terms
\end{bmatrix}.
\end{equation}
By arguments similar to those used in the stability analysis of the critical point $A_2$, the center manifold can be written as $(\ref{eqn27}-\ref{eqn28})$
and the flow on the center manifold is determined by (\ref{eq29}).
As the expressions of the center manifold and of the flow are the same as for the critical point $A_2$, the stability of the critical point $L_1$ is the same as the stability of $A_2$.
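The non-diagonalizability of $J(L_1)$ noted above can be cross-checked with a minimal SymPy sketch (ours; it assumes $\mu\neq0$):
\begin{verbatim}
import sympy as sp

mu, lam = sp.symbols('mu lambda', nonzero=True)
J_L1 = sp.Matrix([[-3, sp.sqrt(6)*mu, -lam/2],
                  [0, -3, 0],
                  [0, 0, 0]])
# The eigenvalue -3 has algebraic multiplicity 2 but only the single
# eigenvector [1, 0, 0]^T, so J(L_1) is not diagonalizable.
print(J_L1.is_diagonalizable())   # False
print(J_L1.eigenvects())
\end{verbatim}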
\begin{center}
$2.~Critical~Point~L_2$
\end{center}
After shifting the critical point to the origin (by the shifting transformation $(x=X,y=Y-1,z=Z)$ and the matrix transformation (\ref{eq24})) and following the arguments given for the analysis of $L_1$, the center manifold can be expressed as $(\ref{eqn30}-\ref{eqn31})$ and the flow on it is determined by (\ref{eqn32}). So the stability of the critical point $L_2$ is the same as the stability of $A_3$.
\begin{center}
$3.~Critical~Point~L_3$
\end{center}
The Jacobian matrix corresponding to the autonomous system $(\ref{eq54}-\ref{eq56})$ at the critical point $L_3$ can be put as
\begin{equation}
\renewcommand{\arraystretch}{3}
J(L_3)=\begin{bmatrix}
-\frac{9}{2\mu^2}&\sqrt{1-\frac{3}{2\mu^2}}\left(\frac{3}{\mu}\sqrt{\frac{3}{2}}+\sqrt{6}\mu\right)&-\frac{\lambda}{2}\left(1-\frac{3}{2\mu^2}\right)\\
\frac{3}{\mu}\sqrt{\frac{3}{2}}\sqrt{1-\frac{3}{2\mu^2}}&-3\left(1-\frac{3}{2\mu^2}\right)&\frac{\lambda}{2\mu}\sqrt{\frac{3}{2}}\sqrt{1-\frac{3}{2\mu^2}}\\
~~0 & ~~0& ~~0
\end{bmatrix}.
\end{equation}
The eigenvalues of the Jacobian matrix $J(L_3)$ are shown in Table \ref{TNE}. From the existence condition of the critical point $L_3$ we conclude that the eigenvalues of $J(L_3)$ are always real. Since the critical point $L_3$ exists for $\mu\leq -\sqrt{\frac{3}{2}}$ or $\mu\geq \sqrt{\frac{3}{2}}$, our aim is to determine the stability in every region of $\mu$ for at least one representative choice of $\mu$ in that region. For this reason we consider four choices of $\mu$: we first determine the stability of this critical point at $\mu=\pm\sqrt{\frac{3}{2}}$; then, for $\mu< -\sqrt{\frac{3}{2}}$, we determine the stability of $L_3$ at $\mu=-\sqrt{3}$, and for $\mu>\sqrt{\frac{3}{2}}$, at $\mu=\sqrt{3}$.\par
For $\mu=\pm\sqrt{\frac{3}{2}}$, the Jacobian matrix $J(L_3)$ converts into
$$
\begin{bmatrix}
-3&0&0\\~~0&0&0\\~~0&0&0
\end{bmatrix}
$$
and as the critical point $L_3$ becomes $(\mp 1,0,0)$, we first take the transformation $x=X\mp 1,~ y= Y,~ z=Z$ so that $L_3$ moves to the origin. As the critical point is non-hyperbolic in nature, we use CMT to determine its stability. From Center Manifold Theory there exists a continuously differentiable function
$h:\mathbb{R}^2\rightarrow\mathbb{R}$ such that $X=h(Y,Z)=aY^2+bYZ+cZ^2+higher~order~terms,$ where $a,~b,~c\in\mathbb{R}$. \\
Now differentiating both sides with respect to $N$, we get
\begin{eqnarray}
\frac{dX}{dN}=[2aY+bZ ~~~~ bY+2cZ]\begin{bmatrix}
\frac{dY}{dN}\\
~\\
\frac{dZ}{dN}\\
\end{bmatrix}\label{equn52}
\end{eqnarray}
Comparing the L.H.S. and R.H.S. of (\ref{equn52}) we get
$a=\pm1$ (for $\mu=\pm\sqrt{\frac{3}{2}}$ respectively), $b=0$ and $c=0$, i.e., the center manifold can be written as
\begin{eqnarray}
X&=&\pm Y^2+higher~order~terms\label{eq65}
\end{eqnarray}
and the flow on the center manifold is determined by
\begin{eqnarray}
\frac{dY}{dN}&=&\pm\frac{\lambda}{2}YZ+higher~order~terms,\label{eq66}\\
\frac{dZ}{dN}&=&\pm Z^2+higher~order~terms\label{eq67}.
\end{eqnarray}
In CMT we are only concerned with the non-zero coefficients of the lowest-order terms, since the analysis is restricted to an arbitrarily small neighborhood of the origin, and here the lowest-order term of the center manifold depends only on $Y$. So we draw the vector field near the origin only in the $XY$-plane, i.e., the nature of the vector field depends on $Z$ implicitly rather than explicitly. We now try to write the flow equations $(\ref{eq66}-\ref{eq67})$ in terms of $Y$ only. To this end, dividing the corresponding sides of (\ref{eq66}) by those of (\ref{eq67}), we get
\begin{align*}
&\frac{dY}{dZ}=\frac{\lambda}{2}\frac{Y}{Z}\\
\implies& Y=CZ^{\lambda/2},~\mbox{i.e., }Z=\left(\frac{Y}{C}\right)^{2/\lambda},~~\mbox{where $C$ is a positive arbitrary constant,}
\end{align*}
on separating the variables and integrating. Substituting this into either of $(\ref{eq66})$ or $(\ref{eq67})$, we get
\begin{align}
\frac{dY}{dN}=\frac{\lambda}{2C^{2/\lambda}}Y^{1+2/\lambda}
\end{align}
As the power of $Y$ cannot be negative or fractional, we have only two admissible choices, $\lambda=1$ or $\lambda=2$. In both cases the origin is a saddle node, i.e., unstable in nature (FIG.\ref{L_21} is for $\mu=\sqrt{\frac{3}{2}}$ and FIG.\ref{L_2_1_1} is for $\mu=-\sqrt{\frac{3}{2}}$). Hence, for $\mu=\pm \sqrt{\frac{3}{2}}$, the critical point $L_3$ in the old coordinate system is unstable due to its saddle nature.\bigbreak
\begin{figure}[h]
\centering
\includegraphics[width=1\textwidth]{L21}
\caption{Vector field near the origin when $\mu=\sqrt{\frac{3}{2}}$, for the critical point $L_3$. L.H.S. phase plot is for $\lambda=1$ and R.H.S. phase plot is for $\lambda=2$.}
\label{L_21}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=1\textwidth]{L211}
\caption{Vector field near the origin when $\mu=-\sqrt{\frac{3}{2}}$, for the critical point $L_3$. L.H.S. phase plot is for $\lambda=1$ and R.H.S. phase plot is for $\lambda=2$.}
\label{L_2_1_1}
\end{figure}
For $\mu=\sqrt{3}$, the Jacobian matrix $J(L_3)$ converts into
$$ \renewcommand{\arraystretch}{1.5}
\begin{bmatrix}
-\frac{3}{2}&~~\frac{9}{2}&-\frac{\lambda}{4}\\~~\frac{3}{2}&-\frac{3}{2}&~~\frac{\lambda}{4}\\~~0&~~0&~~0
\end{bmatrix}.
$$
The eigenvalues of the above Jacobian matrix are $-\frac{3}{2}(1+\sqrt{3})$, $-\frac{3}{2}(1-\sqrt{3})$ and $0$, and the corresponding eigenvectors are $[-\sqrt{3},1,0]^T$, $[\sqrt{3},1,0]^T$ and $\left[-\frac{\lambda}{6},0,1\right]^T$ respectively. Since for $\mu=\sqrt{3}$ the critical point $L_3$ becomes $\left(-\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}},0\right)$, we first take the transformations $x= X-\frac{1}{\sqrt{2}}$, $y= Y+\frac{1}{\sqrt{2}}$ and $z= Z$, which shift the critical point to the origin. By using the eigenvectors of the above Jacobian matrix, we introduce a new coordinate system $(u,v,w)$ in terms of $(X,Y,Z)$ as
\begin{equation}\renewcommand{\arraystretch}{1.5}
\begin{bmatrix}
u\\
v\\
w
\end{bmatrix}\renewcommand{\arraystretch}{1.5}
=\begin{bmatrix}
-\frac{1}{2\sqrt{3}} & \frac{1}{2} & -\frac{\lambda}{12\sqrt{3}} \\
\frac{1}{2\sqrt{3}} & \frac{1}{2} & \frac{\lambda}{12\sqrt{3}}\\
0 & 0 & 1
\end{bmatrix}\renewcommand{\arraystretch}{1.5}
\begin{bmatrix}
X\\
Y\\
Z
\end{bmatrix}
\end{equation}
and in these new coordinates the equations $(\ref{eq54}-\ref{eq56})$ are transformed into
\begin{equation} \renewcommand{\arraystretch}{1.5}
\begin{bmatrix}
-u'+v'\\
u'+v'\\
w'
\end{bmatrix}
=\begin{bmatrix}
\frac{3}{2}(1+\sqrt{3})& -\frac{3}{2}(1-\sqrt{3}) & 0 \\
-\frac{3}{2}(1+\sqrt{3}) & -\frac{3}{2}(1-\sqrt{3}) & 0 \\
~~0 & ~~0 & 0
\end{bmatrix}
\begin{bmatrix}
u\\
v\\
w
\end{bmatrix}
+
\begin{bmatrix}
non\\
linear\\
terms
\end{bmatrix}.
\end{equation}
Now, adding the first and second rows of the above matrix equation and dividing both sides by $2$, we get $v'$; subtracting the first row from the second and dividing both sides by $2$, we get $u'$. Finally, in the new coordinate system the autonomous system can be written in matrix form as
\begin{equation} \renewcommand{\arraystretch}{1.5}
\begin{bmatrix}
u'\\
v'\\
w'
\end{bmatrix}
=\begin{bmatrix}
-\frac{3}{2}(1+\sqrt{3})& 0 & 0 \\
0 & -\frac{3}{2}(1-\sqrt{3}) & 0 \\
0 & ~~0 & 0
\end{bmatrix}
\begin{bmatrix}
u\\
v\\
w
\end{bmatrix}
+
\begin{bmatrix}
non\\
linear\\
terms
\end{bmatrix}.
\end{equation}
Following arguments similar to those given in the analysis of $A_2$, the center manifold can be expressed as
\begin{align}
u&=\frac{2}{3(1+\sqrt{3})}\left\{\frac{(\sqrt{3}-1)\lambda^2-4\lambda}{48\sqrt{6}}\right \}w^2+\mathcal{O}(w^3),\label{eqn72}\\
v&=-\frac{2}{3(\sqrt{3}-1)}\left\{\frac{(\sqrt{3}+1)\lambda^2+4\lambda}{48\sqrt{6}}\right \}w^2+\mathcal{O}(w^3)\label{eqn73}
\end{align}
and the flow on the center manifold is determined by
\begin{align}
w'&=\frac{1}{\sqrt{2}}w^2+\mathcal{O}(w^3)\label{eqn74}.
\end{align}
From the flow equation we can easily conclude that the origin is a saddle node and unstable in nature. The vector field near the origin in $uw$-plane is shown as in FIG.\ref{L_22} and the vector field near the origin in $vw$-plane is shown as in FIG.\ref{L_2_2}. Hence, in the old coordinate system $(x,y,z)$, for $\mu=\sqrt{3}$ the critical point $L_3$ is unstable due to its saddle nature.
\begin{figure}[h]
\centering
\includegraphics[width=1\textwidth]{L222}
\caption{Vector field near the origin in $uw$-plane when $\mu=\sqrt{3}$, for the critical points $L_3$ and $L_4$. For the critical point $L_3$, the phase plot (a) is for $\lambda<0$ or $\lambda>\frac{4}{\sqrt{3}-1}$ and the phase plot (b) is for $0<\lambda<\frac{4}{\sqrt{3}-1}$. For the critical point $L_4$, the phase plot (a) is for $0<\lambda<\frac{4}{\sqrt{3}-1}$ and the phase plot (b) is for $\lambda<0$ or $\lambda>\frac{4}{\sqrt{3}-1}$.}
\label{L_22}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=1\textwidth]{L22}
\caption{Vector field near the origin in $vw$-plane when $\mu=\sqrt{3}$, for the critical points $L_3$ and $L_4$. For the critical point $L_3$, the phase plot (a) is for $\lambda<-\frac{4}{\sqrt{3}+1}$ or $\lambda>0$ and the phase plot (b) is for $-\frac{4}{\sqrt{3}+1}<\lambda<0$. For the critical point $L_4$, the phase plot (a) is for $-\frac{4}{\sqrt{3}+1}<\lambda<0$ and the phase plot (b) is for $\lambda<-\frac{4}{\sqrt{3}+1}$ or $\lambda>0$.}
\label{L_2_2}
\end{figure}
Lastly, for $\mu=-\sqrt{3}$, $J(L_3)$ has the same eigenvalues $-\frac{3}{2}(1+\sqrt{3})$, $-\frac{3}{2}(1-\sqrt{3})$ and $0$, with corresponding eigenvectors $[\sqrt{3},1,0]^T$, $[-\sqrt{3},1,0]^T$ and $\left[-\frac{\lambda}{6},0,1\right]^T$ respectively. Following the arguments given for the $\mu=\sqrt{3}$ case, we get the same expressions $(\ref{eqn72}-\ref{eqn73})$ for the center manifold and (\ref{eqn74}) for the flow on it. So in this case also we conclude that the critical point $L_3$ is a saddle node, unstable in nature.
\newpage
\begin{center}
$4.~Critical~Point~L_4$
\end{center}
The Jacobian matrix corresponding to the autonomous system $(\ref{eq54}-\ref{eq56})$ at the critical point $L_4$ can be put as
\begin{equation}
\renewcommand{\arraystretch}{3}
J(L_4)=\begin{bmatrix}
-\frac{9}{2\mu^2}&-\sqrt{1-\frac{3}{2\mu^2}}\left(\frac{3}{\mu}\sqrt{\frac{3}{2}}+\sqrt{6}\mu\right)&-\frac{\lambda}{2}\left(1-\frac{3}{2\mu^2}\right)\\
-\frac{3}{\mu}\sqrt{\frac{3}{2}}\sqrt{1-\frac{3}{2\mu^2}}&-3\left(1-\frac{3}{2\mu^2}\right)&-\frac{\lambda}{2\mu}\sqrt{\frac{3}{2}}\sqrt{1-\frac{3}{2\mu^2}}\\
~~0 & ~~0& ~~0
\end{bmatrix}.
\end{equation}
For this critical point also we analyze the stability for the four choices of $\mu$ used above, i.e., $\mu=\pm\sqrt{\frac{3}{2}}$, $\mu=\sqrt{3}$ and $\mu=-\sqrt{3}$. \par
For $\mu=\pm\sqrt{\frac{3}{2}}$, we get the same expression (\ref{eq65}) for the center manifold and $(\ref{eq66}-\ref{eq67})$ for the flow on it. So in this case the critical point $L_4$ is unstable due to its saddle nature. \par
For $\mu=\sqrt{3}$, following the corresponding arguments as for $L_3$, the center manifold can be written as
\begin{align}
u&=\frac{2}{3(1+\sqrt{3})}\left\{\frac{(1-\sqrt{3})\lambda^2+4\lambda}{48\sqrt{6}}\right \}w^2+\mathcal{O}(w^3),\label{eqn76}\\
v&=\frac{2}{3(\sqrt{3}-1)}\left\{\frac{(\sqrt{3}+1)\lambda^2+4\lambda}{48\sqrt{6}}\right \}w^2+\mathcal{O}(w^3)\label{eqn77}
\end{align}
and the flow on the center manifold is determined by
\begin{align}
w'&=\frac{1}{\sqrt{2}}w^2+\mathcal{O}(w^3)\label{eqn78}.
\end{align}
From the flow equation we conclude that the origin is a saddle node, and hence in the old coordinate system $L_4$ is a saddle node, i.e., unstable in nature. The vector field near the origin in the $uw$-plane is shown in FIG.\ref{L_22} and that in the $vw$-plane in FIG.\ref{L_2_2}.\par
For $\mu=-\sqrt{3}$ we get the same expressions for the center manifold and the flow equation as in the $\mu=\sqrt{3}$ case.
\begin{center}
$5.~Critical~Point~L_5$
\end{center}
First we shift the critical point $L_5$ to the origin by the transformation $x= X-\sqrt{\frac{2}{3}}\mu$, $y=Y$, $z= Z$. To avoid repeating the arguments given for the previous critical points, we only state the main results, namely the center manifold and the flow equation, for this critical point. The center manifold can be written as
\begin{align}
X&=0,\\Y&=0
\end{align}
and the flow on the center manifold can be obtained as
\begin{align}
\frac{dZ}{dN}=\sqrt{\frac{2}{3}}\mu Z^2 +\mathcal{O}(Z^3).
\end{align}
From the expressions of the center manifold we conclude that the center manifold lies on the $Z$-axis. From the flow on the center manifold (FIG.\ref{z_center_manifold}) we conclude that the origin is unstable both for $\mu>0$ and for $\mu<0$: since $Z'$ has the same sign on both sides of the origin, trajectories approach it from one side and recede from it on the other (saddle-node behaviour).
\begin{figure}[h]
\centering
\includegraphics[width=1\textwidth]{z_center_manifold}
\caption{Flow on the center manifold near the origin for the critical point $L_5$. (a) is for $\mu>0$ and (b) is for $\mu<0$.}
\label{z_center_manifold}
\end{figure}
\subsection{Model 3: Exponential potential and
power-law-dependent dark-matter particle mass \label{M3}}
In this case the evolution equations of Section \ref{BES} can be converted into the following autonomous system:
\begin{eqnarray}
x'&=&-3x+\frac{3}{2}x(1-x^2-y^2)-\sqrt{\frac{3}{2}}\lambda y^2-\frac{\mu}{2}z(1+x^2-y^2),\label{eq82} \\
y'&=&\frac{3}{2}y(1-x^2-y^2)-\sqrt{\frac{3}{2}}\lambda xy,\label{eq83} \\
z'&=&-xz^2.\label{eq84}
\end{eqnarray}
We have three physically meaningful critical points $R_1$, $R_2$ and $R_3$ corresponding to the above autonomous system. The set of critical points, their existence and the values of the cosmological parameters at these critical points are shown in Table \ref{TPRE}, and the eigenvalues of the Jacobian matrix of the autonomous system $(\ref{eq82}-\ref{eq84})$ at these critical points, together with their nature, are shown in Table \ref{TNRE}.\par
Here also we are only concerned with the stability of the critical points for $\mu\neq 0$ and $\lambda\neq 0$, because for the other possible cases we obtain results of the same type as those for Model $1$.
\begin{table}[h]
\caption{\label{TPRE}Table shows the set of critical points, their existence, and the values of the cosmological parameters at these critical points.}
\begin{tabular}{|c|c|c c c|c|c|c| c|}
\hline
\hline
\begin{tabular}{@{}c@{}}$~~$\\$~Critical ~Points$\\$~~$\end{tabular} &$Existence$&$x$&$y$&$z~~$& $~\Omega_\phi~$&$~\omega_\phi~$ &$~\omega_{tot}~$& $~q~$ \\ \hline\hline
\begin{tabular}{@{}c@{}}$~~$\\$R_1$\\$~~$\end{tabular}& For all $\mu$ and $\lambda$&$0$&$0$&$0$&$0$&Undetermined&$0$&$\frac{1}{2}$\\ \hline
\begin{tabular}{@{}c@{}}$~~$\\$R_2$\\$~~$\end{tabular}&For all $\mu$ and $\lambda$&$-\frac{\lambda}{\sqrt{6}}$&$\sqrt{1+\frac{\lambda^2}{6}}$&$0$&$1$&$-1-\frac{\lambda^2}{3}$&$-1-\frac{\lambda^2}{3}$&$-\frac{1}{2}\left(2+\lambda^2\right)$\\ \hline
\begin{tabular}{@{}c@{}}$~~$\\$R_3$\\$~~$\end{tabular}&For all $\mu$ and $\lambda$&$-\frac{\lambda}{\sqrt{6}}$&$-\sqrt{1+\frac{\lambda^2}{6}}$&$0$&$1$&$-1-\frac{\lambda^2}{3}$&$-1-\frac{\lambda^2}{3}$&$-\frac{1}{2}\left(2+\lambda^2\right)$\\ \hline
\end{tabular}
\end{table}
\begin{table}[h]
\caption{\label{TNRE}The eigenvalues $(\lambda_1,\lambda_2,\lambda_3)$ of the Jacobian matrix corresponding to the autonomous system $(\ref{eq82}-\ref{eq84})$ at those critical points $(R_1-R_3)$ and the nature of the critical points.}
\begin{tabular}{|c|c c c|c|}
\hline
\hline
\begin{tabular}{@{}c@{}}$~~$\\$~Critical ~Points$\\$~~$\end{tabular} &$\lambda_1$&$\lambda_2$&$\lambda_3$&$Nature~ of~ critical~ Points$ \\ \hline\hline
\begin{tabular}{@{}c@{}}$~~$\\$R_1$\\$~~$\end{tabular}&$-\frac{3}{2}$&$\frac{3}{2}$&$0$&Non-hyperbolic\\ \hline
\begin{tabular}{@{}c@{}}$~~$\\$R_2$\\$~~$\end{tabular}&$-(3+\lambda^2)$&$-\left(3+\frac{\lambda^2}{2}\right)$&$0$&Non-hyperbolic\\ \hline
\begin{tabular}{@{}c@{}}$~~$\\$R_3$\\$~~$\end{tabular}&$-(3+\lambda^2)$&$-\left(3+\frac{\lambda^2}{2}\right)$&$0$&Non-hyperbolic\\ \hline
\end{tabular}
\end{table}
To avoid similar arguments, we only state the stability of each critical point, together with the reasoning behind it, in Table \ref{T_R_stability}.
\begin{table}[!]
\caption{\label{T_R_stability}Table shows the stability and the reason behind the stability of the critical points $(R_1-R_3)$.}
\begin{tabular}{|c|c|c|}
\hline
\hline
\begin{tabular}{@{}c@{}}$~~$\\$ CPs $\\$~$\end{tabular} &$Stability$& $Reason~behind~the~stability$ \\ \hline\hline
\begin{tabular}{@{}c@{}}$~~$\\$R_1$\\$~$\end{tabular}&\begin{tabular}{@{}c@{}}$~~$\\For $\mu>0$, $R_1$ is a saddle node\\$~~$\\ and \\$~~$\\for $\mu<0$, $R_1$ is a stable node\\$~$\end{tabular}&\begin{tabular}{@{}c@{}}$~~$\\After introducing the coordinate transformation (\ref{eq15}),\\ we will get the same expression of center manifold\\ $(\ref{eq18}-\ref{eq19})$ and the flow on the center manifold is\\ determined by $(\ref{eq20})$(FIG.\ref{A_1}).\\$~$\end{tabular}\\ \hline
\begin{tabular}{@{}c@{}}$~~$\\$R_2,R_3$\\$~$\end{tabular}&\begin{tabular}{@{}c@{}}$~~$\\For $\lambda>0$ or $\lambda<0$, \\$~~$\\$R_2$ and $R_3$ both are unstable\\$~$\end{tabular}& \begin{tabular}{@{}c@{}}$~~$\\After shifting $R_2$ and $R_3$ to the origin by using coordinate\\ transformation $\left(x=X-\frac{\lambda}{\sqrt{6}},y=Y+\sqrt{1+\frac{\lambda^2}{6}},z=Z\right)$ and\\ $\left(x=X-\frac{\lambda}{\sqrt{6}},y=Y-\sqrt{1+\frac{\lambda^2}{6}},z=Z \right)$ respectively,\\ we can conclude that the center manifold is lying on $Z$-axis\\ and the flow on the center manifold is determined by\\
$\frac{dZ}{dN}=\frac{\lambda}{\sqrt{6}}Z^2+\mathcal{O}(Z^3)$.\\$~~$\\ The origin is unstable for both of the cases $\lambda>0$\\ (same as FIG.\ref{z_center_manifold}\textbf{(a)}) and $\lambda<0$ (same as FIG.\ref{z_center_manifold}\textbf{(b)}).\\$~$\end{tabular}\\ \hline
\end{tabular}
\end{table}
\subsection{Model 4: Exponential potential and
exponentially-dependent dark-matter particle mass \label{M4}}
For this choice the evolution equations of Section \ref{BES} can be converted into the following autonomous system:
\begin{eqnarray}
x'&=&-3x+\frac{3}{2}x(1-x^2-y^2)-\sqrt{\frac{3}{2}}\lambda y^2-\sqrt{\frac{3}{2}}\mu(1+x^2-y^2),\label{eq85} \\
y'&=&\frac{3}{2}y(1-x^2-y^2)-\sqrt{\frac{3}{2}}\lambda xy.\label{eq86}
\end{eqnarray}
We ignore the equation for the auxiliary variable $z$ in the above autonomous system because the right-hand sides of $x'$ and $y'$ do not depend on $z$.\par
\begin{figure}[h]
\centering
\includegraphics[width=1\textwidth]{M_1}
\caption{Vector field near the origin for the critical point $M_1$. L.H.S. for $\mu>0$ and R.H.S. for $\mu<0$.}
\label{M_1}
\end{figure}
Corresponding to the above autonomous system we have four critical points $M_1$, $M_2$, $M_3$ and $M_4$. The set of critical points, their existence and the values of the cosmological parameters at those critical points for the autonomous system $(\ref{eq85}-\ref{eq86})$ are shown in Table \ref{TPME}, while the eigenvalues of the Jacobian matrix at those critical points and the nature of the critical points are shown in Table \ref{TNME}.\bigbreak
\begin{table}[h]
\caption{\label{TPME}Table shows the set of critical points, their existence and the values of the cosmological parameters at those critical points.}
\begin{tabular}{|c|c|c c|c|c|c| c|}
\hline
\hline
\begin{tabular}{@{}c@{}}$~~$\\$~Critical ~Points$\\$~~$\end{tabular} &$Existence$&$x$&$y$& $~\Omega_X~$&$~\omega_X~$ &$~\omega_{tot}~$& $~q~$ \\ \hline\hline
\begin{tabular}{@{}c@{}}$~~$\\ $M_1$\\$~~$\end{tabular}& For all $\mu$ and $\lambda$&$-\sqrt{\frac{2}{3}}\mu$&$0$&$-\frac{2}{3}\mu^2$ & $1$&$-\frac{2}{3}\mu^2$&$\frac{1}{2}\left(1-2\mu^2\right)$ \\ \hline
\begin{tabular}{@{}c@{}}$~~$\\$M_2$\\$~~$\end{tabular}&For all $\mu$ and $\lambda$&$-\frac{\lambda}{\sqrt{6}}$&$\sqrt{1+\frac{\lambda^2}{6}}$&$1$&$-1-\frac{\lambda^2}{3}$&$-1-\frac{\lambda^2}{3}$&$-\frac{1}{2}\left(2+\lambda^2\right)$\\ \hline
\begin{tabular}{@{}c@{}}$~~$\\$M_3$\\$~~$\end{tabular}&For all $\mu$ and $\lambda$&$-\frac{\lambda}{\sqrt{6}}$&$-\sqrt{1+\frac{\lambda^2}{6}}$&$1$&$-1-\frac{\lambda^2}{3}$&$-1-\frac{\lambda^2}{3}$&$-\frac{1}{2}\left(2+\lambda^2\right)$\\ \hline
\begin{tabular}{@{}c@{}}$~~$\\$M_4$\\$~~$\end{tabular}&\begin{tabular}{@{}c@{}}$~~$\\For $\mu\neq\lambda$\\and\\ $\min\{\mu^2-\frac{3}{2},\lambda^2+3\}\geq\lambda\mu$\\$~~$\end{tabular}&$\frac{\sqrt{\frac{3}{2}}}{\lambda-\mu}$&$\frac{\sqrt{-\frac{3}{2}-\mu(\lambda-\mu)}}{|\lambda-\mu|}$&$\frac{\mu^2-\lambda\mu-3}{(\lambda-\mu)^2}$&$\frac{\mu(\lambda-\mu)}{\mu^2-\lambda\mu-3}$&$\frac{\mu}{\lambda-\mu}$&$\frac{1}{2}\left(\frac{\lambda+2\mu}{\lambda-\mu}\right)$\\ \hline
\end{tabular}
\end{table}
\begin{table}[h]
\caption{\label{TNME}The eigenvalues $(\lambda_1,\lambda_2)$ of the Jacobian matrix corresponding to the autonomous system $(\ref{eq85}-\ref{eq86})$ at those critical points $(M_1-M_4)$ and the nature of the critical points.}
\begin{tabular}{|c|c c|c|}
\hline
\hline
\begin{tabular}{@{}c@{}}$~~$\\$~Critical ~Points$\\$~~$\end{tabular} &$\lambda_1$&$\lambda_2$&$Nature~ of~ critical~ Points$ \\ \hline\hline
\begin{tabular}{@{}c@{}}$~~$\\$M_1$\\$~~$\end{tabular}&$-\left(\frac{3}{2}+\mu^2\right)$$~~$&$~~$$-\left(\mu^2-\frac{3}{2}\right)+\lambda\mu$& \begin{tabular}{@{}c@{}}$~~$\\Hyperbolic if $\left(\mu^2-\frac{3}{2}\right)\neq\lambda\mu$,\\$~~$ \\ non-hyperbolic if $\left(\mu^2-\frac{3}{2}\right)=\lambda\mu$\\$~~$\end{tabular}\\ \hline
\begin{tabular}{@{}c@{}}$~~$\\$M_2$\\$~~$\end{tabular}&$-(3+\lambda^2)+\lambda\mu$&$-\left(3+\frac{\lambda^2}{2}\right)$&\begin{tabular}{@{}c@{}}$~~$\\Hyperbolic if $(\lambda^2+3)\neq\lambda\mu$,\\$~~$ \\ non-hyperbolic if $\left(\lambda^2+3\right)=\lambda\mu$\\$~~$\end{tabular}\\ \hline
\begin{tabular}{@{}c@{}}$~~$\\$M_3$\\$~~$\end{tabular}&$-(3+\lambda^2)+\lambda\mu$&$-\left(3+\frac{\lambda^2}{2}\right)$&\begin{tabular}{@{}c@{}}$~~$\\Hyperbolic if $(\lambda^2+3)\neq\lambda\mu$,\\$~~$ \\ non-hyperbolic if $\left(\lambda^2+3\right)=\lambda\mu$\\$~~$\end{tabular}\\ \hline
\begin{tabular}{@{}c@{}}$~~$\\$M_4$\\$~~$\end{tabular}&$\frac{a+d+\sqrt{(a-d)^2+4bc}}{2}$&$\frac{a+d-\sqrt{(a-d)^2+4bc}}{2}$&\begin{tabular}{@{}c@{}}$~~$\\Hyperbolic when $\mu^2-\frac{3}{2}>\lambda\mu$\\ and $\lambda^2+3>\lambda\mu$,\\$~~$\\non-hyperbolic when $\mu^2-\frac{3}{2}=\lambda\mu$\\ or $\lambda^2+3=\lambda\mu$\\$~~$\end{tabular}\\ \hline
\end{tabular}
\end{table}
Note that for the critical point $M_4$ we have written the eigenvalues in terms of $a$, $b$, $c$ and $d$, where $a=-\frac{3}{2(\lambda-\mu)^2}(\lambda^2+3-\lambda\mu)$, $b=\mp\sqrt{\frac{3}{2}}\left(\frac{3}{(\lambda-\mu)^2}+2\right)\sqrt{-\frac{3}{2}-\mu(\lambda-\mu)}$, $c=\mp\sqrt{\frac{3}{2}}\left\{\frac{\lambda^2+3-\lambda\mu}{(\lambda-\mu)^2}\right\}
\sqrt{-\frac{3}{2}-\mu(\lambda-\mu)}$, $d=-\frac{3}{(\lambda-\mu)^2}\left\{\left(\mu^2-\frac{3}{2}\right)-\lambda\mu\right\}$.\par
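Since these expressions are unwieldy, a quick numerical cross-check can be performed; the following minimal sketch (ours, not part of the original analysis) evaluates the $a$, $b$, $c$, $d$ entries above and the resulting eigenvalues for a sample parameter choice (only the product $bc$ enters, so both sign branches give the same eigenvalues):
\begin{verbatim}
# Numerical cross-check of the M_4 eigenvalues via a, b, c, d.
import numpy as np

def m4_eigenvalues(lam, mu, sign=-1.0):
    """Eigenvalues of [[a, b], [c, d]] at M_4; `sign` picks the -/+ branch."""
    d2 = (lam - mu)**2
    root = np.sqrt(-1.5 - mu*(lam - mu))    # real whenever M_4 exists
    a = -1.5*(lam**2 + 3 - lam*mu)/d2
    b = sign*np.sqrt(1.5)*(3.0/d2 + 2.0)*root
    c = sign*np.sqrt(1.5)*((lam**2 + 3 - lam*mu)/d2)*root
    d = -3.0/d2*((mu**2 - 1.5) - lam*mu)
    return np.linalg.eigvals(np.array([[a, b], [c, d]]))

print(m4_eigenvalues(1.0, -2.0))  # eigenvalues 2.0 and -4.5: a saddle here
\end{verbatim}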
Again, we only state the stability of each critical point $(M_1-M_4)$ and the reason behind it in tabular form, as shown in Table \ref{T_M_stability}.
\begin{table}[!]
\caption{\label{T_M_stability}Table shows the stability of the critical points $(M_1-M_4)$ and the reason behind it.}
\begin{tabular}{|c|c|c|}
\hline
\hline
\begin{tabular}{@{}c@{}}$~~$\\$ CPs $\\$~$\end{tabular} &$Stability$& $Reason~behind~the~stability$ \\ \hline\hline
\begin{tabular}{@{}c@{}}$~~$\\$ M_1 $\\$~$\end{tabular}& \begin{tabular}{@{}c@{}}$~~$\\Stable node for $\left(\mu^2-\frac{3}{2}\right)>\lambda\mu$\\ $~~$\\and\\$~~$\\ saddle node for $\left(\mu^2-\frac{3}{2}\right)\leq\lambda\mu$\\$~$\end{tabular}& \begin{tabular}{@{}c@{}}$~~$\\For $\left(\mu^2-\frac{3}{2}\right)>\lambda\mu$, as both eigenvalues \\of the Jacobian matrix at $M_1$ are negative, so by\\ Hartman-Grobman theorem we can conclude that\\ the critical point $M_1$ is a stable node.\\$~~$\\ For $\left(\mu^2-\frac{3}{2}\right)<\lambda\mu$, as one eigenvalue is positive\\ and another is negative, so by Hartman-Grobman theorem\\ we can conclude that the critical point $M_1$ is a saddle node.\\$~~$\\ For $\left(\mu^2-\frac{3}{2}\right)=\lambda\mu$, after shifting the critical point\\ $M_1$ to the origin by the coordinate transformation\\ $\left(x=X-\sqrt{\frac{2}{3}}\mu,y=Y\right)$, the center manifold can be written as \\$X=\frac{1}{\mu}\sqrt{\frac{3}{2}}Y^2+\mathcal{O}(Y^3)$\\ and the flow on the center manifold can be determined as\\ $\frac{dY}{dN}=\frac{9}{4\mu^2}Y^3+\mathcal{O}(Y^4)$.\\ Hence, for both of the cases $\mu>0$ and $\mu<0$ the origin\\ is a saddle node and unstable in nature (FIG.\ref{M_1}).\\$~~$\end{tabular}\\ \hline
\begin{tabular}{@{}c@{}}$~~$\\$ M_2,M_3 $\\$~$\end{tabular}& \begin{tabular}{@{}c@{}}$~~$\\Stable node for $\left(\lambda^2+3\right)>\lambda\mu$\\$~~$\\ and\\$~~$\\ saddle node for $\left(\lambda^2+3\right)\leq\lambda\mu$\\$~$\end{tabular}& \begin{tabular}{@{}c@{}}$~~$\\For $\left(\lambda^2+3\right)>\lambda\mu$, as both eigenvalues \\of the Jacobian matrix at $M_2$ are negative, so by\\ Hartman-Grobman theorem we can conclude that\\ the critical point $M_2$ is a stable node.\\$~~$\\ For $\left(\lambda^2+3\right)<\lambda\mu$, as one eigenvalue is positive\\ and another is negative, so by Hartman-Grobman theorem\\ we can conclude that the critical point $M_2$ is a saddle node.\\$~~$\\ For $\left(\lambda^2+3\right)=\lambda\mu$, after shifting the critical point\\ $M_2$ or $M_3$ to the origin by the coordinate transformation\\ $\left(x=X-\frac{\lambda}{\sqrt{6}},y=Y\pm\sqrt{1+\frac{\lambda^2}{6}}\right)$, the center manifold can be\\ written as $~~Y=\mp\frac{1}{2\sqrt{1+\frac{\lambda^2}{6}}}X^2+\mathcal{O}(X^3)$\\ and the flow on the center manifold can be determined as\\ $\frac{dX}{dN}=\frac{\lambda}{2}\sqrt{\frac{3}{2}}\left\{1-\frac{6}{\lambda^2}\pm\frac{12}{\lambda^2}\left(1+\frac{\lambda^2}{6}\right)^{\frac{3}{2}}\right\}X^2+\mathcal{O}(X^4)$.\\ Hence, for all possible values of $\lambda$, due to the even power of $X$\\ in the R.H.S. of the flow equation, the origin is\\ a saddle node and unstable in nature.\\$~~$\end{tabular}\\ \hline
\begin{tabular}{@{}c@{}}$~~$\\$ M_4 $\\$~$\end{tabular}& \begin{tabular}{@{}c@{}}$~~$\\Saddle node for both of the cases, i.e.,\\ $\mu^2-\frac{3}{2}=\lambda\mu$ or $\lambda^2+3=\lambda\mu$\\$~$\end{tabular}& \begin{tabular}{@{}c@{}}$~~$\\ For $\mu^2-\frac{3}{2}=\lambda\mu$, as $M_4$ converts into\\ $M_1$, so we get the same stability like $M_1$.\\$~~$\\ For $\lambda^2+3=\lambda\mu$ as $M_4$ converts into $M_2$ and $M_3$, \\so we get the same stability like $M_2$ and $M_3$.\\$~$\end{tabular}\\\hline
\end{tabular}
\end{table}
Also note that for the hyperbolic case of $M_4$ the components $a$, $b$, $c$ and $d$ of the Jacobian matrix are very complicated and, from the determination of the eigenvalues, it is very difficult to draw any conclusion about the stability; for this reason we skip the stability analysis for this case.
\subsection{Model 5: Product of exponential and power-law potential and
product of exponentially-dependent and power-law-dependent dark-matter particle mass \label{M5}}
In this case the evolution equations of Section \ref{BES} can be written as the following autonomous system:
\begin{eqnarray}
x'&=&-3x+\frac{3}{2}x(1-x^2-y^2)-\sqrt{\frac{3}{2}}\lambda y^2-\frac{\lambda}{2}y^2z-\sqrt{\frac{3}{2}}\mu(1+x^2-y^2)-\frac{\mu}{2}z(1+x^2-y^2),\label{eqn80} \\
y'&=&\frac{3}{2}y(1-x^2-y^2)-\sqrt{\frac{3}{2}}\lambda xy-\frac{\lambda}{2}xyz,\label{eqn81}\\
z'&=&-xz^2.\label{eqn82}
\end{eqnarray}
To determine the critical points of the above autonomous system, we first equate the R.H.S. of (\ref{eqn82}) to $0$. Then we have either $x=0$ or $z=0$. For $z=0$ the above autonomous system converts into the autonomous system of Model 4, so we get the same types of results as in Model 4. When $x=0$, we have three physically meaningful critical points for $\mu\neq 0$ and $\lambda\neq 0$. For other choices of $\mu$ and $\lambda$, as in Model 1, we get similar types of results. The critical points are $N_1(0,0,-\sqrt{6})$, $N_2(0,1,-\sqrt{6})$ and $N_3(0,-1,-\sqrt{6})$, and all are hyperbolic in nature. As the $x$ and $y$ coordinates of these critical points are the same as those of $A_1$, $A_2$ and $A_3$, and the values of the cosmological parameters do not depend on the $z$ coordinate, we get the same values of the cosmological parameters as for $A_1$, $A_2$ and $A_3$ respectively, which are presented in Table \ref{TI}.
\begin{center}
$1.~Critical~Point~N_1$
\end{center}
The Jacobian matrix corresponding to the autonomous system (\ref{eqn80}-\ref{eqn82}) at the critical point $N_1$ has three eigenvalues $\frac{3}{2}$, $-\frac{1}{4}\left(3+\sqrt{9+48\mu}\right)$ and $-\frac{1}{4}\left(3-\sqrt{9+48\mu}\right)$, with corresponding eigenvectors $[0,1,0]^T$, $\left[\frac{1}{24}\left(3+\sqrt{9+48\mu}\right),0,1\right]^T$ and $\left[\frac{1}{24}\left(3-\sqrt{9+48\mu}\right),0,1\right]^T$ respectively. As the critical point is hyperbolic in nature, we use the Hartman-Grobman theorem to analyze its stability. From the eigenvalues we conclude that the stability of the critical point $N_1$ depends on $\mu$. For $\mu<-\frac{3}{16}$, the last two eigenvalues are complex conjugate with negative real parts. For $\mu\geq-\frac{3}{16}$, all eigenvalues are real.\par
For $\mu<-\frac{3}{16}$, due to the negative real parts of the last two eigenvalues, the $yz$-plane is the stable subspace and, as the first eigenvalue is positive, the $x$-axis is the unstable subspace. Hence, the critical point $N_1$ is a saddle-focus, i.e., unstable in nature. The phase portrait in the $xyz$ coordinate system is shown in FIG.\ref{focus_1}.\par
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{focus11}
\caption{Phase portrait near the origin for the critical point $N_1$ in $xyz$ coordinate system. This phase portrait is drawn for $\mu=-1$.}
\label{focus_1}
\end{figure}
For $\mu\geq-\frac{3}{16}$, we always have at least one positive eigenvalue and at least one negative eigenvalue, and hence we can conclude that the critical point $N_1$ is unstable due to its saddle nature.
\begin{center}
$2.~Critical~Point~N_2~\&~ N_3$
\end{center}
The Jacobian matrix corresponding to the autonomous system $(\ref{eqn80}-\ref{eqn82})$ at the critical points $N_2$ and $N_3$ has three eigenvalues $-3$, $-\frac{1}{2}\left(3+\sqrt{9+12\lambda}\right)$ and $-\frac{1}{2}\left(3-\sqrt{9+12\lambda}\right)$, with corresponding eigenvectors $[0,1,0]^T$, $\left[\frac{1}{12}\left(3+\sqrt{9+12\lambda}\right),0,1\right]^T$ and $\left[\frac{1}{12}\left(3-\sqrt{9+12\lambda}\right),0,1\right]^T$ respectively. From the eigenvalues we conclude that the last two eigenvalues are complex conjugate when $\lambda<-\frac{3}{4}$ and real when $\lambda\geq-\frac{3}{4}$.\par
For $\lambda<-\frac{3}{4}$, the last two eigenvalues are complex with negative real parts and the first eigenvalue is always negative. Hence, by the Hartman-Grobman theorem we conclude that the critical points $N_2$ and $N_3$ are both stable focus-nodes in this case. The phase portrait in the $xyz$-coordinate system is shown in FIG.\ref{focus_2}.\par
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{focus2}
\caption{Phase portrait near the origin for the critical point $N_2$ and $N_3$ in $xyz$ coordinate system. This phase portrait is drawn for $\lambda=-1$.}
\label{focus_2}
\end{figure}
For $-\frac{3}{4}\leq\lambda<0$, all eigenvalues are negative. So, by the Hartman-Grobman theorem we conclude that the critical points $N_2$ and $N_3$ are both stable nodes in this case.\par
For $\lambda>0$, we have two negative and one positive eigenvalue. Hence, by the Hartman-Grobman theorem we conclude that the critical points $N_2$ and $N_3$ are both saddle points and unstable in nature.\bigskip
\section{Bifurcation Analysis by Poincar\'{e} index and Global Cosmological evolution \label{BAPGCE}}
The flat potential plays a crucial role in obtaining the bouncing solution. After the bounce, the flat potential naturally allows the universe to enter the slow-roll inflation regime, thereby making the bouncing universe compatible with observations.\par
In Model 1 (\ref{M1}), for the inflationary scenario, we consider $\lambda$ and $\mu$ to be very small positive numbers so that $V(\phi) \approx V_0$ and $M_{DM} \approx M_0$. Eqn. (\ref{eq11}) mainly regulates the flow along the $Z$-axis. Due to Eqn. (\ref{eq11}) the overall 3-dimensional phase space splits up into two compartments and the $ZY$-plane becomes the separatrix. In the right compartment, for $x>0$, we have $z' <0$, and $z'>0$ in the left compartment; on the $ZY$-plane $z' \approx 0$. For $\lambda \neq 0$ and $\mu \neq 0$, all critical points are located on the $Y$-axis. As all cosmological parameters can be expressed in terms of $x$ and $y$, we rigorously inspect the vector field on the $XY$-plane. Due to Eqn. (\ref{eq4}), the viable phase-space region (say $S$) satisfies $y^2-x^2 \leqslant 1$, which is inside a hyperbola centered at the origin (FIG.\ref{hyperbola}). On the $XY$-plane $z' \approx 0$. So on the $XY$-plane, by the Hartman-Grobman theorem we can conclude that there are four hyperbolic sectors around $A_1$ ($\alpha$-limit set) and one parabolic sector around each of $A_2$ and $A_3$ ($\omega$-limit sets). So, by the Bendixson theorem, the index of $A_1|_{XY}$ is $-1$ and the index of $A_2|_{XY}$ and $A_3|_{XY}$ is $1$. If the initial position of the universe is in the left compartment and near the $\alpha$-limit set, then the universe remains in the left compartment and moves towards the $\omega$-limit set asymptotically at late times. A similar phenomenon happens in the right compartment. The universe experiences a fluid dominated non-generic evolution near $A_1$ for $\mu>0$ and a generic evolution for $\mu<0$. For a sufficiently flat potential, near $A_2$ and $A_3$, a scalar field dominated non-generic and generic evolution occur for $\lambda>0$ and $\lambda<0$ respectively (see FIG. \ref{Model1}).
\begin{figure}[h]
\centering
\includegraphics[width=.4\textwidth]{hyperbolic.pdf}
\caption{Vector field on the projective plane obtained by identifying antipodal points of the disk.}
\label{hyperbola}
\end{figure}
\begin{figure}[htbp!]
\begin{subfigure}{0.34\textwidth}
\includegraphics[width=.9\linewidth]{A1.png}
\caption{}
\label{fig:A1}
\end{subfigure}%
\begin{subfigure}{0.34\textwidth}
\includegraphics[width=.9\linewidth]{A2.png}
\caption{}
\label{fig:A2}
\end{subfigure}%
\begin{subfigure}{.34\textwidth}
\includegraphics[width=.9\linewidth]{A3.png}
\caption{}
\label{fig:A3}
\end{subfigure}
\caption{\label{Model1}\textit{Model 1}: Qualitative evolution of the physical variables $\omega_{total}$, $\omega_{\phi}$ and $q$ for perturbation of the parameters ($\lambda$ \& $\mu$) near the bifurcation values for three sets of initial conditions. (a) The initial condition near the point $A_1$. (b) The initial condition near the point $A_2$. (c) The initial condition near the point $A_3$. We observe that the physical parameter approaches the limit $\omega_{total}\rightarrow -1$. At early or present times the scalar field may be in the phantom phase, but the field is attracted to the de-Sitter phase.}
\end{figure}
The Poincar\'{e} index theorem \cite{0-387-95116-4} helps us to determine the Euler-Poincar\'{e} characteristic $\chi(S)=n+f-s$, where $n$, $f$, $s$ are the numbers of nodes, foci and saddles on $S$. Henceforward we refer to the Poincar\'{e} index simply as the index. So for the vector field of case-(i)$|_{XY-plane}$, $\chi(S)=1$. This vector field also defines a vector field on the projective plane, i.e., in the 3-dimensional phase-space: if we consider a closed disk in the $XY$-plane of radius one centered at the origin, then we have the same vector field on the projective plane with antipodal points identified.\par
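As an explicit check of this count for case (i) (our arithmetic, based on the sector structure described above): on the $XY$-plane there are two nodes ($A_2$, $A_3$), no foci and one saddle ($A_1$), so
\[
\chi(S)=n+f-s=2+0-1=1,
\]
consistent with the index sum $(-1)+1+1=1$ of the three critical points.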
For a $z=constant (\neq 0)$ plane the above characterization of the vector field changes, as a vertical flow along the $Z$-axis regulates the character of the vector field. Using the Bendixson theorem \cite{0-387-95116-4} we can find the index of a non-hyperbolic critical point by restricting the vector field to a suitable two-dimensional subspace.\par
If we restrict ourselves to the $XZ$-plane, $A_1$ is a saddle for $\mu > 0$. On the $XZ$-plane the index of $A_1$ is $-1$ for $\mu>0$, as four hyperbolic sectors are separated by two separatrices around $A_1$. For $\mu<0$, there is only one parabolic sector and the index is zero (FIG.\ref{A_1}). On the $YZ$-plane $A_1$ swaps its index with the $XZ$-plane depending on the sign of $\mu$.\par
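For the reader's convenience, the sector counts used here and below enter through Bendixson's formula for the index of an isolated critical point,
\[
\mathrm{ind}=1+\frac{e-h}{2},
\]
where $e$ and $h$ are the numbers of elliptic and hyperbolic sectors around the point; for instance, for $A_1$ on the $XZ$-plane with $\mu>0$ we have $e=0$ and $h=4$, hence $\mathrm{ind}=-1$, as stated above.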
On the $uw$-plane $A_2$ and $A_3$ have index $-1$ for $\lambda>0$ and $1$ for $\lambda < 0$. At $\lambda=0$, the index of $A_2$ is $0$ but the index of $A_3$ is $1$. On the $uv$-plane the index of $A_2$ or $A_3$ is $1$ and does not depend on $\lambda$. On the $uw$-plane around $A_2$ the number of hyperbolic sectors is four and there is no elliptic sector. So the index of $A_2$ and $A_3$ $(origin)|_{uw~plane}/_{vw~plane}$ is $-1$ for $\lambda>0$, while for $\lambda<0$ the index is $1$ as there are no hyperbolic or elliptic sectors.\par
A set of non-isolated equilibrium points is said to be normally hyperbolic if the only eigenvalues with zero real parts are those whose corresponding eigenvectors are tangent to the set. For case (ii) to case (iv), we get normally hyperbolic critical points, as the eigenvector $[0~ 0~ 1]^T$ (in the new $(u,v,w)$ coordinate system) corresponding to the only zero eigenvalue is tangent to the line of critical points. The stability of a normally hyperbolic set can be completely classified by considering the signs of the eigenvalues in the remaining directions. So the character of the flow of the phase space for each $z=constant$ plane is identical to that on the $XY$-plane in the previous case. Thus the system (\ref{eq9}-\ref{eq11}) is structurally unstable \cite{0-387-95116-4} at $\lambda=0$ or $\mu=0$ or both. On the other hand, the potential changes its character from runaway to non-runaway as $\lambda$ crosses zero from positive to negative. Thus $\lambda=0$ and $\mu=0$ are the bifurcation values \cite{1950261}.\bigbreak
Model 2 (\ref{M2}) contains five critical points $L_1-L_5$. For $\lambda>0$ the flow on the center manifold is unstable and for $\lambda<0$ it is stable. Around $L_2$, the character of the vector field is the same as around $L_1$. For $\mu=\pm \sqrt{\frac{3}{2}}$, the flow on the center manifold at $L_3$ or $L_4$ depends on the sign of $\lambda$ (FIG.\ref{L_21} \& FIG.\ref{L_2_1_1}). On the other hand, for $\mu>\sqrt{\frac{3}{2}}$ or $\mu< -\sqrt{\frac{3}{2}}$, the flow on the center manifold does not depend on $\lambda$. For $\mu >0$, the flow on the center manifold at $L_5$ moves in the increasing direction of $z$; for $\mu <0$, it moves in the decreasing direction of $z$. The index of $L_1$ is the same as that of $A_2$.
For $\mu=\pm \sqrt{\frac{3}{2}}$ and $\lambda=1$, the index of $L_2|_{XY plane}$ is $-1$ as there are only four hyperbolic sectors. But for $\lambda=2$, there are two hyperbolic sectors and one parabolic sector, so the index is zero.
The index of $L_3$ is the same as that of $L_2$. The index of $L_4$ on the $ZX$ or $XY$ plane is zero, as there are two hyperbolic sectors and one parabolic sector for each of $\mu>0$ and $\mu<0$. So it is to be noted that for $\lambda=0, \pm \sqrt{\frac{3}{2}}$ and $\mu=0$ the system is structurally unstable.
\begin{figure}[htbp!]
\begin{subfigure}{0.34\textwidth}
\includegraphics[width=.9\linewidth]{L1.png}
\caption{}
\label{fig:L1}
\end{subfigure}%
\begin{subfigure}{0.34\textwidth}
\includegraphics[width=.9\linewidth]{L2.png}
\caption{}
\label{fig:L2}
\end{subfigure}%
\begin{subfigure}{.34\textwidth}
\includegraphics[width=.9\linewidth]{L3mun.png}
\caption{}
\label{fig:L3n}
\end{subfigure}
\begin{subfigure}{0.34\textwidth}
\includegraphics[width=.9\linewidth]{L3mup.png}
\caption{}
\label{fig:L3p}
\end{subfigure}%
\begin{subfigure}{0.34\textwidth}
\includegraphics[width=.9\linewidth]{L4mup.png}
\caption{}
\label{fig:L4p}
\end{subfigure}%
\begin{subfigure}{.34\textwidth}
\includegraphics[width=.9\linewidth]{L5mup.png}
\caption{}
\label{fig:L5}
\end{subfigure}
\caption{\label{Model2}\textit{Model 2}: Some interesting qualitative evolutions of the physical variables $\omega_{total}$, $\omega_{\phi}$ and $q$ for perturbation of the parameters ($\lambda$ \& $\mu$) near the bifurcation values for six sets of initial conditions. (a) The initial position near the point $L_1$. (b) The initial position near the point $L_2$. (c) The initial position near the point $L_3$ and $\mu<-\sqrt{\frac{3}{2}}$. (d) The initial position near the point $L_3$ and $\mu>\sqrt{\frac{3}{2}}$. (e) The initial position near the point $L_4$ and $\mu>\sqrt{\frac{3}{2}}$. (f) The initial position near the point $L_5$ and $\mu>0$. We observe that the physical parameter approaches the limit $\omega_{total}\rightarrow -1$. At early or present times the scalar field may be in the phantom phase, but the field is attracted to the de-Sitter phase, except for (b) and (e). In (e) the scalar field crosses the phantom boundary line and enters the phantom phase at late times, which would cause a Big-Rip.}
\end{figure}
The universe experiences a scalar field dominated non-generic evolution near $L_1$ and $L_2$ for $\lambda>0$ and a scalar field dominated generic evolution for $\lambda<0$ or on the $z$-nullcline. Near $L_3$ and $L_4$, a scalar field dominated non-generic evolution of the universe occurs at $\mu \approx \pm \sqrt{\frac{3}{2}}$. At $\mu \approx 0$ a scaling non-generic evolution occurs near $L_5$ (see FIG.\ref{Model2}).
\bigbreak
Model 3 (\ref{M3}) contains three critical points $R_1-R_3$. $R_1$ is a saddle for all values of $\mu$. On the $xy$-plane the index of $R_1$ is the same as that of $A_1$. On the projection onto the $xy$-plane, $R_2$ and $R_3$ are stable nodes for all values of $\lambda$. On the center manifold at $R_2$ or $R_3$, the flow is in the increasing direction along the $z$-axis for $\lambda>0$ and in the decreasing direction along the $z$-axis for $\lambda<0$. On the $XZ$ or $YZ$ plane, the index of $R_2$ or $R_3$ is zero, as around each of them there are two hyperbolic sectors and one parabolic sector. Thus we note that for $\mu=0$ and $\lambda=0$ the stability of the system bifurcates.\\
We observe that no scaling or tracking solutions exist in this specific model, unlike in quintessence
theory. However, the critical points which describe the de Sitter solution do not exist in the case of quintessence with
the exponential potential; the universe experiences a fluid dominated non-generic evolution near the critical point $R_1$ and a scalar field dominated non-generic evolution near the critical points $R_2$ and $R_3$. For a sufficiently flat potential, an early or present phantom/non-phantom universe is attracted to the $\Lambda$CDM cosmological model (see FIG. \ref{fig:Model3}).\bigbreak
Model 4 (\ref{M4}) contains four critical points $M_1-M_4$. $M_1$ is a stable node for $\left(\mu^2-\frac{3}{2}\right)>\lambda\mu$, while $M_2$ and $M_3$ are stable nodes for $\left(\lambda^2+3\right)>\lambda\mu$ (index 1); otherwise they are saddle nodes (index zero), i.e., the stability of the system bifurcates at $\left(\mu^2-\frac{3}{2}\right)=\lambda\mu$ and $\left(\lambda^2+3\right)=\lambda\mu$ respectively. Thus we find a generic evolution away from these bifurcation values and a non-generic one at them. The kinetic dominated solution ($M_1$) and the scalar field dominated solutions ($M_2$ and $M_3$) are stable under the above conditions. For the energy density near $M_2$ and $M_3$, we observe that at late times the scalar field dominates, $\Omega_X=\Omega_\phi \rightarrow 1$ and $\Omega_m \rightarrow 0$, while the equation of state parameter has the limit $\omega_{tot} \rightarrow -1$ for a sufficiently flat potential.\bigbreak
Model 5 (\ref{M5}) contains three critical points $N_1$, $N_2$, $N_3$. For $\mu< -\frac{3}{16}$, the Shilnikov saddle index \cite{Shilnikov} of $N_1$ is $\nu_{N_1}=\frac{\rho_{N_1}}{\gamma_{N_1}}=0.5$ and the saddle value is $\sigma_{N_1}=-\rho_{N_1}+\gamma_{N_1}=0.75$. So the Shilnikov condition \cite{Shilnikov} is satisfied, as $\nu_{N_1}<1$ and $\sigma_{N_1}>0$. The second Shilnikov saddle value is $\sigma^{(2)}_{N_1}=-2\rho_{N_1}+\gamma_{N_1}=0$. So, by Shilnikov's theorem (Shilnikov, 1965) \cite{Shilnikov} there are countably many saddle periodic orbits in a neighborhood of the homoclinic loop of the saddle-focus $N_1$. As $\nu_{N_1}$ is invariant for any choice of $\mu$, Shilnikov's bifurcation does not appear. For $-\frac{3}{16}<\mu < 0$, the vector field near $N_1$ is saddle in character. On the other hand, $N_1$ is a saddle for $\mu>0$. So, $\mu=0$ is a bifurcation value for the bifurcation point $N_1$. Similarly, $\lambda=0$ is a bifurcation value for the bifurcation points $N_2$ and $N_3$. We observe scalar field dominated solutions near $N_2$ and $N_3$ which exist at the bifurcation value, i.e., for a sufficiently flat potential, and are attracted to the $\Lambda$CDM cosmological model. \\
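For completeness, the Shilnikov quantities quoted above follow directly from the eigenvalues of $N_1$: for $\mu<-\frac{3}{16}$ these are $\gamma_{N_1}=\frac{3}{2}$ and $-\rho_{N_1}\pm i\omega_{N_1}$ with $\rho_{N_1}=\frac{3}{4}$ and $\omega_{N_1}=\frac{1}{4}\sqrt{-(9+48\mu)}$, so that
\[
\nu_{N_1}=\frac{3/4}{3/2}=\frac{1}{2},\qquad \sigma_{N_1}=-\frac{3}{4}+\frac{3}{2}=\frac{3}{4},\qquad \sigma^{(2)}_{N_1}=-\frac{3}{2}+\frac{3}{2}=0;
\]
only $\omega_{N_1}$ depends on $\mu$, which is why $\nu_{N_1}$ is invariant under changes of $\mu$.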
\begin{figure}[htbp!]
\begin{subfigure}{0.34\textwidth}
\includegraphics[width=.9\linewidth]{R1.png}
\caption{}
\label{fig:R1}
\end{subfigure}%
\begin{subfigure}{0.34\textwidth}
\includegraphics[width=.9\linewidth]{R2.png}
\caption{}
\label{fig:R2}
\end{subfigure}%
\begin{subfigure}{.34\textwidth}
\includegraphics[width=.9\linewidth]{R3.png}
\caption{}
\label{fig:R3}
\end{subfigure}
\caption{Qualitative evolution of the physical variables $\omega_{total}$, $\omega_{\phi}$ and $q$ for perturbation of the parameters ($\lambda$ \& $\mu$) near the bifurcation values each of \textit{Model 3}, \textit{Model 4} and \textit{Model 5} for three sets of initial conditions. The initial positions in (a), (b) and (c) are near \underline{$R_1$, $R_2$ and $R_3$} (\underline{$M_1$, $M_2$ and $M_3$}/\underline{$N_1$, $N_2$ and $N_3$}) respectively. \label{fig:Model3} }
\end{figure}
\section{Brief discussion and concluding remarks \label{conclusion}}
The present work deals with a detailed dynamical system analysis of an interacting DM and DE cosmological model in the background of FLRW geometry. The DE is chosen as a phantom scalar field with a self-interacting potential, while the varying-mass DM (the mass being a function of the scalar field) is chosen as dust. The potential of the scalar field and the varying mass of the DM are chosen in exponential or power-law form (or a product of them), and five possible combinations of them are studied.\bigbreak
\textbf{Model 1: $V(\phi)=V_0\phi^{-\lambda}, M_{_{DM}}(\phi)=M_0\phi^{-\mu}$}\par
For case (i), i.e., $\mu\neq 0, \lambda\neq 0$, there are three non-hyperbolic critical points $A_1$, $A_2$, $A_3$, of which $A_1$ corresponds to a DM dominated decelerating phase (dust era) while $A_2$ and $A_3$ are purely DE dominated and represent the $\Lambda$CDM model (i.e., the de-Sitter phase) of the universe.\par
For case (ii), i.e., $\mu\neq 0, \lambda=0$, there is one critical point and two spaces of critical points. The cosmological consequences of these critical points are similar to case (i).\par
For case (iii), i.e., $\mu= 0, \lambda\neq 0$, there is one space of critical points and two distinct critical points. But, as before, the cosmological analysis is identical to case (i).\par
For the fourth case, i.e., $\mu=0, \lambda=0$, there are three spaces of critical points $(S_1,S_2,S_3)$, which are all non-hyperbolic in nature and are identical to the critical points in case (ii). Further, considering the vector fields in the $Z=constant$ plane, it is found that for the critical point $S_1$ every point on the $Z$-axis is a saddle node, while for the critical points $S_2$ and $S_3$ every point on the $Z$-axis is a stable star.\bigbreak
\textbf{Model 2: $V(\phi)=V_0\phi^{-\lambda}, M_{_{DM}}(\phi)=M_1e^{-\kappa\mu\phi}$}\par
The autonomous system for this model has five non-hyperbolic critical points $L_i$, $i=1,\ldots,5$. For $L_1$ and $L_2$, the cosmological model is completely DE dominated and describes cosmic evolution at the phantom barrier. The critical points $L_3$ and $L_4$ are DE dominated cosmological solutions ($\mu^2>3$) representing the $\Lambda$CDM model. The critical point $L_5$ corresponds to a ghost (phantom) scalar field and describes the cosmic evolution in the phantom domain ($2\mu^2>3$).\bigbreak
\textbf{Model 3: $V(\phi)=V_1e^{-\kappa\lambda\phi}, M_{_{DM}}(\phi)=M_0\phi^{-\mu}$}\par
There are three non-hyperbolic critical points in this case. The first one (i.e., $R_1$) corresponds to purely DM dominated cosmic evolution describing the dust era, while the other two critical points (i.e., $R_2$, $R_3$) are fully dominated by DE and both describe the cosmic evolution in the phantom era.\bigbreak
\textbf{Model 4: $V(\phi)=V_1 e ^{-\kappa\lambda\phi}, M_{_{DM}}(\phi)=M_1e^{-\kappa\mu\phi}$}\par
The autonomous system formed in this case has four critical points $M_i$, $i=1,\ldots,4$, which may be hyperbolic or non-hyperbolic depending on the parameters involved. The critical point $M_1$ represents DE as a ghost scalar field and describes the cosmic evolution in the phantom domain. For the critical points $M_2$ and $M_3$, the cosmic evolution is fully DE dominated and also in the phantom era. The cosmic era corresponding to the critical point $M_4$ describes a scaling solution where both DM and DE contribute to the cosmic evolution.\bigbreak
\textbf{Model 5: $V(\phi)=V_2\phi^{-\lambda} e ^{-\kappa\lambda\phi}, M_{_{DM}}(\phi)=M_2\phi^{-\mu}e^{-\kappa\mu\phi}$}\par
This model is very similar to either Model $4$ or Model $1$, depending on the choices of the dimensionless variables $x$ and $z$. For $z=0$ the model reduces to Model $4$, while for $x=0$ it is very similar to Model $1$, and hence the cosmological analysis is very similar.\par
Finally, using the Poincar\'{e} index theorem, the Euler-Poincar\'{e} characteristic is determined for the bifurcation analysis of the above cases from the point of view of the cosmic evolution described by the equilibrium points. Lastly, the inflationary era of cosmic evolution is studied using bifurcation analysis.
\begin{acknowledgements}
The author Soumya Chakraborty is grateful to CSIR, Govt. of India for awarding a Junior Research Fellowship (CSIR Award No: 09/096(1009)/2020-EMR-I) for the Ph.D. work.
The author S. Mishra is grateful to CSIR, Govt. of India for awarding a Senior Research Fellowship (CSIR Award No: 09/096 (0890)/2017-EMR-I) for the Ph.D. work. The author Subenoy Chakraborty is thankful to the Science and Engineering Research Board (SERB) for awarding a MATRICS Research Grant (File No: MTR/2017/000407).\\
\end{acknowledgements}
\bibliographystyle{unsrt}
| {'timestamp': '2020-11-20T02:16:16', 'yymm': '2011', 'arxiv_id': '2011.09842', 'language': 'en', 'url': 'https://arxiv.org/abs/2011.09842'} |
\section{Introduction} \label{sec:intro}
Neutrinos of astrophysical and cosmological origin have been crucial for unraveling neutrino masses and properties. Solar neutrinos provided the first evidence for neutrino oscillations, and hence massive neutrinos. We know that at least two massive neutrinos should exist, as required by the two distinct squared mass differences measured, the atmospheric $\lvert\Delta m^2_{31}\rvert \approx 2.51\cdot 10^{-3}$~eV$^2$ and the solar $\Delta m^2_{21} \approx 7.42\cdot 10^{-5}$~eV$^2$ splittings~\cite{deSalas:2020pgw,Esteban:2020cvm,Capozzi:2021fjo}~\footnote{The current ignorance on the sign of $\lvert\Delta m^2_{31}\rvert$ is translated into two possible mass orderings. In the \emph{normal} ordering (NO), the total neutrino mass is $\sum m_\nu \gtrsim 0.06$~eV, while in the \emph{inverted} ordering (IO) it is $\sum m_\nu \gtrsim 0.10 $~eV.}. However, neutrino oscillation experiments are not sensitive to the absolute neutrino mass scale. On the other hand, cosmological observations provide the most constraining recent upper bound on the total neutrino mass via relic neutrinos, $\sum m_\nu<0.09$~eV at $95\%$~CL~\cite{DiValentino:2021hoh}, where the sum runs over the distinct neutrino mass states. However, this limit is model-dependent, see for example~\cite{DiValentino:2015sam,Palanque-Delabrouille:2019iyz,Lorenz:2021alz,Poulin:2018zxs,Ivanov:2019pdj,Giare:2020vzo,Yang:2017amu,Vagnozzi:2018jhn,Gariazzo:2018meg,Vagnozzi:2017ovm,Choudhury:2018byy,Choudhury:2018adz,Gerbino:2016sgw,Yang:2020uga,Yang:2020ope,Yang:2020tax,Vagnozzi:2018pwo,Lorenz:2017fgo,Capozzi:2017ipn,DiValentino:2021zxy,DAmico:2019fhj,Colas:2019ret}.
The detection of supernova (SN) neutrinos can also provide constraints on the neutrino mass, by exploiting the time of flight delay~\cite{Zatsepin:1968ktq} experienced by a neutrino of mass $m_\nu$ and energy $E_\nu$:
\begin{equation}
\label{eq:delay}
\Delta t = \frac{D}{2c}\left(\frac{m_\nu}{E_{\nu}}\right)^2~,
\end{equation}
\noindent where $D$ is the distance travelled by the neutrino. This method probes the same neutrino mass constrained via laboratory-based kinematic measurements of beta-decay electrons~\footnote{The current limit from the tritium beta decay experiment KATRIN (Karlsruhe Tritium Neutrino) is $m_{\beta}<0.8$~eV~\cite{Aker:2021gma} and the expected sensitivity is 0.2~eV~\cite{Drexlin:2013lha}, both at 90\% CL.}. Using neutrinos from SN1987A~\cite{Kamiokande-II:1989hkh,Kamiokande-II:1987idp,Bionta:1987qt,Alekseev:1988gp,Alekseev:1987ej}, a $95\%$ confidence level (CL) current upper limit of $m_\nu<5.8$~eV~\cite{Pagliaroli:2010ik} has been derived (see also Ref.~\cite{Loredo:2001rx}). Prospects for future SN explosions may reach the sub-eV level~\cite{Pagliaroli:2010ik,Nardi:2003pr,Nardi:2004zg,Lu:2014zma,Hyper-Kamiokande:2018ofw,Hansen:2019giq}. Nevertheless, these forecasted estimates rely on the detection of inverse $\beta$ decay events in water Cherenkov or liquid scintillator detectors, mostly sensitive to $\bar{\nu}_e$ events. An appealing alternative possibility is the detection of the $\nu_e$ neutronization burst exploiting the liquid argon technology at the DUNE far detector~\cite{DUNE:2020zfm,Rossi-Torres:2015rla}. The large statistics and the very distinctive neutrino signal in time will ensure a unique sensitivity to the neutrino mass signature via time delays.
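As a rough orientation on the magnitudes involved, the following minimal numerical sketch (ours, not part of the analysis below) evaluates Eq.~\ref{eq:delay} for a Galactic SN, showing that eV-scale masses translate into millisecond-scale delays for $\mathcal{O}(10~\si{\mega\electronvolt})$ neutrinos:
\begin{verbatim}
# Time-of-flight delay of Eq. (1) for sample values.
KPC_M = 3.0857e19        # meters per kiloparsec
C = 2.9979e8             # speed of light, m/s

def tof_delay(m_nu_eV, E_nu_MeV, D_kpc):
    """Delay (s) of a neutrino of mass m_nu and energy E_nu over D."""
    ratio = m_nu_eV / (E_nu_MeV * 1.0e6)    # m_nu/E_nu, both in eV
    return (D_kpc * KPC_M) / (2.0 * C) * ratio**2

print(tof_delay(1.0, 10.0, 10.0))  # ~5.1e-3 s for m=1 eV, E=10 MeV, D=10 kpc
\end{verbatim}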
\section{Supernova electron neutrino events} \label{sec:events}
Core-collapse supernovae emit $99\%$ of their energy ($\simeq 10^{53}$~ergs) in the form of (anti)neutrinos of all flavors with mean energies of $\mathcal{O}(10~\si{\mega\electronvolt})$. The explosion mechanism of a core-collapse SN can be divided into three main phases: the \emph{neutronization burst}, the \emph{accretion phase} and the \emph{cooling phase}. The first phase, which lasts for approximately 25 milliseconds, is due to a fast \emph{neutronization} of the stellar core via electron capture by free protons, causing the emission of electron neutrinos ($e^- + p\rightarrow \nu_e + n$). The flux of $\nu_e$ stays trapped behind the shock wave until it reaches sufficiently low densities for neutrinos to be suddenly released. Unlike subsequent phases, the neutronization burst phase has little dependence on the progenitor star properties. In numerical simulations, there is a second \emph{accretion} phase of $\sim 0.5$~s in which the shock wave leads to a hot accretion mantle around the high density core of the neutron star. High luminosity $\nu_e$ and $\bar{\nu}_e$ fluxes are radiated via the processes $e^- + p\rightarrow \nu_e + n$ and $e^+ + n \rightarrow \bar{\nu}_e + p$ due to the large number of nucleons and the presence of a quasi-thermal $e^+e^-$ plasma. Finally, in the \emph{cooling} phase, a hot neutron star is formed. This phase is characterized by the emission of (anti)neutrino fluxes of all species within tens or hundreds of seconds.
For numerical purposes, we shall make use here of the following quasi-thermal parametrization, which reproduces well the results of detailed numerical simulations~\cite{Keil:2002in,Hudepohl:2009tyy,Tamborra:2012ac,Mirizzi:2015eza}:
\begin{equation}
\label{eq:differential_flux}
\Phi^{0}_{\nu_\beta}(t,E) = \frac{L_{\nu_\beta}(t)}{4 \pi D^2}\frac{\varphi_{\nu_\beta}(t,E)}{\langle E_{\nu_\beta}(t)\rangle}\,,
\end{equation}
and describing the differential flux for each neutrino flavor $\nu_\beta$ at a time $t$ after the SN core bounce, located at a distance $D$. In Eq.~\ref{eq:differential_flux}, $L_{\nu_\beta}(t)$ is the $\nu_\beta$ luminosity, $\langle E_{\nu_\beta}(t)\rangle$ the mean neutrino energy and $\varphi_{\nu_\beta}(t,E)$ is the neutrino energy distribution, defined as:
\begin{equation}
\label{eq:nu_energy_distribution}
\varphi_{\nu_\beta}(t,E) = \xi_\beta(t) \left(\frac{E}{\langle E_{\nu_\beta}(t)\rangle}\right)^{\alpha_\beta(t)} \exp{\left\{\frac{-\left[\alpha_\beta(t) + 1\right] E}{\langle E_{\nu_\beta}(t)\rangle}\right\}}\,,
\end{equation}
\noindent where $\alpha_\beta(t)$ is a \emph{pinching} parameter and $\xi_\beta(t)$ is a unit-area normalization factor.
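A minimal numerical sketch (ours; the actual inputs are taken from \texttt{SNOwGLoBES}, see below) of the distribution in Eq.~\ref{eq:nu_energy_distribution}, using the analytic unit-area normalization $\xi_\beta=(\alpha_\beta+1)^{\alpha_\beta+1}/\left[\langle E_{\nu_\beta}\rangle\,\Gamma(\alpha_\beta+1)\right]$ that follows from integrating Eq.~\ref{eq:nu_energy_distribution} over energy:
\begin{verbatim}
# Quasi-thermal ("alpha-fit") spectrum of Eq. (3), unit-normalized.
import numpy as np
from scipy.special import gamma
from scipy.integrate import quad

def phi(E, Emean, alpha):
    """Normalized energy distribution; E and Emean in MeV."""
    xi = (alpha + 1.0)**(alpha + 1.0) / (Emean * gamma(alpha + 1.0))
    return xi * (E/Emean)**alpha * np.exp(-(alpha + 1.0)*E/Emean)

area, _ = quad(phi, 0.0, np.inf, args=(9.5, 2.5))  # sample <E>, alpha
print(area)   # ~1.0, confirming the normalization
\end{verbatim}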
The input values for the luminosity, the mean energy and the pinching parameter have been obtained from the \texttt{SNOwGLoBES} software \cite{snowglobes}. \texttt{SNOwGLoBES} includes fluxes from the Garching Core-Collapse Modeling Group~\footnote{\url{https://wwwmpa.mpa-garching.mpg.de/ccsnarchive/index.html}}, providing computationally expensive simulation results for a progenitor star of $8.8 M_\odot$~\cite{Hudepohl:2009tyy}.
Neutrinos experience flavor conversion inside the SN as a consequence of their coherent interactions with electrons, protons and neutrons in the medium, being subject to the MSW (Mikheyev-Smirnov-Wolfenstein) resonances associated to the solar and atmospheric neutrino sectors~\cite{Dighe:1999bi}. After the resonance regions, the neutrino mass eigenstates travel incoherently in their way to the Earth, where they are detected as flavor eigenstates. The neutrino fluxes at the Earth ($\Phi_{\nu_e}$ and $\Phi_{\nu_\mu}=\Phi_{\nu_\tau}=\Phi_{\nu_x}$) can be written as:
\begin{eqnarray}
\label{eq:nue}
\Phi_{\nu_e}&= &p \Phi^{0}_{\nu_e} +(1-p) \Phi^{0}_{\nu_x}~;\\
\Phi_{\nu_\mu}+\Phi_{\nu_\tau} \equiv 2\Phi_{\nu_x} & =& (1-p) \Phi^{0}_{\nu_e} + (1+p) \Phi^{0}_{\nu_x}~,
\end{eqnarray}
\noindent where $\Phi^{0}$ refers to the neutrino flux in the SN interior, and the $\nu_e$ survival probability $p$ is given by $p = |U_{e3}|^2= \sin^2 \theta_{13}$ ($p \simeq |U_{e2}|^2 \simeq \sin^2 \theta_{12}$) for NO (IO), due to adiabatic transitions in the $H$ ($L$) resonance, which refer to flavor conversions associated to the atmospheric $\Delta m^2_{31}$ (solar $ \Delta m^2_{21}$) mass splitting, see e.g.~\cite{Dighe:1999bi}. Here we are neglecting possible non-adiabaticity effects occurring when the resonances occur near the shock wave \cite{Schirato:2002tg,Fogli:2003dw,Fogli:2004ff,Tomas:2004gr,Dasgupta:2005wn,Choubey:2006aq,Kneller:2007kg,Friedland:2020ecy}, as well as the presence of turbulence in the matter density \cite{Fogli:2006xy,Friedland:2006ta,Kneller:2010sc,Lund:2013uta,Loreti:1995ae,Choubey:2007ga,Benatti:2004hn,Kneller:2013ska,Fogli:2006xy}. The presence of non-linear collective effects~\cite{Mirizzi:2015eza,Chakraborty:2016yeg,Horiuchi:2018ofe,Tamborra:2020cul,Capozzi:2022slf} is suppressed by the large flavor asymmetries of the neutronization burst~\cite{Mirizzi:2015eza}.
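For orientation, inserting current global-fit values of the mixing angles (e.g., $\sin^2\theta_{13}\simeq 0.022$ and $\sin^2\theta_{12}\simeq 0.31$~\cite{deSalas:2020pgw}; these numbers are quoted here by us for illustration) gives $p\simeq 0.02$ for NO and $p\simeq 0.31$ for IO. In NO the $\nu_e$ flux at Earth is thus almost entirely of $\nu_x$ origin, which is why the neutronization burst is strongly suppressed, while in IO roughly a third of the original $\nu_e$ flux survives.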
Earth matter regeneration effects also affect the neutrino propagation in case the SN is shadowed by the Earth as seen from the DUNE detector. The trajectories of the neutrinos depend on the SN location and on the time of the day at which the neutrino burst reaches the Earth. Neutrinos therefore travel a certain distance through the Earth characterized by a zenith angle $\theta$, analogous to the one usually defined for atmospheric neutrino studies. This convention assumes $\cos \theta=-1$ for upward-going events, \emph{i.e.} neutrinos that cross a distance equal to the Earth's diameter, and $\cos \theta\geq 0$ for downward-going neutrinos that are un-shadowed by the Earth. An analytical expression for the electron neutrino fluxes after crossing the Earth~\footnote{In what follows, we shall focus on electron neutrino events, the dominant channel in DUNE.} yields no modifications for NO.
In turn, for IO, an approximate formula for the $\nu_e$ survival probability in Eq.~\ref{eq:nue} after crossing the Earth, assuming that SN neutrinos have traveled a distance $L(\cos\theta)$ inside the Earth in a constant-density medium, reads as~\cite{Dighe:1999bi,Lunardini:2001pb}:
\begin{widetext}
\begin{eqnarray}
\label{eq:p2e}
p & = & \sin^2\theta_{12} + \sin2\theta^m_{12} \,
\sin(2\theta^m_{12}-2\theta_{12})
\sin^2\left(
\frac{\Delta m^2_{21} \sin2\theta_{12}}{4 E \,\sin2\theta^m_{12}}\,L(\cos\theta)
\right)\,,
\end{eqnarray}
\end{widetext}
\noindent
where $\theta^m_{12}$ is the effective value of the mixing angle $\theta_{12}$ in matter for neutrinos:
\begin{eqnarray}
\sin^2 2\theta^m_{12} = \frac{\sin^2 2\theta_{12}}
{\sin^2 2\theta_{12}+ \left(\cos 2\theta_{12}- \frac{2\sqrt{2}G_F N_e E}{\Delta m^2_{21}}\right)^2}~.
\end{eqnarray}
In the expression above, $N_e$ refers to the electron number density in the medium, $\sqrt{2}G_F N_e (\textrm{eV})\simeq 7.6 \times 10^{-14} Y_e\rho$, with $Y_e$ and $\rho$ the electron fraction and the Earth's density in g/cm$^3$ respectively.
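As an illustration (our simplified single-layer sketch; the actual computation uses the multi-layer Magnus-expansion method described next), Eq.~\ref{eq:p2e} can be evaluated for a constant-density trajectory as follows:
\begin{verbatim}
# nu_e survival probability of Eq. (8) in one constant-density layer.
import numpy as np

S12SQ, DM2 = 0.31, 7.42e-5    # sample sin^2(theta_12); dm2_21 in eV^2
TH12 = np.arcsin(np.sqrt(S12SQ))

def p_earth(E_MeV, L_km, Ye_rho=2.25):
    """Ye_rho = Y_e*rho in g/cm^3; valid while cos(2 th12) > matter term."""
    A = 2.0*7.6e-14*Ye_rho*(E_MeV*1e6)/DM2         # 2sqrt2 G_F N_e E/dm2
    s2m = np.sin(2*TH12)/np.hypot(np.sin(2*TH12), np.cos(2*TH12) - A)
    th12m = 0.5*np.arcsin(s2m)
    # phase = dm2 * sin(2 th12) * L / (4 E sin(2 th12m)), via
    # 1.267 * dm2[eV^2] * L[km] / E[GeV]
    phase = 1.267*DM2*L_km/(E_MeV*1e-3) * np.sin(2*TH12)/s2m
    return S12SQ + s2m*np.sin(2*th12m - 2*TH12)*np.sin(phase)**2

print(p_earth(20.0, 6371.0))  # >= 0.31: regeneration slightly raises p
\end{verbatim}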
Our numerical results are obtained by calculating $p$ in Eq.~\ref{eq:p2e} in the general case of neutrino propagation through multiple Earth layers, with sharp-edge discontinuities between different layers and a mild density dependence within a layer, see \cite{Lisi:1997yc,Fogli:2012ua}. Our method consists in evaluating the evolution operator for the propagation in a single layer using the Magnus expansion \cite{Magnus_exp}, where the evolution operator is written as the exponential of an operator series. In our case, we stop at the second order of the series. With the approximation that the electron density is a fourth-order polynomial in the Earth's radius, the integrals involved in the Magnus expansion become analytical. The evolution operator over the entire trajectory in the Earth is simply the product of the operators corresponding to each crossed layer.
The neutrino interaction rate per unit time and energy in the DUNE far detector is defined as:
\begin{equation}
\label{eq:rate_DUNE_fun}
R(t,E) = N_\text{target}~\sigma_{\nu_e\text{CC}}(E)~\epsilon(E)~\Phi_{\nu_e}(t,E)~,
\end{equation}
\noindent where $t$ is the neutrino emission time, $E$ is the neutrino energy, $N_\text{target}=\num{6.03e32}$ is the number of argon nuclei for a $40$ kton fiducial mass of liquid argon, $\sigma_{\nu_e\text{CC}}(E)$ is the $\nu_e$ cross-section, $\epsilon(E)$ is the DUNE reconstruction efficiency and $\Phi_{\nu_e}(t,E)$ is the electron neutrino flux reaching the detector per unit time and energy. The total number of expected events is given by $R\equiv \int R(t,E)\mathop{}\!\mathrm{d} t \mathop{}\!\mathrm{d} E$.
As far as cross-sections are concerned, liquid argon detectors are mainly sensitive to electron neutrinos via their charged-current interactions with $^{40}$Ar nuclei, $\nu_e + {^{40} Ar} \rightarrow e^{-} + {^{40} K^{*}}~$, through the observation of the final state electron plus the de-excitation products (gamma rays, ejected nucleons) from $^{40} K^{*}$. We use the MARLEY~\footnote{MARLEY (Model of Argon Reaction Low Energy Yields) is a Monte Carlo event generator for neutrino interactions on argon nuclei at energies of tens-of-MeV and below, see \url{http://www.marleygen.org/} and Ref.~\cite{Gardiner:2021qfr}.} charged-current $\nu_e$ cross-section on $^{40}$Ar, implemented in \texttt{SNOwGLoBES} \cite{snowglobes}. Concerning event reconstruction, we assume the efficiency curve as a function of neutrino energy given in Ref.~\cite{DUNE:2020zfm}, for the most conservative case quoted there of 5~MeV as deposited energy threshold.
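Schematically, the rate of Eq.~\ref{eq:rate_DUNE_fun} and its time-energy integral can be assembled as below (our sketch; \texttt{sigma\_cc}, \texttt{eff} and \texttt{flux\_nue} are placeholders for the MARLEY cross-section, the DUNE efficiency curve and the oscillated flux of Eq.~\ref{eq:nue}):
\begin{verbatim}
# Differential event rate R(t, E) of Eq. (10) and its integral.
from scipy.integrate import dblquad

N_TARGET = 6.03e32   # argon nuclei in the 40 kton fiducial mass

def rate(t, E, sigma_cc, eff, flux_nue):
    """Events per unit time (s) and energy (MeV)."""
    return N_TARGET * sigma_cc(E) * eff(E) * flux_nue(t, E)

def total_events(sigma_cc, eff, flux_nue, t_max=10.0, E_max=100.0):
    """R = int R(t, E) dt dE over the burst duration and energy range."""
    R, _ = dblquad(lambda E, t: rate(t, E, sigma_cc, eff, flux_nue),
                   0.0, t_max, lambda t: 0.0, lambda t: E_max)
    return R
\end{verbatim}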
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{IO_eventsperbin_1ms.pdf}
\caption{\label{fig:events}Number of $\nu_e$ events per unit time in the DUNE far detector. The plot zooms into the first 50~ms since core bounce. A SN distance of 10~kpc is assumed. Several histograms are shown: neglecting oscillations (both in the SN and in the Earth), as well as including oscillations for the NO and IO cases. For IO, we show the variation of the Earth matter effects with zenith angle $\theta$.}
\end{center}
\end{figure}
Figure~\ref{fig:events} shows the number of $\nu_e$ events as a function of emission time at the DUNE far detector from a SN explosion at $10$~kpc from Earth, for negligible time delays due to non-zero neutrino masses. We illustrate the case where no oscillations are considered. We also account for oscillations in NO and IO cases, the latter for several possible SN locations with respect to the Earth. The neutronization burst is almost entirely (partially) suppressed in the normal (inverted) mass ordering.
For a SN located at $D=10$~kpc from the Earth and without Earth matter effects, $R$ is found to be 860, 1372 and 1228 for the no oscillations, NO and IO cases, respectively. In other words, the largest total event rate is obtained for the largest swap of electron with muon/tau neutrinos in the SN interior, \emph{i.e.} the smallest value of $p$ in Eq.~\ref{eq:nue}, corresponding to the NO case. This can be understood from the larger average neutrino energy at production of muon/tau neutrinos compared to electron neutrinos, resulting in a higher (on average) neutrino cross-section and reconstruction efficiency.
Finally, as shown in Fig.~\ref{fig:events}, Earth matter effects are expected to have a mild effect on the event rate in all cases. The $\nu_e$ flux is left unchanged in the normal ordering, while Earth matter effects modify slightly the neutronization burst peak in the IO case. The total number of events becomes $R=1206, 1214, 1260, 1200$ for IO and $\cos\theta = -0.3,-0.5,-0.7,-1$, respectively.
\section{Neutrino mass sensitivity} \label{sec:likelihood}
In order to compute the DUNE sensitivity to the neutrino mass, we adopt an ``unbinned'' maximum likelihood method similar to the one in \cite{Pagliaroli:2010ik}.
We start by generating many DUNE toy experiment datasets (typically a few hundred) for each neutrino oscillation and SN distance scenario, assuming massless neutrinos. For each dataset, the time/energy information of the $R$ generated events is sampled following the parametrization of Eq.~\ref{eq:rate_DUNE_fun}, and events are sorted in time-ascending order.
Furthermore, we assume a $10\%$ fractional energy resolution in our $\mathcal{O}$(10~MeV) energy range of interest, see~\cite{DUNE:2020zfm}, and smear the neutrino energy of each generated event accordingly. We assume perfect time resolution for our studies. On the one hand, DUNE's photon detection system provides a time resolution better than 1~$\mu$s~\cite{DUNE:2020zfm}, implying a completely negligible smearing effect. On the other hand, even in the more conservative case of non-perfect matching between TPC and optical flash information, the DUNE charge readout alone yields a time resolution of order 1~ms~\cite{DUNE:2020ypp}. While not completely negligible, the time smearing is expected to have a small impact also in this case, considering the typical 25~ms duration of the SN neutronization burst.
Once events are generated for each DUNE dataset, we proceed with our minimization procedure. The two free parameters constrained in our fit are an offset time $t_\text{off}$ between the moment when the earliest SN burst neutrino reaches the Earth and the detection of the first event $i=1$, and the neutrino mass $m_\nu$. The fitted emission times $t_{i,fit}$ for each event $i$ depend on these two fit parameters as follows:
\begin{equation}
\label{eq:emission_t}
t_{i,fit} = \delta t_i - \Delta t_{i}(m_\nu) + t_\text{off}\,,
\end{equation}
where $\delta t_i $ is the time at which the neutrino interaction $i$ is measured in DUNE (with the convention that $\delta t_1\equiv 0$ for the first detected event), $\Delta t_i(m_\nu)$ is the delay induced by the non-zero neutrino mass (see Eq.~\ref{eq:delay}), and $t_\text{off}$ is the offset time. We do not include any free parameter describing the SN emission model uncertainties in our fit.
By neglecting backgrounds and all the constant (irrelevant) factors, our likelihood $\mathcal{L}$ function \cite{Pagliaroli:2008ur} reads as
\begin{equation}
\label{eq:likelihood_fun}
\mathcal{L}(m_{\nu},t_\text{off}) = \prod_{i=1}^{R}\int R(t_i,E_i)G_i(E)\mathop{}\!\mathrm{d} E~,
\end{equation}
\noindent where $G_i$ is a Gaussian distribution with mean $E_i$ and sigma $0.1E_i$, accounting for energy resolution. The estimation of the $m_\nu$ fit parameter is done by marginalizing over the nuisance parameter $t_\text{off}$. For each fixed $m_\nu$ value, we minimize the following $\chi^2$ function:
\begin{equation}
\label{eq:chi2_fun}
\chi^2(m_{\nu}) = -2 \log(\mathcal{L}(m_{\nu},t_\text{off,best}))~,
\end{equation}
\noindent where $\mathcal{L}(m_{\nu},t_\text{off,best})$ indicates the maximum likelihood at this particular $m_\nu$ value.
The final step in our analysis is the combination of all datasets for the same neutrino oscillation and SN distance scenario, to evaluate the impact of statistical fluctuations. For each $m_\nu$ value, we compute the mean and the standard deviation of all toy dataset $\chi^2$ values. In order to estimate the allowed range in $m_\nu$, the $\Delta\chi^2$ difference between all mean $\chi^2$ values and the global mean $\chi^2$ minimum is computed. The mean 95\% CL sensitivity to $m_\nu$ is then defined as the largest $m_\nu$ value satisfying $\Delta \chi^2<3.84$. The $\pm 1\sigma$ uncertainty on the 95\% CL $m_\nu$ sensitivity can be computed similarly, including into the $\Delta\chi^2$ evaluation also the contribution from the standard deviation of all toy dataset $\chi^2$ values.
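A highly simplified sketch of this fit logic (ours; the real analysis maximizes Eq.~\ref{eq:likelihood_fun} with the full energy response) profiles the nuisance parameter $t_\text{off}$ on a grid for each trial mass:
\begin{verbatim}
# Delta chi2(m_nu) profile, minimizing over t_off; rate_tE(t, E) is a
# vectorized model of R(t, E), e.g. a closure over the inputs of Eq. (10).
import numpy as np

KPC_M, C = 3.0857e19, 2.9979e8

def chi2_profile(t_det, E_det, rate_tE, D_kpc, m_grid, toff_grid):
    chi2 = []
    for m in m_grid:
        dt = (D_kpc*KPC_M)/(2*C) * (m/(E_det*1e6))**2  # per-event delay
        best = np.inf
        for toff in toff_grid:
            t_emit = t_det - dt + toff                 # Eq. (11)
            if np.any(t_emit < 0):   # emission before core bounce
                continue
            best = min(best, -2.0*np.sum(np.log(rate_tE(t_emit, E_det))))
        chi2.append(best)
    return np.array(chi2) - np.min(chi2)
\end{verbatim}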
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{chi2_10kpc.pdf}
\caption{\label{fig:chi2}$\Delta\chi^2(m_\nu)$ profiles as a function of neutrino mass $m_\nu$, for DUNE generated samples assuming massless neutrinos and a SN distance of 10~kpc. We show the no oscillations' case together with the results for NO and IO. The mean sensitivities and their $\pm 1\sigma$ uncertainties are shown with solid lines and filled bands, respectively. The horizontal dotted line depicts the $95\%$~CL.}
\end{center}
\end{figure}
\begin{table}
\centering
\caption{Mean and standard deviation of the $95\%$~CL sensitivity on neutrino mass from a sample of DUNE SN datasets at $D=10$~kpc, for different neutrino oscillation scenarios. For the IO case, we give sensitivities for different zenith angles $\theta$.}
\label{tab:m_nu_mass_bounds}
\begin{tabular}{@{\extracolsep{0.5cm}}ccc@{\extracolsep{0cm}}}
\toprule
Neutrino mass ordering & $\cos\theta$ & $m_\nu$(eV) \\
\midrule
No oscillations & $0$ & $0.51^{+0.20}_{-0.20}$ \\
\midrule
Normal Ordering & $0$ & $2.01^{+0.69}_{-0.55}$ \\
\midrule
\multirow{5}*{Inverted Ordering} & $0$ & $0.91^{+0.31}_{-0.33}$ \\
& $-0.3$ & $0.85^{+0.33}_{-0.30}$ \\
& $-0.5$ & $0.88^{+0.29}_{-0.33}$ \\
& $-0.7$ & $0.91^{+0.30}_{-0.32}$ \\
& $-1$ & $0.87^{+0.32}_{-0.28}$ \\
\bottomrule
\end{tabular}
\end{table}
Our statistical procedure, and its results for a SN distance of $D=10$~kpc, can be seen in Fig.~\ref{fig:chi2}. The $\Delta\chi^2$ profiles as a function of neutrino mass are shown for no oscillations, and for oscillations in the SN environment assuming either NO or IO. Earth matter effects have been neglected in all cases. After including Earth matter effects as previously described, only the IO expectation is affected. Table~\ref{tab:m_nu_mass_bounds} reports our results on the mean and standard deviation of the $m_{\nu}$ sensitivity values for different $\cos\theta$ values, that is, for different angular locations of the SN with respect to the Earth.
As can be seen from Fig.~\ref{fig:chi2} and Tab.~\ref{tab:m_nu_mass_bounds}, 95\% CL sensitivities in the 0.5--2.0~eV range are expected. The best, sub-eV reach, results are expected for the no oscillations and IO scenarios. Despite the largest overall event statistics, $R=1372$, the NO reach is the worst among the three cases, of order 2.0~eV. This result clearly indicates the importance of the shape information, in particular of the sharp neutronization burst time structure visible in Fig.~\ref{fig:events} only for the no oscillations and IO cases. Table~\ref{tab:m_nu_mass_bounds} also shows that oscillations in the Earth's interior barely affect the neutrino mass sensitivity.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{mass_sensitivity_comparison_errbar.pdf}
\caption{\label{fig:distance}Dependence of the $95\%$~CL neutrino mass sensitivity with the distance $D$ from Earth at which the SN explodes. The mean and standard deviation of the expected sensitivity values are shown with solid lines and filled bands, respectively.}
\end{center}
\end{figure}
Figure~\ref{fig:distance} shows how the $95\%$~CL sensitivity on the neutrino mass varies with the SN distance $D$. Both the mean and the standard deviation of the expected sensitivity values are shown. In all scenarios, the sensitivities to $m_\nu$ worsen by about a factor of 2 as the SN distance increases from 5 to 25~kpc. As is well known, as the distance $D$ increases, the reduced event rate ($R\propto 1/D^2$) tends to be compensated by the increased time delays for a given $m_\nu$ ($\Delta t_i(m_\nu)\propto D$). Our analysis shows that this compensation is only partial, and better sensitivities are obtained for nearby SNe.
\section{Conclusions} \label{sec:conclusions}
The capability to detect the electron neutrino flux component from a core-collapse SN in our galactic neighborhood makes large liquid argon detectors powerful observatories to obtain constraints on the absolute value of neutrino mass via time of flight measurements.
Exploiting the signal coming from charged-current interactions of $\nu_e$ with argon nuclei, a 0.9~eV sensitivity on the absolute value of neutrino mass has been obtained in DUNE for the inverted ordering (IO) of neutrino masses, a SN distance of 10~kpc and at 95\% CL. The sensitivity is expected to be significantly worse in the normal ordering (NO) scenario, 2.0~eV for the same SN distance and confidence level. The sensitivity difference between the two orderings demonstrates the benefit of detecting the $\nu_e$ neutronization burst, whose sharp time structure would be almost entirely suppressed in NO while it should be clearly observable in DUNE if the mass ordering is IO. The mild effects of oscillations induced by the Earth matter, affecting only the inverted mass ordering, and of the SN distance from Earth, have been studied. The DUNE sensitivity reach appears to be competitive with both laboratory-based direct neutrino mass experiments (such as KATRIN) and next-generation SN observatories primarily sensitive to the $\bar{\nu}_e$ flux component (such as Hyper-Kamiokande and JUNO).
\begin{acknowledgments}
This work has been supported by the Spanish grants FPA2017-85985-P, PROMETEO/2019/083 and PROMETEO/2021/087, and by the European ITN project HIDDeN (H2020-MSCA-ITN-2019/860881-HIDDeN). The work of FC is supported by GVA Grant No. CDEIGENT/2020/003.
\end{acknowledgments}
| {'timestamp': '2022-03-02T02:00:28', 'yymm': '2203', 'arxiv_id': '2203.00024', 'language': 'en', 'url': 'https://arxiv.org/abs/2203.00024'} |
\section{Introduction}
Spin-orbit interaction (SOI) plays an important role in the widely
studied spin-related effects and spintronic devices. In the latter
it can be either directly utilized to create spatial separation of
the spin-polarized charge carriers or indirectly influence the device
performance through the spin-decoherence time. In 2D structures two
kinds of SOI are known to be of the most importance, namely Rashba
and Dresselhaus mechanisms. The first one characterized by parameter
$\alpha$ is due to the structure inversion asymmetry (SIA) while the
second one characterized by $\beta$ is due to the bulk inversion
asymmetry (BIA). Both contributions reveal themselves most strikingly
when the values of $\alpha$ and $\beta$ are comparable.
In this case a number of interesting effects occur: the electron
energy spectrum becomes strongly anisotropic \cite{AnisotrSpectrum},
the electron spin relaxation rate becomes dependent on the spin
orientation in the plane of the quantum well
\cite{AverkievObserved}, and a magnetic breakdown should be observed in
the Shubnikov--de Haas effect \cite{magn}. The energy spectrum
splitting due to SOI can be observed in rather well-developed
experiments such as those based on the Shubnikov--de Haas effect. However,
these experiments can hardly distinguish the partial contributions of
the two mechanisms, leaving the determination of the relation between
$\alpha$ and $\beta$ a more challenging task. At the same
time, in some important cases the spin relaxation time $\tau_s$ and the spin
polarization strongly depend on the $\frac{\alpha}{\beta}$ ratio. In
this paper we consider the tunneling between 2D electron layers,
which turns out to be sensitive to the relation between Rashba and
Dresselhaus contributions. The specific feature of the tunneling in
the system under consideration is that the energy and in-plane
momentum conservation put tight restrictions on the tunneling.
Without SOI the tunneling conductance exhibits a delta-function-like
maximum at zero bias broadened by elastic scattering in the layers
\cite{MacDonald}, and fluctuations of the layers width
\cite{VaksoFluctuations}. Such a behavior was indeed observed in a
number of experiments \cite{Eisenstein,Turner,Dubrovski}. Spin-orbit
interaction splits the electron spectra into two subbands in each
layer. In that case energy and momentum conservation can be fulfilled for
the tunneling between opposite subbands of the layers at a finite
voltage corresponding to the subband splitting. However, if the
parameters of SOI are equal for left and right layers, the tunneling
remains prohibited due to orthogonality of the appropriate spinor
eigenstates. In \cite{Raichev} it was pointed out that this
restriction can also be eliminated if Rashba parameters are
different for the two layers. A structure design was proposed
\cite{Raikh} where exactly opposite values of the Rashba parameters
result from the built-in electric field in the left layer being
opposite to that in the right layer. Because the SOI of Rashba type
is proportional to the electric field, this would result in
$\alpha^R=-\alpha^L$, where $\alpha^L$ and $\alpha^R$ are the Rashba
parameters for the left and right layers respectively. In this case
the
peak of the conductance should occur at the voltage $U_0$ corresponding
to the energy of SOI: $eU_0=\pm2\alpha k_F$, where $k_F$ is Fermi
wavevector. In this paper we consider arbitrary Rashba and
Dresselhaus contributions and show how qualitatively different
situations can be realized depending on their partial impact. In
some cases the structure of the electron eigenstates suppresses
tunneling at every voltage. In such cases scattering is important, as it
restores the features of the current-voltage characteristic containing
information about the SOI parameters. Finally, the parameters $\alpha$
and $\beta$ can be obtained in the tunneling experiment which unlike
other spin-related experiments requires neither magnetic field nor
polarized light.
\section{Calculations}
We consider two 2D electron layers separated by potential barrier at
zero temperature (see Fig.\ref{fig:layers}). We shall consider only
one level of size quantization and not too narrow barrier so that
the electrons wavefunctions in the left and right layers overlap
weakly.
The system can be described by the phenomenological tunneling Hamiltonian \cite{MacDonald,MacDonald2,VaksoFluctuations}
\begin{figure}[h]
\leavevmode
\centering\epsfxsize=180pt \epsfbox[30 530 500 760]{fig1.eps}
\caption{\label{fig:layers} Energy diagram of two 2D electron
layers.}
\end{figure}
\begin{equation}
\label{HT0} H=H_{0}^L+H_{0}^R+H_T,
\end{equation}
where $H_{0}^L,H_{0}^R$ are the partial Hamiltonians for the left
and right layers respectively, $H_T$ is the tunneling term. With
account for the elastic scattering and SOI in the layers the partial
Hamiltonians and the tunneling term have the following form in
representation of secondary quantization:
\begin{equation}
\label{eqH}
\begin{array}{l}
H_{0}^l = \sum\limits_{k,\sigma} {\varepsilon^l_{k} c^{l+}_{k\sigma}
c^l_{k\sigma } } + \sum\limits_{k,k',\sigma} {V^l_{kk'} c^{l+}_{k\sigma}c^l_{k'\sigma }} + H^l_{SO} \\
H_T = \sum\limits_{k,k',\sigma,\sigma'} {T_{kk'\sigma\sigma'}\left( {c^{L+}_{k\sigma} c^{R}_{k'\sigma'} + c^{R+}_{k'\sigma'} c^L_{k\sigma} } \right)}, \\
\end{array}
\end{equation}
Here index $l$ is used for the layer designation and can take the
values $l=R$ for the right layer, $l=L$ for the left layer. By $k$
here and further throughout the paper we denote the wavevector
aligned parallel to the layers planes, $\sigma$ denotes spin
polarization and can take the values $\sigma=\pm 1/2$.
$\varepsilon_k^l$ is the energy of an electron in the layer $l$
having in-plane wavevector $k$. It can be expressed as:
\begin{equation}
\label{spectrum}
\varepsilon _k^l = \varepsilon+\varepsilon_0^l+\Delta^l,
\end{equation}
where $\varepsilon=\frac{\hbar^2k^2}{2m}$, $m$ being electron's
effective mass, $\varepsilon_0^l$ and $\Delta^l$ are the size
quantization energy and the energy shift due to external voltage for
the layer $l$. We shall also use the value $\Delta^{ll'}$ defined
as
$\Delta^{ll'}=(\Delta^l-\Delta^{l'})+(\varepsilon_0^l-\varepsilon_0^{l'})$.
Similar
notation will be used for spin polarization denoted by indices $\sigma$, $\sigma'$.
The second term in the Hamiltonian (\ref{eqH}) $V_{kk'}^l$ is the matrix element of the scattering operator.
We consider only elastic scattering. The tunneling
term $H_T$ in (\ref{eqH}) is described by the tunneling constant
$T_{kk'\sigma\sigma'}$, which
has the meaning of the size quantization level splitting due to
the wavefunction overlap. By lowercase $t$ we shall denote the
overlap integral itself. Our consideration is valid only for the
case of weak overlapping, i.e. $t\ll1$. Parametrically $T\sim
t\varepsilon_F$, where $\varepsilon_F$ is the electrons Fermi
energy. The term $H^{l}_{SO}$ describes the spin-orbit part of the
Hamiltonian:
\begin{equation}
\label{eqSOH}
\hat{H}^l_{SO}=\alpha^l \left( \bm{\sigma} \times \bm{k}
\right)_z + \beta^{l} \left( {\sigma _x k_x - \sigma _y k_y }
\right),
\end{equation}
where $\sigma_i$ are the Pauli matrices, $\alpha^l,\beta^l$ are
respectively the parameters of Rashba and Dresselhaus interactions
for the layer $l$. In the secondary quantization representation:
\begin{eqnarray}
\hat {H}_{SO}^l =\alpha^l \sum\limits_k {\left( {k_y
-ik_x } \right)c_{k\sigma }^{l+} c_{k\sigma '}^l +} \left( {k_y
+ik_x }
\right)c_{k\sigma '}^{l+} c_{k,\sigma }^l \nonumber \\
+\beta^l \sum\limits_k
{\left( {k_x -ik_y } \right)c_{k\sigma }^{l+} c_{k\sigma '}^l +}
\left( {k_x +ik_y } \right)c_{k\sigma '}^{l+} c_{k\sigma }^l
\label{eqSOHc}
\end{eqnarray}
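For orientation, diagonalizing the single-layer SOI term (\ref{eqSOH}) gives the standard anisotropic spin splitting (a textbook result, quoted here for convenience):
\[
\varepsilon_\pm^l(k,\varphi) = \frac{\hbar^2k^2}{2m} \pm k\sqrt{\left(\alpha^l\right)^2+\left(\beta^l\right)^2+2\alpha^l\beta^l\sin 2\varphi},
\]
where $\varphi$ is the polar angle of $\bm{k}$ in the layer plane. The $\sin2\varphi$ combinations entering the coefficients (\ref{constants}) below originate from exactly this angular dependence.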
The operator of the tunneling current can be expressed as
\cite{MacDonald}:
\begin{equation}
\label{current0}
\hat{I} = \frac{{ie}}{\hbar
}\sum\limits_{k,k',\sigma,\sigma'} T_{kk'\sigma\sigma'}
\left(\hat\rho_{kk'\sigma\sigma'}^{RL}-\hat\rho_{k'k\sigma'\sigma}^{LR}
\right),
\end{equation}
where
$\hat\rho_{kk'\sigma\sigma'}^{ll'}=c_{k,\sigma}^{l+}c_{k',\sigma'}^{l'}$
We shall assume the case of in-plane momentum and the spin
projection being conserved in the tunneling event so the tunneling
constant $T_{kk'\sigma\sigma'}$ has the form
$T_{kk'\sigma\sigma'}=T\delta_{kk'}\delta_{\sigma\sigma'}$, where
$\delta$ is the Kronecker symbol. The tunneling current is then
given by
\begin{equation}
\label{current}
I = \frac{ie}{\hbar}
T \int dk\: \mathrm{Tr} \left( \left<\hat\rho^{RL}_{k\sigma}\right>
-\left<\hat\rho^{LR}_{k\sigma}\right>\right),
\end{equation}
where $\langle\cdot\rangle$ denotes the quantum-mechanical expectation
value. For further calculations it is convenient to introduce the vector
operator
$\bm{\hat{S}}^{ll'}_{kk'}=\left\{\hat{S}_0,\bm{\hat{s}}\right\}=\left\{\mathrm{Tr}\left(\hat\rho^{ll'}_{kk'\sigma\sigma'}\right),\mathrm{Tr}\left({\bm
\sigma}\hat\rho^{ll'}_{kk'\sigma\sigma'}\right) \right\}$. This
vector fully determines the current because the latter can be
expressed through the difference
$\hat{S}^{RL}_{0k}-\hat{S}^{LR}_{0k}$. The time evolution of
$\bm{\hat{S}}^{ll'}_{kk'}$ is governed by:
\begin{equation}
\label{drodt}
\frac{d\bm{\hat{S}}_{kk'}^{ll'}}{dt}=\frac{i}{\hbar}[H,\bm{\hat{S}}_{kk'}^{ll'}]
\end{equation}
In the standard way of reasoning \cite{Luttinger} we assume
adiabatic onset of the interaction with characteristic time
$w^{-1}$. We will set $w=0$ in the final expression. With this
(\ref{drodt}) turns into:
\begin{equation}
\label{drodt0}
(\bm{\hat{S}}_{kk'}^{ll'}-\bm{\hat{S}}_{kk'}^{(0)ll'})w=\frac{i}{\hbar}[H,\bm{\hat{S}}_{kk'}^{ll'}]
\end{equation}
Here $\bm{\hat{S}}_{kk'}^{(0)ll'}$ represents the stationary
solution of (\ref{drodt}) without interaction. By interaction here
we mean the tunneling and the elastic scattering by impurities but
not the external voltage. The role of the latter is merely shifting
the layers by $eU$ on the energy scale. From the interaction so
defined it immediately follows that the only non-zero elements
of $\bm{\hat{S}}_{kk'}^{(0)ll'}$ are that with $l=l'$ and $k=k'$. In
further abbreviations we will avoid duplication of the indices i.e.
write single $l$ instead of $ll$ and $k$ instead of $kk$:
\begin{equation}
\label{Sdiag}
\bm{\hat{S}}_{kk'}^{(0)ll'}=\bm{\hat{S}}_{k}^{(0)l}\delta_{kk'}\delta_{ll'}
\end{equation}
With use of fermion commutation rules
\begin{eqnarray*}
\left\{ {c_i c_k } \right\} = \left\{ {c_i^ + c_k^ + } \right\} = 0 \\
\left\{ {c_i c_k^ + } \right\} = \delta _{ik}
\end{eqnarray*}
the calculations performed in a way similar to \cite{Luttinger}
bring us to the following system of equations
with respect to
$\bm{\hat{S}}_{k}^{ll'}$:
\begin{eqnarray}
0= \left( {\Delta^{ll'}+i\hbar w } \right){\bf{\hat
S}}_k^{ll'} + T\left( {{\bf{\hat S}}_k^{l'} - {\bf{\hat S}}_k^l }
\right)+{\bf{M(}}k{\bf{)\hat S}}_k^{ll'} \nonumber \\
- \sum\limits_{k'} {\left( {\frac{{A_{kk'} {\bf{\hat S}}_k^{ll'} -
B_{kk'} {\bf{\hat S}}_{k'}^{ll'} }}{{ {\varepsilon' - \varepsilon
-\Delta^{ll'} } + i\hbar w}} + \frac{{B_{kk'} {\bf{\hat S}}_k^{ll'}
- A_{kk'} {\bf{\hat S}}_{k'}^{ll'} }}{{ {\varepsilon -
\varepsilon' -\Delta^{ll'} } + i\hbar w}}} \right)}
\label{system1}
\end{eqnarray}
\begin{eqnarray}
i\hbar w\left( {{\bf{\hat S}}_k^{\left( 0 \right)l} - {\bf{\hat
S}}_k^l } \right) = T\left( {{\bf{\hat S}}_k^{l'l} - {\bf{\hat
S}}_k^{ll'} } \right) + {\bf{M}}(k){\bf{\hat S}}_k^l \nonumber \\ +
\sum\limits_{k'} { {\frac{{2i\hbar wA_{kk'} \left( {{\bf{\hat
S}}_k^l - {\bf{\hat S}}_{k'}^{l'} } \right)}}{{\left( {\varepsilon'
- \varepsilon } \right)^2 + \left( {\hbar w} \right)^2 }}} },
\label{system2}
\end{eqnarray}
where $\bm{M}$ is a known matrix, depending on $k$ and parameters of
spin-orbit interaction in the layers. Here we also introduced the
quadratic forms of the impurities potential matrix elements:
\begin{eqnarray}
A_{kk'} \equiv \left| {V_{k'k}^{l} } \right|^2 \nonumber \\
B_{kk'} \equiv V_{k'k}^{l} V_{kk'}^{l'}
\label{correlators}
\end{eqnarray}
As (\ref{system1}) and (\ref{system2}) comprise a system of linear
integral equations these quantities enter the expression
(\ref{current}) for the current linearly and can be themselves
averaged over spatial distribution of the impurities. In order to
perform this averaging we assume the short range potential of
impurities:
\begin{equation}
\label{ImpuritiesPotential} V\left( r \right) = \sum\limits_a
{V_0^{} \delta \left( {r - r_a } \right)}
\end{equation}
The averaging immediately shows that the correlators
$\left<A_{kk'}\right>\equiv A$ and $\left<B_{kk'}\right>\equiv B$
have different parametrical dependence on the tunneling transparency
$t$, namely
\begin{equation}
\label{T2}
\frac{B}{A}\sim t^{2}\sim T^2
\end{equation}
We emphasize that this result holds for non-correlated distribution
of the impurities as well as for their strongly correlated
arrangement such as a thin layer of impurities placed in the middle
of the barrier. The corresponding expressions for these two cases
are given below. Index 'rand' stands for uniform impurities
distribution and 'cor' for their correlated arrangement in the
middle of the barrier $(z=0)$:
\begin{eqnarray}
{B^{rand} } = \frac{{V_0^2 n}}{W}\int {dz}
f_l ^2 (z)f_{l'} ^2 (z)\sim\frac{{V_0^2
n}}{W}\frac{{t^2 }}{d} \nonumber \\
{A^{rand} }
= \frac{{V_0^2 n}}{W}\int {dz} f_l^4\left(z\right)
\sim\frac{{V_0^2 n}}{W}\frac{1}{d}
\nonumber \\
{B^{cor} } = \frac{{V_0^2 n_s }}{W}f_l ^2 (0)f_{l'} ^2
(0)\sim\frac{{V_0^2 n_s}}{W}\frac{{t^2 }}{d}
\nonumber \\
{A^{cor} } = \frac{{V_0^2 n_s
}}{W}f_l ^4 \left( 0 \right)\sim\frac{{V_0^2 n_s}}{W}\frac{1}{d},
\label{correlators1}
\end{eqnarray}
where $n$ and $n_s$ are bulk and surface concentrations of the
impurities, $W$ is the lateral area of the layers, $d$ is the width
of the barrier and $f(z)$ is the eigenfunction corresponding to the
size quantization level, $z$ is coordinate in the direction normal
to the layers planes, $z=0$ corresponding to the middle of the
barrier\cite{Raikh}.
Unlike \cite{Raikh}, and according to (\ref{T2}), we
conclude that the correlator $\left<B_{kk'}\right>$ has to be
neglected since we are interested in calculating the
current to order $T^2$. In the method of calculation used here
this result appears quite naturally; however, it can be
similarly traced in the technique used in \cite{Raikh} (see
Appendix). For the same reason the tunneling term should be dropped
from (\ref{system2}), as it would give second order in $T$ when
(\ref{system2}) is substituted into (\ref{system1}). According to
(\ref{correlators}), $A$ can be expressed in terms of the electron
scattering time:
\begin{equation}
\label{tau} \frac{1}{\tau } = \frac{{2\pi }}{\hbar }\nu\left\langle
{\left| {V_{kk'} } \right|^2 } \right\rangle = \frac{{2\pi
}}{\hbar }\nu A ,
\end{equation}
where $\nu$ is the 2D density of states $\nu=\frac{m}{2\pi\hbar^2}$.
By means of a Fourier transformation in the energy variable the system
(\ref{system1}),(\ref{system2}) can be reduced to the system of
linear algebraic equations. Finally ${{\bf{\hat S}}_k^{ll'} }$ can
be expressed as a function of ${{\bf{\hat S}}_k^{\left( 0 \right)l}
}$. Consequently the current (\ref{current}) becomes a function of
$\left<\hat{\rho}_{k\sigma}^{(0)R}\right>$,
$\left<\hat{\rho}_{k\sigma}^{(0)L}\right>$. For the considered case
of zero temperature:
\[
\left<\rho _{k\sigma}^{(0)l}\right> = \frac{1}{2W} \theta \left(
{\varepsilon _F^l + \Delta ^l - \varepsilon - \varepsilon _\sigma }
\right),
\]
where
\[
\varepsilon _\sigma = \pm \left| {\alpha ^l \left( {k_x - ik_y }
\right) - \beta ^l \left( {ik_x - k_y } \right)} \right|,
\]
Without loss of generality we shall consider the
case of identical layers and external voltage applied as shown in
Fig.\ref{fig:layers}:
\begin{eqnarray*}
\varepsilon_0^R=\varepsilon_0^L\\
\Delta^L=-\frac{eU}{2}, \Delta^R=+\frac{eU}{2}\\
\Delta^{RL}=-\Delta^{LR}=eU
\end{eqnarray*}
The calculations can be simplified with account for
two small parameters:
\begin{eqnarray}
\xi=\frac{\hbar}{\varepsilon_F\tau}\ll1 \nonumber \\
\eta=\frac{eU}{\varepsilon_F}\ll1 \label{deltaef}
\end{eqnarray}
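As a quick numerical check (our arithmetic, with the parameter values adopted in the next section: $\varepsilon_F=10$ meV, $\tau=2\times10^{-11}$ s):
\[
\xi=\frac{\hbar}{\varepsilon_F\tau}\approx\frac{6.6\times10^{-16}~\mathrm{eV\,s}}{10^{-2}~\mathrm{eV}\times 2\times10^{-11}~\mathrm{s}}\approx 3\times10^{-3}\ll1,
\]
so the expansion in these small parameters is well justified for the structures considered.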
With (\ref{deltaef}) the calculation yields the following expression for
the current:
\begin{equation}
\label{currentfinal0} I = \frac{{ie}}{{2\pi \hbar }}T^2 \nu
\int\limits_0^\infty {\int\limits_0^{2\pi } {\left( {\zeta ^L +
\zeta ^R } \right)\mathrm{Tr}\left( {\rho _\sigma ^{\left( 0
\right)R} - \rho _\sigma ^{\left( 0 \right)L} } \right)d\varepsilon
d\varphi } },
\end{equation}
where
\[
\zeta ^l = \frac{{C^l \left[ {\left( {C^l }
\right)^2 - 2bk^2 \sin2\varphi - gk^2 } \right]}}{{\mathop {\left(
{f + 2d\sin2\varphi } \right)}\nolimits^2 k^4 - 2\left( {C^l }
\right)^2 \left( {c + 2a\sin2\varphi } \right)k^2 + \left( {C^l }
\right)^4 }}, \]
\[ C^l\left(U\right) = \Delta ^l + i\frac{\hbar
}{\tau },
\]
\begin{eqnarray}
a = \alpha ^L \beta ^L + \alpha ^R \beta ^R \nonumber \\
b = \left( {\beta ^L + \beta ^R } \right)\left( {\alpha ^L + \alpha ^R } \right)\nonumber \\
c = \left( {\beta ^L } \right)^2 + \left( {\beta ^R } \right)^2 + \left( {\alpha ^L } \right)^2 + \left( {\alpha ^R } \right)^2 \nonumber \\
d = \alpha ^L \beta ^L - \alpha ^R \beta ^R \nonumber \\
f = \left( {\beta ^L } \right)^2 - \left( {\beta ^R } \right)^2 + \left( {\alpha ^L } \right)^2 - \left( {\alpha ^R } \right)^2 \nonumber \\
g = \mathop {\left( {\beta ^L + \beta ^R } \right)}\nolimits^2 + \mathop {\left( {\alpha ^L + \alpha ^R } \right)}\nolimits^2 \nonumber \\
\label{constants}
\end{eqnarray}
Parameters $a$-$g$ are various combinations of the Rashba and
Dresselhaus parameters of SOI in the layers. Both types of SOI are
known to be small in real structures so that:
\begin{equation}
\alpha k_F\ll\varepsilon_F, \; \beta k_F\ll\varepsilon_F
\end{equation}
This additional assumption together with (\ref{deltaef}) reduces
(\ref{currentfinal0}) to
\begin{equation}
\label{currentfinal} I = \frac{{ie^2 }}{{2\pi \hbar }}T^2 \nu
WU\int\limits_0^{2\pi } {\left[ {\zeta ^L \left( {\varepsilon_F }
\right) + \zeta ^R \left( {\varepsilon_F } \right)} \right]d\varphi
}
\end{equation}
The integral over $\varphi$ in (\ref{currentfinal}) can be
calculated analytically by means of complex variable integration.
However, the final result for arbitrary $\alpha^l,\beta^l$ is not
given here for it is rather cumbersome. In the next section some
particular cases are discussed.
\section{Results and Discussion}
The obtained general expression (\ref{currentfinal}) can be
simplified for a few particular important relations between Rashba
and Dresselhaus contributions. These calculations reveal
qualitatively different dependencies of the d.c. tunneling current
on the applied voltage.
\begin{figure}[h]
\leavevmode
\centering\epsfxsize=210pt \epsfbox[130 350 700 800]{fig2.eps}
\caption{\label{fig:tunnelingmain}Tunneling conductance, a:
$\varepsilon_F=10$ meV, $\alpha=\beta=0$, $\tau=2\times10^{-11}$ s; b:
same as a, but $\alpha k_F=0.6$ meV; c: same as b, but
$\beta=\alpha$; d: same as c, but $\tau=2\times10^{-12}$ s.}
\end{figure}
The results of the calculations shown below were obtained using the
following parameters: Fermi energy $\varepsilon_F=10$ meV,
spin-orbit splitting was taken to resemble GaAs structures:
$\alpha k_F=0.6$ meV.
\subsection{No Spin-Orbit Interaction}
In the absence of SOI ($\alpha^R=\alpha^L=0$, $\beta^R=\beta^L=0$) the
energy spectrum for each of the layers forms a paraboloid:
\begin{equation}
E^l(k)=\varepsilon_0+\frac{\hbar^2k^2}{2m}\pm \frac{eU}{2}.
\end{equation}
According to our assumptions (\ref{current0}),(\ref{current}), the tunneling takes place at:
\begin{eqnarray}
E^R=E^L\nonumber \\
k^R=k^L
\label{conservation}
\end{eqnarray}
Both conditions are satisfied
only at $U=0$, so that a nonzero external voltage does not produce any current
even though it produces empty states in one layer aligned with the filled states in the other layer
(Fig.\ref{fig:layers}). The momentum conservation restriction in (\ref{conservation}) is weakened if the electrons scatter at the impurities.
Accordingly, one should expect a nonzero tunneling current
within a finite voltage range in the vicinity of zero.
For the considered case the general formula (\ref{currentfinal}) is simplified radically as all the parameters (\ref{constants})
have zero values. Finally we get the well-known
result\cite{MacDonald}:
\begin{equation}
\label{currentMacDonald}
I = 2e^2 T^2 \nu
WU\frac{{\frac{1}{\tau }}}{{\left( {eU} \right)^2 + \left(
{\frac{\hbar }{\tau }} \right)^2 }}. \end{equation}
The conductance defined as $G(U)=I/U$ has Lorentz-shaped peak at $U=0$
turning into delta function at $\tau\rightarrow\infty$.
This case is shown in Fig.\ref{fig:tunnelingmain},a.
All the curves in Fig.\ref{fig:tunnelingmain} show the results of the
calculations for very weak scattering. The corresponding scattering
time is taken as $\tau=2\times10^{-11}$ s.
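To give a feeling for the voltage scale of this peak (our arithmetic, based on Equation (\ref{currentMacDonald})): the Lorentzian half-width is set by
\[
e\,\Delta U \sim \frac{\hbar}{\tau} = \frac{6.6\times10^{-16}~\mathrm{eV\,s}}{2\times10^{-11}~\mathrm{s}} \approx 3.3\times10^{-5}~\mathrm{eV},
\]
i.e. the zero-bias conductance peak is only a few tens of microvolts wide for this scattering time.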
\subsection{Spin-Orbit Interaction of Rashba type}
The spin-orbit interaction gives
a qualitatively new option for the d.c. conductance to be finite at
non-zero voltage. SOI splits the spectra into two subbands. Now an
electron from the first subband of the left layer can tunnel to a
state in a second subband of the right layer. Let us consider a
particular case when only Rashba type of SOI interaction exists in
the system, its magnitude being the same in both layers, i.e.
$|\alpha^R|=|\alpha^L|\equiv \alpha$, $\beta^R=\beta^L=0$. In this
case the spectrum splits into two paraboloid-like subbands ``inserted''
into each other. Fig.\ref{fig:spectraRashba} shows their
cross-sections for both layers;
arrows show the spin orientation. By applying a certain external
voltage $U_0=\frac{2\alpha k_F}{e}$,
$k_F=\frac{\sqrt{2m\varepsilon_F}}{\hbar}$ the layers can be shifted
on the energy scale in such a way that the cross-section of the
"outer" subband of the right layer coincides with the "inner"
subband of the left layer (see solid circles in
Fig.\ref{fig:spectraRashba}). Then both conditions
(\ref{conservation}) are satisfied. However, if the spin is taken
into account, the interlayer transition can still remain forbidden.
It happens if the appropriate spinor eigenstates involved in the
transition are orthogonal. This very case occurs if
$\alpha^R=\alpha^L$; consequently, the conductance behavior remains
the same as that without SOI. On the contrary, if the Rashba terms are of
opposite signs, i.e. $\alpha^R=-\alpha^L$, the spin orientations
in the ``outer'' subband of the right layer and the ``inner'' subband of
the left layer are the same and the tunneling is allowed at a finite
voltage but forbidden at $U=0$. This situation, pointed out in
\cite{Raichev,Raikh} should reveal itself in sharp maxima of the
conductance at $U=\pm U_0$ as shown in
Fig.\ref{fig:tunnelingmain},b. The value of
$\alpha$ can then be extracted immediately from the position of the peak.
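For the GaAs-like parameters adopted above ($\alpha k_F=0.6$ meV), our arithmetic places this peak at
\[
eU_0 = 2\alpha k_F = 1.2~\mathrm{meV} \quad\Rightarrow\quad U_0 = 1.2~\mathrm{mV},
\]
a bias easily resolved against the $\sim30~\mu$V scattering broadening estimated earlier.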
Evaluating (\ref{constants}) for this case and further the
expression (\ref{currentfinal}) we obtain the following result for
the current:
\begin{equation}
\label{currentRaikh} I = \frac{{2e^2T^2 W\nu U\frac{\hbar }{\tau
}\left[ {\delta^2 + e^2 U^2 + \left( {\frac{\hbar }{\tau }}
\right)^2 } \right]}}{{\left[ {\left( {eU - \delta } \right)^2 +
\left( {\frac{\hbar }{\tau }} \right)^2 } \right]\left[ {\left( {eU
+ \delta } \right)^2 + \left( {\frac{\hbar }{\tau }} \right)^2 }
\right]}},
\end{equation}
where $\delta=2\alpha k_F$. The result is in agreement with that
derived in \cite{Raikh}, taken for an uncorrelated spatial arrangement
of the impurities. As we have already noted we do not take into
account interlayer correlator $\left<B_{kk'}\right>$
($\ref{correlators}$) because parametrically it has higher order of
tunneling overlap integral $t$ than the intralayer correlator
$\left<A_{kk'}\right>$. Therefore the result (\ref{currentRaikh}) is
valid for arbitrary degree of correlation in spatial distribution of
the impurities in the system.
\begin{figure}[h]
\leavevmode
\centering\epsfxsize=220pt \epsfbox[130 500 700 800]{fig3.eps}
\caption{\label{fig:spectraRashba}Cross-section of electron energy spectra in the left(a) and right (b) layer for
the case
$\alpha^{L}=-\alpha^{R}, \beta^{L}=\beta^{R}=0$.}
\end{figure}
It is worth noting that the opposite case when only Dresselhaus type
of SOI exists in the system leads to the same results. However, it
is rather impractical to study the case of different
Dresselhaus parameters in the layers, because this type of SOI
originates from the crystallographic asymmetry and therefore cannot
be varied if the structure composition is fixed. For this case to be
realized one needs to make the two layers of different materials.
\subsection{Both Rashba and Dresselhaus contributions}
The presence of Dresselhaus term in addition to the Rashba
interaction can further modify the tunneling conductance in a
non-trivial way. A special case occurs if the magnitude of the
Dresselhaus term is comparable to that of the Rashba term. We shall
always assume the Dresselhaus contribution being the same in both
layers: $\beta^{L}=\beta^{R}\equiv\beta$. Let us add the Dresselhaus
contribution to the previously discussed case so that
$\alpha^{L}=-\alpha^{R}\equiv\alpha,\;\alpha=\beta$. The
corresponding energy spectra and spin orientations are shown in
Fig.\ref{fig:spectraRD}. Note that while the spin orientations in
the initial and final states are orthogonal for any transition
between the layers, the spinor eigenstates are not, so that the
transitions are allowed whenever the momentum and energy
conservation requirement (\ref{conservation}) is fulfilled. It can
be also clearly seen from Fig.\ref{fig:spectraRD} that the condition
(\ref{conservation}), meaning overlap of the cross-sections a. and
b. occurs only at few points. This is unlike the previously
discussed case where the overlapping occurred within the whole
circular cross-section shown by solid lines in
Fig.\ref{fig:spectraRashba}. One should naturally expect the
conductance for the case presently discussed to be substantially
lower. Using (\ref{currentfinal}) we arrive at a rather cumbersome
expression for the current:
\begin{figure}[h]
\leavevmode
\centering\epsfxsize=220pt \epsfbox[130 500 700 810]{fig4.eps}
\caption{\label{fig:spectraRD}Cross-section of electron energy
spectra in the left(a) and right (b) layer for
the case
$\alpha^{R}=-\alpha^L=\beta$.}
\end{figure}
\begin{eqnarray}
I = eT^2 W\nu U\left[ {\frac{{G_ - \left(
{G_ - ^2 - \delta ^2 } \right)}}{{\sqrt {F_ - \left( {\delta ^4 +
F_ - } \right)} }} - \frac{{G_ + \left( {G_ + ^2 - \delta ^2 }
\right)}}{{\sqrt {F_ + \left( {\delta ^4 + F_ + } \right)} }}}
\right], \label{CurrentSpecial}
\end{eqnarray}
where \begin{eqnarray*}
G_ \pm = eU \pm i\frac{\hbar }{\tau } \\
F_ \pm = G_ \pm ^2 \left( {G_ \pm ^2 - 2\delta^2 } \right).
\end{eqnarray*}
Alternatively, for the case of no interaction with impurities a
precise formula for the transition rate between the layers can be
obtained by means of Fermi's golden rule. We obtained the following
expression for the current:
\begin{equation}
\label{CurrentPrecise} I = \frac{{2\pi eT^2 W}}{{\hbar \alpha ^2
}}\left( {\sqrt {K + \frac{{8m\alpha ^2 eU}}{{\hbar ^2 }}} - \sqrt
{K - \frac{{8m\alpha ^2 eU}}{{\hbar ^2 }}} } \right),
\end{equation} where
\[
K = 2\delta^2 - e^2 U^2 + \frac{{16m^2 \alpha ^4 }}{{\hbar ^4 }}
\]
Comparing the results obtained from (\ref{CurrentSpecial}) and
(\ref{CurrentPrecise}) is an additional test for the correctness of
(\ref{CurrentSpecial}). Both dependencies are presented in
Fig.\ref{fig:goldenRule} and show a good match. The same dependence
of conductance on voltage is shown in Fig.\ref{fig:tunnelingmain},c.
As can be clearly seen in the figure the conductance is indeed
substantially suppressed in the whole voltage range. This is
qualitatively different from all previously mentioned cases.
Furthermore, the role of the scattering at impurities appears to be
different as well. For the cases considered above, characterized by
resonance behavior of the conductance, the scattering broadens the
resonances into Lorentz-shaped peaks with the characteristic voltage width
$\hbar/(e\tau)$. On the contrary, for the last case the weakening of
momentum conservation, caused by the scattering, increases the
conductivity and restores the manifestation of SOI in its dependence
on voltage. Fig.\ref{fig:tunnelingmain},d shows this dependence for
a shorter scattering time $\tau=2\times10^{-12}$ s. The reason for that is
the weakening of the momentum conservation requirement due to the
elastic scattering. One should now consider the overlap of the
spectra cross-sections, with the circles in Fig.\ref{fig:spectraRD} having
a certain thickness proportional to $\tau^{-1}$. This increases the
number of points at which the overlap occurs and, consequently, the
value of the tunneling current. As the calculations show, for
arbitrary $\alpha$ and $\beta$ the dependence of conductance on
voltage can exhibit various complicated shapes with a number of
maxima, being very sensitive to the relation between the two
contributions. The origin of such a sensitivity is the interference
of the angular dependencies of the spinor eigenstates in the layers.
A few examples of such interference are shown in
Fig.\ref{fig:variousRD}, a--c. All the dependencies shown were
calculated for the scattering time $\tau=2\times10^{-12}$ s.
Fig.\ref{fig:variousRD},a summarizes the results for all previously
discussed cases of SOI parameters, i.e. no SOI (curve 1), the case
$\alpha_R=-\alpha_L, \beta=0$ (curve 2) and
$\alpha_R=-\alpha_L=\beta$ (curve 3). Owing to the magnitude of
$\tau$, all the resonances are broadened compared to those shown in
Fig.\ref{fig:tunnelingmain}. Fig.\ref{fig:variousRD},b (curve 2)
demonstrates the conductance calculated for the case
$\alpha_L=-\frac{1}{2}\alpha_R=\beta$, Fig.\ref{fig:variousRD},c
(curve 2) -- for the case $\alpha_L=\frac{1}{2}\alpha_R=\beta$. The
curve 1 corresponding to the case of no SOI is also shown in all the
figures for reference. Despite a significant scattering parameter
all the patterns shown in Fig.\ref{fig:variousRD} remain very
distinctive. That means that in principle the relation between the
Rashba and Dresselhaus contributions to SOI can be extracted merely
from the I-V curve measured in a proper tunneling experiment.
\begin{figure}[h]
\leavevmode
\centering\epsfxsize=190pt \epsfbox[130 350 700 800]{fig5.eps}
\caption{\label{fig:goldenRule}Tunneling conductance calculated for
the case $\alpha^R=-\alpha^L=\beta$ and very weak scattering
compared to the precise result obtained through Fermi's golden rule
calculation.}
\end{figure}
\begin{figure}[h]
\hfill
\begin{minipage}[t]{0.5\textwidth}
\begin{center}
\centering\epsfxsize=170pt \epsfbox[70 650 266 801]{fig6a.eps}
\nonumber
\end{center}
\end{minipage}
\hfill
\begin{minipage}[t]{0.5\textwidth}
\begin{center}
\epsfxsize=170pt \epsfbox[70 650 266 801]{fig6b.eps}
\nonumber
\end{center}
\end{minipage}
\hfill
\begin{minipage}[t]{0.5\textwidth}
\begin{center}
\epsfxsize=170pt \epsfbox[70 650 266 801]{fig6c.eps}
\end{center}
\end{minipage}
\caption{\label{fig:variousRD}Tunneling conductance calculated for various parameters of
SOI}
\end{figure}
\section{Summary}
As we have shown, in the system of two 2D electron layers separated
by a potential barrier SOI can reveal itself in the tunneling
current. The difference in spin structure of eigenstates in the
layers results in a sort of interference in the tunneling
conductance. The dependence of tunneling conductance on voltage
appears to be very sensitive to the parameters of SOI. Thus, we
propose a way to extract the parameters of SOI and, in particular,
the relation between Rashba and Dresselhaus contributions in the
tunneling experiment. We emphasize that unlike many other
spin-related experiments the manifestation of SOI studied in this
paper should be observed without an external magnetic field. Our
calculations show that the interference picture may be well resolved
for GaAs samples with the scattering times down to $\sim 10^{-12}$
s; in some special cases the scattering even restores the traces of
SOI otherwise not seen due to destructive interference.
\section*{ACKNOWLEDGEMENTS}
This work has been supported in part by RFBR, President
of RF support (grant MK-8224.2006.2) and Scientific Programs of RAS.
| {'timestamp': '2007-10-24T13:36:13', 'yymm': '0710', 'arxiv_id': '0710.4435', 'language': 'en', 'url': 'https://arxiv.org/abs/0710.4435'} |
\section{Introduction}
Nano-manufacturing by polymer self-assembly has attracted interest in recent
decades due to its wide applications~\cite{FINK:1998}. The numerical simulation
of this process can be used to research the mechanisms of phase separation of
polymer blends and predict the unobservable process states and unmeasurable
material properties. The mathematical principles and numerical simulation of
self-assembly via phase separation have been extensively
studied~\cite{SCOTT:1949,HSU:1973,CHEN:1994,HUANG:1995,ALTENA:1982,ZHOU:2006,TONG:2002,HE:1997,MUTHUKUMAR:1997,KARIM:1998}. But few specific software
toolkits have been developed to efficiently investigate this phenomenon. \par
A computer program is developed in MATLAB for the numerical simulation of
polymer blend phase separation.
With this software, the mechanisms of the phase separation are investigated.
Also, the mobility, the gradient energy coefficient, and the surface energy
in the experiment are estimated with the numerical model. The software can
evaluate the physical parameters in the numerical model by implementing the
real experimental parameters and materials properties. The numerical simulation
results can be analyzed with the software and the results from the simulation
software can be validated with the experimental results. \par
\begin{figure}[!ht]
\centering
\includegraphics[width=\textwidth]{fg_gui_screen_shot.eps}
\caption{Screenshot of the simulation program graphical user interface.
\label{fg_gui_screenshot}}
\clearpage
\end{figure}
\section{Fundamentals}
The numerical model for phase separation of polymer blends is established and
validated with experimental results in our previous work~\cite{SHANG:2010}. The free energy profile during the phase separation in an inhomogeneous mixture
is described by the Cahn-Hilliard
equation~\cite{CAHN:1958, CAHN:1959, CAHN:1961, CAHN:1965}, as shown below,
\begin{equation}
F(C_1,C_2,C_3)=\int_{V} \left\{ f(C_1,C_2,C_3)+\displaystyle\sum_{i=1,2,3} [\kappa_i (\nabla C_i)^2] \right\} dV \label{cahn_hilliard_intro}
\end{equation}
where $f$ is the local free energy density of the homogeneous material, $C_i$
is the lattice volume fraction of component $i$, and $\kappa_i$ is the gradient
energy coefficient for the component $i$. The total free energy of the system
is composed of two terms, as shown in Equation~\ref{cahn_hilliard_intro}. The
first term is the local free energy and the second is the composition gradient
contribution to the free energy. \par
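For completeness, the chemical potential that drives the dynamics below follows from the variational derivative of Equation~\ref{cahn_hilliard_intro} (a standard step, spelled out here):
\[
\mu_i=\frac{\delta F}{\delta C_i}=\frac{\partial f}{\partial C_i}-2\kappa_i\nabla^2 C_i,
\]
which is the combination appearing inside the brackets of Equation~\ref{eq6_paper2} below. \par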
In our study, the local free energy is in the form of the Flory-Huggins equation,
which is well known and studied for polymer blends~\cite{HUANG:1999}.
The ternary Flory-Huggins equation is shown as follows,
\begin{equation}
\begin{split}
f(C_1,C_2,C_3)
&= \frac{RT}{v_{site}}\bigg( \frac{C_1}{m_1}\ln{C_1}+\frac{C_2}{m_2}\ln{C_2} + C_3\ln{C_3} \\
& +\chi_{12}C_1C_2+\chi_{13}C_1C_3+\chi_{23}C_2C_3\bigg)
\label{eq_flory_huggins_intro}
\end{split}
\end{equation}
where $R$ is the ideal gas constant, $T$ is the absolute temperature,
$v_{site}$ is the lattice site volume in the Flory-Huggins model, $m_i$ is the
degree of polymerization of component $i$, and $C_i$ is the composition for the
component $i$. \par
There are some parameters in the numerical model which cannot be measured
directly, such as the gradient energy coefficient and the mobility. These
parameters have to be estimated from the experimental
parameters. The gradient energy coefficient, $\kappa$, determines the influence
of the composition gradient on the total free energy of the domain.
The value of $\kappa$ is difficult to measure experimentally. Though efforts
have been made by Saxena and Caneba~\cite{SAXENA:2002} to estimate the
gradient energy coefficient in a ternary polymer system from experimental
methods, few experimental results are published for our conditions. Initially,
the value of $\kappa$ can be estimated by the interaction distance between
molecules~\cite{WISE_THESIS:2003},
\begin{equation}
\kappa=\frac{RTa^2}{3v_{site}}\label{eq_gradient_energy_coefficient}
\end{equation}
where $a$ is the monomer size. A modified equation to calculate $\kappa$
considering the effects of the composition is reported by de
Gennes~\cite{GENNES:1980}.
\begin{equation}
\kappa_i=\frac{RTa^2}{36v_{site}C_i}
\end{equation}
where the subscript, $i$, represents component $i$. \par
The mobility is estimated from the diffusivity of the components. The mobility
of the polymer blends with long chains can be estimated by the equation as
follows~\cite{GENNES:1980},
\begin{equation}
M_i=\frac{C_i}{m_i}\frac{D_mN_ev_{site}}{RT}
\end{equation}
where $m_i$ is the degree of polymerization as stated before, $D_m$ is the
diffusivity of the monomer, and $N_e$ is the effective number of monomers per
entanglement length. Because of the scarce experimental data for $N_e$, a more
generalized form is employed for our study,
\begin{equation}
M=\frac{Dv_{site}}{RT}\label{eq_mobility}
\end{equation}
The time evolution of the composition of component $i$ can be represented
as~\cite{HUANG:1995,BATTACHARYYA:2003,GENNES:1980,SHANG:2009},\par
\begin{equation}
\begin{split}
\frac{\partial C_i}{\partial t}
&= M_{ii}\left[ \frac{\partial f}{\partial C_i}-\frac{\partial f}{\partial C_3}-2\kappa_{ii}\nabla^2C_i-2\kappa_{ij}\nabla^2C_j\right] \\
& +M_{ij}\left[ \frac{\partial f}{\partial C_j}-\frac{\partial f}{\partial C_3}-2\kappa_{ji}\nabla^2C_i-2\kappa_{jj}\nabla^2C_j \right]
\end{split}\label{eq6_paper2}
\end{equation}
where the subscripts $i$ and $j$ represent components 1 and 2, and\par
\begin{equation}
\begin{aligned}
M_{ii}=&(1-\overline{C}_i)^2M_i+\overline{C}_i^2\displaystyle\sum_{j\neq i}M_j\qquad i=1,2;j=1,2,3\\
M_{ij}=&-\displaystyle\sum_{i\neq j}\left[(1-\overline{C}_i)\overline{C}_j\right]M_i+\overline{C}_i\overline{C}_jM_3\qquad i=1,2;j=1,2
\end{aligned}
\end{equation}
where $\overline{C}_i$ is the average composition of component $i$. To simplify
the solution of Equation \ref{eq6_paper2}, $\kappa_{ii}=\kappa_i+\kappa_3$, and
$\kappa_{12}=\kappa_{21}=\kappa_3$, where $\kappa_i$ is the gradient energy
coefficient in Equation~\ref{eq_gradient_energy_coefficient}. \par
A detailed discussion and practical scientific cases with this software can
be found in our previous
works~\cite{SHANG:2008,SHANG:2009,SHANG:2009THESIS}.\par
\section{The MATLAB Program for Simulation of Polymer Phase Separation}
\subsection{Design Principles}
The program is developed in MATLAB m-code. A graphical user interface (GUI) is
implemented in the program, created with the MATLAB GUI editor. MATLAB is widely
used in scientific computation and has many toolkits and commonly used
mathematical functionalities. By implementing the software in MATLAB, the
efficiency of development is greatly improved. Also, the program
is cross-platform. \par
The software is designed for daily use by simulation and experimental
scientists. The program is lightweight and programmed with high computational
efficiency so that it can produce significant scientific results on a common PC.
It is also extensible to a parallel version or to code that uses the high
computational performance of GPUs. The GUI is implemented so that the users can
conveniently input the experiment parameters. The results as well as the user
settings can be saved and revisited by the program. Also, for better assistance
to a real production environment, the simulation model is carefully designed,
so that the users provide the real processing and material parameters and the
program will produce quantitative results comparable to experimental results.
Analytical tools are also provided with the program for post-processing of the
results. \par
\subsection{Numerical Methods}
To solve the partial differential equation, the discrete cosine transform
spectral method is employed. The discrete cosine transform (DCT) is applied
to the right-hand side and left-hand side of Equation~\ref{eq6_paper2}. The
partial differential equation in ordinary space is then transformed into an
ordinary differential equation in frequency space. When the ODE in frequency
space is solved, the results are transformed back to ordinary
space. \par
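The paper does not list the solver code, so the following minimal sketch illustrates the semi-implicit spectral update described above, under simplifying assumptions not taken from the text: periodic boundaries (plain FFT in place of the DCT), dimensionless parameters, and a generic double-well local free energy instead of the ternary Flory-Huggins density.
\begin{verbatim}
% Minimal sketch of semi-implicit spectral time stepping for a
% Cahn-Hilliard-type equation dC/dt = M*lap( df/dC - 2*kappa*lap(C) ).
% Illustrative parameter values only (not from the paper).
N = 128; M = 1.0; kappa = 1e-2; dt = 1e-4; nsteps = 5000;
k = [0:N/2-1, -N/2:-1];           % integer wavenumbers on [0, 2*pi)
[KX, KY] = meshgrid(k, k);
K2 = KX.^2 + KY.^2;               % |k|^2, Laplacian eigenvalues (sign flipped)
C = 0.01*randn(N);                % small random initial composition
for step = 1:nsteps
    dfdC = C.^3 - C;                          % double-well f'(C)
    rhs  = -M*K2.*fft2(dfdC);                 % explicit nonlinear term
    Chat = (fft2(C) + dt*rhs) ./ ...
           (1 + 2*M*kappa*dt*K2.^2);          % implicit 4th-order term
    C = real(ifft2(Chat));
end
\end{verbatim}
Treating the stiff fourth-order term implicitly while keeping the nonlinear term explicit allows far larger stable time steps than a fully explicit scheme, which is part of why the spectral approach outpaces conventional discretizations here. \par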
Compared to the conventional finite element method, the spectral method is more
efficient and accurate. This method enabled the program to solve the equation
in a reasonable computation time and to investigate the changes of the phase
separation during a real time span long enough to observe the phase evolution.
The spectral method is only
applied to the spatial coordinates, since the time length of the evolution is
not predictable. In fact, the real time for phase evolution is usually one of
the major concerns as a result of the simulation. \par
The DCT takes a considerable portion of the computation time. Especially in a
3-dimensional numerical model, the 3-dimensional DCT with a conventional
approach has a complexity of $O(n^3)$, which is not practical for real
application on a PC. To overcome this computational difficulty, the code can
either be translated to C code embedded in MATLAB m-scripts, or a different
mathematical approach can be implemented. In this program, the DCT is
calculated from the fast Fourier transform (FFT), which is optimized in
MATLAB. \par
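Since the text only states that the DCT is computed via the FFT, the following 1-D sketch shows one standard way to do it (Makhoul's even-odd reordering); the program's actual routine and normalization may differ (MATLAB's built-in \texttt{dct} is orthonormalized, this sketch is not).
\begin{verbatim}
function X = dct_via_fft(x)
% Unnormalized DCT-II of a real vector via a single length-N FFT,
% using the standard even-odd reordering (Makhoul's algorithm).
% X(k+1) = 2*sum_n x(n+1)*cos(pi*k*(2n+1)/(2N)),  k = 0..N-1.
x = x(:);
N = numel(x);
v = [x(1:2:N); flipud(x(2:2:N))];     % even samples, then reversed odd samples
V = fft(v);
k = (0:N-1).';
X = 2*real(exp(-1i*pi*k/(2*N)) .* V); % phase twiddle recovers the cosine sum
end
\end{verbatim}
Applying such a routine along each array dimension in turn yields the multi-dimensional DCT at FFT cost; the inverse transform uses the conjugate phase factor followed by \texttt{ifft}. \par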
\subsection{Quantitative Simulation with Real Experimental Parameters}
Many previous numerical simulations of self-assembly by polymer blend
phase separation are qualitative rather than quantitative. Their results can only
be used to provide non-quantitative suggestions to the experiments. In contrast, this
program implements a numerical model which quantitatively simulates the
experimental results with the real processing and material parameters. Most of the
inputs to this program can be directly measured or read from the instrument
or material labels. For some of the physical parameters, such as $\kappa$ and
the mobility, the program can provide a starting value from the calculation with
the theoretical model. The user may need to validate the value by comparing
the simulation results to the experimental results. Eventually, a more accurate
estimate can be found with optimization methods by setting the difference
between the simulation and experiment results as the cost function. \par
Besides the parameters in Cahn-Hilliard equation, other effects such as the
evaporation, substrate functionalization, and the degree of polymerization are
also implemented with the real conditions. The final results are saved and
summarized. The characteristic length of result pattern from simulation and its
compatibility with the substrate functionalization are calculated. These
numbers can be used to compare with the experimental results. \par
\subsection{Data Visualization and Results Analysis}
When running the program, messages from the software will be output to the
working console of MATLAB. The messages will show the current state and real
time results of the simulation. Also, when the simulation is started, the phase
pattern will be plotted in a real time plot window. Users can set the frequency
of real time plot and the scale factor on the domain of the contour plot in
the GUI. The results of the simulation will be saved to a folder designated by
the user. The real time plot will be saved to the result folder. The
quantitative results will be saved as several comma separated values (CSV)
text files. The result folder can be loaded into the analysis toolkit of the
program and the user can view the assessment values such as the characteristic
length, the compatibility parameters, and the composition profile wave in depth
direction with convenient plotting tools. Usually these results such as the
composition profile in each direction in the domain are difficult to observe
in experimental results. \par
\begin{figure}[!ht]
\centering
\includegraphics[width=\textwidth]{fg_gui_running.eps}
\caption{The simulation is running with the real time plot of the
current ternary phase morphology.
\label{fg_gui_running}}
\clearpage
\end{figure}
\section{Examples}
To demonstrate the capability of this program, example simulation cases are
shown in this paper. The results of numerical simulation have been validated
with the experimental results in our previous work~\cite{SHANG:2010}. To
compare the simulated results with a real experimental system, we directed
the morphologies of polystyrene (PS) / polyacrylic acid (PAA) blends using
chemically heterogeneous patterns. More specifically, alkanethiols with
different chemical functionalities were patterned by electron beam
lithography, which were then used to direct the assembly of PS/PAA blends
during the spin coating from their mutual solvent~\cite{MING:2009}. The
experimental conditions are implemented into the numerical simulation. The
effects such as the substrate functionalization and the solvent evaporation
are involved in the numerical modeling.
The parameters difficult to measure are acquired with optimization
methods~\cite{SHANG:2009}.
\par
Sophisticated techniques are required to investigate the composition profile in
the depth of the polymer film~\cite{GEOGHEGAN:2003}. Since the numerical
simulation results provide the composition at each position of the
film, the composition profile change in the depth direction can be easily accessed.
To investigate the composition wave along the direction perpendicular to the
film surface, a thick film is implemented in the numerical simulation. This kind
of film is difficult to fabricate and characterize in experiments, whereas
in the numerical modeling the user only needs to change the mesh grid domain size.
The depth profiles with different substrate functionalization are shown in
Figure~\ref{fg_thick_film}, where $|f_s|$ denotes the surface energy term from the
substrate functionalization. This term will be added to the total free energy
on the interface of the polymer film and the substrate. The initial thickness
of the film is 1 mm and decreases to 8 $\mu m$ due to the evaporation of the
solvent. The thickness results are scaled by 0.5 to fit in the figures. It can
be seen that a higher surface interaction force can result in a faster substrate
directed phase separation in the film. A stronger substrate interface attraction
force can direct the phase separation morphology near the substrate surface. While
with a lower surface energy, the phase separation dynamics in the bulk of the
film overcomes the substrate attraction force. It can be seen that at 30 seconds,
the substrate functionalization has little effects on the morphology on the
substrate surface. Also, the checker board structure can be seen near the
substrate surface with a higher surface energy~\cite{KARIM:1998}. \par
\begin{figure}[!ht]
\centering
\includegraphics[width=\textwidth]{fg_thick_film.eps}
\caption{The phase separation in a thick film. \label{fg_thick_film}}
\clearpage
\end{figure}
To investigate the effects of a more complicated pattern, a larger domain is
simulated. The pattern applied on the substrate surface is
shown in Figure~\ref{fg_chn_pattern}. The substrate pattern is designed to
investigate the effects of various shapes and contains components such as
squares, circles, and dead end lines in different sizes. The initial surface
dimensions of the model are changed to 12$\mu m\times$12$\mu m$. The initial
thickness of the film is 1mm and shrinks during the solvent evaporation. The
elements in the modelling is 384$\times$384$\times$16. The average composition
ratio of PS/PAA is changed to 38/62 to match the pattern. The result patterns
from the simulation can be seen in Figure~\ref{fg_complicated_patterns}. \par
\begin{figure}[!ht]
\centering
\includegraphics[width=\textwidth]{fg_chn_pattern.eps}
\caption{The substrate pattern with complicated features.
\label{fg_chn_pattern}}
\clearpage
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=\textwidth]{fg_complicated_patterns.eps}
\caption{The effects of complicated substrate patterns.
\label{fg_complicated_patterns}}
\clearpage
\end{figure}
It can be seen that in a larger domain with complicated substrate patterns, the
attraction factor has to be increased to obtain a better replication. In
general, the increase of the attraction factor will increase the refinement of
the pattern according to the substrate pattern. But since the substrate pattern
has geometrical features in different sizes, the attraction factor has to be
strong enough to force the intrinsic phase separation with unified
characteristic length to match the substrate pattern in different sizes. This
would be the main challenge to the replication of complicated patterns. It has
been reported by Ming et al.~\cite{MING:2009} that the addition of the
copolymer can improve the refinement of the final patterns in experiments. The
reason is that the PAA-b-PS block copolymer will concentrate in the interface
of the PS and PAA domains in the phase separation, therefore decreasing the
mixing free energy. Fundamentally, the addition of the block copolymer
increased the miscibility of the two polymers. To simulate these phenomena, the
Flory-Huggins interaction parameter is decreased from 0.22 to 0.1 to increase
the miscibility of PS/PAA in the modelling. The result pattern is also shown in
Figure~\ref{fg_complicated_patterns}, in comparison to the cases without the
addition of block copolymers. It can be seen that the refinement of the phase
separated pattern is improved by the addition of the block copolymer. The $C_s$
values of the phase separation with the complicated pattern are measured and plotted
in Figure~\ref{fg_cs_complicated_patterns}. \par
\begin{figure}[!ht]
\centering
\includegraphics[width=\textwidth]{fg_cs_complicated_patterns.eps}
\caption{The effects of complicated substrate patterns. \label{fg_cs_complicated_patterns}}
\clearpage
\end{figure}
An assessment parameter, $C_s$, the compatibility parameter, is introduced to
evaluate the replication of the morphology with respect to the substrate pattern, where
a higher $C_s$ value denotes a better
replication of the polymer film morphology according to the substrate pattern.
It can be seen in Figure~\ref{fg_cs_complicated_patterns} that the $C_s$ value
for the system with the block copolymer is 7.69E-01, which is higher than for the
system without the block copolymer when the attraction forces are the same. The
decrease of the Flory-Huggins interaction parameter increases the miscibility
of the polymers, which will decrease the miscibility gap of the polymers, as
can be seen in Equation~\ref{eq_flory_huggins_intro}. The two phases at
equilibrium will be less concentrated in the different types of polymer. This
issue may need attention when the interaction parameter of the two
polymers is changed. \par
\section{Conclusion}
A computer program for simulation of polymer self-assembly with phase separation
is introduced. The program is developed in MATLAB m-code and designed to
assist scientists in real working environments. The program is able to
simulate the experimental results quantitatively with real experimental
parameters. The unmeasurable physical parameters such as the gradient energy
coefficient and the mobility can be estimated with the program. The program
provides a graphical user interface and analytical toolkits. This program
can help scientists research polymer phase separation mechanisms and
dynamics with high efficiency, convenience of usage, quantitative results
analysis, and validated reliability.
\section{Acknowledgement}
The authors thank Liang Fang and Ming Wei for providing
help with the experimental procedures. The authors also appreciate the
valuable suggestions and comments from other users and testers of this program.
This project is a part of the research in Center of High-rate Nanomanufacturing,
sponsored by National Science Foundation (grant number NSF-0425826).
\bibliographystyle{unsrt}
| {'timestamp': '2010-07-09T02:00:25', 'yymm': '1007', 'arxiv_id': '1007.1254', 'language': 'en', 'url': 'https://arxiv.org/abs/1007.1254'} |
\section{Introduction}
Sunspot oscillations are a significant phenomenon observed in the solar atmosphere. The study of these oscillations started in 1969 \citep{1969SoPh....7..351B}, when non-stationary brightenings in the CaII H and K lines were discovered. These brightenings were termed umbral flashes (UFs). Furthermore, \citet{1972ApJ...178L..85Z} and \citet{1972SoPh...27...71G}, using observations in the $H\alpha$ line wing, discovered ring structures in sunspots. These structures propagated from the umbral centre to the penumbral outer boundary with a three-minute periodicity. The authors referred to these background structures as running penumbral waves (RPWs). Below, at the photosphere level, the oscillation spectrum shows a wide range of frequencies with a peak near five-minute oscillations. These frequencies are coherent, which indicates that the umbral brightness varies within this range as a whole \citep{2004A&A...424..671K}. Also, there exist low-frequency 10-40 minute components in sunspots \citep{2009A&A...505..791S, 2008ASPC..383..279B, 2013A&A...554A.146K}. Their nature has remained in doubt so far.
Observations in \cite{2002A&A...387L..13D} showed that the emission in magnetic loops anchored in a sunspot has a $\sim$172 sec periodicity, which indicates that photospheric oscillations in the form of waves can penetrate through the transition zone upwards into the corona. According to \cite{1977A&A....55..239B}, the low-frequency waves oscillating at the subphotospheric level (p-modes) propagate through natural waveguides such as concentrations of magnetic elements (e.g. sunspots and pores). Their oscillation period may be modified by a cut-off frequency mechanism. \cite{1984A&A...133..333Z} showed that oscillations with a frequency lower than the cut-off frequency fade quickly. The main factor affecting the cut-off frequency is the inclination of the field lines along which the wave propagation occurs. We can observe five-minute oscillations both in chromospheric spicules \citep{2004Natur.430..536D} and in the coronal loops of active regions \citep{2005ApJ...624L..61D, 2009ApJ...702L.168D}. Further investigations of low-frequency oscillations in the higher layers of the solar atmosphere \citep{2009ASPC..415...28W, 2009ApJ...697.1674M, 2011SoPh..272..101Y} corroborated the assumption that their emergence at such heights is a consequence of wave channelling along the inclined magnetic fields. The observed propagation speed of the disturbances indicates slow magneto-acoustic waves \citep{2009A&A...505..791S, 2012SoPh..279..427K}.
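For reference, the standard inclination-modified acoustic cut-off underlying this argument \citep{1977A&A....55..239B} is
\[
\nu_c(\theta)=\nu_{c,0}\cos\theta\approx 5.2~\mathrm{mHz}\times\cos\theta,
\]
taking a photospheric cut-off of $\approx5.2$ mHz (our own numerical illustration): five-minute waves ($\nu\approx3.3$ mHz) can propagate upwards along field lines inclined by $\theta\gtrsim50^\circ$ from the vertical, while in the nearly vertical umbral field only three-minute waves pass.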
For high-frequency oscillations, the sources with less than three-minute period are localized in the umbra, and they decrease in size as the period decreases \citep{2008SoPh..248..395S, 2014A&A...569A..72S, 2014A&A...561A..19Y, 2012ApJ...757..160J}. Here, in the umbral central part, where the field is almost perpendicular to the solar surface and there is no divergence of the field line bundle, we see the footpoints of the elementary magnetic loops in the form of oscillating cells \citep{2014AstL...40..576Z}. The main mechanism that determines their power is related to the presence of the subphotospheric and chromospheric resonator in the sunspot. Outside the central part, where the field inclination starts to manifest itself, the mechanism of the cut-off frequency change begins to operate.
Sunspot oscillations are also expressed in the form of UFs \citep{1969SoPh....7..351B, 1969SoPh....7..366W}, whose emission manifests itself most clearly in the cores of chromospheric lines. A number of papers \citep{2007PASJ...59S.631N, 2007A&A...463.1153T, 2003A&A...403..277R, 2001ApJ...552..871L, 2000Sci...288.1396S, 1983SoPh...87....7T, 1981A&A...102..147K} have studied this phenomenon. \cite{2010ApJ...722..888B} assumed that UFs are induced by upward-propagating magneto-acoustic waves that are converted into shocks. Photospheric oscillations become more abrupt as the waves move into a medium with lower density and transform into a shock front, thus heating the ambient medium. The temperature in the surroundings of a UF source surpasses the ambient values by 1000 K, which results in brightenings of individual umbral sites of the order of several arcsec. On these scales, one also observes sunspot umbral magnetic field variations, although there is no visible confirmation of field line inclination variations or of changes in their common configuration throughout these processes \citep{2003A&A...403..277R}. Recent observations have shown the presence of very small jet-like spatial details of less than 0.1 Mm in the sunspot umbra. Their positions are apparently related to the footpoints of single magnetic loops, along which sunspot oscillations propagate \citep{2014ApJ...787...58Y}.
Umbral flashes are also related to the running wave phenomenon in the sunspot penumbra. This phenomenon is observed in the $\mathrm{H}\alpha$ and He lines \citep{2007ApJ...671.1005B} and in CaII \citep{2013A&A...556A.115D} in the form of travelling spatial structures moving horizontally, radially from the umbra towards the outer penumbral boundary \citep{2000A&A...355..375T, 2003A&A...403..277R}. The waves that propagate along the field lines are non-stationary, with the oscillation power changing both in time and in space \citep{2010SoPh..266..349S}. This results in a noticeable periodic modulation of the emission by the three-minute waves propagating at the footpoints of magnetic loops. A possible consequence of such modulation is the emergence of both low-frequency wave trains and individual brightness maxima of the oscillations observed as UFs.
In this study, we analysed the association between the spatial distribution of sunspot UF sources and the spatial structure of the field lines anchored in the umbra. To better understand the association between the activation of oscillations and the emergence of flashes, we studied the dynamics of the three-minute oscillations in UF sources. To localize the propagating wave fronts within magnetic waveguides, we used the method of pixelized wavelet filtration (PWF technique) \citep{2008SoPh..248..395S}. The paper is arranged as follows: in Section 1, we introduce the subject; in Section 2, we describe the observational data and processing methods; in Section 3, we present the data analysis and the obtained results; in Section 4, we discuss the processes of flash evolution; and in Section 5, we draw conclusions from the obtained results.
\section{Observations and data processing}
To study the connection between UFs and sunspot oscillations, we used observations from the Solar Dynamics Observatory (SDO/AIA) \citep{2012SoPh..275...17L} obtained with high spatial and temporal resolution. We studied four active regions with developed sunspots at the maximum of their wave activity. To obtain the location of the UF sources in space and height, we used the observations of January 26, 2015 (NOAA 12268, 01:00-04:00 UT), January 10, 2016 (NOAA 12480, 01:00-04:00 UT), and March 27, 2016 (NOAA 12526, 01:00-04:00 UT). A more comprehensive analysis was carried out for the observations of December 08, 2010 (NOAA 11131, 00:00-03:20 UT).
We used calibrated and centred images of the Sun (level 1.5) at various wavelengths. The observations were performed in the UV (1600 \AA) and EUV (304 \AA, 171 \AA) ranges with cadences of 24 s and 12 s, respectively. The pixel size was 0.6 \arcsec. The differential rotation of the investigated regions during the observation was removed by using the Solar Software.
We built time-distance plots along the detected UF sources to search for a correlation between the propagation of the background wavefronts and the UF emergence. The precise values of the revealed oscillation periods were obtained with the Fourier method. For 2D processing of the waves and for obtaining their time dynamics, we used the PWF technique. The spectral algorithm applied in this method enabled us to search for waves throughout the sunspot and to trace the direction of their propagation.
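For illustration, the construction of a time-distance plot from an image cube can be sketched as follows; the array name, the cut endpoints, and the nearest-pixel sampling are our assumptions for this example rather than the actual pipeline used.
\begin{verbatim}
# A minimal sketch of a time-distance plot from an AIA image cube.
# 'cube' has shape (nt, ny, nx); the cut endpoints are hypothetical.
import numpy as np

def time_distance(cube, x0, y0, x1, y1, nsamples=100):
    # Sample the intensity along a fixed cut for every frame.
    xs = np.linspace(x0, x1, nsamples)
    ys = np.linspace(y0, y1, nsamples)
    ix = np.round(xs).astype(int)
    iy = np.round(ys).astype(int)
    # Rows: time; columns: position along the cut.
    return cube[:, iy, ix]

# Example: a 3-hour series at 24 s cadence gives 450 rows.
# td = time_distance(cube, 120, 80, 150, 95)
\end{verbatim}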
Using the helioseismologic method of calculating the time lag of the propagating three-minute wavefronts relative to each other \citep{2014A&A...569A..72S} enabled us to establish the height attribution of the SDO/AIA temperature channels. The 1600 \AA ~channel records the emission at the levels of the upper photosphere and the transition region, with temperatures of 6000 K and $10^{5}$ K, respectively. However, the main sensitivity of the channel and, correspondingly, the minimum wave lag at upward propagation, falls on the emission arriving from the lower atmosphere. This channel often shows dotted, fine-structure details brightening at the magnetic footpoints of field lines. Regions with a high concentration of field lines appear dark, particularly near sunspots and active regions. The 304 \AA ~(He II) channel shows bright regions at the level of the upper chromosphere and lower transition region, where the plasma has a high density. The characteristic temperature of the channel is about 50000 K. This channel is best suited to studying various oscillation processes in the solar atmosphere, particularly in sunspots, where the power of the three-minute oscillations reaches its maximum. To observe the coronal magnetic structures, we used observations at the 171 \AA ~(Fe IX) wavelength. This emission arrives from the quiet corona and from the upper transition region, with a temperature of about $10^{6}$ K.
\section{Results}
We investigated the emergence of short-time recurrent umbral brightness flashes by using the unique capability of the SDO/AIA temperature channels to receive emission from different heights of the sunspot atmosphere. This allowed us to obtain, for the first time, information on the UF source distribution throughout the umbra and to understand their height location. To test the stability of the recurrent UF source locations and their visibility at different heights, we built variation maps for the different SDO/AIA temperature channels. These maps show the distribution of the signal variation relative to its mean value at each image pixel throughout the observational time.
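A variation map of the kind described above can be computed, as a minimal sketch (the cube name and the normalization by the temporal mean are our assumptions), as:
\begin{verbatim}
# Sketch: variation of each pixel's light curve relative to its mean.
import numpy as np

def variation_map(cube):
    # cube: (nt, ny, nx) image time cube
    mean = cube.mean(axis=0)
    return cube.std(axis=0) / mean

# Logarithmic scaling, as in the figures, enhances weak umbral sources:
# vmap_log = np.log10(variation_map(cube))
\end{verbatim}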
\subsection{Spatial and height location of UFs}
\begin{figure}
\begin{center}
\includegraphics[width=9.0 cm]{Fig1.eps}
\end{center}
\caption{Upper panels: Snapshots of the UFs in sunspot active regions on January 26, 2015 (01:57:54 UT), January 10, 2016 (01:33:52.6 UT), and March 27, 2016 (01:49:28.6 UT) obtained by SDO/AIA (1600 \AA). The dashed black rectangles show the umbral regions. The arrows indicate the UF sources. Middle panels: The corresponding sunspot regions at 171 \AA. The original maps (contours) are overlaid on the variation maps (colour background) of the UV emission obtained during the observation. Asterisks denote the locations of the UF sources. Bottom panels: Scaled variation maps of the umbral regions at 1600 \AA. The small white rectangles show the UF sources.}
\label{1}
\end{figure}
Figure~\ref{1} presents a series of sunspot images and their variation maps during the emergence of separate bright UFs obtained by SDO/AIA at 1600 \AA ~and 171 \AA. The observational time was about three hours on each of the four days of observation. The number of images obtained in one day was 450 frames at a 24-s temporal resolution. Similar images were also obtained in the 304 \AA ~and 171 \AA ~channels, where the temporal resolution was 12 s and the number of frames was 900. This observational material is adequate to compile reliable statistics both on the number of UFs and on their location within the umbral area. The umbral regions are shown by the dashed squares. To increase the visibility of the weak umbral brightening sources, we used a logarithmic scale. This enabled us to record the weak propagation of the umbral background wavefronts and to study its association with the UF emergence. This procedure was applied to all the studied SDO/AIA temperature channels. It allowed us to obtain time cubes of images and to produce movies presenting the dynamics of the umbral emission intensity.
\begin{figure}
\begin{center}
\includegraphics[width=9.0 cm]{Fig2.eps}
\end{center}
\caption{Variation maps of the umbral UV emission in the different SDO/AIA temperature channels (1600 \AA, 304 \AA, and 171 \AA) obtained during the 00:00-03:20 UT observation of NOAA 11131 on December 08, 2010. Squares with numerals indicate the positions of the observed UF sources. The arrows show the scanning directions used when obtaining the time-distance plots. The dashed circle outlines schematically the umbral boundary. The variation intensity is colour-coded on a logarithmic scale.}
\label{2}
\end{figure}
Frame-by-frame inspection of the movies obtained at the various ultraviolet wavelengths showed the presence of two dynamic components in a sunspot. The first is related to the continuous propagation of the background three-minute oscillations in the umbra and of longer-period oscillations in the penumbra. This component is visible to the naked eye in the form of wavefronts propagating towards the penumbra from a pulsing source located in the sunspot centre. This source agrees well with the centre of the spiral wavefront propagation described previously in \cite{2014A&A...569A..72S} for the December 08, 2010 event. The other component is related to short-time brightenings of separate parts of the propagating fronts and to the emergence of details of small angular size as UF sources.
The variation maps at 1600 \AA ~(Fig.~\ref{1}, bottom panels) show that the UF sources appear as local brightenings with different locations, intensities, and shapes in the umbral periphery. There are both bright point sources and extended sources with different spatial orientations. Some are localized near the light bridge, for example on January 10, 2016. This type of intensity variation was described in \cite{2014ApJ...792...41Y}. Inspection of the movies showed that the fast UF brightening processes mainly appear at the same umbral site. They manifest themselves both as individual pulses and as series of modulated pulsations.
When we compare the spatial locations of the bright variation points inside the umbra at 1600 \AA ~and 171 \AA, we see that the UF sources coincide well with the footpoints of the coronal loops anchored in the sunspot umbra (Fig.~\ref{1}, middle panels). The variation maps at coronal heights mainly show elongated details, which can be interpreted as magnetic loops along which waves propagate from the bottom layers of the sunspot atmosphere to the corona. The maxima of the wave variation are distributed along the loops as bright elongated details. The main behaviour of the oscillation sources at separate periods is determined by the cut-off frequency.
The UF source visibility varies with the height of the ultraviolet emission generation. Some of the flashes are observed at all heights, whereas the others manifest themselves only lower, at the photospheric level. The angular size of the UF sources varies from flash to flash, revealing itself as a point or as an extended source.
\begin{figure}
\begin{center}
\includegraphics[width=9.0 cm]{Fig3.eps}
\end{center}
\caption{Snapshots of the narrowband maps with 3-min periodicity of the umbral region of NOAA 11131 on December 08, 2010. The left panel shows the localization of the stable source of the local UFs at 1600 \AA ~(00:22:17 UT). The right panel shows the positions of the bright sources at 304 \AA ~(00:22:32 UT), which ride the expanding 3-min spiral wave fronts as background UFs. The dashed circle outlines the umbral boundary. The arrows show the positions of the UF sources.}
\label{3}
\end{figure}
Figure~\ref{2} shows the variation maps obtained at the 1600 \AA, 304 \AA, and 171 \AA ~wavelengths on December 08, 2010. One can see that the brightness variation distribution in the umbra has an inhomogeneous structure, whose value depends on the SDO/AIA recording channel. Below, at the upper photosphere level (1600 \AA), there is a well-defined umbra, indicated by the dashed circle, with a lower level of emission variation. Against this background, sources with both point-like and extended shapes stand out.
We found eight UF sources within the umbral boundary. The source size varies from 2 to 8 \arcsec. These sources are mainly located on the periphery, near the umbral boundary. When moving upwards to the transition region level (304 \AA), we observe the disappearance of the point UF sources (No. 1-4) and an increase in the brightness of the extended UF sources (No. 5-8). The emission variation increases and, accordingly, the umbral brightness increases owing to the enhancement of the background three-minute oscillations. Higher, in the corona (171 \AA), along with the UF sources visible below, extended details appear that spatially coincide with the magnetic loops. The propagation of the background three-minute waves along these loops contributes mainly to the increase of the emission variation.
For short-time processes of the UF type, the maximal brightness is reached low in the atmosphere, at the photospheric level (1600 \AA). For the three-minute background component, a comparison of the emission variations in the different SDO/AIA temperature channels shows that the maximal value is reached at the transition region level (304 \AA).
The obtained variation maps include the signal variance of both the periodic and non-periodic components. To isolate only the periodic signal, we constructed a series of narrowband maps with 3-min periodicity in space and time using the PWF technique. Figure \ref{3} shows the obtained snapshots of the narrowband oscillation maps (positive half-periods) in the SDO/AIA temperature channels at 1600 \AA ~(00:22:17 UT) and 304 \AA ~(00:22:45 UT). These times correspond to the appearance of the maximum brightness in UF source N5. We see that at 1600 \AA ~there is only one bright local UF source associated with periodic oscillations in a limited spatial area. Its position remains almost unchanged in time. In the transition region (304 \AA), we see the wave fronts as an evolving spiral with a pulsing source in the centre of the umbra. Similar dynamics of the wave fronts was discussed in \cite{2014A&A...569A..72S}. Contours highlight the details of the fronts whose brightness exceeds 50\% of the maximum value in time. As the waves propagate from the umbral centre to its boundary, these details continuously appear and disappear, producing short-term brightenings of separate parts of the fronts as background UFs. On the variation maps, these changes are connected with the background brightening.
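As a simplified stand-in for the PWF narrowband maps (the actual technique uses wavelet filtering, which preserves the time localization of the trains), a Fourier band-pass of each pixel's light curve around the 3-min period can be sketched as follows:
\begin{verbatim}
# Sketch: narrowband (3-min) cube via an FFT band-pass per pixel.
import numpy as np

def narrowband_cube(cube, dt, pmin=150.0, pmax=210.0):
    nt = cube.shape[0]
    freq = np.fft.rfftfreq(nt, d=dt)
    spec = np.fft.rfft(cube - cube.mean(axis=0), axis=0)
    band = (freq >= 1.0 / pmax) & (freq <= 1.0 / pmin)
    spec[~band] = 0.0            # keep only the 150-210 s band
    return np.fft.irfft(spec, n=nt, axis=0)

# nb = narrowband_cube(cube_304, dt=12.0)  # snapshots as in Fig. 3
\end{verbatim}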
To understand how the UF sources are related to the umbral magnetic structures, we compared their spatial positions with the coronal loops seen in the UV emission (SDO/AIA, 171 \AA) and with the magnetic field structure of this active region described previously in \cite{2012ApJ...756...35R}. Because the sunspot considered is the leading one in the group, the magnetic field configuration shows a well-defined east-west asymmetry. The magnetic field lines anchored in the eastern part of the sunspot are much lower and more compact than the field lines anchored in the western part.
When considering the UF source positions (Fig.~\ref{2}, 1600 \AA), we notice that the detected point UF sources (numbered 1-4) are localized in the western part of the umbra, near the footpoints of large magnetic loops. The more extended sources (numbered 5-8) are related to the eastern part and are located near the footpoints of the compact loops connecting the sunspot with its tail part. The size of the extended UF sources is about 7-10 \arcsec, and that of the point UFs is about 2.5 \arcsec.
\begin{figure}
\begin{center}
\includegraphics[width=9.0 cm]{Fig4.eps}
\end{center}
\caption{Time-distance plots along the N5 UF source obtained by SDO/AIA in the 1600 \AA ~(left panel) and 304 \AA ~(right panel) temperature channels on December 08, 2010. The periodic brightness changes are the 3-min oscillation wavefronts. The arrows show the UFs. The horizontal dashed lines indicate the umbra/penumbra border. The 1D spatial coordinates are in arcsec, the time is in UT.}
\label{4}
\end{figure}
\subsection{Time dynamics of UFs on December 08, 2010}
A more comprehensive analysis of the time dynamics of the wave processes was performed for the sunspot of the active region NOAA 11131 on December 08, 2010. The wave processes inside this umbra were intensively studied by \cite{2012A&A...539A..23S, 2014A&A...569A..72S, 2014A&A...561A..19Y, 2014AstL...40..576Z}.
The detected compact sources of maximal variation in Fig.~\ref{2} were studied to reveal the existence of flash and/or oscillation activity. For this purpose, we scanned each of the sources at 1600 \AA ~and 304 \AA ~and built time-distance plots. The arrows show the scan directions of the UF sources.
\begin{figure}
\begin{center}
\includegraphics[width=9.0 cm]{Fig5.eps}
\end{center}
\caption{Time dynamics of the UV emission for the N2 and N6 sources at 1600 \AA. The arrows show the maximum UF emission. Time is in UT.}
\label{5}
\end{figure}
Figure~\ref{4} presents an example of the time-distance plots obtained at 1600 \AA ~(left panel) and 304 \AA ~(right panel) for the extended source N5. Throughout the entire observational time, there are broad three-minute background brightness variations in the umbra that smoothly transition into five-minute oscillations at the umbra-penumbra boundary, shown by the dashed line. This type of partial brightening of wave fronts propagating in the umbra as UFs was described in \cite{2014ApJ...792...41Y}. These UFs are exhibited most clearly at the transition region level at 304 \AA ~(Fig.~\ref{4}, right panel). These oscillations also exist lower, at the upper photosphere level (1600 \AA). Against their background, we note a series of periodically recurring local UFs of various power. The arrows in Fig.~\ref{4} (left panel) indicate separate pulses. The spatial position of the flashes coincides with the maximal brightness of the extended source N5. The fine spatio-temporal structure of the UF sources also coincides with the brightenings of the background three-minute oscillation wavefronts.
When comparing the flash peak values at the bottom and the top of the sunspot atmosphere, we note that the UFs have a shorter duration at the photospheric level than at the transition region level. A low-frequency modulation of the three-minute oscillations occurs. The brightness change at 304 \AA ~occurs smoothly, without well-defined peaks. During the flashes, brightenings of the 3-min wavefronts occur in the source. The brightness contrast decreases as the height of the UF observation increases. One may assume that the UFs and the background three-minute oscillations have an identical nature, namely an increase of wave activity within the magnetic loops, where the propagation occurs on different temporal and spatial scales.
To compare the time profiles of the brightness variation of the different UF sources at one wavelength, we used cuts along the spatial coordinates of maximal brightness on the time-distance plots (Fig.~\ref{4}). The profiles for each UF source were obtained. Figure~\ref{5} shows an example of the brightness changes for the N2 and N6 sources at the upper photosphere level (1600 \AA), where the UF visibility is maximal.
One can see that, along with the well-defined three-minute oscillations (Fig.~\ref{5}, left panel), there are also pulse events in the form of UFs. Their number and duration depend on the flash source. Thus, for the sources numbered 1 through 4, we observed only individual flashes during the three-hour interval of observations. At the same time, on the profiles of sources 5-8, we note series of flashes with different amplitudes and durations (Fig.~\ref{5}, right panel).
Comparing the shapes of the revealed sources in Fig.~\ref{2} with the corresponding profiles in Fig.~\ref{5} showed that the point sources typically produce rare individual UFs. The extended UF sources are related to series of periodically recurring pulses of different amplitude, about 4-14 flashes during the observations. Comparing the peak amplitudes of the various UF sources revealed that the brightness change in the point sources is almost five times smaller than that in the extended sources.
\begin{figure}
\begin{center}
\includegraphics[width=9.0 cm]{Fig6.eps}
\end{center}
\caption{Time dynamics of the N5 UF source in various SDO/AIA channels: 1600 \AA ~(left panel) and 171 \AA ~(right panel). The blue lines show the brightness changes recorded during the flashes. The red lines show the time profiles of the filtered 3-min oscillations. The numerals denote the oscillation train numbers. Bottom panels: Fourier spectra of the UF signals for the corresponding SDO/AIA channels.}
\label{6}
\end{figure}
\subsubsection{Relation between wave dynamics and UFs}
Based on the 1D time-distance plots obtained for each source (Fig.~\ref{4}), in which the relation between the oscillating 3-min component and the UF emergence is well traced, we performed a spectral analysis of the time profiles by using the fast Fourier transform (FFT) and the PWF technique. We applied the Fourier transform to provide good spectral resolution, and the PWF technique to obtain the spatio-temporal structure of the wavefronts propagating in the UF sources.
Figure~\ref{6} shows an example of the oscillations detected in the extended source N5 over the 00:10-00:50 UT observational period, when UFs emerged. We can see the profiles with sharp UFs at 1600 \AA. At the coronal level, at 171 \AA, there are stable 3-min oscillations without spikes. This served as the main criterion for studying the spectral behaviour of the filtered 3-min oscillations at 171 \AA ~and for their comparison with the original signal at 1600 \AA. In this case, the spectral power is not distorted by sharp jumps in the signals.
One can see that at the upper photosphere level (Fig.~\ref{6}a, 1600 \AA, blue lines) there are periodic brightness changes in the UV emission. These changes take the shape of a UF series, in which the UFs are exhibited as a sequence of low-frequency trains of higher frequency oscillations. Those higher frequency oscillations are particularly pronounced in the higher coronal layers of the sunspot atmosphere at 171 \AA ~(Fig.~\ref{6}b). The Fourier spectrum showed the existence of significant harmonics related to the $\sim$3-5-min periodicity and to the $\sim$13-min low-frequency oscillations (Fig.~\ref{6}c,d).
To trace the time dynamics of the detected periodicity, we performed a wavelet filtration of the series in the period band near three minutes. We found four trains of high-frequency oscillations, numbered in Fig.~\ref{6}a. If one compares the behaviour of the filtered three-minute signal (red lines) and the UF emergence (blue lines), it is apparent that the train maxima coincide with the UF brightness maxima. The complex UF time profile (in the form of a series of variable peaks) is related to the existence of oscillations with different amplitudes, phases, and lifetimes in the trains.
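The wavelet filtration in the 3-min band can be sketched by a convolution with a complex Morlet wavelet at a single scale (the full analysis scans a band of periods; the scale-period relation below follows the standard Torrence-Compo convention):
\begin{verbatim}
# Sketch: Morlet filtering of a 1D profile in the 3-min band.
import numpy as np

def morlet_filter(signal, dt, period=180.0, omega0=6.0):
    n = len(signal)
    t = (np.arange(n) - n // 2) * dt
    # scale corresponding to the chosen period
    scale = period * (omega0 + np.sqrt(2.0 + omega0**2)) / (4.0 * np.pi)
    wavelet = np.exp(1j * omega0 * t / scale) \
              * np.exp(-0.5 * (t / scale) ** 2) / np.sqrt(scale)
    filtered = np.convolve(signal - signal.mean(), wavelet, mode='same')
    return filtered.real   # filtered 3-min component (red lines in Fig. 6)
\end{verbatim}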
When comparing the oscillations in the UFs (Fig.~\ref{6}), one can see that the low-frequency trains are well visible in the lower atmosphere, while their power decreases in the upper atmosphere. This is well traced in the Fourier spectra of the signals at the different height levels (Fig.~\ref{6}c,d). We note an inverse dependence between the harmonic powers. At the upper photosphere level, the low-frequency modulation is maximal while the 3-min harmonic is low. In contrast, in the corona, there is a pronounced peak of the 3-min oscillations with a minimal power of the $\sim$13-min component.
The increase of the oscillations in the source leads to the formation of compact brightenings in the form of UFs on the time-distance plot (Fig.~\ref{4}, left panel). As the low-frequency oscillation power decreases, a smooth increase of the high-frequency three-minute component occurs at the coronal level in the form of brightenings of separate details of the wavefront (Fig.~\ref{4}, right panel). The mean UF duration for the extended sources was $\sim$3.7 min. This value is close to one period of the three-minute oscillations at maximal power.
To test the obtained association between the UFs and the oscillations, we calculated the correlation coefficients between the original signal and the three-minute filtered signal in the various SDO/AIA channels. There is a direct correlation between the three-minute oscillation power and the UF power. The correlation coefficient is maximal at 1600 \AA ~and varies within the 0.65-0.85 range for the different flash sources.
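The correlation test itself reduces to a single coefficient per source; a sketch (the signal names are placeholders):
\begin{verbatim}
# Sketch: correlation between the original 1600 A signal and the
# filtered 3-min component; values of 0.65-0.85 were found above.
import numpy as np

def correlation(original, filtered):
    return np.corrcoef(original, filtered)[0, 1]

# r = correlation(signal_1600, morlet_filter(signal_1600, dt=24.0))
\end{verbatim}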
One may assume that the obtained association between the increase of the three-minute oscillations and the UF emergence is characteristic not only of the N5 source but of all the detected sources. To test this statement, we calculated the narrowband three-minute oscillation power variations in the N7 and N8 sources above, at the coronal level (171 \AA), and compared these variations with the UF emergence in the integral signal below, at the photospheric level (1600 \AA). The observational interval was 00:00-03:20 UT.
\begin{figure}
\begin{center}
\includegraphics[width=9.0 cm]{Fig7.eps}
\end{center}
\caption{Amplitude variations of the N7 and N8 extended UF sources in the 1600 \AA ~and 171 \AA ~temperature channels. Blue lines show the profiles of the original signal at 1600 \AA. Red lines show the 3-min oscillation power at 171 \AA.}
\label{7}
\end{figure}
Figure~\ref{7} shows the time profiles of the signals in the N7 and N8 extended sources and the corresponding variation of the oscillation power in the corona. In the sources at the upper photosphere level (blue lines, 1600 \AA), there are recurrent UFs of different amplitude. As in the case of the N5 source, the bulk of the UF peaks are accompanied by an increase of the low-frequency trains of three-minute oscillations at the coronal level (red lines, 171 \AA). There is a well-defined correlation between the signals. Thus, over 01:20-03:20 UT, the emergence of ``step-like'' signals at the photospheric level, with their gradual steepening and the emergence of UF pulses, is followed by a smoothly varying increase in the power of the three-minute oscillation trains in the corona.
\begin{figure}
\begin{center}
\includegraphics[width=9.0 cm]{Fig8.eps}
\end{center}
\caption{Amplitude variations of the point UF sources N1 and N3. Green lines show the original signal at 1600 \AA; blue lines present the signal at 171 \AA. Red lines show the mean power of the 3-min oscillations.}
\label{8}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[width=14.0 cm]{Fig9.eps}
\end{center}
\caption{Snapshots of the spatial distribution of the travelling wave fronts during the UF in the N5 extended source. The duration of the 3-min wave propagation along the magnetic waveguide is about one period. The observational wavelength is 1600 \AA. Continuous lines represent the positive half-period of the propagating waves, and dashed lines outline the negative half-period. The background image is the brightness distribution of the source at the time of the flash maximum. The minimum of the negative half-period is indicated in green. The time resolution is 24 s.}
\label{9}
\end{figure*}
For the N1-N4 point sources, only single pulses of low intensity were observed. For these sources, we compared the variations of the mean level of the coronal three-minute oscillation power with the moments of emergence of the single UF bursts at the photospheric level. Figure~\ref{8} shows the original signal profiles at the different height levels (green lines for 1600 \AA, blue lines for 171 \AA) with the superposed mean power of the three-minute oscillations (red lines, 171 \AA). The moments of the short flash emergence below the sunspot coincide with the three-minute oscillation power maxima above. We note a sequence in the signal evolution similar to that of the extended sources. The difference is in the duration of the flashes. Thus, for the N1 source (02:36:15 UT) the UF duration was $\sim$1.5 min, for N2 (03:07:30 UT) about 1.1 min, for N3 (01:01:30 UT) about 1.0 min, and for N4 (03:12:00 UT) about 1.1 min. The mean UF duration for the point sources was $\sim$1.2 min.
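A duration estimate for a single UF pulse can be sketched as the full width of the light curve above half of the background-subtracted peak (the median background removal is our simplification):
\begin{verbatim}
# Sketch: UF pulse duration as the full width at half maximum.
import numpy as np

def pulse_duration(signal, dt):
    s = signal - np.median(signal)           # crude background removal
    above = np.flatnonzero(s > 0.5 * s.max())
    return (above[-1] - above[0] + 1) * dt   # seconds
\end{verbatim}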
\subsubsection{Wave propagation in UF sources}
To study the narrowband wave propagation over the UF source area, we used the PWF technique. Figure~\ref{9} shows the time sequence of the UV emission wavefront images (SDO/AIA, 1600 \AA) obtained for the N5 source during the second train of the three-minute oscillations (00:18:00-00:20:48 UT). The temporal resolution was 24 s. The positive half-period of the oscillations is shown by the continuous contours, the negative one is outlined by the dashed contours. The background is the source image at the instant of the UF maximum at 00:20 UT.
Comparing the obtained images (Fig.~\ref{9}) with the profile of the maximal UF brightness variation (Fig.~\ref{6}a), we can clearly see that the brightness increase is accompanied by the onset of wave propagation along a direction coinciding in shape with the source of the maximal UF emission. These motion directions towards the penumbra, in turn, coincide with the footpoint of the magnetic loop along which the waves propagate. The fronts repeatedly emerge at the same site of the limited umbral space. The beginning of the N5 extended source coincides spatially with the pulsing centre of the spirally expanding three-minute waves. One may assume that the wave source coincides with the footpoint of the magnetic bundle that diverges in the upper atmosphere. The separate spiral arms rotate anti-clockwise. These background waves were studied in \cite{2014A&A...569A..72S} for this active region.
Presumably, the propagation of the spiral-shaped waves (Fig.~\ref{3}, 304 \AA) initiates the wave increase in separate magnetic loops. In this case, the bulk of the bright fronts propagates towards the bright extended UF sources. The projected wave propagation velocities along the waveguide lie within the 20-30 km/s range. These values agree with the propagation velocity of slow magneto-acoustic waves in a sunspot.
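The projected speed follows from the front slope on the time-distance plot; as a rough numerical check (using 1 \arcsec $\approx$ 725 km at disc centre and the 24-s cadence of the 1600 \AA ~channel):
\begin{verbatim}
# Sketch: projected front speed from a time-distance plot slope.
ARCSEC_KM = 725.0                 # ~km per arcsec at disc centre

def front_speed(d_arcsec, n_frames, cadence=24.0):
    return d_arcsec * ARCSEC_KM / (n_frames * cadence)

# A front crossing 3" in 4 frames: front_speed(3.0, 4) ~ 22.7 km/s,
# within the 20-30 km/s slow magneto-acoustic range quoted above.
\end{verbatim}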
For the different numbered low-frequency trains of the UFs in the N5 source (Fig.~\ref{6}a, 1600 \AA), the maximal brightness was located in various parts of the magnetic waveguide and varied with time. Each UF series of $\sim$10-13 min duration was accompanied by an increase of the low-frequency trains of the 3-min waves. There are differences between the wave trains. One observes UFs when both propagating and standing waves are visible throughout one train. The wave velocity can vary from train to train. Mainly, the waves move from the sunspot centre towards its boundary.
The increase of the wave processes in the point UF sources occurs in the form of single pulses at umbral sites limited to several pixels. The emergence of so-called standing waves, without apparent propagation, is characteristic of these sources. Overall, the 2D time dynamics of the three-minute oscillation sources agrees with the dynamics of the UF sources.
\section{Discussion}
The results obtained from the SDO/AIA data showed that the investigated UF phenomenon is characteristic of all heights within the sunspot atmosphere. We see a response both below, at the photospheric level, and above, at the coronal level. This means that the flashes represent a global process of energy release that encompasses all the layers of a sunspot umbra.
Usually, the umbra is considered a relatively quiet region compared with the penumbra. This is because the umbral magnetic field represents a vertical bundle of field lines diverging with height, and the field line inclination is minimal. Correspondingly, the magnetic reconnection responsible for flash-like energy release is unlikely in a homogeneous vertical field. This indicates that other mechanisms must be responsible for the emission increase during UFs.
A wave mechanism is an alternative explanation of this increase. It is based on the assumption that the observed brightenings in the form of UFs are short-time increases of the power of wave processes in separate umbral parts. This viewpoint has become common because the well-known three-minute umbral oscillations were revealed to propagate non-uniformly both over the sunspot area and in time \citep{2012A&A...539A..23S, 2014A&A...569A..72S}. The waves are mainly modulated by a low-frequency component in the form of $\sim$13-15 min trains, and their power is variable in time. The direction of the wave motion is determined by the spatial structure of the umbra-anchored magnetic field lines, along which slow magneto-acoustic waves propagate.
There are instances when a significant increase of the power of the three-minute oscillation trains occurs at separate footpoints of magnetic loops. These processes have an irregular character, and it is impossible to predict where the next wave enhancement will occur. On the other hand, the magnetic loop footpoints are stable over the umbral area during a certain time period. This enables us to assume that the positions of the UF sources are directly related to the magnetic loop footpoints at which short-time increases of the three-minute waves are observed.
These assumptions agree well with the spatial localization of the UF sources at the umbral boundary (Fig.~\ref{1}), as well as with the difference in their shapes, i.e. extended and point-like. The UF sources maintain their spatial stability for about three hours, producing UF series. On the other hand, \cite{2003A&A...403..277R} noted that some flashes are unstable both in space and in time.
In \cite{2014ApJ...792...41Y}, the authors showed that the UFs visible on time-distance plots occur at random locations without a well-established occurrence rate. It has been established that the appearance of new UF sources is associated with three-minute oscillation trains of much larger amplitude in the sunspot umbra. The individual UFs ride the wave fronts of the umbral oscillations. A possible explanation is the presence in the umbra of background oscillations in the form of expanding 3-min wave fronts and their mutual interaction. A similar type of brightening was considered in \cite{2014A&A...569A..72S}. These authors noted that individual parts of the wave fronts, shaped as rings or spirals, propagating along magnetic loops with different spatial configurations and interacting with each other, can lead to the appearance of diffuse brightenings with spatial instability. Such short-lived background UFs are well visible on the time-distance diagrams, appear constantly in the umbra, and do not have stable shapes and localizations in space (Fig.~\ref{3}, 304 \AA). Basically, the pulsing source of these wave fronts is located in the centre of the umbra and is possibly associated with the footpoint of the magnetic bundle whose loops expand with height.
In the case of the background UFs, we observed local traces of waves that propagate along loops with different inclinations relative to the solar normal and, correspondingly, different cut-off frequencies. This forms brightenings of the wave tracks, which we observed as diffuse UFs during increases of the oscillation power in selected areas of the umbra. The same effect can also be obtained through interactions between wave fronts. With height, the visibility and spatial positions of these sources shift in the radial direction because of the upward wave propagation.
For the local UFs discussed in our work, the sources have a small angular size with a periodic 3-min component and a stable location, both in space and in height (Fig.~\ref{3}, 1600 \AA). Their appearance is associated with the maximum power of the waves propagating near the footpoints of coronal loops outside the main magnetic bundle. These loops originate at the umbral periphery, and their inclination can differ from the configuration of the main magnetic bundle.
The existence of a fine structure of UFs was previously suggested in \cite{2000Sci...288.1396S} and \cite{2005ApJ...635..670C} on the basis of spectroscopic observations. The improved angular resolution of astronomical instruments has enabled such changes in UF sources to be observed directly. Thus, \cite{2009ApJ...696.1683S} used HINODE data (CaII H line) to find an umbral fine structure in the form of filamentary structures that emerged during UFs. These details were present during an oscillation increase and formed a system of extended filaments, along which the brightness varied with time in the form of travelling bright and dark details. The calculated horizontal velocity varied within 30-100 km/s.
We can assume that in the UF sources we observe the projected motions (at the footpoints of magnetic field lines) of the three-minute longitudinal wavefronts propagating upwards \citep{2003ApJ...599..626B}. Depending on where the field line originates (at the umbral centre or near its boundary) and on its inclination to the solar normal, there is a projection effect in the wave visibility. Near the sunspot boundary, one observes extended UF sources, whereas closer to the centre, point sources are observable. This statement holds if we assume a symmetry of the diverging field lines relative to the central part of the umbra. In reality, there is often an east-west asymmetry of the active group, related to the presence of the head (sunspot) and tail (floccule) parts.
The wave path length and, accordingly, the wave visibility at certain oscillation periods are determined by the cut-off frequency \citep{1977A&A....55..239B, 1984A&A...133..333Z}, which varies as the cosine of the inclination angle of the magnetic waveguide. Point UF sources with a minimal angular size are related to the footpoints of vertical magnetic field lines. Large extended UF sources are related to the footpoints of field lines with a large inclination to the solar normal.
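As a rough numerical illustration (the reference photospheric cut-off value below is the commonly quoted one, and the cosine law assumes a simple isothermal model),
\[
\nu_{c}(\theta)=\nu_{c0}\cos\theta,\qquad \nu_{c0}\simeq 5.2~\mbox{mHz}\;(P_{c0}\simeq 190~\mbox{s}),
\]
so an inclination of $\theta\simeq 50^{\circ}$ lowers the cut-off to $\simeq 3.3$ mHz ($P\simeq 300$ s): five-minute waves can escape along strongly inclined field lines, whereas the nearly vertical umbral field transmits only the three-minute band.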
Comparing the positions of the sources in NOAA 11131 at various heights showed a good correspondence between the UF sources below (1600 \AA) and the footpoints of the coronal loops (171 \AA), which play the role of magnetic waveguides for the three-minute waves. For the low-lying loops in the eastern part of NOAA 11131, which connect the sunspot with the tail part, we see extended UF sources at the footpoints; for the western part, we see point sources.
The revealed interconnection between the UF emergence and the increase of the three-minute wave trains indicates that we can consider UFs as events in which peak increases of the oscillation trains at the footpoints of magnetic loops manifest themselves. There is a direct dependence between the oscillation power and the flash brightness at maximal correlation: the higher the amplitude of the three-minute waves, the more powerful the flash. This dependence concerns both the extended and the point UF sources. The UF emission maximum coincides with the maximum of the three-minute oscillations within one wave train.
The 2D spectral PWF analysis of the SDO/AIA image cube directly showed (Fig.~\ref{9}) that, during UFs, three-minute wave motions emerge in the UF sources along the detected magnetic loops towards the umbral boundary. The wave propagation starts at the footpoint and terminates at the level where the loop inclination corresponds to a cut-off frequency outside the observed three-minute band. The greater the loop inclination, the larger the projection of the UF source towards the observer, and the more wave trains (UF pulses) we can record; correspondingly, we observe extended, bright UF sources. In contrast to the propagating waves, so-called standing waves are observed in the point sources. This is explained by the projection of the waves propagating along vertical magnetic loops towards the observer: the spatial wavefronts are then observed within a spatially limited cut of the loop, and those fronts form UF sources of small angular size.
The UF source lifetimes also differ. For point UFs, the source lifetime is about 1-2 min; for extended UFs, it is 3-15 min. The visibility of the point sources is restricted by the low integral power of their UF emission and by the short duration of the maximal oscillations (1-2 half-periods). For the extended UF sources, we can observe a few travelling wave trains simultaneously, which intensifies their integral brightness and increases the observational time (lifetime).
\section{Conclusions}
We analysed the association between the increase of wave activity in sunspot active regions and the emergence of UFs. We used observations of the UV emission obtained in various temperature channels of SDO/AIA with high spatial and temporal resolution. To detect the oscillations, we used time-distance plots and the Fourier and wavelet spectral techniques. The results are as follows:
1) We revealed fast periodic disturbances related to the wave activity in the sunspot umbra during a three-hour observation. These disturbances correlate well with the continuous diffuse brightening of separate details of the propagating three-minute wavefronts described in \cite{2014ApJ...792...41Y}. Along with this, short-time emergences of small local sources having a periodic component and identified as UFs are observed.
2) The observed umbral brightenings can be divided into two types. The first type comprises background UFs associated with the random brightening of separate parts of wave fronts during their propagation. These UFs are observed all the time in the umbra as weak diffuse details that ride the wave fronts, without stable shapes and localization in space. The second type comprises local UFs associated with the increase of wave activity near the footpoints of magnetic loops. These sources do not change their spatial position in time and show pronounced wave dynamics during UFs.
3) For the local UFs, we revealed various types of spatial shapes of the sources. We suppose that the point sources are located at the footpoints of large magnetic loops; they are characterized by rare single pulses of low power and short duration. The extended sources are related to the footpoints of low-lying magnetic loops with large inclinations; they are characterized by series of recurrent UF pulses related to propagating trains of three-minute waves. The flash power depends on the length of the wave path along which the emission is integrated. The wave path and, correspondingly, the projected size of the UF source are determined by the cut-off frequency.
4) The emergence of the main UF maximum is shown to coincide with the maximal power of the three-minute oscillation trains in separate loops. This type of wave dynamics follows that described in \cite{2014ApJ...792...41Y} for background UFs, but is localized in magnetic loops. There is a correlation between the UF emergence at the photospheric level and the increase of the power of the three-minute wave trains in the corona.
These results explicitly show the correlation between the sunspot three-minute oscillation processes and the UF emergence. These processes are a reflection of the propagation of slow magneto-acoustic waves from the subphotospheric level into the corona along the inclined magnetic fields. The dynamics of the wave power in separate magnetic loops determines the time and site of the UF source emergence. The main mechanism responsible for the observed UF parameters is the wave cut-off frequency. In the future, we plan to study in more detail the relationship between the shape of the local UF sources and the inclination of the magnetic loops near whose footpoints the flashes are observed.
\begin{acknowledgements}
We are grateful to the referee for helpful and constructive comments and suggestions. The authors are grateful to the SDO/AIA teams for operating the instruments and performing the basic data reduction, and especially for the open data policy. This work is partially supported by the Ministry of Education and Science of the Russian Federation, by the Siberian Branch of the Russian Academy of Sciences (Project II.16.3.2), and by the programme of basic research of the RAS Presidium No. 28. The work is carried out as part of Goszadanie 2018, project No. 007-00163-18-00 of 12.01.2018, and is supported by the Russian Foundation for Basic Research (RFBR), grants Nos. 14-02-91157 and 17-52-80064 BRICS-a. The research was funded by the Chinese Academy of Sciences President's International Fellowship Initiative, grant No. 2015VMA014.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
A process of $e^+e^-$ pair production by a high-energy electron in an atomic field is interesting both from the experimental and the theoretical points of view. It is important to know the cross section of this process with high accuracy for data analysis in detectors. Besides, this process gives a substantial contribution to the background in precision experiments devoted to the search for new physics. From the theoretical point of view, the cross section of electroproduction in the field of heavy atoms reveals very interesting properties of the Coulomb corrections, which are the difference between the cross section exact in the parameters of the field and that calculated in the lowest order of perturbation theory (the Born approximation).
The cross sections in the Born approximation, both differential and integrated, have been discussed in numerous papers
\cite{Bhabha2,Racah37,BKW54,MUT56,Johnson65,Brodsky66,BjCh67,Henry67,Homma74}. The Coulomb corrections to the differential cross section of high-energy electroproduction by an ultra-relativistic electron in the atomic field have been obtained only recently in our paper \cite{KM2016}. In that paper it is shown that the Coulomb corrections significantly modify the differential cross section of the process as compared with the Born result. It turns out that both effects, the exact account for the interaction of the incoming and outgoing electrons with the atomic field and the exact account for the interaction of the produced pair with the atomic field, are very important for the value of the differential cross section. On the other hand, there are many papers devoted to the calculation of $e^+e^-$ electroproduction by heavy particles (muons or nuclei) in an atomic field \cite{Nikishov82,IKSS1998,SW98,McL98,Gre99,LM2000}. In those papers, the interaction of the heavy particle with the atomic field has been neglected. In our recent paper \cite{KM2017} it has been shown that the cross section differential over the angles of the outgoing heavy particle changes significantly due to the exact account for the interaction of the heavy particle with the atomic field. However, the cross section integrated over these angles is not affected by this interaction. Such unusual properties of the cross section of electroproduction by a heavy particle stimulated us to perform a detailed investigation of the integrated cross section of electroproduction by an ultra-relativistic electron.
In the present paper we investigate in detail the integrated cross section, using the analytical result for the matrix element of the process obtained in our paper \cite{KM2016} with the exact account for the interaction of all charged particles with the atomic field. Our goal is to understand the relative importance of various contributions to the integrated cross section under consideration.
\section{General discussion}\label{general}
\begin{figure}[h]
\centering
\includegraphics[width=1.\linewidth]{diagrams.eps}
\caption{Diagrams $T$ (left) and $\widetilde{T}$ (right) for the contributions to the amplitude ${\cal T}$ of the process $e^-Z\to e^- e^+e^-Z$. Wavy line denotes the photon propagator, straight lines denote the wave functions in the atomic field.}
\label{fig:diagrams}
\end{figure}
The differential cross section of high-energy electroproduction by an unpolarized electron in the atomic field reads
\begin{equation}\label{eq:cs}
d\sigma=\frac{\alpha^2}{(2\pi)^8}\,d\varepsilon_3d\varepsilon_4\,d\bm p_{2\perp}\,d\bm p_{3\perp}d\bm p_{4\perp}\,\frac{1}{2}\sum_{\mu_i=\pm1}|{\cal T}_{\mu_1\mu_2\mu_3\mu_4}|^{2}\,,
\end{equation}
where $\bm p_1$ is the momentum of the initial electron, $\bm p_2$ and $\bm p_3$ are the final electron momenta, $\bm p_4$ is the positron momentum, $\mu_i=\pm 1$ corresponds to the helicity of the particle with the momentum $\bm p_i$, $\bar\mu_i=-\mu_i$, $\varepsilon_{1}=\varepsilon_{2}+\varepsilon_{3}+\varepsilon_{4}$ is the energy of the incoming electron, $\varepsilon_{i}=\sqrt{{p}_{i}^2+m^2}$, $m$ is the electron mass, and $\alpha$ is the fine-structure constant, $\hbar=c=1$. In Eq.~\eqref{eq:cs} the notation $\bm X_\perp=\bm X-(\bm X\cdot\bm \nu)\bm\nu$ for any vector $\bm X$ is used, $\bm\nu=\bm p_1/p_1$.
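For orientation, Eq.~\eqref{eq:cs} can be integrated numerically by Monte Carlo sampling of the final-state variables. The sketch below assumes a placeholder \texttt{t\_squared} for the helicity-summed $|{\cal T}|^{2}$ (whose explicit form is given in the Appendix); the uniform sampling box for the transverse momenta is also our simplification.
\begin{verbatim}
# Schematic Monte Carlo integration of Eq. (1); t_squared is a stub
# for the helicity-summed |T|^2 built from the Appendix formulas.
import numpy as np

ALPHA = 1.0 / 137.035999

def sigma_mc(t_squared, eps1, m=1.0, pt_max=10.0, n=10**6, seed=0):
    rng = np.random.default_rng(seed)
    eps3, eps4 = rng.uniform(m, eps1, (2, n))
    keep = eps1 - eps3 - eps4 > m        # energy conservation for eps2
    p2, p3, p4 = rng.uniform(-pt_max, pt_max, (3, n, 2))
    w = np.zeros(n)
    w[keep] = t_squared(eps3[keep], eps4[keep],
                        p2[keep], p3[keep], p4[keep])
    vol = (eps1 - m)**2 * (2.0 * pt_max)**6   # sampled phase-space volume
    return ALPHA**2 / (2.0 * np.pi)**8 * 0.5 * vol * w.mean()
\end{verbatim}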
We have
\begin{equation}\label{TTT}
{\cal T}_{\mu_1\mu_2\mu_3\mu_4}=T_{\mu_1\mu_2\mu_3\mu_4}-\widetilde{T}_{\mu_1\mu_2\mu_3\mu_4}\,,
\quad \widetilde{T}_{\mu_1\mu_2\mu_3\mu_4}=T_{\mu_1\mu_3\mu_2\mu_4}(\bm p_2\leftrightarrow \bm p_3)\,,
\end{equation}
where the contributions $T$ and $\widetilde{T}$ correspond, respectively, to the left and right diagrams in Fig.~\ref{fig:diagrams}.
The amplitude $T$ was derived in Ref.~\cite{KM2016} by means of the quasiclassical approximation \cite{KLM2016}. Its explicit form is given in the Appendix with one modification. Namely, we have introduced the parameter $\lambda$, which equals unity if the interaction with the atomic field of the electrons having the momenta $\bm p_1$, $\bm p_2$ in the term $T$ and $\bm p_1$, $\bm p_3$ in the term $\widetilde T$ is taken into account. The parameter $\lambda$ equals zero if one neglects this interaction. The insertion of this parameter allows us to investigate the importance of the various contributions to the cross section.
First of all, we note that the term $T$ is a sum of two contributions (see the Appendix),
$$T=T^{(0)}+T^{(1)}\,,$$
where $T^{(0)}$ is the contribution to the amplitude in which the produced $e^+e^-$ pair does not interact with the atomic field, while the contribution $T^{(1)}$ contains this interaction.
In other words, the term $T^{(0)}$ corresponds to bremsstrahlung of a virtual photon decaying into a free $e^+e^-$ pair. In the contribution $T^{(1)}$, the electrons with the momenta $\bm p_1$ and
$\bm p_2$ may or may not interact with the atomic field. The latter case corresponds to the amplitude $T^{(1)}$ at $\lambda=0$. Below we refer to the result of accounting for this interaction in the term $T^{(1)}$ as the Coulomb corrections to scattering. Note that the contribution $T^{(0)}$ vanishes at $\lambda=0$.
In the present work we elucidate the following points: the relative contribution of the term $T^{(0)}$ to the cross section, the importance of the Coulomb corrections to scattering, and the importance of the interference between the amplitudes $T$ and $\widetilde{T}$ in the cross section.
We begin our analysis with the case of the differential cross section. Let us consider the quantity $S$,
\begin{equation}\label{S}
S=\sum_{\mu_i=\pm1}\Bigg|\frac{\varepsilon_1 m^4 {\cal T}_{\mu_1\mu_2\mu_3\mu_4}}{\eta (2\pi)^2}\Bigg|^2 \,,
\end{equation}
where $\eta=Z\alpha$ and $Z$ is the atomic charge number. In Fig.~\ref{dif} the dependence of $S$
on the positron transverse momentum $p_{4\perp}$ is shown for gold ($Z=79$) at certain values of $\varepsilon_i$, $\bm p_{2\perp}$, and $\bm p_{3\perp}$. The solid curve is the exact result, the long-dashed curve corresponds to $\lambda=0$, the dashed curve is the result obtained without the contributions $T^{(0)}$ and $\widetilde{T}^{(0)}$, the dash-dotted curve is the result obtained without the interference between $T$ and $\widetilde{T}$, and the dotted curve is the Born result (in the Born approximation $S$ is independent of $\eta$). One can see, for the case considered, that the Born result differs significantly from the exact one, and that the account for the interference is also very important. The contributions $T^{(0)}$ and $\widetilde{T}^{(0)}$ are noticeable but not large, and the Coulomb corrections to the contributions $T^{(1)}$ and $\widetilde{T}^{(1)}$ are essential.
The effect of screening is unimportant for the values of the parameters considered in Fig.~\ref{dif}. Note that the relative importance of the different effects under discussion for the differential cross section strongly depends on the values of $\bm p_{i}$. However, in all cases the deviation of the Born result from the exact one is substantial, even for moderate values of $Z$.
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{plotdif.eps}
\caption{The quantity $S$, see Eq.~\eqref{S}, as a function of $p_{4\perp}/m$ for $Z=79$, $\varepsilon_1=100m$, $\varepsilon_2/\varepsilon_1=0.28$, $\varepsilon_3/\varepsilon_1=0.42$, $\varepsilon_4/\varepsilon_1=0.3$, $p_{2\perp}=1.3 m$, $p_{3\perp}=0.5 m$, $\bm p_{3\perp}$ parallel to $\bm p_{4\perp}$, and the angle between $\bm p_{2\perp}$ and $\bm p_{4\perp}$ equal to $\pi/2$; the solid curve is the exact result, the dotted curve is the Born result, the dash-dotted curve is that obtained without account for the interference between $T$ and $\widetilde{T}$, the result for $\lambda=0$ is given by the long-dashed curve, and the dashed curve corresponds to the result obtained by neglecting the contributions $T^{(0)}$ and $\widetilde{T}^{(0)}$.}
\label{dif}
\end{figure}
Let us consider the cross section $d\sigma/dp_{2\perp}$, i.e., the cross section differential over the electron transverse momentum $p_{2\perp}$. This cross section for $Z=79$ and $\varepsilon_1=100 m$ is shown in the left panel of Fig.~\ref{dif2}. In this figure, the solid curve is the exact result, the dotted curve is the Born result, and the long-dashed curve corresponds to $\lambda=0$.
It is seen that the exact result differs significantly from the Born one, and the account for the Coulomb corrections to scattering is also essential. The importance of the interference between $T$ and $\widetilde{T}$, as well as of the contributions $T^{(0)}$ and $\widetilde{T}^{(0)}$, is demonstrated in the right panel of Fig.~\ref{dif2}. This panel shows the quantity $\delta$, which is the deviation of the approximate result for $d\sigma/dp_{2\perp}$ from the exact one in units of the exact cross section. The dash-dotted curve is obtained without the interference between $T$ and $\widetilde{T}$, and the dashed curve is obtained without the contributions $T^{(0)}$ and $\widetilde{T}^{(0)}$. It is seen that both effects are noticeable.
Our results are obtained under the condition $\varepsilon_i\gg m$, so the question of the limits of integration over the energies arises in the numerical calculation of $d\sigma/dp_{2\perp}$. We have examined this question and found that a variation of the limits of integration in the vicinity of the threshold changes the result of integration only slightly. In any case, such a variation does not change the interplay of the various contributions to the cross sections, and we present the results obtained by integration over the whole allowed kinematical region.
\begin{figure}[H]
\centering
\includegraphics[width=0.45\linewidth]{dp2new.eps}
\includegraphics[width=0.45\linewidth]{dp2_difnew.eps}
\caption{Left panel: the dependence of $d\sigma/dp_{2\perp}$ on $p_{2\perp}/m$ in units of $\sigma_0/m=\alpha^2\eta^2/m^3$ for $Z=79$, $\varepsilon_1/m=100$; the solid curve is the exact result, the dotted curve is the Born result, and the long-dashed curve corresponds to $\lambda=0$. Right panel: the quantity $\delta$ as a function of $p_{2\perp}/m$, where $\delta$ is the deviation of the approximate result for $d\sigma/dp_{2\perp}$ from the exact one in units of the exact cross section. The dash-dotted curve is obtained without the interference between $T$ and $\widetilde{T}$, and the dashed curve is obtained without the contributions $T^{(0)}$ and $\widetilde{T}^{(0)}$.}
\label{dif2}
\end{figure}
It follows from Fig.~\ref{dif2} that the deviation of the results obtained for $\lambda=1$ from those obtained for $\lambda=0$ is noticeable and negative in the vicinity of the peak, and small and positive in the wide region outside the peak. However, these two deviations (positive and negative) strongly compensate each other in the cross section integrated over both electron transverse momenta $\bm p_{2\perp}$ and $\bm p_{3\perp}$. This statement is illustrated in Fig.~\ref{dif4}, where the cross section differential over the positron transverse momentum, $d\sigma/dp_{4\perp}$, is shown for $Z=79$ and $\varepsilon_1=100 m$.
\begin{figure}[H]
\centering
\includegraphics[width=0.45\linewidth]{dp4new.eps}
\includegraphics[width=0.45\linewidth]{dp4_difnew.eps}
\caption{Left panel: the dependence of $d\sigma/dp_{4\perp}$ on $p_{4\perp}/m$ in units of $\sigma_0/m=\alpha^2\eta^2/m^3$ for $Z=79$, $\varepsilon_1/m=100$; the solid curve is the exact result and the dotted curve is the Born result. Right panel: the quantity $\delta_1$ as a function of $p_{4\perp}/m$, where $\delta_1$ is the deviation of the approximate result for $d\sigma/dp_{4\perp}$ from the exact one in units of the exact cross section. The dash-dotted curve is obtained without the interference between $T$ and $\widetilde{T}$, the dashed curve is obtained without the contributions $T^{(0)}$ and $\widetilde{T}^{(0)}$, and the long-dashed curve corresponds to $\lambda=0$.}
\label{dif4}
\end{figure}
Again, the Born result differs significantly from the exact one. It is seen that all the relative deviations $\delta_1$ depicted in the right panel are noticeable. Moreover,
the result obtained for $\lambda=0$ and that obtained without the contributions $T^{(0)}$ and $\widetilde{T}^{(0)}$ are very close to each other. This means that the account of the Coulomb corrections to scattering leads to a very small shift of the integrated cross section $d\sigma/dp_{4\perp}$, in contrast to the cross section $d\sigma/dp_{2\perp}$. Such a suppression is similar to that found in our recent paper \cite{KM2017} in the consideration of $e^+e^-$ pair electroproduction by a heavy charged particle in the atomic field.
Finally, let us consider the total cross section $\sigma$ of the process under consideration. The cross section $\sigma$ for $Z=79$ as a function of $\varepsilon_1/m$ is shown in the left panel of Fig.~\ref{tot}. In this plot the solid curve is the exact result, the dotted curve is the Born result, and the dash-dotted curve is the ultra-relativistic asymptotics of the Born result given by the formula of Racah \cite{Racah37}. Note that a small deviation of our Born result at relatively small energies from the asymptotics of the Born result is due, first, to the uncertainty of our result related to the choice of the lower limit of integration over the energies of the produced particles and, second, to the neglect of the identity of the final electrons in Ref.~\cite{Racah37}.
\begin{figure}[H]
\centering
\includegraphics[width=0.45\linewidth]{total_cs.eps}
\includegraphics[width=0.45\linewidth]{total_cs_dif.eps}
\caption{Left panel: the total cross section $\sigma$ as a function of $\varepsilon_1/m$ in units of $\sigma_0=\alpha^2\eta^2/m^2$ for $Z=79$; the solid curve is the exact result, the dotted curve is the Born result, and the dash-dotted curve is the ultra-relativistic asymptotics of the Born result given by the formula of Racah \cite{Racah37}. Right panel: the quantity $\delta_2$ as a function of $\varepsilon_1/m$, where $\delta_2$ is the deviation of the approximate result for $\sigma$ from the exact one in units of the exact cross section. The dash-dotted curve is obtained without the interference between $T$ and $\widetilde{T}$, the dashed curve is obtained without the contributions of $T^{(0)}$ and $\widetilde{T}^{(0)}$, and the long-dashed curve corresponds to $\lambda=0$.}
\label{tot}
\end{figure}
It is seen that the exact result differs significantly from the Born one. In the right panel of Fig.~\ref{tot} we show the relative deviation $\delta_2$ of the approximate result for $\sigma$ from the exact one. The dash-dotted curve is obtained without the interference between $T$ and $\widetilde{T}$, the dashed curve is obtained without the contributions $T^{(0)}$ and $\widetilde{T}^{(0)}$, and the long-dashed curve corresponds to $\lambda=0$. The corrections to the total cross section due to the contributions $T^{(0)}$ and $\widetilde{T}^{(0)}$ and to the Coulomb corrections to scattering are small even at moderate energy $\varepsilon_1$. The effect of the interference is more important at moderate energies and less important at high energies.
In our recent paper \cite{KM2016} the differential cross section of electroproduction by a relativistic electron was derived. For the differential cross section, we pointed out that the Coulomb corrections to scattering are most noticeable in the region $p_{2\perp}\sim \omega/\gamma$. On the basis of this statement, we evaluated in the leading logarithmic approximation the Coulomb corrections to the total cross section, see Eq.~(33) of Ref.~\cite{KM2016}. However, as shown in the present paper, for the total cross section the contribution of the Coulomb corrections to scattering in the region $p_{2\perp}\sim \omega/\gamma$ is strongly compensated by the contribution of the Coulomb corrections to scattering in the wide region outside $p_{2\perp}\sim \omega/\gamma$. As a result, the Coulomb corrections to the total cross section derived in the leading logarithmic approximation are not affected by the account of the Coulomb corrections to scattering. This means that the coefficient in Eq.~(33) of Ref.~\cite{KM2016} should be two times smaller and equal to that in the Coulomb corrections to the total cross section of $e^+e^-$ electroproduction by a relativistic heavy particle calculated in the leading logarithmic approximation. Note that the accuracy of the result obtained for the Coulomb corrections to the total cross section is rather low because in electroproduction there is a strong compensation between the leading and next-to-leading terms in the Coulomb corrections, see Ref.~\cite{LM2009}.
\section{Conclusion}
By tabulating the formula for the differential cross section of $e^+e^-$ pair electroproduction by a relativistic electron in the atomic field \cite{KM2016}, we have elucidated the importance of the various contributions to the integrated cross sections of the process. It is shown that the Coulomb corrections are very important both for the differential cross section and for the integrated cross sections, even for moderate values of the atomic charge number. This effect is mainly related to the Coulomb corrections to the amplitudes $T^{(1)}$ and ${\widetilde T}^{(1)}$ due to the exact account of the interaction of the produced $e^+e^-$ pair with the atomic field (the Coulomb corrections to the amplitude of $e^+e^-$ pair photoproduction by a virtual photon). There are also some other effects. For the cross section differential in the electron transverse momentum, $d\sigma/dp_{2\perp}$, the interference of the amplitudes and the contribution of virtual bremsstrahlung (the contribution of the amplitudes $T^{(0)}$ and ${\widetilde T}^{(0)}$)
are noticeable. The Coulomb corrections to scattering are larger than these two effects but essentially smaller than the Coulomb corrections to the amplitude of pair photoproduction by a virtual photon. However, in the cross section differential in the positron transverse momentum, $d\sigma/dp_{4\perp}$, the interference of the amplitudes and the contribution of virtual bremsstrahlung lead to corrections of the same size as the effect of the Coulomb corrections to scattering; the former two are of the same order as in the case of $d\sigma/dp_{2\perp}$, which means that the effect of the Coulomb corrections to scattering is strongly suppressed in the cross section $d\sigma/dp_{4\perp}$. The relative importance of the various effects for the total cross section is the same as in the case of the cross section $d\sigma/dp_{4\perp}$.
\section*{Acknowledgement}
This work has been supported by the Russian Science Foundation (Project No. 14-50-00080). It has also been supported in part by
the RFBR (Grant No. 16-02-00103).
\section*{Appendix}\label{app}
Here we present the explicit expression for the amplitude $T$, derived in Ref.~\cite{KM2016}, with one modification. Namely, since we are going to investigate the importance of the interaction of the electrons having momenta $\bm p_1$ and $\bm p_2$ with the atomic field, we introduce the parameter $\lambda$, which is equal to unity if this interaction is taken into account and to zero if this interaction is neglected. We write the amplitude $T$ in the form
$$T=T^{(0)}+T^{(1)}\,,\quad T^{(0)}=T^{(0)}_\parallel+T^{(0)}_\perp\,,\quad T^{(1)}=T^{(1)}_\parallel+T^{(1)}_\perp\,,$$
where the helicity amplitudes $T^{(0)}_{\perp\parallel}$ read
\begin{align}\label{T0}
&T_\perp^{(0)}=\frac{8\pi A(\bm\Delta_0)}{\omega(m^2+\zeta^2)} \Big\{\delta_{\mu_1\mu_2}\delta_{\mu_3\bar\mu_4}
\Big[\frac{\varepsilon_3}{\omega^2}(\bm s_{\mu_3}^*\cdot \bm X)(\bm s_{\mu_3}\cdot\bm\zeta)(\varepsilon_1\delta_{\mu_1\mu_3}+\varepsilon_2\delta_{\mu_1\mu_4})\nonumber\\
&-\frac{\varepsilon_4}{\omega^2}(\bm s_{\mu_4}^*\cdot \bm X)(\bm s_{\mu_4}\cdot\bm\zeta) (\varepsilon_1\delta_{\mu_1\mu_4}+\varepsilon_2\delta_{\mu_1\mu_3})\Big]\nonumber\\
&-\frac{m\mu_1}{\sqrt{2}\varepsilon_1\varepsilon_2}R\delta_{\mu_1\bar\mu_2}\delta_{\mu_3\bar\mu_4}
(\bm s_{\mu_1}\cdot\bm\zeta)(-\varepsilon_3\delta_{\mu_1\mu_3}+\varepsilon_4\delta_{\mu_1\mu_4})\nonumber\\
&+\frac{m\mu_3}{\sqrt{2}\varepsilon_3\varepsilon_4}\delta_{\mu_1\mu_2}\delta_{\mu_3\mu_4}(\bm s_{\mu_3}^*\cdot\bm X)(\varepsilon_1\delta_{\mu_3\mu_1}+\varepsilon_2\delta_{\mu_3\bar\mu_1})
+\frac{m^2\omega^2}{2\varepsilon_1\varepsilon_2\varepsilon_3\varepsilon_4}R\delta_{\mu_1\bar\mu_2}\delta_{\mu_3\mu_4}\delta_{\mu_1\mu_3}\Big\}\,,\nonumber\\
&T_\parallel^{(0)}=-\frac{8\pi }{\omega^2}A(\bm\Delta_0)R\delta_{\mu_1\mu_2}\delta_{\mu_3\bar\mu_4}\,.
\end{align}
Here $\mu_i=\pm 1$ corresponds to the helicity of the particle with the momentum $\bm p_i$, $\bar\mu_i=-\mu_i$, and
\begin{align}\label{T0not}
&A(\bm\Delta)=-\frac{i\lambda}{\Delta_{\perp}^2}\int d\bm r\,\exp[-i\bm\Delta\cdot\bm r-i\chi(\rho)]\bm\Delta_{\perp}\cdot\bm\nabla_\perp V(r)\,,\nonumber\\
&\chi(\rho)=\lambda\int_{-\infty}^\infty dz\,V(\sqrt{z^2+\rho^2})\,,\quad\bm\rho=\bm r_\perp\,,\quad\bm\zeta=\frac{\varepsilon_3\varepsilon_4}{\omega}\bm\theta_{34}\,,\nonumber\\
&\omega=\varepsilon_3+\varepsilon_4\,, \quad \bm\Delta_{0\perp}=\varepsilon_2\bm\theta_{21}+\varepsilon_3\bm\theta_{31}+\varepsilon_4\bm\theta_{41}\,,\nonumber\\
&\Delta_{0\parallel}=-\frac{1}{2}\left[m^2\omega\left(\frac{1}{\varepsilon_1\varepsilon_2}+\frac{1}{\varepsilon_3\varepsilon_4}\right)+\frac{p_{2\perp}^2}{\varepsilon_2}+ \frac{p_{3\perp}^2}{\varepsilon_3}+\frac{p_{4\perp}^2}{\varepsilon_4}\right]\,,\nonumber\\
&R=\frac{1}{d_1d_2}[\Delta^2_{0\perp} (\varepsilon_1+\varepsilon_2)+2\varepsilon_1\varepsilon_2(\bm\theta_{12}\cdot\bm\Delta_{0\perp})]\,,\nonumber\\
&\bm X=\frac{1}{d_1}(\varepsilon_3\bm\theta_{23}+\varepsilon_4\bm\theta_{24})-\frac{1}{d_2}(\varepsilon_3\bm\theta_{13}+\varepsilon_4\bm\theta_{14})\,,\nonumber\\
&d_1=m^2\omega\varepsilon_1\left(\frac{1}{\varepsilon_1\varepsilon_2}+\frac{1}{\varepsilon_3\varepsilon_4}\right)+\varepsilon_2\varepsilon_3\theta_{23}^2
+\varepsilon_2\varepsilon_4\theta_{24}^2+\varepsilon_3\varepsilon_4\theta_{34}^2\,,\nonumber\\
&d_2=m^2\omega\varepsilon_2\left(\frac{1}{\varepsilon_1\varepsilon_2}+\frac{1}{\varepsilon_3\varepsilon_4}\right)+\varepsilon_2\varepsilon_3\theta_{31}^2
+\varepsilon_2\varepsilon_4\theta_{41}^2+(\varepsilon_3\bm\theta_{31}+\varepsilon_4\bm\theta_{41})^2\,,\nonumber\\
&\bm\theta_i=\bm p_{i\perp}/p_i \,,\quad \bm\theta_{ij}=\bm\theta_{i}-\bm\theta_{j}\,,
\end{align}
with $V(r)$ being the electron potential energy in the atomic field. In the amplitude $T^{(0)}$ the interaction of the produced $e^+e^-$ pair with the atomic field is neglected, so that $T^{(0)}$ depends on the atomic potential in the same way as the bremsstrahlung amplitude, see, e.g., Ref.~\cite{LMSS2005}.
The amplitudes $T^{(1)}_{\perp\parallel}$ have the following form
\begin{align}\label{T1C}
&T_\perp^{(1)}=\frac{8i\eta}{\omega \varepsilon_1}|\Gamma(1-i\eta)|^2 \int\frac{d\bm\Delta_\perp\, A(\bm\Delta_\perp+\bm p_{2\perp})F_a(Q^2)}{Q^2 M^2\,(m^2\omega^2/\varepsilon_1^2+\Delta_\perp^2)}\left(\frac{\xi_2}{\xi_1}\right)^{i\eta}
{\cal M}\,, \nonumber\\
&{\cal M}=-\frac{\delta_{\mu_1\mu_2}\delta_{\mu_3\bar\mu_4}}{\omega} \big[ \varepsilon_1(\varepsilon_3 \delta_{\mu_1\mu_3}-\varepsilon_4 \delta_{\mu_1\mu_4})
(\bm s_{\mu_1}^*\cdot\bm \Delta_\perp)(\bm s_{\mu_1}\cdot\bm I_1)\,\nonumber\\
&+\varepsilon_2(\varepsilon_3 \delta_{\mu_1\bar\mu_3}-\varepsilon_4 \delta_{\mu_1\bar\mu_4})(\bm s_{\mu_1}\cdot\bm \Delta_\perp)(\bm s_{\mu_1}^*\cdot\bm I_1) \big]+\delta_{\mu_1\bar\mu_2}\delta_{\mu_3\bar\mu_4}\frac{m\omega\mu_1}{\sqrt{2}\varepsilon_1 }(\varepsilon_3 \delta_{\mu_1\mu_3}-\varepsilon_4 \delta_{\mu_1\mu_4})(\bm s_{\mu_1}
\cdot\bm I_1)\nonumber\\
&+\delta_{\mu_1\mu_2}\delta_{\mu_3\mu_4}\frac{m\mu_3}{\sqrt{2}}(\varepsilon_1 \delta_{\mu_1\mu_3}+\varepsilon_2 \delta_{\mu_1\bar\mu_3})(\bm s_{\mu_3}^*\cdot\bm \Delta_\perp)I_0
-\frac{m^2\omega^2}{2\varepsilon_1}\delta_{\mu_1\bar\mu_2}\delta_{\mu_3\mu_4}\delta_{\mu_1\mu_3}I_0\,,\nonumber\\
&T_\parallel^{(1)}=-\frac{8i\eta\varepsilon_3\varepsilon_4}{\omega^3}|\Gamma(1-i\eta)|^2 \int \frac{d\bm\Delta_\perp\, A(\bm\Delta_\perp+\bm p_{2\perp})F_a(Q^2)}{Q^2 M^2}\left(\frac{\xi_2}{\xi_1}\right)^{i\eta}\,I_0
\delta_{\mu_1\mu_2}\delta_{\mu_3\bar\mu_4}\,,
\end{align}
where $F_a(Q^2)$ is an atomic form factor,
and the following notations are used
\begin{align}\label{T1Cnot}
&M^2=m^2\Big(1+\frac{\varepsilon_3\varepsilon_4}{\varepsilon_1\varepsilon_2}\Big)
+\frac{\varepsilon_1\varepsilon_3\varepsilon_4}{\varepsilon_2\omega^2} \Delta_\perp^2\,,\quad
\bm Q_\perp=\bm \Delta_\perp-\bm p_{3\perp}-\bm p_{4\perp}\,, \nonumber\\
&Q^2= Q_\perp^2+\Delta_{0\parallel}^2\,,\quad
\bm q_1=\frac{\varepsilon_3}{\omega}\bm \Delta_\perp- \bm p_{3\perp}\,,\quad \bm q_2=
\frac{\varepsilon_4}{\omega}\bm \Delta_\perp- \bm p_{4\perp} \,,\nonumber\\
&I_0=(\xi_1-\xi_2)F(x)+(\xi_1+\xi_2-1)(1-x)\frac{F'(x)}{i\eta}\,,\nonumber\\
&\bm I_1=(\xi_1\bm q_1+\xi_2\bm q_2)F(x)+(\xi_1\bm q_1-\xi_2\bm q_2)(1-x)\frac{F'(x)}{i\eta}\,,\nonumber\\
&\xi_1=\frac{M^2}{M^2+q_1^2}\,,\quad \xi_2=\frac{M^2}{M^2+q_2^2}\,,\quad x=1-\frac{Q_\perp^2\xi_1\xi_2}{M^2}\,,\nonumber\\
&F(x)=F(i\eta,-i\eta, 1,x)\,,\quad F'(x)=\frac{\partial}{\partial x}F(x)\,,\quad \eta=Z\alpha\,.
\end{align}
Note that the parameter $\lambda$ is contained solely in the function $A(\bm\Delta)$, Eq.~\eqref{T0not}.
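As a practical aside (ours, not part of the original computations of Ref.~\cite{KM2016}), the functions $F(x)$ and $F'(x)$ entering Eq.~\eqref{T1Cnot} can be evaluated with the {\tt mpmath} library, using the standard contiguous relation $\frac{d}{dx}F(a,b;c;x)=\frac{ab}{c}F(a+1,b+1;c+1;x)$, which for $a=i\eta$, $b=-i\eta$, $c=1$ gives $F'(x)=\eta^2 F(1+i\eta,1-i\eta;2;x)$:
\begin{verbatim}
# A minimal numerical sketch (not from the original computations) of the
# hypergeometric functions in Eq. (T1Cnot), using mpmath's hyp2f1.
import mpmath as mp

alpha = 1.0 / 137.035999   # fine-structure constant
Z = 79                     # gold, as in the figures
eta = Z * alpha

def F(x):
    """F(x) = 2F1(i*eta, -i*eta; 1; x); real for real x in [0, 1)."""
    return mp.hyp2f1(1j * eta, -1j * eta, 1, x)

def dF(x):
    """F'(x) = eta^2 * 2F1(1 + i*eta, 1 - i*eta; 2; x)."""
    return eta**2 * mp.hyp2f1(1 + 1j * eta, 1 - 1j * eta, 2, x)

for x in (0.1, 0.5, 0.9):
    print(x, mp.re(F(x)), mp.re(dF(x)))
\end{verbatim}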
\section{Introduction}
\label{sec:introduction}
\subsection{Motivation: Road and Terrain Mapping}
\label{subsec:terrain}
There has been a steep rise of interest in the last decade, among researchers in academia and the commercial sector, in autonomous vehicles and self-driving cars. Although adaptive estimation has been studied for some time, applications such as terrain or road mapping continue to challenge researchers to further develop the underlying theory and algorithms in this field. These vehicles are required to sense the environment and navigate the surrounding terrain without any human intervention. The environmental sensing capability of such vehicles must enable them to navigate off-road conditions or to respond to other agents in urban settings. As a key ingredient to achieve these goals, it can be critical to have good {\em a priori} knowledge of the surrounding environment as well as of the position and orientation of the vehicle in the environment.
To collect this data for the construction of terrain maps, mobile vehicles equipped with multiple high bandwidth, high resolution imaging sensors are deployed. The mapping sensors retrieve the terrain data relative to the vehicle and navigation sensors provide georeferencing relative to a fixed coordinate system. The geospatial data, which can include the digital terrain maps acquired from these mobile mapping systems, find applications in emergency response planning and road surface monitoring. Further, to improve the ride and handling characteristic of an autonomous vehicle, it might be necessary that these digital terrain maps have accuracy on a sub-centimeter scale.
One of the main areas of improvement in current state-of-the-art terrain modeling technologies is localization. Since localization relies heavily on the quality of GPS/GNSS and IMU data, it is important to develop novel approaches that fuse the data from multiple sensors to generate the best possible estimate of the environment. Contemporary data acquisition systems used to map the environment generate data sets that are scattered in time and space. These data sets must be either post-processed or processed online for the construction of three dimensional terrain maps.
Fig.~\ref{fig:vehicle1} and Fig.~\ref{fig:vehicle2} depict a map building vehicle and trailer developed by some of the authors at Virginia Tech. The system generates experimental observations in the form of data that is scattered in time and space. These data sets have extremely high dimensionality.
Roughly 180 million scattered data points are collected per minute of data acquisition, which corresponds to a data file of roughly $\mathcal{O}(1GB)$ in size. Current algorithms and software developed in-house post-process the scattered data to generate road and terrain maps. This offline batch computing problem can take many days of computing time to complete. It remains a challenging task to derive a theory and associated algorithms that would enable adaptive or online estimation of terrain maps from such high dimensional, scattered measurements.
This paper introduces a novel theory and associated algorithms that are amenable to observations that take the form of scattered data. The key attribute of the approach is that the unknown function representing the terrain is viewed as an element of a RKHS. The RKHS is constructed in terms of a kernel function $k(\cdot,\cdot): \Omega \times \Omega \rightarrow \mathbb{R}$ where $\Omega \subseteq \mathbb{R}^d$ is the domain over which scattered measurements are made.
The kernel $k$ can often be used to define a collection of radial basis functions (RBFs) $k_x(\cdot):=k(x,\cdot)$, each of which is said to be centered at some point $x\in \Omega$. For example, these RBFs might be exponentials, wavelets, or thin plate splines \cite{wendland}.
By embedding the unknown function that represents the terrain in a RKHS, the new formulation generates a system that constitutes a distributed parameter system. The unknown function, representing map terrain, is the infinite dimensional distributed parameter.
Although the study of infinite dimensional distributed parameter systems can be substantially more difficult than the study of ODEs, a key result is that stability and convergence of the approach can be established succinctly in many cases.
Much of the complexity \cite{bsdr1997,bdrr1998} associated with the construction of Gelfand triples or the analysis of infinitesimal generators and semigroups that define a DPS can be avoided for many of the systems studied in this paper.
The kernel $k(\cdot,\cdot): \Omega \times \Omega \rightarrow \mathbb{R}$ that defines the RKHS provides a natural collection of bases for approximate estimates of the solution that are based directly on some subset of scattered measurements $\{ x_i \}_{i=1}^n \subset \mathbb{R}^d$.
It is typical in applications to select the centers $\{x_i\}_{i=1}^n$ that locate the basis functions from some sub-sample of the locations at which the scattered data is measured. Thus, while we do not study the nuances of such methods, in this paper the formulation provides a natural framework to pose so-called ``basis adaptive methods'' such as in~\cite{dzcf2012} and the references therein.
While our formulation is motivated by this particular application, it is a general construction for framing and generalizing some conventional approaches to online adaptive estimation. This framework introduces sufficient conditions that guarantee convergence of the estimates over the spatial domain $\Omega$ to the unknown function $f$. In contrast, nearly all conventional strategies consider stability and convergence in time alone for evolutions in some fixed finite dimensional space $\mathbb{R}^d \times \mathbb{R}^n$, with $n$ the number of parameters used to represent the estimate. The remainder of this paper studies the existence and uniqueness of solutions, stability, and convergence of approximate solutions for the infinite dimensional adaptive estimation problem defined over an RKHS. The paper concludes with an example of an RKHS adaptive estimation problem for a simple model of map building from vehicles. The numerical example demonstrates the rate of convergence for finite dimensional models constructed from RBF bases that are centered at a subset of scattered observations.
\begin{figure}
\centering
\includegraphics[scale=0.75]{Picture1.png}
\caption{Vehicle Terrain Measurement System, Virginia Tech}
\label{fig:vehicle1}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.75]{Picture2.png}
\caption{Experimental Setup with LMI 3D GO-Locator Lasers}
\label{fig:vehicle2}
\end{figure}
\subsection{Related Research}
\label{sec:related_research}
The general theory derived in this paper has been motivated in part by the terrain mapping application discussed in Section \ref{sec:introduction}, but also by recent research in a number of fields related to estimation of nonlinear functions. In this section we briefly review some of the recent research in probabilistic or Bayesian mapping methods, nonlinear approximation and learning theory, statistics, and nonlinear regression.
\subsubsection{Bayesian and Probabilistic Mapping}
Many popular techniques adopt a probabilistic approach towards solving the localization and mapping problem in robotics. The algorithms used to solve this problem fundamentally rely on Bayesian estimation techniques like particle filters, Kalman filters, and other variants of these methods \cite{Thrun2005Probabilistic, Whyte2006SLAM1, Whyte2006SLAM2}. The computational effort required to implement these algorithms can be substantial since they involve constructing and updating maps while simultaneously tracking the relative locations of agents with respect to the environment. Over the last three decades significant progress has been made on various frontiers in terms of high-end sensing capabilities, faster data processing hardware, and robust and efficient computational algorithms \cite{Dissanayake2011Review, Dissanayake2000Computational}. However, the usual Kalman filter based approaches implemented in these applications often must address the inconsistency problem in estimation that arises from uncertainties in state estimates \cite{Huang2007Convergence,Julier2001Counter}. Furthermore, it is well acknowledged in the community that these methods suffer from the major drawback of `{\em closing the loop}', which refers to the ability to adaptively update the information when a location is revisited; such a capability demands huge memory to store the high resolution and high bandwidth data. Moreover, it is highly nontrivial to guarantee that the uncertainties in the estimates converge to their lower bounds at suboptimal rates, since matching these rates and bounds can significantly constrain the evolution of the states along infeasible trajectories. While probabilistic methods, and in particular Bayesian estimation techniques, for the construction of terrain maps have flourished over the past few decades, relatively few approaches have appeared for establishing deterministic theoretical error bounds in the spatial domain for the unknown function representing the terrain.
\subsubsection{Approximation and Learning Theory}
Approximation theory has a long history, but the subtopics of most relevance to this paper include recent studies in multiresolution analysis (MRA), radial basis function (RBF) approximation, and learning theory. The study of MRA techniques became popular in the late 1980's and early 1990's, and it has flourished since that time. We use only a small part of the general theory of MRAs in this paper, and we urge the interested reader to consult one of the excellent treatises on this topic for a full account. References \cite{Meyer,mallat,daubechies, dl1993} are good examples of such detailed treatments. We briefly summarize the pertinent aspects of MRA here and in Section \ref{sec:MRA}. A multiresolution analysis defines a family of nested approximation spaces $\seq{H_j}_{j\in \mathbb{N}}\subseteq H$ of an abstract space $H$ in terms of a single function $\phi$, the scaling function. The approximation space $H_j$ is defined in terms of bases that are constructed from the dilates and translates $\seq{\phi_{j,k}}_{k\in \mathbb{Z}^d}$, with $\phi_{j,k}(x):=2^{jd/2}\phi(2^jx-k)$ for $x\in \mathbb{R}^d$, of this single function $\phi$. It is for this reason that these spaces are sometimes referred to as shift invariant spaces. While the MRA is ordinarily defined only in terms of the scaling functions, the theory provides a rich set of tools to derive bases $\seq{\psi_{j,k}}_{k\in \mathbb{Z}}$, or wavelets,
for the complement spaces $W_j:=H_{j+1}- H_{j}$. Our interest in multiresolution analysis arises since these methods can be used to develop multiscale kernels for RKHS, as summarized in \cite{opfer1,opfer2}. We consider only approximation spaces defined in terms of the scaling functions in this paper. Specifically, with a parameter $s \in \mathbb{R}^+$ measuring smoothness, we use $s$-regular MRAs to define admissible reproducing kernels that embody the online and adaptive estimation strategies in this paper.
When the MRA bases are smooth enough, the RKHS kernels derived from an MRA can be shown to be equivalent to a scale of Sobolev spaces having well documented approximation properties.
The B-spline bases in the numerical examples yield RKHS embeddings with good condition numbers. The details of the RKHS embedding strategy given in terms of wavelet bases associated with an MRA are treated in a forthcoming paper.
\subsubsection{Learning Theory and Nonlinear Regression}
The methodology defined in this paper for online adaptive estimation can be viewed as similar in philosophy to recent efforts that synthesize learning theory and approximation theory \cite{dkpt2006,kt2007,cdkp2001,t2008}. In these references, independent and identically distributed observations of some unknown function are collected, and they are used to define an estimator of that unknown function. Sharp estimates of error, guaranteed to hold in probability spaces, are possible using tools familiar from learning theory and thresholding in approximation spaces. The approximation spaces are usually defined in terms of subspaces of an MRA. However, there are a few key differences between these efforts in nonlinear regression and learning theory and this paper. The learning theory approaches to estimation of the unknown function depend on observations of the function itself. In contrast, the adaptive online estimation framework here assumes that observations are made of the estimator states, not directly of the unknown function itself. The learning theory methods also assume a discrete measurement process, instead of the continuous measurement process that characterizes online adaptive estimation. On the other hand, the methods based on learning theory derive sharp function space rates of convergence of the estimates of the unknown function. Such estimates are not available in conventional online adaptive estimation methods; typically, convergence in adaptive estimation strategies is guaranteed in time in a fixed finite dimensional space. One of the significant contributions of this paper is to derive sharp convergence rates in function space, similar to those in learning theory, for online adaptive estimates of the unknown function.
\subsubsection{Online Adaptive Estimation and Control}
Since the approach in this paper generalizes a standard strategy in online adaptive estimation and control theory, we review this class of methods in some detail. This summary will be crucial in understanding the nuances of the proposed technique and in contrasting the sharp estimates of error available in the new strategy to those in the conventional approach.
Many popular textbooks study online or adaptive estimation within the context of adaptive control theory for systems governed by ordinary differential equations \cite{sb2012,IaSu,PoFar}. The theory has been extended in several directions, each with its subtle assumptions and associated analyses.
Adaptive estimation and control theory has been refined for decades, and significant progress has been made in deriving convergent estimation and stable control strategies that are robust with respect to some classes of uncertainty.
The efforts in \cite{bsdr1997,bdrr1998} are relevant to this paper, where the authors generalize some of the adaptive estimation and model reference adaptive control (MRAC) strategies for ODEs so that they apply to deterministic infinite dimensional evolution systems. In addition, \cite{dmp1994,dp1988,dpg1991,p1992} also investigate adaptive control and estimation problems under various assumptions for classes of stochastic and infinite dimensional systems.
Recent developments in $\mathcal{L}^1$ control theory as presented in \cite{HC}, for example, utilize adaptive estimation and control strategies in obtaining stability and convergence for systems generated by collections of nonlinear ODEs.
To motivate this paper, we consider a model problem in which the plant dynamics are generated by the nonlinear ordinary differential equations
\begin{align}
\dot{x}(t)&= A x(t) + Bf(x(t)), \quad \quad x(0)=x_0
\label{eq:simple_plant}
\end{align}
with state $x(t)\in \mathbb{R}^d$, the known Hurwitz system matrix $ A \in \mathbb{R}^{d\times d}$, the known control influence matrix $B\in \mathbb{R}^d$, and the unknown function $f:\mathbb{R}^d \rightarrow \mathbb{R}$.
Although this model problem is an exceedingly simple prototypical example studied in adaptive estimation and control of ODEs \cite{sb2012,IaSu,PoFar}, it has proven to be an effective case study in motivating alternative formulations such as in \cite{HC} and will suffice to motivate the current approach.
Of course, much more general plants are treated in standard methods \cite{sb2012,IaSu,PoFar,naranna} and can be attacked using the strategy that follows. This structurally simple problem is chosen so as to clearly illustrate the essential constructions of RKHS embedding method while omitting the nuances associated with general plants. A typical adaptive estimation problem can often be formulated in terms of an estimator equation and a learning law. One of the simplest estimators for this model problem takes the form
\begin{align}
\dot{\hat{x}}(t)&= A \hat{x}(t) + B\hat{f}(t,x(t)),
\quad \quad
\hat{x}(0)=\hat{x}_0
\label{eq:sim_estimator}
\end{align}
where $\hat{x}(t)$ is an estimate of the state $x(t)$ and $\hat{f}(t,x(t))$ is a time-varying estimate of the unknown function $f$ that depends on the measurement of the state $x(t)$ of the plant at time $t$. With the state error $\tilde{x}:=x-\hat{x}$ and the function estimate error $\tilde{f}:=f-\hat{f}$, the state error equation is simply
\begin{align}
\dot{\tilde{x}}(t)&= A \tilde{x}(t) + B\tilde{f}(t,x(t)), \quad \quad
\tilde{x}(0)=\tilde{x}_0.
\label{eq:sim_error}
\end{align}
The goal of adaptive or online estimation is to determine a learning law that governs the evolution of the function estimate $\hat{f}$ and guarantees that the state estimate $\hat{x}$ converges to the true state $x$,
$
\tilde{x}(t)= x(t)-\hat{x}(t) \to
0 \text{ as } t\to \infty
$.
Ideally, in addition, the function estimates $\hat{f}$ converge to the unknown function $f$,
$
\tilde{f}(t)= f -\hat{f}(t) \to
0 \text{ as } t \to \infty.
$
The choice of the learning law for the update of the adaptive estimate $\hat{f}$ depends intrinsically on what specific information is available about the unknown function $f$.
It is most often the case for ODEs that the estimate $\hat{f}$ depends on a finite set of unknown parameters $\hat{\alpha}_1,\ldots,\hat{\alpha}_n$. The learning law is then expressed as an evolution law for the parameters $\hat{\alpha}_i$, $i=1,\ldots,n$. The discussion that follows emphasizes that this is a very specific underlying assumption regarding the information available about unknown function $f$. Much more general prior assumptions are possible.
\subsubsection{Classes of Uncertainty in Adaptive Estimation}
The adaptive estimation task seeks to construct a learning law based on the knowledge that is available regarding the function $f$.
Different methods for solving this problem have been developed depending on the type of information available about the unknown function $f$.
The uncertainty about $f$ is often described as forming a continuum between structured and unstructured uncertainty.
In the most general case, we might know that $f$ lies in some compact set $\mathcal{C}$ of a particular Hilbert space of functions $H$ over a subset $\Omega \subseteq \mathbb{R}^d$.
This case, that reflects in some sense the least information regarding the unknown function, can be expressed as the condition that
$
f \in \mathcal{C} \subset {H},
$
for some compact set of functions $\mathcal{C}$ in a Hilbert space of functions $H$.
In approximation theory, learning theory, or non-parametric estimation problems this information is sometimes referred to as the {\em prior}, and the choice of $H$ is commonly known as the hypothesis space. The selection of the hypothesis space $H$ and the set $\mathcal{C}$ often reflects the approximation, smoothness, or compactness properties of the unknown function \cite{dkpt2006}.
This example may in some sense utilize only limited or minimal information regarding the unknown function $f$, and we may refer to the uncertainty as unstructured. Numerous variants of conventional adaptive estimation admit additional knowledge about the unknown function.
In most conventional cases the unknown function $f$ is assumed to be given in terms of some fixed set of parameters.
This situation is similar in philosophy to problems of parametric estimation which restrict approximants to classes of functions that admit representation in terms of a specific set of parameters.
Suppose the finite dimensional basis $\left \{ \phi_k\right \}_{k=1,\ldots, n}$ is known for a particular finite dimensional subspace $H_n \subseteq H$ in which the function lies, and further that the uncertainty is expressed as the condition that there is a unique set of unknown coefficients $\left \{\alpha_i^*\right\}_{i=1,\ldots,n} $ such that $f:=f^*=\sum_{i=1,\ldots,n} \alpha_i^* \phi_i \in H_n$. Consequently, conventional approaches may restrict the adaptive estimation technique to construct an estimate with knowledge that $f$ lies in the set
\begin{align}
\label{eq:e2}
f \in \biggl \{ g \in H_n \subseteq H \biggl |
&g = \sum_{i=1,\ldots,n} \alpha_i \phi_i
\text{ with } \\
\notag &\alpha_i \in [a_i,b_i]
\subset \mathbb{R} \text{ for } i=1,\ldots,n
\biggr \}
\end{align}
\noindent This is an example in which the uncertainty in the estimation problem may be said to be structured. The unknown function is parameterized by the collection of coefficients $\{\alpha_i^*\}_{i=1,\ldots,n}$.
In this case the compact set $\mathcal{C}$ is a subset of $H_n$. As we discuss in Sections~\ref{subsec:Lit}, \ref{sec:RKHS}, and \ref{sec:existence}, the RKHS embedding approach is characterized by the fact that the uncertainty is more general and even unstructured, in contrast to conventional methods.
\subsubsection{Adaptive Estimation in $\mathbb{R}^d \times \mathbb{R}^n$}
\label{subsec:adapt1}
The development of adaptive estimation strategies when the uncertainty takes the form in Equation~\ref{eq:e2} represents, in some sense, an iconic approach in the adaptive estimation and control community.
Entire volumes \cite{sb2012,IaSu,PoFar,NarPar199D} contain numerous variants of strategies that can be applied to solve adaptive estimation problems in which the uncertainty takes the form in Equation~\ref{eq:e2}.
One canonical approach to such an adaptive estimation problem is governed by three coupled equations: the plant dynamics in Equation~\ref{eq:f}, the estimator in Equation~\ref{eq:a2}, and the learning rule in Equation~\ref{eq:a3}.
We organize the basis functions as $\phi:=[\phi_1,\dots,\phi_n]^T$ and the parameters as $\alpha^{*^T}=[\alpha^*_1,\ldots,\alpha^*_n]$,
$\hat{\alpha}^T=[\hat{\alpha}_1,\ldots,\hat{\alpha}_n]$. A common gradient based learning law yields the governing equations that incorporate the plant dynamics, estimator equation, and the learning rule.
\begin{align}
\label{eq:f}
\dot{x}(t) &= Ax(t) + B \alpha^{*^T} \phi(x(t)),\\
\label{eq:a2}
\dot{\hat{x}}(t) &
=A \hat{x}(t) + B \hat{\alpha}^T(t) \phi(x(t)), \\
\label{eq:a3}
\dot{\hat{\alpha}}(t) &= \Gamma^{-1}\phi B^T P(x-\hat{x}),
\end{align}
where $\Gamma\in \mathbb{R}^{n\times n}$ is symmetric and positive definite. The symmetric positive definite matrix $P\in\mathbb{R}^{d\times d}$ is the unique solution of Lyapunov's equation $A^T P + PA = -Q$, for some selected symmetric positive definite $Q \in \mathbb{R}^{d\times d}$.
\noindent Usually the above equations are summarized in terms of the two error equations
\begin{align}
\label{eq:a4}
\dot{\tilde{x}}(t) &= A \tilde{x} + B \phi^{T}(x(t))\tilde{\alpha}(t)\\
\label{eq:a5}
\dot{\tilde{\alpha}}(t) &= -\Gamma^{-1} \phi(x(t)) B^T P\tilde{x}.
\end{align}
with $\tilde{\alpha}:=\alpha^*-\hat{\alpha}$ and $\tilde{x}:=x-\hat{x}$.
Equations~\ref{eq:a4} and \ref{eq:a5} can also be written as
\begin{align}
\begin{Bmatrix}
\dot{\tilde{x}}(t) \\
\dot{\tilde{\alpha}}(t)
\end{Bmatrix}
=
\begin{bmatrix}
A & B \phi^T (x(t))\\
-\Gamma^{-1} \phi(x(t)) B ^T P & 0
\end{bmatrix}
\begin{Bmatrix}
\tilde{x}(t)\\
\tilde{\alpha}(t)
\end{Bmatrix}.
\label{eq:error_conv}
\end{align}
This equation defines an evolution on $\mathbb{R}^d \times \mathbb{R}^n$
and has been studied in great detail in~\cite{naranna,narkud,mornar}.
Standard texts such as~\cite{sb2012,IaSu,PoFar,NarPar199D} outline numerous other variants of the online adaptive estimation problem using projection, least squares methods, and other popular approaches.
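To make the canonical scheme concrete, the following is a minimal simulation sketch (ours, for illustration only) of Equations~\ref{eq:f}--\ref{eq:a3} for $d=1$: the Hurwitz matrix $A$, the influence $B$, the matrices $Q$ and $\Gamma$, the Gaussian RBF regressor, and the ``true'' weights $\alpha^*$ are all assumptions of the sketch, not values from the paper.
\begin{verbatim}
# Hedged simulation sketch of the conventional gradient scheme for d = 1.
import numpy as np
from scipy.integrate import solve_ivp

A, B, Q = -1.0, 1.0, 1.0
P = -Q / (2.0 * A)                    # scalar Lyapunov eq.: A^T P + P A = -Q

centers = np.linspace(-2.0, 2.0, 7)
def phi(x):
    """Regressor vector of Gaussian radial basis functions."""
    return np.exp(-(x - centers)**2)

alpha_star = np.array([0.0, 0.2, -0.5, 1.0, -0.5, 0.2, 0.0])  # unknown weights
Gamma_inv = np.eye(len(centers))      # Gamma = I

def rhs(t, z):
    x, xhat, ahat = z[0], z[1], z[2:]
    p = phi(x)
    dx = A * x + B * (alpha_star @ p)                 # plant, Eq. (f)
    dxhat = A * xhat + B * (ahat @ p)                 # estimator, Eq. (a2)
    dahat = Gamma_inv @ p * (B * P * (x - xhat))      # gradient law, Eq. (a3)
    return np.concatenate(([dx, dxhat], dahat))

z0 = np.concatenate(([1.0, 0.0], np.zeros_like(centers)))
sol = solve_ivp(rhs, (0.0, 60.0), z0, rtol=1e-8, atol=1e-10)
print("final state error:", sol.y[0, -1] - sol.y[1, -1])
\end{verbatim}
As expected from the theory, the state error decays in this sketch; convergence of $\hat{\alpha}$ to $\alpha^*$ additionally requires a persistency of excitation condition.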
\subsection{Overview of Our Results}
\label{subsec:Lit}
\subsubsection{Adaptive Estimation in $\mathbb{R}^d \times H$}
\label{subsec:adapt2}
In this paper, we study the method of RKHS embedding, which interprets the unknown function $f$ as an element of the RKHS $H$, without any {\em a priori} selection of the particular finite dimensional subspace used for estimation of the unknown function. The counterparts to Equations~\ref{eq:f}, \ref{eq:a2}, and \ref{eq:a3} are the plant, estimator, and learning laws
\begin{align}
\dot{x}(t) &= Ax(t) + BE_{x(t)}f,\\
\dot{\hat{x}}(t) &= A\hat{x}(t) + BE_{x(t)}\hat{f}(t), \label{eq:rkhs_plant}\\
\dot{\hat{f}}(t) &= \Gamma^{-1}(BE_{x(t)})^*P(x(t) - \hat{x}(t)),
\end{align}
where as before $x,\hat{x}\in \mathbb{R}^d$, but $f$ and $\hat{f}(t)\in H$; $E_{\xi}: H \to \mathbb{R}$ is the evaluation functional given by $E_{\xi}: f \mapsto f(\xi)$ for all $\xi\in \mathbb{R}^d$ and $f \in H$; and $\Gamma\in \mathcal{L}(H,H)$ is a self-adjoint, positive definite linear operator. The error equation analogous to Equation~\ref{eq:error_conv} is then given by
\begin{align}
\begin{Bmatrix}
\dot{\tilde{x}}(t) \\
\dot{\tilde{f}}(t)
\end{Bmatrix}
=
\begin{bmatrix}
A & B E_{x(t)}\\
-\Gamma^{-1}(B E_{x(t)})^*P & 0
\end{bmatrix}
\begin{Bmatrix}
\tilde{x}(t)\\
\tilde{f}(t)
\end{Bmatrix},
\label{eq:eom_rkhs}
\end{align}
which defines an evolution on $\mathbb{R}^d \times H$, instead of on $\mathbb{R}^d \times \mathbb{R}^n$.
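Although Equation \ref{eq:eom_rkhs} evolves in the infinite dimensional space $\mathbb{R}^d \times H$, a finite dimensional realization is obtained by restricting $\hat{f}$ to $H_j := \text{span}\{k_{x_i}\}$ for a finite set of centers. The following is one plausible Galerkin sketch (ours; it assumes $\Gamma = I$, $d = 1$, a Gaussian kernel, and made-up values of $A$, $B$, $P$, and $f$), in which $E_{x}\hat{f} = \hat{f}(x)$ and the update $E_x^* B^T P\tilde{x}$ is projected onto $H_j$ through the Gram matrix of the centers:
\begin{verbatim}
# One plausible Galerkin sketch of the RKHS embedding equations (ours).
# fhat = sum_i c_i k_{x_i}, so E_x fhat = fhat(x); the H-projection of
# E_x^* B^T P xtilde onto H_j has coordinates K^{-1} kvec(x) (B P xtilde),
# where K is the Gram matrix and kvec(x)_i = k(x_i, x).
import numpy as np
from scipy.integrate import solve_ivp

A, B, P = -1.0, 1.0, 0.5                   # P solves A^T P + P A = -Q, Q = 1
sigma = 0.7
centers = np.linspace(-2.0, 2.0, 9)        # kernel centers x_i

def k(x, y):
    return np.exp(-(x - y)**2 / sigma**2)

K = k(centers[:, None], centers[None, :])  # Gram matrix of the centers
Kinv = np.linalg.inv(K)

def f_true(x):                             # unknown f, only drives the plant
    return np.sin(x)

def rhs(t, z):
    x, xhat, c = z[0], z[1], z[2:]
    kx = k(centers, x)                     # kvec(x)
    dx = A * x + B * f_true(x)             # plant
    dxhat = A * xhat + B * (c @ kx)        # estimator: E_x fhat = fhat(x)
    dc = Kinv @ kx * (B * P * (x - xhat))  # projected learning law
    return np.concatenate(([dx, dxhat], dc))

z0 = np.concatenate(([1.0, 0.0], np.zeros_like(centers)))
sol = solve_ivp(rhs, (0.0, 60.0), z0, rtol=1e-8)
print("final state error:", sol.y[0, -1] - sol.y[1, -1])
\end{verbatim}
In this sketch the grid of centers $\{x_i\}$ plays the role of the scattered measurement locations discussed in Section \ref{subsec:terrain}.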
\subsubsection{Existence, Stability, and Convergence Rates}
We briefly summarize and compare the conclusions that can be reached for the conventional and RKHS embedding approaches. Let $(\hat{x}, \hat{f})$ be estimates of $(x,f)$ that evolve according to the state, estimator, and learning law of RKHS embedding. Define the state and distributed parameter errors as $\tilde{x}:=x-\hat{x}$ and $\tilde{f}:=f-\hat{f}$, respectively. Under the assumptions outlined in Theorems \ref{th:unique}, \ref{th:stability}, and \ref{th:PE}, for each $T>0$ there is a unique mild solution for the error $(\tilde{x},\tilde{f})\in C([0,T];\mathbb{R}^d\times H)$ to the DPS described by Equation \ref{eq:eom_rkhs}. Moreover, the error in the state estimates $\tilde{x}(t)$ converges to zero,
$\lim_{t \rightarrow \infty} \| \tilde{x}(t)\|=0$. If all the evolutions with initial conditions in an open ball containing the origin exist in $C([0,\infty);\mathbb{R}^d\times H)$, the equilibrium at the origin $(\tilde{x},\tilde{f})=(0,0)$ is stable. The results so far are therefore entirely analogous to those of conventional estimation methods, but are cast in the infinite dimensional RKHS $H$. See the standard texts~\cite{sb2012,IaSu,PoFar,NarPar199D} for proofs of existence and convergence for the conventional methods. It must be emphasized again that the conventional results are stated for evolutions in $\mathbb{R}^d\times\mathbb{R}^n$, while the RKHS results hold for evolutions in $\mathbb{R}^d\times H$. Considerably more can be said about the convergence of finite dimensional approximations. For the RKHS embedding approach, the state and finite dimensional approximations $(\hat{x}_j,\hat{f}_j)$ of the infinite dimensional estimates $(\hat{x},\hat{f})$ on a grid of resolution level $j$ are governed by Equations \ref{eq:approx_on_est1} and \ref{eq:approx_on_est2}. The finite dimensional estimates $(\hat{x}_j,\hat{f}_j)$ converge to the infinite dimensional estimates $(\hat{x},\hat{f})$ at a rate that depends on $\|I-\Gamma\Pi_j^*\Gamma_j^{-1} \Pi_j\|$ and $\|I - \Pi_j\|$, where $\Pi_j : H \to H_j$ is the $H$-orthogonal projection.
The analysis is based on the study of a distributed parameter system (DPS) whose state space contains the RKHS $H$, and the discussion focuses on a comparison and contrast of the analysis for the ODE system and the distributed parameter system.
Prior to these discussions, however, we present a brief review of the fundamental properties of reproducing kernel Hilbert spaces in the next section.
\section{Reproducing Kernel Hilbert Space}
\label{sec:RKHS}
Estimation techniques for distributed parameter systems have been previously studied in \cite{bk1989}, and further developed to incorporate adaptive estimation of parameters in certain infinite dimensional systems by \cite{bsdr1997} and the references therein. These works also presented the necessary conditions required to achieve parameter convergence during online estimation, but both approaches rely on delicate semigroup analysis and evolution equations, or Gelfand triples. The approach herein is much simpler and amenable to a wide class of applications; it appears to be a simpler, more practical way to generalize conventional methods. This paper considers estimation problems that are cast in terms of an unknown function $f:\Omega \subseteq \mathbb{R}^d \to \mathbb{R}$, and our approximations assume that this function is an element of a reproducing kernel Hilbert space. One way to define a reproducing kernel Hilbert space relies on demonstrating the boundedness of evaluation functionals, but we briefly summarize a constructive approach that is helpful in applications and in understanding computations such as those in our numerical examples.
In this paper $\mathbb{R}$ denotes the real numbers, $\mathbb{N}$ the positive integers, $\mathbb{N}_0$ the non-negative integers, and $\mathbb{Z}$ the integers. We follow the convention that $a \gtrsim b$ means that there is a constant $c$, independent of $a$ or $b$, such that $b \leq ca$. When $a\gtrsim b $ and $b\gtrsim a$, we write $a \approx b $. Several function spaces are used in this paper. The $p$-integrable Lebesgue spaces are denoted $L^p(\Omega)$ for $1\leq p \leq \infty$, and $C^s (\Omega)$ is the space of continuous functions on $\Omega$ all of whose derivatives of order less than or equal to $s$ are continuous. The space $C_b^s (\Omega)$ is the normed vector subspace of $C^s (\Omega)$ consisting of all $f\in C^s (\Omega)$ whose derivatives of order less than or equal to $s$ are bounded. The space $C^{s,\lambda} (\Omega)\subseteq C_b^s (\Omega) \subseteq C^s (\Omega)$ is the collection of functions whose derivatives $\frac{\partial^{|\alpha|}f}{\partial x^{\alpha}}$ with $|\alpha|\leq s$ are $\lambda$-H\"older continuous,
\begin{align*}
\left|\frac{\partial^{|\alpha|}f}{\partial x^{\alpha}}(x)-\frac{\partial^{|\alpha|}f}{\partial x^{\alpha}}(y)\right| \leq C\|x - y\|^{\lambda}.
\end{align*}
The Sobolev space of functions that have weak derivatives of order less than or equal to $r$ lying in $L^p(\Omega)$ is denoted $H^r_p(\Omega)$.
A reproducing kernel Hilbert space is constructed in terms of a symmetric, continuous, and positive definite function $k:\Omega \times \Omega \to \mathbb{R}$, where positive definiteness requires that for any finite collection of points
$\{x_i\}_{i=1}^n \subseteq \Omega $,
$$\sum_{i,j=1}^{n}k(x_i , x_j ) \alpha_i \alpha_j \gtrsim \|\alpha\|^{2}_{\mathbb{R}^n}
$$
for all $\alpha = \{\alpha_1,\hdots, \alpha_n \}^T \in \mathbb{R}^n$. For each $x\in \Omega$, we denote the function $k_x := k_x (\cdot) = k(x,\cdot)$ and refer to $k_x$ as the kernel function centered at $x$. In many typical examples~\cite{wendland}, $k_x$ can be interpreted literally as a radial basis function centered at $x\in \Omega$. For any kernel functions $k_x$ and $k_y$ centered at $x,y \in \Omega$, we define the inner product $(k_x,k_y):= k(x,y)$.
The RKHS $H$ is then defined as the completion of the finite linear span of the set $\{k_x \mid x \in \Omega\}$.
It is well known that this construction guarantees the boundedness of the evaluation functionals $E_x : H \to \mathbb{R}$. In other words for each $x\in \Omega$ we have a constant $c_x$ such that
$$ |E_x f | = |f(x)| \leq c_x \|f\|_H$$
for all $f\in H$. The reproducing property of the RKHS $H$ plays a crucial role in the analysis here, and it states that,
$$E_xf = f(x) = (k_x , f)_H$$
for $x \in \Omega$ and $f\in H$. We will also require the adjoint $E_x^* :\mathbb{R}\to H $ in this paper, which can be calculated directly by noting that
$$ (E_x f,\alpha )_\mathbb{R} = (f,\alpha k_x)_H = (f,E_x^* \alpha)_H $$
for $\alpha \in \mathbb{R}$, $x\in \Omega$, and $f\in H$. Hence, $E_x^* : \alpha \mapsto \alpha k_x \in H$.
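As a concrete illustration of these operators (our sketch; the centers and coefficients are arbitrary), for a finite sum $f=\sum_i a_i k_{x_i}$ the evaluation functional and its adjoint can be computed directly:
\begin{verbatim}
# Illustration (ours) of E_x and E_x^* for a Gaussian kernel and a
# finite sum f = sum_i a_i k_{x_i} in H.
import numpy as np

sigma = 0.5
def k(x, y):
    return np.exp(-(x - y)**2 / sigma**2)

centers = np.array([0.0, 0.5, 1.0])   # points x_i in Omega
a = np.array([1.0, -2.0, 0.5])        # coefficients of f

def E(x):
    """Evaluation functional: E_x f = f(x) = sum_i a_i k(x_i, x)."""
    return a @ k(centers, x)

def E_star(x, alpha):
    """Adjoint: E_x^* alpha = alpha * k_x, returned as a function on Omega."""
    return lambda y: alpha * k(x, y)

# Reproducing property: f(x) = (k_x, f)_H = sum_i a_i k(x, x_i).
x = 0.3
print(E(x), a @ k(x, centers))        # the two values agree
\end{verbatim}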
Finally, we will be interested in the specific case in which it is possible to show that the RKHS $H$ is a subset of $C(\Omega)$ and, furthermore, that the associated injection $i:H \rightarrow C(\Omega)$ is uniformly bounded.
This uniform embedding is possible, for example, provided that the kernel is bounded by a constant $\tilde{C}^2$,
$
\sup_{x\in \Omega} k(x,x) \leq \tilde{C}^2.
$
This fact follows by first noting that by the reproducing kernel property of the RKHS,
we can write
\begin{equation}
|f(x)|=|E_x f |= |(k_x, f)_H | \leq \|k_x \|_H \|f\|_H.
\end{equation}
From the definition of the inner product on $H$, we have
$
\|k_x \|^2=|(k_x, k_x)_H |=|(k(x,x)| \leq \tilde{C}^2.
$
It follows that $\|if\|_{C(\Omega)}:= \|f\|_{C(\Omega)} \leq {\tilde{C}} \|f\|_H$ and thereby that $\|i\|\leq {\tilde{C}}$. We next give two examples that will be studied in this paper.
\subsection*{Example: The Exponential Kernel}
A popular example of an RKHS, one that will be used in the numerical examples, is constructed from the family of exponentials $\kappa(x,y):=e^{-\| x-y\|^2/\sigma^2}$ where $\sigma>0$.
Suppose that $\tilde{C} = \sqrt{\sup_{x\in\Omega}\kappa(x,x)}<\infty$. Smale and Zhou in \cite{sz2007} argue that
$$
|f(x)|=|E_x(f)|=|(\kappa_x,f)_H|\leq
\|\kappa_x\|_H \|f\|_H
$$
for all $x\in \Omega$ and $f\in H$, and since
$\|\kappa_x\|^2=|\kappa(x,x)|\leq \tilde{C}^2$, it follows that the embedding $i:H \rightarrow L^\infty(\Omega)$ is bounded,
$$
\|f\|_{L^\infty(\Omega)}:=\|i(f)\|_{L^\infty(\Omega)}\leq \tilde{C} \|f\|_H.
$$
For the exponential kernel above, $\tilde{C}=1$.
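A quick numerical check of this bound (ours, with arbitrary centers and coefficients) uses the fact that $\|f\|_H^2=\alpha^T K\alpha$ for $f=\sum_i \alpha_i \kappa_{x_i}$, where $K_{ij}=\kappa(x_i,x_j)$ is the Gram matrix:
\begin{verbatim}
# Numerical check (ours) of |f(x)| <= C * ||f||_H with C = 1 for the
# exponential kernel, using ||f||_H^2 = a^T K a.
import numpy as np

sigma = 1.0
def kappa(x, y):
    return np.exp(-(x - y)**2 / sigma**2)

centers = np.array([-1.0, 0.0, 0.7, 2.0])
a = np.array([0.5, -1.0, 2.0, -0.3])

K = kappa(centers[:, None], centers[None, :])   # Gram matrix K_ij
norm_H = np.sqrt(a @ K @ a)                     # RKHS norm of f

grid = np.linspace(-4.0, 5.0, 2001)
f_vals = np.array([a @ kappa(centers, x) for x in grid])
print(np.abs(f_vals).max(), "<=", norm_H)       # sup norm bounded by ||f||_H
\end{verbatim}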
Let $C^s(\Omega)$ denote the space of functions on $\Omega$ all of whose partial derivatives of order less than or equal to $s$ are continuous. The space $C^s_b(\Omega)$ is endowed with the norm
$$
\|f\|_{C^s_b(\Omega)}:= \max_{|\alpha|\leq s}
\left \|
\frac{\partial^{|\alpha|}f}{\partial x^\alpha}
\right \|_{L^\infty(\Omega)},
$$
with the summation taken over multi-indices $\alpha:=\left \{ \alpha_1, \ldots,\alpha_d \right \}\in \mathbb{N}^d$, $\partial x^{\alpha}:=\partial x_1^{\alpha_1} \cdots \partial x_d^{\alpha_d}$, and $|\alpha|=\sum_{i=1,\ldots,d} \alpha_i$.
Observe that the continuous functions in $C^s(\Omega)$ need not be bounded even if $\Omega$ is a bounded open domain. The space $C^s_b(\Omega)$ is the subspace consisting of functions $f\in C^{s}(\Omega)$ for which all derivatives of order less than or equal to $s$ are bounded.
The space $C^{s,\lambda}(\Omega)$ is the subspace of functions $f$ in $C^{s}(\Omega)$
for which all of the partial derivatives $\frac{\partial^{|\alpha|} f}{\partial x^\alpha}$ with $|\alpha|\le s$ are
$\lambda$-H\"older continuous. The norm of $C^{s,\lambda}(\Omega)$ for $0 < \lambda \leq 1$ is given by
$$
\|f\|_{C^{s,\lambda}(\Omega)} = \|f\|_{C^s(\Omega)}+ \max_{|\alpha| \leq s} \sup_{\substack{x,y\in \Omega \\x\ne y}}\frac{\left| \frac{\partial^{|\alpha|} f}{\partial x^{\alpha}}(x) -\frac{\partial^{|\alpha|}f}{\partial x^{\alpha}}(y) \right|}{|x-y|^\lambda}.
$$
Also, reference \cite{sz2007} notes that if $\kappa(\cdot,\cdot)\in C^{2s,\lambda}_b(\Omega \times \Omega)$ with $0<\lambda<2$ and $\Omega$ is a closed domain, then the inclusion $H\rightarrow C^{s,\lambda/2}_b(\Omega)$ is well defined and continuous. That is, the mapping $i:H \rightarrow C^{s,\lambda/2}_b(\Omega)$ defined via $f\mapsto i(f):=f$ satisfies
$$
\| f\|_{C^{s,\lambda/2}_b(\Omega)}\lesssim \|f\|_H.
$$
In fact reference \cite{sz2007} shows that
$$
\|f \|_{C^s_b(\Omega)} \leq 4^s \|\kappa\|_{{C^{2s}_b}(\Omega\times \Omega)}^{1/2} \|f\|_H.
$$
The important overall conclusion to draw from the summary above is that there are many conditions that guarantee that the embedding $H\hookrightarrow C_b(\Omega)$ is continuous. This condition plays a central role in devising simple conditions for the existence of solutions in the RKHS embedding technique.
\subsection{Multiscale Kernels Induced by $s$-Regular Scaling Functions}
\label{sec:MRA}
The characterization of the norm of the Sobolev space $H^{r}_2:=H^{r}_2(\mathbb{R}^d)$ has appeared in many monographs that discuss multiresolution analysis \cite{Meyer,mallat,devore1998}. It is also possible to define the Sobolev space $H^{r}_2(\mathbb{R}^d)$ as the Hilbert space constructed from a reproducing kernel $\kappa(\cdot,\cdot):\mathbb{R}^d \times \mathbb{R}^d \rightarrow \mathbb{R}$ that is defined in terms of an $s$-regular scaling function $\phi$ of a multiresolution analysis (MRA) \cite{Meyer,devore1998}. Given an $s$-regular scaling function $\phi$ and $\frac{d}{2}<r<s$, we define the kernel
\begin{align*}
\kappa(u,v):&=\sum_{j=0}^\infty 2^{j(d-2r)}\sum_{k\in \mathbb{Z}^d}\phi(2^ju-k)\phi(2^jv-k)\\
&=\sum_{j=0}^\infty 2^{-2rj}\sum_{k\in \mathbb{Z}^d}\phi_{j,k}(u)\phi_{j,k}(v)
.\end{align*}
It should be noted that the requirement $d/2<r$ implies that the coefficient $2^{j(d-2r)}$ above decreases as $j\rightarrow \infty$, which ensures that the summation converges. As discussed in Section \ref{sec:RKHS} and in references \cite{opfer1,opfer2}, the RKHS is constructed as the closure of the finite linear span of the set of functions $\left\{\kappa_u\right\}_{u\in \Omega}$ with $\kappa_u(\cdot):=\kappa(u,\cdot)$. Under the assumption that $\frac{d}{2}<r<s$, the Sobolev space $H^r_2(\mathbb{R}^d)$
can also be related to the Hilbert space $H_\kappa^r(\mathbb{R}^d)$
defined as
\begin{align*}
H_{\kappa}^r(\mathbb{R}^d):=\left\{ f:\mathbb{R}^d\rightarrow\mathbb{R} \mid (f,f)_{\kappa,r}^\frac{1}{2}=\|f\|_{\kappa,r}<\infty\right\}
\end{align*}
with the inner product $(\cdot,\cdot)_{\kappa,r}$ on $H_{\kappa}^r(\mathbb{R}^d)$ defined as
\begin{align*}
(f,f)_{\kappa,r}&:=\|f\|_{\kappa,r}^2:=
\inf \biggl\{ \sum_{j=0}^\infty 2^{j(2r-d)}\|f_j\|_{V_j}^2\biggl|
f_j\in V_j, f=\sum_{j=0}^\infty f_j\biggr\}
\end{align*}
with $\|f\|^2_{V_j}=\sum_{k \in \mathbb{Z}^d} c_{j,k}^2 $ for $f_j(u)=\sum_{k \in \mathbb{Z}^d}c_{j,k}\phi(2^ju-k)$ and $j\in \mathbb{N}_0$. Note that the characterization above of $H_{\kappa}^r(\mathbb{R}^d)$ is expressed only in terms of the scaling functions $\phi_{j,k}$ for $j\in \mathbb{N}_0$ and $k\in \mathbb{Z}^d$. The functions $\phi$ and $\psi$ need not define an orthonormal multiresolution in this characterization, and the bases $\psi_{j,k}$ for the complement spaces $W_j$ are not used. We discuss the use of wavelet bases $\psi_{j,k}$ for the definition of the kernel in a forthcoming paper. References \cite{opfer1,opfer2} show that when $d/2< r<s$, we have the norm equivalence
\begin{align}
H_{\kappa}^r(\mathbb{R}^d)\approx H^{r}_2(\mathbb{R}^d).
\label{eq:norm_equiv}
\end{align}
Finally, from Sobolev's Embedding Theorem \cite{af2003}, whenever $r>d/2$ we have the embedding
$$
H^r_2 \hookrightarrow C_b^{r-d/2} \subset C^{r-d/2}
$$
where $C_b^r$ is the subspace of functions $f$ in $C^r$ all of whose derivatives up through order $r$ are bounded. In fact, by choosing the $s$-regular MRA with $s$ and $r$ large enough, we have the embedding
$H^r_2(\Omega) \hookrightarrow C(\Omega)$ when $\Omega \subseteq \mathbb{R}^d$ \cite{af2003}.
One of the simplest examples that meets the conditions of this section is given by the normalized B-splines of order $r>0$. We denote by $N^r$ the normalized B-spline of order $r$ with integer knots and define its translated dilates by $N^r_{j,k}(x):=2^{jd/2}N^r(2^{j} x - k)$ for $k\in \mathbb{Z}^d$ and $j\in \mathbb{N}_0$. In this case the kernel is written in the form
$$
\kappa(u,v):=\sum_{j=0}^\infty 2^{-2rj}\sum_{k\in \mathbb{Z}^d}N^r_{j,k}(u)N^r_{j,k}(v).
$$
Figure \ref{fig:nbsplines} depicts the translated dilates of the normalized B-splines of order $1$ and $2$ respectively.
\begin{center}
\begin{figure}[h!]
\centering
\begin{tabular}{cc}
\includegraphics[width=.4\textwidth]{nbsplines_N1}
&
\includegraphics[width=.4\textwidth]{nbsplines_N2}\\
{ B-splines $N^1$}
&
{ B-splines $N^2$}
\end{tabular}
\caption{Translated Dilates of Normalized B-Splines}
\label{fig:nbsplines}
\end{figure}
\end{center}
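For readers who wish to experiment with this kernel, the following truncated-sum sketch (ours; $d=1$, the piecewise linear B-spline as scaling function, levels $j\le J$, and a finite range of shifts $k$) approximates $\kappa(u,v)$ numerically:
\begin{verbatim}
# Truncated multiscale B-spline kernel (our sketch, d = 1):
# kappa(u, v) ~= sum_{j<=J} 2^{-2 r j} sum_k N_{j,k}(u) N_{j,k}(v),
# with N_{j,k}(x) = 2^(j/2) N(2^j x - k) and r > d/2 = 1/2.
import numpy as np

def N(x):
    """Piecewise-linear (hat) B-spline supported on [0, 2]."""
    return np.maximum(0.0, 1.0 - np.abs(x - 1.0))

def kappa(u, v, r=1.0, J=6, kmax=64):
    total = 0.0
    for j in range(J + 1):
        scale = 2.0**j
        ks = np.arange(-kmax, kmax + 1)
        Nu = scale**0.5 * N(scale * u - ks)   # translated dilates at level j
        Nv = scale**0.5 * N(scale * v - ks)
        total += 2.0**(-2.0 * r * j) * np.dot(Nu, Nv)
    return total

print(kappa(0.3, 0.3), kappa(0.3, 0.7))
\end{verbatim}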
\section{Existence, Uniqueness, and Stability}
\label{sec:existence}
In the adaptive estimation problem that is cast in terms of a RKHS $H$, we seek a solution $X = (\tilde{x},\tilde{f}) \in \mathbb{R}^d \times H \equiv \mathbb{X}$ that satisfies Equation \ref{eq:eom_rkhs}.
In general $\mathbb{X}$ is an infinite dimensional state space for this estimation problem, which can in principle substantially complicate the analysis in comparison to conventional ODE methods.
We first establish that the adaptive estimation problem in Equation \ref{eq:eom_rkhs} is well-posed.
The result derived below is not the most general possible; rather, it has been emphasized because its conditions are simple and easily verifiable in many applications.
\begin{theorem}
\label{th:unique}
Suppose that $x \in C([0,T];\mathbb{R}^d)$ and that the embedding $i:H \hookrightarrow C(\Omega)$ is uniform in the sense that there is a constant $C>0$ such that for any $f \in H$,
\begin{equation}
\label{6}
\|f\|_{C(\Omega)}\equiv \|if\|_{C(\Omega)} \leq C\|f\|_H.
\end{equation}
For any $T>0$ there is a unique mild solution $(\tilde{x},\tilde{f}) \in C([0,T];\mathbb{X})$ to Equation \ref{eq:eom_rkhs}, and the map $X_0 \equiv (\tilde{x}_0,\tilde{f}_0) \mapsto (\tilde{x},\tilde{f}) $ is Lipschitz continuous from $\mathbb{X}$ to $C([0,T];\mathbb{X})$.
\end{theorem}
\begin{proof}
We can split the governing Equation \ref{eq:eom_rkhs} into the form
\begin{align}
\begin{split}
\begin{Bmatrix}
\dot{\tilde{x}}(t)\\
\dot{{\tilde{f}}}(t)
\end{Bmatrix}
=
&\begin{bmatrix}
A & 0\\
0 & A_0
\end{bmatrix}
\begin{Bmatrix}
\tilde{x}(t)\\
\tilde{f}(t)
\end{Bmatrix}+
\begin{bmatrix}
0 & B E_{x(t)}\\
-\Gamma^{-1} (B E_{x(t)})^* P & -A_0
\end{bmatrix}
\begin{Bmatrix}
\tilde{x}(t)\\
\tilde{f}(t)
\end{Bmatrix},
\end{split}
\end{align}
and write it more concisely as
\begin{equation}
\dot{\tilde{X}}(t) = \mathbb{A}\tilde{X}(t) + \mathbb{F}(t,\tilde{X}(t))
\end{equation}
where the operator $A_0 \in \mathcal{L}(H,H)$ is arbitrary. It is immediately clear that $\mathbb{A}$ is the infinitesimal generator of a $C_0$ semigroup on $\mathbb{X}\equiv \mathbb{R}^d\times H$ since $\mathbb{A}$ is bounded on $\mathbb{X}$. In addition, we see the following:
\begin{enumerate}
\item The function $\mathbb{F}: \mathbb{R}^+ \times \mathbb{X} \to \mathbb{X}$ is uniformly globally Lipschitz continuous: there is a constant $L>0$ such that
$$
\|\mathbb{F}(t,X)-\mathbb{F}(t,Y)\| \leq L\|X-Y\|
$$
for all $ X,Y \in \mathbb{X}$ and $t\in [0,T]$.
\item The map $t \mapsto \mathbb{F}(t,X)$ is continuous on $[0,T]$ for each fixed $X\in \mathbb{X}$.
\end{enumerate}
By Theorem 1.2, p.~184, of reference \cite{pazy}, there is a unique mild solution
$$\tilde{X} = \{\tilde{x},\tilde{f}\}^T \in C([0,T];\mathbb{X})\equiv C([0,T];\mathbb{R}^d\times H). $$
In fact, the map $\tilde{X}_0 \mapsto \tilde{X}$ is Lipschitz continuous from $\mathbb{X}$ to $C([0,T];\mathbb{X})$.
\end{proof}
The proof of stability of the equilibrium at the origin of the RKHS
Equation \ref{eq:eom_rkhs} closely resembles the Lyapunov analysis of Equation \ref{eq:error_conv}, although an extension to the infinite dimensional state space $\mathbb{X}$ is required.
It is useful to carry out this analysis in some detail to see how the adjoint $E_x^* :\mathbb{R}\to H $ of the evaluation functional $E_x : H \to \mathbb{R}$ plays a central and indispensable role in the study of the stability of evolution equations on the RKHS.
\begin{theorem}
\label{th:stability}
Suppose that the RKHS Equations \ref{eq:eom_rkhs} have a unique solution in $C([0,\infty);\mathbb{X})$ for every initial condition $X_0$ in some open ball $B_r (0) \subseteq \mathbb{X}$. Then the equilibrium at the origin is Lyapunov stable. Moreover, the state error $\tilde{x}(t) \rightarrow 0$ as $t \rightarrow \infty$.
\end{theorem}
\begin{proof}
Define the Lyapunov function $V:\mathbb{X} \to \mathbb{R}$ as
$$ V \begin{Bmatrix}
\tilde{x}\\
\tilde{f}
\end{Bmatrix}
= \frac{1}{2}\tilde{x}^T P\tilde{x} + \frac{1}{2}(\Gamma \tilde{f},\tilde{f})_H.
$$
This function is norm continuous and positive definite on any neighborhood of the origin since $ V(X) \gtrsim \|X\|^2_{\mathbb{X}}$ for all $X \in \mathbb{X}$. For any $X$, and in particular over the open set $B_r(0)$, the derivative of the Lyapunov function $V$ along trajectories of the system is given as
\begin{align*}
\dot{V} &= \frac{1}{2}(\dot{\tilde{x}}^T P\tilde{x}+\tilde{x}^TP\dot{\tilde{x}})+(\Gamma \tilde{f},\dot{\tilde{f}})_H\\
&= -\frac{1}{2}\tilde{x}^T Q\tilde{x}+(\tilde{f},E_x^*B^*P\tilde{x}+\Gamma\dot{\tilde{f}})_{H}= -\frac{1}{2}\tilde{x}^T Q\tilde{x},
\end{align*}
since $(\tilde{f},E_x^*B^*P\tilde{x}+\Gamma\dot{\tilde{f}})_{H}=0$.
Let $\epsilon$ be some constant such that $0 < \epsilon < r$. Define $\gamma (\epsilon)$ and $\Omega_\gamma$ according to
$$\gamma(\epsilon) = \inf_{\|X\|_\mathbb{X}=\epsilon} V(X),$$
$$\Omega_\gamma = \{X \in \mathbb{X}|V(X)<\gamma \}.$$
We can picture these quantities as shown in Fig. \ref{fig:lyapfun} and Fig. \ref{fig:kernels}.
\begin{figure}
\centering
\includegraphics[scale=0.35]{fig1Lyap_2}
\caption{Lyapunov function, $V(x)$}
\label{fig:lyapfun}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.55]{fig2Stability_2}
\caption{Stability of the equilibrium}
\label{fig:kernels}
\end{figure}
But $\Omega_\gamma=\{X\in \mathbb{X}|V(X)<\gamma\}$ is an open set since it is the inverse image of the open set $(-\infty,\gamma) \subset \mathbb{R}$ under the continuous mapping $V:\mathbb{X} \to \mathbb{R}$. The set $\Omega_\gamma$ therefore contains an open neighborhood of each of its elements. Let $\delta>0$ be the radius of such an open ball containing the origin with $B_\delta(0) \subset \Omega_\gamma$.
Since $\overline{\Omega}_\gamma:=\{X\in \mathbb{X}|V(X)\leq \gamma\}$ is a sublevel set of $V$ and $V$ is non-increasing along trajectories, it is a positively invariant set. Given any initial condition $X_0 \in B_\delta(0) \subseteq \Omega_\gamma$, we know that the trajectory $X(t)$ starting at $X_0$ satisfies
$X(t) \in \overline{\Omega}_\gamma \subseteq \overline{B_\epsilon(0)} \subseteq B_r(0)$ for all $t\in [0,\infty)$.
The equilibrium at the origin is therefore stable.
The convergence of the state estimation error $\tilde{x}(t) \rightarrow 0$ as $t\rightarrow \infty$ can be based on Barbalat's lemma by modifying the conventional arguments for ODE systems. Since $\frac{d}{dt}(V(X(t))) = - \frac{1}{2} \tilde{x}^T(t) Q \tilde{x}\leq 0$, $V(X(t))$ is non-increasing and bounded below by zero. There is a constant $V_\infty:=\lim_{t \rightarrow \infty}V(X(t))$, and we have
$$
V(X_0)-V_\infty = \frac{1}{2}\int_0^\infty \tilde{x}^T(\tau)Q\tilde{x}(\tau)\, d\tau \gtrsim \|\tilde{x}\|^2_{L^2((0,\infty);\mathbb{R}^d)}.
$$
Since $V(X(t)) \leq V(X_0)$, we likewise have $\|\tilde{x}\|^2_{L^\infty((0,\infty);\mathbb{R}^d)}\lesssim V(X_0)$ and $\|\tilde{f}\|^2_{L^\infty((0,\infty);H)}\lesssim V(X_0)$. The equation of motion enables a uniform bound on $\dot{\tilde{x}}$ since
\begin{align}
&\|\dot{\tilde{x}}(t)\|_{\mathbb{R}^d}
\leq \|A\| \| \tilde{x}(t)\|_{\mathbb{R}^d}
+ \|B\| \|E_{x(t)} \tilde{f}(t)\|_{\mathbb{R}^d}, \notag \\
&\leq \|A\| \| \tilde{x}(t)\|_{\mathbb{R}^d}
+ \tilde{C} \|B\| \| \tilde{f}(t) \|_{H},\\
& \leq \|A\| \|\tilde{x}\|_{L^\infty((0,\infty);\mathbb{R}^d)}
+ \tilde{C} \|B\| \| \tilde{f} \|_{L^\infty((0,\infty),H)}. \notag
\end{align}
Since $\tilde{x}\in L^\infty((0,\infty);\mathbb{R}^d) \cap L^2((0,\infty);\mathbb{R}^d)$ and $\dot{\tilde{x}} \in L^\infty((0,\infty);\mathbb{R}^d)$, we conclude by generalizations of Barbalat's lemma \cite{Farkas2016Variations} that $\tilde{x}(t) \rightarrow 0$ as $t \to \infty$.
\end{proof}
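For the reader's convenience, the variant of Barbalat's lemma used above can be stated as follows: if $u \in L^2((0,\infty);\mathbb{R}^d) \cap L^{\infty}((0,\infty);\mathbb{R}^d)$ and $\dot{u} \in L^{\infty}((0,\infty);\mathbb{R}^d)$, then $t \mapsto \|u(t)\|^2$ is integrable and uniformly continuous, and therefore $u(t) \rightarrow 0$ as $t \rightarrow \infty$.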
It is evident that Theorem \ref{th:stability} yields results about stability and convergence over the RKHS of the state estimate error to zero that are analogous to typical results for conventional ODE systems. As expected, conclusions for the convergence of the function estimates $\hat{f}$ to $f$ are more difficult to generate, and they rely on {\em persistency of excitation } conditions that are suitably extended to the RKHS framework.
\begin{mydef}
We say that the plant in the RKHS Equation~\ref{eq:rkhs_plant} is {\em strongly persistently exciting} if there exist constants $\Delta,\gamma>0$, and $T$ such that for every $f\in H$ with $\|f\|_H=1$ and all $t>T$,
$$
\int_{t}^{t+\Delta}
\left(E^*_{x(\tau)}E_{x(\tau)}f,f\right)_H d\tau \gtrsim \gamma.
$$
\end{mydef}
As in the consideration of ODE systems, persistency of excitation is sufficient to guarantee convergence of the function parameter estimates to the true function.
\begin{theorem}
\label{th:PE}
Suppose that the plant in Equation \ref{eq:rkhs_plant} is strongly persistently exciting and that either (i) the function $\kappa(x(\cdot),x(\cdot)) \in L^1((0,\infty);\mathbb{R})$, or (ii) the matrix $-A$ is coercive in the sense that $(-Av,v)\geq c\|v\|^2$ for all $v\in\mathbb{R}^d$ and $\Gamma =P=I_d$. Then the parameter function error $\tilde{f}$ converges strongly to zero,
$$
\lim_{t\rightarrow \infty} \| f-\hat{f}(t) \|_H = 0.
$$
\end{theorem}
\begin{proof}
We begin by assuming that (i) holds.
In the proof of Theorem \ref{th:stability} it is shown that $V$ is bounded below and non-increasing, and therefore approaches a limit
$$
\lim_{t\rightarrow \infty} V(t)=V_\infty< \infty.
$$
Since $\tilde{x}(t) \rightarrow 0$ as $t\rightarrow \infty$, we can conclude that
$$
\lim_{t\rightarrow \infty} \| \tilde{f}(t) \|^2_H \lesssim V_\infty.
$$
Suppose that $V_\infty \not = 0.$ Then there exists a positive, increasing sequence of times $\left\{ t_k\right \}_{k\in \mathbb{N}}$ with $\lim_{k\rightarrow \infty} t_k = \infty$ and some constant $\delta>0$
such that
$$
\| \tilde{f}(t_k)\|^2_H \ge \delta
$$
for all $k\in\mathbb{N}$.
Since the RKHS is persistently exciting, we can write
\begin{align*}
\int^{t_k+\Delta}_{t_k} \left(E^{*}_{x(\tau)}E_{x(\tau)}\tilde{f}(t_k),\tilde{f}(t_k)\right)_Hd\tau \gtrsim \gamma \| \tilde{f}{(t_k)}\|_{H}^{2} \geq \gamma \delta
\end{align*}
for each $k\in \mathbb{N}$. By the reproducing property of the RKHS, we can then see that
\begin{align*}
\gamma \delta \leq \gamma \| \tilde{f}(t_k) \|_H^2 &\lesssim \int_{t_k}^{t_k + \Delta} \left ( \kappa_{x(\tau)}, \tilde{f}(t_k) \right )_H^2 d\tau\\
&\leq \|\tilde{f}(t_k)\|_H^2 \int_{t_k}^{t_k + \Delta} \|\kappa_{x(\tau)} \|_H^2 d\tau \\
&= \| \tilde{f}(t_k) \|_H^2
\int_{t_k}^{t_k+\Delta} \left (\kappa_{x(\tau)},\kappa_{x(\tau)}\right )_H d\tau \\
& = \| \tilde{f}(t_k) \|_H^2
\int_{t_k}^{t_k+\Delta} \kappa(x(\tau),x(\tau)) d\tau.
\end{align*}
Since $\kappa(x(\cdot),x(\cdot)) \in L^1((0,\infty);\mathbb{R})$ by assumption, when we take the limit as $k\rightarrow \infty$, we obtain the contradiction $0<\gamma\delta \leq 0$. We conclude therefore that $V_\infty=0$ and $\lim_{t\rightarrow \infty} \|\tilde{f}(t)\|_H = 0$.
We outline the proof when (ii) holds, which is based on slight modifications of arguments that appear in \cite{d1993,bsdr1997,dr1994,dr1994pe,bdrr1998,kr1994} that treat a different class of infinite dimensional nonlinear systems whose state space is cast in terms of a Gelfand triple.
Perhaps the simplest analysis follows from \cite{bsdr1997} for this case. Our hypothesis that $\Gamma=P=I_d$ reduces Equations \ref{eq:eom_rkhs} to the form of Equations 2.20 in \cite{bsdr1997}. The assumption that $-A$ is coercive in our theorem implies that the coercivity assumption (A4) in \cite{bsdr1997} holds. If we define $\mathbb{X}=\mathbb{Y}:=\mathbb{R}^d \times H$, then it is clear that the embeddings $\mathbb{Y} \rightarrow \mathbb{X} \rightarrow \mathbb{Y}$ are continuous and dense, so that they define a Gelfand triple. Because of the trivial form of the Gelfand triple in this case, it is immediate that the G\aa rding inequality in Equation 2.17 in \cite{bsdr1997} holds.
We identify $BE_{x(t)}$ as the control influence operator $\mathcal{B}^*(\overline{u}(t))$ in \cite{bsdr1997}.
Under these conditions, Theorem ~\ref{th:PE} follows from Theorem 3.4 in \cite{bsdr1997} as a special case.
\end{proof}
\section{Finite Dimensional Approximations}
\label{sec:finite}
\subsection{Convergence of Finite Dimensional Approximations}
The governing system in Equations \ref{eq:eom_rkhs} constitutes a distributed parameter system since the functions $\tilde{f}(t)$ evolve in the infinite dimensional space $H$. In practice these equations must be approximated by some finite dimensional system. Let $\{H_j\}_{j\in\mathbb{N}_0} \subseteq H$ be a nested sequence of subspaces. Let $\Pi_j$ be a collection of approximation operators $
\Pi_j:{H}\rightarrow {H}_j$ such that $\lim_{j\to \infty}\Pi_j f = f$ for all $f\in H$ and $\sup_{j\in \mathbb{N}_0} \|\Pi_j\| \leq C $ for a constant $C > 0$. Perhaps the most evident example of such a collection takes $\Pi_j$ to be the $H$-orthogonal projection onto a dense collection of subspaces $H_j$. It is also common to choose $\Pi_j$ as a uniformly bounded family of quasi-interpolants \cite{devore1998}. We next construct finite dimensional approximations $\hat{x}_j$ and $\hat{f}_j$ of the online estimation equations as
\begin{align}
\dot{\hat{x}}_j(t) & = A\hat{x}_j(t) +
B E_{x(t)} \Pi^*_j \hat{f}_j(t), \label{eq:approx_on_est1} \\
\dot{\hat{f}}_j(t) & = \Gamma_j^{-1}\left ( B E_{x(t)} \Pi^*_j \right)^* P\tilde{x}_j(t)
\label{eq:approx_on_est2}
\end{align}
with $\tilde{x}_j:=x-\hat{x}_j$.
It is important to note that in the above equations $
\Pi_j:{H}\rightarrow {H}_j$ and $\Pi_j^*:{H}_j\rightarrow {H}$.
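When $H_j = \mbox{span}\{k_{s_1},\ldots,k_{s_n}\}$ for kernel sections $k_{s_i}$, the $H$-orthogonal projection admits a simple coordinate form: the reproducing property gives $(\Pi_j f, k_{s_i})_H = f(s_i)$, so the coefficients of $\Pi_j f$ solve a linear system with the kernel Grammian. The following Python sketch (our own illustration; the Gaussian kernel and all names are assumptions) implements this choice of $\Pi_j$.
\begin{verbatim}
import numpy as np

def gauss(x, y, sigma=0.5):
    # Gaussian reproducing kernel k(x, y) = exp(-(x - y)^2/(2 sigma^2)).
    return np.exp(-np.subtract.outer(x, y)**2/(2.0*sigma**2))

def rkhs_projection(f, centers, sigma=0.5):
    # H-orthogonal projection onto span{k_{s_1},...,k_{s_n}}: the
    # reproducing property forces K c = (f(s_i))_i for Pi f = sum c_i k_{s_i}.
    K = gauss(centers, centers, sigma)      # Grammian K_il = k(s_i, s_l)
    c = np.linalg.solve(K, f(centers))
    return lambda x: gauss(x, centers, sigma) @ c

centers = np.linspace(0.0, 2.0*np.pi, 10)
Pf = rkhs_projection(np.sin, centers)
print(Pf(np.array([1.0, 2.0])), np.sin([1.0, 2.0]))  # projection vs target
\end{verbatim}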
\begin{theorem}
Suppose that $x \in C([0,T],\mathbb{R}^d)$ and that the embedding $i:H \to C(\Omega)$ is uniform in the sense that
\begin{equation}
\label{6b}
\|f\|_{C(\Omega)}\equiv \|if\|_{C(\Omega)} \leq C\|f\|_H.
\end{equation}
Then for any $T>0$,
\begin{align*}
\| \hat{x} - \hat{x}_j\|_{C([0,T];\mathbb{R}^d)} &\rightarrow 0,\\
\|\hat{f} - \hat{f}_j\|_{C([0,T];H)} &\rightarrow 0,
\end{align*}
as $j\rightarrow \infty$.
\end{theorem}
\begin{proof}
Define the operators $\Lambda(t):= B E_{x(t)}:H\rightarrow \mathbb{R}^d$ and for each $t\geq 0$, introduce the measures of state estimation error $\overline{x}_j:=\hat{x}-\hat{x}_j$, and define the function estimation error $\overline{f}_j
=\hat{f}-\hat{f}_j$.
Note that $\tilde{x}_j:=x-\hat{x}_j=x-\hat{x} + \hat{x}-\hat{x}_j=\tilde{x}+ \overline{x}_j$.
The time derivative of the error induced by approximation of the estimates can be expanded as follows:
\begin{align*}
&\frac{1}{2} \frac{d}{dt}\left (
( {\overline{x}}_j, {\overline{x}}_j )_{\mathbb{R}^d} + ({\overline{f}}_j,{\overline{f}}_j )_H
\right ) =
( \dot{\overline{x}}_j, {\overline{x}}_j )_{\mathbb{R}^d} + (\dot{\overline{f}}_j,{\overline{f}}_j )_H
\\
&= (A\overline{x}_j + \Lambda \overline{f}_j , \overline{x}_j)_{\mathbb{R}^d} +
\left (
\left (\Gamma^{-1}-\Pi_j^*\Gamma_j^{-1}\Pi_j \right )
\Lambda^*P \tilde{x}, \overline{f}_j
\right )_H
-\left (\Pi_j^* \Gamma_j^{-1} \Pi_j \Lambda^* P \overline{x}_j,\overline{f}_j \right)_H
\\
&\leq C_A \| \overline{x}_j \|^2_{\mathbb{R}^d} + \|\Lambda\| \| \overline{f}_j \|_{H} \| \overline{x}_j \|_{\mathbb{R}^d} \\
&\quad \quad
+ \| \Gamma^{-1}
(I-\Gamma \Pi_j^*\Gamma_j^{-1}\Pi_j) \Lambda^* P \tilde{x}\|_{H} \|\overline{f}_j \|_H
+\left \|
\Pi_j^* \Gamma_j^{-1} \Pi_j \Lambda^* P
\right \| \|\overline{x}_j\|
\|\overline{f}_j \|
\\
& \leq
C_A \| \overline{x}_j \|_{\mathbb{R}^d}^2 + \frac{1}{2}
\|\Lambda\| \left (
\| \overline{f}_j \|_{H}^2
+ \| \overline{x}_j \|_{\mathbb{R}^d}^2
\right )
+ \frac{1}{2}\|\Pi^*_j \Gamma_j^{-1} \Pi_j\|
\| \Lambda^*\| \|P\| \left ( \|\overline{x}_j\|^2_{\mathbb{R}^d} + \| \overline{f}_j\|^2_H \right )
\\
&\quad \quad
+ \frac{1}{2} \left (
\|\Gamma^{-1}
(I-\Gamma \Pi_j^*\Gamma_j^{-1}\Pi_j) \Lambda^* P \tilde{x}\|^2_{H}
+
\|\overline{f}_j \|^2_H
\right) \\
& \leq
\frac{1}{2} \|\Gamma^{-1} \|^2 \| \Lambda^*\|^2 \|P\|^2
\| I-\Gamma \Pi_j^*\Gamma_j^{-1}\Pi_j \|^2\|\tilde{x}\|^2_{\mathbb{R}^d}
\\
&\quad \quad
+\left (C_A + \frac{1}{2} \|\Lambda\|
+ \frac{1}{2} C_B \|\Lambda^*\| \|P\|
\right ) \|\overline{x}_j\|^{2}_{\mathbb{R}^d}
+
\frac{1}{2} \left ( \|\Lambda\| + 1
+ C_B \|\Lambda^*\| \|P\|\right) \|\overline{f}_j\|^{2}_H .
\end{align*}
We know that $\|\Lambda(t)\|=\|\Lambda^*(t)\|$ is bounded uniformly in time from the assumption that $H$ is uniformly embedded in $C(\Omega)$.
We next consider the operator error that manifests in the term $(\Gamma^{-1} - \Pi^*_j \Gamma_j^{-1} \Pi_j)$. For any $g\in H$ we have
\begin{align*}
\| (\Gamma^{-1} - \Pi^*_j \Gamma_j^{-1} \Pi_j)g \|_H & =
\| \Gamma^{-1}( I - \Gamma \Pi^*_j \Gamma_j^{-1} \Pi_j)g \|_H \\
&\leq
\| \Gamma^{-1} \|
\|\left (\Pi_j + (I-\Pi_j)\right )( I - \Gamma \Pi^*_j \Gamma_j^{-1} \Pi_j)g \|_H \\
&\lesssim \| I-\Pi_j \| \|g\|_H.
\end{align*}
This final inequality follows since $\Pi_j(I - \Gamma \Pi^*_j \Gamma_j^{-1} \Pi_j)=0$ and
$\Gamma \Pi^*_j \Gamma_j^{-1} \Pi_j\equiv\Gamma \Pi^*_j \left (\Pi_j \Gamma \Pi_j^* \right)^{-1} \Pi_j $ is uniformly bounded.
We can then write
\begin{align*}
\frac{d}{dt}\left (
\|\overline{x}_j\|^2_{\mathbb{R}^d} + \|\overline{f}_j\|^2_H
\right )
&\leq C_1 \| I-\Gamma \Pi_j^*\Gamma_j^{-1}\Pi_j \|^2 \\
&\quad \quad+ C_2 \left (\|\overline{x}_j\|^2_{\mathbb{R}^d} + \|\overline{f}_j\|^2_H \right )
\end{align*}
where $C_1,C_2>0$. We integrate this inequality over the interval $[0,t]$, $t \leq T$, and obtain
\begin{align*}
\|\overline{x}_j(t)\|^2_{\mathbb{R}^d}
+ \|\overline{f}_j(t)\|^2_H
&\leq
\|\overline{x}_j(0)\|^2_{\mathbb{R}^d}
+ \|\overline{f}_j(0)\|^2_H \\
&
+ C_1T \| I-\Gamma\Pi_j^*\Gamma_j^{-1}\Pi_j \|^2 \\
&+ C_2\int_0^T \left (
\|\overline{x}_j(\tau)\|^2_{\mathbb{R}^d}
+ \|\overline{f}_j(\tau)\|^2_H
\right ) d\tau
\end{align*}
We can always choose $\hat{x}(0) = \hat{x}_j(0)$, so that $\overline{x}_j(0) = 0$. If we choose $\hat{f}_j(0):=\Pi_j\hat{f}(0)$ then,
\begin{align*}
\|\overline{f}_j(0)\| &= \|\hat{f}(0)-\Pi_j\hat{f}(0)\|_H\\
&\leq \|I-\Pi_j\|_H \|\hat{f}(0)\|_H.
\end{align*}
The non-decaying term can be bounded as $C_1T \| I-\Gamma\Pi_j^* \Gamma_j^{-1} \Pi_j \|^2 \leq C_3 \|I-\Pi_j\|^2_H$. Combining these bounds, we obtain
\begin{align}
\|\overline{x}_j(t)\|^2_{\mathbb{R}^d}
+ \|\overline{f}_j(t)\|^2_H
&\leq C_4\|I-\Pi_j\|^2_H+ C_2\int_0^T \left (
\|\overline{x}_j(\tau)\|^2_{\mathbb{R}^d}
+ \|\overline{f}_j(\tau)\|^2_H
\right ) d\tau
\label{eq:gron_last}
\end{align}
Let $\alpha_j:=C_4\|I-\Pi_j\|^2_H$. Applying Gronwall's inequality to Equation \ref{eq:gron_last}, we get
\begin{align}
\|\overline{x}_j(t)\|^2_{\mathbb{R}^d}
+ \|\overline{f}_j(t)\|^2_H
&\leq \alpha_j e^{C_2 T}
\end{align}
As $j\to \infty$ we have $\alpha_j \to 0$, which implies $\overline{x}_j(t)\to 0$ and $\overline{f}_j(t)\to 0$ uniformly on $[0,T]$.
Therefore the finite dimensional approximations converge to the infinite dimensional solution in $\mathbb{R}^d \times H$.
\end{proof}
\section{Numerical Simulations}
\label{sec:numerical}
\begin{figure}
\centering
\includegraphics[scale=0.3]{Figure1parta}
\hspace{1cm}
\includegraphics[scale=0.3]{Figure1partb}
\captionsetup{justification=justified,margin=1cm}
\caption{Experimental setup and definition of basis functions}
\label{fig:Model}
\end{figure}
A schematic representation of a quarter car model consisting of a chassis, suspension and road measuring device is shown in Fig.~\ref{fig:Model}. In this simple model the displacements of the car suspension and chassis are $x_1$ and $x_2$ respectively. The arc length $s$ measures the distance along the track that the vehicle follows. The equation of motion for the two DOF model has the form
\begin{equation}
M\ddot{x}(t)+C\dot{x}(t)+Kx(t)=Bf(s(t))
\end{equation}
with the mass matrix $M \in \mathbb{R}^{2\times2}$, the stiffness matrix $K \in \mathbb{R}^{2\times2}$, the damping matrix $C \in \mathbb{R}^{2\times2}$, and the control influence vector $B \in \mathbb{R}^{2\times 1}$ in this example. The road profile is denoted by the unknown function $f:\mathbb{R} \to \mathbb{R}$. For simulation purposes, the car is assumed to traverse a circular path of radius $R$, so that we restrict attention to periodic road profiles $f : [0,R]\to \mathbb{R}$. To illustrate the methodology, we first assume that the unknown function $f$ is restricted to the class of uncertainty mentioned in Equation~\ref{eq:e2} and therefore can be approximated as
\begin{equation}
f(\cdot)=\sum_{i=1}^n{\alpha_i^*k_{x_i}(\cdot)}
\end{equation}
where $n$ is the number of basis functions, $\alpha_i^*$ are the true unknown coefficients to be estimated, and $k_{x_i}(\cdot)$ are basis functions over the circular domain.
Hence the state space equation can be written in the form
\begin{equation}
\dot{x}(t)=Ax(t)+B\sum_{i=1}^n{\alpha_i^*k_{x_i}(s(t))}.
\label{eq:num_sim}
\end{equation}
where the state vector is $x = [\dot{x}_1,x_1,\dot{x}_2,x_2]^T$, the system matrix $A\in \mathbb{R}^{4 \times 4}$, and the control influence matrix $B \in \mathbb{R}^{4 \times 1}$.
For the quarter car model shown in Fig. \ref{fig:Model} we derive the matrices,
$$
A=\begin{bmatrix}
\frac{-c_2}{m_1} &\frac{-(k_1+k_2)}{m_1} &\frac{c_2}{m_1} &\frac{k_2}{m_1}\\
1 &0 &0 &0\\
\frac{c_2}{m_2} &\frac{k_2}{m_2} &\frac{-c_2}{m_2} &\frac{-k_2}{m_2}\\
0 &0 &1 &0
\end{bmatrix}
\quad \text{and} \quad
B=\begin{bmatrix}
\frac{k_1}{m_1}\\
0\\
0\\
0
\end{bmatrix}.
$$
Note that if we augment the state to be $\{x_1,x_2,x_3,x_4,s\}$ and append an ODE that specifies $\dot{s}(t)$ for $t\in \mathbb{R}^+$, Equations~\ref{eq:num_sim} can be written in the form of Equations~\ref{eq:simple_plant}. Then the finite dimensional set of coupled ODEs for the adaptive estimation problem can be written in terms of the plant dynamics, estimator equation, and the learning law, which are of the form shown in Equations \ref{eq:f}, \ref{eq:a2}, and \ref{eq:a3} respectively.
\subsection{Synthetic Road Profile}
The constants in the equation are initialized as follows: $m_1=0.5$ kg, $m_2=0.5$ kg, $k_1=50000$ N/m, $k_2=30000$ N/m and $c_2=200$ Ns/m, $\Gamma=0.001$.
The radius of the path traversed $R=4$ m, the road profile to be estimated is assumed to have the shape $f(\cdot)= \kappa\sin(2\pi \nu (\cdot))$ where $\nu =0.04$ Hz and $\kappa=2$.
Thus our adaptive estimation problem is formulated for a synthetic road profile in the RKHS $H = \overline{\mbox{span}\{k_x(\cdot)\,|\,x\in \Omega\}}$ with $k_x(\cdot)=e^{-\|x-\cdot\|^2/(2\sigma^2)}$.
The radial basis functions, each with standard deviation $\sigma=50$, span over a range of $25^\circ$ with their centers $s_i$ evenly separated along the arc length. It is important to note that we could have chosen a scattered basis located at any collection of centers $\{s_i\}_{i=1}^{n}\subseteq \Omega$; the uniformly spaced centers are selected to illustrate the convergence rates.
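The simulation can be assembled directly from these ingredients. The following Python sketch is a minimal implementation of the finite dimensional estimator under several assumptions of our own: a constant vehicle speed \texttt{v}, an illustrative kernel width, $Q=I$ in the Lyapunov equation $A^TP+PA=-Q$ (with $A$ assumed Hurwitz), and a coordinatewise form of the learning law in Equation \ref{eq:approx_on_est2}; it is a sketch, not a reproduction of the reported experiments.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import solve_continuous_lyapunov

# constants from the text; v and sig are our own illustrative choices
m1, m2, k1, k2, c2, Gam = 0.5, 0.5, 5.0e4, 3.0e4, 200.0, 1e-3
R, n, sig, v = 4.0, 20, 0.5, 1.0
nu, kap = 0.04, 2.0

A = np.array([[-c2/m1, -(k1 + k2)/m1,  c2/m1,  k2/m1],
              [1.0, 0.0, 0.0, 0.0],
              [ c2/m2,  k2/m2, -c2/m2, -k2/m2],
              [0.0, 0.0, 1.0, 0.0]])
B = np.array([k1/m1, 0.0, 0.0, 0.0])

L = 2.0*np.pi*R                                # circumference of the track
si = np.linspace(0.0, L, n, endpoint=False)    # kernel centers on the arc
kvec = lambda s: np.exp(-(s - si)**2/(2.0*sig**2))   # kernel sections
f = lambda s: kap*np.sin(2.0*np.pi*nu*s)       # synthetic road profile

P = solve_continuous_lyapunov(A.T, -np.eye(4)) # A^T P + P A = -I

def rhs(t, z):
    x, xh, ah = z[:4], z[4:8], z[8:]
    s = (v*t) % L
    dx  = A @ x  + B*f(s)                      # plant
    dxh = A @ xh + B*(kvec(s) @ ah)            # estimator
    dah = (kvec(s)/Gam)*(B @ (P @ (x - xh)))   # gradient learning law
    return np.concatenate([dx, dxh, dah])

sol = solve_ivp(rhs, (0.0, 100.0), np.zeros(8 + n), method="LSODA")
print(sol.y[8:, -1])                           # final coefficient estimates
\end{verbatim}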
\begin{figure}[h!]
\centering
\includegraphics[scale=0.45]{rbf_road}
\caption{Road surface estimates for $n=\{10,20,\cdots,100\}$}
\label{fig:Sine Road}
\end{figure}
Fig.~\ref{fig:Sine Road} shows the finite dimensional estimates $\hat{f}$ of the road and the true road surface $f$ for numbers of basis kernels ranging over $n\in\{10,20,\cdots,100\}$.
\begin{figure}[h!]
\centering
\begin{tabular}{cc}
\includegraphics[width=.5\textwidth]{L2_example}
&
\includegraphics[width=.5\textwidth]{C_error_example}\\
\end{tabular}
\caption{Convergence rates using Gaussian kernel for synthetic data}
\label{fig:logsup}
\end{figure}
The plots in Fig.~\ref{fig:logsup} show the rate of convergence of the $L^2$ error and the $C(\Omega)$ error with respect to the number of basis functions. The {\em log} along the axes in the figures refers to the natural logarithm unless explicitly specified.
\subsection{Experimental Road Profile Data}
The road profile to be estimated in this subsection is based on the experimental data obtained from the Vehicle Terrain Measurement System shown in Fig.~\ref{fig:circle}. The constants in the estimation problem are initialized to the same numerical values as in the previous subsection.
\begin{figure}[h]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.4\textwidth]{Road_Run1}
&
\includegraphics[width=0.4\textwidth]{Circle}\\
{Longitudinal Elevation Profile.}
&
{Circular Path followed by VTMS.}
\end{tabular}
\caption{Experimental Data From VTMS.}
\label{fig:circle}
\end{figure}
In the first study in this section the adaptive estimation problem is formulated in the RKHS $H = \overline{\mbox{span}\{k_x(\cdot)\,|\,x\in \Omega\}}$ with $k_x(\cdot)=e^{-\|x-\cdot\|^2/(2\sigma^2)}$. The radial basis functions, each with standard deviation $\sigma=50$, have their centers located at a collection $\{s_i\}_{i=1}^{n}\subseteq \Omega$ evenly separated along the arc length. This is repeated for kernels defined using B-splines of first order and second order respectively.
Fig.~\ref{fig:Kernels} shows the finite dimensional estimates of the road and the true road surface $f$ for data representing a single lap around the circular track; the finite dimensional estimates $\hat{f}_n$ are plotted for numbers of basis kernels ranging over $n\in\{35,50,\cdots,140\}$ using the Gaussian kernel as well as the second order B-splines.
The finite dimensional estimates $\hat{f}_n$ of the road profile and the true road profile $f$ for data representing multiple laps around the circular track are plotted for the first order B-splines as shown in Fig.~\ref{fig:Lsplines Road}. The plots in Fig.~\ref{fig:sup_error_compare} show the rate of convergence of the $L^2$ error and the $C(\Omega)$ error with respect to the number of basis functions.
The rate of convergence for the $2^{nd}$ order B-splines is seen to be better than that of the other kernels used in these examples. This is consistent with the expectation that smoother kernels yield better convergence rates.
Also, the condition number of the Grammian matrix varies with $n$, as illustrated in Table~\ref{table:1} and Fig.~\ref{fig:conditionnumber}. This is an important factor to consider when choosing a specific kernel for the RKHS embedding technique since it is well known that the error in numerical solutions of linear systems is controlled by the condition number. The implementation of the RKHS embedding method requires such a solution, which depends on the Grammian matrix of the kernel bases, at each time step. We see that the condition numbers of the Grammian matrices for exponentials are roughly $10^{16}$ times greater than those of the corresponding matrices for splines. Since the sensitivity of the solutions of linear equations is bounded by the condition number, it is expected that the use of exponentials could suffer from a severe loss of accuracy as the dimensionality increases. The development of preconditioning techniques for Grammian matrices constructed from radial basis functions to address this problem is an area of active research.
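To illustrate how such a comparison can be generated, the following Python sketch (with illustrative bases and parameters of our own, not the exact construction behind Table~\ref{table:1}) assembles pairwise Grammians for overlapping piecewise linear B-splines and for Gaussian kernels with $\sigma=50$, and prints their condition numbers as $n$ grows; the Gaussian Grammian degrades rapidly because its entries are all close to one.
\begin{verbatim}
import numpy as np

def hat(x):
    # second order (piecewise linear) cardinal B-spline on [0, 2]
    return np.maximum(0.0, 1.0 - np.abs(x - 1.0))

L = 2.0*np.pi*4.0                       # circumference for R = 4 m
for n in range(10, 101, 10):
    s = np.linspace(0.0, L, n, endpoint=False)
    d = s[:, None] - s[None, :]         # pairwise center differences
    K_spline = hat(d/(2.0*L/n) + 1.0)   # overlapping hat functions
    K_gauss = np.exp(-d**2/(2.0*50.0**2))   # Gaussian kernel, sigma = 50
    print(n, np.linalg.cond(K_spline), np.linalg.cond(K_gauss))
\end{verbatim}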
\begin{figure}[H]
\centering
\begin{tabular}{cc}
\includegraphics[width = 0.4 \textwidth]{Exp_RBF_Road}
&
\includegraphics[width = 0.4 \textwidth]{Bsplines_Road}\\
{Road surface estimates for Gaussian kernels}
&
{Road surface estimate for second-order B-splines}
\end{tabular}
\caption{Road surface estimates for single lap}
\label{fig:Kernels}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width = 0.4 \textwidth]{LSpline_Road}
\caption{Road surface estimate using first-order B-splines}
\label{fig:Lsplines Road}
\end{figure}
\begin{center}
\begin{figure}[H]
\centering
\begin{tabular}{cc}
\includegraphics[width = 0.5\textwidth]{Compare_L2_Error}
&
\includegraphics[width = 0.5\textwidth]{Compare_C_Error}\\
\end{tabular}
\caption{Convergence rates for different kernels}
\label{fig:sup_error_compare}
\end{figure}
\end{center}
\begin{center}
\centering
\begin{table}[H]
\centering
\begin{tabular}{|p{1cm}|p{2.2cm}|p{2.2cm}|p{2.2cm}|}
\hline
No. of Basis Functions & Condition No. (First order B-Splines) $\times 10^3$ & Condition No. (Second order B-Splines) $\times 10^4$ & Condition No. (Gaussian Kernels) $\times 10^{20}$\\
\hline \hline
10 & 0.6646 & 0.3882 & 0.0001 \\
20 & 1.0396 & 0.9336 & 0.0017 \\
30 & 1.4077 & 1.5045 & 0.0029 \\
40 & 1.7737 & 2.0784 & 0.0074 \\
50 & 2.1388 & 2.6535 & 0.0167\\
60 & 2.5035 & 3.2293 & 0.0102\\
70 & 2.8678 & 3.8054& 0.0542\\
80 & 3.2321 & 4.3818& 0.0571\\
90 & 3.5962 & 4.9583& 0.7624\\
100 & 3.9602 & 5.5350& 1.3630\\
\hline
\end{tabular}
\caption{Condition number of Grammian Matrix vs Number of Basis Functions}
\label{table:1}
\end{table}
\end{center}
\begin{figure}[H]
\centering
\includegraphics[height=0.3\textheight,width=0.65\textwidth]{Conditon_Number}
\caption{Condition Number of Grammian Matrix vs Number of Basis Functions}
\label{fig:conditionnumber}
\end{figure}
\vspace{-1cm}
\section{Conclusions}
\label{sec:conclusions}
In this paper, we introduced a novel framework based on the use of RKHS embedding to study online adaptive estimation problems. The applicability of this framework to estimation problems that involve high dimensional scattered data approximation provides the motivation for the theory and algorithms described in this paper. A brief overview of the background theory of RKHS enables a rigorous derivation of the results in Sections \ref{sec:existence} and \ref{sec:finite}. In this paper we derive (1) sufficient conditions for the existence and uniqueness of solutions to the RKHS embedding problem, (2) the stability and convergence of the state estimation error, and (3) the convergence of the finite dimensional approximate solutions to the solution of the infinite dimensional problem. To illustrate the utility of this approach, a simplified numerical example of adaptive estimation of a road profile is studied and the results are critically analyzed. It would be of further interest to see the ramifications of using multiscale kernels to achieve semi-optimal convergence rates for functions in a scale of Sobolev spaces. It would likewise be important to extend this framework to adaptive control problems and examine the consequences of {\em persistency of excitation} conditions in the RKHS setting, and further to extend the approach to adaptively generate bases over the state space.
\section{Introduction}
Given a closed Riemannian manifold $(M,g)$ we consider the
conformal class of the metric $g$, $[g]$. The Yamabe
constant of $[g]$, $Y(M,[g])$, is the
infimum of the normalized total scalar curvature functional on
the conformal class. Namely,
$$Y(M,[g])= \inf_{h\in [g]}
\frac{\int {\bf s}_h \ dvol(h)}{(Vol(M,h))^{\frac{n-2}{n}}},$$
\noindent
where ${\bf s}_h$ denotes the scalar curvature of the metric $h$
and $dvol(h)$ its volume element.
If one writes metrics conformal to $g$ as $h=f^{4/(n-2)} \ g$,
one obtains the expression
$$Y(M,[g])= \inf_{f\in C^{\infty} (M)}
\frac{\int ( \ a_n {\| \nabla f \|}_g^2 + f^2 {\bf s}_g \ )
\ dvol(g)}{{\| f\|}_{p_n}^2},$$
\noindent
where $a_n =4(n-1)/(n-2) $ and $p_n =2n/(n-2)$. It is a fundamental
result
on the subject that the infimum is actually achieved
(\cite{Yamabe, Trudinger, Aubin, Schoen}). The functions $f$ achieving
the infimum are called {\it Yamabe functions} and the corresponding metrics
$f^{4/(n-2)} \ g$ are called {\it Yamabe metrics}. Since the critical points
of the total scalar curvature functional restricted to a conformal
class of metrics are precisely the metrics of constant scalar
curvature in the conformal class, Yamabe metrics are metrics of
constant scalar curvature.
It is well known that by considering functions supported in a small
normal neighborhood of a point one can prove that
$Y(M^n,[g]) \leq Y(S^n ,[g_0 ])$, where $g_0$ is the round metric
of radius one on the sphere and $(M^n ,g)$ is any closed n-dimensional
Riemannian manifold (\cite{Aubin}).
We will use the notation $Y_n = Y(S^n ,[g_0 ])$ and
$V_n =Vol(S^n ,g_0 )$. Therefore $Y_n =n(n-1)V_n^{\frac{2}{n}}$.
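For instance, $Y_2 =2V_2 =8\pi$ and $Y_3 =6\,V_3^{2/3}=6\,(2\pi^2 )^{2/3}$.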
Then one
defines the {\it Yamabe invariant} of a closed manifold $M$
\cite{Kobayashi, Schoen2} as
$$Y(M)=\sup_g Y(M,[g]) \leq Y_n .$$
It follows that $Y(M)$ is positive if and only if $M$ admits a
metric of positive scalar curvature. Moreover, the sign of
$Y(M)$ determines the technical difficulties in understanding
the invariant. When the Yamabe constant of a conformal class
is non-positive there is a unique metric (up to multiplication
by a positive constant) of constant scalar curvature in the
conformal class and if $g$ is any metric in the conformal
class, the Yamabe constant is bounded from below by
$(\inf_M {\bf s}_g ) \ (Vol(M,g))^{2/n}$. This can be used for instance
to study the behavior of the invariant under surgery and so to
obtain information using cobordism theory \cite{Yun, Petean, Botvinnik}.
Note also that in the non-positive case the Yamabe invariant
coincides with Perelman's invariant \cite{Ishida}.
The previous estimate is no longer true
in the positive case, but one does get a lower bound in the case of
positive Ricci curvature by a theorem of S. Ilias:
if $Ricci(g)\geq \lambda g $
($\lambda >0$) then $Y(M,[g]) \geq n \lambda (Vol(M,g))^{2/n}$
(\cite{Ilias}). Then in order to use this inequality to
find lower bounds on the Yamabe invariant of a closed
manifold $M$ one would try to maximize the volume of the manifold
under some positive lower bound of the Ricci curvature.
Namely, if one denotes ${\bf Rv} (M)= \sup \{ Vol(M,g): Ricci(g)\geq
(n-1) g \} $ then one gets $Y(M) \geq n(n-1) ({\bf Rv} (M))^{2/n}$
(one should define ${\bf Rv} (M) =0$ if $M$ does not admit
a metric of positive Ricci curvature). Very little is known
about the invariant ${\bf Rv} (M)$. Of course, Bishop's inequality
tells us that for any n-dimensional closed manifold
${\bf Rv} (M^n) \leq {\bf Rv} (S^n )$
(which is of course attained by the volume
of the metric of constant sectional curvature 1). Moreover,
G. Perelman \cite{Perelman} proved that there is a
constant $\delta =\delta_n >0$ such that if ${\bf Rv} (M) \geq
{\bf Rv} (S^n ) -\delta_n $ then
$M$ is homeomorphic to $S^n$. Beyond this, results on
${\bf Rv} (M)$ have been obtained by computing Yamabe invariants, so
for instance ${\bf Rv} ({\bf CP}^2 )= 2 \pi^2 $
(achieved by the Fubini-Study
metric as shown by C. LeBrun \cite{Lebrun} and M. Gursky and C.
LeBrun \cite{Gursky}) and ${\bf Rv} ({\bf RP}^3) = \pi^2$ (achieved by the
metric of constant sectional curvature as shown by H. Bray and
A. Neves \cite{Bray}).
Of course, there is no hope to apply the previous comments directly
when the fundamental group of $M$ is infinite. Nevertheless it
seems that even in this case the Yamabe invariant is
realized by conformal classes of metrics which maximize volume
with a fixed positive lower bound on the Ricci curvature
``in a certain sense''. The standard example is $S^{n} \times
S^1$. The fact that $Y(S^n \times S^1 ) =Y_{n+1}$ is one
of the first things we learned about the Yamabe invariant
\cite{Kobayashi, Schoen2}. One way to see this is as follows:
first one notes that $\lim_{T\rightarrow \infty}
Y(S^n \times S^1 ,[g_0 + T^2 dt^2 ])=
Y(S^n \times {\mathbb R}, [g_0 + dt^2 ])$ \cite{Akutagawa}
(the Yamabe constant for a non-compact Riemannian manifold
is computed as the infimum of the Yamabe functional over
compactly supported functions).
But the Yamabe
function for $g_0 + dt^2$ is precisely the conformal factor
between $S^n \times {\mathbb R}$ and $S^{n+1} -\{ S, N \}$. Therefore
one can think of $Y(S^n \times S^1 ) =Y_{n+1}$ as realized
by the positive
Einstein metric on $S^{n+1} -\{ S, N \} $. We will see in this
article that a similar situation occurs for any closed positive
Einstein manifold $(M,g)$ (although we only get the lower
bound for the invariant).
\vspace{.3cm}
Let $(N,h)$ be a closed Riemannian manifold. An
{\it isoperimetric region}
is an open subset $U$ with boundary $\partial U$ such that
$\partial U$ minimizes area among hypersurfaces bounding a
region of volume $Vol(U)$. Given any positive number $s$,
$s<Vol(N,h)$, there exists an isoperimetric region of
volume $s$. Its boundary is a stable constant mean curvature
hypersurface with some singularities of codimension at least 7.
Of course one does not need a closed Riemannian manifold
to consider isoperimetric regions; a priori one only
needs to be able to compute volumes of open subsets and areas
of hypersurfaces. One defines the {\it isoperimetric function}
of $(N,h)$ as $I_h :(0,1) \rightarrow {\mathbb R}_{>0}$ by
$$I_h (\beta) =\inf \{ Vol(\partial U)/Vol(N,h) :
Vol(U,h) = \beta Vol(N,h) \},$$
\noindent
where $Vol(\partial U)$ is measured with the Riemannian metric
induced by $h$ (on the non-singular part of $\partial U$).
Given a closed Riemannian manifold $(M,g)$ we will call
the {\it spherical cone} on $M$ the space $X$ obtained collapsing
$M \times \{0 \} $ and $M\times \{ \pi \}$ in
$M\times [0,\pi ]$ to points $S$ and $N$ (the vertices)
with the metric ${\bf g} =\sin^2 (t)g + dt^2$
(which is a Riemannian metric on $X-\{ S,N \}$). Now if
$Ricci(g) \geq (n-1) g$ one can see that $Ricci({\bf g})
\geq n{\bf g}$. One should compare this with the Euclidean cones
considered by F. Morgan and M. Ritor\'{e} in \cite{Morgan}:
$\hat{g} =t^2 g + dt^2$ for which $Ricci(g) \geq (n-1)g $
implies that $Ricci(\hat{g}) \geq 0$. The importance of these
spherical cones for the study of Yamabe constants is that
if one takes out the vertices the corresponding (non-complete)
Riemannian manifold is conformal to
$M\times {\mathbb R}$. But using the (warped product version) of the
Ros Product Theorem \cite[Proposition 3.6]{Ros} (see
\cite[Section 3]{Morgan2}) and the Levy-Gromov isoperimetric
inequality \cite{Gromov} one can understand isoperimetric
regions in these spherical cones. Namely,
\begin{Theorem} Let $(M^n,g)$ be a compact manifold with
Ricci curvature $Ricci(g) \geq (n-1)g$. Let $(X,{\bf g})$ be
its spherical cone. Then geodesic balls around any of the
vertices are isoperimetric.
\end{Theorem}
But now, since the spherical cone over $(M,g)$
is conformal to $(M\times {\mathbb R} ,
g+ dt^2 )$ we can use the previous result
and symmetrization of a function with respect to the
geodesic balls centered at a vertex to prove:
\begin{Theorem} Let $(M,g)$ be a closed Riemannian manifold of
positive Ricci curvature, $Ricci(g) \geq (n-1)g$ and volume $V$.
Then
$$Y(M\times {\mathbb R} ,[g+dt^2 ]) \geq
(V/V_n )^{\frac{2}{n+1}} \ Y_{n+1} .$$
\end{Theorem}
\vspace{.2cm}
As we mentioned before one of the differences between the positive
and non-positive cases in the study of the Yamabe constant is
the non-uniqueness of constant scalar curvature metrics on
a conformal class with positive Yamabe constant. And the simplest
family of examples of non-uniqueness comes from Riemannian
products. If $(M,g)$ and $(N^n ,h)$ are closed Riemannian manifolds
of constant scalar curvature and ${\bf s}_g$ is positive then
for small $\delta >0$, $\delta g + h$ is a constant scalar
curvature metric on $M \times N$ which cannot be a Yamabe
metric. If $(M,g)$ is Einstein and $Y(M)=Y(M,[g])$ it seems
reasonable that $Y(M\times N)= \lim_{\delta \rightarrow 0}
Y(M\times N ,[ \delta g + h ])$.
Moreover as it is shown in \cite{Akutagawa}
$$ \lim Y(M\times N , [\delta g + h ]) =Y(M\times {\mathbb R}^n,[ g+ dt^2 ]).$$
The only case which is well understood is when $M=S^n$ and $N=S^1$.
Here every Yamabe function is a function of the $S^1$-factor
\cite{Schoen2} and the Yamabe function for $(S^n \times {\mathbb R} , g_0 +
dt^2 )$ is the factor which makes $S^n\times {\mathbb R}$ conformal to
$S^{n+1} -\{ S, N \}$. It seems possible that under
certain conditions on $(M,g)$ the Yamabe functions of
$(M \times {\mathbb R}^n , g+dt^2 )$ depend only on the second
variable. The best case scenario would be that this is true
if $g$ is a Yamabe metric but it seems more attainable the
case when $g$ is Einstein. It is a corollary to the previous
theorem that this is actually true in the case $n=1$. Namely,
using the notation (as in \cite{Akutagawa})
$Y_N (M\times N , g +h)$
to denote the infimum of the $(g+h)$-Yamabe functional restricted
to functions of the $N$-factor we have:
\begin{Corollary} Let $(M^n,g)$ be a closed positive Einstein manifold
with Ricci curvature $Ricci(g)=(n-1)g$. Then
$$Y(M\times {\mathbb R} , [g+ dt^2 ])=Y_{{\mathbb R}}(M\times {\mathbb R} , g+ dt^2 )=
{\left( \frac{V}{V_n} \right) }^{\frac{2}{n+1}} \ Y_{n+1}.$$
\end{Corollary}
\vspace{.3cm}
As $Y(M\times {\mathbb R} , [g+ dt^2 ]) = \lim_{T\rightarrow \infty }
Y(M\times S^1 ,[g+T dt^2 ])$ it also follows from Theorem 1.2
that:
\begin{Corollary} If $(M^n ,g)$ is a closed Einstein manifold
with $Ricci(g) = (n-1)g$ and volume $V$ then
$$Y(M\times S^1) \geq (V/V_n )^{\frac{2}{n+1}} \ Y_{n+1} .$$
\end{Corollary}
\vspace{.3cm}
So for example using the product metric we get
$$Y(S^2 \times S^2 \times S^1 )\geq {\left(\frac{2}{3}
\right)}^{(2/5)} \ Y_5 $$
\noindent
and using the Fubini-Study metric we get
$$Y({\bf CP}^2 \times S^1 ) \geq {\left(\frac{3}{4} \right)}^{(2/5)}
\ Y_5 .$$
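These factors arise from rescaling to the normalization $Ricci(g)\geq (n-1)g$ of Corollary 1.4; since volumes scale as $\lambda^{n/2}$ under $g\mapsto \lambda g$ and $V_4 =8\pi^2 /3$, for $S^2 \times S^2$ with the unit product metric $g$ (which has $Ricci(g)=g$, so that $\frac{1}{3}g$ has Ricci curvature $3\cdot \frac{1}{3}g$) one checks
$$Vol\left(S^2 \times S^2 ,\tfrac{1}{3}\,g\right)=\frac{(4\pi )^2}{9}=\frac{16\pi^2}{9},
\qquad \frac{Vol(S^2 \times S^2 ,\tfrac{1}{3}g)}{V_4}=\frac{16\pi^2 /9}{8\pi^2 /3}=\frac{2}{3},$$
and similarly for the Fubini-Study metric on ${\bf CP}^2$.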
\vspace{.4cm}
{\it Acknowledgements:} The author would like to thank
Manuel Ritor\'{e}, Kazuo Akutagawa and Frank Morgan
for several useful comments on the first drafts of
this manuscript.
\section{Isoperimetric regions in spherical cones}
As we mentioned in the introduction, the isoperimetric
problem for spherical cones (over manifolds with
Ricci curvature $\geq n-1$) is understood using
the Levy-Gromov isoperimetric inequality
(to compare the isoperimetric functions of
$M$ and of $S^n$) and the Ros Product Theorem for warped products
(to compare then the isoperimetric functions of
the spherical cone over $M$ to the isoperimetric function
of $S^{n+1}$).
See for example section 3 of \cite{Morgan2}
(in particular {\bf 3.2} and the remark after it). For the
reader familiar with isoperimetric problems, this should be
enough to understand Theorem 1.1. In this section, for the
convenience of the reader, we will
give a brief outline of these issues. We will mostly
discuss and follow section 3 of \cite{Ros} and ideas
in \cite{Morgan, Montiel} which we think might be useful in
dealing with other problems arising from the study of Yamabe
constants.
Let $(M^n ,g)$ be a closed Riemannian manifold
of volume $V$ and Ricci curvature $Ricci(g) \geq (n-1)g$.
We will
consider $(X^{n+1}, \bf{g}) $ where as a topological space $X$ is the
suspension of $M$ ($X=M\times [0,\pi ]$ with $M\times \{ 0 \}$ and
$M\times \{ \pi \}$ identified to points $S$ and $N$)
and ${\bf g} =\sin^2 (t) \ g \ +
dt^2$. Of course $X$ is not a manifold (except when $M$ is $S^n$) and
${\bf g}$ is a Riemannian metric only on $X-\{ S,N \}$.
The following is a standard result in geometric measure theory.
$\bf{Theorem:}$ For any positive number $r< Vol(X)$ there exists
an isoperimetric open subset $U$ of $X$ of volume $r$. Moreover
$\partial U$ is a smooth stable constant mean curvature
hypersurface of $X$ except for a singular piece $\partial_1 U$
which consists of (possibly)
$S$, $N$, and a subset of codimension at least 7.
Let us call $\partial_0 U$ the regular part of $\partial U$,
$\partial_0 U= \partial U - \partial_1 U$. Let
$X_t$, $t\in (-\varepsilon ,\varepsilon )$,
be a variation of $\partial_0 U$ such that the
volume of the enclosed region $U_t$ remains constant.
Let $\lambda (t)$ be the area of $X_t$. Then $\lambda '(0) =0$
and $\lambda ''(0) \geq 0$. The first condition is satisfied
by hypersurfaces of constant mean curvature and the ones
satisfying the second condition are called ${\it stable}$.
If $N$ denotes a normal
vector field to the hypersurface then variations are obtained
by picking a function $h$ with compact support on $\partial_0 U$ and
moving $\partial_0 U$ in the direction of $h \ N$. Then
we have that if the mean of $h$ on $\partial_0 U$
is 0 then $\lambda_h '(0) =0$ and
$\lambda_h ''(0) \geq 0$. This last condition is written as
$$Q(h,h)=-\int_{\partial_0 U} h(\Delta h + (Ricci (N,N) +
\sigma^2 )h ) dvol(\partial_0 U) \geq 0.$$
\noindent
Here we consider $\partial_0 U$ as a Riemannian manifold
(with the induced metric) and use the corresponding Laplacian
and volume element. $\sigma^2$ is the square of the norm of the second
fundamental form.
This was worked out by J. L. Barbosa, M. do Carmo and
J. Eschenburg in \cite{Barbosa,
doCarmo}. As we said before, the function $h$
should a priori have compact support
in $\partial_0 U$ but as shown by F. Morgan and M. Ritor\'{e}
\cite[Lemma 3.3]{Morgan} it is enough that $h$ is bounded
and $h\in L^2 (\partial_0 U)$. This is important in order to study
stable constant mean curvature surfaces on a space like $X$ because
$X$ admits what is called a ${\it conformal}$ vector field $V=
\sin (t) \partial /\partial t$ and the function $h$ one wants to
consider is $h=div (V-{\bf g}(V,N) \ N )$ where $N$ is the unit
normal to the hypersurface (and then $h$ is the divergence of
the tangencial part of $V$). This has been used for instance in
\cite{Montiel,Morgan} to classify stable constant mean curvature
hypersurfaces in Riemannian manifolds with a conformal vector field.
When the hypersurface is smooth this function $h$ has mean 0 by
the divergence theorem and one can apply the stability condition.
But when the hypersurface has singularities one would a priori need
the function $h$ to have compact support on the regular part. This
was done by F. Morgan and M. Ritor\'{e} in
\cite[Lemma 3.3]{Morgan}.
We want to prove that the geodesic balls around $S$ are
isoperimetric. One could try to apply the techniques of
Morgan and Ritor\'{e} in \cite{Morgan} and see that they are
the only stable constant mean
curvature hypersurfaces in $X$. This should be possible, and
actually it might be necessary to deal with isoperimetric regions
of more general singular spaces that appear naturally in the study of
Yamabe constants of Riemannian products.
But in this case we will instead
take a more direct approach using the Levy-Gromov
isoperimetric inequality \cite{Gromov} and Ros Product Theorem
\cite{Ros}.
\vspace{.3cm}
The sketch of the proof is as follows: First one has to note that
geodesic balls centered at the vertices {\it produce} the same
isoperimetric function as the one of the round sphere. Therefore
to prove that geodesic balls around the vertices are isoperimetric
is equivalent to prove that the isoperimetric function of ${\bf g}$
is bounded from below by the isoperimetric function of $g_0$. To
do this, given any open subset $U$ of $X$ one considers
its symmetrization
$U^s \subset S^{n+1}$, so that the {\it slices} of $U^s$ are geodesic
balls with the same normalized volumes as the slices of $U$. Then
by the Levy-Gromov isoperimetric inequality we can compare the
normalized areas of the boundaries of the slices. We have to
prove that the normalized area of $\partial U^s$ is at most the
normalized area of $\partial U$.
This follows from
the warped product version of \cite[Proposition 3.6]{Ros}. We will
give an outline following Ros' proof for the Riemannian product case.
We will use the notion of Minkowski
content. This is the bulk of the proof and we will divide it into
Lemma 2.1, Lemma 2.2 and Lemma 2.3.
\vspace{.3cm}
{\it Proof of Theorem 1.1 :}
Let $U\subset X$ be a closed subset.
For any $t\in (0,\pi )$ let
$$U_t =U \cap (M\times \{ t \} ) .$$
Fix any point $E\in S^n$ and let $(U^s )_t$ be the geodesic ball
centered at $E$ with volume
$$Vol((U^s )_t , g_0 ) = \frac{V_n}{V} \ Vol(U_t ,g).$$
\noindent
(recall that $V=Vol(M,g)$ and $V_n = Vol(S^n ,g_0 )$).
Let $U^s
\subset S^{n+1}$ be the corresponding subset (i.e. we consider
$S^{n+1} -\{ S,N \}$ as $S^n \times (0,\pi )$ and $U^s$ is
such that $U^s \cap (S^n \times \{ t \}) = (U^s )_t$.
One might add
$S$ and/or $N$ to make $U^s$ closed and connected). Note
that one can write $(U^s )_t = (U_t )^s = U_t^s$ as long as there
is no confusion (or no difference) on whether we are considering
it as a subset of $S^n$ or as a subset of $S^{n+1}$.
Now
$$Vol(U)=\int_0^{\pi} \sin^n (t) \ Vol(U_t ,g) \ dt $$
$$= \frac{V}{V_n} \int_0^{\pi} \sin^n (t) \ Vol((U^s )_t ,g_0 ) \ dt
= \frac{V}{V_n} Vol(U^s ,g_0 ).$$
Also if $B(r) =M\times [0,r]$ (the geodesic ball of radius
$r$ centered at the vertex at 0) then
$$Vol(B(r))=\int_0^r \sin^n (t) V dt = \frac{V}{V_n}
\int_0^r \sin^n (t) V_n dt = \frac{V}{V_n} Vol (B_0 (r)) \ \ (1)$$
\noindent
where $B_0 (r)$ is the geodesic ball of radius $r$ in the
round sphere. And
$$Vol(\partial B(r))=\sin^n (r) V =\frac{V}{V_n}
Vol(\partial B_0 (r)) \ \ (2).$$
Formulas (1) and (2) tell us that the geodesic balls around the
vertices in $X$ produce the same isoperimetric function as
the round metric $g_0$. Therefore given any open subset $U \subset
X$ we want to compare the area of $\partial U$ with the area
of the boundary
of the geodesic ball in $S^{n+1}$ with the same normalized volume
as $U$.
\vspace{.3cm}
Given a closed set $W$ let $B(W,r)$ be the set of points at distance
at most $r$ from $W$. Then one considers the {\it Minkowski content}
of $W$,
$$\mu^+ (W) = \liminf_{r\to 0^+} \frac{Vol (B(W,r) ) -Vol(W)}{r}.$$
\noindent
If $W$ is a smooth submanifold with boundary then
$\mu^+ (W) = Vol (\partial W)$. And this is still true if the
boundary has singularities of codimension $\geq 2$ (and finite
codimension 1 Hausdorff measure).
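For instance, for a closed Euclidean ball $W=\overline{B_R}\subset {\mathbb R}^m$ one has $Vol(B(W,r))=\omega_m (R+r)^m$, where $\omega_m$ is the volume of the unit ball, and hence $\mu^+ (W)= m\,\omega_m R^{m-1}=Vol(\partial B_R )$.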
The Riemannian measure on $(S^n ,g_0 )$, normalized to be a
probability measure is what is called a {\it model measure}:
if $D^t$, $t\in (0,1)$ is the family of geodesic balls
(with volume $Vol(D^t )=t$) centered at some fixed point then
they are
isoperimetric regions which are ordered by volume and such
that for any $t$, $ B(D^t ,r) =D^{t'}$ for some $t'$.
See \cite[Section 3.2]{Ros}. The following result follows
directly from the
Levy-Gromov isoperimetric inequality \cite[Appendix C]{Gromov}
and \cite[Proposition 3.5]{Ros} (see the lemma in
\cite[page 77]{Morgan3} for a more elementary proof and point of view
on \cite[Proposition 3.5]{Ros}).
\begin{Lemma} Let $(M,g)$ be a closed Riemannian manifold
of volume $V$ and Ricci curvature $Ricci(g) \geq (n-1) g$.
For any nonempty closed subset $\Omega \subset M$ and
any $r\geq 0$ if $B_{\Omega}$ is a geodesic ball in
$(S^n , g_0 )$ with volume $Vol(B_{\Omega})=(V_n /V)
Vol(\Omega )$ then $Vol(B(B_{\Omega} ,r)) \leq
(V_n /V) Vol(B(\Omega ,r))$.
\end{Lemma}
\begin{proof} Given any closed Riemannian manifold $(M,g)$,
dividing the
Riemannian measure by the volume one obtains a probability
measure which we will denote $\mu_g$.
As we said before, the round metric on the sphere
gives a model measure $\mu_{g_0}$. On the other hand the Levy-Gromov
isoperimetric inequality \cite{Gromov}
says that $I_{\mu_g} \geq I_{\mu_{g_0}}$.
The definition of $B_{\Omega}$ says that $\mu_g (\Omega )=\mu_{g_0}
(B_{\Omega})$ and what we want to prove is that $\mu_g (B(\Omega ,r))
\geq \mu_{g_0}
(B(B_{\Omega} ,r) )$ .
Therefore the
statement of the lemma is precisely \cite[Proposition 3.5]{Ros}.
\end{proof}
Fix a positive constant $\lambda$. Note that the previous lemma
remains unchanged if we replace $g$ and $g_0$ by $\lambda g$
and $\lambda g_0$: the correspondence $\Omega \rightarrow
B_{\Omega}$ is the same and $\mu_{\lambda g} = \mu_g$.
\begin{Lemma} For any $t_0 \in (0,\pi )$
$B((U^s )_{t_0} ,r) \subset (B(U_{t_0} ,r ))^s $.
\end{Lemma}
\begin{proof} First note that the distance from a point
$(x,t) \in X$ to a vertex depends only on $t$ and not on $x$
(or even on $X$). Therefore if $r$ is greater than the
distance $\delta$ between $t_0$ and $0$ or $\pi$
then both sets in the lemma
will contain a geodesic ball of radius $r-\delta$ around the
corresponding vertex.
Also observe that the distance between points $(x,t_0 )$ and
$(y,t)$ depends only on the distance between $x$ and $y$
(and $t$, $t_0$, and the function in the warped product,
which in this case is $\sin$) but not on $x, y$ or $X$.
In particular for any $t$ so that $|t-t_0 |<r$,
$(B((U^s )_{t_0} ,r) )_t$ is a geodesic ball.
We have to prove that for any $t$
$$(B((U^s )_{t_0} ,r) )_t \subset ((B(U_{t_0} ,r ))^s )_t.$$
\noindent
But since they are both geodesic balls centered at the same point
it is enough to prove that the volume of the subset on the left is
less than or equal to the volume of the subset on the right.
By the definition of symmetrization the normalized volume of
$ ((B(U_{t_0} ,r ))^s )_t$ is equal to the normalized volume of
(B(U_{t_0} ,r ))_t$. But from the previous comment there exists
$\rho >0$ such that, considered as subsets of $M$,
$$(B(U_{t_0} ,r ))_t = B(U_{t_0} ,\rho )$$
\noindent
and, as subsets of $S^n$,
$$(B((U^s )_{t_0} ,r) )_t =B(U^s_{t_0} ,\rho ).$$
The lemma then follows from Lemma 2.1 (and the comments after it).
\end{proof}
Now for any closed subset $U\subset X$ let $B_U$ be a
geodesic ball in $(S^{n+1} ,g_0 )$ with volume
$Vol(B_U ,g_0 )= (V_n /V)
Vol(U,{\bf g})$. Since geodesic balls in round spheres are isoperimetric
(and $Vol(B_U ,g_0 )=Vol(U^s ,g_0 )$)
it follows that $Vol(\partial B_U )\leq \mu^+ (U^s )$.
\begin{Lemma} Given any closed set $U\subset X$, $\mu^+(U)
\geq (V/V_n ) Vol(\partial B_U )$.
\end{Lemma}
\begin{proof}
Since $(B(U,r) )^s$ is closed
and $B(U^s ,r)$ is the closure of
$\cup_{t\in (0,\pi )} \ B(U_t ^s ,r)$
we have from the previous lemma that
$$B(U^s ,r) \subset (B(U,r ) )^s .$$
Then
$$Vol(\partial B_U )\leq \mu^+ (U^s )
=\liminf \frac{Vol(B(U^s ,r) ) - Vol (U^s )}{r}$$
$$\leq \liminf \frac{Vol((B(U ,r))^s ) - Vol (U^s )}{r}$$
$$=(V_n /V)\liminf \frac{Vol(B(U,r) ) - Vol (U)}{r}
=(V_n /V) \mu^+ (U) $$
\noindent
and the lemma follows.
\end{proof}
Now if we let $B_U^M$ be a geodesic ball around a vertex in $X$
with volume
$$Vol(B_U^M ,{\bf g}) = Vol(U,{\bf g} ) =
\frac{V}{V_n} Vol(B_U, g_0 )$$
\noindent
then it follows from (1) and (2) in the beginning of the proof that
$$Vol(\partial B_U^M ,{\bf g}) = \frac{V}{V_n} Vol(\partial B_U ,g_0 ).$$
\noindent
and so by Lemma 2.3
$$Vol(\partial B_U^M ,{\bf g}) \leq \mu^+ (U)$$
\noindent
and Theorem 1.1 is proved.
{\hfill$\Box$\medskip}
\section{The Yamabe constant of $M\times {\mathbb R}$}
Now assume that $g$ is a metric of positive Ricci curvature,
$Ricci(g) \geq (n-1)g$ on $M$ and consider as before the
spherical cone $(X,{\bf g})$ with ${\bf g} =\sin^2 (t) g + dt^2$.
By a direct
computation the sectional curvature of ${\bf g}$ is given by:
$$K_{{\bf g}} (v_i ,v_j )=\frac{K_g (v_i ,v_j )-\cos^2 (t)}{\sin^2 (t)}$$
$$K_{\bf g} (v_i ,\partial /\partial t)=1,$$
\noindent
for a $g$-orthonormal basis $\{ v_1 ,...,v_n \}$. And the Ricci
curvature is given by:
$$Ricci({\bf g}) (v_i ,\partial /\partial t )=0$$
$$Ricci({\bf g}) (v_i ,v_j )= Ricci(g) (v_i ,v_j ) - (n-1)\cos^2 (t)\delta_i^j
+\sin^2 (t) \delta_i^j$$
$$Ricci({\bf g}) (\partial_t ,\partial_t )=n.$$
Therefore by picking $\{ v_1 ,...,v_n \}$ which diagonalizes $Ricci(g)$ one
easily sees that if $Ricci(g)\geq (n-1)g$ then $Ricci({\bf g})\geq n
{\bf g}$. Moreover, if $g$ is an Einstein metric with Einstein
constant $n-1$ then ${\bf g}$ is Einstein with Einstein constant $n$.
Let us recall that for non-compact
Riemannian manifolds one defines
the Yamabe constant of a metric as the infimum of the Yamabe
functional of the metric
over smooth compactly supported functions (or functions
in $L_1^2$, of course). So for instance if $g$ is a Riemannian metric
on the closed manifold $M$ then
$$Y(M\times {\mathbb R} ,[g+dt^2 ]) =\inf_{f \in C^{\infty}_0 (M\times {\mathbb R} )}
\frac{\int_{M\times {\mathbb R} } \left( \ a_{n+1} {\| \nabla f \|}^2 +
{\bf s}_g \ f^2 \ \right)
dvol(g+dt^2)}{
{\| f \|}_{p_{n+1}}^2 } .$$
\vspace{.2cm}
{\it Proof of Theorem 1.2 :}
We have a closed Riemannian manifold $(M^n ,g)$
such that $Ricci(g) \geq (n-1) g$. Let $f_0 (t)= \cosh^{-2} (t)$
and consider the diffeomorphism
$$H: M \times (0, \pi ) \rightarrow M \times {\mathbb R} $$
\noindent
given by $H(x,t)=(x,h_0 (t))$, where $h_0 :(0,\pi ) \rightarrow {\mathbb R} $
is the diffeomorphism defined by $h_0 (t) =\cosh^{-1} ( (\sin
(t))^{-1})$
on $[\pi /2, \pi )$ and $h_0 (t)=-h_0 (\pi -t)$ if
$t\in(0,\pi /2 )$.
By a direct computation $H^* ( f_0 (g+dt^2))=
{\bf g}= \sin^2 (t) g +dt^2$ on $M\times (0,\pi )$.
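For completeness, the computation is as follows: since $\cosh (h_0 (t)) = (\sin (t))^{-1}$ and hence $h_0 '(t) = 1/\sin (t)$ on all of $(0,\pi )$ (using $\sin (\pi -t)=\sin (t)$ on $(0,\pi /2)$), one has
$$H^* \left( f_0 \,(g+dt^2 ) \right) = f_0 (h_0 (t))\left( g + (h_0 '(t))^2 \, dt^2 \right)
= \sin^2 (t) \, g + dt^2 = {\bf g}.$$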
Therefore by conformal invariance if we call $g_{f_0} = f_0 (g+dt^2)$
$$Y(M\times {\mathbb R} , [g+dt^2 ] )
=\inf_{ f \in C^{\infty}_0 (M\times {\mathbb R} )}
\frac{\int_{M\times {\mathbb R} } \left( \ a_{n+1} {\| \nabla f \|}_{g+dt^2}^2 +
{\bf s}_g f^2 \right) \ dvol(g+dt^2)}{
{\| f \|}_{p_{n+1}}^2 } $$
$$=\inf_{f \in C^{\infty}_0 (M\times {\mathbb R} )}
\frac{\int_{M\times {\mathbb R} } \left( \ a_{n+1} {\| \nabla f \|}^2_{g_{f_0}}
+ {\bf s}_{g_{f_0}} f^2 \ \right) \
dvol(g_{f_0} )}{
{\| f \|}_{p_{n+1}}^2 } $$
$$=\inf_{f \in C^{\infty}_0 (M\times (0,\pi ))}
\frac{\int_{M\times (0,\pi ) } \ \left( a_{n+1} {\| \nabla f \|}^2_{\bf g}
+ {\bf s}_{\bf g}
f^2 \ \right) \ dvol({\bf g})}{
{\| f \|}_{p_{n+1}}^2 } =Y(M\times (0,\pi ),[{\bf g}]).$$
Now, as we showed in the previous section, $Ricci({\bf g})
\geq n{\bf g}$. Therefore ${\bf s}_{\bf g} \geq n(n+1)$. So we get
$$Y(M\times {\mathbb R} , [g+dt^2 ]) \geq
\inf_{f \in C^{\infty}_0 (M\times (0,\pi ))}
\frac{\int_{M\times (0,\pi ) } \ \left( a_{n+1} {\| \nabla f \|}^2_{\bf g}
+ n(n+1)
f^2 \ \right) \ dvol({\bf g})}{
{\| f \|}_{p_{n+1}}^2 }.$$
To compute the infimum one needs to consider only non-negative
functions.
Now for any non-negative function
$f \in C^{\infty}_0 (M\times (0,\pi ) \ )$ consider its symmetrization
$f_* :X \rightarrow {\mathbb R}_{\geq 0}$ defined by $f_* (S) =\sup f$ and
$f_* (x,t) =s$ if and only if $Vol(B(S,t), {\bf g} )=
Vol(\{ f > s \} ,{\bf g})$ (i.e. $f_*$ is a
non-increasing function of $t$
and $Vol(\{ f_* > s \})=Vol(\{ f > s \}) $ for any $s$).
It is immediate that the $L^q$-norms of $f_*$ and $f$ are the
same for any $q$. Also, by the coarea formula
$$\int
\| \nabla f \|_{\bf g}^2 = \int_0^{\infty}
\left( \int_{f^{-1}(t)} \| \nabla f \|_{\bf g} d\sigma_t \right) dt$$
$$ \geq \int_0^{\infty} (\mu (f^{-1} (t)))^2
{\left( \int_{f^{-1}(t)} \| \nabla f \|_{\bf g}^{-1} d\sigma_t
\right)}^{-1} \ dt$$
\noindent
by H\"{o}lder's inequality, where $d\sigma_t$ is the measure induced
by ${\bf g}$ on $\{ f^{-1} (t) \}$. But
$$\int_{f^{-1}(t)} \| \nabla f \|_{\bf g}^{-1} d\sigma_t
=-\frac{d}{dt} (\mu\{ f>t \})$$
$$=-\frac{d}{dt} (\mu\{ f_* >t \}) =
\int_{f_*^{-1}(t)} \| \nabla f_* \|_{\bf g}^{-1} d\sigma_t $$
\noindent
and since $f^{-1} (t) =\partial \{ f>t \}$ by Theorem 1.1
we have $\mu (f^{-1} (t))\geq \mu (f_*^{-1} (t))$. Therefore
$$ \int_0^{\infty} (\mu (f^{-1} (t)))^2
{\left( \int_{f^{-1}(t)} \| \nabla f \|_{\bf g}^{-1} d\sigma_t
\right)}^{-1} \ dt$$
$$ \geq \int_0^{\infty} (\mu (f_*^{-1} (t)))^2
{\left( \int_{f_*^{-1}(t)} \| \nabla f_* \|_{\bf g}^{-1} d\sigma_t
\right)}^{-1} \ dt $$
\noindent
(and since $\| \nabla f_* \|_{\bf g}$ is constant along
$f_*^{-1}(t)$ )
$$=\int_0^{\infty}\mu (f_*^{-1} (t)) \| \nabla f_* \|_{\bf g} \ dt$$
$$= \int_0^{\infty}
\left( \int_{f_* ^{-1}(t)} \| \nabla f_* \|_{\bf g} d\sigma_t \right)
dt =\int
\| \nabla f_* \|_{\bf g}^2 .$$
Considering $S^{n+1}$ as the spherical cone over $S^n$ we have
the function $f^0_* : S^{n+1} \rightarrow {\mathbb R}_{\geq 0}$ which
corresponds to $f_*$.
Then for all $s$
$$Vol (\{ f_*^0 >s \} ) =
\left( \frac{V_n}{V} \right) \ Vol( \{ f_* >s \} ),$$
\noindent
and so for any $q$,
$$\int (f^0_*)^q dvol(g_0 ) = \left( \frac{V_n}{V} \right)
\int (f_* )^q dvol({\bf g}).$$
Also for any $s\in (0,\pi )$
$$\mu ( (f_*^0 )^{-1} (s)) = \frac{V_n}{V} \mu (f_*^{-1} (s)),$$
\noindent
and since ${\| \nabla f_*^0 \|}_{g_0} = {\| \nabla f_* \| }_{\bf g}$
we have
$$ \int
\| \nabla f^0_* \|_{g_0}^2 = \frac{V_n}{V} \int
\| \nabla f_* \|_{\bf g}^2 .$$
We obtain
$$Y(M\times {\mathbb R} , [g+dt^2 ]) \geq
\inf_{f \in C^{\infty}_0 (M\times (0,\pi ))}
\frac{\int_{M\times (0,\pi ) } a_{n+1} {\| \nabla f \|}^2_{\bf g}
+ n(n+1)
f^2 \ dvol({\bf g})}{
{\| f \|}_{p_{n+1}} ^2 }$$
$$\geq \inf_{f \in C^{\infty}_0 (M\times (0,\pi ))}
\frac{\int_{M\times (0,\pi ) } a_{n+1} {\| \nabla f_* \|}^2_{\bf g}
+ n(n+1)
f_*^2 \ dvol({\bf g})}{
{\| f_* \|}_{p_{n+1}}^2 }$$
$$={\left( \frac{V}{V_n} \right)}^{1-(2/p_{n+1})}
\inf_{f \in C^{\infty}_0 (M\times (0,\pi ))}
\frac{\int_{M\times (0,\pi ) } a_{n+1} {\| \nabla f^0_* \|}^2_{g_0}
+ n(n+1)
{f^0_*}^2 dvol({g_0})}{
{\| f^0_* \|}_{p_{n+1}}^2 }$$
$$ \geq
{\left( \frac{V}{V_n} \right)}^{2/(n+1)} \ Y_{n+1}.$$
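For the reader's convenience, the exponent arithmetic behind this
last step: with $p_{n+1}=2(n+1)/(n-1)$ (the critical Sobolev exponent
in dimension $n+1$),
$$1-\frac{2}{p_{n+1}}=1-\frac{n-1}{n+1}=\frac{2}{n+1}~,$$
so that ${\left( V/V_n \right)}^{1-(2/p_{n+1})}={\left( V/V_n \right)}^{2/(n+1)}$.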
This finishes the proof of Theorem 1.2.
{\hfill$\Box$\medskip}
{\it Proof of Corollary 1.3 :} Note that if
${\bf s}_g$ is constant, $Y_{{\mathbb R}} (M \times {\mathbb R} , g +
dt^2)$
only depends on ${\bf s}_g$ and $V=Vol(M,g)$. Actually,
$$Y_{{\mathbb R}} (M\times {\mathbb R} ,g +dt^2 )=
\inf_{f\in C_0^{\infty} ( {\mathbb R} )} \frac{\int_{{\mathbb R}} \ a_{n+1} {\|\nabla f
\|}^2_{dt^2} V
+ {\bf s}_g V f^2 \ dt^2}{(\int_{{\mathbb R}} f^p )^{2/p} \ V^{2/p}}$$
$$=V^{1-(2/p)}
\inf_{f\in C_0^{\infty} ( {\mathbb R} )} \frac{\int_{{\mathbb R}} \ a_{n+1} {\|\nabla f
\|}^2_{dt^2}
+ {\bf s}_g f^2 \ dt^2}{(\int_{{\mathbb R}} f^p )^{2/p}}.$$
But as we said
$$\inf_{f\in C_0^{\infty} ( {\mathbb R} )} \frac{\int_{{\mathbb R}} \ a_{n+1} {\|\nabla f
\|}^2_{dt^2}
+ {\bf s}_g f^2 \ dt^2}{(\int_{{\mathbb R}} f^p )^{2/p}}$$
\noindent
is independent of $(M,g)$ and it is known to be equal to
$Y_{n+1} V_n^{-2/(n+1)}$. Corollary 1.3 then follows
directly from Theorem 1.2.
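Explicitly (taking $p=p_{n+1}$, so that $1-(2/p)=2/(n+1)$ as computed
above), the two displays combine to give
$$Y_{{\mathbb R}} (M\times {\mathbb R} ,g +dt^2 )
=V^{1-(2/p)} \, Y_{n+1} V_n^{-2/(n+1)}
={\left( \frac{V}{V_n} \right)}^{2/(n+1)} Y_{n+1}~,$$
matching the lower bound of Theorem 1.2.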
{\hfill$\Box$\medskip}
\section{Introduction}
Averaged quantities can be obtained in two different ways in
magnetohydrodynamics. The first way is to solve the 3D MHD equations
and then average the results. The second way is to solve some
system of equations for the averages. The combination of numerical
simulations and averaged theory yields a phenomenology that can
describe observations or experimental data.
The problem of spherically symmetric accretion originates in
Bondi's work \citep{bondi}. He presented an idealized
hydrodynamic solution with accretion rate $\dot{M}_B.$ However,
a magnetic field $\vec{B}$ always exists in real systems. Even a
small seed $\vec{B}$ is amplified in spherical infall and becomes
dynamically important \citep{schwa}.
The magnetic field inhibits accretion \citep{schwa}. None of the many
existing theories has reasonably calculated the magnetic field evolution
or how it influences the dynamics. These theories share some common
pitfalls. First, the direction of the magnetic field is usually
prescribed. Second, the magnetic field strength is set by a
thermal equipartition assumption. Third, the dynamical effect of the
magnetic field is calculated with the conventional magnetic energy and
pressure. All these inaccuracies can be eliminated.
In Section~\ref{section_method} I develop a model that abandons
the equipartition prescription, calculates the magnetic field
direction and strength, and employs the correct equations of
magnetized fluid dynamics. In Section~\ref{results} I show this
accretion pattern to be in qualitative agreement with Sgr A*
spectrum models. I discuss my assumptions in
Section~\ref{discussion}.
\section{Analytical method}\label{section_method}
A reasonable turbulence evolution model is the key difference of my
method. I build an averaged turbulence theory that corresponds to
numerical simulations. I start with a model of isotropic
turbulence that is consistent with simulations of collisional MHD
in three regimes: decaying hydrodynamic
turbulence, decaying MHD turbulence, and dynamo action. I introduce
an effective isotropization of the magnetic field in the 3D model.
Isotropization is taken to have a timescale of the order of the
dissipation timescale, which is a fraction $\gamma\sim1$ of the
Alfven wave crossing time $\tau_{\rm diss}=\gamma r/v_A.$
A common misconception exists about the dynamical influence of the
magnetic field. Neither magnetic energy nor magnetic pressure can
represent $\vec{B}$ in dynamics. Correct averaged Euler and energy
equations were derived by \citet{scharlemann} for a radial magnetic
field. The magnetic force $\vec{F}_M=[\vec{j}\times\vec{B}]$ can be
averaged over the solid angle with the proper combination of
$\vec{\nabla}\cdot\vec{B}=0.$ I extend the derivation to a random
magnetic field without a preferred direction. The dynamical effect of
magnetic helicity \citep{biskamp03} is also investigated. I
neglect radiative and mechanical transport processes.
The derived set of equations requires some modifications and
boundary conditions to be applicable to real astrophysical
systems. I add an external energy input to turbulence to balance
dissipative processes in the outer flow. The outer turbulence is
taken to be isotropic, with magnetization $\sigma\sim1.$
The smooth transonic solution is chosen as the one possessing the highest
accretion rate, as in \citet{bondi}.
\begin{figure}\label{fig1}
\includegraphics[height=.5\textheight]{velocities}
\caption{Characteristic velocities of the magnetized flow, normalized to the Keplerian speed. Horizontal lines correspond to the self-similar solution $v\sim r^{-1/2}.$}
\end{figure}
\section{Results \& Application to Sgr A*}\label{results}
\begin{figure}\label{fig2}
\includegraphics[height=.5\textheight]{magnetization}
\caption{Magnetization $\sigma=(E_M+E_K)/E_{Th}$ as a function of radius.}
\end{figure}
The results of my calculations confirm some known facts about
spherical magnetized accretion, agree with the results of
numerical simulations, and exhibit some previously unidentified
features.
The initially isotropic magnetic field develops a strong anisotropy with
a larger radial field $B_r.$ The perpendicular magnetic field
$B_\perp\ll B_r$ is dynamically unimportant in the inner accretion
region (Fig.~\ref{fig1}). Because the magnetic field dissipates, infall
onto the black hole can proceed \citep{schwa}.
Turbulence is supported by external driving in the outer flow
regions, but internal driving due to freezing-in amplification
takes over in the inner flow (Fig.~\ref{fig2}). The magnetization of the
flow increases in the inner region with decreasing radius,
consistently with simulations \citep{igumen06}. The density profile
appears to be $\rho\sim r^{-1.25},$ which differs from the
traditional ADAF scaling $\rho\sim r^{-1.5}$ \citep{narayan}. Thus
the idea of self-similar behavior is not supported.
Compared to non-magnetized accretion, the infall rate is 2-5 times
smaller, depending on the outer magnetization. In turn, the gas density is
2-5 times smaller in the region close to the black hole, where the
synchrotron radiation emerges \citep{narayan}. Sgr A* produces
relatively weak synchrotron emission \citep{narayan}. So either the gas
density $n$, the electron temperature $T_e$, or the magnetic field $B$
is small in the inner flow, or a combination of factors is at work. Thus
the low gas density in the magnetized model is in qualitative agreement
with the results of modelling the spectrum.
The flow is convectively stable on average in the model of moving
blobs, where the dissipation heat is released homogeneously in volume.
The moving blobs are in radial and perpendicular pressure
equilibrium. They are governed by the same equations as the
medium.
\section{Discussion \& Conclusion}\label{discussion}
The presented accretion study self-consistently treats turbulence
in the averaged model. This model introduces many weak assumptions
instead of a few strong ones.
I take the dissipation rate to be that of collisional MHD simulations,
but the flow in question is rather in the collisionless regime.
Observations of collisionless flares in the solar corona
\citep{noglik} give a dissipation rate $20$ times smaller than in the
collisional simulations \citep{biskamp03}. However, flares in the
solar corona may represent a large-scale reconnection event rather
than developed turbulence. It is unclear which dissipation rate is
more realistic for accretion.
The magnetic field presents another caveat. Magnetic field lines
should close, i.e., $\vec{\nabla}\cdot\vec{B}=0$ should hold. The radial
field is much larger than the perpendicular one in the inner region.
Therefore, the characteristic radial scale of the flow is much larger
than the perpendicular one. If the radial turbulence scale is larger than
the radius, the freezing-in condition no longer holds. Matter can then
freely slip along the radial field lines into the black hole. If
matter slips already at the sonic point, the accretion rate should
be higher than calculated.
Some other assumptions are more likely to be valid. Diffusion
should be weak because of the high Mach number, which approaches unity
already at large radius. Magnetic helicity was found to play a very small
dynamical role. Only when the initial turbulence is highly
helical may magnetic helicity conservation lead to a smaller
accretion rate. The neglect of radiative cooling is justified a
posteriori. The line cooling time is about $20$ times larger than the
inflow time from the outer boundary.
This study is an extension of the basic theory, but realistic
analytical models should include more physics. That work is
underway.
\begin{theacknowledgments}
I thank my advisor Prof. Ramesh Narayan for fruitful discussions.
\end{theacknowledgments}
\bibliographystyle{aipproc}
\section{Introduction}
Located at about 1$'$ to the NW of the Orion Trapezium, the
BN/KL region, as the closest region of massive star formation,
has been the subject of extensive studies.
Recently, Rodr\'\i guez et al. (2005) and G\'omez et al. (2005)
reported large proper motions (equivalent to velocities of the order of
a few tens of km s$^{-1}$) for the radio sources associated with the infrared sources
BN and n, as well as for the radio source I. All three objects
are located at the core of the BN/KL region and appear
to be moving away from a common point where they must all have been
located about 500 years ago.
Even though these proper motions are now available, there is no
radial velocity information for these three sources, with the
exception of the near-infrared spectroscopic study of BN
made by Scoville et al. (1983), who report an LSR radial
velocity of +21 km s$^{-1}$ for this source.
In this paper we present 7 mm continuum and H53$\alpha$
radio recombination line observations of the BN/KL region in an
attempt to obtain additional information on the radial velocities of
these sources.
\section{Observations}
The 7 mm observations were made in the B configuration
of the VLA of the NRAO\footnote{The National Radio
Astronomy Observatory is operated by Associated Universities
Inc. under cooperative agreement with the National Science Foundation.},
during 2007 December 14. The central rest frequency observed was
that of the H53$\alpha$ line, 42951.97 MHz,
and we integrated on-source for a total of
approximately 3 hours. We observed in the spectral line
mode, with 15 channels of 1.56 MHz each (10.9 km s$^{-1}$)
and both circular polarizations. The bandpass calibrator was
0319+415. A continuum channel recorded the
central 75\% of the full spectral window. The absolute amplitude
calibrator was 1331+305
(with an adopted flux density of 1.47 Jy)
and the phase calibrator was 0541$-$056 (with a bootstrapped flux density
of 1.78$\pm$0.08 Jy). The phase noise rms was about 30$^\circ$,
indicating good weather conditions. The phase center of these observations was at
$\alpha(2000) = 05^h~35^m~14\rlap.^s13;~\delta(2000) = -05^\circ~22{'}~26\rlap.^{''}6$.
The data were acquired and reduced using the recommended VLA procedures
for high frequency data, including the fast-switching mode with a
cycle of 120 seconds.
Clean maps were
obtained using the task IMAGR of AIPS with the ROBUST parameter set to 0.
\section{Continuum Analysis}
\subsection{Spectral Indices}
In Figure 1 we show the image obtained from the continuum channel.
Three sources, BN, I and n, are evident in the image. No other sources
were detected above a 5-$\sigma$ lower limit of 1.75 mJy in our $1'$
field of view. The positions, flux
densities, and deconvolved angular sizes of these sources are
given in Table 1. The continuum flux density of the sources
has been obtained from the line-free channels.
The line emission will be discussed below.
The flux density obtained at 7 mm by us
for BN is in good agreement with the values previously reported in
the literature:
we obtain a flux density of 28.6$\pm$0.6 mJy, while
values of 31$\pm$5 and 28.0$\pm$0.6 mJy were obtained by Menten \& Reid (1995)
and Chandler \& Wood (1997), respectively.
In the case of source I, the agreement is acceptable,
since we obtain a flux density of 14.5$\pm$0.7 mJy,
while values of
13$\pm$2 and 10.8$\pm$0.6 mJy were reported by Menten \& Reid (1995)
and Chandler \& Wood (1997), respectively.
Careful monitoring would be required
to test if the radio continuum from source I is variable in time.
The spectral indices determined from our 7 mm observations and the
3.6 cm observations of G\'omez et al. (2008) are given in the last column of Table 2.
Our spectral indices for BN and
I are in excellent agreement in this spectral range with the more detailed analysis
presented by Plambeck et al. (1995) and Beuther et al. (2004).
We have detected source n for the first time
at 7 mm and this detection allows the first estimate of the spectral index of this source
over a wide frequency range.
The value of 0.2$\pm$0.1 suggests marginally thick free-free emission, as expected in
an ionized outflow. This supports the interpretation of this source
as an ionized outflow by G\'omez et al. (2008).
The position given by us in Table 1 is consistent with the
extrapolation of the proper motions of this source discussed by G\'omez et al. (2008).
\subsection{Deconvolved Angular Sizes}
The radio source I has parameters
consistent with an optically thick free-free source (spectral
index of $1.5\pm0.1$).
Beuther et al. (2004) suggest that this spectral index is either the result of
optically thick free-free plus dust emission, or $H^-$ free-free emission
that gives rise to a power-law spectrum with an index of $\sim$1.6.
In the case of the radio source associated with the infrared source n
we only have an upper limit to its size at 7 mm. In addition,
G\'omez et al. (2008) report important morphological variations
over time in this source
that suggest that comparisons at different frequencies should be made
only from simultaneous observations.
In the case of BN,
the frequency dependences of flux density and angular size (this last
parameter taken to
be the geometric mean of the major and minor axes reported in Tables 1 and 2) can be accounted for with
a simple model of a sphere of ionized gas in which
the electron density
decreases as a power-law function of radius, $n_e \propto r^{-\alpha}$.
In this case, the flux density of the source is expected to go with
frequency as $S_\nu \propto \nu^{(6.2-4\alpha)/(1-2\alpha)}$ and the angular size is expected to go with
frequency as $\theta_\nu \propto \nu^{2.1/(1-2\alpha)}$ (Reynolds 1986).
The frequency dependences of flux density ($S_\nu \propto \nu^{1.1\pm0.1}$) and angular
size ($\theta_\nu \propto \nu^{-0.36\pm0.12}$) for
BN are consistent with a steeply declining electron density
distribution
with power law index of
$\alpha = 3.0\pm0.3$. The continuum spectrum of BN produced
by Plambeck et al. (1995) indicates that a constant
spectral index extends from 5 to 100 GHz.
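As a quick consistency check of the Reynolds (1986) scalings quoted
above, setting $\alpha=3$ gives
$$S_\nu \propto \nu^{(6.2-12)/(1-6)}=\nu^{1.16}, \qquad
\theta_\nu \propto \nu^{2.1/(1-6)}=\nu^{-0.42},$$
both compatible, within the quoted uncertainties, with the measured
exponents $1.1\pm0.1$ and $-0.36\pm0.12$.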
\section{Analysis of the H53$\alpha$ Recombination Line Emission}
\subsection{Radial LSR Velocity}
We clearly detected the H53$\alpha$ line emission only from BN.
The spectrum is shown in Figure 2. The parameters of
the Gaussian least squares fit to the profile are given in Table 3.
We note that the radial LSR velocity determined by us, $+20.1\pm2.1$
km s$^{-1}$, agrees well with the value of $+21$ km s$^{-1}$
reported by Scoville et al. (1983) from near-IR spectroscopy.
In a single dish study of the H41$\alpha$ line made with an
angular resolution of 24$''$ toward
Orion IRc2, Jaffe \& Mart\'\i n-Pintado (1999) report emission
with $v_{LSR}$ = -3.6 km s$^{-1}$.
Most likely, this is emission from the ambient H~II region, since
its radial velocity practically coincides with the
value determined for the large H~II region (Orion A) ionized by
the Trapezium stars (e.g., Peimbert et al. 1988).
The single dish observations of the H51$\alpha$ emission
of Hasegawa \& Akabane (1984), made with an angular resolution of 33$''$,
most probably come also from the ambient ionized gas and not
from BN.
\subsection{LTE Interpretation}
If we assume that the line emission is optically thin and in LTE,
the electron temperature, $T_e^*$, is given by
(Mezger \& H\"oglund 1967; Gordon 1969; Quireza et al. 2006):
\begin{equation}\Biggl[{{T_e^*} \over {K}}\Biggr] = \Biggl[7100 \biggl({{\nu_L} \over {GHz}} \biggr)^{1.1}
\biggl({{S_C} \over {S_L}} \biggr) \biggl({{\Delta v} \over {km~s^{-1}}}\biggr)^{-1}
(1 + y^+)^{-1} \Biggr]^{0.87}, \end{equation}
\noindent where $\nu_L$ is the line frequency, $S_C$ is the continuum flux density,
$S_L$ is the peak line flux density, $\Delta v$ is the FWHM line width, and
$y^+$ is the ionized helium to ionized hydrogen abundance ratio.
In the case of BN, we can adopt $y^+ \simeq 0$ given that the
source is not of very high luminosity, and using the values given in Tables 1 and 3,
we obtain $T_e^* \simeq 8,200$ K. This value is similar to that
determined for the nearby Orion A from radio recombination lines (e.g., Lichten, Rodr\'\i guez, \&
Chaisson 1979).
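For illustration, a minimal numerical sketch of eqn. (1) follows; the
line parameters used below are placeholders (Table 3 is not reproduced
here), chosen only so that the product $S_L \Delta v \simeq 400$ mJy
km s$^{-1}$ reproduces the quoted $T_e^* \simeq 8,200$ K for BN.
\begin{verbatim}
def lte_electron_temperature(nu_ghz, s_c_mjy, s_l_mjy, dv_kms, y_plus=0.0):
    """LTE electron temperature T_e* in K, from eqn. (1)."""
    bracket = (7100.0 * nu_ghz**1.1 * (s_c_mjy / s_l_mjy)
               / dv_kms / (1.0 + y_plus))
    return bracket**0.87

# Placeholder H53alpha line parameters (NOT the Table 3 values):
print(lte_electron_temperature(42.952, 28.6, 14.4, 28.0))  # ~8200 K
\end{verbatim}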
It is somewhat
surprising that we get a very reasonable estimate for $T_e^*$ when our previous discussion
seemed to imply that BN is partially optically thick at 7 mm.
One possibility is that we have two effects fortuitously canceling each other. For example, the
optical thickness of the source will diminish the
line emission, while maser effects (such as those observed
in MWC 349; Mart\'\i n-Pintado et al. 1989) will amplify the line.
However, in an attempt to understand this result in LTE conditions, we will discuss the expected
LTE radio recombination line emission from
a sphere of ionized gas in which the electron density
decreases as a power-law function of radius, $n_e \propto r^{-\alpha}$.
As noted before, the modeling of the continuum emission from such a source
was presented in detail by Panagia \& Felli (1975) and Reynolds (1986). The radio recombination line emission
for the case $\alpha = 2$ has been discussed by Altenhoff, Strittmatter, \&
Wendker (1981) and Rodr\'\i guez (1982).
Here we generalize the derivation of the recombination line emission
to the case of $\alpha > 1.5$. This lower limit is
adopted to prevent the total emission from the source from diverging.
For a sphere of ionized gas, the free-free continuum emission will be given by
(Panagia \& Felli 1975):
\begin{equation}S_C = 2 \pi {{r_0^2} \over {d^2}} B_\nu \int_0^\infty
\biggl(1 - exp[-\tau_C(\xi)]\biggr)~ \xi~ d\xi, \end{equation}
\noindent where $r_0$ is a reference radius, $d$ is the distance to the source,
$B_\nu$ is Planck's function, $\xi$ is the projected radius in units of $r_0$,
and $\tau_C(\xi)$ is the continuum optical depth along the line of sight with
projected radius $\xi$. On the other hand, the free-free continuum plus
radio recombination line emission will be given by an equation similar to eqn. (2), but with the
continuum opacity substituted by the continuum plus line opacity (Rodr\'\i guez 1982):
\begin{equation}S_{L+C} = 2 \pi {{r_0^2} \over {d^2}} B_\nu \int_0^\infty \biggl(1 - exp[-\tau_{L+C}(\xi)]
\biggr) \xi d\xi, \end{equation}
\noindent where $\tau_{L+C}(\xi)$ is the line plus continuum optical depth along the line of sight with
projected radius $\xi$.
The line-to-continuum ratio will be given by:
\begin{equation}{{S_L} \over {S_C}} = {{S_{L+C} - S_C} \over {S_C}}. \end{equation}
The opacity of these emission processes depends on projected radius as (Panagia \& Felli 1975):
\begin{equation}\tau(\xi) \propto \xi^{-(2 \alpha -1)}. \end{equation}
We now introduce the definite integral (Gradshteyn \& Ryzhik 1994)
\begin{equation}\int_0^\infty [1- exp(-\mu x^{-p})]~x~ dx =
- {{1} \over {p}}~ \mu^{{2} \over{p}}~ \Gamma(-{{2} \over{p}}), \end{equation}
\noindent valid for $\mu > 0$ and $p > 0$ and with $\Gamma$ being the Gamma function.
Substituting eqns. (2) and (3) in eqn. (4), and using the integral
defined in eqn. (6), it can be shown that
\begin{equation}{{S_L} \over {S_C}} = \Biggl[{{\kappa_L + \kappa_C}
\over {\kappa_C}} \Biggr]^{1/(\alpha -0.5)} - 1, \end{equation}
\noindent where $\kappa_L$ and $\kappa_C$ are the line and continuum absorption coefficients
at the frequency of observation, respectively.
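To make the intermediate step explicit: by eqn. (5), the optical
depths entering eqns. (2) and (3) are of the form
$\tau(\xi)=\mu\,\xi^{-(2\alpha-1)}$, so eqn. (6) with $p=2\alpha-1$
gives $S \propto \mu^{2/(2\alpha-1)}=\mu^{1/(\alpha-0.5)}$ for both the
continuum and the line plus continuum; hence
$$\frac{S_{L+C}}{S_C}={\left( \frac{\mu_{L+C}}{\mu_C} \right)}^{1/(\alpha -0.5)},$$
which becomes eqn. (7), through eqn. (4), once $\mu \propto \kappa$
(the assumption discussed next).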
In this last step we have also
assumed that the opacity of the line and continuum processes are proportional to
the line and continuum absorption coefficients, respectively, that is, that the
physical depths producing the line and continuum emissions are the
same. Under the LTE assumption, we have
that
\begin{equation}{{\kappa_L} \over {\kappa_C}} = 7100 \biggl({{\nu_L} \over {GHz}} \biggr)^{1.1}
\biggl({{T_e^*} \over {K}} \biggr)^{-1.1} \biggl({{\Delta v} \over {km~s^{-1}}}\biggr)^{-1}
(1 + y^+)^{-1}. \end{equation}
For $\nu \leq$ 43 GHz and typical parameters of an H II region, we
can see from eqn. (8) that $\kappa_L<\kappa_C$, and
eqn. (7) can be approximated by:
\begin{equation}{{S_L} \over {S_C}} \simeq {{1} \over
{(\alpha -0.5)}} \Biggl[{{\kappa_L} \over {\kappa_C}} \Biggr]. \end{equation}
That is, the expected optically-thin, LTE line-to-continuum ratio:
\begin{equation}{{S_L} \over {S_C}} \simeq \Biggl[{{\kappa_L} \over {\kappa_C}} \Biggr], \end{equation}
\noindent becomes attenuated by a factor $1/(\alpha -0.5)$. In the case of $\alpha = 2$,
the factor is 2/3, and we reproduce the result of Altenhoff, Strittmatter, \&
Wendker (1981) and Rodr\'\i guez (1982). In the case of BN, we have that $\alpha \simeq 3$, and
we expect the attenuation factor to be 2/5. If BN can be modeled this way, we would have expected
to derive electron temperatures under the LTE assumption (see eqn. 1) of order
\begin{equation}T_e^*(\alpha = 3) \simeq 2.2~ T_e^*(thin). \end{equation}
However, from the discussion in the first paragraph of this section,
we determine observationally that
\begin{equation}T_e^*(\alpha = 3) \simeq T_e^*(thin). \end{equation}
Summarizing: i) BN seems to have significant optical depth in the continuum at
7 mm, ii) this significant optical depth should attenuate the observed recombination
line emission with respect to the optically-thin case, but iii) the line emission seems
to be as strong as in the optically-thin case.
As possible explanations for the ``normal'' (apparently optically-thin and in LTE)
radio recombination line emission
observed from BN we can think of two options.
The first is that, as noted before, there is a non-LTE line-amplifying
mechanism that approximately compensates for the optical depth attenuation.
The second possibility is that the free-free emission from BN at 7 mm is already optically thin.
However, this last possibility seems to be in contradiction with the results
of Plambeck et al. (1995) that suggest a single spectral index
from 5 to 100 GHz. Observations of radio recombination lines around
100 GHz are needed to solve this problem.
A comparison with the H53$\alpha$ emission from the hypercompact H~II
region G28.20-0.04N is also of interest.
The continuum flux densities from this source at
21, 6, 3.6, and 2 cm are 49, 135, 297, and 543 mJy, respectively
(Sewilo et al. 2004). At 7 mm the continuum flux density is 641 mJy
(Sewilo et al. 2008), indicating
that the source has become optically thin at this wavelength.
Using the H53$\alpha$ line parameters given by (Sewilo et al. 2008)
we derive an LTE electron temperature of $T_e^* \simeq 7,600$ K,
similar to the value for BN and in this case consistent with
the optically-thin nature of G28.20-0.04N.
The non-detection of H53$\alpha$ emission from radio source I is consistent
with its expected large optical depth. The formulation above implies $\alpha \simeq 5$, and an
attenuation factor of 2/9.
This confirms the notion that BN and radio source I are two sources
intrinsically very different in nature.
This difference is also evident in the brightness temperature of both sources.
At 7 mm, the brightness temperature of a source is
\begin{equation}\Biggl[{{T_B} \over {K}} \Biggr] \simeq 0.96 \Biggl[{{S_\nu} \over {mJy}}
\Biggr] \Biggl[{{\theta_{maj} \times
\theta_{min}} \over {arcsec^2}} \Biggr]^{-1}. \end{equation}
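The coefficient can be recovered (a consistency sketch, assuming a
uniform elliptical source of solid angle
$\Omega = (\pi/4)\,\theta_{maj}\theta_{min}$) from the Rayleigh-Jeans
relation $T_B = S_\nu \lambda^2/(2k\Omega)$, which at $\nu \simeq 43$
GHz indeed gives $T_B \simeq 0.96$ K for $S_\nu = 1$ mJy and
$\theta_{maj}\theta_{min} = 1$ arcsec$^2$.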
Using the values of Table 1, we get $T_B \simeq$ 7,800 K for BN, confirming
its nature as photoionized gas. However, for the radio source I we get
$T_B \simeq$ 2,600 K. So, even when source I seems to be optically thick, its
brightness temperature is substantially lower than that expected for
a photoionized region. Reid et al. (2007) have discussed as possible
explanations for this low brightness temperature $H^-$ free-free opacity or
a photoionized disk.
Following the discussion of Reid et al. (2007), we consider
it unlikely that dust emission could be a dominant contributor to the 7 mm emission of BN or
Orion I. A dense, warm, dusty disk would be expected to show many molecular lines at
millimeter/submillimeter wavelengths. While Beuther et al. (2006) and Friedel
\& Snyder (2008) find numerous strong
molecular lines toward the nearby ``hot core'', they find no strong lines toward the position of
Orion I (with the exception of
the strong SiO masers slightly offset from Orion I) or BN.
Also, the brightness temperatures derived by us at 7 mm (7,800 K for BN and
2,600 K for source I) are
high enough to sublimate dust and suggest that free-free emission from
ionized gas dominates the continuum emission.
Finally, the continuum spectra of BN and of source I measured by Plambeck et al. (1995)
and Beuther et al. (2006), respectively, suggest that the dust
emission becomes dominant only above $\sim$300 GHz.
In the case of source n, no detection was expected given its
weakness even in the continuum.
\subsection{Spatial Distribution of the H53$\alpha$ Line Emission}
The H53$\alpha$ line emission in the individual velocity
channels shows evidence of structure but unfortunately the signal-to-noise
ratio is not large enough to reach reliable conclusions from the
analysis of these individual channels. However, an image
with good signal-to-noise ratio can be obtained averaging over the velocity
range of -21.2 to +66.1 km s$^{-1}$, using the task MOMNT in
AIPS. This line image is compared
in Figure 3 with a continuum image
made from the line-free channels.
The larger apparent size of the continuum image is simply the
result of its much better signal-to-noise ratio.
For the total line emission we obtain an upper limit of
$0\rlap.{''}12$ for its size, that is consistent with the
size of the continuum emission given in Table 1.
We also show images of the blueshifted (-21.2 to +22.5 km s$^{-1}$)
and redshifted (+22.5 to 66.1 km s$^{-1}$) line emission in Figure 3.
The cross in the figure indicates the centroid of the total line
emission. The centroid of the line emission does not appear to
coincide with the centroid of the continuum emission and
we attribute this to opacity effects.
An interesting conclusion comes from comparing the total
line emission, with the blueshifted and redshifted components.
The blueshifted emission seems slightly shifted to the SW, while the
redshifted emission seems slightly shifted to the NE, suggesting a
velocity gradient. This result supports the suggestion of
Jiang et al. (2005) of the presence of an outflow in BN along a
position angle of 36$^\circ$. Given the modest signal-to-noise ratio
of the data, it is difficult to estimate the magnitude
of the velocity shift and we crudely assume it is of order one
channel ($\sim$10 km s$^{-1}$), since most of the line
emission is concentrated in the central two channels
of the spectrum (see Figure 2). The position shift between the blueshifted and
the redshifted emissions is $0\rlap.{''}028 \pm 0\rlap.{''}007$
($12 \pm 3$ AU at the distance of 414 pc given by Menten et al. 2007), significant to the
4-$\sigma$ level. Unfortunately, the data of Jiang et al. (2005) do not
include line observations and there is no kinematic information in their paper to
compare with our results.
The small velocity gradient observed by us in BN is consistent with a
slow bipolar outflow but also with Keplerian rotation around a central mass
of only 0.2 $M_\odot$.
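As a rough check of this last figure: taking half the measured position
shift, $r \simeq 6$ AU, and half the assumed velocity shift,
$v \simeq 5$ km s$^{-1}$, Keplerian rotation requires
$$M \simeq \frac{v^2 r}{G} \simeq 0.2~M_\odot.$$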
\section{Conclusions}
We presented observations of the H53$\alpha$ recombination line
and adjacent continuum toward the Orion BN/KL region.
In the continuum we detect the BN object, the radio source
I (GMR I) and the radio counterpart of the infrared source n
(Orion-n) and discuss its parameters.
In the H53$\alpha$ line we only detect the BN object,
the first time that radio recombination lines have been detected from this source.
The LSR radial velocity of BN from the H53$\alpha$ line, $v_{LSR} = 20.1 \pm 2.1$
km s$^{-1}$,
is consistent with that found from previous studies in near-infrared lines,
$v_{LSR} = 21$ km s$^{-1}$.
We discuss the line-to-continuum ratio from BN and present evidence
for a possible velocity gradient across this source.
\acknowledgments
LFR and LAZ acknowledge the support
of CONACyT, M\'exico and DGAPA, UNAM.
{\it Facilities:} \facility{VLA}
\section{Introduction}
The study of magnetic models has
generated considerable progress in the understanding
of magnetic materials,
and lately it has gone beyond the frontiers of magnetism,
being considered in many areas of knowledge.
Certainly, the Ising model represents one of the most
studied and important models of magnetism and
statistical mechanics~\cite{huang,reichl},
and it has been employed also to typify a wide variety of
physical systems, like lattice gases, binary alloys, and
proteins (with a particular interest in the problem of protein
folding).
Although real magnetic systems should be properly
described by means of Heisenberg spins (i.e.,
three-dimensional variables), many materials are
characterized by anisotropy fields that make these
spins prefer given directions in space, explaining
why simple models, characterized by
binary variables, became so important
for the area of magnetism.
Particularly, models defined in terms of Ising variables
have shown the ability to exhibit a wide variety
of multicritical behavior
when randomness and/or competing
interactions are introduced, which has attracted the attention of many
researchers~(see, e.g., Refs.~\cite{aharony,mattis,kaufman,nogueira98,%
nuno08a,nuno08b,salmon1,morais12}).
Certainly, the simplicity of Ising variables,
which are very suitable for both analytical and numerical
studies, has led to proposals of important models outside
the scope of magnetism, particularly in the
area of complex systems.
These models have been successful in describing
a wide variety of relevant
features of such systems, and have raised
interest in many fields, like
financial markets, optimization problems,
biological membranes, and social behavior.
In some cases, more than one set of Ising variables has been used,
especially by considering a coupling between them, as
proposed within the framework of choice
theories~\cite{fernandez}, or in plastic
crystals~\cite{plastic1,brandreview,folmer}.
In the former case, each set of Ising variables represents
a group of identical individuals, all of which can make two
independent binary choices.
\begin{figure}[htp]
\begin{center}
\includegraphics[height=5.5cm]{figure1.eps}
\end{center}
\vspace{-1cm}
\caption{Illustrative pictures of the three phases as the temperature
increases: the low-temperature (ordered) solid, the intermediate
plastic crystal, and the high-temperature (disordered) liquid phase.
In the plastic state the centers of mass of the molecules form a
regular crystalline lattice, but the molecules are
disordered with respect to their orientational degrees of freedom.}
\label{fig:fasesdecristais}
\end{figure}
The so-called plastic
crystals~\cite{plastic1,brandreview,folmer,michel85,michel87,%
galam87,galam89,salinas1,salinas2} appear as states
of some compounds considered to be simpler than those of canonical
glasses, but still presenting rather nontrivial
relaxation and equilibrium properties. Such a plastic
phase corresponds to an intermediate stable state between a
high-temperature (disordered) liquid phase and a low-temperature
(ordered) solid phase; both transitions,
namely, liquid-plastic and plastic-solid, are first order.
In this intermediate phase, the rotational disorder coexists
with a translationally ordered state, characterized by
the centers of mass of the molecules forming a regular crystalline
lattice with the molecules presenting disorder in their
orientational degrees of freedom, as shown
in Fig.~\ref{fig:fasesdecristais}.
Many materials undergo a liquid-plastic phase transition,
where the lower-temperature phase presents such a
partial orientational order, like the plastic crystal
of Fig.~\ref{fig:fasesdecristais}.
The property of translational invariance makes the plastic crystals
much simpler to be studied from both analytical and numerical
methods, becoming very useful towards a proper
understanding of the glass transition~\cite{plastic1,brandreview,folmer}.
In some plastic-crystal models one introduces a coupling
between two Ising models, associating these
systems, respectively, with the translational and rotational degrees of
freedom~\cite{galam87,galam89,salinas1,salinas2},
as a proposal for explaining satisfactorily the
thermodynamic properties of the plastic phase.
Accordingly, spin variables $\{t_{i}\}$ and $\{r_{i}\}$ are introduced in
such a way to mimic translational
and rotational degrees of freedom of each molecule $i$, respectively.
The following Hamiltonian is
considered~\cite{galam87,galam89,salinas1,salinas2},
\begin{equation}
\label{eq:hamplastcrystals}
{\cal H} = - J_{t}\sum_{\langle ij \rangle}t_{i}t_{j}
- J_{r} \sum_{\langle ij \rangle}r_{i}r_{j}
- \sum_{i} (\alpha t_{i} + h_{i})r_{i}~,
\end{equation}
\vskip \baselineskip
\noindent
where $\sum_{\langle ij \rangle}$ represents a sum over
distinct pairs of nearest-neighbor spins.
In the first summation, the Ising variables $t_{i}=\pm 1$
may characterize two lattices A and B (or occupied and vacant sites).
One notices that the rotational
variables $r_{i}$ could be, in principle, continuous variables,
although the fact that the minimization of the coupling contribution
$\alpha t_{i}r_{i}$ is achieved
for $t_{i}r_{i} =1$ ($\alpha>0$), or
for $t_{i}r_{i} =-1$ ($\alpha<0$), suggests the simpler choice
of binary variables ($r_{i}=\pm 1$) to be appropriate,
based on the energy minimization requirement.
In the present model the variables $t_{i}$ and
$r_{i}$ represent very different characteristics of a
molecule. Particularly, the rotational variables $r_{i}$
are expected to change more freely than the translational ones;
for this reason, one introduces a random field acting only
on the rotational degrees of freedom.
In fact, the whole contribution $\sum_{i} (\alpha t_{i} + h_{i})r_{i}$
is known to play a fundamental role for the plastic phase of
ionic plastic crystals, like the alkali cyanides KCN, NaCN, and RbCN.
In spite of its simplicity, the above Hamiltonian is able to capture
the most relevant features of the plastic-crystal phase, as well
as the associated phase transitions,
namely, liquid-plastic and plastic-solid ones~\cite{michel85,michel87,%
galam87,galam89,salinas1,salinas2,vives}.
A system described by a Hamiltonian slightly different
from the one of~\eq{eq:hamplastcrystals}, in which the
whole contribution
$\sum_{i} (\alpha t_{i} + h_{i})r_{i}$ was replaced by
$\sum_{i} \alpha_{i} t_{i}r_{i}$, i.e., with no random
field acting on variable $r_{i}$ separately, was considered
in Ref.~\cite{salinas2}. In such a work one finds a detailed
analysis of the phase diagrams and order-parameter behavior
of the corresponding model. However, to our knowledge,
previous investigations on the model defined
by~\eq{eq:hamplastcrystals} have not
considered thoroughly the effects of the random
field $h_{i}$, with a particular attention to the phase diagrams
for the case of a randomly distributed bimodal
one, $h_{i}=\pm h_{0}$;
this represents the main motivation
of the present work.
In the next section we define the model, determine its free-energy
density, and describe the
numerical procedure to be used.
In Section III we exhibit typical phase diagrams
and analyze the behavior of the corresponding order parameters,
for both zero and finite temperatures; the ability of the model
to exhibit a rich variety of phase diagrams, characterized
by multicritical behavior, is shown.
Finally, in Section IV we present our main conclusions.
\section{The Model and Free-Energy Density}
Based on the discussion of the previous section, herein
we consider a system composed by two interacting Ising models,
described by the Hamiltonian
\begin{equation}
\label{eq:hamiltonian1}
{\cal H}(\{h_{i}\}) = - J_{\sigma} \sum_{(ij)}\sigma_{i}\sigma_{j}
- J_{\tau} \sum_{(ij)}\tau_{i}\tau_{j} + D\sum_{i=1}^{N}\tau_{i}\sigma_{i}
-\sum_{i=1}^{N}h_{i}\tau_{i}~,
\end{equation}
\vskip \baselineskip
\noindent
where $\sum_{(ij)}$ represents a sum over all distinct pairs of spins,
an infinite-range limit for which the mean-field approximation becomes exact. Moreover,
$\tau_{i}= \pm 1$ and $\sigma_{i}= \pm 1$ ($i=1,2, \cdots , N$) depict
Ising variables,
$D$ stands for a real parameter, whereas both $J_{\sigma}$ and
$J_{\tau}$ are positive coupling constants, which will be
restricted herein to
the symmetric case, $J_{\sigma}=J_{\tau}=J>0$. Although this latter
condition may seem a rather artificial simplification of the
Hamiltonian in~\eq{eq:hamplastcrystals}, the application of a
random field $h_{i}$ acting separately on one set of variables, will
produce the expected distinct physical behavior associated with
$\{ \tau_{i} \}$ and $\{ \sigma_{i} \}$. The random fields
$\{ h_{i} \}$ will be considered as following
a symmetric bimodal probability distribution function,
\begin{equation}
\label{eq:hpdf}
P(h_{i}) = \frac{1}{2} \, \delta(h_{i}-h_{0}) +\frac{1}{2} \, \delta(h_{i}+h_{0})~.
\end{equation}
\vskip \baselineskip
\noindent
The infinite-range character of the interactions allows one to write the above
Hamiltonian in the form
\begin{equation}
\label{eq:hamiltonian2}
{\cal H}(\{h_{i}\})= - \frac{J}{2N}{\left (\sum_{i=1}^{N}\sigma_{i} \right )}^{2}
- \frac{J}{2N}{\left (\sum_{i=1}^{N}\tau_{i} \right )}^{2}
+D\sum_{i=1}^{N}\tau_{i}\sigma_{i} -\sum_{i=1}^{N}h_{i}\tau_{i}~,
\end{equation}
\vskip \baselineskip
\noindent
from which one may calculate the partition function associated with
a particular configuration of the fields $\{ h_{i}\}$,
\begin{equation}
Z(\{h_{i}\}) = {\rm Tr} \exp \left[- \beta {\cal H}(\{h_{i}\}) \right]~,
\end{equation}
\vskip \baselineskip
\noindent
where $\beta=1/(kT)$ and
${\rm Tr} \equiv {\rm Tr}_{\{ \tau_{i},\sigma_{i}=\pm 1 \}} $ indicates a sum over
all spin configurations. One can now make use of
the Hubbard-Stratonovich transformation~\cite{dotsenkobook,nishimoribook}
to linearize the quadratic terms,
\begin{equation}
Z(\{h_{i}\}) = \frac{1}{\pi} \int_{-\infty}^{\infty}dx dy \exp(-x^{2}-y^{2})
\prod_{i=1}^{N} {\rm Tr} \exp [ H_{i}(\tau,\sigma)]~,
\end{equation}
\vskip \baselineskip
\noindent
where $H_{i}(\tau,\sigma)$ depends on the random
fields $\{ h_{i}\}$,
as well as on the spin variables, being given by
\begin{equation}
H_{i}(\tau,\sigma) = \sqrt{\frac{2\beta J}{N}} \ x \tau + \sqrt{\frac{2\beta J}{N}} \ y \sigma
- \beta D \tau \sigma + \beta h_{i} \tau~.
\end{equation}
\vskip \baselineskip
\noindent
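For clarity, the transformation used above is the Gaussian identity
$$\exp \left[ \frac{\beta J}{2N}{\left( \sum_{i=1}^{N}s_{i} \right)}^{2} \right]
=\frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty}dx \,
\exp \left[ -x^{2}+\sqrt{\frac{2\beta J}{N}} \, x\sum_{i=1}^{N}s_{i} \right]~,$$
applied once with $s_{i}=\tau_{i}$ (integration variable $x$) and once
with $s_{i}=\sigma_{i}$ (integration variable $y$).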
Performing the trace over the spins and defining new variables, related to
the respective order parameters,
\begin{equation}
\label{eq:mtausigma}
m_{\tau} = \sqrt{\frac{2kT}{JN}} \ x~; \qquad
m_{\sigma} = \sqrt{\frac{2kT}{JN}} \ y~,
\end{equation}
\vskip \baselineskip
\noindent
one obtains
\begin{equation}
Z(\{h_{i}\})= \frac{\beta J N}{2 \pi} \int_{-\infty}^{\infty} dm_{\tau} dm_{\sigma} \exp[N g_{i} (m_{\tau},m_{\sigma})]~,
\end{equation}
\vskip \baselineskip
\noindent
where
\begin{eqnarray}
g_{i}(m_{\tau},m_{\sigma}) &=& - \frac{1}{2} \beta J m_{\tau}^{2}
- \frac{1}{2} \beta J m_{\sigma}^{2} + \log \left \{
2e^{-\beta D} \cosh[\beta J(m_{\tau}+m_{\sigma}+h_{i}/J)]
\right. \nonumber \\ \nonumber \\
\label{eq:gimtausigma}
&+& \left. 2e^{\beta D} \cosh[\beta J(m_{\tau}-m_{\sigma}+h_{i}/J)] \right \}~.
\end{eqnarray}
\vskip \baselineskip
\noindent
Now, one takes the thermodynamic limit ($N \rightarrow \infty$), and uses the saddle-point
method to obtain
\begin{equation}
Z = \displaystyle \frac{\beta J N}{2 \pi} \int_{-\infty}^{\infty} dm_{\tau} dm_{\sigma}
\exp[-N \beta f(m_{\tau},m_{\sigma})]~,
\end{equation}
\vskip \baselineskip
\noindent
where the free-energy density functional $f(m_{\tau},m_{\sigma})$ results
(up to the factor $-1/\beta$) from a quenched average of
$g_{i}(m_{\tau},m_{\sigma})$ in~\eq{eq:gimtausigma} over the
bimodal probability distribution of~\eq{eq:hpdf},
\begin{equation}
\label{eq:freeenergy}
f(m_{\tau},m_{\sigma}) = \displaystyle \frac{1}{2} J m_{\tau}^{2}
+ \frac{1}{2} J m_{\sigma}^{2} - \frac{1}{2\beta}\log Q(h_{0})
- \frac{1}{2\beta}\log Q(-h_{0})~,
\end{equation}
\vskip \baselineskip
\noindent
with
\begin{equation}
Q(h_{0}) = 2e^{-\beta D} \cosh[\beta J(m_{\tau}+m_{\sigma} + h_{0}/J)]
+2e^{\beta D} \cosh[\beta J(m_{\tau}-m_{\sigma} + h_{0}/J)]~.
\end{equation}
\vskip \baselineskip
\noindent
The extremization of the free-energy density above with respect to the
parameters $m_{\tau}$ and $m_{\sigma}$ yields the following equations of state,
\begin{eqnarray}
\label{eq:mtau}
m_{\tau} &=& \frac{R_{+}(h_{0})}{Q(h_{0})}
+ \frac{R_{+}(-h_{0})}{Q(-h_{0})}~,
\\ \nonumber \\
\label{eq:msigma}
m_{\sigma} &=& \frac{R_{-}(h_{0})}{Q(h_{0})}
+ \frac{R_{-}(-h_{0})}{Q(-h_{0})}~,
\end{eqnarray}
\vskip \baselineskip
\noindent
where
\begin{equation}
R_{\pm}(h_{0}) = e^{-\beta D} \sinh[\beta J(m_{\tau}+m_{\sigma} + h_{0}/J)]
\pm e^{\beta D} \sinh[\beta J(m_{\tau}-m_{\sigma} +h_{0}/J)]~.
\end{equation}
\vskip \baselineskip
\noindent
In the following section we present numerical results for the
order parameters and phase diagrams of the model, at both
zero and finite temperatures.
All phase diagrams are represented
by rescaling conveniently the energy parameters of the system, namely,
$kT/J$, $h_{0}/J$ and $D/J$.
Therefore, for given values of these dimensionless parameters,
the equations of state [Eqs.(\ref{eq:mtau}) and~(\ref{eq:msigma})]
are solved numerically for $m_{\tau}$ and $m_{\sigma}$.
In order to avoid metastable states, all solutions obtained for
$m_{\tau} \in [-1,1]$ and $m_{\sigma} \in [-1,1]$ are
substituted in~\eq{eq:freeenergy},
to check for the minimization of the free-energy density.
The continuous (second order) critical frontiers are found by the set
of input values for which the order parameters fall continuously down to
zero, whereas the first-order frontiers were found through
Maxwell constructions.
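As an illustration of this numerical procedure, a minimal sketch in
Python is shown below (the parameter values are arbitrary examples; this
is not the production code used for the figures). It iterates
Eqs.~(\ref{eq:mtau}) and~(\ref{eq:msigma}) to a fixed point, after which
competing solutions must still be compared through the free-energy
density of~\eq{eq:freeenergy}. In the decoupled limit $D=h_{0}=0$ it
correctly reduces to $m=\tanh(\beta J m)$.
\begin{verbatim}
import numpy as np

def iterate(mt, ms, kT, h0, D):
    """One sweep of the equations of state; kT, h0, D in units of J."""
    b = 1.0 / kT                      # beta*J
    def Q(h):
        return (2*np.exp(-b*D)*np.cosh(b*(mt + ms + h)) +
                2*np.exp(+b*D)*np.cosh(b*(mt - ms + h)))
    def R(h, sign):
        return (np.exp(-b*D)*np.sinh(b*(mt + ms + h)) +
                sign*np.exp(+b*D)*np.sinh(b*(mt - ms + h)))
    mt_new = R(+h0, +1)/Q(+h0) + R(-h0, +1)/Q(-h0)
    ms_new = R(+h0, -1)/Q(+h0) + R(-h0, -1)/Q(-h0)
    return mt_new, ms_new

# Example: kT/J = 0.5, h0/J = 0.3, D/J = 0.1; start near the ordered
# phase (m_tau = -m_sigma for D > 0) and iterate to convergence.
mt, ms = -0.9, 0.9
for _ in range(100000):
    mt2, ms2 = iterate(mt, ms, kT=0.5, h0=0.3, D=0.1)
    if abs(mt2 - mt) + abs(ms2 - ms) < 1e-12:
        break
    mt, ms = mt2, ms2
print(mt, ms)
\end{verbatim}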
Both ordered ($m_{\tau} \neq 0$ and $m_{\sigma} \neq 0$)
and partially-ordered ($m_{\tau}=0$ and $m_{\sigma} \neq 0$)
phases have appeared in our analysis, and will be labeled
accordingly.
The usual paramagnetic phase ({\bf P}),
given by $m_{\tau}=m_{\sigma}=0$, always occurs for sufficiently
high temperatures.
A wide variety of critical points appeared in our analysis
(herein we follow the classification due to Griffiths~\cite{griffiths}):
(i) a tricritical point signals the encounter of a continuous frontier
with a first-order line with no change of slope;
(ii) an ordered critical point corresponds to an isolated critical
point inside the ordered region, terminating a first-order line that
separates two distinct ordered phases;
(ii) a triple point, where three distinct phases coexist, signaling the
encounter of three first-order critical frontiers.
In the phase diagrams we shall use distinct symbols and
representations for the critical points and frontiers, as described below.
\begin{itemize}
\item Continuous (second order) critical frontier: continuous line;
\item First-order critical frontier: dotted line;
\item Tricritical point: located by a black circle;
\item Ordered critical point: located by a black asterisk;
\item Triple point: located by an empty triangle.
\end{itemize}
\section{Phase Diagrams and Behavior of Order Parameters}
\subsection{Zero-Temperature Analysis}
At $T=0$, one has to analyze the different spin orderings that
minimize the Hamiltonian of~\eq{eq:hamiltonian2}.
Due to the coupling between the two
sets of spins, the minimum-energy configurations will correspond to
$\{\tau_{i}\}$ and $\{\sigma_{i}\}$ antiparallel ($D>0$), or parallel ($D<0$).
Therefore, in the absence of random fields ($h_{0}=0$) one should have
$m_{\tau}=-m_{\sigma}$ ($D>0$), and
$m_{\tau}=m_{\sigma}$ ($D<0$), where $m_{\sigma}=\pm1$.
However, when random fields act on the $\{\tau_{i}\}$ spins, there will
be a competition between these fields and the coupling parameter $D$,
leading to several phases, as represented in Fig.~\ref{fig:groundstate},
in the plane $h_{0}/J$ versus $D/J$. One finds three ordered
phases for sufficiently low values of $h_{0}/J$
and $|D|/J$, in addition to {\bf P} phases for $(|D|/J)>0.5$ and $(h_{0}/J)>1$.
All frontiers shown in Fig.~\ref{fig:groundstate} are first-order critical lines.
\begin{figure}[htp]
\begin{center}
\includegraphics[height=5.5cm]{figure2.eps}
\end{center}
\caption{Phase diagram of the model defined by Hamiltonian
of~\eq{eq:hamiltonian2}, at zero temperature. All critical frontiers
represent first-order phase transitions; the empty triangles denote
triple points.}
\label{fig:groundstate}
\end{figure}
When $(h_{0}/J) \leq 1/2$ one finds ordered phases for all values of $D/J$,
with a vertical straight line at $D=0$ separating the
symmetric state ($D<0$), where $m_{\tau}=m_{\sigma}$, from the
antisymmetric one ($D>0$), characterized by $m_{\tau}=-m_{\sigma}$.
Two critical frontiers (symmetric under a reflection operation)
emerge from the triple point at
$(D/J)=0.0$ and $(h_{0}/J)=0.5$, given, respectively, by
$(h_{0}/J)=0.5 + (D/J)$ for $D>0$, and
$(h_{0}/J)=0.5 - (D/J)$ for $D<0$.
These critical frontiers terminate at $(h_{0}/J)=1.0$ and
separate the ordered phases, found at low random fields, from
a partially-ordered
phase, given by $m_{\tau}=0$ and $m_{\sigma}= \pm 1$.
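These frontier locations can be verified by a quick comparison of
ground-state energies per site in~\eq{eq:hamiltonian2} (with the field
term averaged over the bimodal distribution): the ordered phases have
$e_{\rm ord}=-J-|D|$; the partially-ordered phase, with
$\tau_{i}={\rm sign}(h_{i})$, has $e_{\rm po}=-J/2-h_{0}$; and the
{\bf P} configuration, with $\tau_{i}={\rm sign}(h_{i})$ and
$\sigma_{i}$ slaved to $\tau_{i}$, has $e_{\bf P}=-|D|-h_{0}$. Equating
these energies pairwise reproduces the frontiers
$h_{0}/J=1/2+|D|/J$, $|D|/J=1/2$, and $h_{0}/J=1$.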
As shown in Fig.~\ref{fig:groundstate}, three triple points
appear, each of them signaling the encounter
of three first-order lines, characterized by a coexistence of three phases,
defined by distinct values of the magnetizations $m_{\tau}$ and
$m_{\sigma}$, as described below.
\begin{itemize}
\item $[(D/J)=-0.5$ and $(h_{0}/J)=1.0]$~:
$(m_{\tau},m_{\sigma})=\{ (0,0);(0,\pm 1); (\pm 1, \pm 1) \}$.
\item $[(D/J)=0.5$ and $(h_{0}/J)=1.0]$~:
$(m_{\tau},m_{\sigma})=\{ (0,0);(0,\pm 1); (\pm 1, \mp 1) \}$.
\item $[(D/J)=0.0$ and $(h_{0}/J)=0.5]$~:
$(m_{\tau},m_{\sigma})=\{ (\pm 1,\pm 1);(\pm 1, \mp 1); (0, \pm 1) \}$.
\end{itemize}
Such a rich critical behavior shown for $T=0$ suggests that interesting
phase diagrams should occur when the temperature is taken
into account. From now on, we investigate the model
defined by the Hamiltonian of~\eq{eq:hamiltonian2} for finite temperatures.
\subsection{Finite-Temperature Analysis}
As shown above, the zero-temperature phase diagram presents
a reflection symmetry with respect
to $D=0$ (cf. Fig.~\ref{fig:groundstate}).
The only difference between the two sides of this phase
diagram concerns the magnetization solutions
characterizing the ordered phases for low random-field values,
where one has
$m_{\tau}=-m_{\sigma}$ ($D>0$), or
$m_{\tau}=m_{\sigma}$ ($D<0$).
These results come as a consequence
of the symmetry of the Hamiltonian of~\eq{eq:hamiltonian2},
which remains unchanged under the operations,
$D \rightarrow -D$, $\sigma_{i} \rightarrow -\sigma_{i}$ $(\forall i)$, or
$D \rightarrow -D$, $\tau_{i} \rightarrow -\tau_{i}$,
$h_{i} \rightarrow -h_{i}$ $(\forall i)$.
Hence, the finite-temperature phase diagrams should present similar
symmetries with respect to a change $D \rightarrow -D$. From now on,
for the sake of simplicity, we will restrict ourselves to the
case $(D/J) \geq 0$, for which the zero-temperature and
low-random-field magnetizations present
opposite signals, as shown in Fig.~\ref{fig:groundstate}, i.e.,
$m_{\tau}=-m_{\sigma}$~.
\begin{figure}[htp]
\begin{center}
\vspace{.5cm}
\includegraphics[height=5cm]{figure3a.eps}
\hspace{0.5cm}
\includegraphics[height=5cm]{figure3b.eps}
\end{center}
\vspace{-.5cm}
\caption{Phase diagrams of the model
defined by the Hamiltonian of~\eq{eq:hamiltonian2} in two
particular cases:
(a) The plane of dimensionless variables $kT/J$ versus $D/J$,
in the absence of random fields $(h_{0}=0)$;
(b) The plane of dimensionless variables $kT/J$
versus $h_{0}/J$, for $D=0$.
The full lines represent continuous phase transitions,
whereas the dotted line stands for a
first-order critical frontier.
For sufficiently high temperatures one finds a
paramagnetic phase ({\bf P}), whereas
the magnetizations $m_{\tau}$ and
$m_{\sigma}$ become nonzero
by lowering the temperature.
In case (b), two low-temperature phases appear, namely,
the ordered (lower values of $h_{0}$) and
the partially-ordered (higher values of $h_{0}$).
These two phases are separated by a continuous
critical frontier (higher temperatures), which turns into
a first-order critical line (lower temperatures) at a tricritical
point (black circle). The type of phase
diagram exhibited in (b) will be referred to herein as topology I.}
\label{fig:tdh00}
\end{figure}
In Fig.~\ref{fig:tdh00} we exhibit phase diagrams of the model
in two particular cases, namely, in the absence of fields $(h_{0}=0)$
[Fig.~\ref{fig:tdh00}(a)] and for zero coupling
$(D=0)$ [Fig.~\ref{fig:tdh00}(b)].
These figures provide useful
reference data in the numerical procedure
to be employed for constructing phase diagrams in
more general situations, e.g., in the plane
$kT/J$ versus $h_{0}/J$, for several values of $(D/J)>0$.
In Fig.~\ref{fig:tdh00}(a) we present the phase diagram
of the model in the plane of dimensionless variables $kT/J$ versus $D/J$,
in the absence of random fields ($h_{0}=0$),
where one sees the point $D=0$ that corresponds
to two noninteracting Ising models, leading to the well-known mean-field
critical temperature of the Ising model [$(kT_{c}/J)=1$].
Also in Fig.~\ref{fig:tdh00}(a),
the ordered solution $m_{\tau}=-m_{\sigma}$
minimizes the free energy at low temperatures
for any $D>0$; a second-order frontier separates this ordered phase
from the paramagnetic one that appears for sufficiently high temperatures.
For high values of $D/J$ one sees that this critical frontier approaches
asymptotically $(kT/J) = 2$.
Since the application of a random field results in
a decrease of the critical temperature, when compared with that
of the case $h_{0}=0$~\cite{aharony,mattis,kaufman},
the result of Fig.~\ref{fig:tdh00}(a) shows that no
ordered phase should occur for $h_{0}>0$ and $(kT/J)>2$.
The phase diagram for $D=0$ is shown in
the plane of dimensionless variables $kT/J$
versus $h_{0}/J$ in Fig.~\ref{fig:tdh00}(b).
The {\bf P} phase occurs for $(kT/J)>1$, whereas
for $(kT/J)<1$ two phases appear, namely,
the ordered one (characterized by
$m_{\sigma} \neq 0$ and $m_{\tau} \neq 0$, with
$|m_{\sigma}| \geq |m_{\tau}|$),
as well as the partially-ordered phase
($m_{\sigma} \neq 0$ and $m_{\tau} = 0$).
Since the two Ising models are uncorrelated for $D=0$
and the random fields act only on the $\{\tau_{i}\}$
variables, one finds that the critical behavior associated
with variables $\{\sigma_{i}\}$ and $\{\tau_{i}\}$ occur
independently:
(i) The variables $\{\sigma_{i}\}$ order at $(kT/J)=1$, for
all values of $h_{0}$;
(ii) The critical frontier shown in
Fig.~\ref{fig:tdh00}(b), separating the two
low-temperature phases, is characteristic of an
Ising ferromagnet in the presence of a bimodal
random field~\cite{aharony}. The black circle
denotes a tricritical point, where the higher-temperature
continuous frontier meets the lower-temperature
first-order critical line. The type of phase
diagram exhibited in Fig.~\ref{fig:tdh00}(b)
will be referred to herein as topology I.
\begin{figure}[htp]
\begin{center}
\includegraphics[height=7.0cm,clip,angle=-90]{figure4a.eps}
\hspace{0.1cm}
\includegraphics[height=7.0cm,clip,angle=-90]{figure4b.eps} \\
\vspace{0.5cm} \hspace{-0.5cm}
\includegraphics[height=4.5cm,clip]{figure4c.eps}
\hspace{1.0cm}
\includegraphics[height=4.5cm,clip]{figure4d.eps}
\end{center}
\vspace{-.2cm}
\caption{Phase diagram and order parameters in the case
$(D/J)=0.1$.
(a) Phase diagram in the plane of dimensionless variables $kT/J$
versus $h_{0}/J$. At low temperatures, a first-order
critical frontier that terminates in an
ordered critical point (black asterisk) separates
the ordered phase (lower values of $h_{0}/J$) from
the partially-ordered phase (higher values of $h_{0}/J$);
this type of phase
diagram will be referred to herein as topology II.
The order parameters $m_{\tau}$ and $m_{\sigma}$
are represented versus the dimensionless temperature
$kT/J$ for typical values of $h_{0}/J$:
(b) As one goes through the ordered
phase (low temperatures) to the {\bf P} phase;
(c) As one goes through the first-order critical
frontier, which separates the two ordered phases,
up to the {\bf P} phase;
(d) As one goes through the partially-ordered phase
(slightly to the right of the first-order critical frontier) up
to the {\bf P} phase. Equivalent solutions exist by
inverting the signs of $m_{\tau}$ and $m_{\sigma}$.}
\label{fig:d01}
\end{figure}
The effects of a small interaction [$(D/J)=0.1$]
between the variables $\{\sigma_{i}\}$ and
$\{\tau_{i}\}$ are presented in Fig.~\ref{fig:d01}, where
one sees that the topology I [Fig.~\ref{fig:tdh00}(b)] goes
through substantial changes, as shown
in Fig.~\ref{fig:d01}(a) (referred to herein as topology II).
As expected from the behavior presented
in Fig.~\ref{fig:tdh00}(a), one notices that
the border of the {\bf P} phase (a continuous frontier)
is shifted to higher temperatures.
However, the most significant difference between
topologies I and II consists in
the low-temperature frontier
separating the ordered and partially-ordered phases.
Particularly, the continuous frontier, as well
as the tricritical point shown
in Fig.~\ref{fig:tdh00}(b), give place to an
ordered critical point~\cite{griffiths}, at which
the low-temperature first-order critical
frontier terminates.
Such a topology has been found also in some
random magnetic systems, like the Ising and Blume-Capel
models, subject to random fields and/or
dilution~\cite{kaufman,salmon1,salmon2,benyoussef,%
carneiro,kaufmankanner}.
In the present model, we verified that topology II holds
for any $0<(D/J)<1/2$, with
the first-order frontier starting at zero temperature and
$(h_{0}/J)=(D/J)+1/2$, which in Fig.~\ref{fig:d01}(a)
corresponds to $(h_{0}/J)=0.6$. Such a first-order line
essentially affects the parameter $m_{\tau}$, as will be
discussed next.
In Figs.~\ref{fig:d01}(b)--(d) the order parameters
$m_{\tau}$ and $m_{\sigma}$ are exhibited versus
$kT/J$ for conveniently chosen values of $h_{0}/J$,
corresponding to distinct physical situations of the
phase diagram for $(D/J)=0.1$.
The magnetization
$m_{\tau}$ presents a curious behavior as $h_{0}/J$ is varied, more
particularly around the first-order critical line.
For $(h_{0}/J)=0.59$ [Fig.~\ref{fig:d01}(c)],
one starts at low temperatures
essentially to the left of the critical frontier and by increasing
$kT/J$ one crosses this critical frontier at $(kT/J)=0.499$,
very close to the ordered critical point.
At this crossing point,
$|m_{\tau}|$ presents an abrupt decrease, i.e.,
a discontinuity, corresponding
to a change to the partially-ordered phase; on
the other hand, the magnetization $m_{\sigma}$
remains unaffected when going through this critical frontier.
For higher temperatures,
$|m_{\tau}|$ becomes very small, but still finite,
turning up zero only at the {\bf P} boundary; in fact,
the whole region around the ordered critical point
is characterized by a finite small value of $|m_{\tau}|$.
Another unusual effect is presented in
Fig.~\ref{fig:d01}(d), for which $(h_{0}/J)=0.65$, i.e.,
slightly to the right of the first-order critical frontier:
the order parameter $m_{\tau}$ is zero
for low temperatures, but becomes nonzero by increasing the
temperature, as one becomes closer to the critical ordered
point. This rather curious phenomenon is directly related to
the correlation between the variables $\{\sigma_{i}\}$ and
$\{\tau_{i}\}$: since for $(kT/J) \approx 0.5$ the magnetization
$m_{\sigma}$ is still very close to its maximum value,
a small value for $|m_{\tau}|$ is induced, so that both
order parameters go to zero together only at the {\bf P} frontier.
Behind the results presented in Figs.~\ref{fig:d01}(a)--(d)
one finds a very interesting feature, namely, the
possibility of going continuously from the ordered phase to the
partially-ordered phase by circumventing the ordered critical point.
This is analogous to what happens in many substances, e.g., water,
where one goes continuously (with no latent heat)
from the liquid to the gas
phase by circumventing a critical end point~\cite{huang,reichl}.
\begin{figure}[htp]
\begin{center}
\includegraphics[height=6.5cm,angle=-90]{figure5a.eps}
\hspace{0.2cm}
\includegraphics[height=6.5cm,angle=-90]{figure5b.eps}
\end{center}
\vspace{-.5cm}
\caption{The first-order critical line in Fig.~\ref{fig:d01}(a),
corresponding to $(D/J)=0.1$, is amplified, and
the dimensionless free-energy density $f/J$ of~\eq{eq:freeenergy}
(shown in the insets) is analyzed
at two distinct points along this frontier:
(a) A low-temperature point located at $[(h_{0}/J)=0.599,(kT/J)=0.010]$, showing the
coexistence of the ordered ($|m_{\tau}|=1$) and partially-ordered ($m_{\tau}=0$)
solutions;
(b) A higher-temperature point located at $[(h_{0}/J)=0.594,(kT/J)=0.387]$,
showing the coexistence of solutions with $|m_{\tau}|>0$, namely,
$|m_{\tau}|=0.868$ and $|m_{\tau}|=0.1$.
In both cases (a) and (b) the free energy presents four minima,
associated with distinct pairs of solutions
$(m_{\tau},m_{\sigma})$: the full lines show the two minima
with positive $m_{\sigma}$, whereas the dashed lines correspond
to the two minima with negative $m_{\sigma}$.}
\label{fig:freeenergyd01}
\end{figure}
In Fig.~\ref{fig:freeenergyd01} the free-energy density of~\eq{eq:freeenergy}
is analyzed at two different points along the first-order critical frontier of
Fig.~\ref{fig:d01}(a), namely, a low-temperature
one [Fig.~\ref{fig:freeenergyd01}(a)], and a point at a higher
temperature [Fig.~\ref{fig:freeenergyd01}(b)].
In both cases the free energy presents four minima
associated with distinct pairs of solutions
$(m_{\tau},m_{\sigma})$. The point at $(kT/J)=0.010$ presents
$(m_{\tau},m_{\sigma})=\{(-1,1);(0,1); (0,-1);(1,-1)\}$, whereas the
point at $(kT/J)=0.387$ presents
$(m_{\tau},m_{\sigma})=\{(-0.868, 0.991); (-0.100,0.991); (0.100, -0.991);
(0.868, -0.991)\}$.
The lower-temperature point represents a coexistence of the two phases
shown in the case $D=0$ [cf. Fig.~\ref{fig:tdh00}(b)], namely, the
ordered ($|m_{\tau}|=1$) and partially-ordered ($m_{\tau}=0$) phases.
However, the higher-temperature point typifies the phenomenon
discussed in Fig.~\ref{fig:d01}, where distinct solutions with
$|m_{\tau}|>0$ coexist, leading to a jump in this
order parameter as one crosses the critical frontier,
as illustrated in Fig.~\ref{fig:d01}(c) for the point
$[(h_{0}/J)=0.59,(kT/J)=0.499]$. Although the
magnetization $m_{\tau}$ presents a very
curious behavior in topology II [cf., e.g.,
Figs.~\ref{fig:d01}(b)--(d)],
$m_{\sigma}$ remains essentially
unchanged by the presence of the first-order
critical frontier of
Fig.~\ref{fig:d01}(a), as shown also in
Fig.~\ref{fig:freeenergyd01}.
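
For concreteness, the numerical procedure behind this analysis can be
sketched in a few lines of Python. The functional form of
\texttt{free\_energy} below is only an assumed stand-in
for~\eq{eq:freeenergy} (mean-field $J$ terms plus a single-site trace
over $(\sigma,\tau)=\pm 1$, quench-averaged over $h=\pm h_{0}$); the
exact expression and the parameter values should be taken from the
preceding sections.
\begin{verbatim}
import numpy as np

def free_energy(mt, ms, J=1.0, D=0.1, h0=0.599, kT=0.010):
    # Assumed stand-in for the dimensionless free-energy density:
    # mean-field J terms plus a single-site trace over
    # (sigma, tau) = +/-1, quench-averaged over the bimodal
    # field h = +/-h0 acting on the tau-system.
    beta = 1.0 / kT
    f = 0.5 * J * (ms ** 2 + mt ** 2)
    for h in (+h0, -h0):
        expo = [beta * (J * ms * s + J * mt * t - D * s * t + h * t)
                for s in (+1, -1) for t in (+1, -1)]
        emax = max(expo)  # log-sum-exp, stable at low temperatures
        f -= 0.5 * kT * (emax + np.log(sum(np.exp(x - emax)
                                           for x in expo)))
    return f

# brute-force scan of the (m_tau, m_sigma) plane for local minima,
# at the low-temperature point of Fig. 5(a)
m = np.linspace(-1.0, 1.0, 201)
F = np.array([[free_energy(mt, ms) for ms in m] for mt in m])
minima = [(i, j) for i in range(201) for j in range(201)
          if F[i, j] == F[max(i-1, 0):i+2, max(j-1, 0):j+2].min()]
for i, j in sorted(minima, key=lambda ij: F[ij])[:4]:
    print(f"m_tau={m[i]:+.3f} m_sigma={m[j]:+.3f} f/J={F[i,j]:+.4f}")
\end{verbatim}
A brute-force grid scan is crude but sufficient to locate the four
minima and to diagnose their coexistence; the higher-temperature point
of Fig.~\ref{fig:freeenergyd01}(b) is obtained by simply changing
\texttt{h0} and \texttt{kT}.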
\begin{figure}[htp]
\begin{center}
\includegraphics[height=7cm,angle=-90]{figure6a.eps}
\hspace{0.2cm}
\includegraphics[height=7cm,angle=-90]{figure6b.eps}
\end{center}
\vspace{-.5cm}
\caption{Phase diagrams in the plane of dimensionless variables $kT/J$
versus $h_{0}/J$ for two different values of $D/J$:
(a) $(D/J)=0.5$, to be referred to as topology III;
(b) $(D/J)=0.7$, to be referred to as topology IV.}
\label{fig:phasediagd0507}
\end{figure}
In Fig.~\ref{fig:phasediagd0507} we present two other possible phase
diagrams, namely, the cases $(D/J)=0.5$ [Fig.~\ref{fig:phasediagd0507}(a),
called herein topology III] and
$(D/J)=0.7$ [Fig.~\ref{fig:phasediagd0507}(b), called herein topology IV].
Whereas topology III represents
a special situation that applies only for $(D/J)=0.5$, exhibiting the
richest critical behavior of the present model, topology IV holds
for any $(D/J)>0.5$.
In Fig.~\ref{fig:phasediagd0507}(a) one observes the appearance of
several multicritical points, denoted by the black circle (tricritical
point), black asterisk (ordered critical point), and
empty triangles (triple points):
(i) The tricritical point, which signals the
encounter of the higher-temperature continuous phase transition
with the lower-temperature first-order phase transition,
found in the $D=0$ phase diagram [cf. Fig.~\ref{fig:tdh00}(b)],
has curiously disappeared for $0<(D/J)<0.5$,
and emerges again at $(D/J)=0.5$;
(ii) The ordered critical point exists for any $0 < (D/J) \leq 0.5$
[as shown in Fig.~\ref{fig:d01}(a)];
(iii) Two triple points, one at a finite temperature and the
other at zero temperature. It should be mentioned
that such a zero-temperature triple point corresponds
precisely to the one of Fig.~\ref{fig:groundstate}, at
$(D/J)=0.5$ and $(h_{0}/J)=1.0$.
The value $(D/J)=0.5$ is very special and will be considered as
a threshold for both multicritical behavior and correlations
between the two systems. We have observed that for
$(D/J)>0.5$, the critical points shown in
Fig.~\ref{fig:phasediagd0507}(a) disappear, except for the
tricritical point, which survives
[as shown in Fig.~\ref{fig:phasediagd0507}(b)].
Changes similar to those occurring
herein between topologies II and III, as well as
topologies III and IV,
were found also in some
magnetic systems, like the Ising and Blume-Capel
models, subject to random fields and/or
dilution~\cite{kaufman,salmon1,salmon2,benyoussef,
carneiro,kaufmankanner}.
Particularly, the splitting of the
low-temperature first-order critical frontier into
two higher-temperature first-order lines that terminate
in the ordered and tricritical points,
respectively [as exhibited in Fig.~\ref{fig:phasediagd0507}(a)],
is consistent with results found in
the Blume-Capel model under
a bimodal random magnetic field, obtained by
varying the intensity of the crystal
field~\cite{kaufmankanner}.
Another important feature of topology III concerns the
lack of any type of
magnetic order at finite temperatures for $(h_{0}/J)>1.1$,
in contrast to the phase diagrams for
$0 \leq (D/J) < 0.5$, for which there is $m_{\sigma} \neq 0$
for all $h_{0}/J$
[see, e.g., Figs.~\ref{fig:tdh00}(b) and~\ref{fig:d01}(a)].
This effect shows that $(D/J)=0.5$ represents a threshold value
for the coupling between the variables $\{\sigma_{i}\}$ and
$\{\tau_{i}\}$, so that for $(D/J) \geq 0.5$ the
correlations among these variables become significant.
As a consequence of these correlations, the
absence of magnetic
order in the $\tau$-system ($m_{\tau} =0$)
drives the magnetization of the
$\sigma$-system to zero as well, for $(h_{0}/J)>1.1$.
It is important to notice that the $T=0$ phase diagram
of Fig.~\ref{fig:groundstate}
presents a first-order critical line for $(D/J)=0.5$ and
$(h_{0}/J)>1.0$, at which
$m_{\tau} =0$, whereas in the $\sigma$-system both
$m_{\sigma}=0$ and $|m_{\sigma}|=1$ minimize the Hamiltonian.
By analyzing numerically the free-energy density
of~\eq{eq:freeenergy} at low temperatures and $(h_{0}/J)>1.0$,
we have verified that any infinitesimal value of
$kT/J$ destroys such a coexistence of solutions, leading to
a minimum free energy at
$m_{\tau}=m_{\sigma}=0 \ (\forall \, T>0)$. Consequently,
one finds that the low-temperature region in the interval
$1.0 \leq (h_{0}/J) \leq 1.1$ becomes part of the {\bf P} phase.
Hence, the phase diagram in
Fig.~\ref{fig:phasediagd0507}(a) presents
a reentrance phenomenon for
$1.0 \leq (h_{0}/J) \leq 1.1$. In this region, by lowering
the temperature gradually, one goes from a {\bf P} phase
to the ordered phase
($m_{\tau} \neq 0$ ; $m_{\sigma} \neq 0$), and then back
to the {\bf P} phase. This effect appears frequently
in both theoretical and experimental investigations of
disordered magnets~\cite{dotsenkobook,nishimoribook}.
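
The reentrance can be checked numerically along a vertical path of the
phase diagram, by fixing $h_{0}/J$ inside the reentrant window and
tracking the global minimum of the free energy while raising the
temperature. The sketch below reuses \texttt{free\_energy} from the
previous listing; the chosen temperatures are illustrative, and
quantitative agreement with Fig.~\ref{fig:phasediagd0507}(a) depends on
the exact form of~\eq{eq:freeenergy}.
\begin{verbatim}
import numpy as np

# h0/J = 1.05 lies inside the reentrant window of Fig. 6(a)
m = np.linspace(-1.0, 1.0, 201)
for kT in (0.02, 0.20, 0.42, 0.70):  # illustrative temperatures
    F = np.array([[free_energy(mt, ms, D=0.5, h0=1.05, kT=kT)
                   for ms in m] for mt in m])
    i, j = np.unravel_index(np.argmin(F), F.shape)
    print(f"kT/J={kT:.2f}: m_tau={m[i]:+.2f} m_sigma={m[j]:+.2f}")
\end{verbatim}
A reentrant path shows $m_{\tau}=m_{\sigma}=0$ at the lowest and
highest temperatures, with nonzero values in between.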
\begin{figure}[htp]
\begin{center}
\includegraphics[height=7cm,clip,angle=-90]{figure7a.eps}
\hspace{0.5cm} \vspace{0.7cm}
\includegraphics[height=7cm,clip,angle=-90]{figure7b.eps} \\
\vspace{0cm} \hspace{-0.8cm}
\includegraphics[height=4.5cm,clip]{figure7c.eps}
\hspace{1.2cm}
\includegraphics[height=4.5cm,clip]{figure7d.eps}
\end{center}
\vspace{0.2cm}
\caption{(a) The region of multicritical points of the phase diagram for
$(D/J)=0.5$ [Fig.~\ref{fig:phasediagd0507}(a)] is amplified and three
thermodynamic paths are chosen for analyzing the magnetizations
$m_{\tau}$ and $m_{\sigma}$.
(b) Order parameters along thermodynamic path (1):
$(h_{0}/J)=0.97$ and increasing temperatures.
(c) Order parameters along thermodynamic path (2):
$(h_{0}/J)=1.05$ and increasing temperatures.
(d) Order parameters along thermodynamic path (3):
$(kT/J)=0.42$ and varying the field
strength in the interval $0.9 \leq (h_{0}/J) \leq 1.15$.
Equivalent solutions exist by inverting the signs of
$m_{\tau}$ and $m_{\sigma}$.}
\label{fig:magpaths123}
\end{figure}
In Fig.~\ref{fig:magpaths123} we analyze the behavior of the
magnetizations $m_{\tau}$ and $m_{\sigma}$ for topology III,
in the region of multicritical
points of the phase diagram for
$(D/J)=0.5$, along three typical thermodynamic paths, as
shown in Fig.~\ref{fig:magpaths123}(a).
In Fig.~\ref{fig:magpaths123}(b) we exhibit the behavior of
$m_{\tau}$ and $m_{\sigma}$ along path (1), where one
sees that both parameters go through a jump by
crossing the first-order critical line [$(kT/J)=0.445$],
expressing a coexistence of different types
of solutions for $m_{\tau}$ and $m_{\sigma}$ at this
point. One notices a larger jump in $m_{\tau}$, so that
to the right of the ordered critical point
one finds a behavior similar to the one verified in topology II,
where $|m_{\tau}|$ becomes very small, whereas
$m_{\sigma}$ still presents significant values.
Then, by further increasing the temperature, these parameters
tend smoothly to zero at the continuous critical frontier
separating the ordered and {\bf P} phases.
In Fig.~\ref{fig:magpaths123}(c) we show the magnetizations
$m_{\tau}$ and $m_{\sigma}$ along path (2),
within the region of the phase diagram
where the reentrance phenomenon occurs; along this path,
one increases the temperature, going from the {\bf P} phase
to the ordered phase and then to the {\bf P} phase again.
Both parameters are zero for low enough temperatures,
jumping to nonzero values at $(kT/J)=0.396$, as one
crosses the first-order critical line. After such jumps,
by increasing the temperature, these parameters
tend smoothly to zero at the border of the
{\bf P} phase. The behavior shown in
Fig.~\ref{fig:magpaths123}(c) confirms the reentrance
effect discussed previously.
Finally, in Fig.~\ref{fig:magpaths123}(d) we exhibit
the order parameters along thermodynamic path (3),
for which the temperature is fixed at $(kT/J)=0.42$, with
the field varying in the range
$0.9 \leq (h_{0}/J) \leq 1.15$. One sees that both
magnetizations $m_{\tau}$ and $m_{\sigma}$ display
jumps as one crosses each of the two first-order lines,
evidencing a
coexistence of different ordered states at the lower-temperature
jump, as well as a coexistence of the ordered and {\bf P} states
at the higher-temperature jump.
The behavior presented by the order parameters in
Figs.~\ref{fig:magpaths123}(b)--(d) shows clearly
the fact that $(D/J)=0.5$ represents a threshold value
for the coupling between the variables $\{\sigma_{i}\}$ and
$\{\tau_{i}\}$. In all these cases, one sees that jumps
in the magnetization $m_{\sigma}$ are correlated with
corresponding jumps in $m_{\tau}$.
These results should be contrasted with those for the
cases $(D/J)<0.5$, as illustrated
in Fig.~\ref{fig:d01}(c), where a discontinuity
in $m_{\tau}$ does not affect the smooth behavior presented
by $m_{\sigma}$.
\begin{figure}[htp]
\begin{center}
\includegraphics[height=7cm,angle=-90]{figure8a.eps}
\hspace{0.2cm}
\includegraphics[height=7cm,angle=-90]{figure8b.eps}
\end{center}
\vspace{-.5cm}
\caption{The order parameters $m_{\tau}$ and $m_{\sigma}$
are represented versus the dimensionless temperature
$kT/J$ for $(D/J)=8.0$ and two typical values of $h_{0}/J$:
(a) Slightly to the left of the tricritical point;
(b) Slightly to the right of the tricritical point.
The associated phase diagram corresponds
to topology IV [cf. Fig.~\ref{fig:phasediagd0507}(b)].
Equivalent solutions exist by inverting the signs of
$m_{\tau}$ and $m_{\sigma}$.}
\label{fig:magd80}
\end{figure}
The phase diagram shown in Fig.~\ref{fig:phasediagd0507}(b),
which corresponds to topology IV, is valid
for any $(D/J)>0.5$. In particular,
the critical point where the low-temperature
first-order critical
frontier touches the zero-temperature axis
is kept at $(h_{0}/J)=1$, for all $(D/J)>0.5$,
in agreement with Fig.~\ref{fig:groundstate}.
We have verified
only quantitative changes in such a phase diagram
by increasing
the coupling between the variables $\{\sigma_{i}\}$ and
$\{\tau_{i}\}$. Essentially, the whole continuous critical
frontier moves towards
higher temperatures, leading to an increase in the values
of the critical temperature
for $(h_{0}/J)=0$, as well as in the temperature
associated with the tricritical point, whereas the abscissa
of this point remains typically unchanged. Moreover,
in what concerns the order parameters,
the difference between $|m_{\tau}|$
and $m_{\sigma}$ decreases, in such a way
that for $(D/J) \rightarrow \infty$, one obtains
$m_{\tau}=-m_{\sigma}$.
This latter effect is illustrated in
Fig.~\ref{fig:magd80}, where we represent the
order parameters $m_{\tau}$ and $m_{\sigma}$
versus temperature, for a sufficiently large value of
$D/J$, namely, $(D/J)=8.0$, for
two typical choices of $h_{0}/J$ close
to the tricritical point.
In Fig.~\ref{fig:magd80}(a) $m_{\tau}$ and $m_{\sigma}$
are analyzed slightly to the left of the tricritical
point, exhibiting the usual continuous behavior,
whereas in Fig.~\ref{fig:magd80}(b) they
are considered slightly to the right of the tricritical
point, presenting jumps as one crosses the
first-order critical frontier. However,
the most important conclusion from Fig.~\ref{fig:magd80}
concerns the fact that in both cases one has essentially
$m_{\tau}=-m_{\sigma}$, showing that the random
field applied solely to the $\tau$-system influences the
$\sigma$-system in a similar way, due to the
high value of $D/J$ considered.
We have verified that for $(D/J)=8.0$
the two systems become so strongly
correlated that
$m_{\tau}=-m_{\sigma}$ holds throughout
the whole phase diagram,
within our numerical accuracy.
\section{Conclusions}
We have analyzed the effects of a coupling $D$
between two Ising models, defined in terms of variables
$\{\tau_{i}\}$ and $\{\sigma_{i}\}$.
The model was considered in the limit of infinite-range
interactions, where all spins in each system
interact by means of an exchange coupling $J>0$, typical
of ferromagnetic interactions.
Motivated by a qualitative description of
systems like plastic crystals,
the variables $\{\tau_{i}\}$ and $\{\sigma_{i}\}$ would
represent rotational and translational degrees
of freedom, respectively. Since the rotational
degrees of freedom are expected to change more
freely than the translational ones,
a random field acting only on the variables
$\{\tau_{i}\}$ was considered.
For this purpose, a bimodal random field,
$h_{i} = \pm h_{0}$, with equal probabilities,
was defined on the $\tau$-system.
The model was investigated through its free energy
and its two order parameters, namely,
$m_{\tau}$ and $m_{\sigma}$.
We have shown that such a system presents a very rich
critical behavior, depending on the particular choices
of $D/J$ and $h_{0}/J$.
Particularly, at zero temperature, the phase diagram in the plane
$h_{0}/J$ versus $D/J$ exhibits ordered, partially-ordered,
and disordered phases. This phase diagram is symmetric
around $(D/J)=0$, so that for sufficiently low values of
$h_{0}/J$ one finds ordered phases characterized by
$m_{\sigma}=m_{\tau}=\pm 1$ ($D<0$) and
$m_{\sigma}=-m_{\tau}=\pm 1$ ($D>0$).
We have verified that $|D/J|=1/2$
plays an important role in the present model, such
that at zero temperature one has the disordered
phase ($m_{\sigma}=m_{\tau}=0$)
for $|D/J|>1/2$ and $(h_{0}/J)>1$.
Moreover, the partially-ordered phase,
where $m_{\sigma}=\pm 1$ and $m_{\tau}=0$,
occurs for $(h_{0}/J)>1/2+|D/J|$ and $|D/J|<1/2$.
In this phase diagram all phase transitions are
of the first-order type, and three triple points were found.
In the case of plastic crystals,
the sequence of transitions from the disordered
to the partially-ordered, and then to the
ordered phases, would correspond to the
sequence of transitions from the liquid to
the plastic crystal, and then to ordered crystal
phases.
Due to the symmetry around $D=0$, the
finite-temperature phase diagrams were considered
only for $D>0$, for which the ordered
phase was identified by $m_{\sigma}>0$ and
$m_{\tau}<0$, whereas the partially-ordered phase
by $m_{\sigma}>0$ and
$m_{\tau}=0$ (equivalent solutions also exist by
inverting the signs of these order parameters).
Several phase diagrams in the
plane $kT/J$ versus $h_{0}/J$ were studied,
by varying gradually $D/J$. We have found
four qualitatively different types of phase diagrams,
denominated as topologies I [$(D/J)=0$], II [$0<(D/J)<1/2$],
III [$(D/J)=1/2$], and IV [$(D/J)>1/2$]. Such a
classification reflects the fact that $(D/J)=1/2$
represents a threshold value
for the coupling between the variables $\{\sigma_{i}\}$ and
$\{\tau_{i}\}$, so that for $(D/J) \geq 1/2$ the
correlations among these variables become significant,
as verified through the behavior of the order parameters
$m_{\tau}$ and $m_{\sigma}$.
From all these cases, only topology IV
typifies a well-known phase diagram,
characterized by a tricritical point, where the
higher-temperature continuous frontier meets
the lower-temperature first-order critical line.
This phase diagram is qualitatively similar to
the one found for the
Ising ferromagnet in the presence of a bimodal
random field~\cite{aharony}, although it does not
present the partially-ordered phase, which is
physically relevant herein.
For $(D/J) \geq 1/2$, even though the random field
is applied only in the $\tau$-system, the correlations
lead the $\sigma$-system to follow a qualitatively
similar behavior.
The phase diagrams referred to as topologies I and II
exhibit all three phases. In the latter case we have found
a first-order critical line terminating at an ordered
critical point, leading to the potential physical realization
of going continuously from the ordered phase to the
partially-ordered phase by circumventing this
critical point.
In these two topologies, the sequence of transitions
from the disordered
to the partially-ordered, and then to the
ordered phase, represents the physical
situation that occurs in plastic crystals.
For conveniently chosen thermodynamic paths,
i.e., varying temperature and random field appropriately,
one may go from the liquid phase
($m_{\sigma}=m_{\tau}=0$), to a plastic-crystal phase
($m_{\sigma} \neq 0$; $m_{\tau}=0$), where the rotational degrees
of freedom are found in a disordered state, and then,
to an ordered crystal phase
($m_{\sigma} \neq 0$; $m_{\tau} \neq 0$).
From the point of view of multicritical behavior,
topology III [$(D/J)=1/2$] corresponds to
the richest type of phase diagram, being
characterized by several critical lines and
multicritical points; one finds its most
complex criticality around $(h_{0}/J)=1$, signaling
a great competition among the different types of orderings.
Although the partially-ordered phase
does not appear in this particular case, one has also
the possibility of circumventing the ordered critical point,
such as to reach a region of the phase diagram
along which $|m_{\tau}|$ becomes very small,
resembling a partially-ordered phase.
Since the infinite-range interactions among
variables of each Ising system correspond to a limit
where the mean-field approach becomes exact, an immediate
question concerns whether some of the results obtained above
represent an artifact of this limit.
Certainly, such a relevant point is directly related to the
existence of some of these features in the associated
short-range three-dimensional magnetic models. For example, the
tricritical point found in topologies III and IV is essentially
the same that appears within the mean-field approach of the
Ising model in the presence of a bimodal random field.
This latter model has been extensively investigated on a cubic
lattice through different numerical approaches, where the
existence of this tricritical point is still very controversial.
On the other hand, a first-order critical frontier terminating
at an ordered critical point, and the fact that one can
go from one phase to another by
circumventing this point, represents a typical
physical situation that occurs in real
substances. The potential for exhibiting such a
relevant feature represents an important advantage
of the present model.
Finally, we emphasize that the rich critical behavior presented
in the phase diagrams corresponding to topologies II and III
suggests the range $0<(D/J) \leq 1/2$ as appropriate
for describing plastic crystals.
The potential of exhibiting successive transitions from the
ordered to the partially-ordered and then to the
disordered phase should be useful for a better
understanding of these systems.
Furthermore, the characteristic
of going continuously from the ordered phase
to the partially-ordered phase by circumventing an ordered
critical point represents a typical physical situation that
occurs in many substances,
and opens the possibility for
the present model to describe a wider range of materials.
\vskip 2\baselineskip
{\large\bf Acknowledgments}
\vskip \baselineskip
\noindent
The partial financial supports from CNPq,
FAPEAM-Projeto-Universal-Amazonas,
and FAPERJ (Brazilian agencies) are acknowledged.
\vskip 2\baselineskip
| {'timestamp': '2014-06-24T02:06:43', 'yymm': '1406', 'arxiv_id': '1406.5628', 'language': 'en', 'url': 'https://arxiv.org/abs/1406.5628'} |
\section{Introduction}
\label{intro}
It is now a well-established fact that according to our present theory of gravity, 85\%~of the matter content of our universe is missing. Observational evidence for this discrepancy ranges from macroscopic to microscopic scales, e.g. gravitational lensing in galaxy clusters, galactic rotation curves and fluctuations measured in the Cosmic Microwave Background. This has resulted in the hypothesised existence of a new type of matter called Dark Matter. Particle physics provides a well-motivated explanation for this hypothesis: The existence of (until now unobserved) massive weakly interacting particles (WIMPs). A favorite amongst the several WIMP candidates is the neutralino, the lightest particle predicted by Supersymmetry, itself a well-motivated extension of the Standard Model.
If Supersymmetry is indeed realised in Nature, Supersymmetric particles would have been copiously produced at the start of our Universe in the Big Bang. Initially these particles would have been in thermal equilibrium. After the temperature of the Universe dropped below the neutralino mass, the neutralino number density would have decreased exponentially. Eventually the expansion rate of the Universe would have overcome the neutralino annihilation rate, leaving a relic density of neutralinos in our Universe today, analogous to the relic photons of the cosmic microwave background.
These relic neutralinos could then accumulate in massive celestial bodies in our Universe like our Sun through weak interactions with normal matter and gravity. Over time the neutralino density in the core of the object would increase considerably, thereby increasing the local neutralino annihilation probability. In the annihilation process new particles would be created, amongst which neutrinos. This neutrino flux could be detectable as a localised emission with earth-based neutrino telescopes like ANTARES.
This paper gives a brief overview of the prospects for the detection of neutrinos originating from neutralino annihilation in the Sun with the ANTARES neutrino telescope.
\begin{figure}[b]
\center{
\includegraphics[width=0.45\textwidth,angle=0]{NEA_60kHz0XOFF_off.png}
\caption{The ANTARES Neutrino Effective Area vs. $E_\nu$.}
\label{fig:1}
}
\end{figure}
\begin{figure*}[t]
\center{
\includegraphics[width=0.8\textwidth,angle=0]{psflux.png}
\caption{Predicted $\nu_\mu+\bar{\nu}_\mu$ flux from the Sun in mSUGRA parameter space.}
\label{fig:2}
}
\end{figure*}
\section{The ANTARES neutrino telescope}
\label{sec:1}
The ANTARES undersea neutrino telescope consists of a 3D~grid of 900~photomultiplier tubes arranged in 12~strings, at a depth of 2475~m in the Mediterranean Sea. Three quarters of the telescope have been deployed and half of the detector is already fully operational, making ANTARES the largest neutrino telescope in the northern hemisphere. The angular resolution of the telescope is of the order of one degree at low energy, relevant to dark matter searches, and reaches 0.3 degree at high energies ($>$~10~TeV).
The sensitivity of a neutrino detector is conventionally expressed as the Neutrino Effective Area, $A_{\rm eff}^{\nu}$. The $A_{\rm eff}^{\nu}$ is a function of neutrino energy $E_\nu$ and direction $\Omega$, and is defined as
\begin{equation}
A_{\rm eff}^{\nu}(E_\nu,\Omega) \;=\; V_{\rm eff}(E_\nu,\Omega)\;\sigma(E_\nu)\;\rho N_A\;P_E(E_\nu,\Omega)
\label{eq:1}
\end{equation}
\noindent where $\sigma(E_\nu)$ is the neutrino interaction cross section, $\rho\,N_A$ is the nucleon density in/near the detector,\linebreak $P_E(E_\nu,\Omega)$ is the neutrino transmission probability\linebreak through the Earth and $V_{\rm eff}(E_\nu,\Omega)$ represents the effective detector volume. This last quantity depends not only on the detector geometry and instrumentation, but also on the efficiency of the trigger and reconstruction algorithms that are used.
The ANTARES $A_{\rm eff}^{\nu}$ for upgoing $\nu_\mu$ and $\bar{\nu}_\mu$'s, integrated over all directions, is shown as a function of the neutrino energy in Fig.~\ref{fig:1}. The curves represent the $A_{\rm eff}^{\nu}$ after triggering only (``{\em Trigger level}'', in blue) and after reconstruction and selection (``{\em Detection level}'', in red). The increase of the $A_{\rm eff}^{\nu}$ with neutrino energy is mainly due to the fact that both $\sigma(E_\nu)$ and the muon range are proportional to the neutrino energy.
The detection rate $R(t)$ for a certain neutrino flux $\Phi(E_\nu,\Omega,t)$ is then defined as
\begin{equation}
R(t) \;=\; \iint\,A_{\rm eff}^{\nu}(E_\nu,\Omega)\;\frac{d\Phi(E_\nu,\Omega,t)}{dE_\nu\,d\Omega}\;dE_\nu\,d\Omega
\label{eq:2}
\end{equation}
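
For a point source such as the Sun, the solid-angle integral in
Eq.~\ref{eq:2} collapses onto the source direction and only the energy
integral remains. The short Python sketch below illustrates this
evaluation; both the effective-area curve and the flux spectrum are toy
stand-ins (the real $A_{\rm eff}^{\nu}$ is the ``{\em Detection
level}'' curve of Fig.~\ref{fig:1}, and the real fluxes come from the
scan described in Sect.~\ref{sec:2}).
\begin{verbatim}
import numpy as np

E = np.logspace(1, 3, 200)          # neutrino energy grid [GeV]
A_eff = 1.0e-6 * (E / 100.0) ** 2   # toy effective area [m^2]
dPhi_dE = 1.0e7 * E ** (-3)         # toy flux [GeV^-1 m^-2 yr^-1]

# trapezoidal energy integral of Eq. (2) for a point source
integrand = A_eff * dPhi_dE
R = np.sum(0.5 * (integrand[:-1] + integrand[1:]) * np.diff(E))
print(f"expected detection rate: {R:.3e} events per year")
\end{verbatim}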
\section{Neutralino annihilation in the Sun}
\label{sec:2}
We calculated the $\nu_\mu+\bar{\nu}_\mu$ flux resulting from neutralino annihilation in the centre of the Sun using the DarkSUSY simulation package \cite{DarkSUSY}. Furthermore, the effects of neutrino oscillations in matter and vacuum as well as absorption were taken into account. The top quark mass was set to 172.5~GeV and the NFW-model for the Dark Matter halo with a local halo density \mbox{$\rho_0 = 0.3$~GeV/cm$^3$} was used. Instead of the general Supersymmetric framework, we used the more constrained approach of minimal Supergravity (mSUGRA). In mSUGRA, models are characterized by four parameters and a sign: A common gaugino mass $m_{1/2}$, scalar mass $m_0$ and tri-linear scalar coupling $A_0$ at the GUT scale ($10^{16}$ GeV), the ratio of vacuum expectation values of the two Higgs fields $tan(\beta)$ and the sign of the Higgsino mixing parameter $\mu$. We considered only $\mu=+1$ models within the following parameter ranges: \mbox{$0<m_0<8000$~GeV,} \mbox{$0<m_{1/2}<2000$~GeV,}\linebreak \mbox{$-3m_0<A_0<3m_0$} and \mbox{$0<tan(\beta)<60$.} The SUSY parameters were subsequently calculated using the\linebreak ISASUGRA package \cite{Isasugra}.
\begin{figure}[b]
\includegraphics[width=0.45\textwidth,angle=0]{neutrino_flux_relic_density.png}
\caption{Predicted $\nu_\mu+\bar{\nu}_\mu$ flux from the Sun vs. $m_\chi$.}
\label{fig:3}
\end{figure}
\begin{figure*}[t]
\center{
\includegraphics[width=0.8\textwidth,angle=0]{psexcl.png}
\caption{mSUGRA models 90\% CL excludable by ANTARES in mSUGRA parameter space.}
\label{fig:4}
}
\end{figure*}
\pagebreak Only a small subset of all mSUGRA models possess a relic neutralino density $\Omega_\chi h^2$ that is compatible with the Cold Dark Matter energy density $\Omega_{\rm CDM} h^2$ as measured by WMAP \cite{WMAP}. To investigate specifically those mSUGRA models, we sampled the mSUGRA parameter space using a random walk method based on the Metropolis algorithm where $\Omega_\chi h^2$ acted as a guidance parameter \cite{MarkovChain}.
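
A minimal sketch of such a guided random walk is given below. The
\texttt{relic\_density} function is a toy stand-in for the full
DarkSUSY/ISASUGRA chain, and the proposal and guidance widths are
assumptions; only the Metropolis structure itself is the point here.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)
OMEGA_WMAP = 0.1126                 # target Omega_CDM h^2 (WMAP)

def relic_density(m0, m12, a0, tanb):
    # toy stand-in for the DarkSUSY/ISASUGRA chain; returns
    # Omega_chi h^2, or None outside the scanned parameter ranges
    if not (0 < m0 < 8000 and 0 < m12 < 2000
            and -3 * m0 < a0 < 3 * m0 and 0 < tanb < 60):
        return None
    return 0.1 * (m12 / 400.0) ** 2 / (1.0 + m0 / 2000.0)

def log_w(omega):                   # guidance weight (width assumed)
    return -0.5 * ((omega - OMEGA_WMAP) / 0.02) ** 2

x = np.array([2000.0, 400.0, 0.0, 10.0])  # (m0, m12, A0, tan beta)
om_x, chain = relic_density(*x), []
for _ in range(10000):
    y = x + rng.normal(0.0, [100.0, 50.0, 100.0, 2.0])
    om_y = relic_density(*y)
    if om_y is not None and \
       np.log(rng.random()) < log_w(om_y) - log_w(om_x):
        x, om_x = y, om_y           # accept the move
    chain.append(x.copy())
\end{verbatim}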
The resulting $\nu_\mu+\bar{\nu}_\mu$ flux from the Sun per~km$^{2}$ per year, integrated above \mbox{$E_\nu=10$~GeV}, can be seen in the \mbox{$m_0$-$m_{1/2}$~plane} for different ranges of $tan(\beta)$ in Fig.~\ref{fig:2}. The white regions correspond to mSUGRA models without radiative electroweak symmetry breaking, models with $\Omega_\chi h^2>1$, models that are already experimentally excluded, or models where the neutralino is not the lightest superpartner. Models in the so-called ``Focus Point'' region\,\footnote{The region of mSUGRA parameter space around $(m_0,m_{1/2}) = (2000,400)$.} produce the highest neutrino flux: In this region the neutralino has a relatively large Higgsino component \cite{Nerzi}. This enhances the neutralino capture rate through $Z$-boson exchange as well as the neutralino annihilation through the\linebreak \mbox{$\chi\chi\rightarrow WW/ZZ$} channel, resulting in a large flux of relatively high energetic neutrinos.
The $\nu_\mu+\bar{\nu}_\mu$ flux can also be plotted against the neutralino mass $m_\chi$, as is shown in Fig.~\ref{fig:3}. In this plot, the mSUGRA models are subdivided into three categories according to how well their $\Omega_\chi h^2$ agrees with $\Omega_{\rm CDM} h^2$ as measured by WMAP\,\footnote{WMAP: $\Omega_{\rm CDM} h^2 = 0.1126_{-0.013}^{+0.008}$}: \mbox{$\Omega_\chi h^2-\Omega_{\rm CDM}h^2<2\sigma$} (black), \mbox{$0< \Omega_\chi h^2 < \Omega_{\rm CDM} h^2$} (blue) and \mbox{$\Omega_{\rm CDM} h^2 < \Omega_\chi h^2 < 1$} (magenta).
\begin{figure}[b]
\includegraphics[width=0.45\textwidth,angle=0]{detection_rate_relic_density.png}
\caption{ANTARES detection rate per 3~years vs. $m_\chi$.}
\label{fig:5}
\end{figure}
\section{ANTARES prospects to detect neutralino annihilation in the Sun}
\label{sec:3}
The ANTARES detection rate (See Eq.~\ref{eq:2}) for the detection of neutralino annihilation in the Sun was calculated as follows: For each mSUGRA model considered in Sect.~\ref{sec:2}, the differential $\nu_\mu+\bar{\nu}_\mu$ flux from the Sun was convoluted with the Sun's zenith angle distribution as well as the ANTARES $A_{\rm eff}^{\nu}$ (see Eq.~\ref{eq:1} and Fig.~\ref{fig:1}). The resulting detection rate in ANTARES per 3~years is shown as a function of the neutralino mass in Fig.~\ref{fig:5}. The color coding in the plot corresponds to the one used in Fig.~\ref{fig:3}.
The ANTARES exclusion limit for the detection of neutralino annihilation in the Sun was calculated as follows: As can be seen from Fig.~\ref{fig:5}, the expected detection rates for all mSUGRA models considered in Sect.~\ref{sec:2} are small. Therefore the Feldman-Cousins approach \cite{FeldmanCousins} was used to calculate 90\%~CL exclusion limits. The two sources of background were taken into account as follows: Since we know the Sun's position in the sky, the atmospheric neutrino background (Volkova parametrisation) was considered only in a 3~degree radius search cone around the Sun's position. After applying the event selection criteria used to determine the $A_{\rm eff}^{\nu}$ in Fig.~\ref{fig:1}, the misreconstructed atmospheric muon background was considered to be 10\% of the atmospheric neutrino background. mSUGRA models that are excludable at 90\%~CL by ANTARES in 3~years are shown in blue in Fig.~\ref{fig:6}; those that are non-excludable are shown in red. Bright colors indicate models which have \mbox{$\Omega_\chi h^2-\Omega_{\rm CDM}h^2<2\sigma$}. The fraction of ANTARES excludable models in mSUGRA parameter space is shown in Fig.~\ref{fig:4}.
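
For the small expected counts involved, the Feldman-Cousins belt can be
reproduced by brute force, as sketched below; the grid resolutions and
the example numbers (observed events and cone background) are
illustrative rather than the actual analysis values.
\begin{verbatim}
import numpy as np
from math import exp, factorial

def pois(n, mu):
    return exp(-mu) * mu ** n / factorial(n)

def fc_upper_limit(n_obs, b, cl=0.90, s_max=20.0, ds=0.01, n_max=50):
    # brute-force Feldman-Cousins upper limit on a Poisson signal s
    # in the presence of a known mean background b
    upper, ns = 0.0, np.arange(n_max + 1)
    for s in np.arange(0.0, s_max, ds):
        mu = s + b
        p = np.array([pois(n, mu) for n in ns])
        mu_best = np.maximum(ns - b, 0.0) + b   # best-fit signal >= 0
        p_best = np.array([pois(n, m) for n, m in zip(ns, mu_best)])
        accept, cov = set(), 0.0
        for n in np.argsort(-(p / p_best)):     # likelihood-ratio rank
            accept.add(int(n)); cov += p[int(n)]
            if cov >= cl:
                break
        if n_obs in accept:
            upper = s                 # belt still contains n_obs
    return upper

# e.g. 3 observed events on an expected background of 2.0 in the cone
print(fc_upper_limit(n_obs=3, b=2.0))  # roughly 6 signal events
\end{verbatim}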
\begin{figure}[t]
\includegraphics[width=0.45\textwidth,angle=0]{detection_rate_exclusion.png}
\caption{mSUGRA models 90\% CL excludable by ANTARES per 3~years vs. $m_\chi$.}
\label{fig:6}
\end{figure}
\begin{figure}[b]
\includegraphics[width=0.45\textwidth,angle=0]{crossection_exclusion_direct_limits.png}
\caption{Spin-independent $\chi p$~cross section vs. $m_\chi$.}
\label{fig:7}
\end{figure}
\begin{figure}[t]
\includegraphics[width=0.44\textwidth,angle=0]{NEA_triggercomparison_off.png}
\caption{The ANTARES Neutrino Effective Area at the trigger level vs. $E_\nu$.}
\label{fig:8}
\end{figure}
\section{Comparison to direct detection}
To compare with direct detection experiments, the spin-independent $\chi p$~cross section versus the neutralino mass for all mSUGRA models considered in Sect.~\ref{sec:2} is shown in Fig.~\ref{fig:7}. The color coding in the plot corresponds to the one used in Fig.~\ref{fig:6}. The limits in this plot were taken from the Dark Matter Limit Plot Generator \cite{DirectDetection}. The spin-independent cross section is driven by CP-even Higgs boson exchange \cite{Nerzi}. Therefore, mSUGRA models in which the neutralino is of the mixed gaugino-Higgsino type will produce the largest cross sections. This implies a correlation between the models that are excludable by direct detection experiments and models excludable by ANTARES, as can be seen from Fig.~\ref{fig:7}.
\section{Conclusion \& Outlook}
Nearly half of the ANTARES detector has been operational since January this year. The detector is foreseen to be completed in early 2008. The data show that the detector is working within the design specifications.
As can be seen from Fig.~\ref{fig:4}, mSUGRA models that are excludable by ANTARES at 90\%~CL are found in the Focus Point region. The same models should also be excludable by future direct detection experiments, as is shown in Fig.~\ref{fig:7}.
To improve the ANTARES sensitivity, a directional trigger algorithm has recently been implemented in the data acquisition system. In this algorithm, the known position of the potential neutrino source is used to lower the trigger condition. This increases the trigger efficiency, resulting in a larger $A_{\rm eff}^{\nu}$. In Fig.~\ref{fig:8}, the $A_{\rm eff}^{\nu}$ at the trigger level for the standard- and the directional trigger algorithm are shown in black (``{\em trigger3D}'') and red (``{\em triggerMX}'') respectively.
| {'timestamp': '2007-10-19T14:19:38', 'yymm': '0710', 'arxiv_id': '0710.3685', 'language': 'en', 'url': 'https://arxiv.org/abs/0710.3685'} |
\section{Introduction}\label{S1}
Multiple access interference (MAI) is the root of the user
limitation in CDMA systems \cite{R1,R3}. The parallel least mean
square-partial parallel interference cancelation (PLMS-PPIC) method
is a multiuser detector for code division multiple access (CDMA)
receivers which reduces the effect of MAI in bit detection. In this
method and similar to its former versions like LMS-PPIC \cite{R5}
(see also \cite{RR5}), a weighted value of the MAI of other users is
subtracted before making the decision for a specific user in
different stages \cite{cohpaper}. In both of these methods, the
normalized least mean square (NLMS) algorithm is engaged
\cite{Haykin96}. The $m^{\rm th}$ element of the weight vector in
each stage is the true transmitted binary value of the $m^{\rm th}$
user divided by its hard estimate value from the previous stage. The
magnitudes of all weight elements in all stages are equal to unity.
Unlike the LMS-PPIC, the PLMS-PPIC method tries to keep this
property in each iteration by using a set of NLMS algorithms with
different step-sizes instead of one NLMS algorithm used in LMS-PPIC.
In each iteration, the parameter estimate of the NLMS algorithm is
chosen whose element magnitudes of cancelation weight estimate have
the best match with unity. In PLMS-PPIC implementation it is assumed
that the receiver knows the phases of all user channels. However in
practice, these phases are not known and should be estimated. In
this paper we improve the PLMS-PPIC procedure \cite{cohpaper} in
such a way that, when only partial information on the
channel phases is available, the modified version simultaneously estimates the
phases and the cancelation weights. The partial information is the
quarter of $(0,2\pi)$ to which each channel phase belongs.
The rest of the paper is organized as follows: In section \ref{S4}
the modified version of PLMS-PPIC with the capability of channel phase
estimation is introduced. In section \ref{S5} some simulation
examples illustrate the results of the proposed method. Finally the
paper is concluded in section \ref{S6}.
\section{Multistage Parallel Interference Cancelation: Modified PLMS-PPIC Method}\label{S4}
We assume $M$ users synchronously send their symbols
$\alpha_1,\alpha_2,\cdots,\alpha_M$ via a base-band CDMA
transmission system where $\alpha_m\in\{-1,1\}$. The $m^{th}$ user
has its own code $p_m(.)$ of length $N$, where $p_m(n)\in \{-1,1\}$,
for all $n$. It means that for each symbol $N$ bits are transmitted
by each user and the processing gain is equal to $N$. At the
receiver we assume that a perfect power control scheme is applied.
Without loss of generality, we also assume that the power gains of
all channels are equal to unity and users' channels do not change
during each symbol transmission (it can change from one symbol
transmission to the next one) and the channel phase $\phi_m$ of
$m^{th}$ user is unknown for all $m=1,2,\cdots,M$ (see
\cite{cohpaper} for coherent transmission). According to the above
assumptions the received signal is
\begin{equation}
\label{e1} r(n)=\sum\limits_{m=1}^{M}\alpha_m
e^{j\phi_m}p_m(n)+v(n),~~~~n=1,2,\cdots,N,
\end{equation}
where $v(n)$ is the additive white Gaussian noise with zero mean and
variance $\sigma^2$. Multistage parallel interference cancelation
method uses $\alpha^{s-1}_1,\alpha^{s-1}_2,\cdots,\alpha^{s-1}_M$,
the bit estimates outputs of the previous stage, $s-1$, to estimate
the related MAI of each user. It then subtracts it from the received
signal $r(n)$ and makes a new decision on each user variable
individually to make a new variable set
$\alpha^{s}_1,\alpha^{s}_2,\cdots,\alpha^{s}_M$ for the current
stage $s$. Usually the variable set of the first stage (stage $0$)
is the output of a conventional detector. The output of the last
stage is considered as the final estimate of transmitted bits. In
the following we explain the structure of a modified version of the
PLMS-PIC method \cite{cohpaper} with simultaneous capability of
estimating the cancelation weights and the channel phases.
Assume $\alpha_m^{(s-1)}\in\{-1,1\}$ is a given estimate of
$\alpha_m$ from stage $s-1$. Define
\begin{equation}
\label{e6} w^s_{m}=\frac{\alpha_m}{\alpha_m^{(s-1)}}e^{j\phi_m}.
\end{equation}
From (\ref{e1}) and (\ref{e6}) we have
\begin{equation}
\label{e7} r(n)=\sum\limits_{m=1}^{M}w^s_m\alpha^{(s-1)}_m
p_m(n)+v(n).
\end{equation}
Define
\begin{subequations}
\begin{eqnarray}
\label{e8} W^s&=&[w^s_{1},w^s_{2},\cdots,w^s_{M}]^T,\\
\label{e9}
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!X^{s}(n)\!\!\!&=&\!\!\![\alpha^{(s-1)}_1p_1(n),\alpha^{(s-1)}_2p_2(n),\cdots,\alpha^{(s-1)}_Mp_M(n)]^T.
\end{eqnarray}
\end{subequations}
where $T$ stands for transposition. From equations (\ref{e7}),
(\ref{e8}) and (\ref{e9}), we have
\begin{equation}
\label{e10} r(n)=W^{s^T}X^{s}(n)+v(n).
\end{equation}
Given the observations $\{r(n),X^{s}(n)\}^{N}_{n=1}$, in modified
PLMS-PPIC, like the PLMS-PPIC \cite{cohpaper}, a set of NLMS
adaptive algorithms is used to compute
\begin{equation}
\label{te1} W^{s}(N)=[w^{s}_1(N),w^{s}_2(N),\cdots,w^{s}_M(N)]^T,
\end{equation}
which is an estimate of $W^s$ after iteration $N$. To do so, from
(\ref{e6}), we have
\begin{equation}
\label{e13} |w^s_{m}|=1 ~~~m=1,2,\cdots,M,
\end{equation}
which is equivalent to
\begin{equation}
\label{e14} \sum\limits_{m=1}^{M}||w^s_{m}|-1|=0.
\end{equation}
We divide $\Psi=\left(0,1-\sqrt{\frac{M-1}{M}}\right]$, a sharp
range for $\mu$ (the step-size of the NLMS algorithm) given in
\cite{sg2005}, into $L$ subintervals and consider $L$ individual
step-sizes $\Theta=\{\mu_1,\mu_2,\cdots,\mu_L\}$, where
$\mu_1=\frac{1-\sqrt{\frac{M-1}{M}}}{L}, \mu_2=2\mu_1,\cdots$, and
$\mu_L=L\mu_1$. In each stage, $L$ individual NLMS algorithms are
executed ($\mu_l$ is the step-size of the $l^{th}$ algorithm). In
stage $s$ and at iteration $n$, if
$W^{s}_k(n)=[w^s_{1,k},\cdots,w^s_{M,k}]^T$, the parameter estimate
of the $k^{\rm th}$ algorithm, minimizes our criteria, then it is
considered as the parameter estimate at time iteration $n$. In other
words if the next equation holds
\begin{equation}
\label{e17} W^s_k(n)=\arg\min\limits_{W^s_l(n)\in I_{W^s}
}\left\{\sum\limits_{m=1}^{M}||w^s_{m,l}(n)|-1|\right\},
\end{equation}
where $W^{s}_l(n)=W^{s}(n-1)+\mu_l \frac{X^s(n)}{\|X^s(n)\|^2}e(n),
~~~ l=1,2,\cdots,k,\cdots,L-1,L$ and
$I_{W^s}=\{W^s_1(n),\cdots,W^s_L(n)\}$, then we have
$W^s(n)=W^s_k(n)$, and therefore all other algorithms replace their
weight estimate by $W^{s}_k(n)$. At time instant $n=N$, this
procedure gives $W^s(N)$, the final estimate of $W^s$, as the true
parameter of stage $s$.
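
A compact sketch of this stage, with $L$ parallel NLMS candidates per
iteration and the unit-magnitude selection rule of (\ref{e17}), is
given below; the toy dimensions are illustrative, and since the
regressor entries are real-valued ($\pm 1$), no conjugation is needed
in the update.
\begin{verbatim}
import numpy as np

def plms_ppic_stage(r, X, M, L):
    # one stage of weight estimation: L parallel NLMS updates per
    # iteration, keeping the candidate whose element magnitudes
    # are closest to unity (the selection rule above)
    mu1 = (1.0 - np.sqrt((M - 1) / M)) / L
    mus = mu1 * np.arange(1, L + 1)          # mu_1, ..., mu_L
    W = np.zeros(M, dtype=complex)
    for n in range(len(r)):
        x = X[:, n]                          # alpha_m^{(s-1)} p_m(n)
        e = r[n] - W @ x                     # a-priori error
        cands = [W + mu * x * e / (x @ x) for mu in mus]
        W = min(cands, key=lambda w: np.sum(np.abs(np.abs(w) - 1.0)))
    return W

# toy check: M = 4 users, N = 64 chips, L = 12 step-sizes
rng = np.random.default_rng(0)
M, N, L = 4, 64, 12
p = rng.choice([-1, 1], size=(M, N))         # spreading codes
alpha = rng.choice([-1, 1], size=M)          # true bits
phi = rng.uniform(0.0, 2.0 * np.pi, size=M)  # channel phases
v = 0.1 * (rng.normal(size=N) + 1j * rng.normal(size=N))
r = (alpha * np.exp(1j * phi)) @ p + v
X = alpha[:, None] * p                       # stage s-1 assumed correct
print(np.abs(plms_ppic_stage(r, X, M, L)))   # magnitudes approach unity
\end{verbatim}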
Now consider $R=(0,2\pi)$ and divide it into four equal parts
$R_1=(0,\frac{\pi}{2})$, $R_2=(\frac{\pi}{2},\pi)$,
$R_3=(\pi,\frac{3\pi}{2})$ and $R_4=(\frac{3\pi}{2},2\pi)$. The
partial information on the channel phases (given by the receiver)
indicates to which one of the four quarters $R_i,~i=1,2,3,4$,
each $\phi_m$ ($m=1,2,\cdots,M$) belongs. Assume
$W^{s}(N)=[w^{s}_1(N),w^{s}_2(N),\cdots,w^{s}_M(N)]^T$ is the weight
estimate of the modified algorithm PLMS-PPIC at time instant $N$ of
the stage $s$. From equation (\ref{e6}) we have
\begin{equation}
\label{tt3}
\phi_m=\angle({\frac{\alpha^{(s-1)}_m}{\alpha_m}w^s_m}).
\end{equation}
We estimate $\phi_m$ by $\hat{\phi}^s_m$, where
\begin{equation}
\label{ee3}
\hat{\phi}^s_m=\angle{(\frac{\alpha^{(s-1)}_m}{\alpha_m}w^s_m(N))}.
\end{equation}
Because $\frac{\alpha^{(s-1)}_m}{\alpha_m}=1$ or $-1$, we have
\begin{eqnarray}
\hat{\phi}^s_m=\left\{\begin{array}{ll} \angle{w^s_m(N)} &
\mbox{if}~
\frac{\alpha^{(s-1)}_m}{\alpha_m}=1\\
\pm\pi+\angle{w^s_m(N)} & \mbox{if}~
\frac{\alpha^{(s-1)}_m}{\alpha_m}=-1\end{array}\right.
\end{eqnarray}
Hence $\hat{\phi}^s_m\in P^s=\{\angle{w^s_m(N)},
\angle{w^s_m(N)}+\pi, \angle{w^s_m(N)}-\pi\}$. If $w^s_m(N)$
sufficiently converges to its true value $w^s_m$, the same region
for $\hat{\phi}^s_m$ and $\phi_m$ is expected. In this case only one
of the three members of $P^s$ has the same region as $\phi_m$. For
example if $\phi_m \in (0,\frac{\pi}{2})$, then $\hat{\phi}^s_m \in
(0,\frac{\pi}{2})$ and therefore only $\angle{w^s_m(N)}$ or
$\angle{w^s_m(N)}+\pi$ or $\angle{w^s_m(N)}-\pi$ belongs to
$(0,\frac{\pi}{2})$. If, for example, $\angle{w^s_m(N)}+\pi$ is such
a member between all three members of $P^s$, it is the best
candidate for phase estimation. In other words,
\[\phi_m\approx\hat{\phi}^s_m=\angle{w^s_m(N)}+\pi.\]
We admit that when there is a member of $P^s$ in the quarter of
$\phi_m$, then $w^s_m(N)$ has converged. What happens when none of
the members of $P^s$ has the same quarter as $\phi_m$? This
situation happens when the absolute difference between $\angle
w^s_m(N)$ and $\phi_m$ is greater than $\pi$, meaning that
$w^s_m(N)$ has not converged yet. In this case, where we cannot
count on $w^s_m(N)$, the expected value is the optimum choice for
the channel phase estimation, e.g. if $\phi_m \in (0,\frac{\pi}{2})$
then $\frac{\pi}{4}$ is the estimation of the channel phase
$\phi_m$, or if $\phi_m \in (\frac{\pi}{2},\pi)$ then
$\frac{3\pi}{4}$ is the estimation of the channel phase $\phi_m$.
The results of the above discussion are summarized in the next
equation
\begin{eqnarray}
\nonumber \hat{\phi}^s_m = \left\{\begin{array}{llll} \angle
{w^s_m(N)} & \mbox{if}~
\angle{w^s_m(N)}, \phi_m\in R_i,~~i=1,2,3,4\\
\angle{w^s_m(N)}+\pi & \mbox{if}~ \angle{w^s_m(N)}+\pi, \phi_m\in
R_i,~~i=1,2,3,4\\
\angle{w^s_m(N)}-\pi & \mbox{if}~ \angle{w^s_m(N)}-\pi, \phi_m\in
R_i,~~i=1,2,3,4\\
\frac{(i-1)\pi+i\pi}{4} & \mbox{if}~ \phi_m\in
R_i,~~\angle{w^s_m(N)},\angle
{w^s_m(N)}\pm\pi\notin R_i,~~i=1,2,3,4.\\
\end{array}\right.
\end{eqnarray}
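
The selection logic above amounts to only a few lines of code, as the
following sketch (with assumed helper names) shows: it picks the member
of $P^s$ lying in the known quarter, and falls back to the quarter
midpoint when none does.
\begin{verbatim}
import numpy as np

def resolve_phase(w_final, quarter):
    # quarter in {1,2,3,4} labels R_i = ((i-1)*pi/2, i*pi/2)
    lo, hi = (quarter - 1) * np.pi / 2, quarter * np.pi / 2
    base = np.angle(w_final)                # in (-pi, pi]
    for cand in (base, base + np.pi, base - np.pi):
        cand = cand % (2.0 * np.pi)         # map into (0, 2*pi)
        if lo < cand < hi:
            return cand                     # member of P^s in R_i
    return 0.5 * (lo + hi)                  # expected value of R_i

# e.g. w converged near exp(j*3*pi/8), but the bit estimate was inverted
w = np.exp(1j * (3.0 * np.pi / 8.0 - np.pi))
print(resolve_phase(w, quarter=1) / np.pi)  # ~0.375, i.e. 3*pi/8
\end{verbatim}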
Having an estimation of the channel phases, the rest of the proposed
method is given by estimating $\alpha^{s}_m$ as follows:
\begin{equation}
\label{tt4}
\alpha^{s}_m=\mbox{sign}\left\{\mbox{real}\left\{\sum\limits_{n=1}^{N}
q^s_m(n)e^{-j\hat{\phi}^s_m}p_m(n)\right\}\right\},
\end{equation}
where
\begin{equation} \label{tt5}
q^{s}_{m}(n)=r(n)-\sum\limits_{m^{'}=1,m^{'}\ne
m}^{M}w^{s}_{m^{'}}(N)\alpha^{(s-1)}_{m^{'}} p_{m^{'}}(n).
\end{equation}
The inputs of the first stage $\{\alpha^{0}_m\}_{m=1}^M$ (needed for
computing $X^1(n)$) are given by
\begin{equation}
\label{qte5}
\alpha^{0}_m=\mbox{sign}\left\{\mbox{real}\left\{\sum\limits_{n=1}^{N}
r(n)e^{-j\hat{\phi}^0_m}p_m(n)\right\}\right\}.
\end{equation}
Assuming $\phi_m\in R_i$, then
\begin{equation}
\label{qqpp} \hat{\phi}^0_m =\frac{(i-1)\pi+i\pi}{4}.
\end{equation}
Table \ref{tab4} shows the structure of the modified PLMS-PPIC
method. Note that
\begin{itemize}
\item Equation (\ref{qte5}) shows the conventional bit detection
method when the receiver only knows the quarter of channel phase in
$(0,2\pi)$. \item With $L=1$ (i.e. only one NLMS algorithm), the
modified PLMS-PPIC can be thought of as a modified version of the
LMS-PPIC method.
\end{itemize}
In the following section some examples are given to illustrate the
effectiveness of the proposed method.
\section{Simulations}\label{S5}
In this section we have considered some simulation examples.
Examples \ref{ex2}-\ref{ex4} compare the conventional, the modified
LMS-PPIC and the modified PLMS-PPIC methods in three cases: balanced
channels, unbalanced channels and time varying channels. In all
examples, the receivers have only the quarter of each channel phase.
Example \ref{ex2} is given to compare the modified LMS-PPIC and the
PLMS-PPIC in the case of balanced channels.
\begin{example}{\it Balanced channels}:
\label{ex2}
\begin{table}
\caption{Channel phase estimate of the first user (example
\ref{ex2})} \label{tabex5} \centerline{{
\begin{tabular}{|c|c|c|c|c|}
\hline
\multirow{6}{*}{\rotatebox{90}{$\phi_m=\frac{3\pi}{8},M=15~~$}} & N(Iteration) & Stage Number& NLMS & PNLMS \\
&&&&\\
\cline{2-5} & \multirow{2}{*}{64}& s = 2 & $\hat{\phi}^s_m=\frac{3.24\pi}{8}$ & $\hat{\phi}^s_m=\frac{3.18\pi}{8}$ \\
\cline{3-5} & & s = 3 & $\hat{\phi}^s_m=\frac{3.24\pi}{8}$ & $\hat{\phi}^s_m=\frac{3.18\pi}{8}$ \\
\cline{2-5} & \multirow{2}{*}{256}& s = 2 & $\hat{\phi}^s_m=\frac{2.85\pi}{8}$ & $\hat{\phi}^s_m=\frac{2.88\pi}{8}$ \\
\cline{3-5} & & s = 3 & $\hat{\phi}^s_m=\frac{2.85\pi}{8}$ & $\hat{\phi}^s_m=\frac{2.88\pi}{8}$ \\
\cline{2-5} \hline
\end{tabular} }}
\end{table}
Consider the system model (\ref{e7}) in which $M$ users
synchronously send their bits to the receiver through their
channels. It is assumed that each user's information consists of
codes of length $N$. It is also assumd that the signal to noise
ratio (SNR) is 0dB. In this example there is no power-unbalanced or
channel loss is assumed. The step-size of the NLMS algorithm in
modified LMS-PPIC method is $\mu=0.1(1-\sqrt{\frac{M-1}{M}})$ and
the set of step-sizes of the parallel NLMS algorithms in modified
PLMS-PPIC method are
$\Theta=\{0.01,0.05,0.1,0.2,\cdots,1\}(1-\sqrt{\frac{M-1}{M}})$,
i.e. $\mu_1=0.01(1-\sqrt{\frac{M-1}{M}}),\cdots,
\mu_4=0.2(1-\sqrt{\frac{M-1}{M}}),\cdots,
\mu_{12}=(1-\sqrt{\frac{M-1}{M}})$. Figure~\ref{Figexp1NonCoh}
illustrates the bit error rate (BER) for the case of two stages and
for $N=64$ and $N=256$. Simulations also show that there is no
remarkable difference between the results of the two-stage and three-stage
scenarios. Table~\ref{tabex5} compares the average channel phase
estimate of the first user in each stage and over $10$ runs of
modified LMS-PPIC and PLMS-PPIC, when the number of users is
$M=15$.
\end{example}
Although LMS-PPIC and PLMS-PPIC, as well as their modified versions,
are structured based on the assumption of no near-far problem,
examples \ref{ex3} and \ref{ex4} show that these methods, and especially the
second one, have remarkable performance in the cases of unbalanced
and/or time-varying channels.
\begin{example}{\it Unbalanced channels}:
\label{ex3}
\begin{table}
\caption{Channel phase estimate of the first user (example
\ref{ex3})} \label{tabex6} \centerline{{
\begin{tabular}{|c|c|c|c|c|}
\hline
\multirow{6}{*}{\rotatebox{90}{$\phi_m=\frac{3\pi}{8},M=15~~$}} & N(Iteration) & Stage Number& NLMS & PNLMS \\
&&&&\\
\cline{2-5} & \multirow{2}{*}{64}& s=2 & $\hat{\phi}^s_m=\frac{2.45\pi}{8}$ & $\hat{\phi}^s_m=\frac{2.36\pi}{8}$ \\
\cline{3-5} & & s=3 & $\hat{\phi}^s_m=\frac{2.71\pi}{8}$ & $\hat{\phi}^s_m=\frac{2.80\pi}{8}$ \\
\cline{2-5} & \multirow{2}{*}{256}& s=2 & $\hat{\phi}^s_m=\frac{3.09\pi}{8}$ & $\hat{\phi}^s_m=\frac{2.86\pi}{8}$ \\
\cline{3-5} & & s=3 & $\hat{\phi}^s_m=\frac{2.93\pi}{8}$ & $\hat{\phi}^s_m=\frac{3.01\pi}{8}$ \\
\cline{2-5} \hline
\end{tabular} }}
\end{table}
Consider example \ref{ex2} with power unbalance and/or channel loss
in the transmission system, i.e. the true model at stage $s$ is
\begin{equation}
\label{ve7} r(n)=\sum\limits_{m=1}^{M}\beta_m
w^s_m\alpha^{(s-1)}_m p_m(n)+v(n),
\end{equation}
where $0<\beta_m\leq 1$ for all $1\leq m \leq M$. Both the LMS-PPIC
and the PLMS-PPIC methods assume the model (\ref{e7}), and their
estimations are based on observations $\{r(n),X^s(n)\}$, instead of
$\{r(n),\mathbf{G}X^s(n)\}$, where the channel gain matrix is
$\mathbf{G}=\mbox{diag}(\beta_1,\beta_2,\cdots,\beta_M)$. In this
case we repeat example \ref{ex2}, randomly drawing each diagonal
element of $\mathbf{G}$ from $[0,0.3]$. Figure~\ref{Figexp2NonCoh} illustrates the BER
versus the number of users. Table~\ref{tabex6} compares the channel
phase estimate of the first user in each stage and over $10$ runs of
modified LMS-PPIC and modified PLMS-PPIC for $M=15$.
\end{example}
\begin{example}
\label{ex4} {\it Time varying channels}: Consider example \ref{ex2}
with time varying Rayleigh fading channels. In this case we assume
the maximum Doppler shift of $40$~Hz, the three-tap
frequency-selective channel with delay vector of $\{2\times
10^{-6},2.5\times 10^{-6},3\times 10^{-6}\}$sec and gain vector of
$\{-5,-3,-10\}$dB. Figure~\ref{Figexp3NonCoh} shows the average BER
over all users versus $M$ and using two stages.
\end{example}
\section{Conclusion}\label{S6}
In this paper, parallel interference cancelation using an adaptive
multistage structure and employing a set of NLMS algorithms with
different step-sizes is proposed for the case where just the quarter of the
channel phase of each user is known. The original algorithm was
proposed for coherent transmission with full information on the channel
phases in \cite{cohpaper}; this paper is a modification of the
previously proposed algorithm. Simulation results show that the new
method has remarkable performance in different scenarios,
including Rayleigh fading channels, even if the channel is
unbalanced.
| {'timestamp': '2007-10-23T02:36:00', 'yymm': '0710', 'arxiv_id': '0710.4172', 'language': 'en', 'url': 'https://arxiv.org/abs/0710.4172'} |
\section{Introduction}\label{sec:intro}
\IEEEPARstart{H}{uman} action recognition is a fast developing research area due to its wide applications
in intelligent surveillance, human-computer interaction, robotics, and so on.
In recent years, human activity analysis based on human skeletal data has attracted a lot of attention,
and various methods for feature extraction and classifier learning have been developed for skeleton-based action recognition \cite{zhu2016handcrafted,presti20163d,han2016review}.
A hidden Markov model (HMM) is utilized by Xia {\emph{et~al.}}~ \cite{HOJ3D} to model the temporal dynamics over a histogram-based representation of joint positions for action recognition.
The static postures and dynamics of the motion patterns are represented via eigenjoints by Yang and Tian \cite{eigenjointsJournal}.
A Naive-Bayes-Nearest-Neighbor classifier learning approach is also used by \cite{eigenjointsJournal}.
Vemulapalli {\emph{et~al.}}~ \cite{vemulapalli2014liegroup} represent the skeleton configurations and action patterns as points and curves in a Lie group,
and then a SVM classifier is adopted to classify the actions.
Evangelidis {\emph{et~al.}}~ \cite{skeletalQuads} propose to learn a GMM over the Fisher kernel representation of the skeletal quads feature.
An angular body configuration representation over the tree-structured set of joints is proposed in \cite{hog2-ohnbar}.
A skeleton-based dictionary learning method using geometry constraint and group sparsity is also introduced in \cite{Luo_2013_ICCV}.
Recently, recurrent neural networks (RNNs) which can handle the sequential data with variable lengths \cite{graves2013speechICASSP,sutskever2014sequence},
have shown their strength in language modeling \cite{mikolov2011extensions,sundermeyer2012lstm,mesnil2013investigation},
image captioning \cite{vinyals2015show,xu2015show},
video analysis \cite{srivastava2015unsupervised,Singh_2016_CVPR,Jain_2016_CVPR,Alahi_2016_CVPR,Deng_2016_CVPR,Ibrahim_2016_CVPR,Ma_2016_CVPR,Ni_2016_CVPR,li2016online},
and RGB-based activity recognition \cite{yue2015beyond,donahue2015long,li2016action,wu2015ACMMM}.
Applications of these networks have also shown promising achievements in skeleton-based action recognition \cite{du2015hierarchical,veeriah2015differential,nturgbd}.
In the current skeleton-based action recognition literature, RNNs are mainly used to model the long-term context information across the temporal dimension by representing motion-based dynamics.
However, there are often strong dependency relations among the skeletal joints in the spatial domain as well,
and the spatial dependency structure is usually discriminative for action classification.
To model the dynamics and dependency relations in both temporal and spatial domains,
we propose a spatio-temporal long short-term memory (ST-LSTM) network in this paper.
In our ST-LSTM network,
each joint can receive context information from its stored data from previous frames and also from the neighboring joints at the same time frame to represent its incoming spatio-temporal context.
Feeding a simple chain of joints to a sequence learner limits the performance of the network,
as the human skeletal joints are not semantically arranged as a chain.
Instead, the adjacency configuration of the joints in the skeletal data can be better represented by a tree structure.
Consequently, we propose a traversal procedure by following the tree structure of the skeleton
to exploit the kinematic relationship among the body joints for better modeling spatial dependencies.
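
Such a traversal can be written as a depth-first walk that returns through each parent, so that consecutive elements of the resulting joint sequence are always physically adjacent in the skeleton. The sketch below uses an illustrative four-joint adjacency rather than the full joint set used in our experiments.
\begin{verbatim}
def tree_traversal(adj, root=0):
    # depth-first walk over the skeletal tree that comes back through
    # the parent after each subtree, so consecutive joints in the
    # output sequence are always adjacent in the skeleton
    order = [root]
    def visit(node, parent):
        for child in adj[node]:
            if child != parent:
                order.append(child)
                visit(child, node)
                order.append(node)
    visit(root, None)
    return order

# toy skeleton: 0 torso, 1 neck, 2 hip, 3 head
adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
print(tree_traversal(adj))  # [0, 1, 3, 1, 0, 2, 0]
\end{verbatim}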
Since the 3D positions of skeletal joints provided by depth sensors are not always very accurate,
we further introduce a new gating framework, the so-called ``trust gate'',
for our ST-LSTM network to analyze the reliability of the input data at each spatio-temporal step.
The proposed trust gate gives better insight to the ST-LSTM network about
when and how to update, forget, or remember the internal memory content as the representation of the long-term context information.
In addition, we introduce a feature fusion method within the ST-LSTM unit to better exploit the multi-modal features extracted for each joint.
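
The underlying recurrence can be sketched compactly. The cell below is an assumed simplified form with an input gate, an output gate, and two forget gates, one over the spatial step (previous joint in the traversal, same frame) and one over the temporal step (same joint, previous frame); the exact gating used in this work, including the trust gate and the feature fusion, is specified in Section~\ref{sec:approach}.
\begin{verbatim}
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class STLSTMCell:
    # simplified ST-LSTM step (an assumed form): the cell state mixes
    # the new input with BOTH a spatial and a temporal predecessor,
    # each with its own forget gate
    def __init__(self, d_in, d_hid, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.1, (5 * d_hid, d_in + 2 * d_hid))
        self.b = np.zeros(5 * d_hid)

    def step(self, x, h_s, c_s, h_t, c_t):
        z = self.W @ np.concatenate([x, h_s, h_t]) + self.b
        i, f_s, f_t, o, u = np.split(z, 5)
        c = (sigmoid(i) * np.tanh(u)
             + sigmoid(f_s) * c_s           # forget gate over space
             + sigmoid(f_t) * c_t)          # forget gate over time
        return sigmoid(o) * np.tanh(c), c

# sweep over frames t and traversal-ordered joints j
cell, d = STLSTMCell(3, 16), 16
order = [0, 1, 3, 1, 0, 2, 0]               # from the traversal sketch
states = {}                                 # (joint, frame) -> (h, c)
for t in range(4):
    h_s, c_s = np.zeros(d), np.zeros(d)
    for j in order:
        h_t, c_t = states.get((j, t - 1), (np.zeros(d), np.zeros(d)))
        x = np.zeros(3)                     # 3D joint coordinates here
        h_s, c_s = cell.step(x, h_s, c_s, h_t, c_t)
        states[(j, t)] = (h_s, c_s)
\end{verbatim}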
We summarize the main contributions of this paper as follows.
(1) A novel spatio-temporal LSTM (ST-LSTM) network for skeleton-based action recognition is designed.
(2) A tree traversal technique is proposed to feed the structured human skeletal data into a sequential LSTM network.
(3) The functionality of the ST-LSTM framework is further extended by adding the proposed ``trust gate''.
(4) A multi-modal feature fusion strategy within the ST-LSTM unit is introduced.
(5) The proposed method achieves state-of-the-art performance on seven benchmark datasets.
The remainder of this paper is organized as follows.
In section \ref{sec:relatedwork}, we introduce the related works on skeleton-based action recognition, which used recurrent neural networks to model the temporal dynamics.
In section \ref{sec:approach}, we introduce our end-to-end trainable spatio-temporal recurrent neural network for action recognition.
The experiments are presented in section \ref{sec:exp}.
Finally, the paper is concluded in section \ref{sec:conclusion}.
\section{Related Work}
\label{sec:relatedwork}
Skeleton-based action recognition has been explored in different aspects during recent years \cite{7284883,actionletPAMI,MMMP_PAMI,MMTW,Vemulapalli_2016_CVPR,rahmani2014real,shahroudy2014multi,rahmani2015learning,lillo2014discriminative,
jhuang2013towards,
chen_2016_icassp,liu2016IVC,cai2016TMM,al2016PRL,Tao_2015_ICCV_Workshops
}.
In this section, we limit our review to the more recent approaches which use RNNs or LSTMs for human activity analysis.
Du {\emph{et~al.}}~ \cite{du2015hierarchical} proposed a Hierarchical RNN network by utilizing multiple bidirectional RNNs in a novel hierarchical fashion.
The human skeletal structure was divided into five major joint groups.
Then each group was fed into the corresponding bidirectional RNN.
The outputs of the RNNs were concatenated to represent the upper body and lower body,
then each was further fed into another set of RNNs.
By concatenating the outputs of two RNNs, the global body representation was obtained, which was fed to the next RNN layer.
Finally, a softmax classifier was used in \cite{du2015hierarchical} to perform action classification.
Veeriah {\emph{et~al.}}~ \cite{veeriah2015differential} proposed to add a new gating mechanism for LSTM to model the derivatives of the memory states and explore the salient action patterns.
In this method, all of the input features were concatenated at each frame and were fed to the differential LSTM at each step.
Zhu {\emph{et~al.}}~ \cite{zhu2016co} introduced a regularization term to the objective function of the LSTM
network to push the entire framework towards learning co-occurrence relations among the joints for action recognition.
An internal dropout \cite{dropout} technique within the LSTM unit was also introduced in \cite{zhu2016co}.
Shahroudy {\emph{et~al.}}~ \cite{nturgbd} proposed to split the LSTM's memory cell into sub-cells to push the network towards learning the context representations for each body part separately.
The output of the network was learned by concatenating the multiple memory sub-cells.
Harvey and Pal \cite{harvey2015semi} adopted an encoder-decoder recurrent network to reconstruct the skeleton sequence and perform action classification at the same time.
Their model showed promising results on motion capture sequences.
Mahasseni and Todorovic \cite{mahasseni2016regularizing} proposed to use LSTM to encode a skeleton sequence as a feature vector.
At each step, the input of the LSTM consists of the concatenation of the skeletal joints' 3D locations in a frame.
They further constructed a feature manifold by using a set of encoded feature vectors.
Finally, the manifold was used to assist and regularize the supervised learning of another LSTM for RGB video based action recognition.
Different from the aforementioned works,
our proposed method does not simply concatenate the joint-based input features to build the body-level feature representation.
Instead, the dependencies between the skeletal joints are explicitly modeled by applying recurrent analysis over temporal and spatial dimensions concurrently.
Furthermore, a novel trust gate is introduced to make our ST-LSTM network more reliable against the noisy input data.
This paper is an extension of our preliminary conference version \cite{liu2016spatio}.
In \cite{liu2016spatio}, we validated the effectiveness of our model on four benchmark datasets.
In this paper, we extensively evaluate our model on seven challenging datasets.
Besides, we further propose an effective feature fusion strategy inside the ST-LSTM unit.
In order to improve the learning ability of our ST-LSTM network, a last-to-first link scheme is also introduced.
In addition, we provide more empirical analysis of the proposed framework.
\section{Spatio-Temporal Recurrent Networks}
\label{sec:approach}
In a generic skeleton-based action recognition problem, the input observations are limited to the 3D locations of the major body joints at each frame.
Recurrent neural networks have been successfully applied to
this problem recently \cite{du2015hierarchical,zhu2016co,nturgbd}.
LSTM networks \cite{lstm} are among the most successful extensions of recurrent neural networks.
A gating mechanism controlling the contents of an internal memory cell is adopted by the LSTM model
to learn a better and more complex representation of long-term dependencies in the input sequential data.
Consequently, LSTM networks are very suitable for feature learning over time series data (such as human skeletal sequences over time).
We will briefly review the original LSTM model in this section,
and then introduce our ST-LSTM network and the tree-structure based traversal approach.
We will also introduce a new gating mechanism for ST-LSTM to handle the noisy measurements in the input data for better action recognition.
Finally, an internal feature fusion strategy for ST-LSTM will be proposed.
\subsection{Temporal Modeling with LSTM}
\label{sec:approach:lstm}
In the standard LSTM model, each recurrent unit contains an input gate $i_t$, a forget gate $f_t$, an output gate $o_t$, and an internal memory cell state $c_t$, together with a hidden state $h_t$.
The input gate $i_{t}$ controls the contributions of the newly arrived input data at time step $t$ for updating the memory cell,
while the forget gate $f_{t}$ determines how much the contents of the previous state $(c_{t-1})$ contribute to deriving the current state $(c_{t})$.
The output gate $o_{t}$ learns how the output of the LSTM unit at current time step should be derived from the current state of the internal memory cell.
These gates and states can be obtained as follows:
\begin{eqnarray}
\left(
\begin{array}{ccc}
i_{t} \\
f_{t} \\
o_{t} \\
u_{t} \\
\end{array}
\right)
&=&
\left(
\begin{array}{ccc}
\sigma \\
\sigma \\
\sigma \\
\tanh \\
\end{array}
\right)
\left(
M
\left(
\begin{array}{ccc}
x_{t} \\
h_{t-1} \\
\end{array}
\right)
\right)\\
c_{t} &=& i_{t} \odot u_{t} + f_{t} \odot c_{t-1}
\label{eq:ct}\\
h_{t} &=& o_{t} \odot \tanh( c_{t})
\label{eq:ht}
\end{eqnarray}
where $x_t$ is the input at time step $t$, $u_t$ is the modulated input, $\odot$ denotes the element-wise product,
and $M: \mathbb{R}^{D+d} \to \mathbb{R}^{4d}$ is an affine transformation.
$d$ is the size of the internal memory cell, and $D$ is the dimension of $x_t$.
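For concreteness, a minimal NumPy sketch of a single LSTM step following the equations above is given below; it is an illustration only (biases are omitted for brevity, and all function and variable names are ours):
\begin{verbatim}
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def lstm_step(x, h_prev, c_prev, M):
    # M plays the role of the affine transformation above
    # (bias omitted): it maps R^{D+d} to R^{4d}.
    z = M @ np.concatenate([x, h_prev])
    i, f, o, u = np.split(z, 4)
    i, f, o, u = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(u)
    c = i * u + f * c_prev      # memory cell update (c_t)
    h = o * np.tanh(c)          # hidden state (h_t)
    return h, c
\end{verbatim}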
\subsection{Spatio-Temporal LSTM}
\label{sec:approach:stlstm}
\begin{figure}
\begin{minipage}[b]{1.0\linewidth}
\centering
\centerline{\includegraphics[scale=.338]{STLSTM.pdf}}
\end{minipage}
\caption{
Illustration of the spatio-temporal LSTM network.
In temporal dimension, the corresponding body joints are fed over the frames.
In spatial dimension, the skeletal joints in each frame are fed as a sequence.
Each unit receives the hidden representation of the previous joints and the same joint from previous frames.}
\label{fig:STLSTM}
\end{figure}
RNNs have already shown their strengths in modeling the complex dynamics of human activities as time series data,
and achieved promising performance in skeleton-based human action recognition \cite{du2015hierarchical,zhu2016co,veeriah2015differential,nturgbd}.
In the existing literature, RNNs are mainly utilized in temporal domain to discover the discriminative dynamics and motion patterns for action recognition.
However, there is also discriminative spatial information encoded in the joints' locations and posture configurations at each video frame,
and the sequential nature of the body joints makes it possible to apply RNN-based modeling to spatial domain as well.
Different from the existing methods which concatenate the joints' information as the entire body's representation,
we extend the recurrent analysis to spatial domain by discovering the spatial dependency patterns among different body joints.
We propose a spatio-temporal LSTM (ST-LSTM) network to simultaneously model the temporal dependencies among different frames and also the spatial dependencies of different joints at the same frame.
Each ST-LSTM unit, which corresponds to one of the body joints,
receives the hidden representation of its own joint from the previous time step
and also the hidden representation of its previous joint at the current frame.
A schema of this model is illustrated in \figurename{ \ref{fig:STLSTM}}.
In this section, we assume the joints are arranged in a simple chain sequence, and the order is depicted in \figurename{ \ref{fig:tree16joints}(a)}.
In section \ref{sec:approach:skeltree}, we will introduce a more advanced traversal scheme to take advantage of the adjacency structure among the skeletal joints.
We use $j$ and $t$ to respectively denote the indices of joints and frames,
where $j \in \{1,...,J\}$ and $t \in \{1,...,T\}$.
Each ST-LSTM unit is fed with the input ($x_{j, t}$, the information of the corresponding joint at current time step),
the hidden representation of the previous joint at current time step $(h_{j-1,t})$,
and the hidden representation of the same joint at the previous time step $(h_{j,t-1})$.
As depicted in \figurename{ \ref{fig:STLSTMFig}},
each unit also has two forget gates, $f_{j, t}^{T}$ and $f_{j, t}^{S}$, to handle the two sources of context information in temporal and spatial dimensions, respectively.
The transition equations of ST-LSTM are formulated as follows:
\begin{eqnarray}
\left(
\begin{array}{ccc}
i_{j, t} \\
f_{j, t}^{S} \\
f_{j, t}^{T} \\
o_{j, t} \\
u_{j, t} \\
\end{array}
\right)
&=&
\left(
\begin{array}{ccc}
\sigma \\
\sigma \\
\sigma \\
\sigma \\
\tanh \\
\end{array}
\right)
\left(
M
\left(
\begin{array}{ccc}
x_{j, t} \\
h_{j-1, t} \\
h_{j, t-1} \\
\end{array}
\right)
\right)
\\
c_{j, t} &=& i_{j, t} \odot u_{j, t} + f_{j, t}^{S} \odot c_{j-1, t} + f_{j, t}^{T} \odot c_{j, t-1}
\\
h_{j, t} &=& o_{j, t} \odot \tanh( c_{j, t})
\end{eqnarray}
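Compared with the standard LSTM step of section \ref{sec:approach:lstm}, the only structural changes are the second recurrent input and the extra forget gate. A minimal sketch under the same conventions as before (biases omitted, illustrative names):
\begin{verbatim}
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def st_lstm_step(x, h_prev_joint, h_prev_time,
                 c_prev_joint, c_prev_time, M):
    # M maps the concatenation [x; h_{j-1,t}; h_{j,t-1}]
    # to the five pre-activations (i, f^S, f^T, o, u).
    z = M @ np.concatenate([x, h_prev_joint, h_prev_time])
    i, fS, fT, o, u = np.split(z, 5)
    i, fS, fT, o = sigmoid(i), sigmoid(fS), sigmoid(fT), sigmoid(o)
    u = np.tanh(u)
    # Separate forget gates for the spatial and temporal contexts.
    c = i * u + fS * c_prev_joint + fT * c_prev_time
    h = o * np.tanh(c)
    return h, c
\end{verbatim}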
\begin{figure}
\centerline{\includegraphics[scale=0.479]{STLSTMFig.pdf}}
\caption{Illustration of the proposed ST-LSTM with one unit.}
\label{fig:STLSTMFig}
\end{figure}
\subsection{Tree-Structure Based Traversal}
\label{sec:approach:skeltree}
\begin{figure}
\begin{minipage}[b]{0.32\linewidth}
\centering
\centerline{\includegraphics[scale=.27]{Skeleton16Joints.pdf}}
\centerline{(a)}
\end{minipage}
\begin{minipage}[b]{0.63\linewidth}
\centering
\centerline{\includegraphics[scale=.27]{Tree16Joints.pdf}}
\centerline{(b)}
\end{minipage}
\begin{minipage}[b]{0.99\linewidth}
\centering
\centerline{\includegraphics[scale=.27]{BiTree16Joints.pdf}}
\centerline{(c)}
\end{minipage}
\caption{(a) The skeleton of the human body. In the simple joint chain model, the joint visiting order is 1-2-3-...-16.
(b) The skeleton is transformed to a tree structure.
(c) The tree traversal scheme. The tree structure can be unfolded to a chain with the traversal scheme, and the joint visiting order is 1-2-3-2-4-5-6-5-4-2-7-8-9-8-7-2-1-10-11-12-13-12-11-10-14-15-16-15-14-10-1.}
\label{fig:tree16joints}
\end{figure}
Arranging the skeletal joints in a simple chain order ignores the kinematic interdependencies among the body joints.
Moreover, it adds several semantically false connections between joints which are not strongly related.
The body joints are commonly represented as a tree-based pictorial structure \cite{zou2009automatic,yang2011articulated} in human parsing,
as shown in \figurename{ \ref{fig:tree16joints}(b)}.
It is beneficial to utilize the known interdependency relations between various sets of body joints as an adjacency tree structure inside our ST-LSTM network as well.
For instance, the hidden representation of the neck joint (joint 2 in \figurename{ \ref{fig:tree16joints}(a)})
is often more informative for the right hand joints (7, 8, and 9) compared to the joint 6, which lies before them in the numerically ordered chain-like model.
Although a tree structure is a more natural representation of the skeletal data, trees cannot be directly fed into the current form of the proposed ST-LSTM network.
In order to mitigate the aforementioned limitation, a bidirectional tree traversal scheme is proposed.
In this scheme, the joints are visited in a sequence, while the adjacency information in the skeletal tree structure will be maintained.
At the first spatial step, the root node (central spine joint in \figurename{ \ref{fig:tree16joints}(c)}) is fed to our network.
Then the network follows the depth-first traversal order in the spatial (skeleton tree) domain.
Upon reaching a leaf node, the traversal backtracks in the tree.
Finally, the traversal goes back to the root node.
In our traversal scheme, each connection in the tree is met twice,
thus it guarantees the transmission of the context data in both top-down and bottom-up directions within the adjacency tree structure.
In other words, each node (joint) can obtain the context information from both its ancestors and descendants in the hierarchy defined by the tree structure.
Compared to the simple joint chain order described in section \ref{sec:approach:stlstm},
this tree traversal strategy, which takes advantage of the joints' adjacency structure, can discover stronger long-term spatial dependency patterns in the skeleton sequence.
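The traversal itself reduces to a depth-first walk that re-emits a node after each of its subtrees, so that every edge is crossed twice. The short sketch below illustrates this; the adjacency dictionary is our reading of the skeleton tree in \figurename{ \ref{fig:tree16joints}(b)}, and with it the function reproduces exactly the visiting order listed in the caption of \figurename{ \ref{fig:tree16joints}}:
\begin{verbatim}
def tree_traversal(children, root):
    # Emit the root, then re-emit it after each child's subtree,
    # so that every tree edge is visited in both directions.
    order = [root]
    for child in children.get(root, []):
        order += tree_traversal(children, child)
        order.append(root)
    return order

# Adjacency inferred from the skeleton tree in the figure.
children = {1: [2, 10], 2: [3, 4, 7], 4: [5], 5: [6],
            7: [8], 8: [9], 10: [11, 14], 11: [12],
            12: [13], 14: [15], 15: [16]}
print(tree_traversal(children, 1))
# visiting order: 1-2-3-2-4-5-6-5-4-2-7-8-9-8-7-2-1-
# 10-11-12-13-12-11-10-14-15-16-15-14-10-1
\end{verbatim}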
Our framework's representation capacity can be further improved by stacking multiple layers of the tree-structured ST-LSTMs and making the network deeper, as shown in \figurename{ \ref{fig:stackedTreeSTLSTM}}.
It is worth noting that at each step of our ST-LSTM framework,
the input is limited to the information of a single joint at a time step,
and its dimension is much smaller compared to the concatenated input features used by other existing methods.
Therefore, our network has far fewer parameters to learn.
This can be regarded as a weight sharing regularization for our learning model,
which leads to better generalization in the scenarios with relatively small sets of training samples.
This is an important advantage for skeleton-based action recognition, since the numbers of training samples in most existing datasets are limited.
\begin{figure}
\begin{minipage}[b]{0.99\linewidth}
\centering
\centerline{\includegraphics[scale=.38]{StackedTreeSTLSTM.pdf}}
\end{minipage}
\caption{
Illustration of the deep tree-structured ST-LSTM network.
For clarity, some arrows are omitted in this figure.
The hidden representation of the first ST-LSTM layer is fed to the second ST-LSTM layer as its input.
The second ST-LSTM layer's hidden representation is fed to the softmax layer for classification.
}
\label{fig:stackedTreeSTLSTM}
\end{figure}
\subsection{Spatio-Temporal LSTM with Trust Gates}
\label{sec:approach:trustgate}
In our proposed tree-structured ST-LSTM network, the inputs are the positions of body joints provided by depth sensors (such as Kinect),
which are not always accurate because of noisy measurements and occlusion.
The unreliable inputs can degrade the performance of the network.
To circumvent this difficulty, we propose to add a novel additional gate to our ST-LSTM network to analyze the reliability of the input measurements based on the derived estimations of the input from the available context information at each spatio-temporal step.
Our gating scheme is inspired by works in natural language processing \cite{sutskever2014sequence},
which use the LSTM representation of the previous words at each step to predict the next word.
As there are often strong dependency relations among the words in a sentence, this idea works well.
Similarly, in a skeletal sequence, the neighboring body joints often move together,
and this articulated motion follows common yet complex patterns,
thus the input data $x_{j,t}$ is expected to be predictable by using the contextual information ($h_{j-1,t}$ and $h_{j,t-1}$) at each spatio-temporal step.
Inspired by this predictability concept, we add a new mechanism to our ST-LSTM that calculates a prediction of the input at each step and compares it with the actual input.
The amount of estimation error is then used to learn a new ``trust gate''.
The activation of this new gate can be used to assist the ST-LSTM network to learn better decisions about when and how to remember or forget the contents in the memory cell.
For instance, if the trust gate learns that the current joint has wrong measurements,
then this gate can block the input gate and prevent the memory cell from being altered by the current unreliable input data.
Concretely, we introduce a function to produce a prediction of the input at step $(j,t)$ based on the available context information as:
\begin{equation}
p_{j, t} = \tanh
\left(
M_{p}
\left(
\begin{array}{ccc}
h_{j-1, t} \\
h_{j, t-1} \\
\end{array}
\right)
\right)
\label{eq:p_j_t}
\end{equation}
where $M_p$ is an affine transformation mapping the data from $\mathbb{R}^{2d}$ to $\mathbb{R}^d$, thus the dimension of $p_{j,t}$ is $d$.
Note that the context information at each step does not only contain the representation of the previous temporal step,
but also the hidden state of the previous spatial step.
This indicates that the long-term context information of both the same joint at previous frames and the other visited joints at the current frame are seamlessly incorporated.
Thus this function is expected to be capable of generating reasonable predictions.
In our proposed network, the activation of trust gate is a vector in $\mathbb{R}^d$ (similar to the activation of input gate and forget gate).
The trust gate $\tau_{j, t}$ is calculated as follows:
\begin{eqnarray}
x'_{j, t} &=& \tanh
\left(
M_{x}
\left(
x_{j, t}
\right)
\right)
\label{eq:x_prime_j_t}
\\
\tau_{j, t} &=& G (p_{j, t} - x'_{j, t})
\label{eq:tau}
\end{eqnarray}
where $M_x: \mathbb{R}^{D} \to \mathbb{R}^{d}$ is an affine transformation.
The activation function $G(\cdot)$ is an element-wise operation calculated as $G(z) = \exp(-\lambda z^{2})$,
where $\lambda > 0$ is a parameter controlling the bandwidth of the Gaussian function.
$G(z)$ produces a small response if $z$ has a large absolute value and a large response when $z$ is close to zero.
Adding the proposed trust gate, the cell state of ST-LSTM will be updated as:
\begin{eqnarray}
c_{j, t} &=& \tau_{j, t} \odot i_{j, t} \odot u_{j, t}
\nonumber\\
&&+ (\bold{1} - \tau_{j, t}) \odot f_{j, t}^{S} \odot c_{j-1, t}
\nonumber\\
&&+ (\bold{1} - \tau_{j, t}) \odot f_{j, t}^{T} \odot c_{j, t-1}
\end{eqnarray}
This equation can be explained as follows:
(1) if the input $x_{j,t}$ is not trusted (due to the noise or occlusion),
then our network relies more on its history information, and tries to block the new input at this step;
(2) on the contrary, if the input is reliable, then the memory cell is updated based on the new input data.
The proposed ST-LSTM unit equipped with trust gate is illustrated in \figurename{ \ref{fig:TrustGateSTLSTMFig}}.
The concept of the proposed trust gate technique is theoretically generic and can be used in other domains to handle noisy input information for recurrent network models.
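As a minimal sketch of the trust-gated cell update (the gates $i$, $u$, $f^{S}$, $f^{T}$ are computed exactly as in section \ref{sec:approach:stlstm}; parameter names are illustrative):
\begin{verbatim}
import numpy as np

def trust_gated_cell(x, h_prev_joint, h_prev_time,
                     i, u, fS, fT, c_prev_joint, c_prev_time,
                     Mp, Mx, lam=0.5):
    h_ctx = np.concatenate([h_prev_joint, h_prev_time])
    p = np.tanh(Mp @ h_ctx)   # context-based prediction of the input
    x_emb = np.tanh(Mx @ x)   # input mapped into the same d-dim space
    tau = np.exp(-lam * (p - x_emb) ** 2)  # G(z) = exp(-lambda z^2)
    # A small tau (large prediction error) blocks the new input and
    # keeps the history carried through the two forget gates.
    return (tau * i * u
            + (1 - tau) * fS * c_prev_joint
            + (1 - tau) * fT * c_prev_time)
\end{verbatim}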
\begin{figure}
\centerline{\includegraphics[scale=0.479]{TrustGateSTLSTMFig_X.pdf}}
\caption{Illustration of the proposed ST-LSTM with trust gate.}
\label{fig:TrustGateSTLSTMFig}
\end{figure}
\subsection{Feature Fusion within ST-LSTM Unit}
\label{sec:approach:innerfusion}
\begin{figure}
\centerline{\includegraphics[scale=0.469]{FusionSTLSTMFig.pdf}}
\caption{Illustration of the proposed structure for feature fusion inside the ST-LSTM unit.}
\label{fig:FusionSTLSTMFig}
\end{figure}
As mentioned above, at each spatio-temporal step, the positional information of the corresponding joint at the current frame is fed to our ST-LSTM network.
We refer to this joint-position-based feature as the geometric feature.
Besides utilizing the joint positions (3D coordinates),
we can also extract visual texture and motion features ({\emph{e.g.}}~ HOG, HOF \cite{dalal2006human,wang2011action}, or ConvNet-based features \cite{simonyan2014very,cheron2015p})
from the RGB frames, around each body joint as the complementary information.
This is intuitively effective for better human action representation, especially in the human-object interaction scenarios.
A naive way of combining the geometric and visual features for each joint is to concatenate them at the feature level
and feed them to the corresponding ST-LSTM unit as the network's input data.
However, the dimension of the geometric feature is very low intrinsically,
while the visual features are often in relatively higher dimensions.
Due to this inconsistency, simple concatenation of these two types of features in the input stage of the network causes degradation in the final performance of the entire model.
The work in \cite{nturgbd} feeds different body parts into the Part-aware LSTM \cite{nturgbd} separately,
and then assembles them inside the LSTM unit.
Inspired by this work, we propose to fuse the two types of features inside the ST-LSTM unit,
rather than simply concatenating them at the input level.
We use $x_{j,t}^{\mathcal{F}}$ (${\mathcal{F}} \in \{1,2\}$) to denote the geometric and visual features of joint $j$ at the $t$-th time step.
As illustrated in \figurename{ \ref{fig:FusionSTLSTMFig}}, at step $(j,t)$, the two features $(x_{j,t}^{1}$ and $x_{j,t}^{2})$ are fed to the ST-LSTM unit separately as the new input structure.
Inside the recurrent unit, we deploy two sets of gates, input gates $(i_{j,t}^{\mathcal{F}})$, forget gates with respect to time $(f_{j,t}^{T, \mathcal{F}})$ and space $(f_{j,t}^{S, \mathcal{F}})$, and also trust gates $(\tau_{j, t}^{\mathcal{F}})$, to deal with the two heterogeneous sets of modality features.
We put the two cell representations $(c_{j,t}^{\mathcal{F}})$ together to build up the multimodal context information of the two sets of modality features.
Finally, the output of each ST-LSTM unit is calculated based on the multimodal context representations,
and controlled by the output gate $(o_{j,t})$ which is shared for the two sets of features.
For the features of each modality, it is efficient and intuitive to model their context information independently.
However, we argue that the representation ability of each modality-based set of features can be strengthened by borrowing information from the other set of features.
Thus, the proposed structure does not completely separate the modeling of multimodal features.
Let us take the geometric feature as an example.
Its input gate, forget gates, and trust gate are all calculated from the new input $(x_{j,t}^{1})$ and hidden representations $(h_{j,t-1}$ and $h_{j-1,t})$,
whereas each hidden representation is an associated representation of the two features' context information from previous steps.
Assisted by visual features' context information,
the input gate, forget gates, and also trust gate for geometric feature can effectively learn how to update its current cell state $(c_{j,t}^{1})$.
Specifically, for the new geometric feature input $(x_{j,t}^{1})$,
we expect the network to produce a better prediction when it is not only based on the context of the geometric features, but also assisted by the context of visual features.
Therefore, the trust gate $(\tau_{j, t}^{1})$ will have stronger ability to assess the reliability of the new input data $(x_{j,t}^{1})$.
The proposed ST-LSTM with integrated multimodal feature fusion is formulated as:
\begin{eqnarray}
\left(
\begin{array}{ccc}
i_{j, t}^\mathcal{F} \\
f_{j, t}^{S,\mathcal{F}} \\
f_{j, t}^{T,\mathcal{F}} \\
u_{j, t}^\mathcal{F} \\
\end{array}
\right)
&=&
\left(
\begin{array}{ccc}
\sigma \\
\sigma \\
\sigma \\
\tanh \\
\end{array}
\right)
\left(
M^\mathcal{F}
\left(
\begin{array}{ccc}
x_{j, t}^\mathcal{F} \\
h_{j-1, t} \\
h_{j, t-1} \\
\end{array}
\right)
\right)
\\
p_{j, t}^\mathcal{F} &=& \tanh
\left(
M_{p}^\mathcal{F}
\left(
\begin{array}{ccc}
h_{j-1, t} \\
h_{j, t-1} \\
\end{array}
\right)
\right)
\\
{x'}_{j, t}^\mathcal{F} &=& \tanh
\left(
M_{x}^\mathcal{F}
\left(
\begin{array}{ccc}
x_{j, t}^\mathcal{F}\\
\end{array}
\right)
\right)
\\
\tau_{j, t}^{\mathcal{F}} &=& G ({x'}_{j, t}^{\mathcal{F}} - p_{j, t}^{\mathcal{F}})
\\
c_{j, t}^{\mathcal{F}} &=& \tau_{j, t}^{\mathcal{F}} \odot i_{j, t}^{\mathcal{F}} \odot u_{j, t}^{\mathcal{F}}
\nonumber\\
&&+ (\bold{1} - \tau_{j, t}^{\mathcal{F}}) \odot f_{j, t}^{S,\mathcal{F}} \odot c_{j-1, t}^{\mathcal{F}}
\nonumber\\
&&+ (\bold{1} - \tau_{j, t}^{\mathcal{F}}) \odot f_{j, t}^{T,\mathcal{F}} \odot c_{j, t-1}^{\mathcal{F}}
\\
o_{j, t} &=& \sigma
\left(
M_{o}
\left(
\begin{array}{ccc}
x_{j, t}^{1} \\
x_{j, t}^{2} \\
h_{j-1, t} \\
h_{j, t-1} \\
\end{array}
\right)
\right)
\\
h_{j, t} &=& o_{j, t} \odot \tanh
\left(
\begin{array}{ccc}
c_{j, t}^{1} \\
c_{j, t}^{2} \\
\end{array}
\right)
\end{eqnarray}
\subsection{Learning the Classifier}
\label{sec:approach:learning}
As the labels are given at the video level, we feed them as the training outputs of our network at each spatio-temporal step.
A softmax layer is used by the network to predict the action class $\hat{y}$ among the given class set $Y$.
The prediction of the whole video can be obtained by averaging the prediction scores of all steps.
The objective function of our ST-LSTM network is as follows:
\begin{equation}
\mathcal{L} = \sum_{j=1}^J \sum_{t=1}^T l(\hat{y}_{j,t}, y)
\end{equation}
where $l(\hat{y}_{j,t}, y)$ is the negative log-likelihood loss \cite{graves2012supervised}
that measures the difference between the prediction result $\hat{y}_{j,t}$ at step $(j,t)$ and the true label $y$.
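The following PyTorch-style sketch illustrates this objective and the score-averaging prediction (our implementation actually uses Torch7; shapes and names here are illustrative):
\begin{verbatim}
import torch
import torch.nn.functional as F

def video_loss_and_prediction(step_scores, label):
    # step_scores: (J, T, |Y|) class scores emitted at every
    # spatio-temporal step; label: the video's class index.
    log_probs = F.log_softmax(step_scores, dim=-1)
    loss = -log_probs[:, :, label].sum()   # sum of per-step NLL
    # Video-level prediction: average the per-step scores.
    prediction = log_probs.exp().mean(dim=(0, 1)).argmax()
    return loss, prediction
\end{verbatim}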
The back-propagation through time (BPTT) algorithm \cite{graves2012supervised} is often effective for minimizing the objective function for the RNN/LSTM models.
As our ST-LSTM model involves both spatial and temporal steps, we adopt a modified version of BPTT for training.
The back-propagation runs over spatial and temporal steps simultaneously by starting at the last joint at the last frame.
To clarify the error accumulation in this procedure, we use $e_{j,t}^T$ and $e_{j,t}^S$ to denote the error back-propagated from step $(j,t+1)$ to $(j,t)$ and the error back-propagated from step $(j+1,t)$ to $(j,t)$, respectively.
Then the errors accumulated at step $(j,t)$ can be calculated as $e_{j,t}^T+e_{j,t}^S$.
Consequently, before back-propagating the error at each step, we should guarantee both its subsequent joint step and subsequent time step have already been computed.
The left-most units in our ST-LSTM network do not have preceding spatial units, as shown in \figurename{ \ref{fig:STLSTM}}.
To update the cell states of these units in the feed-forward stage,
a popular strategy is to input zero values into these nodes to substitute the hidden representations from the preceding nodes.
In our implementation, we link the last unit at the last time step to the first unit at the current time step.
We call this new connection the last-to-first link.
In the tree traversal, the first and last nodes refer to the same joint (the root node of the tree);
however, the last node contains holistic information about the human skeleton in the corresponding frame.
Linking the last node to the starting node at the next time step provides the starting node with the whole body structure configuration,
rather than initializing it with less effective zero values.
Thus, the network has better ability to learn the action patterns in the skeleton sequence.
\section{Experiments}
\label{sec:exp}
The proposed method is evaluated and empirically analyzed on seven benchmark datasets for which the coordinates of skeletal joints are provided.
These datasets are NTU RGB+D, UT-Kinect, SBU Interaction, SYSU-3D, ChaLearn Gesture, MSR Action3D, and Berkeley MHAD.
We conduct extensive experiments with different models to verify the effectiveness of the individual technical contributions, as follows:
(1) ``ST-LSTM (Joint Chain)''.
In this model, the joints are visited in a simple chain order, as shown in \figurename{ \ref{fig:tree16joints}(a)};
(2) ``ST-LSTM (Tree)''.
In this model, the tree traversal scheme illustrated in \figurename{ \ref{fig:tree16joints}(c)} is used to take advantage of the tree-based spatial structure of skeletal joints;
(3) ``ST-LSTM (Tree) + Trust Gate''.
This model uses the trust gate to handle the noisy input.
The input to every unit of our network at each spatio-temporal step is the location of the corresponding skeletal joint (i.e., the geometric feature) at the current time step.
We also use two of the datasets (NTU RGB+D dataset and UT-Kinect dataset) as examples
to evaluate the performance of our fusion model within the ST-LSTM unit by fusing the geometric and visual features.
These two datasets include human-object interactions (such as making a phone call and picking up something)
and the visual information around the major joints can be complementary to the geometric features for action recognition.
\subsection{Evaluation Datasets}
\label{sec:exp:datasets}
{\bf NTU RGB+D dataset} \cite{nturgbd} was captured with Kinect (v2).
It is currently the largest publicly available dataset for depth-based action recognition, which contains more than 56,000 video sequences and 4 million video frames.
The samples in this dataset were collected from 80 distinct viewpoints.
A total of 60 action classes (including daily actions, medical conditions, and pair actions) were performed by 40 different persons aged between 10 and 35.
This dataset is very challenging due to the large intra-class and viewpoint variations.
With a large number of samples, this dataset is highly suitable for deep learning based activity analysis.
The parameters learned on this dataset can also be used to initialize the models for smaller datasets to improve and speed up the training process of the network.
The 3D coordinates of 25 body joints are provided in this dataset.
{\bf UT-Kinect dataset} \cite{HOJ3D} was captured with a stationary Kinect sensor.
It contains 10 action classes.
Each action was performed twice by every subject.
The 3D locations of 20 skeletal joints are provided.
The significant intra-class and viewpoint variations make this dataset very challenging.
{\bf SBU Interaction dataset} \cite{yun2012two} was collected with Kinect.
It contains 8 classes of two-person interactions, and includes 282 skeleton sequences with 6822 frames.
Each body skeleton consists of 15 joints.
The major challenges of this dataset are:
(1) in most interactions, one subject is acting, while the other subject is reacting; and
(2) the 3D measurement accuracies of the joint coordinates are low in many sequences.
{\bf SYSU-3D dataset} \cite{jianfang_CVPR15} contains 480 sequences and was collected with Kinect.
In this dataset, 12 different activities were performed by 40 persons.
The 3D coordinates of 20 joints are provided in this dataset.
The SYSU-3D dataset is a very challenging benchmark because:
(1) the motion patterns are highly similar among different activities, and
(2) there are various viewpoints in this dataset.
{\bf ChaLearn Gesture dataset} \cite{escalera2013multi} consists of 23 hours of videos captured with Kinect.
A total of 20 Italian gestures were performed by 27 different subjects.
This dataset contains 955 long-duration videos and has predefined splits of samples as training, validation and testing sets.
Each skeleton in this dataset has 20 joints.
{\bf MSR Action3D dataset} \cite{li2010action} is widely used for depth-based action recognition.
It contains a total of 10 subjects and 20 actions.
Each action was performed by the same subject two or three times.
Each frame in this dataset contains 20 skeletal joints.
{\bf Berkeley MHAD dataset} \cite{ofli2013berkeley} was collected by using a motion capture network of sensors.
It contains 659 sequences and about 82 minutes of recording time.
Eleven action classes were performed by five female and seven male subjects.
The 3D coordinates of 35 skeletal joints are provided in each frame.
\subsection{Implementation Details}
\label{sec:exp:impdetails}
In our experiments, each video sequence is divided into $T$ sub-sequences of the same length, and one frame is randomly selected from each sub-sequence.
This sampling strategy has the following advantages:
(1) Randomly selecting a frame from each sub-sequence adds variation to the input data and improves the generalization ability of our trained network.
(2) Assuming each sub-sequence contains $n$ frames,
we have $n$ choices to sample a frame from each sub-sequence.
Accordingly, for the whole video, we can obtain a total number of $n^T$ sampling combinations.
This indicates that the training data can be greatly augmented.
We use different frame sampling combinations for each video over different training epochs.
This strategy is useful for handling the over-fitting issues,
as most datasets have limited numbers of training samples.
We observe that this strategy achieves better performance than uniformly sampling frames.
We cross-validated the performance based on the leave-one-subject-out protocol on the large scale NTU RGB+D dataset, and found $T=20$ as the optimum value.
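A sketch of this sampling strategy is given below (illustrative code; a new sampling combination is drawn for every epoch, and the video is assumed to contain at least $T$ frames):
\begin{verbatim}
import numpy as np

def sample_frames(num_frames, T=20, rng=np.random.default_rng()):
    # Split [0, num_frames) into T equal sub-sequences and draw
    # one random frame index from each, keeping temporal order.
    edges = np.linspace(0, num_frames, T + 1).astype(int)
    return [int(rng.integers(lo, hi))
            for lo, hi in zip(edges[:-1], edges[1:])]
\end{verbatim}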
We use Torch7 \cite{collobert2011torch7} as the deep learning platform to perform our experiments.
We train the network with stochastic gradient descent,
and set the learning rate, momentum, and decay rate to $2$$\times$$10^{-3}$, $0.9$, and $0.95$, respectively.
We set the unit size $d$ to 128, and the parameter $\lambda$ used in $G(\cdot)$ to $0.5$.
Two ST-LSTM layers are used in our stacked network.
Although there are variations in terms of joint number, sequence length, and data acquisition equipment for different datasets,
we adopt the same parameter settings mentioned above for all datasets.
Our method achieves promising results on all the benchmark datasets with these parameter settings untouched, which shows the robustness of our method.
An NVIDIA TitanX GPU is used to perform our experiments.
We evaluate the computational efficiency of our method on the NTU RGB+D dataset and set the batch size to $100$.
On average, within one second, $210$, $100$, and $70$ videos can be processed
by using ``ST-LSTM (Joint Chain)'', ``ST-LSTM (Tree)'', and ``ST-LSTM (Tree) + Trust Gate'', respectively.
\subsection{Experiments on the NTU RGB+D Dataset}
\label{sec:exp:resNTU}
The NTU RGB+D dataset has two standard evaluation protocols \cite{nturgbd}.
The first protocol is the cross-subject (X-Subject) evaluation protocol,
in which half of the subjects are used for training and the remaining subjects are kept for testing.
The second is the cross-view (X-View) evaluation protocol,
in which $2/3$ of the viewpoints are used for training,
and $1/3$ unseen viewpoints are left out for testing.
We evaluate the performance of our method on both of these protocols.
The results are shown in \tablename{ \ref{table:resultNTU}}.
\begin{table}[!htp]
\caption{Experimental results on the NTU RGB+D Dataset}
\label{table:resultNTU}
\centering
\begin{tabular}{|l|c|c|c|}
\hline
Method & Feature & X-Subject & X-View \\
\hline
Lie Group \cite{vemulapalli2014liegroup} & Geometric & 50.1\% & 52.8\% \\
Cippitelli {\emph{et~al.}}~ \cite{cippitelli2016evaluation} & Geometric & 48.9\% & 57.7\% \\
Dynamic Skeletons \cite{jianfang_CVPR15} & Geometric & 60.2\% & 65.2\% \\
FTP \cite{rahmani20163d} & Geometric & 61.1\% & 72.6\% \\
Hierarchical RNN \cite{du2015hierarchical} & Geometric & 59.1\% & 64.0\% \\
Deep RNN \cite{nturgbd} & Geometric & 56.3\% & 64.1\% \\
Part-aware LSTM \cite{nturgbd} & Geometric & 62.9\% & 70.3\% \\
\hline
ST-LSTM (Joint Chain) & Geometric & 61.7\% & 75.5\% \\
ST-LSTM (Tree) & Geometric & 65.2\% & 76.1\% \\
ST-LSTM (Tree) + Trust Gate & Geometric & \textbf{69.2\%} & \textbf{77.7\%} \\
\hline
\end{tabular}
\end{table}
In \tablename{ \ref{table:resultNTU}},
the deep RNN model concatenates the joint features at each frame and then feeds them to the network to model the temporal kinetics, while ignoring the spatial dynamics.
As can be seen, both ``ST-LSTM (Joint Chain)'' and ``ST-LSTM (Tree)'' models outperform this method by a notable margin.
It can also be observed that our approach utilizing the trust gate brings significant performance improvement,
because the data provided by Kinect is often noisy and multiple joints are frequently occluded in this dataset.
Note that our proposed models (such as ``ST-LSTM (Tree) + Trust Gate'') reported in this table only use skeletal data as input.
We compare the class specific recognition accuracies of ``ST-LSTM (Tree)'' and ``ST-LSTM (Tree) + Trust Gate'', as shown in \figurename{ \ref{fig:ClassAccuracy_NTU}}.
We observe that ``ST-LSTM (Tree) + Trust Gate'' significantly outperforms ``ST-LSTM (Tree)'' for most of the action classes,
which demonstrates our proposed trust gate can effectively improve the human action recognition accuracy by learning the degrees of reliability over the input data at each time step.
\begin{figure*}
\begin{minipage}[b]{1.0\linewidth}
\centering
\centerline{\includegraphics[scale=0.38]{ClassAccuracy_NTU.pdf}}
\end{minipage}
\caption{Recognition accuracy per class on the NTU RGB+D dataset}
\label{fig:ClassAccuracy_NTU}
\end{figure*}
As shown in \figurename{ \ref{fig:NTUNoisySamples}},
a notable portion of videos in the NTU RGB+D dataset were collected in side views.
Due to the design of Kinect's body tracking mechanism,
skeletal data is less accurate in side view compared to the front view.
To further investigate the effectiveness of the proposed trust gate,
we analyze the performance of the network when fed with the side-view samples only.
The accuracy of ``ST-LSTM (Tree)'' is 76.5\%,
while ``ST-LSTM (Tree) + Trust Gate'' yields 81.6\%.
This shows how trust gate can effectively deal with the noise in the input data.
\begin{figure}
\begin{minipage}[b]{1.0\linewidth}
\centering
\centerline{\includegraphics[scale=0.199]{NoisySamples.jpg}}
\end{minipage}
\caption{Examples of the noisy skeletons from the NTU RGB+D dataset.}
\label{fig:NTUNoisySamples}
\end{figure}
To verify the performance boost by stacking layers,
we limit the depth of the network by using only one ST-LSTM layer,
and the accuracies drop to 65.5\% and 77.0\% based on the cross-subject and cross-view protocol, respectively.
This indicates our two-layer stacked network has better representation power than the single-layer network.
To evaluate the performance of our feature fusion scheme,
we extract visual features from several regions based on the joint positions and use them in addition to the geometric features (3D coordinates of the joints).
We extract HOG and HOF \cite{dalal2006human,wang2011action} features from an $80\times80$ RGB patch centered at each joint location.
For each joint, this produces a 300D visual descriptor,
and we apply PCA to reduce the dimension to 20.
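For illustration, the reduction step may be sketched as follows (scikit-learn is used as a stand-in for the actual implementation, and the descriptor array is a placeholder):
\begin{verbatim}
import numpy as np
from sklearn.decomposition import PCA

# Placeholder for the (num_patches, 300) HOG/HOF descriptors,
# one row per 80x80 joint patch.
descriptors = np.random.default_rng(0).normal(size=(1000, 300))
pca = PCA(n_components=20).fit(descriptors)
visual_features = pca.transform(descriptors)  # (num_patches, 20)
\end{verbatim}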
The results are shown in \tablename{ \ref{table:resultNTUFusion}}.
We observe that our method using the visual features together with the joint positions improves the performance.
Besides, we compare our newly proposed feature fusion strategy within the ST-LSTM unit with two other feature fusion methods:
(1) early fusion which simply concatenates two types of features as the input of the ST-LSTM unit;
(2) late fusion which uses two ST-LSTMs to deal with two types of features respectively,
then concatenates the outputs of the two ST-LSTMs at each step,
and feeds the concatenated result to a softmax classifier.
We observe that our proposed feature fusion strategy is superior to other baselines.
\begin{table}[h]
\caption{Evaluation of different feature fusion strategies on the NTU RGB+D dataset.
``Geometric + Visual (1)'' indicates the early fusion scheme.
``Geometric + Visual (2)'' indicates the late fusion scheme.
``Geometric $\bigoplus$ Visual'' means our newly proposed feature fusion scheme within the ST-LSTM unit.}
\label{table:resultNTUFusion}
\centering
\begin{tabular}{|l|c|c|}
\hline
Feature Fusion Method & X-Subject & X-View
\\
\hline
Geometric Only & 69.2\% & 77.7\% \\
Geometric + Visual (1) & 70.8\% & 78.6\% \\
Geometric + Visual (2) & 71.0\% & 78.7\% \\
Geometric $\bigoplus$ Visual &73.2\% & 80.6\% \\
\hline
\end{tabular}
\\
\end{table}
We also evaluate the sensitivity of the proposed network with respect to the variation of neuron unit size and $\lambda$ values.
The results are shown in \figurename{ \ref{fig:NTUResultLambda}}.
When the trust gate is added,
our network obtains better performance for all the $\lambda$ values compared to the network without the trust gate.
\begin{figure}
\begin{minipage}[b]{1.0\linewidth}
\centering
\centerline{\includegraphics[scale=.57]{NTUResultLambda1.pdf}}
\centerline{\includegraphics[scale=.57]{NTUResultLambda2.pdf}}
\end{minipage}
\caption{(a) Performance comparison of our approach using different values of neuron size ($d$) on the NTU RGB+D dataset (X-subject).
(b) Performance comparison of our method using different $\lambda$ values on the NTU RGB+D dataset (X-subject).
The blue line represents our results when different $\lambda$ values are used for trust gate,
while the red dashed line indicates the performance of our method when trust gate is not added.}
\label{fig:NTUResultLambda}
\end{figure}
Finally, we investigate the recognition performance under early stopping conditions
by feeding only the first fraction $p$ of each testing video to the trained network based on the cross-subject protocol ($p \in \{0.1, 0.2, ..., 1.0\}$).
The results are shown in \figurename{ \ref{fig:NTUResultEarlyStop}}.
We can observe that the results are improved when a larger portion of the video is fed to our network.
\begin{figure}
\begin{minipage}[b]{1.0\linewidth}
\centering
\centerline{\includegraphics[scale=.57]{NTUResultEarlyStop.pdf}}
\end{minipage}
\caption{Experimental results of our method by early stopping the network evolution at different time steps.}
\label{fig:NTUResultEarlyStop}
\end{figure}
\subsection{Experiments on the UT-Kinect Dataset}
\label{sec:exp:resUTKinect}
There are two evaluation protocols for the UT-Kinect dataset in the literature.
The first is the leave-one-out-cross-validation (LOOCV) protocol \cite{HOJ3D}.
The second protocol is suggested by \cite{zhu2013fusing}, for which half of the subjects are used for training, and the remaining are used for testing.
We evaluate our approach using both protocols on this dataset.
Using the LOOCV protocol,
our method achieves better performance than other skeleton-based methods,
as shown in \tablename{ \ref{table:resultUTKinectprotocol1}}.
Using the second protocol (see \tablename{ \ref{table:resultUTKinectprotocol2}}),
our method achieves a result (95.0\%) competitive with the Elastic functional coding method \cite{anirudh2015elastic} (94.9\%),
which is an extension of the Lie Group model \cite{vemulapalli2014liegroup}.
\begin{table}[!htp]
\caption{Experimental results on the UT-Kinect dataset (LOOCV protocol \cite{HOJ3D})}
\label{table:resultUTKinectprotocol1}
\centering
\begin{tabular}{|l|c|c|}
\hline
Method & Feature & Acc. \\
\hline
Grassmann Manifold \cite{slama2015accurate} & Geometric & 88.5\% \\
Jetley {\emph{et~al.}}~ \cite{jetley20143d} & Geometric& 90.0\% \\
Histogram of 3D Joints \cite{HOJ3D} & Geometric & 90.9\% \\
Space Time Pose \cite{devanne2013space} & Geometric & 91.5\% \\
Riemannian Manifold \cite{devanne20153d} & Geometric & 91.5\% \\
SCs (Informative Joints) \cite{jiang2015informative} & Geometric & 91.9\% \\
Chrungoo {\emph{et~al.}}~ \cite{chrungoo2014activity} & Geometric & 92.0\% \\
Key-Pose-Motifs Mining\cite{Wang_2016_CVPR_Mining} & Geometric & 93.5\% \\
\hline
ST-LSTM (Joint Chain) & Geometric & 91.0\% \\
ST-LSTM (Tree) & Geometric & 92.4\% \\
ST-LSTM (Tree) + Trust Gate & Geometric & \textbf{97.0\%} \\
\hline
\end{tabular}
\end{table}
\begin{table}[!htp]
\caption{Results on the UT-Kinect dataset (half-vs-half protocol \cite{zhu2013fusing})}
\label{table:resultUTKinectprotocol2}
\centering
\begin{tabular}{|l|c|c|}
\hline
Method & Feature & Acc. \\
\hline
Skeleton Joint Features \cite{zhu2013fusing} & Geometric & 87.9\% \\
Chrungoo {\emph{et~al.}}~ \cite{chrungoo2014activity} & Geometric & 89.5\% \\
Lie Group \cite{vemulapalli2014liegroup} (reported by \cite{anirudh2015elastic}) & Geometric & 93.6\% \\
Elastic functional coding \cite{anirudh2015elastic} & Geometric & 94.9\% \\
\hline
ST-LSTM (Tree) + Trust Gate & Geometric & \textbf{95.0\%} \\
\hline
\end{tabular}
\end{table}
Some actions in the UT-Kinect dataset involve human-object interactions, thus appearance-based features representing the visual information of the objects can be complementary to the geometric features.
Therefore, we also evaluate our proposed feature fusion approach within the ST-LSTM unit on this dataset.
The results are shown in \tablename{ \ref{table:resultUTFusion}}.
Using geometric features only, the accuracy is 97\%.
By simply concatenating the geometric and visual features, the accuracy improves slightly.
However, the accuracy of our approach can reach 98\% when the proposed feature fusion method is adopted.
\begin{table}[h]
\caption{Evaluation of our approach for feature fusion on the UT-Kinect dataset (LOOCV protocol \cite{HOJ3D}).
``Geometric + Visual'' indicates we simply concatenate the two types of features as the input.
``Geometric $\bigoplus$ Visual'' means we use the newly proposed feature fusion scheme within the ST-LSTM unit.}
\label{table:resultUTFusion}
\centering
\begin{tabular}{|l|c|c|}
\hline
Feature Fusion Method & Acc. \\
\hline
Geometric Only & 97.0\% \\
Geometric + Visual & 97.5\% \\
Geometric $\bigoplus$ Visual &98.0\% \\
\hline
\end{tabular}
\\
\end{table}
\subsection{Experiments on the SBU Interaction Dataset}
\label{sec:exp:resSBU}
We follow the standard evaluation protocol in \cite{yun2012two} and perform 5-fold cross validation on the SBU Interaction dataset.
As two human skeletons are provided in each frame of this dataset,
our traversal scheme visits the joints throughout the two skeletons over the spatial steps.
We report the results in terms of average classification accuracy in \tablename{ \ref{table:resultSBU}}.
The methods in \cite{zhu2016co} and \cite{du2015hierarchical} are both LSTM-based approaches, which are more relevant to our method.
\begin{table}[h]
\caption{Experimental results on the SBU Interaction dataset}
\label{table:resultSBU}
\centering
\begin{tabular}{|l|c|c|}
\hline
Method & Feature & Acc. \\
\hline
Yun {\emph{et~al.}}~ \cite{yun2012two} & Geometric & 80.3\% \\
Ji {\emph{et~al.}}~ \cite{ji2014interactive} & Geometric & 86.9\% \\
CHARM \cite{li2015category} & Geometric & 83.9\% \\
Hierarchical RNN \cite{du2015hierarchical} & Geometric & 80.4\% \\
Co-occurrence LSTM \cite{zhu2016co} & Geometric & 90.4\% \\
Deep LSTM \cite{zhu2016co} & Geometric & 86.0\% \\
\hline
ST-LSTM (Joint Chain) & Geometric & 84.7\% \\
ST-LSTM (Tree) & Geometric & 88.6\% \\
ST-LSTM (Tree) + Trust Gate & Geometric & \textbf{93.3\%} \\
\hline
\end{tabular}
\end{table}
The results show that the proposed ``ST-LSTM (Tree) + Trust Gate'' model outperforms all other skeleton-based methods.
``ST-LSTM (Tree)'' achieves higher accuracy than ``ST-LSTM (Joint Chain)'',
as the latter adds some false links between less related joints.
Both Co-occurrence LSTM \cite{zhu2016co} and Hierarchical RNN \cite{du2015hierarchical} adopt the Savitzky-Golay filter \cite{savitzky1964smoothing} in the temporal domain
to smooth the skeletal joint positions and reduce the influence of noise in the data collected by Kinect.
The proposed ``ST-LSTM (Tree)'' model without the trust gate mechanism outperforms Hierarchical RNN,
and achieves a result (88.6\%) comparable to Co-occurrence LSTM.
When the trust gate is used, the accuracy of our method jumps to 93.3\%.
\subsection{Experiments on the SYSU-3D Dataset}
\label{sec:exp:resSYSU}
We follow the standard evaluation protocol in \cite{jianfang_CVPR15} on the SYSU-3D dataset.
The samples from 20 subjects are used to train the model parameters,
and the samples of the remaining 20 subjects are used for testing.
We perform 30-fold cross validation and report the mean accuracy in \tablename{~\ref{table:resultSYSU}}.
\begin{table}[h]
\caption{Experimental results on the SYSU-3D dataset}
\label{table:resultSYSU}
\centering
\begin{tabular}{|l|c|c|}
\hline
Method & Feature & Acc. \\
\hline
LAFF (SKL) \cite{hu2016ECCV} & Geometric & 54.2\% \\
Dynamic Skeletons \cite{jianfang_CVPR15} & Geometric & 75.5\% \\
\hline
ST-LSTM (Joint Chain) & Geometric & 72.1\% \\
ST-LSTM (Tree) & Geometric & 73.4\% \\
ST-LSTM (Tree) + Trust Gate & Geometric & \textbf{76.5\%} \\
\hline
\end{tabular}
\end{table}
The results in \tablename{~\ref{table:resultSYSU}} show that our proposed ``ST-LSTM (Tree) + Trust Gate'' method outperforms all the baseline methods on this dataset.
We can also find that the tree traversal strategy can help to improve the classification accuracy of our model.
As the skeletal joints provided by Kinect are noisy in this dataset,
the trust gate, which aims at handling noisy data, brings a significant performance improvement of about 3\%.
There are large viewpoint variations in this dataset.
To make our model reliable against viewpoint variations,
we adopt a similar skeleton normalization procedure as suggested by \cite{nturgbd} on this dataset.
In this preprocessing step, we perform a rotation transformation on each skeleton,
such that all the normalized skeletons face the same direction.
Specifically, after rotation, the 3D vector from ``right shoulder'' to ``left shoulder'' will be parallel to the X axis,
and the vector from ``hip center'' to ``spine'' will be aligned to the Y axis
(please see \cite{nturgbd} for more details about the normalization procedure).
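A sketch of this rotation is given below; the joint indices passed in and the hip-centering are illustrative assumptions, and \cite{nturgbd} describes the exact procedure:
\begin{verbatim}
import numpy as np

def rotate_skeleton(joints, r_shoulder, l_shoulder, hip, spine):
    # joints: (J, 3) array of 3D joint coordinates for one frame.
    x = joints[l_shoulder] - joints[r_shoulder]
    x /= np.linalg.norm(x)               # new X axis: shoulder line
    y = joints[spine] - joints[hip]
    y -= np.dot(x, y) * x                # orthogonalize against X
    y /= np.linalg.norm(y)               # new Y axis: hip to spine
    z = np.cross(x, y)                   # right-handed Z axis
    R = np.stack([x, y, z])              # rows form the new basis
    return (joints - joints[hip]) @ R.T  # coordinates in new basis
\end{verbatim}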
We evaluate our ``ST-LSTM (Tree) + Trust Gate'' method by respectively using the original skeletons without rotation and the transformed skeletons,
and report the results in \tablename{~\ref{table:resultSYSURotation}}.
The results show that it is beneficial to use the transformed skeletons as the input for action recognition.
\begin{table}[h]
\caption{Evaluation for skeleton rotation on the SYSU-3D dataset}
\label{table:resultSYSURotation}
\centering
\begin{tabular}{|l|c|}
\hline
Method & Acc. \\
\hline
With Skeleton Rotation & 76.5\% \\
Without Skeleton Rotation & 73.0\% \\
\hline
\end{tabular}
\\
\end{table}
\subsection{Experiments on the ChaLearn Gesture Dataset}
\label{sec:exp:resChaLearn}
We follow the evaluation protocol adopted in \cite{wang2015hierarchical,fernando2015modeling}
and report the F1-score measures on the validation set of the ChaLearn Gesture dataset.
\begin{table}[h]
\caption{Experimental results on the ChaLearn Gesture dataset}
\label{table:resultChaLearn}
\centering
\begin{tabular}{|l|c|c|}
\hline
Method & Feature & F1-Score \\
\hline
Portfolios \cite{yao2014gesture} & Geometric & 56.0\% \\
Wu {\emph{et~al.}}~ \cite{wu2013fusing} & Geometric & 59.6\% \\
Pfister {\emph{et~al.}}~ \cite{pfister2014domain} & Geometric & 61.7\% \\
HiVideoDarwin \cite{wang2015hierarchical} & Geometric & 74.6\% \\
VideoDarwin \cite{fernando2015modeling} & Geometric & 75.2\% \\
Deep LSTM \cite{nturgbd} & Geometric & 87.1\% \\
\hline
ST-LSTM (Joint Chain) & Geometric & 89.1\% \\
ST-LSTM (Tree) & Geometric & 89.9\% \\
ST-LSTM (Tree) + Trust Gate & Geometric & \textbf{92.0\%} \\
\hline
\end{tabular}
\end{table}
As shown in \tablename{~\ref{table:resultChaLearn}},
our method surpasses the state-of-the-art methods \cite{yao2014gesture,wu2013fusing,pfister2014domain,wang2015hierarchical,fernando2015modeling,nturgbd},
which demonstrates the effectiveness of our method in dealing with the skeleton-based action recognition problem.
Compared to other methods, our method focuses on modeling both temporal and spatial dependency patterns in skeleton sequences.
Moreover, the proposed trust gate is also incorporated into our method to handle the noisy skeleton data captured by Kinect,
which can further improve the results.
\subsection{Experiments on the MSR Action3D Dataset}
\label{sec:exp:resMSR3D}
We follow the experimental protocol in \cite{du2015hierarchical} on the MSR Action3D dataset,
and show the results in \tablename{~\ref{table:resultMSR3D}}.
On the MSR Action3D dataset, our proposed method, ``ST-LSTM (Tree) + Trust Gate'', achieves 94.8\% of classification accuracy,
which is superior to the Hierarchical RNN model \cite{du2015hierarchical} and other baseline methods.
\begin{table}[h]
\caption{Experimental results on the MSR Action3D dataset}
\label{table:resultMSR3D}
\centering
\begin{tabular}{|l|c|c|}
\hline
Method & Feature & Acc. \\
\hline
Histogram of 3D Joints \cite{HOJ3D} & Geometric & 79.0\% \\
Joint Angles Similarities \cite{hog2-ohnbar} & Geometric & 83.5\% \\
SCs (Informative Joints) \cite{jiang2015informative} & Geometric & 88.3\% \\
Oriented Displacements \cite{gowayyed2013histogram} & Geometric & 91.3\% \\
Lie Group \cite{vemulapalli2014liegroup} & Geometric & 92.5\% \\
Space Time Pose \cite{devanne2013space} & Geometric & 92.8\% \\
Lillo {\emph{et~al.}}~ \cite{lillo2016hierarchical} & Geometric & 93.0\% \\
Hierarchical RNN \cite{du2015hierarchical} & Geometric & 94.5\% \\
\hline
ST-LSTM (Tree) + Trust Gate & Geometric & \textbf{94.8\%} \\
\hline
\end{tabular}
\end{table}
\subsection{Experiments on the Berkeley MHAD Dataset}
\label{sec:exp:resMHAD}
\begin{table}[h]
\caption{Experimental results on the Berkeley MHAD dataset}
\label{table:resultMHAD}
\centering
\begin{tabular}{|l|c|c|}
\hline
Method & Feature & Acc. \\
\hline
Ofli {\emph{et~al.}}~ \cite{Ofli2014jvci} & Geometric & 95.4\% \\
Vantigodi {\emph{et~al.}}~ \cite{vantigodi2013real} & Geometric & 96.1\% \\
Vantigodi {\emph{et~al.}}~ \cite{vantigodi2014action} & Geometric & 97.6\% \\
Kapsouras {\emph{et~al.}}~ \cite{kapsouras2014action} & Geometric & 98.2\% \\
Hierarchical RNN \cite{du2015hierarchical} & Geometric & 100\% \\
Co-occurrence LSTM \cite{zhu2016co} & Geometric & 100\% \\
\hline
ST-LSTM (Tree) + Trust Gate & Geometric & \textbf{100\%} \\
\hline
\end{tabular}
\end{table}
We adopt the experimental protocol in \cite{du2015hierarchical} on the Berkeley MHAD dataset.
384 video sequences corresponding to the first seven persons are used for training,
and the 275 sequences of the remaining five persons are held out for testing.
The experimental results in \tablename{ \ref{table:resultMHAD}} show that our method achieves very high accuracy (100\%) on this dataset.
Unlike \cite{du2015hierarchical} and \cite{zhu2016co}, our method does not use any preliminary manual smoothing procedures.
\subsection{Visualization of Trust Gates}
\label{sec:visualization}
In this section, to better investigate the effectiveness of the proposed trust gate scheme, we study the behavior of the proposed framework against the presence of noise in skeletal data from the MSR Action3D dataset.
We manually rectify some noisy joints of the samples by referring to the corresponding depth images.
We then compare the activations of trust gates on the noisy and rectified inputs.
As illustrated in \figurename{ \ref{fig:TrustGateEffect}(a)},
the magnitude of trust gate's output ($l_2$ norm of the activations of the trust gate) is smaller when a noisy joint is fed, compared to the corresponding rectified joint.
This demonstrates how the network controls the impact of noisy input on its stored representation of the observed data.
In our next experiment, we manually add noise to one joint for all testing samples on the Berkeley MHAD dataset, in order to further analyze the behavior of our proposed trust gate.
Note that the Berkeley MHAD dataset was collected with a motion capture system, thus
the skeletal joint coordinates in this dataset are much more accurate than those captured with Kinect sensors.
We add noise to the right foot joint by moving the joint away from its original location.
The direction of the translation vector is randomly chosen, and its norm is a random value around $30\,cm$, which is significant noise at the scale of the human body.
We measure the difference in the magnitudes of trust gates' activations between the noisy data and the original ones.
For all testing samples, we carry out the same operations and then calculate the average difference.
The results in \figurename{ \ref{fig:TrustGateEffect}(b)} show that the magnitude of trust gate is reduced when the noisy data is fed to the network.
This shows that our network tries to block the flow of noisy input and stop it from affecting the memory.
We also observe that the overall accuracy of our network does not drop after adding the above-mentioned noise to the input data.
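The perturbation applied in this experiment can be sketched as follows (the exact distribution of the noise norm around $30\,cm$ is an illustrative assumption):
\begin{verbatim}
import numpy as np

def perturb_joint(joints, joint_idx, rng=np.random.default_rng()):
    # Move one joint along a random direction by roughly 30 cm.
    direction = rng.normal(size=3)
    direction /= np.linalg.norm(direction)
    norm = rng.uniform(0.25, 0.35)       # meters; assumed range
    noisy = joints.copy()
    noisy[joint_idx] += norm * direction
    return noisy
\end{verbatim}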
\begin{figure}[htb]
\begin{minipage}[b]{0.47\linewidth}
\centering
\centerline{\includegraphics[scale=.53]{VisualizationTrustGate1.pdf}}
\end{minipage}
\begin{minipage}[b]{0.52\linewidth}
\centering
\centerline{\includegraphics[scale=.53]{VisualizationTrustGate2.pdf}}
\end{minipage}
\caption{Visualization of the trust gate's behavior when inputting noisy data.
(a) $j_{3'}$ is a noisy joint position, and $j_3$ is the corresponding rectified joint location.
In the histogram, the blue bar indicates the magnitude of trust gate when inputting the noisy joint $j_{3'}$.
The red bar indicates the magnitude of the corresponding trust gate when $j_{3'}$ is rectified to $j_3$.
(b) Visualization of the difference between the trust gate calculated when the noise is imposed at the step $(j_N, t_N)$ and that calculated when inputting the original data.}
\label{fig:TrustGateEffect}
\end{figure}
\begin{table*}[htb]
\caption{Performance comparison of different spatial sequence models}
\label{table:resultDoubleChain}
\centering
\footnotesize
\begin{tabular}{|c|c|c|c|c|c|}
\hline
~~~~~~~~~~~~~~~~~~Dataset~~~~~~~~~~~~~~~~~~ & NTU (X-Subject) & NTU (X-View) & ~~~UT-Kinect~~~ & SBU Interaction & ChaLearn Gesture \\
\hline
ST-LSTM (Joint Chain) & 61.7\% & 75.5\% & 91.0\% & 84.7\% & 89.1\% \\
ST-LSTM (Double Joint Chain) & 63.5\% & 75.6\% & 91.5\% & 85.9\% & 89.2\% \\
ST-LSTM (Tree) & 65.2\% & 76.1\% & 92.4\% & 88.6\% & 89.9\% \\
\hline
\end{tabular}
\\
\end{table*}
\begin{table*}[tb]
\caption{Performance comparison of Temporal Average, LSTM, and our proposed ST-LSTM}
\label{table:resultLSTMTG}
\centering
\footnotesize
\begin{tabular}{|c|c|c|c|c|c|}
\hline
~~~~~~~~~~~~~~~~~~Dataset~~~~~~~~~~~~~~~~~~ & NTU (X-Subject) & NTU (X-View) & ~~~UT-Kinect~~~ & SBU Interaction & ChaLearn Gesture\\
\hline
Temporal Average & 47.6\% & 52.6\% & 81.9\% & 71.5\% & 77.9\% \\
\hline
LSTM & 62.0\% & 70.7\% & 90.5\% & 86.0\% & 87.1\% \\
LSTM + Trust Gate & 62.9\% & 71.7\% & 92.0\% & 86.6\% & 87.6\% \\
\hline
ST-LSTM & 65.2\% & 76.1\% & 92.4\% & 88.6\% & 89.9\% \\
ST-LSTM + Trust Gate & 69.2\% & 77.7\% & 97.0\% & 93.3\% & 92.0\% \\
\hline
\end{tabular}
\\
\end{table*}
\begin{table*}[tb]
\caption{Evaluation of the last-to-first link in our proposed network}
\label{table:resultLTFLink}
\centering
\footnotesize
\begin{tabular}{|c|c|c|c|c|c|}
\hline
~~~~~~~~~~~~~~~~~~Dataset~~~~~~~~~~~~~~~~~~ & NTU (X-Subject) & NTU (X-View) & ~~~UT-Kinect~~~ & SBU Interaction & ChaLearn Gesture \\
\hline
Without last-to-first link & 68.5\% & 76.9\% & 96.5\% & 92.1\% & 90.9\% \\
With last-to-first link & 69.2\% & 77.7\% & 97.0\% & 93.3\% & 92.0\% \\
\hline
\end{tabular}
\\
\end{table*}
\subsection{Evaluation of Different Spatial Joint Sequence Models}
\label{sec:discussion1}
The previous experiments showed how ``ST-LSTM (Tree)'' outperforms ``ST-LSTM (Joint Chain)'', because ``ST-LSTM (Tree)'' models the kinematic dependency structures of human skeletal sequences.
In this section, we further analyze the effectiveness of our ``ST-LSTM (Tree)'' model and compare it with a ``ST-LSTM (Double Joint Chain)'' model.
The ``ST-LSTM (Joint Chain)'' has fewer steps in the spatial dimension than the ``ST-LSTM (Tree)''.
One question that may arise here is whether the advantage of the ``ST-LSTM (Tree)'' model is due only to the longer, partially redundant sequence of joints fed to the network, rather than to the proposed semantic relations between the joints.
To answer this question, we evaluate the effect of using a double-chain scheme to increase the spatial steps of the ``ST-LSTM (Joint Chain)'' model.
Specifically, we use the joint visiting order 1-2-3-...-16-1-2-3-...-16,
and we call this model ``ST-LSTM (Double Joint Chain)''.
The results in \tablename{~\ref{table:resultDoubleChain}} show that the performance of ``ST-LSTM (Double Joint Chain)'' is better than ``ST-LSTM (Joint Chain)'',
yet inferior to ``ST-LSTM (Tree)''.
This experiment indicates that it is beneficial to introduce more passes in the spatial dimension to the ST-LSTM for performance improvement.
A possible explanation is that the units visited in the second round can obtain the global level context representation from the previous pass,
thus they can generate better representations of the action patterns by using the context information.
However, the performance of ``ST-LSTM (Double Joint Chain)'' is still weaker than ``ST-LSTM (Tree)'',
though the numbers of their spatial steps are almost equal.
The proposed tree traversal scheme is superior because it connects the most semantically related joints
and avoids false connections between the less-related joints (unlike the other two compared models).
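To make the three visiting orders concrete, the sketch below generates them for a toy five-joint skeleton (the exact traversal used by ``ST-LSTM (Tree)'' depends on the skeleton definition of each dataset; the depth-first walk shown here is only one plausible realization):
\begin{verbatim}
def chain_order(num_joints):
    # ST-LSTM (Joint Chain): visit every joint once, in index order.
    return list(range(num_joints))

def double_chain_order(num_joints):
    # ST-LSTM (Double Joint Chain): the same chain repeated twice.
    return list(range(num_joints)) * 2

def tree_traversal_order(adjacency, root=0):
    # A depth-first walk that revisits a joint whenever the walk
    # backtracks through it, so consecutive steps always correspond
    # to physically adjacent joints.
    order = []
    def visit(joint, parent):
        order.append(joint)
        for child in adjacency[joint]:
            if child != parent:
                visit(child, joint)
                order.append(joint)   # pass back through this joint
    visit(root, None)
    return order

# Toy skeleton: 0=torso, 1=head, 2=left arm, 3=right arm, 4=legs
adjacency = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
print(tree_traversal_order(adjacency))  # [0, 1, 0, 2, 0, 3, 0, 4, 0]
\end{verbatim}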
\subsection{Evaluation of Temporal Average, LSTM and ST-LSTM}
\label{sec:discussion2}
To further investigate the effect of simultaneous modeling of dependencies in spatial and temporal domains,
in this experiment, we replace our ST-LSTM with the original LSTM which only models the temporal dynamics among the frames without explicitly considering spatial dependencies.
We report the results of this experiment in \tablename{ \ref{table:resultLSTMTG}}.
As can be seen, our ``ST-LSTM + Trust Gate'' significantly outperforms ``LSTM + Trust Gate''.
This demonstrates that the proposed modeling of the dependencies in both temporal and spatial dimensions provides much richer representations than the original LSTM.
The second observation of this experiment is that if we add our trust gate to the original LSTM,
the performance of LSTM can also be improved,
but its performance gain is less than the performance gain on ST-LSTM.
A possible explanation is that we have both spatial and temporal context information at each step of ST-LSTM to generate a good prediction of the input at the current step (see Eq. (\ref{eq:p_j_t})),
thus our trust gate can achieve a good estimation of the reliability of the input at each step by using the prediction (see Eq. (\ref{eq:tau})).
However, in the original LSTM, the available context at each step is from the previous temporal step,
i.e., the prediction can only be based on the context in the temporal dimension,
thus the effectiveness of the trust gate is limited when it is added to the original LSTM.
This further demonstrates the effectiveness of our ST-LSTM framework for spatio-temporal modeling of the skeleton sequences.
In addition, we investigate the effectiveness of the LSTM structure for handling the sequential data.
We evaluate a baseline method (called ``Temporal Average'') by averaging the features from all frames instead of using LSTM.
Specifically, the geometric features are averaged over all the frames of the input sequence (i.e., the temporal ordering information in the sequence is ignored),
and then the resultant averaged feature is fed to a two-layer network, followed by a softmax classifier.
The performance of this scheme is much weaker than our proposed ST-LSTM with trust gate,
and also weaker than the original LSTM, as shown in \tablename{~\ref{table:resultLSTMTG}}.
The results demonstrate the representation strengths of the LSTM networks for modeling the dependencies and dynamics in sequential data, when compared to traditional temporal aggregation methods of input sequences.
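The ``Temporal Average'' baseline is simple enough to sketch in full (numpy; the hidden-layer width and non-linearity are illustrative assumptions, as the text does not specify them):
\begin{verbatim}
import numpy as np

def temporal_average_classifier(features, W1, b1, W2, b2):
    # features: (T, D) per-frame geometric features of one sequence.
    x = features.mean(axis=0)            # temporal ordering is ignored
    h = np.maximum(W1 @ x + b1, 0.0)     # first layer of the 2-layer net
    logits = W2 @ h + b2                 # second layer
    e = np.exp(logits - logits.max())    # numerically stable softmax
    return e / e.sum()                   # class probabilities
\end{verbatim}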
\subsection{Evaluation of the Last-to-first Link Scheme}
\label{sec:discussion3}
In this section, we evaluate the effectiveness of the last-to-first link in our model (see section \ref{sec:approach:learning}).
The results in \tablename{ \ref{table:resultLTFLink}} show the advantages of using the last-to-first link in improving the final action recognition performance.
\section{Conclusion}
\label{sec:conclusion}
In this paper, we have extended the RNN-based action recognition method to both spatial and temporal domains.
Specifically, we have proposed a novel ST-LSTM network which analyzes the 3D locations of skeletal joints at each frame and at each processing step.
A skeleton tree traversal method based on the adjacency graph of body joints is also proposed to better represent the structure of the input sequences and
to improve the performance of our network by connecting the most related joints together in the input sequence.
In addition, a new gating mechanism is introduced to improve the robustness of our network against the noise in input sequences.
A multi-modal feature fusion method is also proposed for our ST-LSTM framework.
The experimental results have validated the contributions and demonstrated the effectiveness of our approach,
which achieves better performance than the existing state-of-the-art methods on seven challenging benchmark datasets.
\section*{Acknowledgement}
This work was carried out at Rapid-Rich Object Search (ROSE) Lab, Nanyang Technological University.
ROSE Lab is supported by the National Research Foundation, Singapore, under its IDM Strategic Research Programme.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
The adoption of convolutional neural networks (CNNs) \cite{lecun1989backpropagation} has brought huge success on many computer vision tasks such as classification and segmentation. One limitation of CNNs is their poor computational scalability with increasing input image size. With limited time and resources, it is necessary to be smart about selecting where, what, and how to look at an image. Facing a bird-specific fine-grained classification task, for example, it does not help much to pay attention to non-bird image parts such as trees and sky. Rather, one should focus on regions that play decisive roles in classification, such as the beak or wings. If a machine can learn how to pay attention to those regions, it will achieve better performance with lower energy usage.
In this context, the \textbf{Recurrent Attention Model (RAM)} \cite{mnih2014recurrent} introduced a visual attention method for fine-grained classification tasks. By sequentially choosing where and what to look at, RAM achieved better performance with lower memory usage. Moreover, the attention mechanism addressed a well-known weakness of deep learning models, their black-box nature, by enabling interpretation of the results. Still, there is room for improvement in RAM. In addition to where and what to look at, if one can give a clue on how to look, i.e. a task-specific hint, learning could be more intuitive and efficient. From this insight, we propose a novel architecture, the \textbf{Clued Recurrent Attention Model (CRAM)}, which inserts a problem-solving-oriented clue into RAM. These clues, or constraints, give directions to the machine for faster convergence and better performance.
For evaluation, we perform experiments on two computer vision tasks: classification and inpainting. In the classification task, the clue is given as a binary saliency map of the image, which indicates the rough location of the object. In the inpainting task, the clue is given as a binary mask which indicates the location of the occluded region. The code is implemented in TensorFlow 1.6.0 and available at https://github.com/brekkanegg/cram.
In summary, the contributions of this work are as follows:
\begin{enumerate}
\item Proposed a novel model, the clued recurrent attention model (CRAM), which inserts a clue into RAM for more efficient problem solving.
\item Defined clues for the classification and inpainting tasks, respectively, that are easy to interpret and obtain.
\item Evaluated CRAM on classification and inpainting tasks, demonstrating that it is a powerful extension of RAM.
\end{enumerate}
\section{Related Work}
\subsection{Recurrent Attention Model (RAM)}
RAM \cite{mnih2014recurrent} first proposed a recurrent neural network (RNN) \cite{mikolov2010recurrent} based attention model inspired by the human visual system. When a human is confronted with a large image that is too big to be seen at a glance, he processes the image part by part depending on his interest. By selectively choosing what and where to look at, RAM showed higher performance while reducing computation and memory usage. However, since RAM attends to image regions using a sampling method, it has the fatal weakness of requiring REINFORCE, rather than back-propagation, for optimization. Following RAM, the Deep Recurrent Attention Model (DRAM) \cite{ba2014multiple} presented an advanced architecture for multiple object recognition, and the Deep Recurrent Attentive Writer (DRAW) \cite{gregor2015draw} introduced a sequential image generation method that does not require REINFORCE.
The spatial transformer network (STN) \cite{jaderberg2015spatial} first proposed a parametric spatial attention module for the object classification task. This model includes a localization network that outputs the parameters for selecting the region to attend to in the input image. Recently, the Recurrent Attentional Convolutional-Deconvolutional Network (RACDNN) \cite{kuen2016recurrent} combined the strengths of RAM and STN for the saliency detection task. By replacing RAM's locating module with an STN, RACDNN can sequentially select where to attend on the image while still using back-propagation for optimization. This paper mainly adopts the RACDNN architecture, with some technical twists to effectively insert the clue, which acts as a supervisor for problem solving.
\section{CRAM}
The architecture of CRAM is based on an encoder-decoder structure. The encoder is similar to RACDNN \cite{kuen2016recurrent}, with a modified spatial transformer network \cite{jaderberg2015spatial} and an inserted clue. While the encoder is identical regardless of the task, the decoder differs depending on whether the task is classification or inpainting. Figure \ref{fig:overall} shows the overall architecture of CRAM.
\begin{figure}[h]
\centering
\includegraphics[width=0.48\textwidth]{overall_architecture.png}
\caption{Overall architecture of CRAM. Note that the image and clue differ depending on the task (bottom left and bottom right).}
\label{fig:overall}
\end{figure}
\subsection{\bf{Encoder}}
The encoder is composed of four subnetworks: the context network, spatial transformer network, glimpse network, and core recurrent neural network. The overall architecture of the encoder is shown in Figure \ref{fig:enc}. Following the flow of information, we describe each network in turn.
\begin{figure}[h]
\centering
\includegraphics[width=0.48\textwidth]{encoder.jpg}
\caption{Architecture of the CRAM encoder. Note that the image shown is for the inpainting task, where the clue is given as a binary mask indicating the occluded region.}
\label{fig:enc}
\end{figure}
\textbf{Context Network: } The context network is the first part of the encoder. It receives the image and clue as inputs and outputs the initial state tuple {$r_{0}^{(2)}$}, which initializes the second layer of the core recurrent neural network, as shown in Figure \ref{fig:enc}. Using the downsampled image {$(i_{coarse})$} and downsampled clue {$(c_{coarse})$}, the context network provides a reasonable starting point for choosing which image region to concentrate on. The downsampled image and clue are processed by a CNN followed by an MLP:
\begin{align}\label{eq:cn}
c_{0} = MLP_{c}(CNN_{context}(i_{coarse}, c_{coarse})) \\
h_{0} = MLP_{h}(CNN_{context}(i_{coarse}, c_{coarse}))
\end{align}
where ({$c_{0}$}, {$h_{0}$}) is the initial state tuple of {$r_{0}^{(2)}$}.
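A minimal TensorFlow 1.x sketch of the context network follows (filter counts and layer depths are simplified; the names are ours and need not match the released code):
\begin{verbatim}
import tensorflow as tf

def context_network(i_coarse, c_coarse, state_dim):
    # Map the downsampled image and clue to the initial state tuple
    # (c0, h0) of the second core-RNN layer.
    x = tf.concat([i_coarse, c_coarse], axis=-1)   # stack channels
    for filters in (16, 32, 64):                   # CNN_context
        x = tf.layers.conv2d(x, filters, 3, padding='same',
                             activation=tf.nn.elu)
        x = tf.layers.max_pooling2d(x, 3, 2, padding='same')
    x = tf.layers.flatten(x)
    c0 = tf.layers.dense(x, state_dim)             # MLP_c
    h0 = tf.layers.dense(x, state_dim)             # MLP_h
    return c0, h0
\end{verbatim}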
\textbf{Spatial Transformer Network: } The spatial transformer network (STN) selects the region to attend to, considering the given task and clue \cite{jaderberg2015spatial}. Different from the existing STN, CRAM uses a modified STN which receives the image, the clue, and the output of the second layer of the core RNN as inputs, and outputs a glimpse patch. From now on, ``glimpse patch'' denotes the attended image region, cropped and zoomed in. Here, the STN is composed of two parts: a localization part, which calculates the transformation matrix {$\tau$} with a CNN and an MLP, and a transformer part, which zooms into the image using {$\tau$} to obtain the glimpse. The affine transformation matrix {$\tau$} with isotropic scaling and translation is given in Equation \ref{eq:tau}.
\begin{equation}\label{eq:tau}
\tau = \begin{bmatrix}
s & 0 & t_{x} \\
0 & s & t_{y}\\
0 & 0 & 1
\end{bmatrix}
\end{equation}
where {$s, t_{x}, t_{y}$} are the scaling, horizontal translation, and vertical translation parameters, respectively.
The overall process of the STN is shown in Figure \ref{fig:stn}.
\begin{figure}[h]
\centering
\includegraphics[width=0.48\textwidth]{stn.png}
\caption{Architecture of the STN. The STN consists of a localization part, which calculates {$\tau$}, and a transformer part, which obtains the glimpse.}
\label{fig:stn}
\end{figure}
In equations, the STN process is as follows:
\begin{equation}\label{eq:sn}
glimpse\_patch_{n} = STN(image, clue, \tau_{n})
\end{equation}
where {$n$} in {$glimpse\_patch_{n}$} is the core RNN step, ranging from 1 to the total number of glimpses. {$\tau$} is obtained by the equation below:
\begin{equation}\label{eq:en}
\tau_{n} = MLP_{loc}(CNN_{i}(image)\oplus CNN_{c}(clue)\oplus MLP_{r}(r_{n}^{(2)}))
\end{equation}
where {$\oplus$} is the concatenation operation.
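The transformer part can also be written out explicitly. Below is a minimal numpy sketch that applies the affine transform of Equation \ref{eq:tau} (isotropic scale $s$, translation $(t_x, t_y)$ in normalized $[-1,1]$ coordinates) and bilinearly samples a glimpse patch from a single-channel image; a full implementation would operate on batched tensors inside the TensorFlow graph:
\begin{verbatim}
import numpy as np

def extract_glimpse(image, s, tx, ty, out_h=32, out_w=32):
    H, W = image.shape
    ys, xs = np.meshgrid(np.linspace(-1, 1, out_h),
                         np.linspace(-1, 1, out_w), indexing='ij')
    px = ((s * xs + tx) + 1) * (W - 1) / 2   # source pixel coordinates
    py = ((s * ys + ty) + 1) * (H - 1) / 2
    x0 = np.clip(np.floor(px).astype(int), 0, W - 2)
    y0 = np.clip(np.floor(py).astype(int), 0, H - 2)
    wx = np.clip(px - x0, 0.0, 1.0)          # bilinear weights
    wy = np.clip(py - y0, 0.0, 1.0)
    return ((1 - wy) * (1 - wx) * image[y0, x0]
            + (1 - wy) * wx * image[y0, x0 + 1]
            + wy * (1 - wx) * image[y0 + 1, x0]
            + wy * wx * image[y0 + 1, x0 + 1])
\end{verbatim}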
\textbf{Glimpse Network: } The glimpse network is a non-linear function which receives the current glimpse patch {$glimpse\_patch_{n}$} ($gp_{n}$) and the attended region information {$\tau$} as inputs, and outputs the current-step glimpse vector. The glimpse vector is later used as the input of the first core RNN layer. {$glimpse\_vector_{n}$} ($gv_{n}$) is obtained by a multiplicative interaction between the extracted features of {$glimpse\_patch_{n}$} and {$\tau$}. This method of interaction was first proposed by \cite{larochelle2010learning}. Similar to the other networks, a CNN and an MLP are used for feature extraction.
\begin{equation}\label{eq:gn}
\begin{split}
gv_{n} = MLP_{what}(CNN_{what}(gp_{n})) \odot MLP_{where}(\tau_{n})
\end{split}
\end{equation}
where {$\odot$} is an element-wise vector multiplication operation.
\textbf{Core Recurrent Neural Network: } The recurrent neural network is the core structure of CRAM; it aggregates the information extracted from the stepwise glimpses and calculates the encoded vector z. Iterating for a set number of RNN steps (the total number of glimpses), the core RNN receives {$glimpse\_vector_{n}$} at its first layer. The output of the second layer, {$r_{n}^{(2)}$}, is in turn used by the localization part of the spatial transformer network, as in Equation \ref{eq:en}.
\begin{equation}\label{eq:rn}
r_{n}^{(1)} = R_{recur}^{ 1}(glimpse\_vector_{n}, r_{n-1}^{(1)}) \\
\end{equation}
\begin{equation}\label{eq:rn2}
r_{n}^{(2)} = R_{recur}^{ 2}(r_{n}^{(1)}, r_{n-1}^{(2)})
\end{equation}
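Putting the pieces together, the encoder recurrence can be sketched as a plain loop (numpy, with a minimal LSTM cell and random stand-ins for the glimpse vectors; a real encoder would call the subnetworks above at each step):
\begin{verbatim}
import numpy as np

rng = np.random.RandomState(0)
D, N = 64, 6                     # state size, number of glimpses

def lstm_step(x, h, c, W):       # minimal LSTM cell (stand-in)
    z = W @ np.concatenate([x, h])
    i, f, o, g = np.split(z, 4)
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    c = sig(f) * c + sig(i) * np.tanh(g)
    h = sig(o) * np.tanh(c)
    return h, c

W1 = rng.normal(0, 0.1, (4 * D, 2 * D))
W2 = rng.normal(0, 0.1, (4 * D, 2 * D))
h1 = c1 = np.zeros(D)
h2 = c2 = np.zeros(D)            # (c2, h2) would come from the context net
for n in range(N):
    gv = rng.normal(size=D)      # stand-in for glimpse_vector_n
    h1, c1 = lstm_step(gv, h1, c1, W1)   # first layer:  r_n^(1)
    h2, c2 = lstm_step(h1, h2, c2, W2)   # second layer: r_n^(2)
z = h2                           # encoded vector passed to the decoder
\end{verbatim}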
\subsection{\bf{Decoder}}
\subsubsection{Classification}
As in standard image classification, the encoded vector z is passed through an MLP which outputs the probability of each class. The decoder for classification is shown in Figure \ref{fig:deccls}.
\begin{figure}[h]
\centering
\includegraphics[width=0.48\textwidth]{cls_decoder.png}
\caption{Architecture of CRAM decoder for image classification.}
\label{fig:deccls}
\end{figure}
\subsubsection{Inpainting}
Utilizing the architecture of DCGAN \cite{radford2015unsupervised}, the contaminated image is completed starting from the encoded vector z produced by the encoder. To ensure the quality of the completed image, we adopt the generative adversarial network (GAN) \cite{goodfellow2014generative} framework at both local and global scales \cite{iizuka2017globally}. Here the decoder works as the generator, while local and global discriminators evaluate the plausibility of its output at the local and global scale, respectively. The decoder for inpainting is shown in Figure \ref{fig:dec}.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{ip_decoder.jpg}
\caption{Architecture of CRAM decoder and discriminators for image inpainting.}
\label{fig:dec}
\end{figure}
\section{Training}
The loss function of CRAM can be divided into two parts: an encoder-related loss ({$L_{enc}$}) and a decoder-related loss ({$L_{dec}$}). {$L_{enc}$} constrains the glimpse patches to be consistent with the clue. For the classification task, where the clue is the object saliency, it is favorable if the glimpse patches cover as much of the salient region as possible. For the inpainting case, there should be a supervisor that urges the glimpse patches to contain the occluded region, since the neighborhood of the occlusion is the most relevant part for completion. To satisfy the above condition for both the classification and inpainting cases, {$L_{enc}$} (or {$L_{clue}$}) is defined as follows:
\begin{equation}\label{eq:lossg}
L_{enc}=L_{clue}(clue, STN, \tau) = \sum_{n}{STN(clue, \tau_{n})}
\end{equation}
where {$STN$} denotes the trained spatial transformer network of Equation \ref{eq:sn} and {$\tau_{n}$} is obtained from Equation \ref{eq:en} at each step of the core RNN. The decoder loss, which differs depending on the given task, is dealt with separately below. Note that the clue is a binary image for both the classification and inpainting tasks.
Since {$L_{dec}$} differs depending on whether the problem is classification or completion, the explanations of the losses are divided into two parts.
\subsection{Classification}
The decoder-related loss for image classification is the usual cross-entropy loss. The total loss {$L_{tot-cls}$} for image classification then becomes:
\begin{align}\label{eq:losscls}
L_{tot-cls} &= L_{enc} + L_{dec} \\
& = L_{clue}(clue, STN, \tau) + L_{cls}(Y, Y^{*})
\end{align}
where the clue is the binary image which takes the value 1 for the salient part and 0 otherwise, and {$Y$} and {$Y^{*}$} are the predicted and ground-truth class label vectors, respectively.
\subsection{Inpainting}
The decoder-related loss for image inpainting consists of a reconstruction loss and a GAN loss.
The reconstruction loss makes the completion more stable, while the GAN loss improves the quality of the restoration. For the reconstruction loss, an L1 loss over the contaminated region of the input is used:
\begin{equation}\label{eq:reconloss}
L_{recon}(z, clue, Y^{*}) = \| clue \odot (G(z) - Y^{*}) \| _{1}
\end{equation}
where z is the encoded vector from the encoder, the clue is the binary image which takes the value 1 for the occluded region and 0 otherwise, G is the generator (decoder), and {$Y^{*}$} is the original image before contamination.
Since there are two discriminators, the GAN loss is the sum of the local and global GAN losses.
\begin{equation}\label{eq:ganlosses}
\begin{split}
L_{gan} &= L_{global\_gan} + L_{local\_gan}
\end{split}
\end{equation}
The GAN losses at the local and global scales are defined as follows:
\begin{equation}\label{eq:ganloss}
\begin{split}
L_{local\_gan} &= log(1-D_{local}(Y^{*} \odot clue)) \\ &+ logD_{local}(G(image, clue) \odot clue) \\
\end{split}
\end{equation}
\begin{equation}\label{eq:ganloss2}
\begin{split}
L_{global\_gan} &= log(1-D_{global}(Y^{*} ))\\ &+ logD_{global}(G(image, clue))
\end{split}
\end{equation}
Combining Equation \ref{eq:lossg}, \ref{eq:reconloss} and \ref{eq:ganlosses}, the total loss for image inpainting {$L_{tot-ip}$} becomes:
\begin{align}\label{eq:ganloss3}
L_{tot-ip} &= L_{enc} + L_{dec} \\
&= L_{clue} + \alpha L_{recon} +\beta L_{gan}
\end{align}
where {$\alpha$} and {$\beta$} are weighting hyperparameters and {$L_{gan}$} is the sum of {$L_{local\_gan}$} and {$L_{global\_gan}$}.
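To summarize, the assembly of the total inpainting objective can be sketched as follows (numpy; the values of $\alpha$ and $\beta$ are placeholders, since the text does not fix them):
\begin{verbatim}
import numpy as np

def masked_l1(generated, target, clue):
    # L_recon: L1 distance restricted to the occluded region
    # (clue == 1 on occluded pixels, 0 elsewhere).
    return np.abs(clue * (generated - target)).sum()

def total_inpainting_loss(l_clue, l_recon, l_gan_local, l_gan_global,
                          alpha=1.0, beta=0.01):
    # L_tot-ip = L_clue + alpha * L_recon + beta * L_gan,
    # with L_gan = L_local_gan + L_global_gan.
    return l_clue + alpha * l_recon + beta * (l_gan_local + l_gan_global)
\end{verbatim}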
\section{Implementation Details}
\subsection{Classification}
To obtain the clue, a saliency map, we use a convolutional-deconvolutional network (CNN-DecNN) \cite{noh2015learning}, as shown in Figure \ref{fig:cnndecnn}. The CNN-DecNN is pre-trained on the MSRA10k \cite{cheng2015global} dataset, by far the largest publicly available saliency detection dataset, containing 10,000 annotated saliency images. This CNN-DecNN is trained with Adam \cite{kingma2014adam} in the default settings. At training and inference time, the rough saliency (the clue) is obtained from the pre-trained CNN-DecNN.
\begin{figure}[h]
\centering
\includegraphics[width=0.48\textwidth]{cnndecnn.png}
\caption{CNN-DecNN used to obtain the rough saliency of the image. This rough saliency is the clue for the classification task.}
\label{fig:cnndecnn}
\end{figure}
As mentioned earlier, the encoder consists of four subnetworks: the context network, spatial transformer network, glimpse network, and core RNN.
The image and clue are downsampled by a factor of 4 and used as inputs to the context network. Each passes through a 3-layer CNN (3 x 3 kernel size, 1 x 1 stride, same zero padding), each layer followed by a max-pooling layer (3 x 3 kernel size, 2 x 2 stride, same zero padding), and outputs a vector. These vectors are concatenated and passed through a 2-layer MLP, which outputs the initial state of the second layer of the core RNN.
The localization part of the spatial transformer network consists of a CNN and an MLP. The image and clue inputs pass through a 3-layer CNN (5 x 5 kernel size, 2 x 2 stride, same zero padding), and the second core RNN output passes through a 2-layer MLP. The output vectors of the CNN and MLP are concatenated and pass through another 2-layer MLP that outputs {$s, t_{x}, t_{y}$}, the parameters of {$\tau$}.
The glimpse network receives the glimpse patch and the {$\tau$} above as inputs. A 1-layer MLP is applied to {$\tau$}, while the glimpse patch passes through a 3-layer CNN and a 1-layer MLP to match the vector length of {$\tau$} after its 1-layer MLP. The glimpse vector is obtained by element-wise multiplication of the two output vectors.
The core RNN is composed of 2 layers of Long Short-Term Memory (LSTM) units \cite{hochreiter1997long}, chosen for their ability to learn long-range dependencies and their stable learning dynamics.
The decoder is quite simple, consisting only of a 3-layer MLP.
The number of CNN filters, the MLP dimensions, the core RNN dimensions, and the number of core RNN steps vary depending on the size of the image.
All CNN and MLP layers except the last include batch normalization \cite{ioffe2015batch} and ELU activation \cite{clevert2015fast}.
We used the Adam optimizer \cite{kingma2014adam} with learning rate 1e-4.
\subsection{Inpainting}
The encoder settings are identical to the image classification case.
The decoder (generator) consists of fractionally-strided CNN layers (3 x 3 kernel size, 1/2 stride) applied until the original image size is recovered.
Both the local and global discriminators are CNN-based; they extract features from the image to judge whether the input is genuine. The local discriminator is composed of a 4-layer CNN (5 x 5 kernel size, 2 x 2 stride, same zero padding) and a 2-layer MLP. The global discriminator consists of a 3-layer CNN (5 x 5 kernel size, 2 x 2 stride, same zero padding) and a 2-layer MLP. A sigmoid function is applied to the last outputs of the local and global discriminators to ensure the output value lies between 0 and 1. All CNN, fractionally-strided CNN, and MLP layers except the last include batch normalization and ELU activation. As in the classification setting, the number of CNN filters, the number of fractionally-strided CNN filters, the MLP dimensions, the core RNN dimensions, and the number of core RNN steps vary depending on the size of the image.
\section{Experiment}
\subsection{Image Classification}
Work in progress.
\subsection{Image Inpainting}
\subsubsection{Dataset}
The Street View House Numbers (SVHN) dataset \cite{netzer2011reading} is a real-world image dataset for object recognition obtained from house numbers in Google Street View images. The SVHN dataset contains 73257 training digits and 26032 testing digits of size 32 x 32 in RGB.
\subsubsection{Result}
Figure \ref{fig:svhn} shows the inpainting results on the SVHN dataset, where 6.25\% of the pixels at the center of each image are occluded. Even though the results are not excellent, they are enough to show the potential and scalability of CRAM. With a better generative model, better performance is expected.
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{svhn.png}
\caption{Experiment results on SVHN. From left to right: ground truth, contaminated input image, image generated by the CRAM decoder, and finally the completed image, in which only the missing region is replaced by the generated image.}
\label{fig:svhn}
\end{figure}
\section{Conclusion}
Work in progress.
\bibliographystyle{IEEEtran}
\section{Introduction}
Over the past decade, our large-scale view of the Universe has
undergone a revolution. Cosmologists have agreed on a standard model
that matches a wide range of astronomical data (eg. Spergel et
al. 2007). However, this $\Lambda$CDM concordance model relies on
three ingredients whose origin and nature are unknown: dark matter,
dark energy and fundamental fields driving a period of inflation,
during which density fluctuations are imprinted on the Universe. All
these elements of the model represent new aspects of fundamental
physics, which can best be studied via astronomy. The nature of the
dark energy, which now comprises the bulk of the mass-energy budget of
the Universe, will determine the ultimate fate of the Universe and is
among the deepest questions in physics.
The most powerful tool that can be brought to bear on these problems
is weak gravitational lensing of distant galaxies; this forms the core
of the DUNE mission\footnote{for further information on DUNE:
www.dune-mission.net}. Gravitational deflection of light by
intervening dark matter concentrations causes the images of background
galaxies to acquire an additional ellipticity of order of a percent,
which is correlated over scales of tens of arcminutes. Measuring this
signature probes the expansion history in two complementary ways: (1)
geometrically, through the distance-redshift relation, and (2)
dynamically, through the growth rate of density fluctuations in the
Universe.
Utilisation of these cosmological probes relies on the measurement of
image shapes and redshifts for several billion galaxies. The
measurement of galaxy shapes for weak lensing imposes tight
requirements on the image quality which can only be met in the absence
of atmospheric phase errors and in the thermally stable environment of
space. For this number of galaxies, distances must be estimated using
photometric redshifts, involving photometry measurements over a wide
wavelength range in the visible and near-IR. The necessary visible
galaxy colour data can be obtained from the ground, using current or
upcoming instruments, complementing the unique image quality of space
for the measurement of image distortions. However, at wavelengths
beyond 1$\mu$m, we require a wide NIR survey to depths that are only
achievable from space.
Given the importance of the questions being addressed and to provide
systematic cross-checks, DUNE will also measure Baryon Acoustic
Oscillations, the Integrated Sachs-Wolfe effect, and galaxy Cluster
Counts. Combining these independent cosmological probes, DUNE will
tackle the following questions: What are the dynamics of dark energy?
What are the physical characteristics of the dark matter? What are
the seeds of structure formation and how did structure grow? Is
Einstein's theory of General Relativity the correct theory of gravity?
DUNE will combine its unique space-borne observation with existing and
planned ground-based surveys, and hence increases the science return
of the mission while limiting costs and risks. The panoramic visible
and NIR surveys required by DUNE's primary science goals will afford
unequalled sensitivity and survey area for the study of galaxy
evolution and its relationship with the distribution of the dark
matter, the discovery of high redshift objects, and of the physical
drivers of star formation. Additional surveys at low galactic
latitudes will provide a unique census of the Galactic plane and
earth-mass exoplanets at distances of 0.5-5 AU from their host star
using the microlensing technique. These DUNE surveys will provide a
unique all-sky map in the visible and NIR and thus complement other
space missions such as Planck, WMAP, eROSITA, JWST, and WISE. The
following describes the science objectives, instrument concept and
mission profile (see Table~\ref{table:summary} for a baseline
summary). A description of an earlier version of the mission, without
NIR capability and developed during a CNES phase 0 study, can be found
in Refregier et al. 2006 and Grange et al. 2006.
\begin{table}
\caption{DUNE Baseline summary}
\label{table:summary}
\begin{tabular}{|l|l|}
\hline
Science objectives & Must: Cosmology and Dark Energy. Should: Galaxy formation\\
& Could: Extra-solar planets\\
\hline
Surveys & Must: 20,000 deg$^2$ extragalactic, Should: Full sky (20,000
deg$^2$ \\
& Galactic), 100 deg$^2$ medium-deep. Could: 4 deg$^2$ planet hunting\\
\hline
Requirements & 1 visible band (R+I+J) for high-precision shape measurements,\\
& 3 NIR bands (Y, J, H) for photometry\\
\hline
Payload & 1.2m telescope, Visible \& NIR cameras with 0.5 deg$^2$ FOV
each\\
\hline
Service module & Mars/Venus express, Gaia heritage \\
\hline
Spacecraft & 2013kg launch mass\\
\hline
Orbit & Geosynchronous\\
\hline
Launch & Soyuz S-T Fregat\\
\hline
Operations & 4 year mission\\
\hline
\end{tabular}
\end{table}
\section{\label{section2}Science Objectives}
The DUNE mission will investigate a broad range of astrophysics and
fundamental physics questions detailed below. Its aims are twofold:
first, to study dark energy and measure its equation of state parameter
$w$ (see definition below) and its evolution with precisions of 2\%
and 10\% respectively, using both the expansion history and structure
growth; second, to explore the nature of dark matter by testing the Cold
Dark Matter (CDM) paradigm and by measuring precisely the sum of the
neutrino masses. At the same time, it will test the validity of
Einstein's theory of gravity. In addition, DUNE will investigate how
galaxies form, survey all Milky-Way-like galaxies in the 2$\pi$
extra-galactic sky out to $z \sim 2$ and detect thousands of galaxies
and AGN at $6<z<12$. It will provide a detailed visible/NIR map of
the Milky Way and nearby galaxies and provide a statistical
census of exoplanets with masses above 0.1 Earth mass and orbits
greater than 0.5 AU.
\subsection{Understanding Dark Energy}
A variety of independent observations overwhelmingly indicate that the
cosmological expansion began to accelerate when the Universe was
around half of its present age. Presuming the correctness of general
relativity this requires a new energy component known as dark
energy. The simplest case would be Einstein's cosmological constant
($\Lambda$), in which the dark energy density would be exactly
homogeneous and independent of time. However, the description of
vacuum energy from current Particle Physics concepts conflicts by 120
orders of magnitude with the observed value, and the discrepancy is
still not understood. Cosmologists are thus strongly motivated to
consider models of a dynamical dark energy, or even to contemplate
modifications to General Relativity. Explaining dark energy may well
require a radical change in our understanding of Quantum Theory or
Gravity, or both. One of the major aims of DUNE is to determine
empirically which of these alternatives is to be preferred. The
properties of dark energy can be quantified by considering its
equation of state parameter $w=p/\rho c^2$, where $p$ and $\rho$ are
its effective pressure and density. Unlike matter, dark energy has the
remarkable property of having negative pressure ($w<0$) and thus of
driving the Universe into a period of accelerated expansion (if
$w<-1/3$). The latter began relatively recently, around $z \le 1$. If
the dark energy resembles a cosmological constant ($w=-1$), it can
only be directly probed in the low-redshift Universe (see
Fig.~\ref{figc1}). This expansion history can be measured in two
distinct ways (see Fig.~\ref{figc1}): (1) the distance-redshift
relation $D(z)$; (2) the growth of structure (i.e. galaxies and
clusters of galaxies). The $D(z)$ relation can be probed geometrically
using 'standard candles' such as supernovae, or via measures of the
angular diameter distance from gravitational lensing or from the
``standard rod'' of Baryon Acoustic Oscillations (BAO). The
accelerated expansion slows down the gravitational growth of density
fluctuations; this growth of structure can be probed by comparing the
amplitude of structure today relative to that when the CMB was formed.
Many models for dark energy and modifications to gravity have been
proposed in which the equation of state parameter $w$ varies with
time. A convenient approximation is a linear dependence on the scale
factor $a=1/(1+z)$: $w(a)=w_n+(a_n-a)w_a$, where $w_n$ is the value of
the equation of state at a pivot scale factor $a_n$ (close to 0.6 for
most probes) and $w_a$ describes the redshift evolution. The goal of
future surveys is to measure $w_n$ and $w_a$ to high precision. To
judge their relative strengths we use a standard dark energy figure of
merit (FoM) (Albrecht et al. 2006), which we define throughout this
proposal as: $FoM=1/(\Delta w_n \Delta w_a)$, where $\Delta w_n$ and
$\Delta w_a$ are the (1$\sigma$) errors on the equation of state
parameters. This FoM is inversely proportional to the area of the
error ellipse in the ($w_n-w_a$) plane.
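As a concrete check of this definition, the FoM values quoted in Table~\ref{tableC2} follow directly from the listed errors:
\begin{verbatim}
def fom(dw_n, dw_a):
    # Dark energy figure of merit: FoM = 1 / (dw_n * dw_a).
    return 1.0 / (dw_n * dw_a)

print(fom(0.03, 3.0))    # Planck alone:  ~11
print(fom(0.02, 0.1))    # DUNE:           500
print(fom(0.01, 0.05))   # DUNE + Planck: 2000
\end{verbatim}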
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth,angle=0]{figtheo1_2.eps}\includegraphics[width=0.5\textwidth,angle=0]{figtheo2_2.eps}
\end{center}
\caption{Effects of dark energy. Left: Fraction of the density of the
Universe in the form of dark energy as a function of redshift $z$.,
for a model with a cosmological constant ($w=-1$, black solid line),
dark energy with a different equation of state ($w=-0.7$, red dotted
line), and a modified gravity model (blue dashed line). Dark energy
becomes dominant in the low redshift Universe era probed by DUNE,
while the early Universe is probed by the CMB. Right: Growth factor of
structures for the same models. Only by measuring the geometry
(bottom panel) and the growth of structure (top panel) at low
redshifts can a modification of dark energy be distinguished from that
of gravity. Weak lensing measures both effects. }
\label{figc1}
\end{figure}
\subsection{DUNE's Cosmological Probes}
DUNE will deduce the expansion history from two complementary methods:
the distance-redshift relation and the growth of structure. DUNE thus has the
advantage of probing the parameters of dark energy in two independent
ways. A single accurate technique can rule out many of the suggested
members of the family of dark energy models, but it cannot test the
fundamental assumptions about gravity theory. If General Relativity is
correct, then either $D(z)$ or the growth of structure can determine
the expansion history. In more radical models that violate General
Relativity, however, this equivalence between $D(z)$ and growth of
structure does not apply (see Fig.~\ref{figc1}). For this purpose,
DUNE will use a combination of the following cosmological probes. The
precision on Dark Energy parameters achieved by DUNE's weak lensing
survey and complementary probes described below is shown in
Fig.~\ref{figc3} and Table~\ref{tableC2}.
{\it Weak Lensing - A Dark Universe Probe:}
As light from galaxies travels towards us, its path is deflected by
the intervening mass density distribution, causing the shapes of these
galaxies to appear distorted by a few percent. The weak lensing method
measures this distortion by
correlating the shapes of background galaxies to probe the density
field of the Universe. By dividing galaxies into redshift (or
distance) bins, we can examine the growth of structure and make
three-dimensional maps of the dark matter. An accurate lensing survey,
therefore, requires precise measurements of the shapes of galaxies as
well as information about their redshifts. High-resolution images of
large portions of the sky are required, with low levels of systematic
errors that can only be achieved via observations from a thermally
stable satellite in space. Analyses of the dark energy require precise
measurements of both the cosmic expansion history and the growth of
structure. Weak lensing stands apart from all other available methods
because it is able to make accurate measurements of both
effects. Given this, the optimal dark energy mission (and dark sector
mission) is one that is centred on weak gravitational lensing and is
complemented by other dark energy probes.
{\it Baryon Acoustic Oscillations (BAO) -- An Expansion History
Probe:}
The scale of the acoustic oscillations caused by the
coupling between radiation and baryons in the early Universe can be
used as a 'standard ruler' to determine the distance-redshift
relation. Using DUNE, we can perform BAO measurements using
photometric redshifts yielding the three-dimensional positions of a
large sample of galaxies. All-sky coverage in the NIR enabled by DUNE,
impossible from the ground, is crucial to reach the necessary
photometric redshift accuracy for this BAO survey.
{\it Cluster Counts (CC) -- A Growth of Structure Probe:}
Counts of the abundance of galaxy clusters (the most massive bound
objects in the Universe) as a function of redshift are a powerful
probe of the growth of structure. There are three ways to exploit the
DUNE large-area survey, optimised for weak lensing, for cluster
detection: strong lensing; weak lensing; and optical richness.
{\it Integrated Sachs-Wolfe (ISW) Effect -- A Higher Redshift
Probe:} The ISW effect is the change in CMB photon energy as it
passes through a changing potential well. Its presence indicates
either space curvature, a dark energy component or a modification to
gravity. The ISW effect is measured by cross-correlating the CMB with
a foreground density field covering the entire extra-galactic sky, as
measured by DUNE. Because it is a local probe of structure growth, ISW
will place complementary constraints on dark energy, at higher
redshifts, relative to the other probes (Douspis et al. 2008).
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth,angle=0]{Errors1_new.ps}\includegraphics[width=0.5\textwidth,angle=0]{Errors2_new.ps}
\end{center}
\caption{Left: Expected errors on the dark energy equation of state
parameters for the four probes used by DUNE (68\% statistical
errors). The light blue band indicates the expected errors from
Planck. Of the four methods, weak lensing clearly has the greatest
potential. Right: The combination of BAO, CC and ISW (red solid
line) begins to reach the potential of the lensing survey (blue
dashed line) and provides an additional cross-check on
systematics. The yellow ellipse corresponds to the combination of
all probes and reaches a precision on dark energy of 2\% on $w_n$
and 10\% on $w_a$.}
\label{figc3}
\end{figure}
\subsection{Understanding Dark Matter}
Besides dark energy, one major component of the concordance model of
cosmology is dark matter ($\sim90$\% of the matter in the Universe,
and $\sim 25$\% of the total energy). The standard assumption is that
the dark matter particle(s) is cold and non-collisional (CDM). Besides
direct and indirect dark matter detection experiments, its nature may
well be revealed by experiments such as the Large Hadron Collider
(LHC) at CERN, but its physical properties may prove to be harder to
pin down without astronomical input. One way of testing this is to
study the amount of substructure in dark matter halos on scales
1-100'', which can be done using high order galaxy shape measurements
and strong lensing with DUNE. Weak lensing measurements can constrain
the total neutrino mass and number of neutrino species through
observations of damping of the matter power spectrum on small
scales. Combining DUNE measurements with Planck data would reduce the
uncertainty on the sum of neutrino masses to 0.04eV, and may therefore
make the first measurement of the neutrino mass (Kitching et al. 2008).
\subsection{Understanding the Seeds of Structure Formation}
It is widely believed that cosmic structures originated from vacuum
fluctuations in primordial quantum fields stretched to cosmic scales
in a brief period during inflation. In the most basic inflationary
models, the power spectrum of these fluctuations is predicted to be
close to scale-invariant, with a spectral index $n_s$ and amplitude
parameterised by $\sigma_8$. As the Universe evolved, these initial
fluctuations grew. CMB measurements probe their imprint on the
radiation density at $z \sim 1100$. Density fluctuations continued to
grow into the structures we see today. Weak lensing observations with
DUNE will lead to a factor of 20 improvement on the initial
conditions as compared to CMB alone (see Table~\ref{tableC2}).
\subsection{Understanding Einstein's Gravity}
Einstein's General Theory of Relativity, the currently accepted theory
of gravity, has been well tested on solar system and galactic
scales. Various modifications to gravity on large scales (e.g. by
extra dimensions, superstrings, non-minimal couplings or additional
fields) have been suggested to avoid dark matter and dark energy. The
weak lensing measurements of DUNE will be used to test the underlying
theory of gravity, using the fact that modified gravity theories
typically alter the relation between geometrical measures and the
growth of structure (see Fig.~\ref{figc1}). DUNE can measure the
growth factor exponent $\gamma$ with a precision of 2\%.
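For reference, $\gamma$ enters through the standard growth-rate parameterization (a common convention, quoted here for completeness):
\begin{equation}
f(z) \equiv \frac{d\ln\delta}{d\ln a} \simeq \Omega_m(z)^{\gamma},
\end{equation}
where $\delta$ is the linear density contrast and $\gamma \simeq 0.55$ in General Relativity; a measured deviation of $\gamma$ from this value would signal a modification of gravity.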
\begin{table}
\caption{Dark energy and initial conditions figures of merit for DUNE
and Planck.}
\label{tableC2}
\begin{tabular}{|l|r|r|r|r|r|r|}
\hline
& \multicolumn{2}{c}{Dark Energy Sector}\vline & \multicolumn{2}{c}{Initial Conditions
Sector}\vline& DE &IC \\ \hline
& $\Delta w_n$ & $\Delta w_a$ & $\Delta \sigma_8$ & $\Delta n_s$ & FoM & FoM \\ \hline
Planck &0.03 &3 &0.03 &0.004&11 &8,000 \\ \hline
DUNE &0.02 &0.1&0.006 &0.01 &500 &17,000 \\ \hline
DUNE + Planck& 0.01&0.05&0.002&0.003&2000&170,000\\ \hline
\multicolumn{5}{l}{Factor improvement of DUNE + Planck over Planck only}&180&20 \\ \hline
\end{tabular}
\end{table}
\par\bigskip
Meeting the above cosmological objectives necessitates an
extra-galactic all-sky survey (DASS-EX) in the visible/NIR with
galaxies at a median redshift of $z \sim 1$. To this survey will be
added a shallower survey of the Galactic plane (DASS-G) which will
complete the coverage to the full sky, as well as a medium-deep survey
of 100 deg$^{\rm 2}$ (DUNE-MD) and a pencil beam microlensing survey
for planets in the Galactic bulge.
Focussed on the dark sector, DUNE will produce an invaluable broad survey
legacy. DASS will cover a 10000 times larger area than other
optical/near-IR surveys of the same or better resolution, will be 4mag
deeper than the GAIA photometry and six times higher resolution than
the SDSS. In the infrared, DASS-EX will be nearly 1000 times deeper
(in J) than the all-sky 2MASS survey with an effective
search volume
which will be 5000-fold that of the UKIDSS large area survey currently
underway, and 500-fold that of the proposed VISTA Hemisphere
Survey. It would take VISTA 8000 years to match DASS-EX depth and
20,000 deg$^{\rm 2}$ area coverage. DASS-MD will bridge the gap
between DASS-EX and expected JWST surveys.
\subsection{Tracking the Formation of Galaxies and AGN with DUNE}
While much progress has been made in understanding the formation of
large scale structure, there are still many problems in forming
galaxies within this structure with the observed luminosity function
and morphological properties. This is now a major problem in
astronomy. Obtaining deep high spatial resolution near-IR images will
be central to the study of galaxy morphology and clustering. A large area
survey is required for rare but important events, such as the merger
rate of very massive galaxies. DUNE will deliver this key capability.
Using DUNE's weak lensing maps, we will study the relationship between
galaxy mass and light, the bias, by mapping the total mass density and
the stellar mass and luminosity.
Galaxy clusters are the largest scale signposts of
structure formation.
While at present only a few massive clusters
at $z>1$ are known, DUNE will find hundreds of Virgo-cluster-mass
objects at $z>2$, and several thousand clusters of M=$1-2 \times
10^{13}$M$_{\odot}$. The latter are the likely environments in which the peak
of QSO activity at $z\sim2$ takes place, and hold the empirical
key to understanding the heyday of QSO activity.
Using the Lyman-dropout technique in the near-IR, the DUNE-MD survey
will be able to detect the most luminous objects in the early Universe
($z>6$): $\sim 10^4$ star-forming galaxies at $z\sim8$ and up to
$10^3$ at $z\sim10$, for SFRs $>30-100$M$_{\odot}$/yr. It will also be able
to detect significant numbers of high-$z$ quasars: up to $10^4$ at
$z\sim7$, and $10^3$ at $z\sim9$. These will be central to understanding the
reionisation history of the Universe.
DUNE will also detect a very large number of strong lensing systems:
about $10^5$ galaxy-galaxy lenses, $10^3$ galaxy-quasar lenses and
5000 strong lensing arcs in clusters (see Meneghetti et al. 2007). It
is also estimated that several tens of galaxy-galaxy lenses will be
\emph{double} Einstein rings (Gavazzi et al. 2008), which are powerful
probes of the cosmological model as they simultaneously probe several redshifts.
In addition, during the course of the DUNE-MD survey (over 6 months),
we expect to detect $\sim 3000$ Type Ia Supernovae with redshifts up
to $z\sim0.6$ and a comparable number of Core Collapse SNe (Types II
and Ib/c) out to $z\sim0.3$. This will lead to measurements of SN rates,
thus providing information on their progenitors and on the star
formation history.
\subsection{Studying the Milky Way with DUNE}
DUNE is also primed for a breakthrough in Galactic astronomy. DASS-EX,
complemented by the shallower survey of the Galactic plane (with
$|b|<30\; deg$) will provide all-sky high resolution (0.23'' in the wide red
band, and 0.4'' in YJH) deep imaging of the stellar content of the
Galaxy, allowing the deepest detailed structural studies of the thin
and thick disk components, the bulge/bar, and the Galactic halo
(including halo stars in nearby galaxies such as M31 and M33) in bands
which are relatively insensitive to dust in the Milky Way.
DUNE will be little affected by extinction and will supersede by
orders of magnitude all of the ongoing surveys in terms of angular
resolution and sensitivity.
DUNE will thus
enable the most comprehensive stellar census of late-type dwarfs and
giants, brown dwarfs, He-rich white dwarfs, along with detailed
structural studies, tidal streams and merger fragments. DUNE's
sensitivity will also open up a new discovery space for rare stellar
and low-temperature objects via its H-band imaging. Currently, most
Galactic structure studies are focussed on the halo. Studying the
Galactic disk components requires the combination of spatial
resolution (crowding) and dust-penetration (H-band) that DUNE can
deliver.
Beyond our Milky Way, DUNE will also yield the most detailed and
sensitive survey of structure and substructure in nearby galaxies,
especially of their outer boundaries, thus constraining their merger
and accretion histories.
\subsection{Search for Exo-Planets}
The discovery of extrasolar planets is the most exciting development
in astrophysics over the past decade, rivalled only by the discovery
of the acceleration of the Universe. Space observations (e.g. COROT, KEPLER), supported by
ground-based high-precision radial velocity surveys will probe
low-mass planets (down to $1 M_\oplus$). DUNE is also
perfectly suited to trace the distribution of matter on very small
scales: those of the normally invisible extrasolar planets. Using the
microlensing effect, DUNE can provide a statistical census of
exoplanets in the Galaxy with masses over $0.1 M_\oplus$ from orbits
of 0.5 AU to free-floating objects. This includes analogues to all the
solar system's planets except for Mercury, as well as most planets
predicted by planet formation theory. Microlensing is the temporary
magnification of a Galactic bulge source star by the gravitational
potential of an intervening lens star passing near the line of
sight. A planet orbiting the lens star will alter the
magnification, producing a brief flash or a dip in the observed light
curve of the star (see Fig. \ref{figc5}).
Because of atmospheric seeing (limiting the monitoring to large source
stars), and poor duty cycle even using networks, ground-based
microlensing surveys are only able to detect a few to 15 $M_\oplus$
planets in the vicinity of the Einstein ring radius (2-3 AU). The high
angular resolution of DUNE, and the uninterrupted visibility and NIR
sensitivity afforded by space observations will provide detections of
microlensing events using as sources G and K bulge dwarf stars, and
therefore can detect planets down to $0.1-1 M_\oplus$ from orbits of
0.5 AU. Moreover, there will be a very large number of transiting hot
Jupiters detected towards the Galactic bulge as 'free' ancillary
science. A space-based microlensing survey is thus the only way to
gain a comprehensive census and understanding of the nature of
planetary systems and their host stars. We also underline that the
planet search scales linearly with the surface of the focal plane and
the duration of the experiment.
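For background, the magnification of a point source by a single point lens is the standard expression (quoted here for completeness; it is not derived in this paper):
\begin{equation}
A(u) = \frac{u^2+2}{u\sqrt{u^2+4}},
\end{equation}
where $u$ is the source-lens angular separation in units of the Einstein radius. A planet around the lens star perturbs this smooth curve over hours to days, producing the brief flashes or dips described above.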
\begin{figure}
\begin{center}
\includegraphics[width=7cm, height=6cm, angle=0]{duneml.ps}
\caption{Exoplanet discovery parameter space (planet mass vs orbit size)
showing for reference the 8 planets from our solar system (labeled as letters),
those detected by Doppler wobble (T), transit (circle), and
microlensing. We outline regions that can be probed by different
methods. Note the uniqueness of the parameter space probed by DUNE
compared to other techniques.
}
\label{figc5}
\end{center}
\end{figure}
\section{DUNE Surveys: the Need for All-Sky Imaging from Space}
There are two key elements to a high precision weak lensing survey: a
large survey area to provide large statistics, and the control of
systematic errors. Figure \ref{fig:req} shows that to
reach our dark energy target (2\% error on $w_n$) a survey of 20,000
square degrees with galaxies at $z\sim1$ is required. This result is
based on detailed studies showing that, for a fixed observing time,
the accuracy of all the cosmological parameters is highest for a wide
rather than deep survey (Amara \& Refregier 2007a, 2007b). This required
survey area drives the choice of a 1.2m telescope and a combined
visible/NIR FOV of 1 deg$^{\rm 2}$ for the DUNE baseline.
\begin{figure}
\includegraphics[width=0.5\textwidth,height=5cm,angle=0]{surveyarea.ps}\includegraphics[width=0.5\textwidth,angle=0]{des_dens.eps}
\caption{Left: Error on the dark energy equation of state
parameter $w_n$ as a function of weak lensing survey area (in deg$^{\rm
2}$) for several shape measurement systematic levels (assuming 40
galaxies/amin$^2$ with a median redshift $z_m$=1). An area of 20,000 deg$^2$
and a residual systematic shear variance of
$\sigma_{sys}^2<10^{-7}$ is required to achieve the DUNE objective
(error on $w_n$ better than 2\%).
Right (from Abdalla et
al. 2007): Photometric redshift performance for a DES-like ground survey
with and without the DUNE NIR bands (J,H). The deep NIR photometry,
only achievable in space, results in a dramatic reduction of the
photometric redshift errors and catastrophic failures which are needed for all
the probes (weak lensing, BAO, CC, ISW).}
\label{fig:req}
\end{figure}
Ground based facilities plan to increase area coverage, but they will
eventually be limited by systematics inherent in ground based
observations (atmospheric seeing which smears the image, instabilities
of ground based PSFs, telescope flexure and wind-shake, and
inhomogeneous photometric calibrations arising from seeing
fluctuations). The most recent ground-based wide field imagers
(e.g. MegaCam on CFHT, and Subaru) have a stochastic variation of the
PSF ellipticity of the order of a few percent, i.e. of the same order
of magnitude as the sought-after weak lensing signal. Current
measurements have a residual shear systematics variance of
$\sigma_{sys}^2 \sim 10^{-5}$, as indicated by both the results of
the STEPII program and the scatter in measured values of
$\sigma_8$. This level of systematics is comparable to the statistical
errors for surveys that cover a few tens of square degree
(Fig. \ref{fig:req}). As seen on the figure, to reach DUNE's dark
energy targets, the systematics must be at the level of
$\sigma_{sys}^2 \sim 10^{-7}$, 100 times better than the current level
(see Amara \& Refregier 2007b for details). While ground based surveys
may improve their systematics control, reaching this level will be an
extreme challenge. One ultimate limit arises from the finite
information contained in the stars used to calibrate the PSF, due to
noise and pixelisation. Simulations by Paulin-Henriksson et al. (2008)
show that, to reach our systematic targets, the PSF must remain
constant (within a tolerance of 0.1\%) over 50 arcmin$^2$ (which
corresponds to $\sim 50$ stars). While this is prohibitive from the
ground, we have demonstrated during a CNES phase 0 study (Refregier et
al. 2006), that with the careful design of the instrument, this can be
achieved from space. In addition to shape measurements, wide area
imaging surveys use photometric information to measure the redshift
of galaxies in the images. Accurate measurements of the photometric
redshifts require the addition of NIR photometry (an example of this
is shown in Fig. \ref{fig:req}, right panel, and also Abdalla et
al. 2007). Such depths in the NIR cannot be achieved from the ground
over wide area surveys and can only be done from space.
\par\bigskip
To achieve the scientific goals listed in section \ref{section2}, DUNE will
perform four surveys detailed in the following and in Table \ref{tableC5}.
\subsection{Wide Extragalactic Survey: DASS-EX }
To measure dark energy to the required precision, we need to make
measurements over the entire extra-galactic sky to a depth which
yields 40 gal/arcmin$^2$ useful for lensing with a median redshift
$z_m \simeq 0.9$. This can be achieved with a survey (DASS-EX) that
has AB-magnitude limit of 24.5 (10$\sigma$ extended source) in a broad
red visible filter (R+I+Z). Based on the fact that DUNE focuses on
observations that cannot be obtained from the ground, the wide survey
relies on two unique factors that are enabled by space: image quality
in the visible and NIR photometry. Central to shape measurements for
weak lensing, the PSF of DUNE needs to be sampled at better than 2-2.5
pixels per FWHM (Paulin-Henriksson et al. 2008), to be constant over
50 stars around each galaxy (within a tolerance of $\sim 0.1\%$ in
shape parameters), and to have a wavelength dependence which can be
calibrated. Accurate measurement of the redshift of distant galaxies
($z \sim 1$) requires photometry in the NIR where galaxies have a
distinctive feature (the 4000$\AA$ break). Deep NIR photometry
requires space observations. The bands Y, J and H are the ideal
complement to ground-based surveys (see Abdalla et al. 2007
for a discussion), as recommended by the ESO/ESA Working Group on
Fundamental Cosmology (Peacock et al. 2006).
\subsection{Legacy surveys: DASS-G, DUNE-MD, and DUNE-ML}
We propose to allocate six months to a medium deep survey (DUNE-MD) with
an area of 100 deg$^2$ to magnitudes of 26 in Y, J and H, located at
the N and S ecliptic poles. This survey can be used to calibrate DUNE
during the mission, by constructing it from a stack of $>30$
sub-images to achieve the required depths. DUNE will also perform a
wide Galactic survey (DASS-G) that will complement the 4$\pi$ coverage
of the sky and a microlensing survey (DUNE-ML). Both surveys require
short exposures. Together with the DASS-EX, these surveys need good
image quality with low level of stray light. A summary of all the
surveys is shown in Table \ref{tableC5}.
\begin{table}
\caption{Requirements and geometry for the four DUNE surveys.}
\label{tableC5}
\begin{tabular}{|c|c|c|c|}
\hline
\multicolumn{4}{|c|}{\textbf{Wide Extragalactic Survey DASS-EX (must)}}\\ \hline
\multicolumn{2}{|c|}{Area}&\multicolumn{2}{|c|}{20,000 sq degrees -- $|b|> 30 \deg$
}\\ \hline
\multirow{2}{*}{Survey Strategy}& Contiguous patches & \multicolumn{2}{|c|}{$> 20 \deg \times 20 \deg$} \\ \cline{2-4}
& Overlap & \multicolumn{2}{|c|}{$ 10 \%$} \\ \hline
\multicolumn{2}{|c|}{Shape Measurement Channel}& R+I+Z (550-920nm) & R+I+$Z_{AB}$ $<$24.5 (10$\sigma$ ext) \\ \hline
\multicolumn{2}{|c|}{ } & Y (920-1146nm) & $Y_{AB}<$24 (5$\sigma$ point) \\ \cline{3-4}
\multicolumn{2}{|c|}{Photometric Channel} & J (1146-1372nm) & $J_{AB}<$24 (5$\sigma$ point) \\ \cline{3-4}
\multicolumn{2}{|c|}{ } & H (1372-1600nm) & $H_{AB}<$24 (5$\sigma$ point) \\ \hline
\multirow{2}{*}{PSF} & Size \& Sample & 0.23" FWHM & $>$ 2.2 pixels per FWHM \\\cline{2-4}
& Stability & \multicolumn{2}{|c|}{within tolerance of 50 stars} \\ \hline
\multirow{2}{*}{Image Quality} & Dead pixels &\multicolumn{2}{|c|}{$<$ 5 \% of final image}\\ \cline{2-4}
& Linearity &\multicolumn{2}{|c|}{Instrument calibratable for $1<$S/N$<1000$}\\ \hline
\multicolumn{4}{|c|}{\textbf{Medium Deep Survey DUNE-MD (should)}}\\ \hline
\multicolumn{2}{|c|}{Area}&\multicolumn{2}{|c|}{ $\sim$100 sq degrees
-- Ecliptic poles}\\ \hline
Survey Strategy& Contiguous patches & \multicolumn{2}{|c|}{Two patches each $7 \deg \times 7 \deg$} \\ \hline
\multicolumn{2}{|c|}{Photometric Channel} & \multicolumn{2}{|c|}{ $Y_{AB}, \; J_{AB}, \; H_{AB} <$26 (5$\sigma$ point) -- for stack}\\ \hline
\multicolumn{2}{|c|}{PSF} & \multicolumn{2}{c|}{Same conditions as the wide survey} \\ \hline
\multicolumn{4}{|c|}{\textbf{Wide Galactic Survey DASS-G (should)}}\\ \hline
\multicolumn{2}{|c|}{Area}&\multicolumn{2}{|c|}{ 20,000 sq degrees --
$|b| < 30 \deg$}\\ \hline
\multicolumn{2}{|c|}{Shape Measurement Channel}&\multicolumn{2}{|c|}{$R+I+Z_{AB}<23.8$ ($5\sigma$ ext)}\\ \hline
\multicolumn{2}{|c|}{Photometric Channel} & \multicolumn{2}{|c|}{ $Y_{AB}, \; J_{AB}, \; H_{AB} <$22 (5$\sigma$ point)}\\ \hline
PSF & Size & \multicolumn{2}{|c|}{$< 0.3"$ FWHM}\\ \hline
\multicolumn{4}{|c|}{\textbf{Microlensing Survey DUNE-ML (could)}}\\ \hline
\multicolumn{2}{|c|}{Area}&\multicolumn{2}{|c|}{ 4 sq degrees -- Galactic bulge}\\ \hline
Survey Strategy & Time sampling & \multicolumn{2}{|c|}{Every 20 min -- 1 month blocks -- total of 3 months}\\ \hline
\multicolumn{2}{|c|}{Photometric Channel} & \multicolumn{2}{|c|}{
$Y_{AB}, \; J_{AB}, \; H_{AB} <$22 (5$\sigma$ point) -- per visit}\\ \hline
PSF & Size & \multicolumn{2}{|c|}{$< 0.4"$ FWHM}\\ \cline{2-4}
\hline
\end{tabular}
\end{table}
\section{Mission Profile and Payload instrument}
The mission design of DUNE is driven by the need for the stability of
the PSF and large sky coverage. PSF stability puts stringent
requirements on pointing and thermal stability during the observation
time. The 20,000 square degrees of DASS-EX demands high operational
efficiency, which can be achieved using a drift scanning mode (or Time
Delay Integration, TDI, mode) for the CCDs in the visible focal
plane. TDI mode necessitates the use of a counter-scanning mirror to
stabilize the image in the NIR focal plane channel.
The baseline for DUNE is a Geosynchronous Earth orbit (GEO), with a
low inclination and altitude close to a standard geostationary
orbit. Based on Phase 0 CNES study, this solution was chosen to meet
both the high science telemetry needs and the spacecraft low
perturbation requirements. This orbit also provides substantial
launch flexibility, and simplifies the ground segment.
As for the PSF size and sampling requirements, a
baseline figure for the line-of-sight stability is 0.5 pixel (smearing
MTF $> 0.99$ at cut-off frequency), with the stability budget to be
shared between the telescope thermal stability (0.35 pixel) and the
attitude orbit control system (AOCS) (0.35 pixel). This implies a
line-of-sight stability better than 0.2 $\mu$rad over 375 sec (the
integration time across one CCD). This stringent requirement calls for
a minimization of external perturbations, which mainly consist of solar radiation
pressure and gravity gradient torques. A gravitational torque of 20
$\mu$Nm is acceptable, and requires an orbit altitude of at least
25,000 km. The attitude and orbit control design is based on proportional
actuators.
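As a consistency check (our arithmetic, assuming the two contributions combine in quadrature), the shared budget recovers the overall allocation, $\sqrt{0.35^2+0.35^2}\simeq 0.49\approx 0.5$ pixel; moreover, with the 0.102 arcsec pixel scale of the visible focal plane quoted below, the AOCS share of 0.35 pixel corresponds to $0.35\times 0.102''\times 4.85\,\mu\mathrm{rad}/''\simeq 0.17\,\mu$rad, consistent with the quoted 0.2 $\mu$rad line-of-sight stability.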
A stable thermal environment is required for the payload ($\sim 10$~mK
variation over 375~s), hence the mission design requires a permanent
cold face for the focal plane radiators and an orbit that minimizes
heat load from the Earth. This
could be achieved by having the whole payload in a regulated temperature
cavity.
A primary driver for the GEO orbit choice is the high data rate -- the
orbit must be close enough to Earth to facilitate the transmission of
the high amount of data produced by the payload every day (about
1.5~Tbits) given existing ground network facilities, while minimizing
communication downlink time during which science observations cannot
be made (with a fixed antenna).
The effects of the radiation environment at GEO, for both CCD bulk
damage induced by solar protons and false detections induced by
electrons with realistic shielding, are considered acceptable. However,
DUNE-specific radiation tests on CCD sensors will be required as an
early development to confirm the robustness of the measurements to
proton-induced damage. A High Elliptical Orbit (HEO) operating beyond the
radiation belt is an alternative in case electron radiation or thermal
constraints prevent the use of GEO.
The payload for DUNE is a passively cooled 1.2m diameter Korsch-like
three-mirror telescope with two focal planes, visible and NIR covering
1 square degree. Figure~\ref{fig:4.1} provides an overview of the
payload. The Payload module design uses Silicon Carbide (SiC)
technology for the telescope optics and structure. This provides low
mass, high stability, low sensitivity to radiation and the ability to
operate the entire instrument at cold temperature, typically below 170
K, which will be important for cooling the large focal planes. The two
FPAs, together with their passive cooling structures are isostatically
mounted on the M1 baseplate. Also part of the payload are the de-scan
mirror mechanism for the NIR channel and the additional payload data
handling unit (PDHU).
\begin{figure}
\begin{center}
\includegraphics[width=0.75\textwidth,angle=0]{telescope.eps}
\caption{Overview of all payload elements. }
\label{fig:4.1}
\end{center}
\end{figure}
\subsection{Telescope}
The telescope is a Korsch-like $f/20$ three-mirror telescope. After
the first two mirrors, the optical bundle is folded just after passing
the primary mirror (M1) to reach the off-axis tertiary mirror. A
dichroic element located near the exit pupil of the system provides
the spectral separation of the visible and NIR channels. For the NIR,
the de-scan mechanism close to the dichroic filter allows for a
largely symmetric configuration of both spectral channels. The whole
instrument fits within a cylinder of 1.8m diameter and
2.65m length. The payload mass is approximately 500~kg, with 20\%
margin, and average/peak power estimates are 250/545~W.
Simulations have shown that the overall wavefront error (WFE) can be
contained within 50 nm r.m.s, compatible with the required
resolution. Distortion is limited to 3-4~$\mu$m, introducing a
0.15~$\mu$rad fixed (hence accessible to calibration) displacement in
the object space. The need to have a calibration of the PSF shape
error better than 0.1 \% over 50 arcmin$^2$ leads to a thermal
stability of $\sim 10$ mK over 375 s. Slow variations of solar
incidence angle on the sunshield for DUNE will not significantly
perturb the payload performance, even for angles as large as 30
degrees.
\subsection{Visible FPA}
The visible Focal Plane Array (VFP) consists of 36 large format
red-sensitive CCDs, arranged in a 9x4 array (Figure~\ref{fig:4.2})
together with the associated mechanical support structure and
electronics processing chains. Four additional CCDs dedicated to
the AOCS measurements are located at the edge of
the array. All CCDs are 4096$\times$4096 pixel red-enhanced e2v CCD203-82 devices
with square 12~$\mu$m pixels. The physical size of the array is
466$\times$233~mm, which corresponds to $1.09\deg \times 0.52 \deg$. Each pixel is
0.102 arcsec, so that the PSF is well sampled in each direction over
approximately 2.2 pixels, including all contributions. The VFP
operates in the red band from 550-920nm. This bandpass is produced by
the dichroic. The CCDs are 4-phase devices, so they can be clocked in
$1/4$ pixel steps. The exposure duration on each CCD is 375s,
permitting a slow readout rate and minimising readout noise. Combining
4 rows of CCDs will then provide a total exposure time of 1500s.
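These numbers can be cross-checked directly (a worked example added for clarity): a PSF of 0.23'' FWHM sampled with 0.102'' pixels gives $0.23/0.102\simeq 2.25$ pixels per FWHM, meeting the $>2.2$ pixels per FWHM requirement of Table \ref{tableC5}, while 4 rows of CCDs at 375s each indeed give $4\times375\,\mathrm{s}=1500$ s.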
\begin{figure}
\begin{center}
\includegraphics[width=0.9\textwidth,angle=0]{VisFPA.eps}
\caption{Left: The VFP assembly with the 9x4 array of CCDs
and the 4 AOCS sensors on the front (blue) and the warm electronics
radiator at the back (yellow). Right: An expanded view of the VFP
assembly, including the electronics modules and thermal hardware (but
excluding the CCD radiator). Inset: The e2v CCD203-82 4kx4k pixels
shown here in a metal pack with flexi-leads for electrical
connections. One of the flexi-leads will be removed. }
\label{fig:4.2}
\end{center}
\end{figure}
The VFP will be used by the spacecraft in a closed-loop system to
ensure that the scan rate and TDI clocking are synchronised. The two
pairs of AOCS CCDs provide two speed measurements on relatively bright
stars (V $\sim 22-23$). The DUNE VFP is largely a self-calibrating
instrument. For the shape measurements, stars of the appropriate
magnitude will allow the PSF to be monitored for each CCD including
the effects of optical distortion and detector alignment.
Radiation-induced charge transfer inefficiency will modify the PSF and
will also be self-calibrated in orbit.
\subsection{NIR FPA}
The NIR FPA consists of a 5$\times$12 mosaic of 60 Hawaii 2RG detector
arrays from Teledyne, NIR bandpass filters for the wavelength bands Y,
J, and H, the mechanical support structure, and the detector readout
and signal processing electronics (see Figure~\ref{fig:4.3}). The FPA
is operated at a maximum temperature of 140 K for low dark current of
0.02$e^-$/s. Each array has 2048$\times$2048 square pixels of 18~$\mu$m
size, resulting in a 0.15$\times$0.15 arcsec$^2$ field of view (FOV) per pixel.
The mosaic has a physical size of 482$\times$212~mm, and covers a
FOV of $1.04^\circ \times 0.44^\circ$ or 0.46 square degrees. The
HgCdTe Hawaii 2RG arrays are standard devices sensitive in the 0.8 to
1.7 $\mu$m wavelength range.
\begin{figure}
\begin{center}
\includegraphics[width=0.75\textwidth,angle=0]{NIRFPA.eps}
\caption{Layout of the NIR FPA (MPE/Kayser-Threde). The 5
x 12 Hawaii 2RG Teledyne detector arrays (shown in the inset) are
installed in a molybdenum structure}
\label{fig:4.3}
\end{center}
\end{figure}
As the spacecraft is scanning the sky, the image motion on the NIR FPA
is stabilised by a de-scanning mirror during the integration time of
300s or less per NIR detector. The total integration time of 1500 s
for the $0.4^\circ$-high field is split among five rows and 3
wavelength bands along the scan direction. The effective integration
times are 600 s in J and H, and 300 s in Y. For each array, the
readout control, A/D conversion of the video output, and transfer of
the digital data via a serial link is handled by the SIDECAR ASIC
developed for JWST. To achieve the limiting magnitudes defined by the
science requirements within these integration times, a minimum of 13
reads are required. Data are
processed in the dedicated unit located in the service module.
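The time allocation can be made explicit (our reading of these figures, assuming the five rows of 300 s each are split two to J, two to H and one to Y): $2\times300\,\mathrm{s}=600$ s in J and in H and $1\times300\,\mathrm{s}=300$ s in Y, which sums to the total of 1500 s quoted above.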
\section{Basic spacecraft key factors}
The spacecraft platform architecture is fully based on well-proven and
existing technologies. The mechanical, propulsion, and solar array
systems are reused from Venus Express (ESA) and Mars Express. All the
AOCS, $\mu$-propulsion, power control and DMS systems are reused from
GAIA. Finally, the science telemetry system is a direct reuse from the
PLEIADES (CNES) spacecraft. All TRLs are therefore high and all
technologies are either standard or being developed for GAIA (AOCS for
instance).
\subsection{Spacecraft architecture and configuration}
The spacecraft driving requirements are: (1) Passive cooling of both
visible and NIR focal planes below 170 K and 140 K, respectively; (2)
the PSF stability requirement, which translates to line of sight (LOS)
and payload thermal stability requirements; and (3) the high science
data rate. The spacecraft consists of a Payload Module (PLM) that
includes the instrument (telescope hardware, focal plane assemblies
and on board science data management) and a Service Module (SVM). The
SVM design is based on the Mars Express parallelepiped structure that
is 1.7 m $\times$ 1.7 m $\times$ 1.4 m, which accommodates all
subsystems (propulsion, AOCS, communication, power, sunshield, etc) as
well as the PLM.
The spacecraft platform and all its technologies are either
standard or already being developed within the GAIA programme (e.g. AOCS).
\subsection{Sunshield and Attitude Control}
The nominal scan strategy assumes a constant, normal ($0\deg$)
incidence of the sun on the sunshield, while allowing a sun incidence
angle of up to $30\deg$ to provide margin, flexibility for data
transmission manoeuvres and potential for further scan
optimisation. The sunshield is a ribbed, MLI-covered central frame
fixed to the platform. The satellite rotates in a drift
scan-and-sweep-back approach where the spacecraft is brought back to the
next scan position after each $20\deg$ strip. The scan rate is $1.12
\deg$ per hour, such that every day, one complete strip is scanned and
transmitted to ground.
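At this rate a simple estimate (ours, assuming uninterrupted scanning) shows that the daily timeline closes: a $20\deg$ strip takes $20\deg/(1.12\deg\,\mathrm{h}^{-1})\simeq 17.9$ h, leaving roughly 6 h per day for the sweep-back manoeuvre and the data transmission periods described below.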
Due to the observation strategy and the fixed high gain antenna (HGA),
the mission requires a high level of attitude manoeuvrability.
During data collection, the spacecraft is
rotated slowly about the sunshield axis. The slow scan control
requirements are equivalent to three-axis satellite control. The
line-of-sight stability requirement is 0.2 $\mu$rad over 375s (the
integration time for one CCD) and is driven by optical quality and PSF
smearing, and will be partially achieved using
a continuous PSF calibration using the stars located in the
neighborhood (50 arcmin$^2$) of each observed galaxy. Detailed
analyses show that DUNE high pointing performance is comparable in
difficulty to that achieved on GAIA during science
observations. Similarly to GAIA, two pairs of dedicated CCDs in the
visible focal plane are used for measuring the spacecraft attitude
speed vector. Hybridisation of the star tracker and payload
measurements is used to reduce the noise injected by the star tracker
in the loop. For all other operational phases and for the transition
from coarse manoeuvres to the science observation mode, the attitude
is controlled using the Mars Express propulsion system. The attitude
estimation is based on using two star trackers (also used in science
observing mode), six single-axis gyros and two sun sensors for
monitoring DUNE pointing during manoeuvres, with a typical accuracy
better than 1 arcmin.
\subsection{Functional architecture: propulsion and electrical systems}
The star signal collected in the instrument is spread on the focal
plane assembly and transformed into a digital electrical signal which
is transferred to the Payload Data Handling Unit (PDHU), based on
Thales Alenia Space heritage. Power management and regulation are
performed by the Power Conditioning \& Distribution Unit (PCDU), and
based on the GAIA program. Electrical power is generated by two solar
arrays (2.7 m$^2$ each), as used in the Mars Express and Venus Express
ESA spacecraft. The control of their orientation is based on the
orientation of the whole spacecraft
towards the Sun. The panels are populated with GaAs cells.
The RF architecture is divided into two parts: the TT\&C system
(S-Band) plus a dedicated payload telemetry system (X-Band in the EES
(Earth Exploration Service) band). The allocated bandwidth for payload
telemetry is 375 MHz and high rate transmitters already exist for
this purpose. The X-band 155 Mbits/s TMTHD modulator can be reused
from the Pleiades spacecraft. A single fixed HGA of 30 cm diameter can be
used (re-used from Venus Express). The RF power required is 25 W, which
also enables the re-use of the solid state power amplifier (SSPA) from
Pleiades. The transmitted science data volume is estimated at 1.5
Tbits per day. The baseline approach consists of
storing the science data on board in the PDHU and downlinking the
data twice per day. This can be achieved naturally twice per orbit at
06h and 18h local time and using the rotation degree of freedom about
the satellite-sun axis for orienting the antenna towards the ground
station. The total transmission duration is less than 3 hours. The
spacecraft attitude variation during transmission is less than 30 deg
(including AOCS margins). A 20~kg hydrazine propellant budget is
required. In case the operational orbit changes to HEO, a dual-frequency
(S-Band + X-Band) 35~m ESOC antenna could meet the
mission needs, with an increased HGA diameter (70~cm).
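These figures are mutually consistent (a check added here): downlinking $1.5\times10^{12}$ bits within the total transmission window of less than 3 h requires an average rate of at least $1.5\times10^{12}/(3\times3600)\simeq 1.4\times10^{8}$ bits/s, i.e. about 139 Mbits/s, within the capability of the 155 Mbits/s X-band modulator.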
The required power on the GEO orbit is 1055 W. The
sizing case is the science mode after eclipse with battery
charging. Electrical power is generated by the two solar arrays of 2.7
m$^2$ each. With a $30\deg$ solar angle,
the solar array can generate up to 1150 W at the end of its life. The battery
has been sized in a preliminary
approach for the eclipse case (64~Ah needed).
\section{Science Operations and Data Processing}
The DUNE operational scenario follows the lines of a survey-type
project. The satellite will operate autonomously except for defined
ground contact periods during which housekeeping and science telemetry
will be downlinked, and the commands needed to control spacecraft and
payload will be uploaded. The DUNE processing pipeline is inspired by
the Terapix pipeline used for the CFHT Legacy Survey. The total
amount of science image data expected from DUNE is $\sim 370$
Terapixels (TPx): 150~TPx from the wide survey, 120~TPx for 3 months of the
microlensing survey, 60~TPx for 3 months of the Galactic plane
survey, and 40~TPx for the 6-month deep survey. Based on previous
experience, we estimate an equal amount of calibration data (flat
fields, dark frames, etc.) will be taken over the course of the
mission. This corresponds to 740TB, roughly 20 times the amount of
science data for CFHT during 2003-2007.
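As a simple check of these figures (our arithmetic; the implied encoding of roughly one byte per processed pixel is our inference, not stated explicitly): $150+120+60+40=370$ TPx of science data, doubled to account for the equal amount of calibration data, gives 740 TPx, matching the quoted 740 TB.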
There are four main activities necessary for the data processing,
handling, and data organisation of the DUNE surveys:
\begin{enumerate}
\item software development: image and catalogue processing,
quality control, image and catalogue handling tools,
pipeline development, survey monitoring, data archiving and
distribution, numerical simulations, image simulations;
\item processing operation: running the pipeline, quality control and
quality assessment operation and versioning,
pipeline/software/database update and maintenance;
\item data archiving and data distribution: data and meta-data
products and product description, public user interface, external data
(non-DUNE) archiving and distribution, public outreach;
\item computing resources: data storage, cluster architecture,
GRID technology.
\end{enumerate}
\section{Conclusion: DUNE's Impact on Cosmology and Astrophysics}
ESA's Planck mission will bring unprecedented precision to the
measurement of the high redshift Universe. This will leave the dark
energy dominated low redshift Universe as the next frontier in high
precision cosmology. Constraints from the radiation perturbation in
the high redshift CMB, probed by Planck, combined with density
perturbations at low redshifts, probed by DUNE, will form a complete
set for testing all sectors of the cosmological model. In this
respect, a DUNE+Planck programme can be seen as the next significant
step in testing, and thus challenging, the standard model of
cosmology. Table \ref{tableC2} illustrates just how precise the
constraints on theory are expected to be: DUNE will offer high
potential for ground-breaking discoveries of new physics, from dark
energy to dark matter, initial conditions and the law of gravity. Our
understanding of the Universe will be fundamentally altered in a
post-DUNE era, with ESA's science programmes at the forefront of these
discoveries.
As described above, the science goals of DUNE go far beyond the
measurement of dark energy. It is a mission which:
(i) measures both effects of dark energy (i.e. the expansion history
of the Universe and the growth of structure) by using weak lensing as the
central probe; (ii) places this high precision measurement of dark
energy within a broader framework of high precision cosmology by
constraining all sectors of the standard cosmology model (dark matter,
initial conditions and Einstein gravity); (iii) through a collection
of unique legacy surveys is able to push the frontiers of the
understanding of galaxy
evolution and the physics of the local group; and finally (iv) is able
to obtain information on some of the lowest-mass extrasolar planets
known to astronomy, which could include mirror Earths.
DUNE has been selected jointly with SPACE (Cimatti et al. 2008) in
ESA's Cosmic Vision programme for an assessment phase which led to
the Euclid merged concept.
\begin{acknowledgements}
We thank CNES for support on an earlier version of the DUNE mission
and EADS/Astrium, Alcatel/Alenia Space, as well as Kayser/Threde for
their help in the preparation of the ESA proposal.
\end{acknowledgements}
\bibliographystyle{spbasic}
\section{Introduction}
Two-dimensional (2D) materials like graphene, silicene and germanene are semimetals with zero gap \cite{w11,cta09}, and their charge carriers are massless fermions\cite{nltzzq12}. Graphene has been studied extensively because of its superior mechanical, optical and electronic properties \cite{ajyr19, kjna19, lkk19, lmhlz19, m18, qxz18, rilts19, sjhs19, thxwl20,ytycqw19, zxldxl, pky17,geh17,z16, mwh18}. Various dopings have been applied to graphene for new applications, such as sulfur doping for micro-supercapacitors\cite{csqmj19}, nitrogen-doped graphene quantum dots for photovoltaics\cite{hgrpgc19}, silicon nanoparticles embedded in n-doped few-layered graphene for lithium-ion batteries\cite{lyzsgc} and germanium implanted into graphene for single-atom catalysis\cite{tmbfbk18}. Theoretical and experimental investigations of graphene-like structures such as silicene and germanene have been carried out extensively \cite{vpqafa,loekve,dsbf12,wqdzkd17,cxhzlt17,ctghhg17}. Silicene and germanene have been grown on Au(111)\cite{cstfmg17}, Ag(111)\cite{jlscl16} and Ir(111)\cite{mwzdwl13}, which encourages further study of these materials. Due to its buckled structure, silicene has physical properties different from those of graphene, such as a higher surface reactivity\cite{jlscl16} and a band gap tunable by an external electric field, which is highly favorable for nanoelectronic devices\cite{nltzzq12,dzf12}. However, the formation of imperfections during the synthesis of silicene is usually inevitable and influences the magnetic and electronic properties of the material\cite{lwtwjl15}. There are also studies of atoms such as lithium, aluminum and phosphorus doped into silicene to achieve a wide variety of electronic and optical properties\cite{mmt17,dcmj15}.
Recently, the simulation and fabrication of 2D silicon-carbon compounds known as siligraphene (Si$_m$C$_n$) have received increasing attention due to their extraordinary electronic and optical properties. For example, SiC$_2$ siligraphene, which has been experimentally synthesized\cite{lzlxpl15}, is a promising anchoring material for lithium-sulfur batteries\cite{dlghll15}, a promising metal-free catalyst for the oxygen reduction reaction\cite{dlghll15}, and a novel donor material in excitonic solar cells\cite{zzw13}. Also, graphitic siligraphene g-SiC$_3$ under strain can be classified into different electronic phases, such as a semimetal or a semiconductor: g-SiC$_3$ shows semimetallic behavior under compressive strain up to 8\%, but it becomes a semiconductor with a direct band gap (1.62 eV) at 9\% compressive strain and a semiconductor with an indirect band gap (1.43 eV) at 10\% compressive strain \cite{dlghll15}. Moreover, g-SiC$_5$ has semimetallic properties and can be used as a gas sensor for air pollutants\cite{dwzhl17}. Furthermore, SiC$_7$ siligraphene has promising photovoltaic applications \cite{heakba19} and can be used as a high-capacity hydrogen storage material\cite{nhla18}. It shows superior structural, dynamical and thermal stability compared to other types of siligraphene and is a novel donor material with extraordinary sunlight absorption\cite{dzfhll16}. The structural and electronic properties of silicene-like SiX and XSi$_3$ (X = B, C, N, Al, P) honeycomb lattices have been investigated\cite{dw13}. Also, the planarity and non-planarity of g-SiC$_n$ and g-Si$_n$C (n = 3, 5, and 7) structures have been studied\cite{tllpz19}.
The excellent properties of siligraphene\cite{dzfhll16} motivated us to study CSi$_7$ and GeSi$_7$, in order to find a new approach to controlling the buckling and band gap of silicene and to obtain new electronic and optical properties. Here we call CSi$_7$ carbosilicene and GeSi$_7$ germasilicene. We choose carbon and germanium atoms for CSi$_7$ and GeSi$_7$, respectively, because these atoms, like the silicon atom, have four valence electrons in their highest-energy orbitals. Using density functional theory, we show that both structures are stable but CSi$_7$ is more stable than GeSi$_7$. The carbon atom in CSi$_7$ decreases the buckling, while the germanium atom in GeSi$_7$ increases it. It is shown that CSi$_7$ is a semiconductor with a 0.24 eV indirect band gap\cite{plgkl20} but GeSi$_7$, similar to silicene, is a semimetal. Also, we investigate the effects of strain and show that for CSi$_7$ compressive strain increases the band gap and tensile strain decreases it. At sufficient tensile strain (>3.7\%), the band gap of CSi$_7$ becomes zero and thus the semiconducting properties of this material change to metallic ones. As a result, the band gap of CSi$_7$ can be tuned by strain and this material can be used in straintronic devices such as strain sensors and strain switches. Strain does not have any significant effect on GeSi$_7$. Instead, GeSi$_7$ has a high dielectric constant and can be used as a 2D material with a high dielectric constant in advanced capacitors. Finally, we investigate the optical properties of these materials and find that the light absorption of both CSi$_7$ and GeSi$_7$ is significantly greater than that of silicene. Because of their high absorption, CSi$_7$ and GeSi$_7$ can be considered good candidates for solar cell applications. It is worth mentioning that germasilicene, GeSi$_7$, is a new 2D material proposed and studied in this paper, while carbosilicene, CSi$_7$, has been proposed previously as a member of the siligraphene family, but only its band structure has been studied\cite{tllpz19,plgkl19,plgkl20}.
The rest of the paper is organized as follows. In Sec. II, the method of calculations is introduced, and the results and discussion are given in Sec. III. Section IV contains a summary and conclusion.
\section{Method of calculations}
Density functional theory (DFT) calculations are performed using projector-augmented wave pseudopotentials \cite{b94} as implemented in the Quantum-ESPRESSO code\cite{gc09}. To describe the exchange-correlation functional, the generalized gradient approximation (GGA) of Perdew-Burke-Ernzerhof (PBE) is used\cite{pbe96}. After optimization, the optimum value of the cutoff energy is found to be 80 Ry. Also, Brillouin-zone integrations are performed using the Monkhorst-Pack scheme\cite{mp76} with an optimized reciprocal-space mesh of $12\times12\times1$. First, the unit cells and atomic positions of both CSi$_7$ and GeSi$_7$ are optimized, and then their electronic properties are determined by calculating the density of states and the band structure. Moreover, their optical properties are determined by calculating the absorption coefficient and the real and imaginary parts of the dielectric function.
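To make this setup concrete, the following minimal sketch shows how a single-point calculation with the stated parameters (PBE, 80 Ry cutoff, $12\times12\times1$ mesh) could be driven through the ASE interface to Quantum-ESPRESSO. It is an illustration only, not the input actually used: the structure file and pseudopotential names are hypothetical placeholders.
\begin{verbatim}
# Minimal sketch (illustrative, not the authors' input) of the stated
# DFT setup via ASE's Quantum-ESPRESSO calculator.
from ase.io import read
from ase.calculators.espresso import Espresso

atoms = read('CSi7.cif')  # hypothetical relaxed 2x2 supercell file

calc = Espresso(
    pseudopotentials={'Si': 'Si.pbe.UPF', 'C': 'C.pbe.UPF'},  # placeholders
    input_data={
        'control': {'calculation': 'scf'},
        'system': {'ecutwfc': 80},   # plane-wave cutoff in Ry (from text)
    },
    kpts=(12, 12, 1),                # Monkhorst-Pack mesh (from text)
)
atoms.calc = calc
print('Total energy (eV):', atoms.get_potential_energy())
\end{verbatim}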
\section{Results and discussion}
\subsection{Structural properties}
\begin{figure}[ht!]
\centering
\includegraphics[width=0.98\linewidth,clip=true]{Fig1.eps}
\caption{(a) Top view of silicene and (b) Si$_8$ unit cells. (c) Side view of Si$_8$ unit cell.
}
\label{fig1}
\end{figure}
By doubling the silicene unit cell [see Fig.~\ref{fig1}(a)] in the x and y directions, the Si$_8$ cell is constructed [see Fig.~\ref{fig1}(b)] in a hexagonal lattice (i.e., $ \alpha=\beta=90^{\circ},\gamma=120^{\circ}$). Physically, silicene and Si$_8$ have the same properties, because tiling either unit cell yields the silicene monolayer. In this work, the Si$_8$ unit cell is considered because CSi$_7$ and GeSi$_7$ can be constructed from it by replacing one silicon atom with a carbon or a germanium atom. After relaxation, the bond length of Si$_8$ is $d=2.4 \;\AA$ [see Fig.~\ref{fig1}(a)], the lattice parameters are $|a|=|b|=7.56 \; \AA$ and $|c|=14.4 \;\AA$ [see Figs.~\ref{fig1}(b) and \ref{fig1}(c)], and the buckling parameter is $\Delta=0.44 \;\AA$ [see Fig.~\ref{fig1}(c)], in good agreement with previous works\cite{wwltxh,gzj12,zlyqzw16}. Here c is the interlayer distance, chosen large enough to ensure that there is no interaction between adjacent layers.
To construct the carbosilicene (CSi$_7$) unit cell, a silicon atom is replaced with a carbon atom as shown in Fig.~\ref{fig2}. Because of the structural symmetry of the CSi$_7$ monolayer (see Fig.~\ref{fig6}), the position of the impurity atom is not important, and our calculations indeed give the same ground-state energy for all eight possible impurity positions. After relaxation, the optimum lattice parameters are obtained as $|a|= |b|=7.49\; \AA$ and $|c|= 12.86 \; \AA$ for the CSi$_7$ unit cell. Fig.~\ref{fig2} shows this structure before and after relaxation. For a more detailed description, we have labeled the atoms in this figure. It is observed that the Si-C bond length (i.e., $d_{2-4}=1.896 \; \AA$) is shorter than the Si-Si bond lengths (i.e., $d_{1-2}=2.317,\; d_{1-3}=2.217 \; \AA$) because of sp$^2$ hybridization. Also, unlike in graphene, the hexagonal ring is not a regular hexagon, due to the electronegativity difference between C and Si atoms\cite{dzfhll16}.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.8\linewidth,clip=true]{Fig2.eps}
\caption{Top view of CSi$_7$ unit cell (a) before and (b) after relaxation. Carbon atom is shown by yellow sphere and silicon atoms by blue spheres.
}
\label{fig2}
\end{figure}
Fig.~\ref{fig3} shows the side view of the CSi$_7$ unit cell. After relaxation, the buckling parameter between atoms 1 and 3 ($\Delta_{1-3}$) is 0.1 $\AA$, whereas this parameter for atoms 2 and 4 ($\Delta_{2-4}$) is 0.39 $\AA$. So, CSi$_7$ has a structure with two different buckling parameters, and one can use carbon atoms to decrease the buckling parameter of silicene. Silicene has one buckling and two sublattices\cite{zyy15}, while carbosilicene has two bucklings and thus three sublattices, including one for the carbon atoms and two others for the silicon atoms.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.98\linewidth,clip=true]{Fig3.eps}
\caption{Side view of CSi$_7$ unit cell (a) before and (b) after relaxation.
}
\label{fig3}
\end{figure}
If we replace a silicon atom with a germanium atom as shown in Fig.~\ref{fig4}, we obtain the germasilicene (GeSi$_7$) structure. As seen in this figure, the optimized parameters are $|a|$=$|b|$=7.8$ \AA$ and $|c|$=11.98 $\AA$, and the Si-Ge bond length and lattice constants are greater than those of Si-Si. Also, comparing the bond lengths and lattice parameters of the GeSi$_7$ and CSi$_7$ structures, it is seen that those of GeSi$_7$ are significantly greater than those of CSi$_7$, which is due to the larger atomic number, and thus atomic radius, of germanium relative to carbon\cite{zyihm18}.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.8\linewidth,clip=true]{Fig4.eps}
\caption{ Top view of GeSi$_7$ unit cell (a) before and (b) after relaxation. Here germanium atom is shown by purple color.
}
\label{fig4}
\end{figure}
The buckling parameters of the germasilicene structure are depicted in Fig.~\ref{fig5}. After relaxation, we find that the values of these parameters are $\Delta_{2-4}=0.53\; \AA$ and $\Delta_{1-3}=0.43 \; \AA$. Therefore, GeSi$_7$, like CSi$_7$, has a structure with two different bucklings, and the germanium impurity atom increases the buckling of silicene. Bond lengths and other structural parameters after relaxation are listed in Table~\ref{tab1}.
\begin{table*}[t]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline
&$|a|=|b|$& $|c|$ &$d_{1-2}$ &$d_{2-4}$& $d_{1-3}$ &$\Delta_{2-4}$& $\Delta_{1-3}$& $\Delta_d$ \\
\hline
Si$_8$ & 7.65 & 14.4 & 2.4 & 2.4 & 2.4 & 0.44 & 0.44 & 0 \\
\hline
CSi$_7$ & 7.49 & 12.86 & 2.317 & 1.896 & 2.217 & 0.1 & 0.39 & 0.29\\
\hline
GeSi$_7$ & 7.8 & 11.98 & 2.287 & 2.34 & 2.297 & 0.53 & 0.43 & 0.1\\
\hline
\end{tabular}
\caption{Optimum lattice parameters $|a|$, $|b|$ and $|c|$, bond lengths $d_{1-2}$, $d_{2-4}$ and
$d_{1-3}$ and buckling parameters $\Delta_{2-4}$, $\Delta_{1-3}$ and $\Delta_d$. All values are in Angstrom.
}
\label{tab1}
\end{table*}
\begin{figure}[ht!]
\centering
\includegraphics[width=0.98\linewidth,clip=true]{Fig5.eps}
\caption{ Side view of GeSi$_7$ unit cell (a) before and (b) after relaxation
}
\label{fig5}
\end{figure}
We now introduce a new parameter for buckling as
\begin{equation}
\Delta_d=|\Delta_{2-4}-\Delta_{1-3}|
\label{eq1}
\end{equation}
which shows the difference between the two buckling parameters. The value of $\Delta_d$ for CSi$_7$ (i.e., 0.29 $\AA$) is greater than that for GeSi$_7$ (i.e., 0.062 $\AA$), which means the carbon impurity atom has a greater impact than germanium on the silicene buckling. This effect can be explained by the electronegativity difference\cite{drsbf13}. The electronegativity on the Pauling scale is 2.55 \cite{ipl09,zhhkl20}, 1.9 \cite{gperdd20} and 2.01 \cite{mgyz19} for carbon, silicon, and germanium, respectively. Therefore, the electronegativity difference is 0.65 for CSi$_7$ and 0.11 for GeSi$_7$, which shows that CSi$_7$ has the greater electronegativity difference; this leads to in-plane hybridized bonding and reduces the buckling in comparison to the other cases.
Fig.~\ref{fig6} shows the charge density of a monolayer of CSi$_7$ and GeSi$_7$. The charge density of a silicene monolayer is also shown in this figure for comparison [see Fig.~\ref{fig6}(a)]. The high charge density around the carbon and germanium impurity atoms [see Figs.~\ref{fig6}(b) and \ref{fig6}(c)] shows charge transfer from the silicon atoms to the impurity atoms. Also, the electron aggregation around the impurity atoms indicates ionic-covalent bonds in the CSi$_7$ and GeSi$_7$ structures because of the electronegativity difference.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.98\linewidth,clip=true]{Fig6.eps}
\caption{ Charge density of (a) silicene, (b) CSi$_7$ and (c) GeSi$_7$
}
\label{fig6}
\end{figure}
Now, we calculate the cohesive and formation energies for these structures. The cohesive energy is -4.81 eV/atom and -4.32 eV/atom for CSi$_7$ and GeSi$_7$, respectively. The negative value of the cohesive energy for CSi$_7$ and GeSi$_7$ means that these structures will not decompose into their constituent atoms. The more negative the cohesive energy, the more stable the structure, so CSi$_7$ is more stable than GeSi$_7$. Also, the calculated cohesive energy for silicene is -4.89 eV/atom, which is in good agreement with previous studies \cite{gperdd20,mgyz19} and shows that CSi$_7$ is a stable structure with a cohesive energy very close to that of silicene.
Our calculations show that the formation energies of the CSi$_7$ and GeSi$_7$ structures are +0.16 eV/atom and -0.005 eV/atom, respectively. So, the formation of CSi$_7$ (GeSi$_7$) from its constituents is endothermic (exothermic) because of the positive (negative) value of the formation energy. On the other hand, the positive formation energy for CSi$_7$ represents a high stability of this structure, while the negative or nearly zero value for GeSi$_7$ is attributed mostly to the high reactivity associated with silicene\cite{dw13}.
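For reference, the definitions assumed in such calculations can be written as (a clarifying note; the exact reference energies used by the authors are not specified in the text)
\[
E_{coh}=\frac{1}{8}\left( E_{tot}-E_{X}^{atom}-7E_{Si}^{atom}\right) ,\qquad
E_{form}=\frac{1}{8}\left( E_{tot}-\mu _{X}-7\mu _{Si}\right) ,
\]
where $X=$ C or Ge, $E_{tot}$ is the total energy of the eight-atom cell, $E^{atom}$ are the isolated-atom energies and $\mu $ are the chemical potentials of the elemental reference phases.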
\subsection{Electronic properties}
To investigate the electronic properties of CSi$_7$ and GeSi$_7$, we first compare the band structures of silicene, CSi$_7$ and GeSi$_7$ monolayers; the results are shown in Fig.~\ref{fig7}. As we can see in this figure, like graphene and silicene, GeSi$_7$ is a semimetal (or zero-gap semiconductor) with a Dirac cone at point $K$. This is because the $\pi$ and $\pi^*$ bands cross linearly at the Fermi energy $E_F$. These band structures indicate that the charge carriers in silicene and GeSi$_7$ behave like massless Dirac fermions\cite{zlyqzw16}. In contrast to GeSi$_7$, CSi$_7$ is a semiconductor with an indirect band gap. The value of its indirect band gap is 0.24 eV in the $K-\Gamma$ direction, which is significantly less than its direct band gap (i.e., 0.5 eV in the $K-K$ direction).
\begin{figure}[ht!]
\centering
\includegraphics[width=0.7\linewidth,clip=true]{Fig7.eps}
\caption{ Band structure of (a) silicene, (b) CSi$_7$ and (c) GeSi$_7$.
}
\label{fig7}
\end{figure}
For a better comparison, enlarged band structures of silicene, CSi$_7$ and GeSi$_7$ are shown in Fig.~\ref{fig8}. It is seen that, at point $K$, silicene and GeSi$_7$ have similar band structures with zero band gap, whereas CSi$_7$ has a band gap. In the Dirac cone of graphene and silicene, the $\pi$ and $\pi^*$ bands are made from the same atoms\cite{dw13}, but in GeSi$_7$ these bands are made from two different atoms. To determine the Fermi velocity, $v_F$, the bands of silicene and GeSi$_7$ are fitted linearly near the Fermi level using the equation $E_{k+K}=\gamma k$. The Fermi velocity is then given by $v_F=\gamma/ \hbar$. Our calculations show that $v_F$ is $5\times10^5$ m/s for silicene (in good agreement with previous works\cite{dw13,wd13}) and $4.8\times10^5$ m/s for GeSi$_7$. A comparison between the Fermi velocities in silicene and GeSi$_7$ indicates that the Ge atoms in GeSi$_7$ do not have a significant effect on the Fermi velocity. The total density of states (DOS) is also shown in Fig.~\ref{fig8}. It is observed that the total DOS is in good agreement with the band structure.
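As an illustration of this extraction (a sketch only; the band energies below are invented values chosen merely to reproduce the quoted order of magnitude, while the actual data come from the DFT band structure):
\begin{verbatim}
# Fermi velocity from a linear fit E(k) = gamma*k near the K point.
import numpy as np

HBAR = 6.582119569e-16   # hbar in eV*s
k = np.array([0.00, 0.01, 0.02, 0.03, 0.04])       # |k-K| in 1/Angstrom
E = np.array([0.000, 0.033, 0.066, 0.099, 0.132])  # E-E_F in eV (illustrative)

gamma = np.polyfit(k, E, 1)[0]   # slope in eV*Angstrom
v_f = gamma * 1e-10 / HBAR       # (eV*Angstrom)/(eV*s) -> m/s
print(f"v_F = {v_f:.2e} m/s")    # ~5.0e5 m/s, the scale quoted above
\end{verbatim}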
\begin{figure}[ht!]
\centering
\includegraphics[width=0.9\linewidth,clip=true]{Fig8.eps}
\caption{ Enlarged band structure and total DOS of silicene, CSi$_7$ and GeSi$_7$.
}
\label{fig8}
\end{figure}
We now investigate the effect of strain on the band structures of CSi$_7$ and GeSi$_7$; the results are shown in Fig.~\ref{fig9}. As we can see in Figs.~\ref{fig9}(a) and \ref{fig9}(b), compressive strain has important effects on the band structure of CSi$_7$ but no significant effect on GeSi$_7$ [compare these figures with Figs.~\ref{fig7}(b) and \ref{fig7}(c)]. In the presence of compressive strain, both the direct and indirect band gaps of CSi$_7$ increase, from 0.5 eV and 0.24 eV to 0.52 eV and 0.44 eV, respectively. But for GeSi$_7$, the zero band gap remains unchanged: compressive strain cannot open any gap. Fig.~\ref{fig9}(c) shows the direct and indirect band gap variations of CSi$_7$ versus both compressive and tensile strain. It is observed that both gaps increase with increasing compressive strain, while they decrease with increasing tensile strain. The variation of the band gaps versus strain S is nearly linear and can be fitted by $E_g=-0.017S+0.447$ for the direct band gap and $E_g=-0.059 S+0.227$ for the indirect one. With and without strain, the direct band gap is significantly larger than the indirect band gap, so it has no important effect on the electronic transport properties of CSi$_7$. In contrast to GeSi$_7$, strain is an important factor for tuning the band gap of CSi$_7$. For example, when the tensile strain increases above 3.7\%, the band gap of CSi$_7$ disappears and this 2D material becomes a metal [see Fig.~\ref{fig9}(c)]. This property of CSi$_7$ is important for straintronic devices such as strain switches and strain sensors.
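A useful consistency check (our arithmetic on the fit given above): setting $E_g=0$ in the indirect-gap fit gives
\[
S=\frac{0.227}{0.059}\simeq 3.8\%,
\]
in agreement with the $>3.7\%$ tensile-strain threshold for the semiconductor-to-metal transition quoted in the Introduction.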
\begin{figure}[ht!]
\centering
\includegraphics[width=.98\linewidth,clip=true]{Fig9.eps}
\caption{ Band structure of (a) CSi$_7$ and (b) GeSi$_7$ under compressive strain with value -3$\%$. (c) Energy gap variation of CSi$_7$ versus both compressive and tensile strains.
}
\label{fig9}
\end{figure}
\subsection{Optical properties}
The complex dielectric function $\epsilon=\epsilon_r+i\epsilon_i$ can be calculated for both polarizations of light: (i) parallel (x direction) and (ii) perpendicular (z direction), where $\epsilon_r$ is the real part and $\epsilon_i$ is the imaginary part of the dielectric function. This function is an important quantity for the calculation of the optical properties of materials. For instance, the real and imaginary parts of the refractive index (i.e., $n=n_r+in_i$) can be written as\cite{w}
\begin{equation}
n_r=\sqrt{\frac{(\epsilon_r^2+\epsilon_i^2)^{1/2}+\epsilon_r}{2}}
\label{eq2}
\end{equation}
and
\begin{equation}
n_i=\sqrt{\frac{(\epsilon_r^2+\epsilon_i^2)^{1/2}-\epsilon_r}{2}}
\label{eq3}
\end{equation}
respectively. The absorption coefficient $\alpha$ is given by\cite{w}
\begin{equation}
\alpha=\frac{2\omega n_i}{C}
\label{eq4}
\end{equation}
where C is the speed of light in vacuum. The real parts of the dielectric function of CSi$_7$, GeSi$_7$ and silicene are depicted in Fig.~\ref{fig10} for the x and z directions. This figure shows that $\epsilon _r$ is anisotropic, because the graphs of $\epsilon_r$ are not similar for the two directions. The root of the real part (where $\epsilon_r=0$) represents the plasma energy (frequency), which for these materials is located at $4.3\; eV \;(1.04\;PHz)$ for the x-direction. It can be seen from Figs.~\ref{fig10}(a) and \ref{fig10}(b) that the values of the static dielectric constant (the value of the real part of the dielectric function at zero frequency or zero energy) in the x-direction are 12.3 for silicene and CSi$_7$ and 30 for GeSi$_7$, and in the z-direction are 2.4, 2 and 2.9 for silicene, CSi$_7$ and GeSi$_7$, respectively. Thus, for both directions GeSi$_7$ has the largest static dielectric constant. Also, the static dielectric constant of GeSi$_7$ is significantly greater than that of graphene (1.25 for the z-direction and 7.6 for the x-direction\cite{rdj14}). According to the energy density equation of capacitors (i.e., $u=\epsilon\epsilon_0 E^2/2$), increasing the dielectric constant $\epsilon$ increases the energy density u. Here E is the electric field inside the capacitor. So, materials with a high dielectric constant have attracted a lot of attention because of their potential applications in transistor gates, non-volatile ferroelectric memories and integrated capacitors\cite{tic06}. Among 2D materials, graphene has been used for electrochemical capacitors\cite{cls13} and supercapacitors\cite{vrsgr08}. Since GeSi$_7$ has a high dielectric constant, it can be used as a 2D material with a high-performance dielectric in advanced capacitors.
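Numerically, the chain from Eq.~(\ref{eq2}) to Eq.~(\ref{eq4}) is straightforward; the short sketch below (function and array names are ours) evaluates it for tabulated $\epsilon_r(\omega)$ and $\epsilon_i(\omega)$:
\begin{verbatim}
# Refractive index and absorption from the complex dielectric
# function, following Eqs. (2)-(4) of the text.
import numpy as np

C = 2.99792458e8  # speed of light in vacuum, m/s

def optical_constants(omega, eps_r, eps_i):
    """omega in rad/s; eps_r and eps_i on the same frequency grid."""
    mod = np.sqrt(eps_r**2 + eps_i**2)
    n_r = np.sqrt((mod + eps_r) / 2.0)  # Eq. (2)
    n_i = np.sqrt((mod - eps_r) / 2.0)  # Eq. (3)
    alpha = 2.0 * omega * n_i / C       # Eq. (4), in 1/m
    return n_r, n_i, alpha
\end{verbatim}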
\begin{figure}[ht!]
\centering
\includegraphics[width=.7\linewidth,,clip=true]{Fig10.eps}
\caption{ Comparison of real part of dielectric function for CSi$_7$ and GeSi$_7$ (a) in x direction and (b) in z direction. The graphs of silicene are also shown in this figure for comparison.
}
\label{fig10}
\end{figure}
Fig.~\ref{fig11} shows the absorption coefficient $\alpha$ for CSi$_7$ and GeSi$_7$. The absorption coefficient for silicene, also shown in this figure for comparison, is in agreement with previous works\cite{hkb18,cjlmty16}. There are two peaks for CSi$_7$: one located at 1.18 eV (infrared region) and the other at 1.6 eV (visible region). The peak for silicene (at 1.83 eV) is located in the visible region (1.8-3.1 eV). So, the carbon atom enhances the absorption and shifts its edge from the visible region to the infrared region, because it breaks the symmetry of the silicene structure and opens a narrow band gap in the silicene band structure. For GeSi$_7$ there is an absorption peak in the visible region (at 2.16 eV). Also, the peak of GeSi$_7$ is higher than those of silicene and CSi$_7$. The sunlight spectrum includes different wavelengths, and the absorption of each part has its own applications. For example, ultraviolet-visible absorption spectrophotometry and its analysis are used in pharmaceutical analysis, clinical chemistry, environmental analysis and inorganic analysis\cite{rop88}. Also, the near-infrared ($\lambda$= 800 to 1100 nm or E = 1.55 eV to 1.13 eV) and infrared ($\lambda$ > 1100 nm or E < 1.13 eV) regions are used for solar cells\cite{wrmss,sgzlgp20}, latent fingerprint development\cite{bsckm19}, brain stimulation and imaging\cite{cwcgy20}, photothermal therapy\cite{hhwl19}, photocatalysis\cite{qzzwz10} and photobiomodulation\cite{whwlh17}.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.75\linewidth,clip=true]{Fig11.eps}
\caption{ Absorption coefficient for silicene, CSi$_7$ and GeSi$_7$.
}
\label{fig11}
\end{figure}
On the other hand, the sunlight radiation received by the Earth comprises 5\% ultraviolet, 45\% infrared and 50\% visible light \cite{hs11}. So, we investigate the area under the absorption curves of CSi$_7$ and GeSi$_7$ in the visible (from 1.8 to 3.1 eV), near-infrared (from 1.13 to 1.55 eV) and infrared (<1.13 eV) regions. Fig.~\ref{fig12} shows this area for silicene, CSi$_7$ and GeSi$_7$ in the infrared, near-infrared and visible spectral regions. As we can see in this figure, the absorption of CSi$_7$ in all three spectral regions, as well as its total absorption, is significantly greater than that of silicene. The absorption of GeSi$_7$ is greater than that of silicene in the infrared and visible regions and smaller in the near-infrared region, but the total absorption of GeSi$_7$ is significantly greater than that of silicene. For comparison, we also calculate the absorption in the infrared region for siligraphene SiC$_7$, a material studied recently\cite{dzfhll16}. The absorption of siligraphene in the infrared region is equal to 2.7, which shows that CSi$_7$, with an absorption of 8.78, and GeSi$_7$, with 6.31, have more than two times the absorption of siligraphene in the infrared region.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.9\linewidth,clip=true]{Fig12.eps}
\caption{ Areas under the absorption curve for silicene, CSi$_7$ and GeSi$_7$ in infrared, near infrared and visible spectrum regions.
}
\label{fig12}
\end{figure}
\section{Summary and conclusion}
We studied the structural, electronic and optical properties of the CSi$_7$ and GeSi$_7$ structures using density functional theory within the Quantum-ESPRESSO code. We showed that the carbon atom in CSi$_7$ decreases the buckling, whereas the germanium atom in GeSi$_7$ increases it, which promises a new way to control the buckling in silicene-like structures. Both structures are stable, but CSi$_7$ is more stable than GeSi$_7$. The band structure and DOS plots show that CSi$_7$ is a semiconductor with a 0.24 eV indirect band gap, while GeSi$_7$, similar to silicene, is a semimetal. Strain does not have any significant effect on GeSi$_7$, but for CSi$_7$ compressive strain increases the band gap and tensile strain decreases it. At sufficient tensile strain ($> 3.7 \%$), the band gap becomes zero and thus the semiconducting properties of CSi$_7$ change to metallic ones. As a result, the band gap of CSi$_7$ can be changed and controlled by strain, and this material can be used in straintronic devices such as strain sensors and strain switches. Furthermore, we investigated the optical properties of CSi$_7$ and GeSi$_7$, such as the static dielectric constant and the light absorption. GeSi$_7$ has a high dielectric constant relative to CSi$_7$, silicene and graphene and can be used as a 2D material with a high-performance dielectric in advanced capacitors. The light absorption of CSi$_7$ in the near-infrared, infrared and visible regions, as well as its total absorption, is significantly greater than that of silicene. The absorption of GeSi$_7$ is greater than that of silicene in the infrared and visible regions and smaller in the near-infrared region, but its total absorption is significantly greater than that of silicene. Because of the high absorption of CSi$_7$ and GeSi$_7$, these materials can be considered proper candidates for solar cell applications.
\section{Introduction}
De Sitter (dS) spacetime is among the most popular backgrounds in
gravitational physics. There are several reasons for this. First of all dS
spacetime is the maximally symmetric solution of Einstein's equation with a
positive cosmological constant. Due to the high symmetry numerous physical
problems are exactly solvable on this background. A better understanding of
physical effects in this background could serve as a handle to deal with
more complicated geometries. De Sitter spacetime plays an important role in
most inflationary models, where an approximately dS spacetime is employed to
solve a number of problems in standard cosmology \cite{Lind90}. More
recently astronomical observations of high redshift supernovae, galaxy
clusters and cosmic microwave background \cite{Ries07} indicate that at the
present epoch the universe is accelerating and can be well approximated by a
world with a positive cosmological constant. If the universe would
accelerate indefinitely, the standard cosmology would lead to an asymptotic
dS universe. In addition to the above, an interesting topic which has
received increasing attention is related to string-theoretical models of dS
spacetime and inflation. Recently a number of constructions of metastable dS
vacua within the framework of string theory are discussed (see, for
instance, \cite{Kach03,Silv07} and references therein).
There is no reason to believe that the version of dS spacetime which may
emerge from string theory will necessarily be the most familiar version
with symmetry group $O(1,4)$ and there are many different topological spaces
which can accept the dS metric locally. There are many reasons to expect
that in string theory the most natural topology for the universe is that of
a flat compact three-manifold \cite{McIn04}. In particular, in Ref. \cite%
{Lind04} it was argued that from an inflationary point of view universes
with compact spatial dimensions, under certain conditions, should be
considered a rule rather than an exception. The models of a compact universe
with nontrivial topology may play an important role by providing proper
initial conditions for inflation (for the cosmological consequences of the
nontrivial topology and observational bounds on the size of compactified
dimensions see, for example, \cite{Lach95}). The quantum creation of the
universe having toroidal spatial topology is discussed in \cite{Zeld84} and
in references \cite{Gonc85} within the framework of various supergravity
theories. The compactification of spatial dimensions leads to the
modification of the spectrum of vacuum fluctuations and, as a result, to
Casimir-type contributions to the vacuum expectation values of physical
observables (for the topological Casimir effect and its role in cosmology
see \cite{Most97,Bord01,Eliz06} and references therein). The effect of the
compactification of a single spatial dimension in dS spacetime (topology $%
\mathrm{R}^{D-1}\times \mathrm{S}^{1}$) on the properties of quantum vacuum
for a scalar field with general curvature coupling parameter and with
periodicity condition along the compactified dimension is investigated in
Ref. \cite{Saha07} (for quantum effects in braneworld models with dS spaces
see, for instance, \cite{dSbrane}).
In view of the above mentioned importance of toroidally compactified dS
spacetimes, in the present paper we consider a general class of
compactifications having the spatial topology $\mathrm{R}^{p}\times (\mathrm{%
S}^{1})^{q}$, $p+q=D$. This geometry can be used to describe two types of
models. For the first one $p=3$, $q\geqslant 1$,\ and which corresponds to
the universe with Kaluza-Klein type extra dimensions. As it will be shown in
the present work, the presence of extra dimensions generates an additional
gravitational source in the cosmological equations which is of barotropic
type at late stages of the cosmological evolution. For the second model $D=3$
and the results given below describe how the properties of the universe with
dS geometry are changed by one-loop quantum effects induced by the
compactness of spatial dimensions. In quantum field theory on curved
backgrounds among the important quantities describing the local properties
of a quantum field and quantum back-reaction effects are the expectation
values of the field square and the energy-momentum tensor for a given
quantum state. In particular, the vacuum expectation values of these
quantities are of special interest. In order to evaluate these expectation
values, we construct firstly the corresponding positive frequency Wightman
function. Applying the Abel-Plana summation formula to the mode-sum, we
present this function as the sum of the Wightman function for the topology $%
\mathrm{R}^{p+1}\times (\mathrm{S}^{1})^{q-1}$ plus an additional term
induced by the compactness of the $(p+1)$th dimension. The latter is finite
in the coincidence limit and can be directly used for the evaluation of the
corresponding parts in the expectation \ values of the field square and the
energy-momentum tensor. In this way the renormalization of these quantities
is reduced to the renormalization of the corresponding quantities in
uncompactified dS spacetime. Note that for a scalar field on the background
of dS spacetime the renormalized vacuum expectation values of the field
square and the energy-momentum tensor are investigated in Refs. \cite%
{Cand75,Dowk76,Bunc78} by using various regularization schemes (see also
\cite{Birr82}). The corresponding effects upon phase transitions in an
expanding universe are discussed in \cite{Vile82,Alle83}.
The paper is organized as follows. In the next section we consider the
positive frequency Wightman function for dS spacetime of topology $\mathrm{R}%
^{p}\times (\mathrm{S}^{1})^{q}$. In sections \ref{sec:vevPhi2} and \ref%
{sec:vevEMT2} we use the formula for the Wightman function for the
evaluation of the vacuum expectation values of the field square and the
energy-momentum tensor. The asymptotic behavior of these quantities is
investigated in the early and late stages of the cosmological evolution. The
case of a twisted scalar field with antiperiodic boundary conditions is
considered in section \ref{sec:Twisted}. The main results of the paper are
summarized in section \ref{sec:Conc}.
\section{Wightman function in de Sitter spacetime with toroidally
compactified dimensions}
\label{sec:WF}
We consider a free massive scalar field with curvature coupling parameter $%
\xi $\ on background of $(D+1)$-dimensional de Sitter spacetime ($\mathrm{dS}%
_{D+1}$) generated by a positive cosmological constant $\Lambda $. The field
equation has the form%
\begin{equation}
\left( \nabla _{l}\nabla ^{l}+m^{2}+\xi R\right) \varphi =0, \label{fieldeq}
\end{equation}%
where $R=2(D+1)\Lambda /(D-1)$ is the Ricci scalar for $\mathrm{dS}_{D+1}$
and $\xi $ is the curvature coupling parameter. The special cases $\xi =0$
and $\xi =\xi _{D}\equiv (D-1)/4D$ correspond to minimally and conformally
coupled fields respectively. The importance of these special cases is
related to that in the massless limit the corresponding fields mimic the
behavior of gravitons and photons. We write the line element for $\mathrm{dS}%
_{D+1}$ in planar (inflationary) coordinates most appropriate for
cosmological applications:%
\begin{equation}
ds^{2}=dt^{2}-e^{2t/\alpha }\sum_{i=1}^{D}(dz^{i})^{2}, \label{ds2deSit}
\end{equation}%
where the parameter $\alpha $ is related to the cosmological constant by the
formula%
\begin{equation}
\alpha ^{2}=\frac{D(D-1)}{2\Lambda }. \label{alfa}
\end{equation}%
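Note that these two relations combine into $R=D(D+1)/\alpha ^{2}$; for
orientation, in the physically most interesting case $D=3$ one has
\begin{equation}
\alpha =\sqrt{3/\Lambda },\quad R=4\Lambda =12/\alpha ^{2}.
\end{equation}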
Below, in addition to the synchronous time coordinate $t$ we will also use
the conformal time $\tau $ in terms of which the line element takes
conformally flat form:%
\begin{equation}
ds^{2}=(\alpha /\tau )^{2}[d\tau ^{2}-\sum_{i=1}^{D}(dz^{i})^{2}],\;\tau
=-\alpha e^{-t/\alpha },\;-\infty <\tau <0. \label{ds2Dd}
\end{equation}%
We assume that the spatial coordinates $z^{l}$, $l=p+1,\ldots ,D$, are
compactified to $\mathrm{S}^{1}$ of length $L_{l}$: $0\leqslant
z^{l}\leqslant L_{l}$, and for the other coordinates we have $-\infty
<z^{l}<+\infty $, $l=1,\ldots ,p$. Hence, we consider the spatial topology $%
\mathrm{R}^{p}\times (\mathrm{S}^{1})^{q}$, where $q=D-p$. For $p=0$, as a
special case we obtain the toroidally compactified dS spacetime discussed in
\cite{McIn04,Lind04,Zeld84}. The Casimir densities for a scalar field with
periodicity conditions in the case $q=1$ were discussed previously in Ref.
\cite{Saha07}.
In the discussion below we will denote the position vectors along the
uncompactified and compactified dimensions by $\mathbf{z}_{p}=(z^{1},\ldots
,z^{p})$ and $\mathbf{z}_{q}=(z^{p+1},\ldots ,z^{D})$. For a scalar field
with periodic boundary condition one has (no summation over $l$)%
\begin{equation}
\varphi (t,\mathbf{z}_{p},\mathbf{z}_{q}+L_{l}\mathbf{e}_{l})=\varphi (t,%
\mathbf{z}_{p},\mathbf{z}_{q}), \label{periodicBC}
\end{equation}%
where $l=p+1,\ldots ,D$ and $\mathbf{e}_{l}$ is the unit vector along the
direction of the coordinate $z^{l}$. In this paper we are interested in the
effects of non-trivial topology on the vacuum expectation values (VEVs) of
the field square and the energy-momentum tensor. These VEVs are obtained
from the corresponding positive frequency Wightman function $%
G_{p,q}^{+}(x,x^{\prime })$ in the coincidence limit of the arguments. The
Wightman function is also important in consideration of the response of
particle detectors at a given state of motion (see, for instance, \cite%
{Birr82}). Expanding the field operator over the complete set $\left\{
\varphi _{\sigma }(x),\varphi _{\sigma }^{\ast }(x)\right\} $ of positive
and negative frequency solutions to the classical field equation, satisfying
the periodicity conditions along the compactified dimensions, the positive
frequency Wightman function is presented as the mode-sum:
\begin{equation}
G_{p,q}^{+}(x,x^{\prime })=\langle 0|\varphi (x)\varphi (x^{\prime
})|0\rangle =\sum_{\sigma }\varphi _{\sigma }(x)\varphi _{\sigma }^{\ast
}(x^{\prime }), \label{Wigh1}
\end{equation}%
where the collective index $\sigma $ specifies the solutions.
Due to the symmetry of the problem under consideration the spatial
dependence of the eigenfunctions $\varphi _{\sigma }(x)$ can be taken in the
standard plane-wave form, $e^{i\mathbf{k}\cdot \mathbf{z}}$. Substituting
into the field equation, we obtain that the time dependent part of the
eigenfunctions is a linear combination of the functions $\tau ^{D/2}H_{\nu
}^{(l)}(|\mathbf{k|}\tau )$, $l=1,2$, where $H_{\nu }^{(l)}(x)$ is the
Hankel function and
\begin{equation}
\nu =\left[ D^{2}/4-D(D+1)\xi -m^{2}\alpha ^{2}\right] ^{1/2}. \label{knD}
\end{equation}%
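It is easily checked from (\ref{knD}) that for a minimally coupled massless
field $\nu =D/2$, while for a conformally coupled massless field
\begin{equation}
\nu ^{2}=\frac{D^{2}}{4}-\frac{(D-1)(D+1)}{4}=\frac{1}{4},
\end{equation}
so that $\nu =1/2$. For sufficiently large mass the parameter $\nu $ is
purely imaginary; for instance, in the conformally coupled case this happens
for $m\alpha >1/2$. As we will see below, in this regime the topological
parts of the vacuum expectation values exhibit oscillatory damping at late
times.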
Different choices of the coefficients in this linear combination correspond
to different choices of the vacuum state. We will consider de Sitter
invariant Bunch-Davies vacuum \cite{Bunc78} for which the coefficient for
the part containing the function $H_{\nu }^{(1)}(|\mathbf{k|}\tau )$ is
zero. The corresponding eigenfunctions satisfying the periodicity conditions
take the form
\begin{equation}
\varphi _{\sigma }(x)=C_{\sigma }\eta ^{D/2}H_{\nu }^{(1)}(k\eta )e^{i%
\mathbf{k}_{p}\cdot \mathbf{z}_{p}+i\mathbf{k}_{q}\cdot \mathbf{z}%
_{q}},\;\eta =\alpha e^{-t/\alpha }, \label{eigfuncD}
\end{equation}%
where we have decomposed the contributions from the uncompactified and
compactified dimensions with the notations%
\begin{eqnarray}
\mathbf{k}_{p} &=&(k_{1},\ldots ,k_{p}),\;\mathbf{k}_{q}=(k_{p+1},\ldots
,k_{D}),\;k=\sqrt{\mathbf{k}_{p}^{2}+\mathbf{k}_{q}^{2}},\; \notag \\
\;k_{l} &=&2\pi n_{l}/L_{l},\;n_{l}=0,\pm 1,\pm 2,\ldots ,\;l=p+1,\ldots ,D.
\label{kD1D2}
\end{eqnarray}%
Note that we have transformed the Hankel function so that its argument is
positive and, instead of the conformal time $\tau $, we have introduced the
variable $\eta $, which we will also call the conformal time. The
eigenfunctions are specified by the set $\sigma =(\mathbf{k}%
_{p},n_{p+1},\ldots ,n_{D})$ and the coefficient $C_{\sigma }$ is found from
the standard orthonormalization condition
\begin{equation}
-i\int d^{D}x\sqrt{|g|}g^{00}\varphi _{\sigma }(x)\overleftrightarrow{%
\partial }_{\tau }\varphi _{\sigma ^{\prime }}^{\ast }(x)=\delta _{\sigma
\sigma ^{\prime }}, \label{normcond}
\end{equation}%
where the integration goes over the spatial hypersurface $\tau =\mathrm{const%
}$, and $\delta _{\sigma \sigma ^{\prime }}$ is understood as the Kronecker
delta for the discrete indices and as the Dirac delta-function for the
continuous ones. By using the Wronskian relation for the Hankel functions
one finds%
\begin{equation}
C_{\sigma }^{2}=\frac{\alpha ^{1-D}e^{i(\nu -\nu ^{\ast })\pi /2}}{%
2^{p+2}\pi ^{p-1}L_{p+1}\cdots L_{D}}. \label{normCD}
\end{equation}
Having the complete set of eigenfunctions and using the mode-sum formula (%
\ref{Wigh1}), for the positive frequency Wightman function we obtain the
formula
\begin{eqnarray}
G_{p,q}^{+}(x,x^{\prime }) &=&\frac{\alpha ^{1-D}(\eta \eta ^{\prime
})^{D/2}e^{i(\nu -\nu ^{\ast })\pi /2}}{2^{p+2}\pi ^{p-1}L_{p+1}\cdots L_{D}}%
\int d\mathbf{k}_{p}\,e^{i\mathbf{k}_{p}\cdot \Delta \mathbf{z}_{p}} \notag
\\
&&\times \sum_{\mathbf{n}_{q}=-\infty }^{+\infty }e^{i\mathbf{k}_{q}\cdot
\Delta \mathbf{z}_{q}}H_{\nu }^{(1)}(k\eta )[H_{\nu }^{(1)}(k\eta ^{\prime
})]^{\ast }, \label{GxxD}
\end{eqnarray}%
with $\Delta \mathbf{z}_{p}=\mathbf{z}_{p}-\mathbf{z}_{p}^{\prime }$, $%
\Delta \mathbf{z}_{q}=\mathbf{z}_{q}-\mathbf{z}_{q}^{\prime }$, and%
\begin{equation}
\sum_{\mathbf{n}_{q}=-\infty }^{+\infty }=\sum_{n_{p+1}=-\infty }^{+\infty
}\ldots \sum_{n_{D}=-\infty }^{+\infty }. \label{nqsum}
\end{equation}%
As a next step, we apply to the series over $n_{p+1}$ in (\ref{GxxD}) the
Abel-Plana formula \cite{Most97,Saha07Gen}%
\begin{equation}
\sideset{}{'}{\sum}_{n=0}^{\infty }f(n)=\int_{0}^{\infty
}dx\,f(x)+i\int_{0}^{\infty }dx\,\frac{f(ix)-f(-ix)}{e^{2\pi x}-1},
\label{Abel}
\end{equation}%
where the prime means that the term $n=0$ should be halved.
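As an elementary check of this formula, for $f(x)=e^{-\beta x}$ with $\beta
>0$ the left-hand side is $1/2+(e^{\beta }-1)^{-1}$, while the right-hand
side gives
\begin{equation}
\int_{0}^{\infty }dx\,e^{-\beta x}+2\int_{0}^{\infty }dx\,\frac{\sin (\beta
x)}{e^{2\pi x}-1}=\frac{1}{\beta }+\left( \frac{1}{2}\coth \frac{\beta }{2}-%
\frac{1}{\beta }\right) =\frac{1}{2}+\frac{1}{e^{\beta }-1},
\end{equation}
in agreement. It can be seen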
that after the application of the Abel-Plana formula the term in the expression of the
Wightman function which corresponds to the first integral on the right of (%
\ref{Abel}) is the Wightman function for dS spacetime with the topology $%
\mathrm{R}^{p+1}\times (\mathrm{S}^{1})^{q-1}$, which, in the notations
given above, corresponds to the function $G_{p+1,q-1}^{+}(x,x^{\prime })$.
As a result one finds
\begin{equation}
G_{p,q}^{+}(x,x^{\prime })=G_{p+1,q-1}^{+}(x,x^{\prime })+\Delta
_{p+1}G_{p,q}^{+}(x,x^{\prime }). \label{G1decomp}
\end{equation}%
The second term on the right of this formula is induced by the compactness
of the $z^{p+1}$-direction and is given by the expression
\begin{eqnarray}
\Delta _{p+1}G_{p,q}^{+}(x,x^{\prime }) &=&\frac{2\alpha ^{1-D}(\eta \eta
^{\prime })^{D/2}}{(2\pi )^{p+1}V_{q-1}}\int d\mathbf{k}_{p}\,e^{i\mathbf{k}%
_{p}\cdot \Delta \mathbf{z}_{p}}\sum_{\mathbf{n}_{q-1}=-\infty }^{+\infty
}e^{i\mathbf{k}_{q-1}\cdot \Delta \mathbf{z}_{q-1}} \notag \\
&&\times \int_{0}^{\infty }dx\,\frac{x\cosh (\sqrt{x^{2}+\mathbf{k}%
_{p}^{2}+k_{\mathbf{n}_{q-1}}^{2}}\Delta z^{p+1})}{\sqrt{x^{2}+\mathbf{k}%
_{p}^{2}+k_{\mathbf{n}_{q-1}}^{2}}(e^{L_{p+1}\sqrt{x^{2}+\mathbf{k}%
_{p}^{2}+k_{\mathbf{n}_{q-1}}^{2}}}-1)} \notag \\
&&\times \left[ K_{\nu }(\eta x)I_{-\nu }(\eta ^{\prime }x)+I_{\nu }(\eta
x)K_{\nu }(\eta ^{\prime }x)\right] , \label{GxxD2}
\end{eqnarray}%
where $\mathbf{n}_{q-1}=(n_{p+2},\ldots ,n_{D})$, $I_{\nu }(x)$ and $K_{\nu
}(x)$ are the Bessel modified functions and the notation%
\begin{equation}
k_{\mathbf{n}_{q-1}}^{2}=\sum_{l=p+2}^{D}(2\pi n_{l}/L_{l})^{2}
\label{knD1+2}
\end{equation}%
is introduced. In formula (\ref{GxxD2}), $V_{q-1}=L_{p+2}\cdots L_{D}$ is
the volume of the $(q-1)$-dimensional compact subspace. Note that the
combination of the Bessel modified functions appearing in formula (\ref%
{GxxD2}) can also be written in the form%
\begin{eqnarray}
K_{\nu }(\eta x)I_{-\nu }(\eta ^{\prime }x)+I_{\nu }(\eta x)K_{\nu }(\eta
^{\prime }x) &=&\frac{2}{\pi }\sin (\nu \pi )K_{\nu }(\eta x)K_{\nu }(\eta
^{\prime }x) \notag \\
&&+I_{\nu }(\eta x)K_{\nu }(\eta ^{\prime }x)+K_{\nu }(\eta x)I_{\nu }(\eta
^{\prime }x), \label{eqformComb}
\end{eqnarray}%
which explicitly shows that this combination is symmetric under the
replacement $\eta \rightleftarrows \eta ^{\prime }$. In formula (\ref{GxxD2}%
) the integration with respect to the angular part of $\mathbf{k}_{p}$ can
be done by using the formula%
\begin{equation}
\int d\mathbf{k}_{p}\,e^{i\mathbf{k}_{p}\cdot \Delta \mathbf{z}_{p}}F(|%
\mathbf{k}_{p}|)=\frac{(2\pi )^{p/2}}{|\Delta \mathbf{z}_{p}|^{p/2-1}}%
\int_{0}^{\infty }d|\mathbf{k}_{p}|\,|\mathbf{k}_{p}|^{p/2}F(|\mathbf{k}%
_{p}|)J_{p/2-1}(|\mathbf{k}_{p}||\Delta \mathbf{z}_{p}|), \label{intang}
\end{equation}%
where $J_{\mu }(x)$ is the Bessel function.
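As a check of the normalization in (\ref{intang}), for $p=1$ one has $%
J_{-1/2}(y)=\sqrt{2/(\pi y)}\cos y$ and this formula reduces to the familiar
one-dimensional result
\begin{equation}
\int_{-\infty }^{+\infty }dk\,e^{ik\Delta z}F(|k|)=2\int_{0}^{\infty
}dk\,F(k)\cos (k\Delta z).
\end{equation}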
After the repeated application of formula (\ref{GxxD2}), the Wightman
function for dS spacetime with spatial topology $\mathrm{R}^{p}\times (%
\mathrm{S}^{1})^{q}$ is presented in the form%
\begin{equation}
G_{p,q}^{+}(x,x^{\prime })=G_{\mathrm{dS}}^{+}(x,x^{\prime })+\Delta
G_{p,q}^{+}(x,x^{\prime }), \label{GdSGcomp}
\end{equation}%
where $G_{\mathrm{dS}}^{+}(x,x^{\prime })\equiv G_{D,0}^{+}(x,x^{\prime })$
is the corresponding function for uncompactified dS spacetime and the part%
\begin{equation}
\Delta G_{p,q}^{+}(x,x^{\prime })=\sum_{l=1}^{q}\Delta
_{D-l+1}G_{D-l,l}^{+}(x,x^{\prime }), \label{DeltaGtop}
\end{equation}%
is induced by the toroidal compactification of the $q$-dimensional subspace.
The two-point function in uncompactified dS spacetime has been investigated in
\cite{Cand75,Dowk76,Bunc78,Bros96,Bous02} (see also \cite{Birr82}) and is
given by the formula%
\begin{equation}
G_{\mathrm{dS}}^{+}(x,x^{\prime })=\frac{\alpha ^{1-D}\Gamma (D/2+\nu
)\Gamma (D/2-\nu )}{2^{(D+3)/2}\pi ^{(D+1)/2}\left( u^{2}-1\right) ^{(D-1)/4}%
}P_{\nu -1/2}^{(1-D)/2}(u), \label{WFdS}
\end{equation}%
where $P_{\nu }^{\mu }(x)$ is the associated Legendre function of the first
kind and
\begin{equation}
u=-1+\frac{\sum_{l=1}^{D}(z^{l}-z^{\prime l})^{2}-(\eta -\eta ^{\prime })^{2}%
}{2\eta \eta ^{\prime }}. \label{u}
\end{equation}%
An alternative form is obtained by using the relation between the
associated Legendre function and the hypergeometric function.
\section{Vacuum expectation values of the field square}
\label{sec:vevPhi2}
We denote by $\langle \varphi ^{2}\rangle _{p,q}$ the VEV of the field
square in dS spacetime with spatial topology $\mathrm{R}^{p}\times (\mathrm{S%
}^{1})^{q}$. Having the Wightman function we can evaluate this VEV taking
the coincidence limit of the arguments. Of course, in this limit the
two-point functions are divergent and some renormalization procedure is
needed. The important point here is that the local geometry is not changed
by the toroidal compactification and the divergences are the same as in the
uncompactified dS spacetime. Since in our procedure we have already extracted
from the Wightman function the part $G_{\mathrm{dS}}^{+}(x,x^{\prime })$,
the renormalization of the VEVs is reduced to the renormalization of the
uncompactified dS part, which has already been done in the literature. The VEV of the
field square is presented in the decomposed form%
\begin{equation}
\langle \varphi ^{2}\rangle _{p,q}=\langle \varphi ^{2}\rangle _{\mathrm{dS}%
}+\langle \varphi ^{2}\rangle _{c},\;\langle \varphi ^{2}\rangle
_{c}=\sum_{l=1}^{q}\Delta _{D-l+1}\langle \varphi ^{2}\rangle _{D-l,l},
\label{phi2dSplComp}
\end{equation}%
where $\langle \varphi ^{2}\rangle _{\mathrm{dS}}$ is the VEV in
uncompactified $\mathrm{dS}_{D+1}$ and the part $\langle \varphi ^{2}\rangle
_{c}$ is due to the compactness of the $q$-dimensional subspace. Here the
term $\Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q}$ is defined by the
relation similar to (\ref{G1decomp}):
\begin{equation}
\langle \varphi ^{2}\rangle _{p,q}=\langle \varphi ^{2}\rangle
_{p+1,q-1}+\Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q}.
\label{phi2decomp}
\end{equation}%
This term is the part in the VEV induced by the compactness of the
$z^{p+1}$-direction. This part is directly obtained from (\ref{GxxD2}) in the
coincidence limit of the arguments:%
\begin{eqnarray}
\Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q} &=&\frac{2\alpha ^{1-D}\eta
^{D}}{2^{p}\pi ^{p/2+1}\Gamma (p/2)V_{q-1}}\sum_{\mathbf{n}_{q-1}=-\infty
}^{+\infty }\int_{0}^{\infty }d|\mathbf{k}_{p}|\,|\mathbf{k}_{p}|^{p-1}
\notag \\
&&\times \int_{0}^{\infty }dx\,\frac{xK_{\nu }(x\eta )\left[ I_{-\nu }(x\eta
)+I_{\nu }(x\eta )\right] }{\sqrt{x^{2}+\mathbf{k}_{p}^{2}+k_{\mathbf{n}%
_{q-1}}^{2}}(e^{L_{p+1}\sqrt{x^{2}+\mathbf{k}_{p}^{2}+k_{\mathbf{n}%
_{q-1}}^{2}}}-1)}. \label{phi2Dc}
\end{eqnarray}%
Introducing, instead of $|\mathbf{k}_{p}|$, a new integration variable $y=%
\sqrt{x^{2}+\mathbf{k}_{p}^{2}+k_{\mathbf{n}_{q-1}}^{2}}$ and expanding $%
(e^{L_{p+1}y}-1)^{-1}$, the integral over $y$ is explicitly evaluated and one finds%
\begin{eqnarray}
\Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q} &=&\frac{4\alpha ^{1-D}\eta
^{D}}{(2\pi )^{(p+3)/2}V_{q-1}}\sum_{n=1}^{\infty }\sum_{\mathbf{n}%
_{q-1}=-\infty }^{+\infty }\int_{0}^{\infty }dx\,xK_{\nu }(x\eta ) \notag \\
&&\times \frac{I_{-\nu }(x\eta )+I_{\nu }(x\eta )}{(nL_{p+1})^{p-1}}%
f_{(p-1)/2}(nL_{p+1}\sqrt{x^{2}+k_{\mathbf{n}_{q-1}}^{2}}), \label{DelPhi2}
\end{eqnarray}%
where we use the notation%
\begin{equation}
f_{\mu }(y)=y^{\mu }K_{\mu }(y). \label{fmunot}
\end{equation}%
By taking into account the relation between the conformal and synchronous
time coordinates, we see that the VEV of the field square is a function of
the combinations $L_{l}/\eta =L_{l}e^{t/\alpha }/\alpha $. In the limit when
the length of one of the compactified dimensions, say $z^{l}$, $%
l\geqslant p+2$, is large, $L_{l}\rightarrow \infty $, the main contribution
to the sum over $n_{l}$ in (\ref{DelPhi2}) comes from large values of $%
n_{l}$ and we can replace the summation by an integration in accordance
with the formula%
\begin{equation}
\frac{1}{L_{l}}\sum_{n_{l}=-\infty }^{+\infty }f(2\pi n_{l}/L_{l})=\frac{1}{%
\pi }\int_{0}^{\infty }dy\,f(y). \label{sumtoint}
\end{equation}%
The integral over $y$ is evaluated by using the formula from \cite{Prud86}
and we can see that from (\ref{DelPhi2}) the corresponding formula is
obtained for the topology $\mathrm{R}^{p+1}\times (\mathrm{S}^{1})^{q-1}$.
For a conformally coupled massless scalar field one has $\nu =1/2$ and $%
\left[ I_{-\nu }(x)+I_{\nu }(x)\right] K_{\nu }(x)=1/x$. In this case the
corresponding integral in formula (\ref{DelPhi2}) is explicitly evaluated
and we find%
\begin{equation}
\Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q}=\frac{2(\eta /\alpha )^{D-1}%
}{(2\pi )^{p/2+1}V_{q-1}}\sum_{n=1}^{\infty }\sum_{\mathbf{n}_{q-1}=-\infty
}^{+\infty }\frac{f_{p/2}(nL_{p+1}k_{\mathbf{n}_{q-1}})}{(L_{p+1}n)^{p}}%
,\;\xi =\xi _{D},\;m=0. \label{DelPhi2Conf}
\end{equation}%
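The identity $\left[ I_{-1/2}(x)+I_{1/2}(x)\right] K_{1/2}(x)=1/x$ used here
is verified directly from the expressions
\begin{equation}
I_{1/2}(x)=\sqrt{\frac{2}{\pi x}}\sinh x,\quad I_{-1/2}(x)=\sqrt{\frac{2}{%
\pi x}}\cosh x,\quad K_{1/2}(x)=\sqrt{\frac{\pi }{2x}}e^{-x},
\end{equation}
since $(\cosh x+\sinh x)e^{-x}=1$.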
In particular, the topological part is always positive. Formula (\ref%
{DelPhi2Conf}) could also be obtained from the corresponding result in $%
(D+1) $-dimensional Minkowski spacetime with spatial topology $\mathrm{R}%
^{p}\times (\mathrm{S}^{1})^{q}$, taking into account that two problems are
conformally related: $\Delta _{p+1}\langle \varphi ^{2}\rangle
_{p,q}=a^{1-D}(\eta )\Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q}^{%
\mathrm{(M)}}$, where $a(\eta )=\alpha /\eta $ is the scale factor. This
relation is valid for any conformally flat bulk. A similar formula holds
for the total topological part $\langle \varphi ^{2}\rangle _{c}$.
Note that in this case the expressions for $\Delta _{p+1}\langle \varphi
^{2}\rangle _{p,q}$ are obtained from the formulae for $\Delta _{p+1}\langle
\varphi ^{2}\rangle _{p,q}^{\mathrm{(M)}}$ replacing the lengths $L_{l}$ of
the compactified dimensions by the comoving lengths $\alpha L_{l}/\eta $, $%
l=p+1,\ldots ,D$.
Now we turn to the investigation of the topological part $\Delta
_{p+1}\langle \varphi ^{2}\rangle _{p,q}$ in the VEV of the field square in
the asymptotic regions of the ratio $L_{p+1}/\eta $. For small values of
this ratio, $L_{p+1}/\eta \ll 1$, we introduce a new integration variable $%
y=L_{p+1}x$. By taking into account that for large values of $x$ one has $\left[
I_{-\nu }(x)+I_{\nu }(x)\right] K_{\nu }(x)\approx 1/x$, we find that to the
leading order $\Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q}$ coincides
with the corresponding result for a conformally coupled massless field,
given by (\ref{DelPhi2Conf}):%
\begin{equation}
\Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q}\approx (\eta /\alpha
)^{D-1}\Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q}^{\mathrm{(M)}%
},\;L_{p+1}/\eta \ll 1. \label{DelPhi2Poq}
\end{equation}%
For a fixed value of the ratio $L_{p+1}/\alpha $, this limit corresponds to $%
t\rightarrow -\infty $ and the topological part $\langle \varphi ^{2}\rangle
_{c}$ behaves like $\exp [-(D-1)t/\alpha ]$. By taking into account that the
part $\langle \varphi ^{2}\rangle _{\mathrm{dS}}$ is time independent, from
here we conclude that in the early stages of the cosmological expansion the
topological part dominates in the VEV\ of the field square.
For small values of the ratio $\eta /L_{p+1}$, we introduce a new
integration variable $y=L_{p+1}x$ and expand the integrand by using the
formulae for the Bessel modified functions for small arguments. For real
values of the parameter $\nu $, after the integration over $y$ by using the
formula from \cite{Prud86}, to the leading order we find%
\begin{equation}
\Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q}\approx \frac{2^{(1-p)/2+\nu
}\eta ^{D-2\nu }\Gamma (\nu )}{\pi ^{(p+3)/2}V_{q-1}\alpha ^{D-1}}%
\sum_{n=1}^{\infty }\sum_{\mathbf{n}_{q-1}=-\infty }^{+\infty }\frac{%
f_{(p+1)/2-\nu }(nL_{p+1}k_{\mathbf{n}_{q-1}})}{(L_{p+1}n)^{p+1-2\nu }}%
,\;\eta /L_{p+1}\ll 1. \label{DelPhi2Mets}
\end{equation}%
In the case of a conformally coupled massless scalar field $\nu =1/2$ and
this formula reduces to the exact result given by Eq. (\ref{DelPhi2Conf}).
For fixed values of $L_{p+1}/\alpha $, the limit under consideration
corresponds to late stages of the cosmological evolution, $t\rightarrow
+\infty $, and the topological part $\langle \varphi ^{2}\rangle _{c}$ is
suppressed by the factor $\exp [-(D-2\nu )t/\alpha ]$. Hence, in this limit
the total VEV is dominated by the uncompactified dS part $\langle \varphi
^{2}\rangle _{\mathrm{dS}}$. Note that formula (\ref{DelPhi2Mets}) also describes the
asymptotic behavior of the topological part in the strong curvature regime
corresponding to small values of the parameter $\alpha $.
In the same limit, for purely imaginary values of the parameter $\nu $, in a
similar way we find the following asymptotic behavior
\begin{eqnarray}
\Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q} &\approx &\frac{4\alpha
^{1-D}\eta ^{D}}{(2\pi )^{(p+3)/2}V_{q-1}}\sum_{n=1}^{\infty }\sum_{\mathbf{n%
}_{q-1}=-\infty }^{+\infty }\frac{1}{(nL_{p+1})^{p+1}} \notag \\
&&\times {\mathrm{Re}}\left[ 2^{i|\nu |}\Gamma (i|\nu |)(nL_{p+1}/\eta
)^{2i|\nu |}f_{(p+1)/2-i|\nu |}(nL_{p+1}k_{\mathbf{n}_{q-1}})\right] .
\label{DelPhi2MetsIm}
\end{eqnarray}%
Defining the phase $\phi _{0}$ by the relation
\begin{equation}
Be^{i\phi _{0}}=2^{i|\nu |}\Gamma (i|\nu |)\sum_{n=1}^{\infty }\sum_{\mathbf{%
n}_{q-1}=-\infty }^{+\infty }n^{2i|\nu |-p-1}f_{(p+1)/2-i|\nu |}(nL_{p+1}k_{%
\mathbf{n}_{q-1}}), \label{Bphi0}
\end{equation}%
we write this formula in terms of the synchronous time:%
\begin{equation}
\Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q}\approx \frac{4\alpha
e^{-Dt/\alpha }B}{(2\pi )^{(p+3)/2}L_{p+1}^{p+1}V_{q-1}}\cos [2|\nu
|t/\alpha +2|\nu |\ln (L_{p+1}/\alpha )+\phi _{0}]. \label{DelPhi2MetsIm1}
\end{equation}%
Hence, in the case under consideration at late stages of the cosmological
evolution the topological part is suppressed by the factor $\exp (-Dt/\alpha
)$ and the damping of the corresponding VEV has an oscillatory nature.
\section{Vacuum energy-momentum tensor}
\label{sec:vevEMT2}
In this section we investigate the VEV for the energy-momentum tensor of a
scalar field in $\mathrm{dS}_{D+1}$ with toroidally compactified $q$%
-dimensional subspace. In addition to describing the physical structure of
the quantum field at a given point, this quantity acts as the source of
gravity in the semiclassical Einstein equations. It therefore plays an
important role in modelling self-consistent dynamics involving the
gravitational field. Having the Wightman function and the VEV of the field
square we can evaluate the vacuum energy-momentum tensor by using the formula%
\begin{equation}
\langle T_{ik}\rangle _{p,q}=\lim_{x^{\prime }\rightarrow x}\partial
_{i}\partial _{k}^{\prime }G_{p,q}^{+}(x,x^{\prime })+\left[ \left( \xi -%
\frac{1}{4}\right) g_{ik}\nabla _{l}\nabla ^{l}-\xi \nabla _{i}\nabla
_{k}-\xi R_{ik}\right] \langle \varphi ^{2}\rangle _{p,q}, \label{emtvev1}
\end{equation}%
where $R_{ik}=Dg_{ik}/\alpha ^{2}$ is the Ricci tensor for $\mathrm{dS}_{D+1}
$. Note that in (\ref{emtvev1}) we have used the expression for the
classical energy-momentum tensor which differs from the standard one by a
term which vanishes on the solutions of the field equation (see, for
instance, Ref. \cite{Saha04}). As in the case of the field square, the VEV
of the energy-momentum tensor is presented in the form%
\begin{equation}
\langle T_{i}^{k}\rangle _{p,q}=\langle T_{i}^{k}\rangle _{p+1,q-1}+\Delta
_{p+1}\langle T_{i}^{k}\rangle _{p,q}. \label{TikDecomp}
\end{equation}%
Here $\langle T_{i}^{k}\rangle _{p+1,q-1}$ is the part corresponding to dS
spacetime with $p+1$ uncompactified and $q-1$ toroidally compactified
dimensions and $\Delta _{p+1}\langle T_{i}^{k}\rangle _{p,q}$ is induced by
the compactness along the $z^{p+1}$-direction. The repeated application
of formula (\ref{TikDecomp}) allows us to write the VEV in the form%
\begin{equation}
\langle T_{i}^{k}\rangle _{p,q}=\langle T_{i}^{k}\rangle _{\mathrm{dS}%
}+\langle T_{i}^{k}\rangle _{c},\;\langle T_{i}^{k}\rangle
_{c}=\sum_{l=1}^{q}\Delta _{D-l+1}\langle T_{i}^{k}\rangle _{D-l,l},
\label{TikComp}
\end{equation}%
where the part corresponding to uncompactified dS spacetime, $\langle
T_{i}^{k}\rangle _{\mathrm{dS}}$, is explicitly decomposed. The part $%
\langle T_{i}^{k}\rangle _{c}$ is induced by the compactness of the $q$%
-dimensional subspace.
The second term on the right of formula (\ref{TikDecomp}) is obtained by
substituting the corresponding parts in the Wightman function, Eq. (\ref%
{GxxD2}), and in the field square, Eq. (\ref{DelPhi2}), into formula (\ref%
{emtvev1}). After lengthy calculations, for the energy density one finds%
\begin{eqnarray}
\Delta _{p+1}\langle T_{0}^{0}\rangle _{p,q} &=&\frac{2\alpha ^{-1-D}\eta
^{D}}{(2\pi )^{(p+3)/2}V_{q-1}}\sum_{n=1}^{\infty }\sum_{\mathbf{n}%
_{q-1}=-\infty }^{+\infty }\int_{0}^{\infty }dx \notag \\
&&\times \frac{xF^{(0)}(x\eta )}{(nL_{p+1})^{p-1}}f_{(p-1)/2}(nL_{p+1}\sqrt{%
x^{2}+k_{\mathbf{n}_{q-1}}^{2}}), \label{DelT00}
\end{eqnarray}%
with the notation%
\begin{eqnarray}
F^{(0)}(y) &=&y^{2}\left[ I_{-\nu }^{\prime }(y)+I_{\nu }^{\prime }(y)\right]
K_{\nu }^{\prime }(y)+D(1/2-2\xi )y\left[ (I_{-\nu }(y)+I_{\nu }(y))K_{\nu
}(y)\right] ^{\prime } \notag \\
&&+\left[ I_{-\nu }(y)+I_{\nu }(y)\right] K_{\nu }(y)\left( \nu
^{2}+2m^{2}\alpha ^{2}-y^{2}\right) , \label{F0}
\end{eqnarray}%
and the function $f_{\mu }(y)$ is defined by formula (\ref{fmunot}). The
vacuum stresses are presented in the form (no summation over $i$)%
\begin{eqnarray}
\Delta _{p+1}\langle T_{i}^{i}\rangle _{p,q} &=&A_{p,q}-\frac{4\alpha
^{-1-D}\eta ^{D+2}}{(2\pi )^{(p+3)/2}V_{q-1}}\sum_{n=1}^{\infty }\sum_{%
\mathbf{n}_{q-1}=-\infty }^{+\infty }\int_{0}^{\infty }dx\,xK_{\nu }(x\eta )
\notag \\
&&\times \frac{I_{-\nu }(x\eta )+I_{\nu }(x\eta )}{(nL_{p+1})^{p+1}}%
f_{p}^{(i)}(nL_{p+1}\sqrt{x^{2}+k_{\mathbf{n}_{q-1}}^{2}}), \label{DelTii}
\end{eqnarray}%
where we have introduced the notations%
\begin{eqnarray}
f_{p}^{(i)}(y) &=&f_{(p+1)/2}(y),\;i=1,\ldots ,p, \notag \\
f_{p}^{(p+1)}(y) &=&-y^{2}f_{(p-1)/2}(y)-pf_{(p+1)/2}(y), \label{fp+1} \\
f_{p}^{(i)}(y) &=&(nL_{p+1}k_{i})^{2}f_{(p-1)/2}(y),\;i=p+2,\ldots ,D.
\notag
\end{eqnarray}%
In formula (\ref{DelTii}) (no summation over $i$, $i=1,\ldots ,D$),
\begin{eqnarray}
A_{p,q} &=&\left[ \left( \xi -\frac{1}{4}\right) \nabla _{l}\nabla ^{l}-\xi
g^{ii}\nabla _{i}\nabla _{i}-\xi R_{i}^{i}\right] \Delta _{p+1}\langle
\varphi ^{2}\rangle _{p,q} \notag \\
&=&\frac{2\alpha ^{-1-D}\eta ^{D}}{(2\pi )^{(p+3)/2}V_{q-1}}%
\sum_{n=1}^{\infty }\sum_{\mathbf{n}_{q-1}=-\infty }^{+\infty
}\int_{0}^{\infty }dx\,\frac{xF(x\eta )}{(nL_{p+1})^{p-1}}%
f_{(p-1)/2}(nL_{p+1}\sqrt{x^{2}+k_{\mathbf{n}_{q-1}}^{2}}), \label{A}
\end{eqnarray}%
with the notation%
\begin{eqnarray}
F(y) &=&\left( 4\xi -1\right) y^{2}\left[ I_{-\nu }^{\prime }(y)+I_{\nu
}^{\prime }(y)\right] K_{\nu }^{\prime }(y)+\left[ 2(D+1)\xi -D/2\right] y(%
\left[ I_{-\nu }(y)+I_{\nu }(y)\right] K_{\nu }(y))^{\prime } \notag \\
&&+\left[ I_{-\nu }(y)+I_{\nu }(y)\right] K_{\nu }(y)\left[ \left( 4\xi
-1\right) \left( y^{2}+\nu ^{2}\right) \right] . \label{Fy}
\end{eqnarray}%
As is seen from the obtained formulae, the topological parts in the VEVs
are time-dependent and, hence, the local dS symmetry is broken by them.
As an additional check of our calculations it can be seen that the
topological terms satisfy the trace relation
\begin{equation}
\Delta _{p+1}\langle T_{i}^{i}\rangle _{p,q}=D(\xi -\xi _{D})\nabla
_{l}\nabla ^{l}\Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q}+m^{2}\Delta
_{p+1}\langle \varphi ^{2}\rangle _{p,q}. \label{tracerel}
\end{equation}%
In particular, from here it follows that the topological part in the VEV\ of
the energy-momentum tensor is traceless for a conformally coupled massless
scalar field. The trace anomaly is contained in the uncompactified dS part
only. We could expect this result, as the trace anomaly is determined by the
local geometry and the local geometry is not changed by the toroidal
compactification.
For a conformally coupled massless scalar field $\nu =1/2$ and, by using the
formulae for $I_{\pm 1/2}(x)$ and $K_{1/2}(x)$, after the integration over $%
x $ from formulae (\ref{DelT00}), (\ref{DelTii}) we find (no summation over $%
i$)%
\begin{equation}
\Delta _{p+1}\langle T_{i}^{i}\rangle _{p,q}=-\frac{2(\eta /\alpha )^{D+1}}{%
(2\pi )^{p/2+1}V_{q-1}}\sum_{n=1}^{\infty }\sum_{\mathbf{n}_{q-1}=-\infty
}^{+\infty }\frac{g_{p}^{(i)}(nL_{p+1}k_{\mathbf{n}_{q-1}})}{(nL_{p+1})^{p+2}%
}, \label{DelTConf}
\end{equation}%
with the notations%
\begin{eqnarray}
g_{p}^{(0)}(y) &=&g_{p}^{(i)}(y)=f_{p/2+1}(y),\;i=1,\ldots ,p, \notag \\
g_{p}^{(i)}(y) &=&(nL_{p+1}k_{i})^{2}f_{p/2}(y),\;i=p+2,\ldots ,D,
\label{gi} \\
g_{p}^{(p+1)}(y) &=&-(p+1)f_{p/2+1}(y)-y^{2}f_{p/2}(y). \notag
\end{eqnarray}%
As in the case of the field square, this formula can be directly obtained by
using the conformal relation between the problem under consideration and the
corresponding problem in $(D+1)$-dimensional Minkowski spacetime with the
spatial topology $\mathrm{R}^{p}\times (\mathrm{S}^{1})^{q}$. Note that in
this case the topological part in the energy density is always negative and
is equal to the vacuum stresses along the uncompactified dimensions. In
particular, for the case $D=3$, $p=0$ (topology $(\mathrm{S}^{1})^{3}$) and
for $L_{i}=L$, $i=1,2,3$, from formulae (\ref{TikComp}), (\ref{DelTConf})
for the topological part in the vacuum energy density we find $\langle
T_{0}^{0}\rangle _{c}=-0.8375(a(\eta )L)^{-4}$ (see, for example, Ref. \cite%
{Most97}).
The general formulae for the topological part in the VEV of the energy
density are simplified in the asymptotic regions of the parameters. For
small values of the ratio $L_{p+1}/\eta $ we can see that to the leading
order $\Delta _{p+1}\langle T_{i}^{k}\rangle _{p,q}$ coincides with the
corresponding result for a conformally coupled massless field (no summation
over $i$):%
\begin{equation}
\Delta _{p+1}\langle T_{i}^{i}\rangle _{p,q}\approx -\frac{2(\eta /\alpha
)^{D+1}}{(2\pi )^{p/2+1}V_{q-1}}\sum_{n=1}^{\infty }\sum_{\mathbf{n}%
_{q-1}=-\infty }^{+\infty }\frac{g_{p}^{(i)}(nL_{p+1}k_{\mathbf{n}_{q-1}})}{%
(nL_{p+1})^{p+2}},\;L/\eta \ll 1. \label{TiiSmall}
\end{equation}%
For fixed values of the ratio $L_{p+1}/\alpha $, this formula describes the
asymptotic behavior of the VEV at the early stages of the cosmological
evolution corresponding to $t\rightarrow -\infty $. In this limit the
topological part behaves as $\exp [-(D+1)t/\alpha ]$ and, hence, it
dominates the part corresponding to the uncompactified dS spacetime which is
time independent. In particular, the total energy density is negative.
In the opposite limit of small values for the ratio $\eta /L_{p+1}$ we
introduce in the formulae for the VEV of the energy-momentum tensor an
integration variable $y=L_{p+1}x$ and expand the integrands over $\eta
/L_{p+1}$. For real values of the parameter $\nu $, for the energy density
to the leading order we find%
\begin{eqnarray}
\Delta _{p+1}\langle T_{0}^{0}\rangle _{p,q} &\approx &\frac{2^{\nu }D\left[
D/2-\nu +2\xi \left( 2\nu -D-1\right) \right] }{(2\pi
)^{(p+3)/2}L_{p+1}^{1-q}V_{q-1}\alpha ^{D+1}}\Gamma (\nu ) \notag \\
&&\times \left( \frac{\eta }{L_{p+1}}\right) ^{D-2\nu }\sum_{n=1}^{\infty
}\sum_{\mathbf{n}_{q-1}=-\infty }^{+\infty }\frac{f_{(p+1)/2-\nu
}(nL_{p+1}k_{\mathbf{n}_{q-1}})}{n^{(p+1)/2-\nu }}. \label{T00smallEta}
\end{eqnarray}%
In particular, this energy density is positive for a minimally coupled
scalar field and for a conformally coupled massive scalar field. Note that
for a conformally coupled massless scalar the coefficient in (\ref%
{T00smallEta}) vanishes. For the vacuum stresses the second term on the
right of formula (\ref{DelTii}) is suppressed with respect to the first term
given by (\ref{A}) by the factor $(\eta /L_{p+1})^{2}$ for $i=1,\ldots ,p+1$%
, and by the factor $(\eta k_{i})^{2}$ for $i=p+2,\ldots ,D$. As a result,
to the leading order we have the relation (no summation over $i$)
\begin{equation}
\Delta _{p+1}\langle T_{i}^{i}\rangle _{p,q}\approx \frac{2\nu }{D}\Delta
_{p+1}\langle T_{0}^{0}\rangle _{p,q},\;\eta /L_{p+1}\ll 1,
\label{TiismallEta}
\end{equation}%
between the energy density and stresses, $i=1,\ldots ,D$. The coefficient in
this relation does not depend on $p$ and, hence, it holds for the
total topological part of the VEV as well. Hence, in the limit under
consideration the topological parts in the vacuum stresses are isotropic and
correspond to the gravitational source with barotropic equation of state.
Note that this limit corresponds to late times in terms of the synchronous time
coordinate $t$, $(\alpha /L_{p+1})e^{-t/\alpha }\ll 1$, and the topological
part in the VEV is suppressed by the factor $\exp [-(D-2\nu )t/\alpha ]$.
For a conformally coupled massless scalar field the coefficient of the
leading term vanishes and the topological parts are suppressed by the factor
$\exp [-(D+1)t/\alpha ]$. As the uncompactified dS part is constant, it
dominates the topological part at the late stages of the cosmological
evolution.
For small values of the ratio $\eta /L_{p+1}$ and for purely imaginary $\nu $%
, in a way similar to that used for the case of the field square we can
see that the energy density behaves like%
\begin{equation}
\Delta _{p+1}\langle T_{0}^{0}\rangle _{p,q}\approx \frac{4De^{-Dt/\alpha
}BB_{D}}{(2\pi )^{(p+3)/2}\alpha L_{p+1}^{p+1}V_{q-1}}\sin [2|\nu |t/\alpha
+2|\nu |\ln (L_{p+1}/\alpha )+\phi _{0}+\phi _{1}], \label{T00ImEta}
\end{equation}%
where the coefficient $B_{D}$ and the phase $\phi _{1}$ are defined by the
relation%
\begin{equation}
|\nu |(1/2-2\xi )+i\left[ D/4-(D+1)\xi \right] =B_{D}e^{i\phi _{1}}.
\label{DefBD}
\end{equation}%
In the same limit, the main contribution to the vacuum stresses comes from
the term $A_{p,q}$ in (\ref{A}) and one has (no summation over $i$)%
\begin{equation}
\Delta _{p+1}\langle T_{i}^{i}\rangle _{p,q}\approx \frac{8|\nu
|e^{-Dt/\alpha }BB_{D}}{(2\pi )^{(p+3)/2}\alpha L_{p+1}^{p+1}V_{q-1}}\cos
[2|\nu |t/\alpha +2|\nu |\ln (L_{p+1}/\alpha )+\phi _{0}+\phi _{1}].
\label{TiiImEta}
\end{equation}%
As we see, in the limit under consideration to the leading order the vacuum
stresses are isotropic.
\section{Twisted scalar field}
\label{sec:Twisted}
One of the characteristic features of field theory on backgrounds with
non-trivial topology is the appearance of topologically inequivalent field
configurations \cite{Isha78}. In this section we consider the case of a
twisted scalar field on background of dS spacetime with the spatial topology
$\mathrm{R}^{p}\times (\mathrm{S}^{1})^{q}$ assuming that the field obeys
the antiperiodicity condition (no summation over $l$)%
\begin{equation}
\varphi (t,\mathbf{z}_{p},\mathbf{z}_{q}+L_{l}\mathbf{e}_{l})=-\varphi (t,%
\mathbf{z}_{p},\mathbf{z}_{q}), \label{AntiPer}
\end{equation}%
where $\mathbf{e}_{l}$ is the unit vector along the direction of the
coordinate $z^{l}$, $l=p+1,\ldots ,D$. The corresponding Wightman function
and the VEVs of the field square and the energy-momentum tensor can be found
in a way similar to that for the field with periodicity conditions. The
eigenfunctions have the form given by (\ref{eigfuncD}), where now%
\begin{equation}
k_{l}=2\pi (n_{l}+1/2)/L_{l},\;n_{l}=0,\pm 1,\pm 2,\ldots ,\;l=p+1,\ldots ,D.
\label{nltwisted}
\end{equation}%
The positive frequency Wightman function is still given by formula (\ref%
{GxxD}). For the summation over $n_{p+1}$ we apply the Abel-Plana formula in
the form \cite{Most97,Saha07Gen}%
\begin{equation}
\sum_{n=0}^{\infty }f(n+1/2)=\int_{0}^{\infty }dx\,f(x)-i\int_{0}^{\infty
}dx\,\frac{f(ix)-f(-ix)}{e^{2\pi x}+1}. \label{abel2}
\end{equation}%
Similar to (\ref{GxxD2}), for the correction to the Wightman function due to
the compactness of the $(p+1)$th spatial direction this leads to the result
\begin{eqnarray}
\Delta _{p+1}G_{p,q}^{+}(x,x^{\prime }) &=&-\frac{2\alpha ^{1-D}(\eta \eta
^{\prime })^{D/2}}{(2\pi )^{p+1}V_{q-1}}\int d\mathbf{k}_{p}\,e^{i\mathbf{k}%
_{p}\cdot \Delta \mathbf{z}_{p}}\sum_{\mathbf{n}_{q-1}=-\infty }^{+\infty
}e^{i\mathbf{k}_{q-1}\cdot \Delta \mathbf{z}_{q-1}} \notag \\
&&\times \int_{0}^{\infty }dx\,\frac{x\cosh (\sqrt{x^{2}+\mathbf{k}%
_{p}^{2}+k_{\mathbf{n}_{q-1}}^{2}}\Delta z^{p+1})}{\sqrt{x^{2}+\mathbf{k}%
_{p}^{2}+k_{\mathbf{n}_{q-1}}^{2}}(e^{L_{p+1}\sqrt{x^{2}+\mathbf{k}%
_{p}^{2}+k_{\mathbf{n}_{q-1}}^{2}}}+1)} \notag \\
&&\times \left[ K_{\nu }(\eta x)I_{-\nu }(\eta ^{\prime }x)+I_{\nu }(\eta
x)K_{\nu }(\eta ^{\prime }x)\right] , \label{GxxD2tw}
\end{eqnarray}%
where now $\mathbf{k}_{q-1}=(\pi (2n_{p+2}+1)/L_{p+2},\ldots ,\pi
(2n_{D}+1)/L_{D})$, and
\begin{equation}
k_{\mathbf{n}_{q-1}}^{2}=\sum_{l=p+2}^{D}\left[ \pi (2n_{l}+1)/L_{l}\right]
^{2}. \label{knqtw}
\end{equation}%
Taking the coincidence limit of the arguments, for the VEV of the field
square we find
\begin{eqnarray}
\Delta _{p+1}\langle \varphi ^{2}\rangle _{p,q} &=&\frac{4\alpha ^{1-D}\eta
^{D}}{(2\pi )^{(p+3)/2}V_{q-1}}\sum_{n=1}^{\infty }(-1)^{n}\sum_{\mathbf{n}%
_{q-1}=-\infty }^{+\infty }\int_{0}^{\infty }dx\,xK_{\nu }(x\eta ) \notag \\
&&\times \frac{I_{-\nu }(x\eta )+I_{\nu }(x\eta )}{(nL_{p+1})^{p-1}}%
f_{(p-1)/2}(nL_{p+1}\sqrt{x^{2}+k_{\mathbf{n}_{q-1}}^{2}}),
\label{DelPhi2tw}
\end{eqnarray}%
with the notations being the same as in (\ref{DelPhi2}). Note that in this
formula we can put $\sum_{\mathbf{n}_{q-1}=-\infty }^{+\infty }=2^{q-1}\sum_{%
\mathbf{n}_{q-1}=0}^{+\infty }$. In particular, for the topology $\mathrm{R}%
^{D-1}\times \mathrm{S}^{1}$ with a single compactified dimension of
length $L_{D}=L$, considered in \cite{Saha07}, we have $\langle \varphi
^{2}\rangle _{c}=\Delta _{D}\langle \varphi ^{2}\rangle _{D-1,1}$ with the
topological part given by the formula%
\begin{eqnarray}
\langle \varphi ^{2}\rangle _{c} &=&\frac{4\alpha ^{1-D}}{(2\pi )^{D/2+1}}%
\sum_{n=1}^{\infty }(-1)^{n}\int_{0}^{\infty }dx\,x^{D-1} \notag \\
&&\times \left[ I_{-\nu }(x)+I_{\nu }(x)\right] K_{\nu }(x)\frac{%
K_{D/2-1}(nLx/\eta )}{(nLx/\eta )^{D/2-1}}. \label{phi2SingComp}
\end{eqnarray}%
In figure \ref{fig1} we have plotted the topological part in the VEV of the
field square in the case of a conformally coupled twisted massive scalar ($%
\xi =\xi _{D}$) for $D=3$ dS spacetime with spatial topologies $\mathrm{R}%
^{2}\times \mathrm{S}^{1}$ (left panel) and $(\mathrm{S}^{1})^{3}$ (right
panel) as a function of $L/\eta =Le^{t/\alpha }/\alpha $. In the second case
we have taken the lengths for all compactified dimensions being the same: $%
L_{1}=L_{2}=L_{3}\equiv L$. The numbers near the curves correspond to the
values of the parameter $m\alpha $. Note that we have presented conformally
non-trivial examples and the graphs are plotted by using the general formula
(\ref{DelPhi2tw}). For the case $m\alpha =1$ the parameter $\nu $ is purely
imaginary and in accordance with the asymptotic analysis given above the
behavior of the field square is oscillatory for large values of the ratio $%
L/\eta $. For the left panel in figure \ref{fig1} the first zero is for $%
L/\eta \approx 8.35$ and for the right panel $L/\eta \approx 9.57$.
\begin{figure}[tbph]
\begin{center}
\begin{tabular}{cc}
\epsfig{figure=sahfig1a.eps,width=7.cm,height=6cm} & \quad %
\epsfig{figure=sahfig1b.eps,width=7.cm,height=6cm}%
\end{tabular}%
\end{center}
\caption{The topological part in the VEV of the field square in the case of
a conformally coupled twisted massive scalar ($\protect\xi =\protect\xi _{D}$%
) for $D=3$ dS spacetime with spatial topologies $\mathrm{R}^{2}\times
\mathrm{S}^{1}$ (left panel) and $(\mathrm{S}^{1})^{3}$ (right panel) as a
function of $L/\protect\eta =Le^{t/\protect\alpha }/\protect\alpha $. In the
second case we have taken the lengths for all compactified dimensions being
the same: $L_{1}=L_{2}=L_{3}\equiv L$. The numbers near the curves
correspond to the values of the parameter $m\protect\alpha $. }
\label{fig1}
\end{figure}
In the case of a twisted scalar field the formulae for the VEV of the
energy-momentum tensor are obtained from formulae for the untwisted field
given in the previous section (formulae (\ref{DelT00}), (\ref{DelTii})) with
$k_{\mathbf{n}_{q-1}}^{2}$ from (\ref{knqtw}) and by making the replacement%
\begin{equation}
\sum_{n=1}^{\infty }\rightarrow \sum_{n=1}^{\infty }(-1)^{n},\
\label{SumRepl}
\end{equation}%
and $k_{i}=2\pi (n_{i}+1/2)/L_{i}$ in expression (\ref{fp+1}) for $%
f_{p}^{(i)}(y)$, $i=p+2,\ldots ,D$. In figure \ref{fig2} the topological part
in the VEV of the energy density is plotted versus $L/\eta $ for a
conformally coupled twisted massive scalar in $D=3$ dS spacetime with
spatial topologies $\mathrm{R}^{2}\times \mathrm{S}^{1}$ (left panel) and $(%
\mathrm{S}^{1})^{3}$ (right panel). In the latter case the lengths of
compactified dimensions are the same. As in figure \ref{fig1}, the numbers
near the curves are the values of the parameter $m\alpha $. For $m\alpha =1$
the behavior of the energy density for large values of $L/\eta $ corresponds to
damped oscillations. In the case $m\alpha =0.25$ (the parameter $\nu $ is
real) for the example on the left panel the topological part of the energy
density vanishes for $L/\eta \approx 9.2$, takes the minimum value $\langle
T_{0}^{0}\rangle _{c}\approx -3.1\cdot 10^{-6}/\alpha ^{4}$ for $L/\eta
\approx 12.9$ and then monotonically goes to zero. For the example on the
right panel with $m\alpha =0.25$ the energy density vanishes for $L/\eta
\approx 45$, takes the minimum value $\langle T_{0}^{0}\rangle _{c}\approx
-1.1\cdot 10^{-8}/\alpha ^{4}$ for $L/\eta \approx 64.4$ and then
monotonically goes to zero. For a conformally coupled massless scalar field
in the case of topology $(\mathrm{S}^{1})^{3}$ one has $\langle
T_{0}^{0}\rangle _{c}=0.1957(\eta /\alpha L)^{4}$. Note that in the case of
topology $\mathrm{R}^{D-1}\times \mathrm{S}^{1}$ for a conformally coupled
massless scalar we have the formulae (no summation over $l$)%
\begin{eqnarray}
\langle T_{l}^{l}\rangle _{c} &=&\frac{1-2^{-D}}{\pi ^{(D+1)/2}}\left( \frac{%
\eta }{\alpha L}\right) ^{D+1}\zeta _{\mathrm{R}}(D+1)\Gamma \left( \frac{D+1%
}{2}\right) , \label{TllConfTwS1} \\
\langle T_{D}^{D}\rangle _{c} &=&-D\langle T_{0}^{0}\rangle _{c},\;\xi =\xi
_{D},\;m=0, \label{T00ConfTwS1}
\end{eqnarray}%
where $l=0,1,\ldots ,D-1$, and $\zeta _{\mathrm{R}}(x)$ is the Riemann zeta
function. The corresponding energy density is positive.
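As an illustration, for $D=3$, by using $\zeta _{\mathrm{R}}(4)=\pi ^{4}/90$
and $\Gamma (2)=1$, formula (\ref{TllConfTwS1}) gives
\begin{equation}
\langle T_{0}^{0}\rangle _{c}=\frac{7\pi ^{2}}{720}\left( \frac{\eta }{%
\alpha L}\right) ^{4}\approx 0.096\left( \frac{\eta }{\alpha L}\right) ^{4},
\end{equation}
the conformal image of the well-known flat-space energy density $7\pi
^{2}/(720L^{4})$ for a twisted massless scalar on $\mathrm{R}^{3}\times
\mathrm{S}^{1}$.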
\begin{figure}[tbph]
\begin{center}
\begin{tabular}{cc}
\epsfig{figure=sahfig2a.eps,width=7.cm,height=6cm} & \quad %
\epsfig{figure=sahfig2b.eps,width=7.cm,height=6cm}%
\end{tabular}%
\end{center}
\caption{The same as in figure \protect\ref{fig1} for the topological part
of the energy density. }
\label{fig2}
\end{figure}
\section{Conclusion}
\label{sec:Conc}
In topologically non-trivial spaces the periodicity conditions imposed on
possible field configurations change the spectrum of the vacuum fluctuations
and lead to the Casimir-type contributions to the VEVs of physical
observables. Motivated by the fact that dS spacetime naturally arises in a
number of contexts, in the present paper we consider the quantum vacuum
effects for a massive scalar field with general curvature coupling in $(D+1)$%
-dimensional dS spacetime having the spatial topology $\mathrm{R}^{p}\times (%
\mathrm{S}^{1})^{q}$. Both cases of the periodicity and antiperiodicity
conditions along the compactified dimensions are discussed. As a first step
for the investigation of vacuum densities we evaluate the positive frequency
Wightman function. This function gives comprehensive insight into vacuum
fluctuations and determines the response of a particle detector of the
Unruh-DeWitt type. Applying the Abel-Plana formula to the corresponding
mode-sum, we have derived a recurrence relation which presents the Wightman
function for the $\mathrm{dS}_{D+1}$ with topology $\mathrm{R}^{p}\times (%
\mathrm{S}^{1})^{q}$ in the form of the sum of the Wightman function for the
topology $\mathrm{R}^{p+1}\times (\mathrm{S}^{1})^{q-1}$ and the additional
part $\Delta _{p+1}G_{p,q}^{+}$ induced by the compactness of the $(p+1)$th
spatial dimension. The latter is given by formula (\ref{GxxD2}) for a scalar
field with periodicity conditions and by formula (\ref{GxxD2tw}) for a
twisted scalar field. The repeated application of formula (\ref{G1decomp})
allows us to present the Wightman function as the sum of the uncompactified
dS and topological parts, formula (\ref{DeltaGtop}). As the toroidal
compactification does not change the local geometry, in this way the
renormalization of the bilinear field products in the coincidence limit is
reduced to that for uncompactified $\mathrm{dS}_{D+1}$.
Further, taking the coincidence limit in the formulae for the Wightman
function and its derivatives, we evaluate the VEVs of the field square and
the energy-momentum tensor. For a scalar field with periodicity conditions the
corresponding topological parts are given by formula (\ref{DelPhi2}) for the
field square and by formulae (\ref{DelT00}) and (\ref{DelTii}) for the
energy density and vacuum stresses respectively. The trace anomaly is
contained in the uncompactified dS part only and the topological part
satisfies the standard trace relation (\ref{tracerel}). In particular, this
part is traceless for a conformally coupled massless scalar. In this case
the problem under consideration is conformally related to the corresponding
problem in $(D+1)$-dimensional Minkowski spacetime with the spatial topology
$\mathrm{R}^{p}\times (\mathrm{S}^{1})^{q}$ and the topological parts in the
VEVs are related by the formulae $\langle \varphi ^{2}\rangle _{c}=(\eta
/\alpha )^{D-1}\langle \varphi ^{2}\rangle _{c}^{\mathrm{(M)}}$ and $\langle
T_{i}^{k}\rangle _{c}=(\eta /\alpha )^{D+1}\langle T_{i}^{k}\rangle _{c}^{%
\mathrm{(M)}}$. Note that for a conformally coupled massless scalar the
topological part in the energy density is always negative and is equal to
the vacuum stresses along the uncompactified dimensions.
For the general case of the curvature coupling, in the limit $L_{p+1}/\eta
\ll 1$ the leading terms in the asymptotic expansion of the VEVs coincide
with the corresponding expressions for a conformally coupled massless field.
In particular, this limit corresponds to the early stages of the
cosmological expansion, $t\rightarrow -\infty $, and the topological parts
behave like $e^{-(D-1)t/\alpha }$ for the field square and like $%
e^{-(D+1)t/\alpha }$ for the energy-momentum tensor. Taking into account
that the uncompactified dS part is time independent, from here we conclude
that in the early stages of the cosmological evolution the topological part
dominates in the VEVs. In the opposite asymptotic limit corresponding to $%
\eta /L_{p+1}\ll 1$, the behavior of the topological parts depends on the
value of the parameter $\nu $. For real values of this parameter the leading
terms in the corresponding asymptotic expansions are given by formulae (\ref%
{DelPhi2Mets}) and (\ref{T00smallEta}) for the field square and the
energy-momentum tensor respectively. The corresponding vacuum stresses are
isotropic and the topological part of the energy-momentum tensor corresponds
to the gravitational source of the barotropic type with the equation of
state parameter equal to $-2\nu /D$. In the limit under consideration the
topological part in the energy density is positive for a minimally coupled
scalar field and for a conformally coupled massive scalar field. In
particular, this limit corresponds to the late stages of the cosmological
evolution, $t\rightarrow +\infty $, and the topological parts of the VEVs
are suppressed by the factor $e^{-(D-2\nu )t/\alpha }$ for both the field
square and the energy-momentum tensor. For a conformally coupled massless
field the coefficient of the leading term in the asymptotic expansion
vanishes and the topological part is suppressed by the factor $%
e^{-(D+1)t/\alpha }$. In the limit $\eta /L_{p+1}\ll 1$ and for purely
imaginary values of the parameter $\nu $ the asymptotic behavior of the
topological parts in the VEVs of the field square and the energy-momentum
tensor is described by formulae (\ref{DelPhi2MetsIm1}), (\ref{T00ImEta}), (%
\ref{TiiImEta}). These formulae present the leading term in the asymptotic
expansion of the topological parts at late stages of the cosmological
evolution. In this limit the topological terms oscillate with the amplitude
going to zero as $e^{-Dt/\alpha }$ for $t\rightarrow +\infty $. The
phases of the oscillations for the energy density and vacuum stresses are
shifted by $\pi /2$.
In section \ref{sec:Twisted} we have considered the case of a scalar field
with antiperiodicity conditions along the compactified directions. The
Wightman function and the VEVs of the field square and the energy-momentum
tensor are evaluated in a way similar to that for the field with
periodicity conditions. The corresponding formulae are obtained from the
formulae for the untwisted field with $k_{\mathbf{n}_{q-1}}^{2}$ defined by
Eq. (\ref{knqtw}) and by making the replacement (\ref{SumRepl}). In this
case we have also presented the graphs of the topological parts in the VEVs
of the field square and the energy-momentum tensor for $\mathrm{dS}_{4}$
with the spatial topologies $\mathrm{R}^{2}\times \mathrm{S}^{1}$ and $(%
\mathrm{S}^{1})^{3}$.
\section*{Acknowledgments}
AAS would like to acknowledge the hospitality of the INFN Laboratori
Nazionali di Frascati, Frascati, Italy. The work of AAS was supported in
part by the Armenian Ministry of Education and Science Grant. The work of SB
has been supported in part by the European Community Human Potential Program
under contract MRTN-CT-2004-005104 "Constituents, fundamental forces and
symmetries of the Universe" and by INTAS under contract 05-7928.
\section{Introduction}
\label{sec:intro}
\indent Since the work of Arratia~\cite{arratia_1983} on annihilating random walks, it is known that, when starting with infinitely
many supporters of each opinion, the one-dimensional voter model fluctuates, i.e., the number of opinion changes at each vertex is
almost surely infinite.
In contrast, as a consequence of irreducibility, the process on finite connected graphs fixates to a configuration in which all
the vertices share the same opinion.
The objective of this paper is to study the dichotomy between fluctuation and fixation for a general class of opinion models
with confidence threshold.
The main novelty is to equip the set of opinions with the structure of a connected graph and use the induced graph distance to
define mathematically a level of disagreement among individuals.
Based on this modeling approach, some of the most popular models of opinion dynamics can be recovered by choosing the
structure of the opinion space suitably:
the constrained voter model, independently introduced in~\cite{itoh_etal_1998, vazquez_krapivsky_redner_2003}, is obtained by assuming
that the opinion space is a path, while the Axelrod model for the dissemination of cultures~\cite{axelrod_1997} and the discrete
Deffuant model~\cite{deffuant_al_2000} are closely related to our models when the opinion space is a Hamming graph
and a hypercube, respectively. \vspace*{8pt}
\noindent{\bf Model description} --
The class of models considered in this article are examples of interacting particle systems inspired from the voter
model~\cite{clifford_sudbury_1973, holley_liggett_1975} for the dynamics of opinions.
Individuals are located on the vertex set of a connected graph and characterized by their opinion, with the set of opinions being
identified with the vertex set of another connected graph.
The former graph represents the underlying spatial structure and is used to determine the interaction neighborhood of each individual.
The latter graph, that we call the opinion graph, represents the structure of the opinion space and is used to determine the distance
between two opinions and the level of disagreement between two individuals.
From now on, we call spatial distance the graph distance induced by the spatial structure and opinion distance the graph distance induced
by the opinion graph.
Individuals interact with each of their neighbors at rate one.
As the result of an interaction, an individual imitates her neighbor if and only if the distance between their opinions just before the
interaction does not exceed some confidence threshold~$\tau \in \mathbb{N}$.
More formally, we let
$$ \begin{array}{rclcl}
\mathscr G & := & (\mathscr V, \mathscr E) & = & \hbox{the {\bf spatial structure}} \vspace*{2pt} \\
\Gamma & := & (V, E) & = & \hbox{the {\bf opinion graph}} \end{array} $$
be two connected graphs, where~$\Gamma$ is also assumed to be finite.
Then, our opinion model is the continuous-time Markov chain whose state at time~$t$ is a spatial configuration
$$ \eta_t : \mathscr V \ \longrightarrow \ V \quad \hbox{where} \quad \eta_t (x) = \hbox{opinion at~$x \in \mathscr V$ at time~$t$} $$
and with transition rates at vertex~$x \in \mathscr V$ given by
\begin{equation}
\label{eq:rates}
\begin{array}{rrl}
c_{i \to j} (x, \eta) & := & \lim_{h \to 0} \,(1/h) \,P \,(\eta_{t + h} (x) = j \,| \,\eta_t = \eta \ \hbox{and} \ \eta (x) = i) \vspace*{4pt} \\
& = & \card \{y \in N_x : \eta (y) = j \} \ \mathbf{1} \{d (i, j) \leq \tau \} \quad \hbox{for all} \quad i, j \in V. \end{array}
\end{equation}
Here, the set~$N_x$ denotes the interaction neighborhood of vertex~$x$, i.e., all the vertices which are at spatial distance one
from~$x$, while~$d (i, j)$ refers to the opinion distance between~$i$ and~$j$, which is the length of the shortest path connecting
both opinions on the opinion graph.
Note that the classical voter model is simply obtained by assuming that the opinion graph consists of two vertices
connected by an edge and that the confidence threshold equals one.
The general class of opinion models described by the transition rates~\eqref{eq:rates} where the opinion space is represented by
a finite connected graph equipped with its graph distance has been recently introduced in~\cite{scarlatos_2013}. \vspace*{8pt}
\noindent{\bf Main results} --
The main question about the general model is whether the system fluctuates and clusters, leading ultimately the population to a global
consensus, or fixates in a highly fragmented configuration.
Recall that the process is said to
\begin{itemize}
\item {\bf fluctuate} when~$P \,(\eta_t (x) \ \hbox{changes infinitely often}) = 1$ for all~$x \in \mathscr V$, \vspace*{3pt}
\item {\bf fixate} when~$P \,(\eta_t (x) \ \hbox{changes a finite number of times}) = 1$ for all~$x \in \mathscr V$, \vspace*{3pt}
\item {\bf cluster} when~$P \,(\eta_t (x) = \eta_t (y)) \to 1$ as~$t \to \infty$ for all~$x, y \in \mathscr V$.
\end{itemize}
Note that whether the system fluctuates and clusters or fixates in a fragmented configuration is very sensitive to the initial configuration.
Also, throughout this paper, we assume that the process starts from a product measure with densities which are constant across space, i.e.,
$$ \rho_j \ := \ P \,(\eta_0 (x) = j) \quad \hbox{for all} \quad (x, j) \in \mathscr V \times V $$
only depends on opinion~$j$ but not on site~$x$.
To avoid trivialities, these densities are assumed to be positive.
Sometimes, we will make the stronger assumption that all the opinions are equally likely at time zero.
These two hypotheses correspond to the following two conditions:
\begin{align}
\label{eq:product} \rho_j \ > \ 0 \hspace*{15pt} \quad \hbox{for all} \quad j \in V \vspace*{3pt} \\
\label{eq:uniform} \rho_j \ = \ F^{-1} \quad \hbox{for all} \quad j \in V
\end{align}
where~$F := \card V$ refers to the total number of opinions.
Key quantities to understand the long-term behavior of the system are the radius and the diameter of the opinion graph defined respectively
as the minimum and maximum eccentricity of any vertex:
$$ \begin{array}{rclcl}
\mathbf{r} & := & \min_{i \in V} \ \max_{j \in V} \ d (i, j) & = & \hbox{the {\bf radius} of the graph~$\Gamma$} \vspace*{3pt} \\
\mathbf{d} & := & \max_{i \in V} \ \max_{j \in V} \ d (i, j) & = & \hbox{the {\bf diameter} of the graph~$\Gamma$}. \end{array} $$
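For instance, a direct computation from these definitions for two of the opinion graphs mentioned above gives
$$ \begin{array}{lcl}
\hbox{path with~$F$ vertices} & : & \mathbf{d} \ = \ F - 1 \quad \hbox{and} \quad \mathbf{r} \ = \ \ceil{(F - 1)/2} \vspace*{3pt} \\
\hbox{hypercube~$\{0, 1 \}^n$} & : & \mathbf{d} \ = \ \mathbf{r} \ = \ n \end{array} $$
so the condition~$\mathbf{r} \leq \tau$ appearing below is much weaker than~$\mathbf{d} \leq \tau$ for long paths but not for hypercubes.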
To state our first theorem, we also introduce the subset
\begin{equation}
\label{eq:center}
C (\Gamma, \tau) \ := \ \{i \in V : d (i, j) \leq \tau \ \hbox{for all} \ j \in V \}
\end{equation}
that we shall call the~{\bf $\tau$-center} of the opinion graph.
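For instance, for the path with five vertices and~$\tau = 2 = \mathbf{r}$, the~$\tau$-center reduces to the middle vertex, whereas it is empty when~$\tau = 1 < \mathbf{r}$.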
The next result states that, whenever the confidence threshold is at least equal to the radius of the opinion graph, the infinite one-dimensional
system fluctuates and clusters, while the probability that the finite system ultimately reaches a consensus, i.e., fixates in a configuration where
all the individuals share the same opinion, is bounded from below by a positive constant that does not depend on the size of the spatial structure.
Here, infinite one-dimensional means that the spatial structure is the graph with vertex set~$\mathbb{Z}$ and where each vertex is connected to its
two nearest neighbors.
\begin{theorem} --
\label{th:fluctuation}
Assume~\eqref{eq:product}. Then,
\begin{enumerate}
\item[a.] the process on~$\mathbb{Z}$ fluctuates whenever
\begin{equation}
\label{eq:fluctuation}
d (i, j) \leq \tau \quad \hbox{for all} \quad (i, j) \in V_1 \times V_2 \quad \hbox{for some $V$-partition~$\{V_1, V_2 \}$}.
\end{equation}
\end{enumerate}
Assume in addition that~$\mathbf{r} \leq \tau$. Then,
\begin{enumerate}
\item[b.] the process on~$\mathbb{Z}$ clusters and \vspace*{3pt}
\item[c.] the probability of consensus on any finite connected graph satisfies
$$ \begin{array}{l} P \,(\eta_t \equiv \hbox{constant for some} \ t > 0) \ \geq \ \rho_{\cent} := \sum_{j \in C (\Gamma, \tau)} \,\rho_j \ > \ 0. \end{array} $$
\end{enumerate}
\end{theorem}
We will show that the~$\tau$-center is nonempty if and only if the threshold is at least equal to the radius so the
probability of consensus in the last part is indeed positive.
In fact, when the threshold is at least equal to the radius but less than the diameter, both the~$\tau$-center and its complement are nonempty,
and therefore form a partition that satisfies~\eqref{eq:fluctuation}; when the threshold is at least equal to the diameter, all three conclusions
of the theorem turn out to be trivial.
In particular, fluctuation also holds whenever the radius is not more than the threshold.
We also point out that the last part of the theorem implies that the average domain length in the final absorbing state scales like the population
size, namely~$\card \mathscr V$.
This result applies in particular to the constrained voter model where the opinion graph is a path with three vertices interpreted as leftists,
centrists and rightists, thus contradicting the conjecture on domain length scaling in~\cite{vazquez_krapivsky_redner_2003}.
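For instance, in the constrained voter model with~$\tau = 1$, the~$\tau$-center reduces to the centrist opinion, so part c bounds the probability of consensus from below by the initial density of centrists.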
\indent We now seek sufficient conditions for fixation of the infinite one-dimensional system, beginning with general opinion graphs.
At least for the process starting from the uniform product measure, these conditions can be expressed using
$$ N (\Gamma, s) \ := \ \card \{(i, j) \in V \times V : d (i, j) = s \} \quad \hbox{for} \quad s = 1, 2, \ldots, \mathbf{d}, $$
which is the number of pairs of opinions at opinion distance~$s$ of each other.
In the statement of the next theorem, the function~$\ceil{\,\cdot \,}$ refers to the ceiling function.
\begin{theorem} --
\label{th:fixation}
For the opinion model on~$\mathbb{Z}$, fixation occurs
\begin{enumerate}
\item[a.] when~\eqref{eq:uniform} holds and
\begin{equation}
\label{eq:th-fixation}
\begin{array}{l} S (\Gamma, \tau) \ := \ \sum_{k > 0} \,((k - 2) \,\sum_{s : \ceil{s / \tau} = k} \,N (\Gamma, s)) \ > \ 0, \end{array}
\end{equation}
\item[b.] for some initial distributions~\eqref{eq:product} when~$\mathbf{d} > 2 \tau$.
\end{enumerate}
\end{theorem}
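To illustrate the first condition, consider the path with~$F$ vertices and confidence threshold~$\tau = 1$, for which~$N (\Gamma, s) = 2 \,(F - s)$ for~$s = 1, 2, \ldots, F - 1$.
In this case, a direct computation gives
$$ \begin{array}{l} S (\Gamma, 1) \ = \ \sum_{0 < s < F} \,(s - 2) \ 2 \,(F - s) \ = \ (1/3) \,F \,(F - 1)(F - 5), \end{array} $$
which is positive if and only if~$F \geq 6$, in agreement with Corollary~\ref{cor:path} below.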
Combining Theorems~\ref{th:fluctuation}.a and~\ref{th:fixation}.b shows that these two results are sharp when~$\mathbf{d} = 2 \mathbf{r}$,
which holds for opinion graphs such as paths and stars:
for such graphs, the one-dimensional system fluctuates starting from any initial distribution~\eqref{eq:product}
if and only if~$\mathbf{r} \leq \tau$.
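For concrete opinion graphs, condition~\eqref{eq:th-fixation} is straightforward to evaluate numerically.
The following minimal Python sketch, our own illustration in which the representation of~$\Gamma$ as an adjacency list and all names are ours, computes~$S (\Gamma, \tau)$ via breadth-first search:
\begin{verbatim}
# Minimal sketch: evaluate S(Gamma, tau) from (eq:th-fixation) for a finite
# opinion graph given as an adjacency list {vertex: list of neighbors}.
from collections import deque
from math import ceil

def distances(adj, i):
    # graph distances from vertex i, computed by breadth-first search
    dist = {i: 0}
    queue = deque([i])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def S(adj, tau):
    # sum over ordered pairs (i, j), i != j, of (ceil(d(i,j)/tau) - 2)
    return sum(ceil(d / tau) - 2
               for i in adj
               for j, d in distances(adj, i).items() if j != i)

def path(F):
    return {x: [y for y in (x - 1, x + 1) if 0 <= y < F] for x in range(F)}

print(S(path(6), 1))  # 10 > 0: the condition for fixation holds for F = 6
print(S(path(5), 1))  # 0: the condition fails for F = 5
\end{verbatim}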
\indent Our last theorem, which is also the most challenging result of this paper, gives a significant improvement of the previous condition for fixation for {\bf distance-regular} opinion graphs.
This class of graphs is defined mathematically as follows: let
$$ \Gamma_s (i) \ := \ \{j \in V : d (i, j) = s \} \quad \hbox{for} \quad s = 0, 1, \ldots, \mathbf{d} $$
be the {\bf distance partition} of the vertex set~$V$ for some~$i \in V$.
Then, the opinion graph is said to be a distance-regular graph when the so-called {\bf intersection numbers}
\begin{equation}
\label{eq:dist-reg-1}
\begin{array}{rrl}
N (\Gamma, (i_-, s_-), (i_+, s_+)) & := & \card (\Gamma_{s_-} (i_-) \cap \Gamma_{s_+} (i_+)) \vspace*{3pt} \\
& = & \card \{j \in V : d (i_-, j) = s_- \ \hbox{and} \ d (i_+, j) = s_+ \} \vspace*{3pt} \\
& = & f (s_-, s_+, d (i_-, i_+)) \end{array}
\end{equation}
only depend on the distance~$d (i_-, i_+)$ but not on the particular choice of~$i_-$ and~$i_+$.
This implies that, for distance-regular opinion graphs, the number of vertices
$$ N (\Gamma, (i, s)) \ := \ \card (\Gamma_s (i)) \ = \ f (s, s, 0) \ =: \ h (s) $$
does not depend on vertex~$i$.
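For instance, for the hypercube with~$2^d$ vertices, $h (s) = {d \choose s}$, while for the cycle with~$F$ vertices, $h (s) = 2$ for~$0 < s < F/2$ and~$h (F/2) = 1$ when~$F$ is even.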
To state our last theorem, we let
$$ \begin{array}{l} \mathbf W (k) \ := \ - 1 + \sum_{1 < n \leq k} \,\sum_{n \leq m \leq \ceil{\mathbf{d} / \tau}} \,(q_n \,q_{n + 1} \cdots q_{m - 1}) / (p_n \,p_{n + 1} \cdots p_m) \end{array} $$
where by convention an empty sum is equal to zero and an empty product is equal to one, and where the coefficients~$p_n$ and~$q_n$ are defined in terms of the intersection numbers as
$$ \begin{array}{rcl}
p_n & := & \max \,\{\sum_{s : \ceil{s / \tau} = n - 1} f (s_-, s_+, s) / h (s_+) : \ceil{s_- / \tau} = 1 \ \hbox{and} \ \ceil{s_+ / \tau} = n \} \vspace*{3pt} \\
q_n & := & \,\min \,\{\sum_{s : \ceil{s / \tau} = n + 1} f (s_-, s_+, s) / h (s_+) : \ceil{s_- / \tau} = 1 \ \hbox{and} \ \ceil{s_+ / \tau} = n \}. \end{array} $$
Then, we have the following sufficient condition for fixation.
\begin{theorem} --
\label{th:dist-reg}
Assume~\eqref{eq:uniform} and~\eqref{eq:dist-reg-1}.
Then, the process on~$\mathbb{Z}$ fixates when
\begin{equation}
\label{eq:th-dist-reg}
\begin{array}{l} S_{\reg} (\Gamma, \tau) \ := \ \sum_{k > 0} \,(\mathbf W (k) \,\sum_{s : \ceil{s / \tau} = k} \,h (s)) \ > \ 0. \end{array}
\end{equation}
\end{theorem}
To understand the coefficients~$p_n$ and~$q_n$, we note that, letting~$i_-$ and~$j$ be two opinions at opinion distance~$s_-$ of each other, we have the following
interpretation:
$$ \begin{array}{rcl}
f (s_-, s_+, s) / h (s_+) & = & \hbox{probability that an opinion~$i_+$ chosen uniformly} \vspace*{0pt} \\
& & \hbox{at random among the opinions at distance~$s_+$ from} \\
& & \hbox{opinion~$j$ is at distance~$s$ from opinion~$i_-$}. \end{array} $$
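For instance, on the cycle with six opinions, take~$s_- = s_+ = 1$: the two opinions at distance one from~$j$ are~$i_-$ itself and the opinion at distance two from~$i_-$, so that~$f (1, 1, 0) / h (1) = f (1, 1, 2) / h (1) = 1/2$.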
\noindent{\bf Outline of the proofs} --
The lower bound for the probability of consensus on finite connected graphs follows from the optional stopping theorem after proving that
the process that keeps track of the number of supporters of opinions belonging to the~$\tau$-center is a martingale.
The analysis of the infinite system is more challenging.
The first key to all our proofs is to use the formal machinery introduced in~\cite{lanchier_2012, lanchier_scarlatos_2013, lanchier_schweinsberg_2012}
that consists in keeping track of the disagreements along the edges of the spatial structure.
This technique has also been used in~\cite{lanchier_moisson_2014, lanchier_scarlatos_2014} to study related models.
In the context of our general opinion model, we put a pile of~$s$ particles on edges that connect individuals who are at opinion distance~$s$
of each other, i.e., we set
$$ \xi_t ((x, x + 1)) \ := \ d (\eta_t (x), \eta_t (x + 1)) \quad \hbox{for all} \quad x \in \mathbb{Z}. $$
The definition of the confidence threshold implies that piles with at most~$\tau$ particles, which we call active, evolve
according to symmetric random walks, while larger piles, which we call frozen, are static.
In addition, the jump of an active pile onto another pile may result in part of the particles being annihilated.
The main idea to prove fluctuation is to show that, after identifying opinions that belong to the same member of the
partition~\eqref{eq:fluctuation}, the process reduces to the voter model, and then to use the fact that the one-dimensional voter model
fluctuates according to~\cite{arratia_1983}.
Fluctuation, together with the stronger assumption~$\mathbf{r} \leq \tau$, implies that the frozen piles, and ultimately all the piles
of particles, go extinct, which is equivalent to clustering of the opinion model.
\indent In contrast, fixation occurs when the frozen piles have a positive probability of never being reduced, which is more difficult to establish.
To briefly explain our approach to prove fixation, we say that the pile at~$(x, x + 1)$ is of order~$k$ when
$$ (k - 1) \,\tau \ < \ \xi_t ((x, x + 1)) \ \leq \ k \tau. $$
To begin with, we use a construction due to~\cite{bramson_griffeath_1989} to obtain an implicit condition for fixation in terms
of the initial number of piles of any given order in a large interval.
Large deviation estimates for the number of such piles are then proved and used to turn this implicit condition into the explicit
condition~\eqref{eq:th-fixation}.
To derive this condition, we use that at least~$k - 1$ active piles must jump onto a pile initially of order~$k > 1$ to turn this pile
into an active pile.
Condition~\eqref{eq:th-fixation} is obtained assuming the worst case scenario when the number of particles that annihilate is maximal.
To show the improved condition for fixation~\eqref{eq:th-dist-reg} for distance-regular opinion graphs, we use the same approach but
count more carefully the number of annihilating events.
First, we use duality-like techniques to prove that, when the opinion graph is distance-regular, the system of piles becomes Markov.
This is used to prove that the jump of an active pile onto a pile of order~$n > 1$ reduces/increases its order with respective
probabilities at most~$p_n$ and at least~$q_n$.
This implies that the number of active piles that must jump onto a pile initially of order~$k > 1$ to turn it into an active pile
is stochastically larger than the first hitting time to state~1 of a certain discrete-time birth and death process.
This hitting time is equal in distribution to
$$ \begin{array}{l} \sum_{1 < n \leq k} \,\sum_{n \leq m \leq \ceil{\mathbf{d} / \tau}} \,(q_n \,q_{n + 1} \cdots q_{m - 1}) / (p_n \,p_{n + 1} \cdots p_m) \ = \ 1 + \mathbf W (k). \end{array} $$
The probabilities~$p_n$ and~$q_n$ are respectively the death parameter and the birth parameter of the discrete-time birth and death
process while the integer~$\ceil{\mathbf{d} / \tau}$ is the number of states of this process, which is also the maximum order
of a pile. \vspace*{8pt}
\noindent{\bf Application to concrete opinion graphs} --
\begin{figure}[t]
\centering
\includegraphics[width=0.98\textwidth]{graphs.eps}
\caption{\upshape{Opinion graphs considered in Corollaries~\ref{cor:path}--\ref{cor:hypercube}}}
\label{fig:graphs}
\end{figure}
We now apply our general results to particular opinion graphs, namely the ones which are represented in Figure~\ref{fig:graphs}.
First, we look at paths and more generally stars with~$b$ branches of equal length.
For paths, one can think of the individuals as being characterized by their position on one issue, ranging from strongly agree to strongly disagree.
For stars, individuals are offered~$b$ alternatives:
the center represents undecided individuals while vertices far from the center are more extremist in their position.
These graphs are not distance-regular so we can only apply Theorem~\ref{th:fixation} to study fixation of the infinite system.
This theorem combined with Theorem~\ref{th:fluctuation} gives the following two corollaries.
\begin{corollary}[path] --
\label{cor:path}
When~$\Gamma$ is the path with~$F$ vertices,
\begin{itemize}
\item the system fluctuates when~\eqref{eq:product} holds and~$F \leq 2 \tau + 1$ whereas \vspace*{3pt}
\item the system fixates when~\eqref{eq:uniform} holds and~$3 F^2 - (20 \tau + 3) \,F + 10 \,(3 \tau + 1) \,\tau > 0$.
\end{itemize}
\end{corollary}
\begin{corollary}[star] --
\label{cor:star}
When~$\Gamma$ is the star with~$b$ branches of length~$r$,
\begin{itemize}
\item the system fluctuates when~\eqref{eq:product} holds and~$r \leq \tau$ whereas \vspace*{3pt}
\item the system fixates when~\eqref{eq:uniform} holds, $2r > 3 \tau$ and
$$ 4 \,(b - 1) \,r^2 + 2 \,((4 - 5b) \,\tau + b - 1) \,r + (6b - 5) \,\tau^2 + (1 - 2b) \,\tau \ > \ 0. $$
\end{itemize}
\end{corollary}
To illustrate Theorem~\ref{th:dist-reg}, we now look at distance-regular graphs, starting with the five convex regular polyhedra also known as the Platonic solids.
These graphs are natural mathematically though we do not have any specific interpretation from the point of view of social sciences except, as explained below,
for the cube and more generally hypercubes.
For these five graphs, Theorems~\ref{th:fluctuation} and~\ref{th:dist-reg} give sharp results with the exact value of the critical threshold except
for the dodecahedron for which the behavior when~$\tau = 3$ remains an open problem.
\begin{corollary}[Platonic solids] --
\label{cor:polyhedron}
Assume~\eqref{eq:uniform}. Then,
\begin{itemize}
\item the tetrahedral model fluctuates for all~$\tau \geq 1$, \vspace*{2pt}
\item the cubic model fluctuates when~$\tau \geq 2$ and fixates when~$\tau \leq 1$, \vspace*{2pt}
\item the octahedral model fluctuates for all~$\tau \geq 1$, \vspace*{2pt}
\item the dodecahedral model fluctuates when~$\tau \geq 4$ and fixates when~$\tau \leq 2$, \vspace*{2pt}
\item the icosahedral model fluctuates when~$\tau \geq 2$ and fixates when~$\tau \leq 1$.
\end{itemize}
\end{corollary}
Next, we look at the case where the individuals are characterized by some preferences represented by the set of vertices of a cycle.
For instance, as explained in~\cite{boudourides_scarlatos_2005}, all strict orderings of three alternatives can be represented by the cycle
with~$3! = 6$ vertices.
\begin{corollary}[cycle] --
\label{cor:cycle}
When~$\Gamma$ is the cycle with~$F$ vertices,
\begin{itemize}
\item the system fluctuates when~\eqref{eq:product} holds and~$F \leq 2 \tau + 2$ whereas \vspace*{3pt}
\item the system fixates when~\eqref{eq:uniform} holds and~$F \geq 4 \tau + 2$.
\end{itemize}
\end{corollary}
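Note that, for~$\tau = 1$, these two conditions leave a gap: the cycle with~$F = 5$ vertices satisfies neither~$F \leq 4$ nor~$F \geq 6$.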
Finally, we look at hypercubes with~$F = 2^d$ vertices, which are generalizations of the three-dimensional cube.
In this case, the individuals are characterized by their position --~in favor or against~-- on~$d$ different issues, and the opinion distance between two
individuals is equal to the number of issues they disagree on.
Theorem~\ref{th:dist-reg} gives the following result.
\begin{corollary}[hypercube] --
\label{cor:hypercube}
When~$\Gamma$ is the hypercube with~$2^d$ vertices,
\begin{itemize}
\item the system fluctuates when~\eqref{eq:product} holds and~$d \leq \tau + 1$ whereas \vspace*{3pt}
\item the system fixates when~\eqref{eq:uniform} holds and~$d / \tau > 3$ or when~$d / \tau > 2$ with~$\tau$ large.
\end{itemize}
\end{corollary}
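For instance, for~$\tau = 2$, the system fluctuates when~$d \leq 3$ and fixates when~$d \geq 7$.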
\begin{table}[t]
\begin{center}
\begin{tabular}{cccccc}
\hline \noalign{\vspace*{2pt}}
opinion graph & radius & diameter & fluctuation & fix. ($\tau = 1$) & fix. ($\tau$ large) \\ \noalign{\vspace*{1pt}} \hline \noalign{\vspace*{6pt}}
path & $\mathbf{r} = \integer{F/2}$ & $\mathbf{d} = F - 1$ & $F \leq 2 \tau + 1$ & $F \geq 6$ & $F / \tau > (10 + \sqrt{10}) / 3 \approx 4.39$ \\ \noalign{\vspace*{3pt}}
star~($b = 3$) & $\mathbf{r} = r$ & $\mathbf{d} = 2r$ & $r \leq \tau$ & $r \geq 2$ & $r / \tau > (11 + \sqrt{17}) / 8 \approx 1.89$ \\ \noalign{\vspace*{3pt}}
star~($b = 5$) & $\mathbf{r} = r$ & $\mathbf{d} = 2r$ & $r \leq \tau$ & $r \geq 2$ & $r / \tau > (21 + \sqrt{41}) / 16 \approx 1.71$ \\ \noalign{\vspace*{3pt}}
cycle & $\mathbf{r} = \integer{F/2}$ & $\mathbf{d} = \integer{F/2}$ & $F \leq 2 \tau + 2$ & $F \geq 6$ & $F / \tau > 4$ \\ \noalign{\vspace*{3pt}}
hypercube & $\mathbf{r} = d$ & $\mathbf{d} = d$ & $d \leq \tau + 1$ & $d \geq 3$ & $d / \tau > 2$ \\ \noalign{\vspace*{8pt}} \hline \noalign{\vspace*{3pt}}
opinion graph & radius & diameter & fluctuation & \multicolumn{2}{c}{fixation when} \\ \noalign{\vspace*{1pt}} \hline \noalign{\vspace*{6pt}}
tetrahedron & $\mathbf{r} = 1$ & $\mathbf{d} = 1$ & $\tau \geq 1$ & \multicolumn{2}{c}{$\tau = 0$} \\ \noalign{\vspace*{3pt}}
cube & $\mathbf{r} = 3$ & $\mathbf{d} = 3$ & $\tau \geq 2$ & \multicolumn{2}{c}{$\tau \leq 1$} \\ \noalign{\vspace*{3pt}}
octahedron & $\mathbf{r} = 2$ & $\mathbf{d} = 2$ & $\tau \geq 1$ & \multicolumn{2}{c}{$\tau = 0$} \\ \noalign{\vspace*{3pt}}
dodecahedron & $\mathbf{r} = 5$ & $\mathbf{d} = 5$ & $\tau \geq 4$ & \multicolumn{2}{c}{$\tau \leq 2$} \\ \noalign{\vspace*{3pt}}
icosahedron & $\mathbf{r} = 3$ & $\mathbf{d} = 3$ & $\tau \geq 2$ & \multicolumn{2}{c}{$\tau \leq 1$} \\ \noalign{\vspace*{4pt}} \hline
\end{tabular}
\end{center}
\caption{\upshape{Summary of our results for the opinion graphs in Figure~\ref{fig:graphs}}}
\label{tab:summary}
\end{table}
Table~\ref{tab:summary} summarizes our results for the graphs of Figure~\ref{fig:graphs}.
The second and third columns give the value of the radius and the diameter.
The conditions in the fourth column are the conditions for fluctuation of the infinite system obtained from the corollaries.
For opinion graphs with a variable number of vertices, the last two columns give sufficient conditions for fixation in the two extreme cases when
the confidence threshold is one and when the confidence threshold is large.
To explain the last column for paths and stars, note that the opinion model fixates whenever~$\mathbf{d} / \tau$
is larger than the largest root of the polynomials
$$ \begin{array}{rl}
3 X^2 - 20 X + 30 & \hbox{for the path} \vspace*{3pt} \\
2 X^2 - 11 X + 13 & \hbox{for the star with~$b = 3$ branches} \vspace*{3pt} \\
4 X^2 - 21 X + 25 & \hbox{for the star with~$b = 5$ branches} \end{array} $$
and the diameter of the opinion graph is sufficiently large.
These polynomials are obtained from the conditions in Corollaries~\ref{cor:path}--\ref{cor:star} by only keeping the terms with degree two.
\section{Coupling with a system of annihilating particles}
\label{sec:coupling}
\indent To study the one-dimensional system, it is convenient to construct the process from a graphical representation and to introduce
a coupling between the opinion model and a certain system of annihilating particles that keeps track of the discrepancies along the edges
of the lattice rather than the opinion at each vertex.
This system of particles can also be constructed from the same graphical representation.
Since the opinion model on general finite graphs will be studied using other techniques, we only define the graphical representation
for the process on~$\mathbb{Z}$, which consists of the following collection of independent Poisson processes:
\begin{itemize}
\item For each~$x \in \mathbb{Z}$, we let~$(N_t (x, x \pm 1) : t \geq 0)$ be a rate one Poisson process. \vspace*{3pt}
\item We denote by~$T_n (x, x \pm 1) := \inf \,\{t : N_t (x, x \pm 1) = n \}$ its~$n$th arrival time.
\end{itemize}
This collection of independent Poisson processes is then turned into a percolation structure by drawing an arrow~$x \to x \pm 1$
at time~$t := T_n (x, x \pm 1)$.
We say that this arrow is {\bf active} when
$$ d (\eta_{t-} (x), \eta_{t-} (x \pm 1)) \ \leq \ \tau. $$
The configuration at time~$t$ is then obtained by setting
\begin{equation}
\label{eq:rule}
\begin{array}{rcll}
\eta_t (x \pm 1) & = & \eta_{t-} (x) & \hbox{when the arrow~$x \to x \pm 1$ is active} \vspace*{3pt} \\
& = & \eta_{t-} (x \pm 1) & \hbox{when the arrow~$x \to x \pm 1$ is not active} \end{array}
\end{equation}
and leaving the opinion at all the other vertices unchanged.
An argument due to Harris \cite{harris_1972} implies that the opinion model starting from any configuration can indeed
be constructed using this percolation structure and rule~\eqref{eq:rule}.
From the collection of active arrows, we construct active paths as in percolation theory.
More precisely, we say that there is an {\bf active path} from~$(z, s)$ to~$(x, t)$, and write~$(z, s) \leadsto (x, t)$, whenever there exist
$$ s_0 = s < s_1 < \cdots < s_{n + 1} = t \qquad \hbox{and} \qquad
x_0 = z, \,x_1, \,\ldots, \,x_n = x $$
such that the following two conditions hold:
\begin{itemize}
\item For~$j = 1, 2, \ldots, n$, there is an active arrow~$x_{j - 1} \to x_j$ at time~$s_j$. \vspace*{3pt}
\item For~$j = 0, 1, \ldots, n$, there is no active arrow that points at~$\{x_j \} \times (s_j, s_{j + 1})$.
\end{itemize}
These two conditions imply that
$$ \hbox{for all} \ (x, t) \in \mathbb{Z} \times \mathbb{R}_+ \ \hbox{there is a unique} \ z \in \mathbb{Z} \ \hbox{such that} \ (z, 0) \leadsto (x, t). $$
Moreover, because of the definition of active arrows, the opinion at vertex~$x$ at time~$t$ originates from and is therefore equal to the
initial opinion at vertex~$z$ so we call vertex~$z$ the {\bf ancestor} of vertex~$x$ at time~$t$.
\indent As previously mentioned, to study the one-dimensional system, we look at the process that keeps track of the discrepancies along the edges
rather than the actual opinion at each vertex, which we shall call the {\bf system of piles}.
To define this process, it is convenient to identify each edge with its midpoint and to define translations on the edge set as follows:
$$ \begin{array}{rclcl}
e & := & \{x, x + 1 \} \ \equiv \ x + 1/2 & \hbox{for all} & x \in \mathbb{Z} \vspace*{3pt} \\
e + v & := & \{x, x + 1 \} + v \ \equiv \ x + 1/2 + v & \hbox{for all} & (x, v) \in \mathbb{Z} \times \mathbb{R}. \end{array} $$
The system of piles is then defined as
$$ \xi_t (e) \ := \ d (\eta_t (e - 1/2), \eta_t (e + 1/2)) \quad \hbox{for all} \quad e \in \mathbb{Z} + 1/2, $$
and it is convenient to think of edge~$e$ as being occupied by a pile of~$\xi_t (e)$ particles.
The dynamics of the opinion model induces the following evolution rules on this system of particles.
Assuming that there is an arrow~$x - 1 \to x$ at time~$t$ and that
$$ \begin{array}{rcl}
\xi_{t-} (x - 1/2) & := & d (\eta_{t-} (x), \eta_{t-} (x - 1)) \ = \ s_- \vspace*{3pt} \\
\xi_{t-} (x + 1/2) & := & d (\eta_{t-} (x), \eta_{t-} (x + 1)) \ = \ s_+ \end{array} $$
we have the following alternative:
\begin{itemize}
\item In case~$s_- = 0$, meaning that there is no particle on the edge, the two interacting agents already agree just before the interaction, so nothing happens. \vspace*{3pt}
\item In case~$s_- > \tau$, meaning that there are more than~$\tau$ particles on the edge, the two interacting agents disagree too much to trust each other so nothing happens. \vspace*{3pt}
\item In case~$0 < s_- \leq \tau$, meaning that there is at least one but no more than~$\tau$ particles on the edge, the agent at vertex~$x$ mimics her left neighbor, which gives
$$ \begin{array}{rcl}
\xi_t (x - 1/2) & := & d (\eta_t (x), \eta_t (x - 1)) \ = \ d (\eta_{t-} (x - 1), \eta_{t-} (x - 1)) \ = \ 0 \vspace*{3pt} \\
\xi_t (x + 1/2) & := & d (\eta_t (x), \eta_t (x + 1)) \ = \ d (\eta_{t-} (x - 1), \eta_{t-} (x + 1)). \end{array} $$
In particular, there are no more particles at edge~$x - 1/2$.
In addition, the size~$s$ of the pile of particles at edge~$x + 1/2$ at time~$t$, where size of a pile refers to the number of particles
in that pile, satisfies the two inequalities
\begin{equation}
\label{eq:size}
\begin{array}{rcl}
s & \leq & |d (\eta_{t-} (x - 1), \eta_{t-} (x)) + d (\eta_{t-} (x), \eta_{t-} (x + 1))| \ = \ |s_- + s_+| \vspace*{3pt} \\
s & \geq & |d (\eta_{t-} (x - 1), \eta_{t-} (x)) - d (\eta_{t-} (x), \eta_{t-} (x + 1))| \ = \ |s_- - s_+|. \end{array}
\end{equation}
Note that the first inequality implies that the process involves deaths of particles but no births, which is a key property that will be used later.
\end{itemize}
Similar evolution rules are obtained by exchanging the direction of the interaction, from which we deduce the following description
of the dynamics of piles:
\begin{itemize}
\item Piles with more than~$\tau$ particles cannot move: we call such piles {\bf frozen piles} and the particles in such piles frozen particles. \vspace*{3pt}
\item Piles with at most~$\tau$ particles jump one unit to the left or to the right at rate one: we call such piles {\bf active piles} and the particles in such piles active particles.
Note that arrows in the graphical representation are active if and only if they cross an active pile. \vspace*{3pt}
\item When a pile of size~$s_-$ jumps onto a pile of size~$s_+$, this results in a pile whose size~$s$ satisfies the two inequalities in~\eqref{eq:size},
so we say that~$s_- + s_+ - s$ particles are {\bf annihilated}; a minimal simulation sketch of these dynamics is given after this list.
\end{itemize}
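To make this coupling concrete, the following minimal Python sketch, our own discrete-time illustration on a ring of~$L$ sites with the path on~$F$ opinions as opinion graph and parameters of our choosing, simulates the dynamics and reports the piles left at the end; in accordance with~\eqref{eq:size}, the total number of particles in such a run can only decrease:
\begin{verbatim}
# Minimal sketch (illustration only): opinion model on a ring of L sites
# with the path on F opinions as opinion graph and threshold tau.
import random

L, F, tau, steps = 50, 5, 1, 100_000
eta = [random.randrange(F) for _ in range(L)]   # uniform initial opinions

def d(i, j):
    return abs(i - j)                           # path-graph opinion distance

for _ in range(steps):
    x = random.randrange(L)                     # arrow pointing at vertex x
    y = (x + random.choice((-1, 1))) % L        # from a uniform neighbor y
    if d(eta[x], eta[y]) <= tau:                # active arrow: x imitates y
        eta[x] = eta[y]

xi = [d(eta[x], eta[(x + 1) % L]) for x in range(L)]  # system of piles
print("particles left:", sum(xi))
print("frozen piles:", sum(1 for s in xi if s > tau))
\end{verbatim}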
\section{Proof of Theorem~\ref{th:fluctuation}}
\label{sec:fluctuation}
\indent Before proving the theorem, we start with some preliminary remarks.
To begin with, we observe that, when the diameter~$\mathbf{d} \leq \tau$, the~$\tau$-center is the whole opinion graph,
so the model reduces to a multitype voter model with~$F = \card V$ opinions.
In this case, all three parts of the theorem are trivial, with the probability of consensus in the last part being equal to one.
To prove the theorem in the nontrivial case~$\tau < \mathbf{d}$, we introduce the set
\begin{equation}
\label{eq:boundary}
B (\Gamma, \tau) \ := \ \{i \in V : d (i, j) > \tau \ \hbox{for some} \ j \in V \}
\end{equation}
and call this set the~{\bf $\tau$-boundary} of the opinion graph.
One key ingredient to our proof is the following lemma, which gives a sufficient condition for~\eqref{eq:fluctuation} to hold.
\begin{lemma} --
\label{lem:partition}
The sets~$V_1 = C (\Gamma, \tau)$ and~$V_2 = B (\Gamma, \tau)$ satisfy~\eqref{eq:fluctuation} when~$\mathbf{r} \leq \tau < \mathbf{d}$.
\end{lemma}
\begin{proof}
From~\eqref{eq:center} and~\eqref{eq:boundary}, we get~$B (\Gamma, \tau) = V \setminus C (\Gamma, \tau)$ therefore
$$ C (\Gamma, \tau) \,\cup \,B (\Gamma, \tau) = V \quad \hbox{and} \quad C (\Gamma, \tau) \,\cap \,B (\Gamma, \tau) = \varnothing. $$
In addition, the~$\tau$-center of the graph is nonempty because
\begin{equation}
\label{eq:center-radius}
\begin{array}{rcl}
C (\Gamma, \tau) \neq \varnothing & \hbox{if and only if} & \hbox{there is~$i \in V$ such that~$d (i, j) \leq \tau$ for all~$j \in V$} \vspace*{3pt} \\
& \hbox{if and only if} & \hbox{there is~$i \in V$ such that~$\max_{j \in V} \,d (i, j) \leq \tau$} \vspace*{3pt} \\
& \hbox{if and only if} & \min_{i \in V} \,\max_{j \in V} \,d (i, j) \leq \tau \vspace*{3pt} \\
& \hbox{if and only if} & \mathbf{r} \leq \tau \end{array}
\end{equation}
while the~$\tau$-boundary is nonempty because
$$ \begin{array}{rcl}
B (\Gamma, \tau) \neq \varnothing & \hbox{if and only if} & \hbox{there is~$i \in V$ such that~$d (i, j) > \tau$ for some~$j \in V$} \vspace*{3pt} \\
& \hbox{if and only if} & \hbox{there are~$i, j \in V$ such that~$d (i, j) > \tau$} \vspace*{3pt} \\
& \hbox{if and only if} & \max_{i \in V} \,\max_{j \in V} \,d (i, j) > \tau \vspace*{3pt} \\
& \hbox{if and only if} & \mathbf{d} > \tau. \end{array} $$
This shows that~$\{V_1, V_2 \}$ is a partition of the set of opinions.
Finally, since all the vertices in the~$\tau$-center are within distance~$\tau$ of all the other vertices, condition~\eqref{eq:fluctuation} holds.
\end{proof} \\ \\
This lemma will be used in the proof of part b where clustering will follow from fluctuation, and in the proof of part c to show that
the probability of consensus on any finite connected graph is indeed positive.
From now on, we call vertices in the~$\tau$-center the centrist opinions and vertices in the~$\tau$-boundary the extremist opinions. \\ \\
\begin{demo}{Theorem~\ref{th:fluctuation}a (fluctuation)} --
Under condition~\eqref{eq:fluctuation}, agents who support an opinion in the set~$V_1$ are within the confidence threshold of
agents who support an opinion in~$V_2$, therefore we deduce from the expression of the transition rates~\eqref{eq:rates} that
\begin{equation}
\label{eq:fluctuation-1}
\begin{array}{rcl}
c_{i \to j} (x, \eta_t) & = &
\lim_{h \to 0} \ (1/h) \,P \,(\eta_{t + h} (x) = j \,| \,\eta_t \ \hbox{and} \ \eta_t (x) = i) \vspace*{4pt} \\ & = &
\card \{y \in N_x : \eta_t (y) = j \} \end{array}
\end{equation}
for every~$(i, j) \in V_1 \times V_2$ and every~$(i, j) \in V_2 \times V_1$. Let
\begin{equation}
\label{eq:fluctuation-3}
\zeta_t (x) \ := \ \mathbf{1} \{\eta_t (x) \in V_2 \} \quad \hbox{for all} \quad x \in \mathbb{Z}.
\end{equation}
Since, according to~\eqref{eq:fluctuation-1}, we have
\begin{itemize}
\item for all~$j \in V_2$, the rates~$c_{i \to j} (x, \eta_t)$ are constant across all~$i \in V_1$, \vspace*{4pt}
\item for all~$i \in V_1$, the rates~$c_{j \to i} (x, \eta_t)$ are constant across all~$j \in V_2$,
\end{itemize}
the process~$(\zeta_t)$ is Markov with transition rates
$$ \begin{array}{rrl}
c_{0 \to 1} (x, \zeta_t) & := &
\lim_{h \to 0} \ (1/h) \,P \,(\zeta_{t + h} (x) = 1 \,| \,\zeta_t \ \hbox{and} \ \zeta_t (x) = 0) \vspace*{4pt} \\ & = &
\sum_{i \in V_1} \sum_{j \in V_2} c_{i \to j} (x, \eta_t) \,P \,(\eta_t (x) = i \,| \,\zeta_t (x) = 0) \vspace*{4pt} \\ & = &
\sum_{i \in V_1} \sum_{j \in V_2} \card \{y \in N_x : \eta_t (y) = j \} \,P \,(\eta_t (x) = i \,| \,\zeta_t (x) = 0) \vspace*{4pt} \\ & = &
\sum_{j \in V_2} \card \{y \in N_x : \eta_t (y) = j \} \ = \ \card \{y \in N_x : \zeta_t (y) = 1 \} \end{array} $$
and similarly for the reverse transition
$$ \begin{array}{l}
c_{1 \to 0} (x, \zeta_t) \ = \ \card \{y \in N_x : \eta_t (y) \in V_1 \} \ = \ \card \{y \in N_x : \zeta_t (y) = 0 \}. \end{array} $$
This shows that~$(\zeta_t)$ is the voter model.
In addition, since~$V_1, V_2 \neq \varnothing$,
$$ \begin{array}{l} P \,(\zeta_0 (x) = 0) \ = \ P \,(\eta_0 (x) \in V_1) \ = \ \sum_{j \in V_1} \,\rho_j \ \in \ (0, 1) \end{array} $$
whenever condition~\eqref{eq:product} holds.
In particular, fluctuation follows from the fact that the one-dimensional voter model starting with a positive density of each type fluctuates.
This last result is a consequence of site recurrence for annihilating random walks proved in \cite{arratia_1983}.
\end{demo} \\ \\
\begin{demo}{Theorem~\ref{th:fluctuation}b (clustering)} --
Since~$\mathbf{r} \leq \tau < \mathbf{d}$,
$$ V_1 \ = \ C (\Gamma, \tau) \quad \hbox{and} \quad V_2 \ = \ B (\Gamma, \tau) $$
form a partition of~$V$ according to Lemma~\ref{lem:partition}.
This implies in particular that not only does the opinion model fluctuate, but the coupled voter model~\eqref{eq:fluctuation-3}
associated with this specific partition also fluctuates, which is the key to the proof.
To begin with, we define the function
$$ \begin{array}{l} u (t) \ := \ E \,\xi_t (e) \ = \ \sum_{0 \leq j \leq \mathbf{d}} \,j \,P \,(\xi_t (e) = j) \end{array} $$
which, in view of translation invariance of the initial configuration and the evolution rules, does not depend on the choice of~$e$.
Note that, since the system of particles coupled with the process involves deaths of particles but no births, the function~$u (t)$ is nonincreasing in time.
Since it is also nonnegative, it has a limit:~$u (t) \to l$ as~$t \to \infty$.
Now, on the event that an edge~$e$ is occupied by a pile with at least one particle at a given time~$t$, we have the following alternative:
\begin{itemize}
\item[(1)] In case edge~$e := x + 1/2$ carries a frozen pile, since the centrist agents are within the confidence threshold of all the other individuals, we must have
$$ \eta_t (x) \in V_2 = B (\Gamma, \tau) \quad \hbox{and} \quad \eta_t (x + 1) \in V_2 = B (\Gamma, \tau). $$
Now, using that the voter model~\eqref{eq:fluctuation-3} fluctuates,
$$ T \ := \ \inf \,\{s > t : \eta_s (x) \in V_1 = C (\Gamma, \tau) \ \hbox{or} \ \eta_s (x + 1) \in V_1 = C (\Gamma, \tau) \} < \infty $$
almost surely, while by definition of the~$\tau$-center, we have
$$ \xi_T (e) \ = \ d (\eta_T (x), \eta_T (x + 1)) \ \leq \ \tau \ < \ \xi_t (e). $$
In particular, at least one of the frozen particles at~$e$ is annihilated eventually. \vspace*{4pt}
\item[(2)] In case edge~$e := x + 1/2$ carries an active pile, since one-dimensional symmetric random walks are recurrent, this pile eventually intersects
another pile.
Let~$s_-$ and~$s_+$ be respectively the size of these two piles and let~$s$ be the size of the pile of particles resulting from their intersection.
Then, we have the following alternative:
\begin{itemize}
\item[(a)] In case~$s < s_- + s_+$ and~$s > \tau$, at least one particle is annihilated and there is either formation or increase of a frozen pile so we are
back to case~(1): since the voter model coupled with the opinion model fluctuates, at least one of the frozen particles in this pile is annihilated eventually.
\item[(b)] In case~$s < s_- + s_+$ and~$s \leq \tau$, at least one particle is annihilated.
\item[(c)] In case~$s = s_- + s_+$ and~$s > \tau$, there is either formation or increase of a frozen pile so we are back to case~(1):
since the voter model coupled with the opinion model fluctuates, at least one of the frozen particles in this pile is annihilated eventually.
\item[(d)] In case~$s = s_- + s_+$ and~$s \leq \tau$, the resulting pile is again active so it keeps moving until, after a finite number of collisions, we are back to either~(a) or~(b) or~(c)
and at least one particle is annihilated eventually.
\end{itemize}
\end{itemize}
This shows that there is a sequence~$0 < t_1 < \cdots < t_n < \cdots < \infty$ such that
$$ u (t_n) \ \leq \ (1/2) \,u (t_{n - 1}) \ \leq \ (1/4) \,u (t_{n - 2}) \ \leq \ \cdots \ \leq \ (1/2)^n \,u (0) \ \leq \ (1/2)^n \,F $$
from which it follows that the density of particles decreases to zero:
$$ \begin{array}{l} \lim_{t \to \infty} \,P \,(\xi_t (e) \neq 0) \ \leq \ \lim_{t \to \infty} \,u (t) \ = \ 0 \quad \hbox{for all} \quad e \in \mathbb{Z} + 1/2. \end{array} $$
In conclusion, for all~$x, y \in \mathbb{Z}$ with~$x < y$, we have
$$ \begin{array}{rcl}
\lim_{t \to \infty} \,P \,(\eta_t (x) \neq \eta_t (y)) & \leq &
\lim_{t \to \infty} \,P \,(\xi_t (z + 1/2) \neq 0 \ \hbox{for some} \ x \leq z < y) \vspace*{4pt} \\ & \leq &
\lim_{t \to \infty} \,\sum_{x \leq z < y} \,P \,(\xi_t (z + 1/2) \neq 0) \vspace*{4pt} \\ & = &
(y - x) \lim_{t \to \infty} \,P \,(\xi_t (e) \neq 0) \ = \ 0, \end{array} $$
which proves clustering.
\end{demo} \\ \\
The third part of the theorem, which gives a lower bound for the probability of consensus of the process on finite connected graphs,
relies on very different techniques, namely techniques related to martingale theory following an idea from \cite{lanchier_2010}, section 3.
However, the partition of the opinion set into centrist opinions and extremist opinions is again a key to the proof. \\ \\
\begin{demo}{Theorem~\ref{th:fluctuation}c (consensus)} --
We first prove that the process that keeps track of the number of supporters of any given opinion is a martingale.
Then, applying the optional stopping theorem, we obtain a lower bound for the probability of extinction of the extremist agents, which is
also a lower bound for the probability of consensus.
For every~$j \in V$, we set
$$ X_t (j) \ := \ \card \{x \in \mathscr V : \eta_t (x) = j \} \quad \hbox{and} \quad X_t \ := \ \card \{x \in \mathscr V : \eta_t (x) \in C (\Gamma, \tau) \} $$
and we observe that
\begin{equation}
\label{eq:consensus-1}
\begin{array}{l}
X_t \ = \ \sum_{j \in C (\Gamma, \tau)} X_t (j). \end{array}
\end{equation}
Letting~$\mathcal F_t$ denote the natural filtration of the process, we also have
$$ \begin{array}{l}
\lim_{h \to 0} \ (1/h) \,E \,(X_{t + h} (j) - X_t (j) \,| \,\mathcal F_t) \vspace*{4pt} \\ \hspace*{25pt} = \
\lim_{h \to 0} \ (1/h) \,P \,(X_{t + h} (j) - X_t (j) = 1 \,| \,\mathcal F_t) \vspace*{4pt} \\ \hspace*{40pt} - \
\lim_{h \to 0} \ (1/h) \,P \,(X_{t + h} (j) - X_t (j) = - 1 \,| \,\mathcal F_t) \vspace*{4pt} \\ \hspace*{25pt} = \
\card \{(x, y) \in \mathscr E : \eta_t (x) \neq j \ \hbox{and} \ \eta_t (y) = j \ \hbox{and} \ d (\eta_t (x), j) \leq \tau \} \vspace*{4pt} \\ \hspace*{40pt} - \
\card \{(x, y) \in \mathscr E : \eta_t (x) = j \ \hbox{and} \ \eta_t (y) \neq j \ \hbox{and} \ d (\eta_t (y), j) \leq \tau \} \ = \ 0. \end{array} $$
This shows that the process~$X_t (j)$ is a martingale with respect to the natural filtration of the opinion model.
This, together with equation~\eqref{eq:consensus-1}, implies that~$X_t$ also is a martingale.
Because of the finiteness of the graph, this martingale is bounded and gets trapped in an absorbing state after an almost surely
finite stopping time:
$$ T \ := \ \inf \,\{t : \eta_t = \eta_s \ \hbox{for all} \ s > t \} \ < \ \infty \quad \hbox{almost surely}. $$
We claim that~$X_T$ can only take two values:
\begin{equation}
\label{eq:consensus-2}
X_T \in \{0, N \} \quad \hbox{where} \quad \hbox{$N := \card (\mathscr V)$ = the population size}.
\end{equation}
Indeed, if~$X_T \notin \{0, N \}$ with positive probability, then there exists an absorbing state with at least one centrist
agent and at least one extremist agent.
Since the graph is connected, this further implies the existence of an edge~$e = (x, y)$ such that
$$ \eta_T (x) \in C (\Gamma, \tau) \quad \hbox{and} \quad \eta_T (y) \in B (\Gamma, \tau) $$
but then we have
$$ \eta_T (x) \neq \eta_T (y) \quad \hbox{and} \quad d (\eta_T (y), \eta_T (x)) \leq \tau $$
showing that~$\eta_T$ is not an absorbing state, in contradiction with the definition of time~$T$.
This proves that our claim~\eqref{eq:consensus-2} is true.
Now, applying the optional stopping theorem to the bounded martingale~$X_t$ and the almost surely finite stopping time~$T$,
we obtain
$$ \begin{array}{l} E X_T \ = \ E X_0 \ = \ N \times P \,(\eta_0 (x) \in C (\Gamma, \tau)) \ = \ N \times \sum_{j \in C (\Gamma, \tau)} \,\rho_j \ = \ N \rho_{\cent} \end{array} $$
which, together with~\eqref{eq:consensus-2}, implies that
\begin{equation}
\label{eq:consensus-3}
\begin{array}{rcl}
P \,(X_T = N) & = & (1/N)(0 \times P \,(X_T = 0) + N \times P \,(X_T = N)) \vspace*{4pt} \\
& = & (1/N) \ E X_T \ = \ \rho_{\cent}. \end{array}
\end{equation}
To conclude, we observe that, on the event that~$X_T = N$, all the opinions present in the system at the time to absorption
are centrist opinions and since the only absorbing states with only centrist opinions are the configurations in which all the agents
share the same opinion, we deduce that the system converges to a consensus.
This, together with~\eqref{eq:consensus-3}, implies that
$$ P \,(\eta_t \equiv \hbox{constant for some} \ t > 0) \ \geq \ P \,(X_T = N) \ = \ \rho_{\cent}. $$
Finally, since the threshold is at least equal to the radius, it follows from~\eqref{eq:center-radius} that
the~$\tau$-center is nonempty, so we have~$\rho_{\cent} > 0$.
This completes the proof of Theorem~\ref{th:fluctuation}.
\end{demo}
\section{Sufficient condition for fixation}
\label{sec:condition}
\indent This section and the next two are devoted to the proof of Theorem~\ref{th:fixation}, which studies the fixation regime
of the infinite one-dimensional system.
In this section, we give a general sufficient condition for fixation that can be expressed based on the initial number of active
particles and frozen particles in a large random interval.
The main ingredient of the proof is a construction due to Bramson and Griffeath~\cite{bramson_griffeath_1989} based on
duality-like techniques looking at active paths.
The next section establishes large deviation estimates for the initial number of particles in order to simplify
the condition for fixation using instead the expected number of active and frozen particles per edge.
This is used in the subsequent section to prove Theorem~\ref{th:fixation}.
The next lemma gives a condition for fixation based on properties of the active paths, which is the
analog of~\cite[Lemma~2]{bramson_griffeath_1989}.
\begin{lemma} --
\label{lem:fixation-condition}
For all~$z \in \mathbb{Z}$, let
$$ T (z) \ := \ \inf \,\{t : (z, 0) \leadsto (0, t) \}. $$
Then, the opinion model on~$\mathbb{Z}$ fixates whenever
\begin{equation}
\label{eq:fixation}
\begin{array}{l} \lim_{N \to \infty} \,P \,(T (z) < \infty \ \hbox{for some} \ z < - N) \ = \ 0. \end{array}
\end{equation}
\end{lemma}
\begin{proof}
This follows closely the proof of~\cite[Lemma~4]{lanchier_scarlatos_2013}.
\end{proof} \\ \\
To derive a more explicit condition for fixation, we let
$$ H_N \ := \ \{T (z) < \infty \ \hbox{for some} \ z < - N \} $$
be the event introduced in~\eqref{eq:fixation}.
Following the construction in~\cite{bramson_griffeath_1989}, we also let~$\tau_N$ be the first time an active path starting from the
left of~$- N$ hits the origin, and observe that
$$ \tau_N \ = \ \inf \,\{T (z) : z \in (- \infty, - N) \}. $$
In particular, the event~$H_N$ can be written as
\begin{equation}
\label{eq:key-event}
\begin{array}{l} H_N \ = \ \bigcup_{z < - N} \, \{T (z) < \infty \} \ = \ \{\tau_N < \infty \}. \end{array}
\end{equation}
Given the event~$H_N$, we let~$z_- < - N$ be the leftmost source of an active path that reaches the origin at time~$\tau_N$
and~$z_+ \geq 0$ be the rightmost source of an active path that reaches the origin before time~$\tau_N$, i.e.,
\begin{equation}
\label{eq:paths}
\begin{array}{rcl}
z_- & := & \,\min \,\{z \in \mathbb{Z} : (z, 0) \leadsto (0, \tau_N) \} \ < \ - N \vspace*{2pt} \\
z_+ & := & \max \,\{z \in \mathbb{Z} : (z, 0) \leadsto (0, \sigma_N) \ \hbox{for some} \ \sigma_N < \tau_N \} \ \geq \ 0, \end{array}
\end{equation}
and define~$I_N = (z_-, z_+)$.
Now, we observe that, on the event~$H_N$,
\begin{itemize}
\item All the frozen piles initially in~$I_N$ must have been destroyed, i.e., turned into active piles due to the occurrence of
annihilating events, by time~$\tau_N$. \vspace*{4pt}
\item The active particles initially outside the interval~$I_N$ cannot jump inside the space-time region delimited
by the two active paths implicitly defined in~\eqref{eq:paths} because the existence of such particles would contradict
the minimality of~$z_-$ or the maximality of~$z_+$.
\end{itemize}
This, together with equation~\eqref{eq:key-event}, implies that, given the event~$H_N$, all the frozen piles initially in the random
interval~$I_N$ must have been destroyed by either active piles initially in this interval or active piles that result from
the destruction of these frozen piles.
To quantify this statement, we attach random variables, which we call {\bf contributions}, to each edge.
The definition depends on whether the edge initially carries an active pile or a frozen pile.
To begin with, we give an arbitrary deterministic contribution, say~$-1$, to each pile initially active by setting
\begin{equation}
\label{eq:contribution-active}
\cont (e) \ := \ - 1 \quad \hbox{whenever} \quad 0 < \xi_0 (e) \leq \tau.
\end{equation}
Now, we observe that, given~$H_N$, for each frozen pile initially in~$I_N$, a random number of active piles must have jumped onto this frozen
pile to turn it into an active pile.
Therefore, to define the contribution of a frozen pile, we let
\begin{equation}
\label{eq:breaks}
T_e \ := \ \inf \,\{t > 0 : \xi_t (e) \leq \tau \}
\end{equation}
and define the contribution of a frozen pile initially at~$e$ as
\begin{equation}
\label{eq:contribution-frozen}
\cont (e) \ := \ - 1 + \hbox{number of active piles that hit~$e$ until time~$T_e$}.
\end{equation}
Note that~\eqref{eq:contribution-frozen} reduces to~\eqref{eq:contribution-active} when edge~$e$ carries initially an active pile since in this
case the time until the edge becomes active is zero, therefore~\eqref{eq:contribution-frozen} can be used as the general definition for the contribution
of an edge with at least one particle.
Edges with initially no particle have contribution zero.
Since the occurrence of~$H_N$ implies that all the frozen piles initially in~$I_N$ must have been destroyed by either active piles initially
in this interval or active piles that result from the destruction of these frozen piles, in which case~$T_e < \infty$ for all the
edges in the interval, and since particles in an active pile jump all at once rather than individually,
\begin{equation}
\label{eq:inclusion-1}
\begin{array}{rcl}
H_N & \subset & \{\sum_{e \in I_N} \,\cont (e \,| \,T_e < \infty) \leq 0 \} \vspace*{4pt} \\
& \subset & \{\sum_{e \in (l, r)} \,\cont (e \,| \,T_e < \infty) \leq 0 \ \hbox{for some~$l < - N$ and some~$r \geq 0$} \}. \end{array}
\end{equation}
Lemma~\ref{lem:fixation-condition} and~\eqref{eq:inclusion-1} are used in section~\ref{sec:fixation} together with the large
deviation estimates for the number of active and frozen piles showed in the following section to prove Theorem~\ref{th:fixation}.
\section{Large deviation estimates}
\label{sec:deviation}
\indent In order to later find a good upper bound for the probability in~\eqref{eq:fixation} and deduce a sufficient condition for
fixation of the opinion model, the next step is to prove large deviation estimates for the initial number of piles with~$s$~particles
in a large interval.
More precisely, the main objective of this section is to prove that for all~$s$ and all~$\epsilon > 0$ the probability that
$$ \card \{e \in (0, N) : \xi_0 (e) = s \} \ \notin \ (1 - \epsilon, 1 + \epsilon) \ E \,(\card \{e \in (0, N) : \xi_0 (e) = s \}) $$
decays exponentially with~$N$.
Note that, even though it is assumed that the process starts from a product measure and therefore the initial opinions at different
vertices are chosen independently, the initial states at different edges are not independent in general.
When starting from the uniform product measure, these states are independent if and only if, for every size~$s$,
$$ \card \{j \in V : d (i, j) = s \} \quad \hbox{does not depend on~$i \in V$}. $$
This holds, for instance, when the opinion graph is vertex-transitive, as is the case for cycles and hypercubes, but it fails in general.
When starting from more general product measures, the initial numbers of particles at different edges are not independent, even for
very specific graphs.
In particular, the number of piles of particles with a given size in a given interval does not simply reduce to a binomial random variable.
\indent The main ingredient to prove large deviation estimates for the initial number of piles with a given number of particles in
a large spatial interval is to first show large deviation estimates for the number of so-called changeovers in a sequence of independent
coin flips.
Consider an infinite sequence of independent coin flips such that
$$ P \,(X_t = H) = p \quad \hbox{and} \quad P \,(X_t = T) = q = 1 - p \quad \hbox{for all} \quad t \in \mathbb{N} $$
where~$X_t$ is the outcome, heads or tails, at time~$t$.
We say that a {\bf changeover} occurs whenever two consecutive coin flips result in two different outcomes.
The expected value of the number of changeovers~$Z_N$ before time~$N$ can be easily computed by observing that
$$ \begin{array}{l} Z_N \ = \ \sum_{0 \leq t < N} \,Y_t \quad \hbox{where} \quad Y_t \ := \ \mathbf{1} \{X_{t + 1} \neq X_t \} \end{array} $$
and by using the linearity of the expected value:
$$ \begin{array}{rcl}
E Z_N & = & \sum_{0 \leq t < N} \,E Y_t \vspace*{4pt} \\
& = & \sum_{0 \leq t < N} \,P \,(X_{t + 1} \neq X_t) \ = \ N \,P \,(X_0 \neq X_1) \ = \ 2 N p q. \end{array} $$
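As a quick sanity check, the formula for~$E Z_N$ can also be verified by Monte Carlo simulation; the following short Python sketch, included only as an illustration with parameter values of our choosing, estimates~$E Z_N$ for~$N = 1000$ and~$p = 0.3$:
\begin{verbatim}
# Monte Carlo check (illustration only) of E Z_N = 2 N p q.
import random

def changeovers(N, p):
    flips = [random.random() < p for _ in range(N + 1)]
    return sum(flips[t] != flips[t + 1] for t in range(N))

N, p, trials = 1000, 0.3, 2000
est = sum(changeovers(N, p) for _ in range(trials)) / trials
print(est, 2 * N * p * (1 - p))   # both close to 420
\end{verbatim}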
Then, we have the following result for the number of changeovers.
\begin{lemma} --
\label{lem:changeover}
For all~$\epsilon > 0$, there exists~$c_0 > 0$ such that
$$ P \,(Z_N - E Z_N \notin (- \epsilon N, \epsilon N)) \ \leq \ \exp (- c_0 N) \quad \hbox{for all~$N$ large}. $$
\end{lemma}
\begin{proof}
To begin with, we let~$\tau_{2K}$ be the time to the~$2K$th changeover and notice that, since all the outcomes between two consecutive
changeovers are identical, the sequence of coin flips up to this stopping time can be decomposed into~$2K$ strings, alternating between
strings with only heads and strings with only tails, followed by one more coin flip.
In addition, since the coin flips are independent, the length distribution of each string is
$$ \begin{array}{rcl}
H_j & := & \hbox{length of the~$j$th string of heads} \ = \ \geometric (q) \vspace*{2pt} \\
T_j & := & \hbox{length of the~$j$th string of tails} \ = \ \geometric (p) \end{array} $$
and lengths are independent.
In particular,~$\tau_{2K}$ is equal in distribution to the sum of~$2K$ independent geometric random variables with parameters~$p$ and~$q$,
therefore we have
\begin{equation}
\label{eq:changeover-1}
P \,(\tau_{2K} = n) \ = \ P \,(H_1 + T_1 + \cdots + H_K + T_K = n) \quad \hbox{for all} \quad n \in \mathbb{N}.
\end{equation}
Now, observing that, for all~$K \leq n$,
$$ \begin{array}{l}
\displaystyle P \,(H_1 + H_2 + \cdots + H_K = n) \ = \,{n - 1 \choose K - 1} \,q^K \,(1 - q)^{n - K} \vspace*{2pt} \\ \hspace*{50pt}
\displaystyle = \,\frac{K}{n} \ {n \choose K} \,q^K \,(1 - q)^{n - K} \ \leq \ P \,(\binomial (n, q) = K), \end{array} $$
large deviation estimates for the binomial distribution imply that
\begin{equation}
\label{eq:changeover-2}
\begin{array}{l}
P \,((1/K)(H_1 + H_2 + \cdots + H_K) \geq (1 + \epsilon)(1/q)) \vspace*{4pt} \\ \hspace*{70pt} \leq \
P \,(\binomial ((1 + \epsilon)(1/q) K, q) \leq K) \ \leq \ \exp (- c_1 K) \vspace*{8pt} \\
P \,((1/K)(H_1 + H_2 + \cdots + H_K) \leq (1 - \epsilon)(1/q)) \vspace*{4pt} \\ \hspace*{70pt} \leq \
P \,(\binomial ((1 - \epsilon)(1/q) K, q) \geq K) \ \leq \ \exp (- c_1 K) \end{array}
\end{equation}
for a suitable constant~$c_1 > 0$ and all~$K$ large.
Similarly, for all~$\epsilon > 0$,
\begin{equation}
\label{eq:changeover-3}
\begin{array}{l}
P \,((1/K)(T_1 + T_2 + \cdots + T_K) \geq (1 + \epsilon)(1/p)) \ \leq \ \exp (- c_2 K) \vspace*{4pt} \\
P \,((1/K)(T_1 + T_2 + \cdots + T_K) \leq (1 - \epsilon)(1/p)) \ \leq \ \exp (- c_2 K) \end{array}
\end{equation}
for a suitable~$c_2 > 0$ and all~$K$ large.
Combining~\eqref{eq:changeover-1}--\eqref{eq:changeover-3}, we deduce that
$$ \begin{array}{l}
P \,((1/K) \,\tau_{2K} \notin ((1 - \epsilon)(1/p + 1/q), (1 + \epsilon)(1/p + 1/q))) \vspace*{4pt} \\ \hspace*{20pt} = \
P \,((1/K)(H_1 + T_1 + \cdots + H_K + T_K) \notin ((1 - \epsilon)(1/p + 1/q), (1 + \epsilon)(1/p + 1/q))) \vspace*{4pt} \\ \hspace*{20pt} \leq \
P \,((1/K)(H_1 + H_2 + \cdots + H_K) \notin ((1 - \epsilon)(1/q), (1 + \epsilon)(1/q))) \vspace*{4pt} \\ \hspace*{80pt} + \
P \,((1/K)(T_1 + T_2 + \cdots + T_K) \notin ((1 - \epsilon)(1/p), (1 + \epsilon)(1/p))) \vspace*{4pt} \\ \hspace*{20pt} \leq \
2 \exp (- c_1 K) + 2 \exp (- c_2 K). \end{array} $$
Taking~$K := pq N$ and using that~$pq \,(1/p + 1/q) = 1$, we deduce
$$ \begin{array}{l}
P \,((1/N) \,\tau_{2K} \notin (1 - \epsilon, 1 + \epsilon)) \vspace*{4pt} \\ \hspace*{20pt} = \
P \,((1/K) \,\tau_{2K} \notin ((1 - \epsilon) (1/p + 1/q), (1 + \epsilon)(1/p + 1/q))) \ \leq \ \exp (- c_3 N) \end{array} $$
for a suitable~$c_3 > 0$ and all~$N$ large.
In particular,
$$ \begin{array}{rcl}
P \,((1/N) \,\tau_{2K - \epsilon N} \geq 1) & \leq & \exp (- c_4 N) \vspace*{4pt} \\
P \,((1/N) \,\tau_{2K + \epsilon N} \leq 1) & \leq & \exp (- c_5 N) \end{array} $$
for suitable constants~$c_4 > 0$ and~$c_5 > 0$ and all~$N$ sufficiently large.
Using the previous two inequalities and the fact that the event that the number of changeovers before time~$N$ is equal to~$2K$
is also the event that the time to the~$2K$th changeover is less than~$N$ but the time to the next changeover is more than~$N$,
we conclude that
$$ \begin{array}{l}
P \,(Z_N - E Z_N \notin (- \epsilon N, \epsilon N)) \ = \
P \,(Z_N \notin (2 pq - \epsilon, 2 pq + \epsilon) N) \vspace*{4pt} \\ \hspace*{20pt} = \
P \,((1/N) \,Z_N \notin (2 pq - \epsilon, 2 pq + \epsilon)) \vspace*{4pt} \\ \hspace*{20pt} = \
P \,((1/N) \,Z_N \leq 2 pq - \epsilon) + P \,((1/N) \,Z_N \geq 2 pq + \epsilon) \vspace*{4pt} \\ \hspace*{20pt} = \
P \,((1/N) \,\tau_{2K - \epsilon N} \geq 1) + P \,((1/N) \,\tau_{2K + \epsilon N} \leq 1) \ \leq \
\exp (- c_4 N) + \exp (- c_5 N) \end{array} $$
for all~$N$ large.
This completes the proof.
\end{proof} \\ \\
Now, we say that an edge is of type~$i \to j$ if it connects an individual with initial opinion~$i$ on the left to an individual
with initial opinion~$j$ on the right, and let
$$ e_N (i \to j) \ := \ \card \{x \in (0, N) : \eta_0 (x) = i \ \hbox{and} \ \eta_0 (x + 1) = j \} $$
denote the number of edges of type~$i \to j$ in the interval~$(0, N)$.
Using the large deviation estimates for the number of changeovers established in the previous lemma, we can deduce large
deviation estimates for the number of edges of each type.
\begin{lemma} --
\label{lem:edge}
For all~$\epsilon > 0$, there exists~$c_6 > 0$ such that
$$ P \,(e_N (i \to j) - N \rho_i \,\rho_j \notin (- 2 \epsilon N, 2 \epsilon N)) \ \leq \ \exp (- c_6 N) \quad \hbox{for all~$N$ large and~$i \neq j$}. $$
\end{lemma}
\begin{proof}
For any given~$i$, the total number of edges of type~$i \to j$ or~$j \to i$ with~$j \neq i$ has the same distribution as the number of changeovers in a
sequence of independent coin flips of a coin that lands on heads with probability~$\rho_i$, and the numbers of edges of these two types differ by at most one.
In particular, applying Lemma~\ref{lem:changeover} with~$p = \rho_i$ gives
\begin{equation}
\label{eq:edge-1}
\begin{array}{l} P \,(\sum_{j \neq i} \,e_N (i \to j) - N \rho_i \,(1 - \rho_i) \notin (- \epsilon N, \epsilon N)) \ \leq \ \exp (- c_0 N) \end{array}
\end{equation}
for all~$N$ sufficiently large.
In addition, since each changeover from opinion~$i$ is independently followed by one of the remaining~$F - 1$ opinions, opinion~$j$ being
chosen with probability~$\rho_j \,(1 - \rho_i)^{-1}$, for all~$i \neq j$, we have
\begin{equation}
\label{eq:edge-2}
\begin{array}{l}
P \,(e_N (i \to j) = n \ | \,\sum_{k \neq i} \,e_N (i \to k) = K) \vspace*{4pt} \\ \hspace*{50pt} = \
P \,(\binomial (K, \rho_j \,(1 - \rho_i)^{-1}) = n). \end{array}
\end{equation}
Combining~\eqref{eq:edge-1}--\eqref{eq:edge-2} with large deviation estimates for the binomial distribution, conditioning on the number
of edges of type~$i \to k$ for some~$k \neq i$, and using that
$$ (N \rho_i \,(1 - \rho_i) + \epsilon N) \,\rho_j \,(1 - \rho_i)^{-1} \ = \ N \rho_i \,\rho_j + \epsilon N \rho_j \,(1 - \rho_i)^{-1} $$
we deduce the existence of~$c_7 > 0$ such that
\begin{equation}
\label{eq:edge-3}
\begin{array}{l}
P \,(e_N (i \to j) - N \rho_i \,\rho_j \geq 2 \epsilon N) \vspace*{4pt} \\ \hspace*{20pt} \leq \
P \,(\sum_{k \neq i} \,e_N (i \to k) - N \rho_i \,(1 - \rho_i) \geq \epsilon N) \vspace*{4pt} \\ \hspace*{20pt} + \
P \,(e_N (i \to j) \geq N \rho_i \,\rho_j + 2 \epsilon N \ | \ \sum_{k \neq i} \,e_N (i \to k) - N \rho_i \,(1 - \rho_i) < \epsilon N) \vspace*{4pt} \\ \hspace*{20pt} \leq \
\exp (- c_0 N) + P \,(\binomial (N \rho_i \,(1 - \rho_i) + \epsilon N, \rho_j \,(1 - \rho_i)^{-1}) \geq N \rho_i \,\rho_j + 2 \epsilon N) \vspace*{4pt} \\ \hspace*{20pt} \leq \
\exp (- c_0 N) + \exp (- c_7 N) \end{array}
\end{equation}
for all~$N$ large.
Similarly, there exists~$c_8 > 0$ such that
\begin{equation}
\label{eq:edge-4}
P \,(e_N (i \to j) - N \rho_i \,\rho_j \leq - 2 \epsilon N) \ \leq \ \exp (- c_0 N) + \exp (- c_8 N)
\end{equation}
for all~$N$ large.
The lemma follows from~\eqref{eq:edge-3}--\eqref{eq:edge-4}.
\end{proof} \\ \\
Note that the large deviation estimates for the initial number of piles of particles easily follow from the previous lemma.
Finally, from the large deviation estimates for the number of edges of each type, we deduce the analog for a general class of
functions~$W$ that will be used in the next section to prove the first sufficient condition for fixation.
\begin{lemma} --
\label{lem:weight}
Let~$w : V \times V \to \mathbb{R}$ be any function such that
$$ w (i, i) = 0 \quad \hbox{for all} \quad i \in V $$
and let~$W : \mathbb{Z} + 1/2 \to \mathbb{R}$ be the function defined as
$$ W_e := w (i, j) \quad \hbox{whenever} \quad \hbox{edge~$e$ is of type~$i \to j$}. $$
Then, for all~$\epsilon > 0$, there exists~$c_9 > 0$ such that
$$ \begin{array}{l} P \,(\sum_{e \in (0, N)} \,(W_e - E W_e) \notin (- \epsilon N, \epsilon N)) \ \leq \ \exp (- c_9 N) \quad \hbox{for all~$N$ large}. \end{array} $$
\end{lemma}
\begin{proof}
First, we observe that
$$ \begin{array}{l}
\sum_{e \in (0, N)} \,(W_e - E W_e) \ = \
\sum_{e \in (0, N)} W_e - N E W_e \vspace*{4pt} \\ \hspace*{20pt} = \
\sum_{i \neq j} \,w (i, j) \,e_N (i \to j) - N \,\sum_{i \neq j} \,w (i, j) \,P \,(e \ \hbox{is of type} \ i \to j) \vspace*{4pt} \\ \hspace*{20pt} = \
\sum_{i \neq j} \,w (i, j) \,(e_N (i \to j) - N \rho_i \,\rho_j). \end{array} $$
Letting~$m := \max_{i \neq j} |w (i, j)| < \infty$ and applying Lemma~\ref{lem:edge}, we conclude that
$$ \begin{array}{l}
P \,(\sum_{e \in (0, N)} \,(W_e - E W_e) \notin (- \epsilon N, \epsilon N)) \vspace*{4pt} \\ \hspace*{25pt} = \
P \,(\sum_{i \neq j} \,w (i, j) \,(e_N (i \to j) - N \rho_i \,\rho_j) \notin (- \epsilon N, \epsilon N)) \vspace*{4pt} \\ \hspace*{25pt} \leq \
P \,(w (i, j) \,(e_N (i \to j) - N \rho_i \,\rho_j) \notin (- \epsilon N/F^2, \epsilon N/F^2) \ \hbox{for some} \ i \neq j) \vspace*{4pt} \\ \hspace*{25pt} \leq \
P \,(e_N (i \to j) - N \rho_i \,\rho_j \notin (- \epsilon N/mF^2, \epsilon N/mF^2) \ \hbox{for some} \ i \neq j) \vspace*{4pt} \\ \hspace*{25pt} \leq \
F^2 \,\exp (- c_{10} N) \end{array} $$
for a suitable constant~$c_{10} > 0$ and all~$N$ large.
\end{proof}
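Although not needed for the proofs, the concentration of the edge counts in Lemma~\ref{lem:edge} is easy to observe numerically.
The following Python sketch, in which the number of opinions and the densities are arbitrary test values rather than quantities taken from the model, samples independent opinions and compares the empirical number of edges of each type with the predicted mean~$N \rho_i \,\rho_j$.
\begin{verbatim}
# Empirical illustration of Lemma `lem:edge' (test values only): with
# i.i.d. opinions of law rho, the number of edges of type i -> j in
# (0, N) concentrates around N * rho_i * rho_j.
import random

F, N = 3, 100000
rho = [0.5, 0.3, 0.2]                 # opinion densities rho_i
opinions = random.choices(range(F), weights=rho, k=N + 1)

counts = {}
for x in range(N):                    # edge between sites x and x + 1
    i, j = opinions[x], opinions[x + 1]
    if i != j:
        counts[(i, j)] = counts.get((i, j), 0) + 1

for (i, j), c in sorted(counts.items()):
    print((i, j), c, N * rho[i] * rho[j])
\end{verbatim}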
\section{Proof of Theorem~\ref{th:fixation} (general opinion graphs)}
\label{sec:fixation}
\indent The key ingredients to prove Theorem~\ref{th:fixation} are Lemma~\ref{lem:fixation-condition} and inclusions~\eqref{eq:inclusion-1}.
The large deviation estimates of the previous section are also important to make the sufficient condition for fixation more explicit and
applicable to particular opinion graphs.
First, we find a lower bound~$W_e$, which we shall call {\bf weight}, for the contribution of any given edge~$e$.
This lower bound is deterministic given the initial number of particles at the edge and is obtained assuming the worst case scenario
where all the active piles annihilate with frozen piles rather than other active piles.
More precisely, we have the following lemma.
\begin{lemma} --
\label{lem:deterministic}
For all~$k > 0$,
\begin{equation}
\label{eq:weight}
\cont (e \,| \,T_e < \infty) \ \geq \ W_e \ := \ k - 2 \quad \hbox{when} \quad (k - 1) \,\tau < \xi_0 (e) \leq k \tau.
\end{equation}
\end{lemma}
\begin{proof}
The jump of an active pile of size~$s_- \leq \tau$ onto a frozen pile of size~$s_+ > \tau$ decreases the size of this frozen
pile by at most~$s_-$ particles.
Since in addition active piles have at most~$\tau$ particles, whenever the initial number of frozen particles at edge~$e$ satisfies
$$ (k - 1) \,\tau < \xi_0 (e) \leq k \tau \quad \hbox{for some} \quad k \geq 2, $$
at least~$k - 1$ active piles must have jumped onto~$e$ by time~$T_e < \infty$.
Recalling~\eqref{eq:contribution-frozen} gives the result when edge~$e$ carries a frozen pile, while the result is trivial when the
edge carries an active pile since, in this case, both its contribution and its weight are equal to~$-1$.
\end{proof} \\ \\
In view of Lemma~\ref{lem:deterministic}, it is convenient to classify piles depending on the number of complete blocks of~$\tau$ particles
they contain: we say that the pile at~$e$ is of {\bf order}~$k > 0$ when
$$ (k - 1) \,\tau < \xi_t (e) \leq k \tau \quad \hbox{or equivalently} \quad \ceil{\xi_t (e) / \tau} = k $$
so that active piles are exactly the piles of order one and the weight of a pile is simply its order minus two.
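For instance, when~$\tau = 2$, a pile with~$\xi_t (e) = 5$ particles has order~$\ceil{5 / 2} = 3$ and weight~$W_e = 3 - 2 = 1$, whereas an active pile, being of order one, has weight~$-1$.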
Now, we note that Lemma~\ref{lem:deterministic} and~\eqref{eq:inclusion-1} imply that
\begin{equation}
\label{eq:inclusion-2}
\begin{array}{l}
H_N \ \subset \ \{\sum_{e \in (l, r)} W_e \leq 0 \ \hbox{for some~$l < - N$ and some~$r \geq 0$} \}. \end{array}
\end{equation}
Motivated by Lemma~\ref{lem:fixation-condition}, the main objective in studying fixation is to find an upper bound for the probability of the
event on the right-hand side of~\eqref{eq:inclusion-2}.
This is the key to proving the following general fixation result from which both parts of Theorem~\ref{th:fixation} can be easily deduced.
\begin{lemma} --
\label{lem:expected-weight}
Assume~\eqref{eq:product}.
Then, the system on~$\mathbb{Z}$ fixates whenever
$$ \begin{array}{l} \sum_{i, j \in V} \,\rho_i \,\rho_j \,\sum_{k > 0} \,((k - 2) \,\sum_{s : \ceil{s / \tau} = k} \,\mathbf{1} \{d (i, j) = s \}) \ > \ 0. \end{array} $$
\end{lemma}
\begin{proof}
To begin with, we observe that
$$ \begin{array}{rcl}
P \,(\xi_0 (e) = s) & = & \sum_{i, j \in V} \,P \,(\eta_0 (x) = i \ \hbox{and} \ \eta_0 (x + 1) = j) \ \mathbf{1} \{d (i, j) = s \} \vspace*{4pt} \\
& = & \sum_{i, j \in V} \,\rho_i \,\rho_j \ \mathbf{1} \{d (i, j) = s \}. \end{array} $$
Recalling~\eqref{eq:weight}, it follows that
$$ \begin{array}{rcl}
E W_e & = & \sum_{k > 0} \,(k - 2) \,P \,((k - 1) \,\tau < \xi_0 (e) \leq k \tau) \vspace*{4pt} \\
& = & \sum_{k > 0} \,((k - 2) \,\sum_{(k - 1) \,\tau < s \leq k \tau} \,P \,(\xi_0 (e) = s)) \vspace*{4pt} \\
& = & \sum_{k > 0} \,((k - 2) \,\sum_{(k - 1) \,\tau < s \leq k \tau} \,\sum_{i, j \in V} \,\rho_i \,\rho_j \ \mathbf{1} \{d (i, j) = s \}) \vspace*{4pt} \\
& = & \sum_{i, j \in V} \,\rho_i \,\rho_j \,\sum_{k > 0} \,((k - 2) \,\sum_{s : \ceil{s / \tau} = k} \,\mathbf{1} \{d (i, j) = s \}) \end{array} $$
which is strictly positive under the assumption of the lemma.
In particular, applying the large deviation estimate in Lemma~\ref{lem:weight} with~$\epsilon := E W_e > 0$, we deduce that
$$ \begin{array}{l}
P \,(\sum_{e \in (0, N)} W_e \leq 0) \ = \
P \,(\sum_{e \in (0, N)} \,(W_e - E W_e) \leq - \epsilon N) \vspace*{4pt} \\ \hspace*{40pt} \leq \
P \,(\sum_{e \in (0, N)} \,(W_e - E W_e) \notin (- \epsilon N, \epsilon N)) \ \leq \ \exp (- c_9 N) \end{array} $$
for all~$N$ large, which, in turn, implies with~\eqref{eq:inclusion-2} that
$$ \begin{array}{rcl}
P \,(H_N) & \leq &
P \,(\sum_{e \in (l, r)} W_e \leq 0 \ \hbox{for some~$l < - N$ and~$r \geq 0$}) \vspace*{4pt} \\ & \leq &
\sum_{l < - N} \,\sum_{r \geq 0} \,\exp (- c_9 \,(r - l)) \ \to \ 0 \end{array} $$
as~$N \to \infty$.
This together with Lemma~\ref{lem:fixation-condition} implies fixation.
\end{proof} \\ \\
Both parts of Theorem~\ref{th:fixation} directly follow from the previous lemma. \\ \\
\begin{demo}{Theorem~\ref{th:fixation}a} --
Assume that~\eqref{eq:uniform} holds and that
$$ \begin{array}{l} S (\Gamma, \tau) \ = \ \sum_{k > 0} \,((k - 2) \,\sum_{s : \ceil{s / \tau} = k} \,N (\Gamma, s)) \ > \ 0. \end{array} $$
Then, the expected weight becomes
$$ \begin{array}{rcl}
E W_e & = & \sum_{i, j \in V} \,\rho_i \,\rho_j \,\sum_{k > 0} \,((k - 2) \,\sum_{s : \ceil{s / \tau} = k} \,\mathbf{1} \{d (i, j) = s \}) \vspace*{4pt} \\
& = & (1/F)^2 \,\sum_{k > 0} \,((k - 2) \,\sum_{s : \ceil{s / \tau} = k} \,\sum_{i, j \in V} \,\mathbf{1} \{d (i, j) = s \}) \vspace*{4pt} \\
& = & (1/F)^2 \,\sum_{k > 0} \,((k - 2) \,\sum_{s : \ceil{s / \tau} = k} \,\card \{(i, j) \in V \times V : d (i, j) = s \}) \vspace*{4pt} \\
& = & (1/F)^2 \,\sum_{k > 0} \,((k - 2) \,\sum_{s : \ceil{s / \tau} = k} \,N (\Gamma, s)) \vspace*{4pt} \\
& = & (1/F)^2 \,S (\Gamma, \tau) \ > \ 0 \end{array} $$
which, according to Lemma~\ref{lem:expected-weight}, implies fixation.
\end{demo} \\ \\
\begin{demo}{Theorem~\ref{th:fixation}b} --
Assume that~$\mathbf{d} > 2 \tau$. Then,
$$ d (i_-, i_+) = \mathbf{d} > 2 \tau \quad \hbox{for some pair} \quad (i_-, i_+) \in V \times V. $$
Now, let~$X, Y \geq 0$ be such that~$2X + (F - 2)Y = 1$ and assume that
$$ \rho_{i_-} = \rho_{i_+} = X \quad \hbox{and} \quad \rho_i = Y \quad \hbox{for all} \quad i \notin B := \{i_-, i_+ \}. $$
To simplify the notation, we also introduce
$$ \begin{array}{l} Q (i, j) \ := \ \sum_{k > 0} \,((k - 2) \,\sum_{s : \ceil{s / \tau} = k} \,\mathbf{1} \{d (i, j) = s \}) \end{array} $$
for all~$(i, j) \in V \times V$.
Then, the expected weight becomes
$$ \begin{array}{rcl}
P (X, Y) & = & \sum_{i, j \in V} \,\rho_i \,\rho_j \ Q (i, j) \vspace*{4pt} \\
& = & \sum_{i, j \in B} \,\rho_i \,\rho_j \ Q (i, j) + \sum_{i \notin B} \,2 \,\rho_i \,\rho_{i_-} \,Q (i, i_-) \vspace*{4pt} \\ && \hspace*{25pt} + \
\sum_{i \notin B} \,2 \,\rho_i \,\rho_{i_+} \,Q (i, i_+) + \sum_{i, j \notin B} \,\rho_i \,\rho_j \ Q (i, j) \vspace*{4pt} \\
& = & 2 \,Q (i_-, i_+) \,X^2 + 2 \,(\sum_{i \notin B} \,Q (i, i_-) + Q (i, i_+)) \,XY + \sum_{i, j \notin B} \,Q (i, j) \,Y^2. \end{array} $$
This shows that~$P$ is continuous in both~$X$ and~$Y$ and that
$$ \begin{array}{rcl}
P (1/2, 0) & = & (1/2) \,Q (i_-, i_+) \vspace*{4pt} \\
& = & (1/2) \,\sum_{k > 0} \,((k - 2) \,\sum_{s : \ceil{s / \tau} = k} \,\mathbf{1} \{d (i_-, i_+) = s \}) \vspace*{4pt} \\
& \geq & (1/2) \,(3 - 2) \,\sum_{s > 2 \tau} \,\mathbf{1} \{d (i_-, i_+) = s \} \ = \ 1/2 \ > \ 0. \end{array} $$
Therefore, according to Lemma~\ref{lem:expected-weight}, there is fixation of the one-dimensional process starting from any product
measure whose densities are in some neighborhood of
$$ \rho_{i_-} = \rho_{i_+} = 1/2 \quad \hbox{and} \quad \rho_i = 0 \quad \hbox{for all} \quad i \notin \{i_-, i_+ \}. $$
This proves the second part of Theorem~\ref{th:fixation}.
\end{demo}
\section{Proof of Theorem~\ref{th:dist-reg} (distance-regular graphs)}
\label{sec:dist-reg}
\indent To explain the intuition behind the proof, recall that, when an active pile of size~$s_-$ jumps to the right onto a frozen
pile of size~$s_+$ at edge~$e$, the size of the latter pile becomes
$$ \xi_t (e) \ = \ d (\eta_t (e - 1/2), \eta_t (e + 1/2)) \ = \ d (\eta_{t-} (e - 3/2), \eta_{t-} (e + 1/2)) $$
and the triangle inequality implies that
\begin{equation}
\label{eq:triangle}
s_+ - s_- \ = \ \xi_{t-} (e) - \xi_{t-} (e - 1) \ \leq \ \xi_t (e) \ \leq \ \xi_{t-} (e) + \xi_{t-} (e - 1) \ = \ s_+ + s_-.
\end{equation}
The exact distribution of the new size cannot be deduced in general from the size of the intersecting piles, indicating that
the system of piles is not Markov.
The key to the proof is that, at least when the underlying opinion graph is distance-regular, the system of piles becomes Markov.
The first step is to show that, for all opinion graphs, the opinions on the left and on the right of a pile of size~$s$ are
conditioned to be at distance~$s$ of each other but are otherwise independent, which follows from the fact that both opinions
originate from two different ancestors at time zero, and the fact that the initial distribution is a product measure.
If in addition the opinion graph is distance-regular then the number of possible opinions on the left and on the right of the pile,
which is also the number of pairs of opinions at distance~$s$ of each other, does not depend on the actual opinion on the left of
the pile.
This implies that, at least in theory, the new size distribution of a pile right after a collision can be computed explicitly.
This is then used to prove that a jump of an active pile onto a pile of order~$n > 1$ reduces its order with probability at most
$$ \begin{array}{l} p_n \ = \ \max \,\{\sum_{s : \ceil{s / \tau} = n - 1} f (s_-, s_+, s) / h (s_+) : \ceil{s_- / \tau} = 1 \ \hbox{and} \ \ceil{s_+ / \tau} = n \} \end{array} $$
while it increases its order with probability at least
$$ \begin{array}{l} q_n \ = \ \,\min \,\{\sum_{s : \ceil{s / \tau} = n + 1} f (s_-, s_+, s) / h (s_+) : \ceil{s_- / \tau} = 1 \ \hbox{and} \ \ceil{s_+ / \tau} = n \}. \end{array} $$
In particular, the number of active piles that need to be sacrificed to turn a frozen pile into an active pile is stochastically
larger than the hitting time to state~1 of a certain discrete-time birth and death process.
To turn this into a proof, we let~$x = e - 1/2$ and
$$ x - 1 \to_t x \ := \ \hbox{the event that there is an arrow~$x - 1 \to x$ at time~$t$}. $$
Then, we have the following lemma.
\begin{lemma} --
\label{lem:collision}
Assume~\eqref{eq:uniform} and~\eqref{eq:dist-reg-1}.
For all~$s \geq 0$ and $s_-, s_+ > 0$ with~$s_- \leq \tau$,
$$ \begin{array}{l} P \,(\xi_t (e) = s \,| \,(\xi_{t-} (e - 1), \xi_{t-} (e)) = (s_-, s_+) \ \hbox{and} \ x - 1 \to_t x) \ = \ f (s_-, s_+, s) / h (s_+). \end{array} $$
\end{lemma}
\begin{proof}
The first step is similar to the proof of~\cite[Lemma~3]{lanchier_scarlatos_2013}.
Due to one-dimensional nearest neighbor interactions, active paths cannot cross each other.
In particular, the opinion dynamics preserve the ordering of the ancestral lineages therefore
\begin{equation}
\label{eq:collision-1}
a (x - 1, t-) \ \leq \ a (x, t-) \ \leq \ a (x + 1, t-)
\end{equation}
where~$a (z, t-)$ refers to the ancestor of~$(z, t-)$, i.e., the unique source at time zero of an active path reaching space-time point~$(z, t-)$.
Since in addition~$s_-, s_+ > 0$, given the conditioning in the statement of the lemma, the individuals at~$x$ and~$x \pm 1$ must disagree at
time~$t-$ and so must have different ancestors.
This together with~\eqref{eq:collision-1} implies that
\begin{equation}
\label{eq:collision-2}
a (x - 1, t-) \ < \ a (x, t-) \ < \ a (x + 1, t-).
\end{equation}
Now, we fix~$i_-, j \in V$ such that~$d (i_-, j) = s_-$ and let
$$ B_{t-} (i_-, j) \ := \ \{\eta_{t-} (x - 1) = i_- \ \hbox{and} \ \eta_{t-} (x) = j \}. $$
Then, given this event and the conditioning in the statement of the lemma, the probability that the pile of particles at~$e$ becomes of size~$s$
is equal to
\begin{equation}
\label{eq:collision-3}
\begin{array}{l}
P \,(\xi_t (e) = s \,| \,B_{t-} (i_-, j) \ \hbox{and} \ \xi_{t-} (e) = s_+ \ \hbox{and} \ x - 1 \to_t x) \vspace*{4pt} \\ \hspace*{25pt} = \
P \,(d (i_-, \eta_{t-} (x + 1)) = s \,| \,B_{t-} (i_-, j) \ \hbox{and} \vspace*{4pt} \\ \hspace*{100pt}
d (j, \eta_{t-} (x + 1)) = s_+ \ \hbox{and} \ x - 1 \to_t x) \vspace*{4pt} \\ \hspace*{25pt} = \
\card \{i_+ : d (i_-, i_+) = s \ \hbox{and} \ d (i_+, j) = s_+ \} / \card \{i_+ : d (i_+, j) = s_+ \} \end{array}
\end{equation}
where the last equality follows from~\eqref{eq:uniform} and~\eqref{eq:collision-2} which, together, imply that the opinion at~$x + 1$ just before
the jump is independent of the other opinions on its left and chosen uniformly at random from the set of opinions at distance~$s_+$ of opinion~$j$.
Assuming in addition that the underlying opinion graph is distance-regular~\eqref{eq:dist-reg-1}, we also have
\begin{equation}
\label{eq:collision-4}
\begin{array}{l}
\card \{i_+ : d (i_-, i_+) = s \ \hbox{and} \ d (i_+, j) = s_+ \} \vspace*{4pt} \\ \hspace*{50pt} = \
N (\Gamma, (i_-, s), (j, s_+)) \ = \ f (s_-, s_+, s) \vspace*{8pt} \\
\card \{i_+ : d (i_+, j) = s_+ \} \ = \ N (\Gamma, (j, s_+)) \ = \ h (s_+). \end{array}
\end{equation}
In particular, the conditional probability in~\eqref{eq:collision-3} does not depend on the particular choice of the pair of opinions~$i_-$ and~$j$, from
which it follows that
\begin{equation}
\label{eq:collision-5}
\begin{array}{l}
P \,(\xi_t (e) = s \,| \,\xi_{t-} (e - 1) = s_- \ \hbox{and} \ \xi_{t-} (e) = s_+ \ \hbox{and} \ x - 1 \to_t x) \vspace*{4pt} \\ \hspace*{40pt} = \
P \,(\xi_t (e) = s \,| \,B_{t-} (i_-, j) \ \hbox{and} \ \xi_{t-} (e) = s_+ \ \hbox{and} \ x - 1 \to_t x). \end{array}
\end{equation}
The lemma is then a direct consequence of~\eqref{eq:collision-3}--\eqref{eq:collision-5}.
\end{proof} \\ \\
As previously mentioned, it follows from Lemma~\ref{lem:collision} that, provided the opinion model starts from a product measure in which
the density of each opinion is constant across space and the opinion graph is distance-regular, the system of piles itself is a Markov process.
Another important consequence is the following lemma, which gives bounds for the probabilities that the jump of an active pile onto a frozen
pile results in a reduction or an increase of its order.
\begin{lemma} --
\label{lem:jump}
Let~$x = e - 1/2$. Assume~\eqref{eq:uniform} and~\eqref{eq:dist-reg-1}. Then,
$$ \begin{array}{l}
P \,(\ceil{\xi_t (e) / \tau} < \ceil{\xi_{t-} (e) / \tau} \,| \,(\xi_{t-} (e - 1), \xi_{t-} (e)) = (s_-, s_+) \ \hbox{and} \ x - 1 \to_t x) \ \leq \ p_n \vspace*{4pt} \\
P \,(\ceil{\xi_t (e) / \tau} > \ceil{\xi_{t-} (e) / \tau} \,| \,(\xi_{t-} (e - 1), \xi_{t-} (e)) = (s_-, s_+) \ \hbox{and} \ x - 1 \to_t x) \ \geq \ q_n \end{array} $$
whenever~$0 < \ceil{s_- / \tau} = 1$ and~$\ceil{s_+ / \tau} = n > 1$.
\end{lemma}
\begin{proof}
Let~$p (s_-, s_+, s)$ be the conditional probability
$$ P \,(\xi_t (e) = s \,| \,(\xi_{t-} (e - 1), \xi_{t-} (e)) = (s_-, s_+) \ \hbox{and} \ x - 1 \to_t x) $$
in the statement of Lemma~\ref{lem:collision}.
Then, the probability that the jump of an active pile onto the pile of order~$n$ at edge~$e$ results in a reduction of its order is smaller than
\begin{equation}
\label{eq:jump-1}
\begin{array}{l} \max \,\{\sum_{s : \ceil{s / \tau} = n - 1} \,p (s_-, s_+, s) : \ceil{s_- / \tau} = 1 \ \hbox{and} \ \ceil{s_+ / \tau} = n \} \end{array}
\end{equation}
while the probability that the jump of an active pile onto the pile of order~$n$ at edge~$e$ results in an increase of its order is larger than
\begin{equation}
\label{eq:jump-2}
\begin{array}{l} \min \,\{\sum_{s : \ceil{s / \tau} = n + 1} \,p (s_-, s_+, s) : \ceil{s_- / \tau} = 1 \ \hbox{and} \ \ceil{s_+ / \tau} = n \}. \end{array}
\end{equation}
But according to Lemma~\ref{lem:collision}, we have
$$ p (s_-, s_+, s) \ = \ f (s_-, s_+, s) / h (s_+) $$
therefore~\eqref{eq:jump-1}--\eqref{eq:jump-2} are equal to~$p_n$ and~$q_n$, respectively.
\end{proof} \\ \\
We refer to Figure~\ref{fig:coupling} for a schematic illustration of the previous lemma.
In order to prove the theorem, we now use Lemmas~\ref{lem:collision}--\ref{lem:jump} to find a stochastic lower bound for the contribution of each edge.
To express this lower bound, we let~$X_t$ be the discrete-time birth and death Markov chain with transition probabilities
$$ p (n, n - 1) \ = \ p_n \qquad p (n, n) \ = \ 1 - p_n - q_n \qquad p (n, n + 1) \ = \ q_n $$
for all~$1 < n < M := \ceil{\mathbf{d} / \tau}$ and boundary conditions
$$ p (1, 1) \ = \ 1 \quad \hbox{and} \quad p (M, M - 1) \ = \ 1 - p (M, M) \ = \ p_M. $$
This process will allow us to retrace the history of a frozen pile until time~$T_e$ when it becomes an active pile.
To begin with, we use a first-step analysis to compute explicitly the expected value of the first hitting time to state~1 of the birth and death process.
\begin{lemma} --
\label{lem:hitting}
Let~$T_n := \inf \,\{t : X_t = n \}$. Then,
$$ E \,(T_1 \,| \,X_0 = k) \ = \ 1 + \mathbf W (k) \quad \hbox{for all} \quad 0 < k \leq M = \ceil{\mathbf{d} / \tau}. $$
\end{lemma}
\begin{proof}
Let~$\sigma_n := E \,(T_{n - 1} \,| \,X_0 = n)$.
Then, for all~$1 < n < M$,
$$ \begin{array}{rcl}
\sigma_n & = & p (n, n - 1) + (1 + \sigma_n) \,p (n, n) + (1 + \sigma_n + \sigma_{n + 1}) \,p (n, n + 1) \vspace*{3pt} \\
& = & p_n + (1 + \sigma_n)(1 - p_n - q_n) + (1 + \sigma_n + \sigma_{n + 1}) \,q_n \vspace*{3pt} \\
& = & p_n + (1 + \sigma_n)(1 - p_n) + q_n \,\sigma_{n + 1} \vspace*{3pt} \\
& = & 1 + (1 - p_n) \,\sigma_n + q_n \,\sigma_{n + 1} \end{array} $$
from which it follows, using a simple induction, that
\begin{equation}
\label{eq:hitting-1}
\begin{array}{rcl}
\sigma_n & = & 1 / p_n + \sigma_{n + 1} \,q_n / p_n \vspace*{4pt} \\
& = & 1 / p_n + q_n / (p_n \,p_{n + 1}) + \sigma_{n + 2} \,(q_n \,q_{n + 1}) / (p_n \,p_{n + 1}) \vspace*{4pt} \\
& = & \sum_{n \leq m < M} \,(q_n \cdots q_{m - 1}) / (p_n \cdots p_m) + \sigma_M \,(q_n \cdots q_{M - 1}) / (p_n \cdots p_{M - 1}). \end{array}
\end{equation}
Since~$p (M, M - 1) = 1 - p (M, M) = p_M$, we also have
\begin{equation}
\label{eq:hitting-2}
\sigma_M \ = \ E \,(T_{M - 1} \,| \,X_0 = M) \ = \ E \,(\geometric (p_M)) \ = \ 1 / p_M.
\end{equation}
Combining~\eqref{eq:hitting-1}--\eqref{eq:hitting-2}, we deduce that
$$ \begin{array}{l} \sigma_n \ = \ \sum_{n \leq m \leq M} \,(q_n \,q_{n + 1} \cdots q_{m - 1}) / (p_n \,p_{n + 1} \cdots p_m), \end{array} $$
which finally gives
$$ \begin{array}{rcl}
E \,(T_1 \,| \,X_0 = k) & = & \sum_{1 < n \leq k} \,E \,(T_{n - 1} \,| \,X_0 = n) \ = \ \sum_{1 < n \leq k} \,\sigma_n \vspace*{4pt} \\
& = & \sum_{1 < n \leq k} \,\sum_{n \leq m \leq M} \,(q_n \cdots q_{m - 1}) / (p_n \cdots p_m) \ = \ 1 + \mathbf W (k). \end{array} $$
This completes the proof.
\end{proof} \\ \\
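As a quick numerical sanity check of Lemma~\ref{lem:hitting}, the expected hitting time can also be estimated by direct simulation.
In the following Python sketch, the transition probabilities are arbitrary test values with~$M = 3$, not quantities derived from a particular opinion graph.
\begin{verbatim}
# Monte Carlo check of the identity E(T_1 | X_0 = k) = 1 + W(k)
# (illustration only; p_n, q_n below are arbitrary test values).
import random

M = 3
p = {2: 0.5, 3: 0.4}    # p(n, n - 1); recall p(M, M - 1) = p_M
q = {2: 0.3}            # p(n, n + 1) for 1 < n < M

def hitting_time(k):    # first hitting time of state 1 starting from k
    n, t = k, 0
    while n > 1:
        u = random.random()
        if n == M:
            n -= int(u < p[M])
        elif u < p[n]:
            n -= 1
        elif u < p[n] + q[n]:
            n += 1
        t += 1
    return t

def W(k):               # W(k) = -1 + sum_{1 < n <= k} sigma_n
    total = -1.0
    for n in range(2, k + 1):
        for m in range(n, M + 1):
            num, den = 1.0, 1.0
            for j in range(n, m):
                num *= q[j]
            for j in range(n, m + 1):
                den *= p[j]
            total += num / den
    return total

runs = 100000
for k in (2, 3):
    est = sum(hitting_time(k) for _ in range(runs)) / runs
    print(k, round(est, 3), 1 + W(k))   # estimate vs exact value
\end{verbatim}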
The next lemma gives a lower bound for the contribution~\eqref{eq:contribution-frozen} of an edge~$e$ that keeps track of the number
of active piles that jump onto~$e$ before the pile at~$e$ becomes active.
The key is to show how the number of jumps relates to the birth and death process.
Before stating our next result, we recall that~$T_e$ is the first time the pile of particles at edge~$e$ becomes active.
\begin{figure}[t]
\centering
\scalebox{0.45}{\input{coupling.pstex_t}}
\caption{\upshape{Schematic illustration of the coupling between the opinion model and the system of piles along with their evolution rules.
In our example, the threshold~$\tau = 2$, which makes piles with three or more particles frozen piles and piles with one or two particles active piles.}}
\label{fig:coupling}
\end{figure}
\begin{lemma} --
\label{lem:coupling}
Assume~\eqref{eq:uniform} and~\eqref{eq:dist-reg-1}.
Then, for~$1 < k \leq \ceil{\mathbf{d} / \tau}$,
$$ \begin{array}{l} E \,(\cont (e \,| \,T_e < \infty)) \ \geq \ \mathbf W (k) \quad \hbox{when} \quad \ceil{\xi_0 (e) / \tau} = k. \end{array} $$
\end{lemma}
\begin{proof}
Since active piles have at most~$\tau$ particles, the triangle inequality~\eqref{eq:triangle} implies that the jump of an active pile onto
a frozen pile can only increase or decrease its size by at most~$\tau$ particles, and therefore can only increase or decrease its order by at most one.
In particular,
$$ \begin{array}{l}
P \,(|\ceil{\xi_t (e) / \tau} - \ceil{\xi_{t-} (e) / \tau}| > 1 \,| \,x - 1 \to_t x) \ = \ 0. \end{array} $$
This, together with the bounds in Lemma~\ref{lem:jump} and the fact that the outcomes of consecutive jumps of active piles onto a frozen pile are independent
as explained in the proof of Lemma~\ref{lem:collision}, implies that the order of a frozen pile before it becomes active dominates stochastically the
state of the birth and death process~$X_t$ before it reaches state~1.
In particular,
$$ \begin{array}{l} E \,(\cont (e \,| \,T_e < \infty)) \ \geq \ - 1 + E \,(T_1 \,| \,X_0 = k) \quad \hbox{when} \quad \ceil{\xi_0 (e) / \tau} = k. \end{array} $$
Using Lemma~\ref{lem:hitting}, we conclude that
$$ \begin{array}{l} E \,(\cont (e \,| \,T_e < \infty)) \ \geq \ - 1 + (1 + \mathbf W (k)) \ = \ \mathbf W (k) \end{array} $$
whenever~$\ceil{\xi_0 (e) / \tau} = k$.
\end{proof} \\ \\
We now have all the necessary tools to prove the theorem.
The key idea is the same as in the proof of Lemma~\ref{lem:expected-weight} but relies on the previous lemma in place of Lemma~\ref{lem:deterministic}. \\ \\
\begin{demo}{Theorem~\ref{th:dist-reg}} --
Assume~\eqref{eq:uniform} and~\eqref{eq:dist-reg-1} and
$$ \begin{array}{l} S_{\reg} (\Gamma, \tau) \ = \ \sum_{k > 0} \,(\mathbf W (k) \,\sum_{s : \ceil{s / \tau} = k} \,h (s)) \ > \ 0. \end{array} $$
Since the opinion graph is distance-regular,
$$ \begin{array}{rcl}
P \,(\xi_0 (e) = s) & = & \sum_{i \in V} \,P \,(\xi_0 (e) = s \,| \,\eta_0 (e - 1/2) = i) \,P \,(\eta_0 (e - 1/2) = i) \vspace*{4pt} \\
& = & \sum_{i \in V} \,F^{-1} \,\card \{j \in V : d (i, j) = s \} \ P \,(\eta_0 (e - 1/2) = i) \vspace*{4pt} \\
& = & \sum_{i \in V} \,F^{-1} \,h (s) \,P \,(\eta_0 (e - 1/2) = i) \ = \ F^{-1} \,h (s). \end{array} $$
Using also Lemma~\ref{lem:coupling}, we get
$$ \begin{array}{rcl}
E \,(\cont (e \,| \,T_e < \infty)) & \geq & \sum_{k > 0} \,\mathbf W (k) \,P \,(\ceil{\xi_0 (e) / \tau} = k) \vspace*{4pt} \\
& = & \sum_{k > 0} \,\mathbf W (k) \,P \,((k - 1) \tau < \xi_0 (e) \leq k \tau) \vspace*{4pt} \\
& = & \sum_{k > 0} \,\mathbf W (k) \,\sum_{s : \ceil{s / \tau} = k} \,F^{-1} \,h (s) \vspace*{4pt} \\
& = & F^{-1} \,S_{\reg} (\Gamma, \tau) \ > \ 0. \end{array} $$
Now, let~$\mathbf W_e$ be the collection of random variables
$$ \begin{array}{l} \mathbf W_e \ := \ \sum_{k > 0} \,\mathbf W (k) \,\mathbf{1} \{\ceil{\xi_0 (e) / \tau} = k \} \quad \hbox{for all} \quad e \in \mathbb{Z} + 1/2. \end{array} $$
Using Lemma~\ref{lem:weight} and the fact that the number of collisions needed to turn a frozen pile into an active pile is independent for different
frozen piles, we deduce that, for~$\epsilon := E \mathbf W_e = F^{-1} \,S_{\reg} (\Gamma, \tau) > 0$, there exists~$c_{11} > 0$ such that
$$ \begin{array}{l}
P \,(\sum_{e \in (0, N)} \cont (e \,| \,T_e < \infty) \leq 0) \ \leq \
P \,(\sum_{e \in (0, N)} \mathbf W_e \leq 0) \vspace*{4pt} \\ \hspace*{40pt} \leq \
P \,(\sum_{e \in (0, N)} \,(\mathbf W_e - E \mathbf W_e) \notin (- \epsilon N, \epsilon N)) \ \leq \ \exp (- c_{11} N) \end{array} $$
for all~$N$ large.
This, together with~\eqref{eq:inclusion-1}, implies that
$$ \begin{array}{rcl}
P \,(H_N) & \leq &
P \,(\sum_{e \in (l, r)} \cont (e \,| \,T_e < \infty) \leq 0 \ \hbox{for some~$l < - N$ and~$r \geq 0$}) \vspace*{4pt} \\ & \leq &
\sum_{l < - N} \,\sum_{r \geq 0} \,\exp (- c_{11} \,(r - l)) \ \to \ 0 \end{array} $$
as~$N \to \infty$.
In particular, it follows from Lemma~\ref{lem:fixation-condition} that the process fixates.
\end{demo}
\section{Proof of Corollaries~\ref{cor:path}--\ref{cor:hypercube}}
\label{sec:graphs}
\indent This section is devoted to the proof of Corollaries~\ref{cor:path}--\ref{cor:hypercube} that give sufficient conditions
for fluctuation and fixation of the infinite system for the opinion graphs shown in Figure~\ref{fig:graphs}.
To begin with, we prove the fluctuation part of all the corollaries at once. \\ \\
\begin{demo}{Corollaries~\ref{cor:path}--\ref{cor:hypercube} (fluctuation)} --
We start with the tetrahedron.
In this case, the diameter equals one; therefore, whenever the threshold is positive, the system reduces to a four-opinion voter model,
which is known to fluctuate according to~\cite{arratia_1983}.
To deal with paths and stars, we recall that combining Theorem~\ref{th:fluctuation}a and Lemma~\ref{lem:partition} gives fluctuation
when~$\mathbf{r} \leq \tau$.
Recalling also the expression of the radius from Table~\ref{tab:summary} implies fluctuation when
$$ \begin{array}{rl}
F \leq 2 \tau + 1 & \hbox{for the path with~$F$ vertices} \vspace*{3pt} \\
r \leq \tau & \hbox{for the star with~$b$ branches of length~$r$}. \end{array} $$
For the other graphs, it suffices to find a partition that satisfies~\eqref{eq:fluctuation}.
For the remaining four regular polyhedra and the hypercubes, we observe that there is a unique vertex at distance~$\mathbf{d}$ of any
given vertex.
In particular, fixing an arbitrary vertex~$i_-$ and setting
$$ V_1 \ := \ \{i_-, i_+ \} \quad \hbox{and} \quad V_2 \ := \ V \setminus V_1 \quad \hbox{where} \quad d (i_-, i_+) = \mathbf{d} $$
defines a partition of the set of opinions such that
$$ d (i, j) \ \leq \ \mathbf{d} - 1 \quad \hbox{for all} \quad (i, j) \in V_1 \times V_2. $$
Recalling the expression of the diameter from Table~\ref{tab:summary} and using~Theorem~\ref{th:fluctuation}a give the fluctuation
parts of Corollaries~\ref{cor:polyhedron} and~\ref{cor:hypercube}.
Using the exact same approach implies fluctuation when the opinion graph is a cycle with an even number of vertices and~$F \leq 2 \tau + 2$.
For cycles with an odd number of vertices, we again use Lemma~\ref{lem:partition} to deduce fluctuation if
$$ \integer{F / 2} = \mathbf{r} \leq \tau \quad \hbox{if and only if} \quad F \leq 2 \tau + 1 \quad \hbox{if and only if} \quad F \leq 2 \tau + 2, $$
where the last equivalence is true because~$F$ is odd.
\end{demo} \\ \\
We now prove the fixation part of the corollaries using Theorems~\ref{th:fixation} and~\ref{th:dist-reg}.
The first two classes of graphs, paths and stars, are not distance-regular; therefore, to study the behavior of the
model for these opinion graphs, we rely on the first part of Theorem~\ref{th:fixation}. \\ \\
\begin{demo}{Corollary~\ref{cor:path} (path)} --
Assume that~$4 \tau < \mathbf{d} = F - 1 \leq 5 \tau$. Then,
$$ \begin{array}{rcl}
S (\Gamma, \tau) & = & \sum_{k > 0} \,((k - 2) \,\sum_{s : \ceil{s / \tau} = k} \,N (\Gamma, s)) \vspace*{4pt} \\
& = & \sum_{0 < k \leq 4} \,((k - 2) \,\sum_{s : \ceil{s / \tau} = k} \,2 \,(F - s)) + 3 \,\sum_{4 \tau < s \leq \mathbf{d}} \,2 \,(F - s) \vspace*{4pt} \\
& = & \sum_{0 < k \leq 4} \,((k - 2)(2 F \tau - (k \tau)(k \tau + 1) + ((k - 1) \,\tau)((k - 1) \,\tau + 1)) \vspace*{4pt} \\ && \hspace*{50pt} + \
3 \,(2F \,(F - 4 \tau - 1) - F \,(F - 1) + 4 \tau \,(4 \tau + 1)) \vspace*{4pt} \\
& = & 4 F \tau + \tau \,(\tau + 1) + 2 \tau \,(2 \tau + 1) + 3 \tau \,(3 \tau + 1) \vspace*{4pt} \\ && \hspace*{50pt} + \
4 \tau \,(4 \tau + 1) + 6 F \,(F - 4 \tau - 1) - 3 F \,(F - 1) \vspace*{4pt} \\
& = & 3 F^2 - (20 \tau + 3) \,F + 10 \,(3 \tau + 1) \,\tau. \end{array} $$
Since the largest root~$F_+ (\tau)$ of this polynomial satisfies
$$ 4 \tau \leq F_+ (\tau) - 1 = (1/6)(20 \,\tau + 3 + \sqrt{40 \,\tau^2 + 9}) - 1 \leq 5 \tau \quad \hbox{for all} \quad \tau \geq 1 $$
and since for any fixed~$\tau$ the function~$F \mapsto S (\Gamma, \tau)$ is nondecreasing, we deduce that fixation occurs under the assumptions of the corollary
according to Theorem~\ref{th:fixation}.
\end{demo} \\ \\
The case of the star with~$b$ branches of equal length~$r$ is more difficult mainly because there are two different expressions for the number of pairs of vertices at a
given distance of each other depending on whether the distance is smaller or larger than the branches' length.
In the next lemma, we compute the number of pairs of vertices at a given distance of each other, which we then use to find a condition for fixation when
the opinion graph is a star.
\begin{lemma} --
\label{lem:star}
For the star with~$b$ branches of length~$r$,
$$ \begin{array}{rclcl}
N (\Gamma, s) & = & b \,(2r + (b - 3)(s - 1)) & \hbox{for all} & s \in (0, r] \vspace*{3pt} \\
& = & b \,(b - 1)(2r - s + 1) & \hbox{for all} & s \in (r, 2r]. \end{array} $$
\end{lemma}
\begin{proof}
Let~$n_1 (s)$ and~$n_2 (s)$ be respectively the number of directed paths of length~$s$ embedded in a given branch of the star and the total
number of directed paths of length~$s$ embedded in a given pair of branches of the star.
Then, as in the proof of the corollary for paths,
$$ n_1 (s) = 2 \,(r + 1 - s) \quad \hbox{and} \quad n_2 (s) = 2 \,(2r + 1 - s) \quad \hbox{for all} \quad s \leq r. $$
Since there are~$b$ branches and $(1/2)(b - 1) \,b$ pairs of branches, and since self-avoiding paths embedded in the star cannot
intersect more than two branches, we deduce that
$$ \begin{array}{rcl}
N (\Gamma, s) & = & b \,n_1 (s) + ((1/2)(b - 1) \,b)(n_2 (s) - 2 n_1 (s)) \vspace*{4pt} \\
& = & 2b \,(r + 1 - s) + b \,(b - 1)(s - 1) \vspace*{4pt} \\
& = & b \,(2r + 2 \,(1 - s) + (b - 1)(s - 1)) \ = \ b \,(2r + (b - 3)(s - 1)) \end{array} $$
for all~$s \leq r$.
To deal with~$s > r$, we let~$o$ be the center of the star and observe that there is no vertex at distance~$s$ of vertices which are
close to the center whereas there are~$b - 1$ vertices at distance~$s$ from vertices which are far from the center.
More precisely,
$$ \begin{array}{rclcl}
\card \{j \in V : d (i, j) = s \} & = & 0 & \quad \hbox{when} & d (i, o) < s - r \vspace*{3pt} \\
\card \{j \in V : d (i, j) = s \} & = & b - 1 & \quad \hbox{when} & d (i, o) \geq s - r. \end{array} $$
The number of directed paths of length~$s$ is then given by
$$ \begin{array}{rcl}
N (\Gamma, s) & = & (b - 1) \,\card \{i \in V : d (i, o) \geq s - r \} \vspace*{4pt} \\
& = & b \,(b - 1)(r - (s - r - 1)) \ = \ b \,(b - 1)(2r - s + 1) \end{array} $$
for all~$s > r$.
This completes the proof of the lemma.
\end{proof} \\ \\
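As a sanity check, observe that, for~$b = 2$, the star reduces to a path with~$F = 2r + 1$ vertices and both expressions in the previous lemma reduce to~$N (\Gamma, s) = 2 \,(2r - s + 1) = 2 \,(F - s)$, in agreement with the count used in the proof of Corollary~\ref{cor:path}. \\ \\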
\begin{demo}{Corollary~\ref{cor:star} (star)} --
Assume that~$3 \tau < \mathbf{d} = 2r \leq 4 \tau$. Then,
$$ \begin{array}{rcl}
S (\Gamma, \tau) & = & \sum_{k > 0} \,((k - 2) \,\sum_{s : \ceil{s / \tau} = k} \,N (\Gamma, s)) \vspace*{4pt} \\
& = & - \ \sum_{0 < s \leq \tau} \,N (\Gamma, s) + \sum_{2 \tau < s \leq 3 \tau} \,N (\Gamma, s) + 2 \,\sum_{3 \tau < s \leq 2r} \,N (\Gamma, s). \end{array} $$
Since~$\tau < r \leq 2 \tau$, it follows from Lemma~\ref{lem:star} that
$$ \begin{array}{rcl}
S (\Gamma, \tau) & = & - \ \sum_{0 < s \leq \tau} \,b \,(2r + (b - 3)(s - 1)) \vspace*{4pt} \\ &&
+ \ \sum_{2 \tau < s \leq 3 \tau} \,b \,(b - 1)(2r - s + 1) + 2 \,\sum_{3 \tau < s \leq 2r} \,b \,(b - 1)(2r - s + 1) \vspace*{4pt} \\
& = & - \ b \,(2r - b + 3) \,\tau - (b/2)(b - 3) \,\tau \,(\tau + 1) \vspace*{4pt} \\ &&
+ \ b \,(b - 1)(2r + 1) \,\tau + (b/2)(b - 1)(2 \tau \,(2 \tau + 1) - 3 \tau \,(3 \tau + 1)) \vspace*{4pt} \\ &&
+ \ 2b \,(b - 1)(2r + 1)(2r - 3 \tau) + b \,(b - 1)(3 \tau \,(3 \tau + 1) - 2 r \,(2r + 1)). \end{array} $$
Expanding and simplifying, we get
$$ (1/b) \,S (\Gamma, \tau) \ = \ 4 \,(b - 1) \,r^2 + 2 \,((4 - 5b) \,\tau + b - 1) \,r + (6b - 5) \,\tau^2 + (1 - 2b) \,\tau. $$
As for paths, the result is a direct consequence of Theorem~\ref{th:fixation}.
\end{demo} \\ \\
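As a sanity check, for~$b = 2$, $r = 2$ and~$\tau = 1$, in which case the star is again a path with~$F = 5$ vertices, the expression above gives~$(1/b) \,S (\Gamma, \tau) = 16 - 20 + 7 - 3 = 0$, in agreement with the value obtained by computing~$S (\Gamma, 1)$ directly from its definition using~$N (\Gamma, s) = 2 \,(F - s)$.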
The remaining graphs in Figure~\ref{fig:graphs} are distance-regular, which makes Theorem~\ref{th:dist-reg} applicable.
Note that the conditions for fixation in the last three corollaries give minimal values for the confidence threshold that lie between one third and
one half of the diameter.
In particular, we apply the theorem in the special case when~$\ceil{\mathbf{d} / \tau} = 3$.
In this case, we have
$$ \mathbf W (1) \ = \ - 1 \qquad \mathbf W (2) \ = \ \mathbf W (1) + (1 / p_2)(1 + q_2 / p_3) \qquad \mathbf W (3) \ = \ \mathbf W (2) + 1 / p_3 $$
so the left-hand side of~\eqref{eq:th-dist-reg} becomes
\begin{equation}
\label{eq:common}
\begin{array}{rcl}
S_{\reg} (\Gamma, \tau) & = & \sum_{0 < k \leq 3} \,(\mathbf W (k) \,\sum_{s : \ceil{s / \tau} = k} \,h (s)) \vspace*{4pt} \\ & = &
- \ (h (1) + h (2) + \cdots + h (\mathbf{d})) \vspace*{4pt} \\ &&
+ \ (1/p_2)(1 + q_2 / p_3)(h (\tau + 1) + h (\tau + 2) + \cdots + h (\mathbf{d})) \vspace*{4pt} \\ &&
+ \ (1/p_3)(h (2 \tau + 1) + h (2 \tau + 2) + \cdots + h (\mathbf{d})). \end{array}
\end{equation}
This expression is used repeatedly to prove the remaining corollaries. \\ \\
\begin{demo}{Corollary~\ref{cor:polyhedron} (cube)} --
When~$\Gamma$ is the cube and~$\tau = 1$, we have
$$ p_2 \ = \ f (1, 2, 1) / h (2) \ = \ 2/3 \quad \hbox{and} \quad q_2 \ = \ f (1, 2, 3) / h (2) \ = \ 1/3 $$
which, together with~\eqref{eq:common} and the fact that~$p_3 \leq 1$, implies that
$$ \begin{array}{rcl}
S_{\reg} (\Gamma, 1) & \geq & - \ (h (1) + h (2) + h (3)) + (1/p_2)(1 + q_2)(h (2) + h (3)) + h (3) \vspace*{4pt} \\
& = & - \ (3 + 3 + 1) + (3/2)(1 + 1/3)(3 + 1) + 1 \ = \ 2 \ > \ 0. \end{array} $$
This proves fixation according to Theorem~\ref{th:dist-reg}.
\end{demo} \\ \\
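To illustrate where these values of~$f$ come from, write the vertices of the cube as binary triplets and take~$j = 000$ and~$i_- = 100$: the~$h (2) = 3$ vertices at distance two from~$j$ are~$110$, $101$ and~$011$, the first two being at distance one and the third at distance three from~$i_-$, so that~$f (1, 2, 1) = 2$ and~$f (1, 2, 3) = 1$. \\ \\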
\begin{demo}{Corollary~\ref{cor:polyhedron} (icosahedron)} --
When~$\Gamma$ is the icosahedron and~$\tau = 1$,
$$ p_2 \ = \ f (1, 2, 1) / h (2) \ = \ 2/5 \qquad \hbox{and} \qquad q_2 \ = \ f (1, 2, 3) / h (2) \ = \ 1/5. $$
Using in addition~\eqref{eq:common} and the fact that~$p_3 \leq 1$, we obtain
$$ \begin{array}{rcl}
S_{\reg} (\Gamma, 1) & \geq & - \ (h (1) + h (2) + h (3)) + (1/p_2)(1 + q_2)(h (2) + h (3)) + h (3) \vspace*{4pt} \\
& = & - \ (5 + 5 + 1) + (5/2)(1 + 1/5)(5 + 1) + 1 \ = \ 8 \ > \ 0 \end{array} $$
which, according to Theorem~\ref{th:dist-reg}, implies fixation.
\end{demo} \\ \\
\begin{demo}{Corollary~\ref{cor:polyhedron} (dodecahedron)} --
Fixation of the opinion model when the threshold equals one directly follows from Theorem~\ref{th:fixation} since in this case
$$ \begin{array}{rcl}
F^{-1} \,S (\Gamma, 1) & = & (1/20)(- h (1) + h (3) + 2 \,h (4) + 3 \,h (5)) \vspace*{3pt} \\
& = & (1/20)(- 3 + 6 + 2 \times 3 + 3 \times 1) \ = \ 3/5 \ > \ 0. \end{array} $$
However, when the threshold~$\tau = 2$,
$$ \begin{array}{rcl}
F^{-1} \,S (\Gamma, 2) & = & (1/20)(- h (1) - h (2) + h (5)) \vspace*{3pt} \\
& = & (1/20)(- 3 - 6 + 1) \ = \ - 2/5 \ < \ 0 \end{array} $$
so we use Theorem~\ref{th:dist-reg} instead: when~$\tau = 2$, we have
$$ \begin{array}{rcl}
p_2 & = & \max \,\{\sum_{s = 1, 2} f (s_-, s_+, s) / h (s_+) : s_- = 1, 2 \ \hbox{and} \ s_+ = 3, 4 \} \vspace*{4pt} \\
& = & \max \,\{f (1, 3, 2) / h (3), (f (2, 3, 2) + f (2, 3, 1)) / h (3), f (2, 4, 2) / h (4) \} \vspace*{4pt} \\
& = & \max \,\{2/6, (2 + 1) / 6, 1/3 \} \ = \ 1/2. \end{array} $$
In particular, using~\eqref{eq:common} and the fact that~$p_3 \leq 1$ and~$q_2 \geq 0$, we get
$$ \begin{array}{rcl}
S_{\reg} (\Gamma, 2) & \geq & - \ (h (1) + h (2) + h (3) + h (4) + h (5)) \vspace*{4pt} \\ && \hspace*{25pt} + \
(1/p_2)(h (3) + h (4) + h (5)) + h (5) \vspace*{4pt} \\
& = & - \ (3 + 6 + 6 + 3 + 1) + 2 \times (6 + 3 + 1) + 1 \ = \ 2 \ > \ 0, \end{array} $$
which again gives fixation.
\end{demo} \\ \\
\begin{demo}{Corollary~\ref{cor:cycle} (cycle)} --
Regardless of the parity of~$F$,
\begin{equation}
\label{eq:cycle-1}
\begin{array}{rclclcl}
f (s_-, s_+, s) & = & 0 & \hbox{when} & s_- \leq s_+ \leq \mathbf{d} & \hbox{and} & s \neq s_+ - s_- \ \hbox{with} \ s < \min \,(s_+ + s_-, F - s_+ - s_-) \vspace*{2pt} \\
f (s_-, s_+, s) & = & 1 & \hbox{when} & s_- \leq s_+ \leq \mathbf{d} & \hbox{and} & s = s_+ - s_- \end{array}
\end{equation}
while the number of vertices at distance~$s_+$ of a given vertex is
\begin{equation}
\label{eq:cycle-2}
h (s_+) = 2 \ \ \hbox{for all} \ \ s_+ < F/2 \quad \hbox{and} \quad h (s_+) = 1 \ \ \hbox{when} \ \ s_+ = F/2 \in \mathbb{N}.
\end{equation}
Assume that~$F = 4 \tau + 2$.
Then, $\mathbf{d} = 2 \tau + 1$ and, for all~$s_- \leq \tau$ and~$\tau < s_+ \leq 2 \tau$, we have~$\min \,(s_+ + s_-, F - s_+ - s_-) \geq \tau + 2 > \tau$, so the only possible size~$s \leq \tau$ resulting from a collision is~$s = s_+ - s_-$ and it follows from~\eqref{eq:cycle-1}--\eqref{eq:cycle-2} that
$$ \begin{array}{rcl}
p_2 & = & \max \,\{\sum_{s : \ceil{s / \tau} = 1} f (s_-, s_+, s) / h (s_+) : \ceil{s_- / \tau} = 1 \ \hbox{and} \ \ceil{s_+ / \tau} = 2 \} \vspace*{3pt} \\
& = & \max \,\{f (s_-, s_+, s_+ - s_-) / h (s_+) : \ceil{s_- / \tau} = 1 \ \hbox{and} \ \ceil{s_+ / \tau} = 2 \} \vspace*{3pt} \\
& = & \max \,\{f (s_-, s_+, s_+ - s_-) / h (s_+) : \ceil{s_+ / \tau} = 2 \} \ = \ 1/2. \end{array} $$
Using in addition that~$p_3 \leq 1$ and~$q_2 \geq 0$ together with~\eqref{eq:common}, we get
$$ \begin{array}{rcl}
S_{\reg} (\Gamma, \tau) & \geq & - \ (h (1) + h (2) + \cdots + h (2 \tau + 1)) \vspace*{4pt} \\ &&
+ \ (1/p_2)(h (\tau + 1) + h (\tau + 2) + \cdots + h (2 \tau + 1)) + h (2 \tau + 1) \vspace*{4pt} \\
& = & - \ (4 \tau + 1) + 2 \times (2 \tau + 1) + 1 \ = \ 2 \ > \ 0. \end{array} $$
In particular, the corollary follows from Theorem~\ref{th:dist-reg}.
\end{demo} \\ \\
\begin{demo}{Corollary~\ref{cor:hypercube} (hypercube)} --
The first part of the corollary has been explained heuristically in~\cite{adamopoulos_scarlatos_2012}.
To turn it into a proof, we first observe that opinions on the hypercube can be represented by vectors with coordinates equal to zero or one while the distance
between two opinions is the number of coordinates the two corresponding vectors disagree on.
In particular, the number of opinions at distance~$s$ of a given opinion, namely~$h (s)$, is equal to the number of subsets of size~$s$ of a set of size~$d$.
Therefore, we have the symmetry property
\begin{equation}
\label{eq:hypercube-1}
h (s) \ = \ {d \choose s} \ = \ {d \choose d - s} \ = \ h (d - s) \quad \hbox{for} \quad s = 0, 1, \ldots, d,
\end{equation}
from which it follows that, for~$d = 3 \tau + 1$,
$$ \begin{array}{rcl}
2^{-d} \,S (\Gamma, \tau) & = & - \ h (1) - \cdots - h (\tau) + h (2 \tau + 1) + \cdots + h (d - 1) + 2 \,h (d) \vspace*{3pt} \\
& = & h (d - 1) - h (1) + h (d - 2) - h (2) + \cdots + h (d - \tau) - h (\tau) + 2 \,h (d) \vspace*{3pt} \\
& = & 2 \,h (d) \ = \ 2 \ > \ 0. \end{array} $$
Since in addition the function~$d \mapsto S (\Gamma, \tau)$ is nondecreasing, a direct application of Theorem~\ref{th:fixation} gives the first part of the corollary.
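For instance, for~$\tau = 1$ and~$d = 4$, the expression above reduces to~$2^{-4} \,S (\Gamma, 1) = - h (1) + h (3) + 2 \,h (4) = - 4 + 4 + 2 = 2 = 2 \,h (d)$, as expected.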
The second part is more difficult.
Note that, to prove this part, it suffices to show that, for any fixed~$\sigma > 0$, fixation occurs when
\begin{equation}
\label{eq:hypercube-2}
d \ = \ (2 + 3 \sigma) \,\tau \quad \hbox{and} \quad \tau \ \ \hbox{is large}.
\end{equation}
The main difficulty is to find a good upper bound for~$p_2$ which relies on properties of the hypergeometric random variable.
Let~$u$ and~$v$ be two opinions at distance~$s_-$ of each other.
By symmetry, we may assume without loss of generality that both vectors disagree on their first~$s_-$ coordinates.
Since both vectors disagree on their first~$s_-$ coordinates and agree on the remaining~$d - s_-$ coordinates, a vector~$w$ that disagrees with~$u$ on
exactly~$a$ of the first~$s_-$ coordinates and on~$b$ of the remaining coordinates must disagree with~$v$ on the other~$s_- - a$ of the first coordinates and on the same~$b$ remaining coordinates.
In particular, choosing a vector~$w$ such that
$$ d (u, w) \ = \ s_+ \quad \hbox{and} \quad d (v, w) \ = \ s $$
is equivalent to choosing~$a$ of the first~$s_-$ coordinates and then choosing~$b$ of the remaining~$d - s_-$ coordinates with the following constraint:
$$ a + b \ = \ s_+ \quad \hbox{and} \quad (s_- - a) + b \ = \ s. $$
In particular, letting~$K := \ceil{(1/2)(s_- + s_+ - \tau)}$, we have
$$ \sum_{s = 1}^{\tau} \ f (s_-, s_+, s) \ = \ \sum_{a = K}^{s_-} {s_- \choose a}{d - s_- \choose s_+ - a} \ = \ h (s_+) \,P \,(Z \geq K) $$
where~$Z = \hypergeometric (d, s_-, s_+)$.
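As an illustration of this counting identity, take~$d = 4$, $s_- = 1$, $s_+ = 2$ and~$\tau = 1$, so that~$K = 1$: both sides are equal to~${1 \choose 1}{3 \choose 1} = 3 = {4 \choose 2} \times P \,(Z \geq 1)$ since~$P \,(Z \geq 1) = 1 - {3 \choose 2} / {4 \choose 2} = 1/2$.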
In order to find an upper bound for~$p_2$ and deduce fixation, we first prove the following lemma about the hypergeometric random variable.
\begin{lemma} --
Assume~\eqref{eq:hypercube-2}, that~$\ceil{s_- / \tau} = 1$ and~$\ceil{s_+ / \tau} = 2$. Then,
$$ P \,(Z \geq K) \ = \ \sum_{a = K}^{s_-} {s_- \choose a}{d - s_- \choose s_+ - a}{d \choose s_+}^{-1} \leq \ 1/2. $$
\end{lemma}
\begin{proof}
The proof is made challenging by the fact that there is no explicit expression for the cumulative distribution function of the hypergeometric random variable
and the idea is to use a combination of symmetry arguments and large deviation estimates.
Symmetry is used to prove the result when~$s_-$ is small while large deviation estimates are used for larger values.
Note that the result is trivial when~$s_+ > s_- + \tau$ since in this case the sum in the statement of the lemma is empty and therefore equal to zero.
To prove the result when the sum is nonempty, we distinguish two cases. \vspace*{5pt} \\
{\bf Small active piles} -- Assume that~$s_- < \sigma \tau$. Then,
\begin{equation}
\label{eq:hypergeometric-1}
\begin{array}{rcl}
s_+ & \leq & s_- + \tau < (1 + \sigma) \,\tau \ = \ (1/2)(d - \sigma \tau) \ < \ (1/2)(d - s_-) \vspace*{3pt} \\
K & \geq & (1/2)(s_- + s_+ - \tau) \ > \ s_- / 2 \ > \ s_- - K \end{array}
\end{equation}
from which it follows that
\begin{equation}
\label{eq:hypergeometric-2}
{s_- \choose a}{d - s_- \choose s_+ - a} \ \leq \ {s_- \choose a}{d - s_- \choose s_+ - s_- + a} \quad \hbox{for all} \quad K \leq a \leq s_-.
\end{equation}
Using~\eqref{eq:hypergeometric-2} and again the second part of~\eqref{eq:hypergeometric-1}, we deduce that
$$ \begin{array}{rcl}
h (s_+) \,P \,(Z \geq K) & = & \displaystyle \sum_{a = K}^{s_-} {s_- \choose a}{d - s_- \choose s_+ - a}
\ \leq \ \displaystyle \sum_{a = K}^{s_-} {s_- \choose a}{d - s_- \choose s_+ - s_- + a} \vspace*{4pt} \\
& = & \displaystyle \sum_{a = 0}^{s_- - K} {s_- \choose s_- - a}{d - s_- \choose s_+ - a}
\ \leq \ \displaystyle \sum_{a = 0}^{K - 1} {s_- \choose a}{d - s_- \choose s_+ - a}. \end{array} $$
In particular, we have~$P \,(Z \geq K) \leq P \,(Z < K)$, which gives the result. \vspace*{5pt} \\
{\bf Larger active piles} -- Assume that~$\sigma \tau \leq s_- \leq \tau$.
In this case, the result is a consequence of the following large deviation estimates for the hypergeometric random variable:
\begin{equation}
\label{eq:hypergeometric-3}
P \,\bigg(Z \geq \bigg(\frac{s_-}{d} + \epsilon \bigg) \,s_+ \bigg) \ \leq \ \bigg(\bigg(\frac{s_-}{s_- + \epsilon d} \bigg)^{s_- / d + \epsilon} \bigg(\frac{d - s_-}{d - s_- - \epsilon d} \bigg)^{1 - s_- / d - \epsilon} \bigg)^{s_+}
\end{equation}
for all~$0 < \epsilon < 1 - s_- / d$, which can be found in~\cite{hoeffding_1963}.
Note that
$$ \begin{array}{rcl}
d \,(s_+ + s_- - \tau) - 2 s_+ \,s_- & = & (d - 2 s_-) \,s_+ + d \,(s_- - \tau) \vspace*{3pt} \\
& \geq & (d - 2 s_-)(\tau + 1) + d \,(s_- - \tau) \ \geq \ (d - 2 \tau) \,s_- \vspace*{3pt} \\
& = & 3 \sigma \tau s_- \ = \ (3 \sigma \tau / 2 s_+)(2 s_+ \,s_-) \ \geq \ (3 \sigma / 4)(2 s_+ \,s_-) \end{array} $$
for all~$\tau < s_+ \leq 2 \tau$.
It follows that
$$ K \ \geq \ \frac{s_+ + s_- - \tau}{2} \ \geq \ \bigg(1 + \frac{3 \sigma}{4} \bigg) \,\frac{s_+ \,s_-}{d} \ = \ \bigg(\frac{s_-}{d} + \frac{3 \sigma s_-}{4d} \bigg) \,s_+ \ \geq \ \bigg(\frac{s_-}{d} + \frac{\sigma^2}{3} \bigg) \,s_+ $$
which, together with~\eqref{eq:hypergeometric-3} for~$\epsilon = \sigma^2 / 3$, gives
$$ \begin{array}{rcl}
P \,(Z \geq K) & \leq &
\displaystyle P \,\bigg(Z \geq \bigg(\frac{s_-}{d} + \epsilon \bigg) \,s_+ \bigg) \ \leq \ \bigg(\frac{s_-}{s_- + \epsilon d} \bigg)^{s_+ s_- / d} \vspace*{8pt} \\ & \leq &
\displaystyle \bigg(\frac{3 s_-}{3 s_- + \sigma^2 d} \bigg)^{s_+ s_- / d} \leq \ \bigg(\frac{3}{3 + 2 \sigma^2} \bigg)^{(\sigma / 3) \,s_+} \leq \ \bigg(\frac{3}{3 + 2 \sigma^2} \bigg)^{(\sigma / 3) \,\tau}. \end{array} $$
Since this tends to zero as~$\tau \to \infty$, the proof is complete.
\end{proof} \\ \\
It directly follows from the lemma that
$$ \begin{array}{l}
p_2 \ = \ \max \,\{\sum_{s : \ceil{s / \tau} = 1} f (s_-, s_+, s) / h (s_+) : \ceil{s_- / \tau} = 1 \ \hbox{and} \ \ceil{s_+ / \tau} = 2 \} \ \leq \ 1/2. \end{array} $$
This, together with~\eqref{eq:common} and~$p_3 \leq 1$ and~$q_2 \geq 0$, implies that
$$ \begin{array}{rcl}
S_{\reg} (\Gamma, \tau) & \geq & - \ h (1) - \cdots - h (d) + (1/p_2) \,h (\tau + 1) + \cdots + (1/p_2) \,h (d) \vspace*{3pt} \\
& \geq & - \ h (1) - \cdots - h (d) + 2 \,h (\tau + 1) + \cdots + 2 \,h (d) \vspace*{3pt} \\
& = & - \ h (1) - \cdots - h (\tau) + h (\tau + 1) + \cdots + h (d). \end{array} $$
Finally, using again~\eqref{eq:hypercube-1} and the fact that~$d > 2 \tau$, we deduce that
$$ \begin{array}{rcl}
S_{\reg} (\Gamma, \tau) & \geq & - \ h (1) - \cdots - h (\tau) + h (\tau + 1) + \cdots + h (d) \vspace*{3pt} \\
& \geq & h (d - 1) - h (1) + h (d - 2) - h (2) + \cdots + h (d - \tau) - h (\tau) + h (d) \vspace*{3pt} \\
& = & h (d) \ = \ 1 \ > \ 0. \end{array} $$
The corollary follows once more from Theorem~\ref{th:dist-reg}.
\end{demo}
\section{Introduction}
\label{sec1}
\IEEEPARstart{W}{ith} the development of the internet of vehicles (IoV) and cloud computing, caching technology facilitates various real-time vehicular applications for vehicular users (VUs), such as automatic navigation, pattern recognition and multimedia entertainment \cite{Liuchen2021} \cite{QWu2022}. In the standard caching technology, the cloud caches various contents like data, video and web pages. In this scheme, vehicles transmit requests for the required contents to a macro base station (MBS) connected to a cloud server and then fetch the contents from the MBS, which causes a high content transmission delay from the MBS to vehicles due to the communication congestion caused by frequent content requests from vehicles \cite{Dai2019}. The content transmission delay can be effectively reduced by the emergence of vehicular edge computing (VEC), which caches contents in the road side unit (RSU) deployed at the edge of vehicular networks (VNs) \cite{Javed2021}. Thus, vehicles can fetch contents directly from the local RSU to reduce the content transmission delay. In the VEC, since the caching capacity of the local RSU is limited, if some vehicles cannot fetch their required contents there, a neighboring RSU that has cached the required contents can forward them to the local RSU. In the worst case, vehicles need to fetch contents from the MBS when neither the local RSU nor the neighboring RSU has cached the requested contents.
In the VEC, it is critical to design a caching scheme to cache the popular contents. The traditional caching schemes cache contents based on the previously requested contents \cite{Narayanan2018}. However, owing to the high-mobility characteristics of vehicles in VEC, the previously requested contents may become outdated quickly; thus, the traditional caching schemes may not satisfy all the VUs' requirements. Therefore, it is necessary to predict the most popular contents in the VEC and cache them in the suitable RSUs in advance. Machine learning (ML), as a new tool, can extract hidden features by training user data to efficiently predict popular contents \cite{Yan2019}. However, the user data usually contains private information and users are reluctant to share their data directly with others, which makes it difficult to collect and train users' data. Federated learning (FL) can protect the privacy of users by sharing their local models instead of their data \cite{Chen2021}. In traditional FL, the global model is periodically updated by aggregating all vehicles' local models \cite{Wang2020}--\cite{Cheng2021}. However, vehicles may frequently drive out of the coverage area of the VEC before they update their local models, and thus the local models cannot be uploaded in the same area, which would reduce the accuracy of the global model as well as the probability of obtaining the predicted popular contents. Hence, we are motivated to consider the mobility of vehicles and propose an asynchronous FL scheme to accurately predict the popular contents in VEC.
Generally, the predicted popular contents should be cached in the local RSU of the vehicles to guarantee a low content transmission delay. However, the caching capacity of each local RSU is limited and the popular contents may be diverse; thus, the size of the predicted popular contents usually exceeds the cache capacity of the local RSU. Hence, the VEC has to determine where the predicted popular contents are cached and updated. The content transmission delay is an important metric for vehicles to provide real-time vehicular applications. The different popular contents cached in the local and neighboring RSUs would impact the way vehicles fetch contents, and thus affect the content transmission delay. In addition, the content transmission delay of each vehicle is impacted by its channel condition, which is affected by vehicle mobility. Therefore, it is necessary to consider the mobility of vehicles to design a cooperative caching scheme, in which the predicted popular contents can be cached among RSUs to optimize the content transmission delay. In contrast to conventional decision algorithms, deep reinforcement learning (DRL) is a favorable tool to construct the decision-making framework and optimize the cooperative caching of contents in a complex vehicular environment \cite{Zhu2021}. Therefore, we shall employ DRL to determine the optimal cooperative caching to reduce the content transmission delay of vehicles.
In this paper, we consider the vehicle mobility and propose a cooperative Caching scheme in VEC based on Asynchronous Federated and deep Reinforcement learning (CAFR). The main contributions of this paper are summarized as follows.
\begin{itemize}
\item[1)] By considering the mobility characteristics of vehicles including the positions and velocities, we propose an asynchronous FL algorithm to improve the accuracy of the global model.
\item[2)] We propose an algorithm to predict the popular contents, where each vehicle adopts the autoencoder (AE) to predict its interested contents based on the global model, while the local RSU collects the interested contents of all vehicles within its coverage area to determine the popular contents.
\item[3)] We design a DRL framework based on the dueling deep Q-network (DQN) to model the cooperative caching problem, where the state, action and reward function are defined. Then the local RSU can determine the optimal cooperative caching to minimize the content transmission delay based on the dueling DQN algorithm.
\end{itemize}
The rest of the paper is organized as follows. Section \ref{sec2} reviews the related works on content caching in VNs. Section \ref{sec3} briefly describes the system model. Section \ref{sec5} proposes a mobility-aware cooperative caching scheme in the VEC based on the asynchronous federated and deep reinforcement learning method. We present some simulation results in Section \ref{sec6} and conclude the paper in Section \ref{sec7}.
\section{Related Work}
\label{sec2}
In this section, we first review the existing works related to content caching in vehicular networks (VNs), and then survey the current state of the art of cooperative content caching schemes in VEC.
In \cite{YDai2020}, Dai \textit{et al.} proposed a distributed content caching framework with empowering blockchain to achieve security and protect privacy, and considered the mobility of vehicles to design an intelligent content caching scheme based on DRL framework.
In \cite{Yu2021}, Yu \textit{et al.} proposed a mobility-aware proactive edge caching scheme in VNs that allows multiple vehicles with private data to collaboratively train a global model for predicting content popularity, in order to meet the requirements for computationally intensive and latency-sensitive vehicular applications.
In \cite{JZhao2021}, Zhao \textit{et al.} optimized the edge caching and computation management for service caching, and adopted Lyapunov optimization to deal with the dynamic and unpredictable challenges in VNs.
In \cite{SJiang2020}, Jiang \textit{et al.} constructed a two-tier secure access control structure for providing content caching in VNs with the assistance of edge devices, and proposed the group signature-based scheme for the purpose of anonymous authentication.
In \cite{CTang2021}, Tang \textit{et al.} proposed a new optimization method to reduce the average response time of caching in VNs, and then adopted Lyapunov optimization technology to constrain the long-term energy consumption to guarantee the stability of response time.
In \cite{YDai2022}, Dai \textit{et al.} proposed a VN with digital twin to cache contents for adaptive network management and policy arrangement, and designed an offloading scheme based on the DRL framework to minimize the total offloading delay.
However, the above content caching schemes in VNs did not take into account the cooperative caching in the VEC environment.
There are some works considering cooperative content caching schemes in VEC.
In \cite{GQiao2020}, Qiao \textit{et al.} proposed a cooperative edge caching scheme in VEC and constructed the double time-scale markov decision process to minimize the content access cost, and employed the deep deterministic policy gradient (DDPG) method to solve the long-term mixed-integer linear programming problems.
In \cite{JChen2020}, Chen \textit{et al.} proposed a cooperative edge caching scheme in VEC which considered the location-based contents and the popular contents, while designing an optimal scheme for cooperative content placement based on an ant colony algorithm to minimize the total transmission delay and cost.
In \cite{LYao2022}, Yao \textit{et al.} designed a cooperative edge caching scheme with consistent hash and mobility prediction in VEC to predict the path of each vehicle, and also proposed a cache replacement policy based on the content popularity to decide the priorities of collaborative contents.
In \cite{RWang2021}, Wang \textit{et al.} proposed a cooperative edge caching scheme in VEC based on the long short-term memory (LSTM) networks, which caches the predicted contents in RSUs or other vehicles and thus reduces the content transmission delay.
In \cite{DGupta2020}, Gupta \textit{et al.} proposed a cooperative caching scheme that jointly considers cache location, content popularity and predicted rating of contents to make caching decision based on the non-negative matrix factorization, where it employs a legitimate user authorization to ensure the secure delivery of cached contents.
In \cite{LYao2019}, Yao \textit{et al.} proposed a cooperative caching scheme based on the mobility prediction and drivers' social similarities in VEC, where the regularity of vehicles' movement behaviors are predicted based on the hidden markov model to improve the caching performance.
In \cite{RWu2022}, Wu \textit{et al.} proposed a hybrid service provisioning framework and cooperative caching scheme in VEC to solve the profit allocation problem among the content providers (CPs), and proposed an optimization model to improve the caching performance in managing the caching resources.
In \cite{LYao2017}, Yao \textit{et al.} proposed a cooperative caching scheme based on mobility prediction, where the popular contents may be cached in the mobile vehicles within the coverage area of hot spot. They also designed a cache replacement scheme according to the content popularity to solve the limited caching capacity problem for each edge cache device.
In \cite{KZhang2018}, Zhang \textit{et al.} proposed a cooperative edge caching architecture that focuses on the mobility-aware caching, where the vehicles cache the contents with base stations collaboratively. They also introduced a vehicle-aided edge caching scheme to improve the capability of edge caching.
In \cite{KLiu2016}, Liu \textit{et al.} designed a cooperative caching scheme that allows vehicles to search the unrequested contents. This scheme facilitates the content sharing among vehicles and improves the service performance.
In \cite{SWang2017}, Wang \textit{et al.} proposed a VEC caching scheme to reduce the total transmission delay. This scheme extends the capability of the data center from the core network to the edge nodes by cooperatively caching popular contents in different CPs. It minimizes the VUs' average delay according to an iterative ascending price method.
In \cite{MLiu2021}, Liu \textit{et al.} proposed a real-time caching scheme in which edge devices cooperate to improve the caching resource utilization. In addition, they adopted the DRL framework to optimize the problem of searching requests and utility models to guarantee the search efficiency.
In \cite{BKo2019}, Ko \textit{et al.} proposed an adaptive scheduling scheme consisting of the centralized scheduling mechanism, ad hoc scheduling mechanism and cluster management mechanism to exploit the ad hoc data sharing among different RSUs.
In \cite{JCui2020}, Cui \textit{et al.} proposed a privacy-preserving data downloading method in VEC, where the RSUs can find popular contents by analyzing encrypted requests of nearby vehicles to improve the downloading efficiency of the network.
In \cite{QLuo2020}, Luo \textit{et al.} designed a communication, computation and cooperative caching framework, where computing-enabled RSUs provide computation and bandwidth resource to the VUs to minimize the data processing cost in VEC.
As mentioned above, no existing work has considered the vehicle mobility and the privacy of VUs simultaneously when designing cooperative caching schemes in VEC, which motivates us to propose a mobility-aware cooperative caching scheme in VEC based on asynchronous FL and DRL.
\begin{figure}
\center
\includegraphics[scale=0.7]{1-eps-converted-to.pdf}
\caption{VEC scenario}
\label{fig1}
\end{figure}
\section{System Model}
\label{sec3}
\subsection{System Scenario}
As shown in Fig. \ref{fig1}, we consider a three-tier VEC architecture in an urban scenario that consists of a local RSU, a neighboring RSU, an MBS attached to a cloud and some vehicles moving in the coverage area of the local RSU. The top tier is the MBS deployed at the center of the VEC, while the middle tier consists of the RSUs deployed in the coverage area of the MBS and placed on one side of the road. The bottom tier consists of the vehicles driving within the coverage area of the RSUs.
Each vehicle stores a large amount of VUs' historical data, i.e., local data. Each data item is a vector reflecting different information of a VU, including the VU's personal information such as identity (ID) number, gender, age and postcode, the contents that the VU may request, as well as the VU's ratings for the contents, where a larger rating for a content indicates that the VU is more interested in the content. Particularly, the rating for a content may be $0$, which means that the content is not popular or has not been requested by the VU. Each vehicle randomly chooses a part of its local data to form a training set while the rest is used as a testing set. The time duration of vehicles within the coverage area of the MBS is divided into rounds. At the beginning of each round, each vehicle randomly selects contents from its testing set as the requested contents and sends the request information to the local RSU to fetch the contents. We consider that the MBS has abundant storage capacity and caches all available contents, while the limited storage capacity of each RSU can only accommodate part of the contents. Therefore, a vehicle fetches each requested content from the local RSU, the neighboring RSU or the MBS under different conditions. Specifically,
\subsubsection{Local RSU}If a requested content is cached in the local RSU, the local RSU sends back the requested content to the vehicle. In this case the vehicle fetches the content from the local RSU.
\subsubsection{Neighboring RSU}If a requested content is not cached in the local RSU, the local RSU transfers the request to the neighboring RSU, and the neighboring RSU sends the content to the local RSU if it caches the requested content. Afterward, the local RSU sends back the content to the vehicle. In this case the vehicle fetches the content from the neighboring RSU.
\subsubsection{MBS}If a content is neither cached in the local RSU nor the neighboring RSU, the vehicle sends the request to the MBS that directly sends back the requested content to the vehicle. In this case, the VU fetches the content from the MBS.
\subsection{Mobility Model of Vehicles}
The model assumes that all vehicles drive in the same direction and arrive at the local RSU following a Poisson distribution with arrival rate $\lambda_{v}$. Once a vehicle enters the coverage of the local RSU, it sends request information to the local RSU. Each vehicle keeps the same mobility characteristics, including position and velocity, within a round and may change its mobility characteristics at the beginning of each round. The velocities of different vehicles are independent and identically distributed. The velocity of each vehicle is generated by a truncated Gaussian distribution, which is flexible and consistent with the real dynamic vehicular environment. For round $r$, the number of vehicles driving in the coverage area of the local RSU is $N^{r}$. The set of $N^{r}$ vehicles is denoted as $\mathbb{V}^{r}=\left\{V_{1}^{r}, V_{2}^{r},\ldots, V_{i}^{r}, \ldots, V_{N^{r}}^{r}\right\}$, where $V_{i}^{r}$ is vehicle $i$ driving in the local RSU $(1 \leq i \leq N^{r})$. Let $\left\{U_{1}^{r}, U_{2}^{r}, \ldots, U_{i}^{r}, \ldots, U_{N^{r}}^{r}\right\}$ be the velocities of all vehicles driving in the local RSU, where $U_{i}^{r}$ is the velocity of $V_{i}^{r}$. According to \cite{AlNagar2019}, the probability density function of $U_{i}^{r}$ is expressed as
\begin{equation}
f(U_{i}^{r}) = \begin{cases}
\dfrac{e^{-\frac{1}{2\sigma^{2}}(U_{i}^{r}-\mu)^{2}}}{\sqrt{2\pi\sigma^{2}}\left(\operatorname{erf}\left(\frac{U_{\max}-\mu}{\sigma\sqrt{2}}\right)-\operatorname{erf}\left(\frac{U_{\min}-\mu}{\sigma\sqrt{2}}\right)\right)}, & U_{\min} \le U_{i}^{r} \le U_{\max},\\
0, & \text{otherwise},
\end{cases}
\label{eq1}
\end{equation}
where $U_{\max}$ and $U_{\min}$ are the maximum and minimum velocity thresholds of each vehicle, respectively, and $\operatorname{erf}\left(\frac{U_{i}^{r}-\mu}{\sigma \sqrt{2}}\right)$ is the Gauss error function of $U_{i}^{r}$ under mean $\mu$ and variance $\sigma^{2}$.
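For illustration, velocities following Eq. \eqref{eq1} can be sampled with SciPy's truncated normal distribution. The following Python sketch is ours rather than the original simulation code; its default parameters follow the settings listed later in Table \ref{tab2}.
\begin{verbatim}
import numpy as np
from scipy.stats import truncnorm

def sample_velocities(n, mu=55.0, sigma=2.5, u_min=50.0, u_max=60.0):
    """Draw n vehicle velocities (km/h) from the truncated Gaussian of Eq. (1)."""
    # truncnorm expects the bounds in standard-deviation units about the mean.
    a, b = (u_min - mu) / sigma, (u_max - mu) / sigma
    return truncnorm.rvs(a, b, loc=mu, scale=sigma, size=n)

velocities = sample_velocities(10)  # one draw per vehicle in the round
\end{verbatim}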
\subsection{Communication Model}
The communications between the local RSU and the neighboring RSU adopt a wired link. Each vehicle keeps the same communication model during a round and changes its communication model across rounds. In round $r$, the channel gain of $V_{i}^{r}$ is modeled as \cite{3gpp}
\begin{equation}
\begin{aligned}
h_{i}^{r}(dis(x,V_{i}^{r}))=\alpha_{i}^{r}(dis(x,V_{i}^{r})) g_{i}^{r}(dis(x,V_{i}^{r})), \\
x=S,M,\\
\label{eq2}
\end{aligned}
\end{equation}
where $x=S$ means the local RSU and $x=M$ means the MBS, $dis(x,V_{i}^{r})$ is the distance between the local RSU$/$MBS and $V_{i}^{r}$, $\alpha_{i}^{r}(dis(x,V_{i}^{r}))$ is the path loss between the local RSU$/$MBS and $V_{i}^{r}$, and $g_{i}^{r}(dis(x,V_{i}^{r}))$ is the shadowing channel fading between the local RSU$/$MBS and $V_{i}^{r}$, which follows a Log-normal distribution.
Each RSU communicates with the vehicles in its coverage area through the vehicle-to-RSU (V2R) link, while the MBS communicates with vehicles through the vehicle-to-base-station (V2B) link. Since the distances between the local RSU$/$MBS and $V_{i}^{r}$ differ across rounds, the V2R$/$V2B links suffer from different channel impairments and thus support different transmission rates in different rounds. The transmission rates under the V2R and V2B links are calculated as follows.
According to the Shannon theorem, the transmission rate between the local RSU and $V_{i}^{r}$ is calculated as \cite{Chenwu2020}
\begin{equation}
R_{R, i}^{r}=B\log _{2}\left(1+\frac{p_B h_{i}^{r}(dis(S,V_{i}^{r}))}{\sigma_{c}^{2}}\right),
\label{eq3}
\end{equation}where $B$ is the available bandwidth, $p_B$ is the transmit power level used by the local RSU and $\sigma_{c}^{2}$ is the noise power.
Similarly, the transmission rate between the MBS and $V_{i}^{r}$ is calculated as
\begin{equation}
R_{B, i}^{r}=B\log _{2}\left(1+\frac{p_{M} h_{i}^{r}(dis(M,V_{i}^{r}))}{\sigma_{c}^{2}}\right),
\label{eq4}
\end{equation}where $p_{M}$ is the transmit power level used by MBS.
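As a concrete illustration, Eqs. \eqref{eq3} and \eqref{eq4} can be evaluated as below. This Python sketch is ours; it assumes the transmit and noise powers are given in dBm (as in Table \ref{tab2}), and the channel gain value in the example is an arbitrary placeholder, not a measured value.
\begin{verbatim}
import numpy as np

def shannon_rate(bandwidth_hz, tx_power_dbm, channel_gain, noise_power_dbm):
    """Transmission rate (bit/s) of Eqs. (3)-(4): B * log2(1 + p*h / sigma_c^2)."""
    p = 10 ** (tx_power_dbm / 10) * 1e-3          # dBm -> watts
    noise = 10 ** (noise_power_dbm / 10) * 1e-3   # dBm -> watts
    return bandwidth_hz * np.log2(1 + p * channel_gain / noise)

# V2R example with B = 540 kHz, p_B = 30 dBm, sigma_c^2 = -114 dBm (Table 2);
# the channel gain 1e-9 is a placeholder.
r_v2r = shannon_rate(540e3, 30, 1e-9, -114)
\end{verbatim}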
\begin{figure}
\center
\includegraphics[scale=0.75]{2-eps-converted-to.pdf}
\caption{Asynchronous FL}
\label{fig2}
\end{figure}
\section{Cooperative Caching Scheme}
\label{sec5}
In this section, we propose a cooperative caching scheme to optimize the content transmission delay in each round $r$. We first propose an asynchronous FL algorithm to protect VUs' information and obtain an accurate model. Then we propose an algorithm to predict the popular contents based on the obtained model. Finally, we present a DRL-based algorithm to determine the optimal cooperative caching according to the predicted popular contents. Next, we will introduce the asynchronous FL algorithm, the popular content prediction algorithm and the DRL-based algorithm, respectively.
\subsection{Asynchronous Federated Learning}
As shown in Fig. \ref{fig2}, the asynchronous FL algorithm consists of 5 steps as follows.
\subsubsection{Select Vehicles}
\
\newline
\indent
The main goal of this step is to select the vehicles whose staying time in the local RSU is long enough to ensure they can participate in the asynchronous FL and complete the training process.
Each vehicle first sends its mobility characteristics, including its velocity and position (i.e., the distance to the local RSU and the distance it has traversed within the coverage of the local RSU), to the local RSU; then the local RSU selects vehicles according to the staying time, which is calculated based on each vehicle's mobility characteristics. The staying time of $V_{i}^{r}$ in the local RSU is calculated as
\begin{equation}
T_{r,i}^{staying}=\left(L_{s}-P_{i}^{r}\right) / U_{i}^{r},
\label{eq5}
\end{equation}
where $L_s$ is the coverage range of the local RSU, $P_{i}^{r}$ is the distance that $V_{i}^{r}$ has traversed within the coverage of the local RSU.
The staying time of $V_{i}^{r}$ should be larger than the sum of the average training time $T_{training}$ and inference time $T_{inference}$ to guarantee that $V_{i}^{r}$ can complete the training process. Therefore, if $T_{r,i}^{staying}>T_{training}+T_{inference}$, the local RSU selects $V_{i}^{r}$ to participate in asynchronous FL training. Otherwise, $V_{i}^{r}$ is ignored.
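The selection rule built on Eq. \eqref{eq5} can be sketched in a few lines of Python. This is an illustrative sketch of ours that assumes all quantities are expressed in consistent units (e.g., meters and meters per second).
\begin{verbatim}
def staying_time(L_s, P_i, U_i):
    """Remaining time of a vehicle in the local RSU's coverage, Eq. (5)."""
    return (L_s - P_i) / U_i

def select_vehicles(vehicles, L_s, T_training, T_inference):
    """Keep the vehicles able to finish training before leaving the coverage.

    `vehicles` is assumed to be a list of (P_i, U_i) tuples.
    """
    return [(P_i, U_i) for (P_i, U_i) in vehicles
            if staying_time(L_s, P_i, U_i) > T_training + T_inference]
\end{verbatim}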
\subsubsection{Download Model}
\
\newline
\indent
In this step, the local RSU generates the global model $\omega^{r}$. For the first round, the local RSU initializes a global model based on the AE, which can extract the hidden features used for popular content prediction. In each round, the local RSU updates the global model and then transmits the global model $\omega^{r}$ to all the selected vehicles.
\subsubsection{Local Training}
\
\newline
\indent
In this step, each vehicle in the local RSU sets the downloaded global model $\omega^{r}$ as the initial local model and updates the local model iteratively through training. Afterward, the updated local model is fed back to the local RSU.
For each iteration $k$, $V_{i}^{r}$ randomly samples some training data $n_{i,k}^{r}$ from the training set. Then, it uses $n_{i,k}^{r}$ to train the local model based on the AE that consists of an encoder and a decoder. Let $W_{i,k}^{r,e}$ and $b_{i,k}^{r,e}$ be the weight matrix and bias vector of the encoder for iteration $k$, respectively, and let $W_{i,k}^{r,d}$ and $b_{i,k}^{r,d}$ be the weight matrix and bias vector of the decoder for iteration $k$, respectively. Thus the local model of $V_{i}^{r}$ for iteration $k$ is expressed as $\omega_{i,k}^r=\{W_{i,k}^{r,e}, b_{i,k}^{r,e}, W_{i,k}^{r,d}, b_{i,k}^{r,d}\}$. For each training data $x$ in $n_{i,k}^{r}$, the encoder first maps the original training data $x$ to a hidden layer to obtain the hidden feature of $x$, i.e., $z(x)=f\left(W_{i,k}^{r,e}x+b_{i,k}^{r,e}\right)$. Then the decoder calculates the reconstructed input $\hat{x}$, i.e., $\hat{x}=g\left(W_{i,k}^{r,d}z(x)+b_{i,k}^{r,d}\right)$, where $f{(\cdot)}$ and $g{(\cdot)}$ are the nonlinear and logistic activation functions, respectively \cite{Ng2011}. Afterward, the loss function of data $x$ under the local model $\omega_{i,k}^r$ is calculated as
\begin{equation}
l\left(\omega_{i,k}^r;x\right)=(x-\hat{x})^{2},
\label{eq6}
\end{equation}where $\omega^{r}_{i,1}=\omega^{r}$.
After the loss functions of all the data in $n_{i,k}^{r}$ are calculated, the local loss function for iteration $k$ is calculated as
\begin{equation}
f(\omega_{i,k}^r)=\frac{1}{\left| n_{i,k}^r\right|}\sum_{x\in n_{i,k}^r} l\left(\omega_{i,k}^r;x\right),
\label{eq7}
\end{equation}
where $\left| n_{i,k}^r\right|$ is the number of data in $n_{i,k}^r$.
Then the regularized local loss function is calculated to reduce the deviation between the local model $\omega_{i,k}^r$ and global model $\omega^{r}$ to improve the algorithm convergence, i.e.,
\begin{equation}
g\left(\omega_{i,k}^r\right)=f\left(\omega_{i,k}^r\right)+\frac{\rho}{2}\left\|\omega^{r}-\omega_{i,k}^r\right\|^{2},
\label{eq8}
\end{equation}
where $\rho$ is the regularization parameter.
Let $\nabla g(\omega_{i,k}^{r})$ be the gradient of $g\left(\omega_{i,k}^r\right)$, which is referred to as the local gradient. In the previous round, some vehicles may upload the updated local model unsuccessfully due to the delayed training time, and thus adversely affect the convergence of global model \cite{Chen2020}\cite{Xie2019}\cite{-S2021}. Here, these vehicles are called stragglers and the local gradient of a straggler in the previous round is referred to as the delayed local gradient. To solve this problem, the delayed local gradient will be aggregated into the local gradient of the current round $r$. Thus, the aggregated local gradient can be calculated as
\begin{equation}
\nabla \zeta_{i,k}^{r}=\nabla g(\omega_{i,k}^{r})+\beta \nabla g_{i}^{d},
\label{eq9}
\end{equation}
where $\beta$ is the decay coefficient and $\nabla g_{i}^{d}$ is the delayed local gradient. Note that $\nabla g_{i}^{d}=0$ if $V_{i}^{r}$ uploads successfully in the previous round.
Then the local model for the next iteration is updated as
\begin{equation}
\omega^{r}_{i,k+1}=\omega^{r}-\eta_{l}^{r}\nabla \zeta_{i,k}^{r},
\label{eq10}
\end{equation}where $\eta_{l}^{r}$ is the local learning rate in round $r$, which is calculated as
\begin{equation}
\eta_{l}^{r}=\eta_{l} \max \{1, \log (r)\},
\label{eq11}
\end{equation} where $\eta_{l}$ is the initial value of local learning rate.
Then iteration $k$ is finished and $V_{i}^{r}$ randomly samples some training data again to start the next iteration. When the number of iterations reaches the threshold $e$, $V_{i}^{r}$ completes the local training and uploads the updated local model $\omega_{i}^{r}$ to the local RSU.
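To make the local training step concrete, the following PyTorch-style sketch combines Eqs. \eqref{eq6}--\eqref{eq10}. It is our illustration rather than the original implementation: the autoencoder module, the data-sampling helper \texttt{sample\_minibatch} and all names are assumptions, and the learning rate passed in is the round-dependent $\eta_{l}^{r}$ of Eq. \eqref{eq11}.
\begin{verbatim}
import copy
import torch

def local_update(global_model, train_set, delayed_grads, eta_l_r, rho, beta, e):
    """One vehicle's local training, Eqs. (6)-(10); an illustrative sketch.

    delayed_grads holds the straggler gradient from the previous round
    (all zeros if the last upload succeeded); sample_minibatch is an
    assumed helper returning a batch n_{i,k}^r from the training set.
    """
    local = copy.deepcopy(global_model)
    global_params = [p.detach().clone() for p in global_model.parameters()]
    for k in range(e):
        x = sample_minibatch(train_set)
        x_hat = local(x)                         # encoder-decoder forward pass
        loss = ((x - x_hat) ** 2).mean()         # local loss, Eqs. (6)-(7)
        for p, g in zip(local.parameters(), global_params):
            loss = loss + (rho / 2) * ((p - g) ** 2).sum()   # Eq. (8)
        local.zero_grad()
        loss.backward()
        with torch.no_grad():
            for p, d in zip(local.parameters(), delayed_grads):
                p -= eta_l_r * (p.grad + beta * d)           # Eqs. (9)-(10)
    return local
\end{verbatim}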
\subsubsection{Upload Model}
\
\newline
\indent
Each vehicle uploads its updated local model to the local RSU after it completes local training.
\subsubsection{Asynchronous Aggregation}
\
\newline
\indent
If the local model of $V_{i}^{r}$, i.e., $\omega^{r}_{i}$, is the first model received by the local RSU, the upload is successful and the local RSU updates the global model. Otherwise, the local RSU drops $\omega^{r}_{i}$ and thus the upload is not successful.
When the upload is successful, the local RSU updates the global model $\omega^{r}$ by weighted averaging as follows:
\begin{algorithm}
\caption{The Asynchronous Federated Learning Algorithm}
\label{al1}
Set global model $\omega^{r}$;\\
\For{each round $r$ from $1$ to $R^{max}$}
{
\For{each vehicle $ V^{r}_{i} \in \mathbb{V}^{r}$ \textbf{in parallel}}
{
$T_{r,i}^{staying}=\left(L_{s}-P_{i}^{r}\right) / U_{i}^{r}$;\\
\If{ $T_{r,i}^{staying}>T_{training}+T_{inference}$}
{
$V^{r}_i$ is selected to participate in asynchronous FL training;
}
}
\For{each selected vehicle $ V^{r}_{i}$}
{
$\omega^{r}_{i} \leftarrow \textbf{Vehicle Updates}(\omega^r,i)$;\\
Upload the local model $\omega^{r}_{i}$ to the local RSU;\\
}
Receive the updated model $\omega^{r}_{i}$;\\
Calculate the weight of the asynchronous aggregation $\chi_{i}$ based on Eq. \eqref{eq14};\\
Update the global model based on Eq. \eqref{eq12};\\
\Return $\omega^{r}$
}
\textbf{Vehicle Updates}($\omega^{r},i$):\\
\textbf{Input:} $\omega^{r}$ \\
Calculate the local learning rate $\eta_{l}^{r}$ based on Eq. \eqref{eq11};\\
\For{each local epoch k from $1$ to $e$}
{
Randomly samples some data $n_{i,k}^r$ from the training set;\\
\For{each data $x \in n_{i,k}^r$ }
{
Calculate the loss function of data $x$ based on Eq. \eqref{eq6};\\
}
Calculate the local loss function for iteration $k$ based on Eq. \eqref{eq7};\\
Calculate the regularized local loss function $g\left(\omega_{i,k}^r\right)$ based on Eq. \eqref{eq8};\\
Aggregate local gradient $\nabla \zeta_{i,k}^{r}$ based on Eq. \eqref{eq9};\\
Update the local model $\omega^{r}_{i,k}$ based on Eq. \eqref{eq10};\\
}
Set $\omega^{r}_{i}=\omega^{r}_{i,e}$;\\
\Return$\omega^{r}_{i}$
\end{algorithm}
\begin{equation}
\omega^{r}=\omega^{r-1}+\frac{d_{i}^r}{d^r} \chi_{i} \omega^{r}_{i},
\label{eq12}
\end{equation}where $d_{i}^r$ is the size of local data in $V_i^r$, $d^r$ is the total local data size of the selected vehicles and $\chi_{i}$ is the weight of the asynchronous aggregation for $V_{i}^{r}$.
The weight of the asynchronous aggregation $\chi_{i}$ is calculated by considering the remaining distance of $V_{i}^{r}$ in the coverage area of the local RSU and the content transmission delay from the local RSU to $V_{i}^{r}$, in order to improve the accuracy of the global model and reduce the content transmission delay. Specifically, if the remaining distance of $V_{i}^{r}$ (i.e., $L_{s}-P_{i}^{r}$) is large, it may have a long available time to participate in the training, and thus its local model should be assigned a large weight in the aggregation to improve the accuracy of the global model. In addition, the content transmission delay from the local RSU to $V_{i}^{r}$ is important because $V_{i}^{r}$ finally downloads the content from the local RSU when the content is cached in either the local or the neighboring RSU. Thus, if the content transmission delay from the local RSU to $V_{i}^{r}$ is small, its local model should also be assigned a large weight in the aggregation to reduce the content transmission delay. The weight of the asynchronous aggregation $\chi_{i}$ is calculated as
\begin{equation}
\chi_{i}=\mu_{1} {(L_{s}-P_{i}^{r})}+\mu_{2} \frac{s}{R_{R, i}^{r}},
\label{eq13}
\end{equation}where $\mu_{1}$ and $\mu_{2}$ are the coefficients of the position weight and the transmission weight, respectively, with $\mu_{1}+\mu_{2}=1$, and $s$ is the size of each content. The content transmission delay from the local RSU to $V_{i}^{r}$ is determined by the transmission rate between the local RSU and $V_{i}^{r}$, i.e., $R_{R, i}^{r}$. We can further calculate $\chi_{i}$ based on the normalized $L_{s}-P_{i}^{r}$ and $R_{R, i}^{r}$, i.e.,
\begin{equation}
\chi_{i}=\mu_{1} \frac{(L_{s}-P_{i}^{r})}{L_{s}}+\mu_{2} \frac{R_{R, i}^{r}}{\max _{k \in N^{r}}\left(R_{R, k}^{r}\right)}.
\label{eq14}
\end{equation}
Since the local RSU knows $dis(S,V_{i}^{r})$ and $P_{i}^{r}$ for each vehicle $i$ at the beginning of the asynchronous FL, the local RSU can calculate $R_{R, i}^{r}$ according to Eqs. \eqref{eq2} and \eqref{eq3}, and further calculate $\chi_{i}$ according to Eq. \eqref{eq14}.
Up to now, the asynchronous FL in round $r$ is finished and the updated global model $\omega^{r}$ is obtained. The process of the asynchronous FL algorithm is shown in Algorithm \ref{al1} for ease of understanding, where $R^{max}$ is the maximum number of rounds, $e$ is the maximum number of local epochs. Then, the local RSU sends the obtained model to each vehicle to predict popular contents.
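A compact sketch of the aggregation step follows, combining Eqs. \eqref{eq12} and \eqref{eq14}. For simplicity it represents models as flat NumPy parameter vectors; the function names and this representation are our assumptions, and the $\mu_{1}$, $\mu_{2}$ defaults follow Table \ref{tab2}.
\begin{verbatim}
import numpy as np

def aggregation_weight(L_s, P_i, R_i, rates_all, mu1=0.5, mu2=0.5):
    """Asynchronous aggregation weight chi_i, Eq. (14)."""
    return mu1 * (L_s - P_i) / L_s + mu2 * R_i / np.max(rates_all)

def aggregate(global_prev, local_model, d_i, d_total, chi_i):
    """Global update of Eq. (12), with models as flat parameter vectors."""
    return global_prev + (d_i / d_total) * chi_i * local_model
\end{verbatim}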
\subsection{Popular Content Prediction}
\begin{figure*}
\center
\includegraphics[scale=0.6]{3-eps-converted-to.pdf}
\caption{Popular content prediction process}
\label{fig3}
\end{figure*}
In this subsection, we propose an algorithm to predict the popular contents. As shown in Fig. \ref{fig3}, the popular content prediction algorithm consists of the 4 steps as follows.
\subsubsection{Data Preprocessing}
\
\newline
\indent
The VU's rating for a content is $0$ when the VU is uninterested in the content or has not requested the content. Thus, it is difficult to tell whether a content with rating $0$ is interesting to the VU, and marking all contents with rating $0$ as uninteresting would bias the prediction. Therefore, we adopt the obtained model to reconstruct the rating for each content in the first step, which is described as follows.
Each vehicle abstracts a rating matrix from the data in the testing set, where the first dimension of the matrix is VUs' ID and the second dimension is VU's ratings for all contents. Denote the rating matrix of $V_{i}^r$ as $\boldsymbol{R}_{i}^r$. Then, the AE with the obtained model is adopted to reconstruct $\boldsymbol{R}_{i}^r$. The rating matrix $\boldsymbol{R}_{i}^r$ is used as the input data for the AE that outputs the reconstructed rating matrix $\hat{\boldsymbol{R}}_{i}^r$. Since $\hat{\boldsymbol{R}}_{i}^r$ is reconstructed based on the obtained model which reflects the hidden features of data, $\hat{\boldsymbol{R}}_{i}^r$ can be used to approximate the rating matrix $\boldsymbol{R}_{i}^r$.
Then, similar to the rating matrix, each vehicle also abstracts a personal information matrix from the data of the testing set, where the first dimension of the matrix is VUs' ID and the second dimension is VU's personal information.
\subsubsection{Cosine Similarity}
\
\newline
\indent
$V_{i}^r$ counts the number of nonzero ratings for each VU in $\boldsymbol{R}_{i}^r$ and marks the $1/m$ fraction of VUs with the largest counts as active VUs. Then, each vehicle combines $\hat{\boldsymbol{R}}_{i}^r$ and the personal information matrix (the combined matrix is denoted as $\boldsymbol{H}_{i}^r$) to calculate the similarity between each active VU and the other VUs. The similarity between an active VU $a$ and another VU $b$ is calculated according to the cosine similarity \cite{yuet2018}
\begin{equation}
\begin{aligned}
\operatorname{sim}_{a,b}^{r,i}=\cos \left(\boldsymbol{H}_{i}^r(a,:), \boldsymbol{H}_{i}^r(b,:)\right)\\
=\frac{\boldsymbol{H}_{i}^r(a,:) \cdot \boldsymbol{H}_{i}^r(b,:)^T}{\left\|\boldsymbol{H}_{i}^r(a,:)\right\|_{2} \times\left\|\boldsymbol{H}_{i}^r(b,:)\right\|_{2}}
\label{eq15}
\end{aligned}
\end{equation}where $\boldsymbol{H}_{i}^r(a,:)$ and $\boldsymbol{H}_{i}^r(b,:)$ are the vectors corresponding to VUs $a$ and $b$ in the combined matrix, respectively, and $\left\|\boldsymbol{H}_{i}^r(a,:)\right\|_{2}$ and $\left\|\boldsymbol{H}_{i}^r(b,:)\right\|_{2}$ are the 2-norms of $\boldsymbol{H}_{i}^r(a,:)$ and $\boldsymbol{H}_{i}^r(b,:)$, respectively. Then for each active VU $a$, $V_{i}^r$ selects the VUs with the $K$ largest similarities as the $K$ neighboring VUs of VU $a$. The ratings of the $K$ neighboring VUs also reflect the preferences of VU $a$ to a certain extent.
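Eq. \eqref{eq15} and the neighbor selection admit a direct NumPy implementation. The sketch below is illustrative and assumes the rows of \texttt{H} are the VU vectors of the combined matrix.
\begin{verbatim}
import numpy as np

def k_neighbors(H, active_idx, K):
    """Indices of the K VUs most similar to an active VU, per Eq. (15)."""
    a = H[active_idx]
    sims = H @ a / (np.linalg.norm(H, axis=1) * np.linalg.norm(a) + 1e-12)
    sims[active_idx] = -np.inf        # exclude the active VU itself
    return np.argsort(sims)[::-1][:K]
\end{verbatim}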
\subsubsection{Interested Contents}
\
\newline
\indent
After the neighboring VUs of the active VUs are determined, the vectors of the neighboring VUs in $\boldsymbol{R}_{i}^r$ are abstracted to construct a matrix $\boldsymbol{H}_K$, where the first dimension of $\boldsymbol{H}_K$ is the IDs of the neighboring VUs of the active VUs, while the second dimension of $\boldsymbol{H}_K$ is the neighboring VUs' ratings for the contents. In $\boldsymbol{H}_K$, a content with a VU's nonzero rating is regarded as the VU's interested content. Then the number of interested VUs is counted for each content, where the count for a content is referred to as its content popularity. $V_{i}^r$ selects the contents with the $F_c$ largest content popularity as the predicted interested contents.
\subsubsection{Popular Contents}
\
\newline
\indent
After the vehicles in the local RSU upload their predicted interested contents, the local RSU collects and compares the predicted interested contents uploaded from all vehicles and selects the contents with the $F_{c}$ largest content popularity as the popular contents. The proposed popular content prediction algorithm is illustrated in Algorithm \ref{al2}, where $\mathbb{C}^{r}$ is the set of the popular contents and $\mathbb{C}_{i}^r$ is the set of interested contents of $V^{r}_i$.
\begin{algorithm}
\caption{The Popular Content Prediction Algorithm}
\label{al2}
\textbf{Input: $\omega^{r}$}\\
\For{each vehicle $ V^{r}_{i} \in \mathbb{V}^{r}$}
{
Construct the rating matrix $\boldsymbol{R}_{i}^r$ and personal information matrix;\\
$\hat{\boldsymbol{ R}}_{i}^r \leftarrow AE(\omega^{r},\boldsymbol{R}_{i}^r)$;\\
Combine $\hat{\boldsymbol{ R}}_{i}^r$ and information matrix as $\boldsymbol{H}_{i}^r$;\\
$\mathbb{C}_{i}^r \leftarrow \textbf{Vehicle Predicts}(\boldsymbol{H}_{i}^r,i)$;\\
Uploads $\mathbb{C}_{i}^r$ to the local RSU;\\
}
\textbf{Compare} received contents and select the $F_c$ most interested contents into $\mathbb{C}^{r}$.\\
\Return $\mathbb{C}^{r}$\\
\textbf{Vehicle Predicts}$(\boldsymbol{H}_{i}^r, i)$:\\
\textbf{Input: $\boldsymbol{H}_{i}^r, i\in {1,2,...,N^r}$}\\
Calculate the similarity between $V_{i}^r$ and other vehicles based on Eq. \eqref{eq15};\\
Select the first $K$ vehicles with the largest similarity as neighboring vehicles of $V_{i}^r$;\\
Construct reconstructed rating matrixes of $K$ neighboring vehicles as $\boldsymbol{H}_K$;\\
Select the $F_c$ most interested contents as $\mathbb{C}_{i}^r$;\\
\Return $\mathbb{C}_{i}^r$
\end{algorithm}
The cache capacity $c$ of each RSU, i.e., the largest number of contents that each RSU can accommodate, is usually smaller than $F_{c}$.
Next, we will propose a cooperative caching scheme to determine where the predicted popular contents should be cached.
\subsection{Cooperative Caching Based on DRL}
We consider that each RSU has powerful computation capability, so the cooperative caching policy can be determined within a short time. The main goal is to find an optimal cooperative caching policy based on DRL to minimize the content transmission delay. Next, we will formulate the DRL framework and then introduce the DRL algorithm.
\subsubsection{DRL Framework}
\
\newline
\indent
The DRL framework includes state, action and reward. The training process is divided into slots. For the current slot $t$, the local RSU observes the current state $s(t)$ and decides the current action $a(t)$ based on $s(t)$ according to a policy $\pi$, which is used to generate the action based on the state at each slot. Then the local RSU can obtain the current reward $r(t)$ and observes the next state $s(t+1)$ that is transited from the current state $s(t)$. We will design $s(t)$, $a(t)$ and $r(t)$, respectively, for this DRL framework.
\paragraph{State}
\
\newline
\indent
We consider the contents cached by the local RSU as the current state $s(t)$. In order to focus on the contents with high popularity, the contents of the state space $s(t)$ are sorted in descending order based on the predicted content popularity of the $F_c$ popular contents, thus the current state can be expressed as $s(t)=\left(s_{1}, s_{2}, \ldots, s_{c}\right)$, where $s_{i}$ is the $i$th most popular content.
\paragraph{Action}
\
\newline
\indent
Action $a(t)$ represents whether the contents cached in the local RSU need to be relocated or not. Among the $F_c$ predicted popular contents, the contents that are not cached in the local RSU form a set $\mathbb{N}$. If $a(t)=1$, the local RSU randomly selects $n$ $(n<c)$ contents from $\mathbb{N}$ and exchanges them with the $n$ least popular contents cached in the local RSU, and then sorts the contents in descending order of content popularity to get $s(t+1)$. The neighboring RSU then randomly samples $c$ contents from the $F_c$ popular contents that do not belong to $s(t+1)$ as its cached contents within the next slot $t+1$. We denote the contents cached by the neighboring RSU as $s_n(t+1)$.
If $a(t)=0$, the contents cached in the local RSU will not be relocated and the neighboring RSU also determines its cached contents, similar to the case when $a(t)=1$.
\paragraph{Reward}
\
\newline
\indent
The reward function $r(t)$ is designed to minimize the total content transmission delay to fetch the contents requested by vehicles. Note that the local RSU has recorded all the contents requested by the vehicles. The content transmission delays to fetch a requested content $f$ are different when the content is cached in different places.
If content $f$ is cached in the local RSU, i.e., $f\in s(t)$, the local RSU transmits content $f$ to $V_{i}^{r}$, thus the content transmission delay is calculated as
\begin{equation}
d_{R, i, f}^{r}=\frac{s}{R_{R, i}^{r}},
\label{eq16}
\end{equation}where $R_{R, i}^{r}$ is the transmission rate between the local RSU and $V_{i}^{r}$, which has been calculated by Eq. \eqref{eq3}.
If content $f$ is cached in the neighboring RSU, i.e., $f\in s_n(t)$, the neighboring RSU sends the content to the local RSU that forwards the content to $V_{i}^{r}$, thus the transmission delay is calculated as
\begin{equation}
\bar{d}_{R, i, f}^{r}=\frac{s}{R_{R, i}^{r}}+\frac{s}{R_{R-R}},
\label{eq17}
\end{equation}where $R_{R-R}$ is the transmission rate between the local RSU and neighboring RSU, which is a constant transmission rate in the wired link.
If content $f$ is neither cached in the local RSU nor in the neighboring RSU, i.e., $f \notin s(t) \text{ and } f \notin s_n(t)$, the MBS transmits content $f$ to $V_{i}^{r}$, thus the content transmission delay is expressed as
\begin{equation}
d_{B, i,f}^{r}=\frac{s}{R_{B, i}^{r}},
\label{eq18}
\end{equation}where $R_{B, i}^{r}$ is the transmission rate between the MBS and $V_{i}^{r}$, which is calculated according to Eq. \eqref{eq4}.
In order to clearly distinguish the content transmission delays under different conditions, we set the reward that $V_{i}^r$ fetches content $f$ at slot $t$ as
\begin{equation}
r_{i,f}^r(t)=\begin{cases}
e^{-\lambda_{1} d_{R,i,f}^{r}}& f\in s(t)\\
e^{-\left(\lambda_{1} d_{R, i, f}^{r}+\lambda_{2} \bar d_{R, i, f}^{r}\right)}&f \in s_n(t) \\
e^{-\lambda_{3} d_{B, i, f}^{r}}&f \notin s(t) \text{ and } f \notin s_n(t)
\end{cases},
\label{eq19}
\end{equation}
where $\lambda_{1}+\lambda_{2}+\lambda_{3}=1$ and $\lambda_{1}<\lambda_{2}\ll \lambda_{3}$.
Thus the reward function $r(t)$ is calculated as
\begin{equation}
r(t)=\sum_{i=1}^{N^r}\sum_{f=1}^{F_{i}^r} r_{i,f}^r(t),
\label{eq20}
\end{equation}where $F_{i}^r$ is the number of requested contents from $V_{i}^r$.
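A per-content reward following Eq. \eqref{eq19} can be written as below; the total reward of Eq. \eqref{eq20} is then the sum over all vehicles and their requested contents. The sketch is ours, with the $\lambda$ defaults taken from Table \ref{tab2}.
\begin{verbatim}
import numpy as np

def content_reward(f, s_local, s_neigh, d_local, d_neigh, d_mbs,
                   lam=(0.0001, 0.4, 0.5999)):
    """Reward for one vehicle fetching content f, Eq. (19)."""
    l1, l2, l3 = lam
    if f in s_local:                      # hit in the local RSU
        return np.exp(-l1 * d_local)
    if f in s_neigh:                      # hit in the neighboring RSU
        return np.exp(-(l1 * d_local + l2 * d_neigh))
    return np.exp(-l3 * d_mbs)            # fetched from the MBS
\end{verbatim}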
\subsubsection{DRL Algorithm}
\
\newline
\indent
As mentioned above, the next state changes only when the action is $1$. The dueling DQN algorithm is well suited to cases where some actions have no relevant effect on subsequent states \cite{Wangarxiv2016}. Specifically, the dueling DQN decomposes the Q-value into two functions $V$ and $A$. Function $V$ is the state value function, which is unrelated to the action, while $A$ is the action advantage function, which is related to the action. Therefore, we adopt the dueling DQN algorithm to solve this problem.
\begin{algorithm}
\caption{Cooperative Caching Based on Dueling DQN Algorithm}
\label{al3}
Initialize replay buffer $\mathcal{D}$, the parameters of the prediction network $\theta$, the parameters of the target network $\theta'$;\\
\textbf{Input:} requested contents from all vehicles in the local RSU for round $r$\\
\For{episode from $1$ to $T_s$}
{
Local RSU randomly caches $c$ contents from $F_c$ popular contents;\\
Neighboring RSU randomly caches $c$ contents from $F_c$ popular contents that are not cached in the local RSU;\\
\For{slot from $1$ to $N_s$}
{
Observe the state $s(t);$\\
Calculate the Q-value of prediction network $Q(s(t), a; \theta)$ based on Eq. \eqref{eq21};\\
Calculate the action $a(t)$ based on Eq. \eqref{eq22};\\
Obtain state $s(t+1)$ after executing action $a(t)$;\\
Obtain reward $r(t)$ based on Eqs. \eqref{eq16} - \eqref{eq20};\\
Store tuple $(s(t),a(t),r(t),s(t+1))$ in $\mathcal{D}$;\\
\If{number of tuples in $\mathcal{D}$ is larger than $I$}
{
Randomly sample a minibatch of $I$ tuples from $\mathcal{D}$;\\
\For{tuple $i$ from $1$ to $I$}
{
Calculate the Q-value function of target network $Q'(s^i, a; \theta')$ based on Eq. \eqref{eq23};\\
Calculate the target Q-value of the target network $y^i$ based on Eq. \eqref{eq24};\\
Calculate the loss function $L(\theta)$ based on Eq. \eqref{eq25};\\
}
Calculate the gradient of loss function $\nabla_{\theta} L(\theta)$ based on Eq. \eqref{eq26};\\
Update parameters of the prediction network $\theta$ based on Eq. \eqref{eq27};\\
}
\If{number of slots is $M$}
{$\theta'=\theta$.\\}
}
}
\end{algorithm}
The dueling DQN includes a prediction network, a target network and a replay buffer. The prediction network evaluates the current state-action value (Q-value) function, while the target network generates the optimal Q-value function. Each of them consists of three layers, i.e., the feature layer, the state-value layer and the advantage layer. The replay buffer $\mathcal{D}$ is adopted to cache the transitions of each slot. The dueling DQN algorithm is illustrated in Algorithm \ref{al3} and is described in detail as follows.
\begin{figure*}
\center
\includegraphics[scale=0.27]{4-eps-converted-to.pdf}
\caption{The flow diagram of the dueling DQN}
\label{fig4}
\end{figure*}
Firstly, the parameters of the prediction network $\theta$ and the parameters of the target network $\theta'$ are initialized randomly. The requested contents from all vehicles in the local RSU for round $r$ are taken as input (lines 1-2).
Then the algorithm is executed for $T_s$ episodes. At the beginning of each episode, the local RSU randomly selects $c$ contents from the $F_c$ popular contents, and the neighboring RSU randomly selects $c$ contents from the $F_c$ popular contents that are not cached in the local RSU. Then the algorithm is executed iteratively from slot $1$ to $N_s$. In each slot $t$, the local RSU first observes state $s(t)$ and then inputs $s(t)$ into the prediction network, where it goes through the feature layer, the state-value layer and the advantage layer, respectively. In the end, the prediction network outputs the state value function $V(s(t) ; \theta)$ and the action advantage function under each action $a$, i.e., $A(s(t), a ; \theta)$, where ${a \in\{0,1\}}$. Furthermore, the Q-value function of the prediction network under each action $a$ is calculated as
\begin{equation}
Q(s(t), a; \theta)=V(s(t); \theta)+ A(s(t), a; \theta)-\mathbb{E}\left[A(s(t), a; \theta)\right].
\label{eq21}
\end{equation}
In Eq. \eqref{eq21}, the range of Q-values can be narrowed to remove redundant degrees of freedom by calculating the difference between the action advantage function $A(s(t), a ; \theta)$ and the average value of the action advantage functions under all actions, i.e., $\mathbb{E}[A(s(t), a ; \theta)]$. Thus, the stability of the algorithm can be improved.
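In PyTorch, the dueling decomposition of Eq. \eqref{eq21} corresponds to a network with a shared feature layer and separate value and advantage heads. The sketch below is our illustration; the hidden-layer width is an arbitrary assumption rather than the setting used in the experiments.
\begin{verbatim}
import torch
import torch.nn as nn

class DuelingDQN(nn.Module):
    """Feature, state-value and advantage layers; Q computed as in Eq. (21)."""

    def __init__(self, state_dim, n_actions=2, hidden=64):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)               # V(s; theta)
        self.advantage = nn.Linear(hidden, n_actions)   # A(s, a; theta)

    def forward(self, s):
        phi = self.feature(s)
        v, a = self.value(phi), self.advantage(phi)
        return v + a - a.mean(dim=-1, keepdim=True)     # Eq. (21)
\end{verbatim}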
Then action $a(t)$ is chosen by the $\varepsilon$-greedy method, i.e., a random action is chosen with probability $\varepsilon$, while with probability $1-\varepsilon$ the greedy action is chosen as
\begin{equation}
a(t)=\underset{a \in\{0,1\}}{\operatorname{argmax}}(Q(s(t), a;\theta))
\label{eq22}.
\end{equation}
Particularly, action $a(1)$ is initialized as $1$ at slot $1$.
The local RSU calculates the reward $r(t)$ according to Eqs. \eqref{eq16} - \eqref{eq20} and state $s(t)$ transits to the next state $s(t+1)$, which the local RSU then observes. Next, the neighboring RSU randomly samples $c$ popular contents that are not cached in $s(t+1)$ as its cached contents, denoted as $s_n(t+1)$. The transition from $s(t)$ to $s(t+1)$ is stored as the tuple $(s(t),a(t),r(t),s(t+1))$ in the replay buffer $\mathcal{D}$. When the number of stored tuples in the replay buffer $\mathcal{D}$ is larger than $I$, the local RSU randomly samples $I$ tuples from $\mathcal{D}$ to form a minibatch. Let $(s^i,a^i,r^i,s'^i)$, $(i=1,2,3,\ldots,I)$, be the $i$-th tuple in the minibatch. Then the local RSU inputs each tuple into the prediction network and the target network (lines 3-12).
Next, we will introduce how parameters of prediction network $\theta$ are updated. For tuple $i$, the local RSU inputs $s^i$ into the target network, where it goes through the feature layer and outputs its feature. Then the feature is input to the state-value layer and the advantage layer, respectively, which output state value function $V'(s^i ; \theta')$ and action advantage function $A'(s^i, a; \theta')$ under each action $a \in \{0,1\}$, respectively. Thus, the Q-value function of target network of tuple $i$ under each action $a$ is calculated as
\begin{equation}
Q'(s^i, a; \theta')=V'(s^i; \theta')+ A'(s^i, a; \theta')-\mathbb{E}\left[A'(s^i, a; \theta')\right].
\label{eq23}
\end{equation}
Then the target Q-value of the target network of tuple $i$ is calculated as
\begin{equation}
y^i=r^i+\gamma_{D} \max _{a\in\{0,1\} } Q'(s^i, a; \theta'),
\label{eq24}
\end{equation}where $\gamma_{D}$ is the discount factor. The loss function is calculated as follows
\begin{equation}
L(\theta)=\frac{1}{I} \sum_{i=1}^{I}\left[(y^i-Q(s^i, a^i, \theta))^{2}\right].
\label{eq25}
\end{equation}
The gradient of loss function $\nabla_{\theta} L(\theta)$ for all sampled tuples is calculated as
\begin{equation}
\nabla_{\theta} L(\theta)=\frac{1}{I} \sum_{i=1}^{I} \left[\left(y^i-Q(s^i, a^i, \theta)\right) \nabla_{\theta} Q(s^i, a^i, \theta)\right].
\label{eq26}
\end{equation}
At the end of slot $t$, the parameters of the prediction network $\theta$ are updated as
\begin{equation}
\theta \leftarrow \theta-\eta_{\theta} \nabla_{\theta} L(\theta),
\label{eq27}
\end{equation}where $\eta_{\theta}$ is the learning rate of prediction network.
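Continuing the sketch above, one minibatch update of the prediction network (Eqs. \eqref{eq23}--\eqref{eq27}) could look as follows. The plain gradient step stands in for whatever optimizer the original implementation used, and the batch layout is our assumption; the $\gamma_{D}$ and $\eta_{\theta}$ defaults follow Table \ref{tab2}.
\begin{verbatim}
import torch

def dqn_update(pred_net, target_net, batch, gamma_d=0.99, eta=0.01):
    """One minibatch update of the prediction network, Eqs. (23)-(27).

    batch = (s, a, r, s_next): tensors stacked over the I sampled tuples,
    with a given as an integer (long) tensor of chosen actions.
    """
    s, a, r, s_next = batch
    with torch.no_grad():
        q_next = target_net(s_next).max(dim=1).values   # Eqs. (23)-(24)
        y = r + gamma_d * q_next                        # target Q-value
    q = pred_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    loss = ((y - q) ** 2).mean()                        # Eq. (25)
    pred_net.zero_grad()
    loss.backward()                                     # gradient of Eq. (26)
    with torch.no_grad():
        for p in pred_net.parameters():
            p -= eta * p.grad                           # Eq. (27)
\end{verbatim}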
Up to now, the iteration in slot $t$ is completed, and this procedure is repeated in subsequent slots. During the iterations, the parameters of the target network $\theta'$ are updated every $M$ slots by copying the parameters of the prediction network $\theta$. When the number of slots reaches $N_s$, the episode is finished and the local RSU randomly caches $c$ contents from the $F_c$ popular contents to start the next episode. When the number of episodes reaches $T_s$, the algorithm terminates (lines 13-22). The flow diagram of the dueling DQN algorithm is shown in Fig. \ref{fig4}.
Finally, the local RSU and neighboring RSU cache popular contents according to the optimal cooperative caching, and then each vehicle fetches contents from the VEC. This round is finished after each vehicle has fetched contents and then the next round is started.
\section{Simulation and Analytical Results}
\label{sec6}
\begin{table}
\caption{Values of the parameters in the experiments.}
\label{tab2}
\footnotesize
\centering
\begin{tabular}{|c|c|c|c|}
\hline
\multicolumn{4}{|c|}{Parameters of System Model}\\
\hline
\textbf{Parameter} &\textbf{Value} &\textbf{Parameter} &\textbf{Value}\\
\hline
$B$ & $540$ kHz & $K$ &$10$\\
\hline
$m$ &$3$ & $p_B$ & $30$ dBm\\
\hline
$p_M$ & $43$ dBm & $R_{R-R}$ & $15$ Mbps \\
\hline
$s$ &$100$ bytes & $T_{training}$ & $2$s\\
\hline
$T_{inference}$ & $0.5$s & $U_{\max}$ &$60$ km/h\\
\hline
$U_{\min }$ &$50$ km/h & $\mu$ &$55$ km/h\\
\hline
$\sigma$ &$2.5$km/h & $\sigma_{c}^{2}$ & $-114$ dBm\\
\hline
\multicolumn{4}{|c|}{Parameters of Asynchronous FL}\\
\hline
\textbf{Parameter} &\textbf{Value} &\textbf{Parameter} &\textbf{Value}\\
\hline
$L_s$ &$1000$m & $\beta$ & $0.001$\\
\hline
$\eta_{l}$ &$0.01$ & $\mu_{1}$ &$0.5$ \\
\hline
$\mu_{2}$ &$0.5$ & $\rho$ &$0.0001$\\
\hline
\multicolumn{4}{|c|}{Parameters of DRL}\\
\hline
\textbf{Parameter} &\textbf{Value} &\textbf{Parameter} &\textbf{Value}\\
\hline
$I$ &$32$ & $\gamma_{D}$ & $0.99$\\
\hline
$\eta_{\theta}$ &$0.01$ & $\lambda_{1}$ & $0.0001$\\
\hline
$\lambda_{2}$ & $0.4$ & $\lambda_{3}$ & $0.5999$\\
\hline
\end{tabular}
\end{table}
In this section, we evaluate the performance of the proposed CAFR scheme.
\subsection{Settings and Dataset}
We simulate a VEC environment on an urban road as shown in Fig. \ref{fig1}; the simulation tool is Python $3.8$. The communications between vehicle and RSU/MBS employ the 3rd Generation Partnership Project (3GPP) cellular V2X (C-V2X) architecture, where the parameters are set according to the 3GPP standard \cite{3gpp}. The simulation parameters are listed in Table \ref{tab2}. A real-world dataset from the MovieLens website, i.e., MovieLens 1M, is used in the experiments. MovieLens 1M contains $1,000,209$ ratings for $3,883$ movies from $6,040$ anonymous VUs, with movie ratings ranging from $1$ to $5$, where each VU rates at least $20$ movies \cite{Harper2016}. MovieLens 1M also provides personal information about the VUs, including ID number, gender, age and postcode. We randomly distribute the MovieLens 1M dataset among the vehicles as their local data. Each vehicle randomly chooses $99.8\%$ of its local data as its training set and the remaining $0.2\%$ as its testing set. For each round, each vehicle randomly samples a part of the movies from its testing set as its requested contents.
\subsection{Performance Evaluation}
We use the cache hit ratio and the content transmission delay as performance metrics to evaluate the CAFR scheme. The cache hit ratio is defined as the probability of fetching requested contents from the local RSU \cite{Muller2017}. If a requested content is cached in the local RSU, it can be fetched directly from the local RSU, which is referred to as a cache hit; otherwise, it is referred to as a cache miss. Thus, the cache hit ratio is calculated as
\begin{equation}
\text {cache hit ratio}=\frac{\text {cache hits}}{\text {cache hits}+\text {cache misses}}\times 100\%.
\label{eq28}
\end{equation}
The content transmission delay indicates the average delay for all vehicles to fetch contents, which is calculated as
\begin{equation}
\text {content transmission delay}=\frac{D^{\text {total}}}{\text {the number of vehicles }},
\label{eq29}
\end{equation}
where $D^{\text {total}}$ is the delay for all vehicles to fetch contents, and it is calculated by aggregating the content transmission delay for every vehicle to fetch contents.
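The two metrics of Eqs. \eqref{eq28} and \eqref{eq29} are straightforward to compute; the following helper functions, ours for illustration, make the definitions explicit.
\begin{verbatim}
def cache_hit_ratio(hits, misses):
    """Eq. (28), in percent."""
    return 100.0 * hits / (hits + misses)

def avg_transmission_delay(total_delay, n_vehicles):
    """Eq. (29): average delay for the vehicles to fetch their contents."""
    return total_delay / n_vehicles
\end{verbatim}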
We compare the CAFR scheme with the following baseline schemes:
\begin{itemize}
\item Random: Randomly selecting $c$ contents from all contents to cache in the local and neighboring RSUs.
\item c-$\epsilon$-greedy: Selecting the $c$ contents with the largest numbers of requests with probability $1-\epsilon$ and selecting $c$ contents randomly with probability $\epsilon$ to cache in the local RSU. In our simulation, $\epsilon= 0.1$.
\item Thompson sampling: For each round, the contents cached in the local RSU are updated based on the numbers of cache hits and cache misses in the previous round \cite{Cui2020}, and the $c$ contents with the highest values are selected to cache in the local RSU.
\item FedAVG: Federated averaging (FedAVG) is a typical synchronous FL scheme where the local RSU needs to wait for all local model updates before updating its global model according to the weighted average method:
\begin{equation}
\omega^{r}=\sum_{i=1}^{N^r} \frac {d^r_i}{d^r} \omega^{r}_{i}.
\label{eq30}
\end{equation}
\item CAFR without DRL: Compared with the CAFR scheme, this scheme does not adopt the DRL algorithm to optimize caching scheme. Specifically, after predicting the popular contents, $c$ contents are randomly selected from the predicted popular contents to cache in the local RSU and neighboring RSU, respectively.
\end{itemize}
\begin{figure}
\center
\includegraphics[scale=0.5]{method_ce_vs_cs-eps-converted-to.pdf}
\caption{Cache hit ratio under different cache capacities}
\label{fig5}
\end{figure}
Now, we will evaluate the performance of the CAFR scheme through simulation experiments. In the following performance evaluation, each result is the average value of five experiments.
Fig. \ref{fig5} shows the cache hit ratio of different schemes under different cache capacities of each RSU, where the result of CAFR is obtained when the vehicle density is $15$ vehicles/km (i.e., $15$ vehicles per kilometer), and the results of the other schemes are independent of the vehicle density. It can be seen that the cache hit ratio of all schemes increases with a larger capacity. This is because the local RSU caches more contents with a larger capacity, and thus the requested contents of vehicles are more likely to be fetched from the local RSU. Moreover, the random scheme provides the worst cache hit ratio, because it just selects contents randomly without considering the content popularity. In addition, CAFR and c-$\epsilon$-greedy outperform the random and Thompson sampling schemes. This is because the random and Thompson sampling schemes do not predict the cached contents through learning, whereas CAFR and c-$\epsilon$-greedy decide the cached contents by observing the historically requested contents. Furthermore, CAFR outperforms c-$\epsilon$-greedy. This is because CAFR captures useful hidden features from the data to accurately predict the popular contents.
\begin{figure}
\center
\includegraphics[scale=0.5]{method_rd_vs_cs-eps-converted-to.pdf}
\caption{Content transmission delay under different cache capacities}
\label{fig6}
\end{figure}
Fig. \ref{fig6} shows the content transmission delay of different schemes under different cache capacities of each RSU, where the vehicle density is $15$ vehicles/km. It is seen that the content transmission delays of all schemes decrease as the cache capacity increases. This is because each RSU caches more contents as the cache capacity increases, and each vehicle fetches contents from the local RSU and neighboring RSU with a higher probability, thus reducing the content transmission delay. Moreover, the content transmission delay of CAFR is smaller than those of the other schemes. This is because the cache hit ratio of CAFR is better than those of the other schemes, and more vehicles can fetch contents from the local RSU directly, thus reducing the content transmission delay.
\begin{figure}
\center
\includegraphics[scale=0.5]{vs_vd-eps-converted-to.pdf}
\caption{Cache hit ratio and content transmission delay under different vehicle densities}
\label{fig7}
\end{figure}
Fig. \ref{fig7} shows the cache hit ratio and the content transmission delay of the CAFR scheme under different vehicle densities when the cache capacity of each RSU is $100$. As shown in this figure, the cache hit ratio increases as the vehicle density increases. This is because when more vehicles enter the coverage area of the RSU, the global model of the local RSU is trained on more data and thus predicts more accurately. In addition, the content transmission delay decreases as the vehicle density increases. This is because the cache hit ratio increases with the vehicle density, which enables more vehicles to fetch contents directly from the local RSU.
\begin{figure}
\center
\includegraphics[scale=0.5]{asy_syn_ce-eps-converted-to.pdf}
\caption{Cache hit ratio of CAFR and FedAVG}
\label{fig8}
\end{figure}
Fig. \ref{fig8} compares the cache hit ratio of the CAFR scheme and the FedAVG scheme under different rounds when the vehicle density is $15$ vehicles/km and the cache capacity of each RSU is $100$ contents. It can be seen that the cache hit ratio of CAFR fluctuates between $22.5\%$ and $24\%$ within $30$ rounds, while the cache hit ratio of the FedAVG scheme fluctuates between $22\%$ and $23.5\%$ within $30$ rounds. This indicates that the CAFR scheme is slightly better than the FedAVG scheme. This is because the CAFR scheme considers the vehicles' mobility characteristics, including positions and velocities, to select vehicles and aggregate the local models, thus improving the accuracy of the global model.
\begin{figure}
\center
\includegraphics[scale=0.5]{asy_syn_tt-eps-converted-to.pdf}
\caption{Training time of CAFR and FedAVG}
\label{fig9}
\end{figure}
Fig. \ref{fig9} shows the training time of the CAFR and FedAVG schemes for each round when the vehicle density is $15$ vehicles/km and the cache capacity of each RSU is $100$ contents. It can be seen that the training time of the CAFR scheme for each round is between $1$\,s and $2$\,s, while the training time of the FedAVG scheme for each round is between $22$\,s and $24$\,s. This indicates that the CAFR scheme has a much smaller training time than the FedAVG scheme. This is because the FedAVG scheme needs to aggregate all vehicles' local models to update the global model in each round, while the CAFR scheme aggregates as soon as one vehicle's local model is received in each round.
\begin{figure}
\center
\includegraphics[scale=0.5]{ce_rd_episode-eps-converted-to.pdf}
\caption{Cache hit ratio and content transmission delay of each episode in the DRL}
\label{fig10}
\end{figure}
Fig. \ref{fig10} shows the cache hit ratio and content transmission delay of each episode in the DRL of the CAFR scheme when the vehicle density is $15$ vehicles/km and the cache capacity of each RSU is $100$. As the episode index increases, the cache hit ratio gradually increases and the content transmission delay gradually decreases in the first ten episodes. This is because the local RSU and neighboring RSU gradually cache appropriate popular contents in the first ten episodes. In addition, it is seen that the cache hit ratio and content transmission delay converge at around episode $10$. This is because the local RSU is able to learn the policy to perform optimal cooperative caching within around $10$ episodes.
\begin{figure}
\center
\includegraphics[scale=0.5]{rl_vs_ce_cs-eps-converted-to.pdf}
\caption{Cache hit ratio of CAFR and CAFR without DRL under different cache capacities}
\label{fig11}
\end{figure}
Fig. \ref{fig11} compares the cache hit ratio of the CAFR scheme with the CAFR scheme without DRL under different cache capacities of each RSU when the vehicle density is $15$ vehicles/km. As shown in Fig. \ref{fig11}, the cache hit ratio of CAFR outperforms that of CAFR without DRL. This is because DRL can determine the optimal cooperative caching according to the predicted popular contents, and thus more suitable popular contents can be cached in the local RSU.
\begin{figure}
\center
\includegraphics[scale=0.5]{rl_vs_rd_cs-eps-converted-to.pdf}
\caption{Content transmission delay of CAFR and CAFR without DRL under different cache capacities}
\label{fig12}
\end{figure}
Fig. \ref{fig12} compares the content transmission delay of the CAFR scheme with CAFR scheme without DRL under different cache capacities of each RSU when the vehicle density is $15$ vehicles/km.
As shown in Fig. \ref{fig12}, the content transmission delay of CAFR is less than that of CAFR without DRL. This is because the cache hit ratio of CAFR exceeds that of CAFR without DRL, so more vehicles can fetch contents from the local RSU directly.
\section{Conclusions}
\label{sec7}
In this paper, we considered the vehicle mobility and proposed a cooperative caching scheme, CAFR, to reduce the content transmission delay and improve the cache hit ratio. We first proposed an asynchronous FL algorithm to obtain an accurate global model, and then proposed an algorithm to predict the popular contents based on the global model. Afterwards, we proposed a cooperative caching scheme to minimize the content transmission delay based on the dueling DQN algorithm. Simulation results have demonstrated that the CAFR scheme outperforms other baseline caching schemes. According to the theoretical analysis and simulation results, the conclusions can be summarized as follows:
\begin{itemize}
\item CAFR scheme can learn from the local data of vehicles to capture useful hidden features and predict the accurate popular contents.
\item CAFR greatly reduces the training time for each round by aggregating the local model of a single vehicle in each round. In addition, CAFR considers vehicles' mobility characteristics including the positions and velocities to select vehicles and aggregate the local model, which can improve the accuracy of the training model.
\item The DRL in the CAFR scheme determines the optimal cooperative caching policy according to the predicted popular contents, and thus more suitable popular contents are cached in the local RSU and neighboring RSU to reduce the content transmission delay.
\end{itemize}
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\section{Introduction}\label{sec:intro}
New knowledge in physics is driven by the observation of phenomena, the design of experiments to probe these phenomena, and the communication of and debate around the resulting measurements in public fora. Laboratory courses in physics are thus unique spaces where students can engage in these central aspects of studying physical systems. Greater emphasis on these aspects in laboratory spaces is needed to accurately represent the physics discipline and to engage students in the universal scientific endeavor that is driven by observation, measurement, and communication.
Recently, national calls have been made to design laboratory instruction such that it emphasizes students' engagement in experimental scientific practices rather than simply re-enforcing content learning \cite{kozminski2014aapt,holmes2017value}. Such experiences would be better aligned with discovery-based learning \cite{olson2012engage}, which is more representative of the enterprise of experimental physics. This focus on science practices is articulated in the American Association of Physics Teachers' {\it Recommendations for the Undergraduate Physics Laboratory Curriculum} \cite{kozminski2014aapt}. These recommendations call for all laboratories in undergraduate physics to better represent experimental physics by constructing laboratory curriculum around science practices such as designing experiments, analyzing and visualizing data, and communicating physics. Arguably, middle-division and advanced laboratory courses for physics and astronomy majors -- with their more complex experiments and equipment as well as their focus on the professional development of future physicists -- tend to engage students with these practices.
By contrast, introductory physics laboratory courses tend to have more prescriptive and direct approaches to instruction. In these courses, students often follow a well-documented procedure and do not typically have opportunities to explore the observed phenomenon and the associated experimental work. At larger universities in the United States, these introductory laboratory courses are taught to thousands of students per semester, which makes these more direct approaches to instruction attractive as they are quite efficient. At many US schools, engineering students, physical science majors, and biological science students must pass these laboratory courses to complete their degree program. The scale of these course offerings provides an additional challenge to incorporating science practices. There are unique examples in the literature where students of introductory physics are engaged with scientific practices such as the Investigative Science Learning Environment (ISLE) \cite{etkina2001investigative} and Studio Physics \cite{wilson1994cuple}. However, these courses have the advantage of being taught to a smaller population of students than most introductory laboratory courses, in the case of ISLE, or having an integrated ``lecture'' and a modified instructional space, in the case of Studio Physics, and thus can make use of greater instructional resources.
In this paper, we describe a stand-alone, introductory physics laboratory course sequence for biological science majors at Michigan State University (MSU) that was designed specifically to engage students in scientific practices through the work of experimental physics. Students learn to design experiments, analyze and visualize their data, and communicate their results to their peers and instructors. Design, Analysis, Tools, and Apprenticeship (DATA) Lab is unique in that it was explicitly designed with the AAPT Lab Recommendations in mind. The sequence comprises a stand-alone mechanics laboratory (DL1) and a separate E\&M and optics laboratory (DL2), and is taught to more than 2000 students per year. Furthermore, the process of developing and launching this pair of courses required that we confront and overcome several well-documented challenges such as
departmental norms for the course, expectations of content coverage, and the lack of instructor time \cite{dancy2008barriers}.
\begin{table*}[th]
\caption{Finalized learning goals for DATA Lab}\label{tab:lgs}
\begin{tabular}{l|p{4in}}
\hline
\hline
Learning Goal & Description \\
\hline
LG1 - Experimental Process& Planning and executing an experiment to effectively explore how different parameters of a physical system interact with each other. Generally taking the form of model evaluation or determination.\\
LG2 - Data Analysis & Knowing how to turn raw data into an interpretable result (through plots, calculations, error analysis, comparison to an expectation, etc.) that can be connected to the bigger physics concepts.\\
LG3 - Collaboration & Working effectively as a group. Communicating your ideas and understanding. Coming to a consensus and making decisions as a group.\\
LG4 - Communication & Communicating understanding -- of the physics, the experimental process, the results -- in a variety of authentic ways -- to your peers, in a lab notebook, in a presentation or proposal. \\
\hline
\end{tabular}
\end{table*}
We begin this paper by describing how the learning goals for the lab sequence were constructed through a consensus-driven process (Sec.~\ref{LGSection}). In Sec.~\ref{structures}, we provide an overview of the course structure -- diving deeper into the details of the course materials later (Sec.~\ref{OverviewSection}). We describe the assessments for this course in Sec.~\ref{assessments} as they are somewhat non-traditional for a course of this level and scale. To make our discussion concrete, we highlight a particular example in Sec.~\ref{ExpOverSection}. Finally, we offer a measure of efficacy using student responses to the Colorado Learning Attitudes about Science Survey for Experimental Physics \cite{zwickl2014epistemology} (Sec.~\ref{efficacy}) and some concluding remarks (Sec.~\ref{conclusions}).
\section{Learning Goals}\label{LGSection}
As this laboratory course serves the largest population of students enrolled in introductory physics at MSU, it was critical to develop a transformed course that reflected faculty voice in the design. While physics faculty are not often steeped in formal aspects of curriculum development, sustained efforts to transform physics courses take an approach where faculty are engaged in the process to develop a consensus design \cite{chasteen2011thoughtful,chasteen2012transforming,wieman2017improving}. In this process, interested faculty are invited to participate in discussions around curriculum design, but experts in curriculum and instruction synthesize those discussions to develop course structures, materials, and pedagogy. These efforts are then reflected back to faculty to iterate on the process. Our design process followed the approach developed by the University of Colorado's Science Education Initiative \cite{chasteen2011thoughtful,chasteen2012transforming,wieman2017improving}. In this process, faculty are engaged in broad discussions about learning goals, the necessary evidence to achieve the expected learning, and the teaching practices and course activities that provide evidence that students are meeting these goals. Below, we discuss the approach to developing learning goals for the course and present the finalized set of learning goals from which the course was designed. We refer readers to \citet{wieman2017improving} for a comprehensive discussion of how to set about transforming courses at this scale.
Prior to engaging in curriculum and pedagogical design, an interview protocol was developed to talk with faculty about what they wanted students to get out of this laboratory course once students had completed the two-semester sequence. The interview focused discussion on what made an introductory laboratory course in physics important for these students and what role it should play as a distinct course since, at MSU, students do not need to enroll in the laboratory course at the same time as the associated lecture course. A wide variety of faculty members were interviewed including those who had previously taught the course, those who had taught other physics laboratory courses, and those who conduct experimental research. In total, 15 interviews were conducted with faculty, representing more than half of the experimental faculty who teach at MSU.
The discussion of faculty learning goals was wide-ranging and covered a variety of important aspects of laboratory work, including many of the aspects highlighted in the AAPT Laboratory Guidelines \cite{kozminski2014aapt}. Interviews were coded for general themes of faculty goals and the initial list included: developing skepticism toward their own work, science, and the media; understanding that measurements have uncertainty; developing agency over their own learning; communicating their results to a wider variety of audiences; learning how to use multiple sources of information to develop their understanding; demonstrating the ability to use and understand equipment; documenting their work effectively; and becoming reflective of their own experimental work.
With the intent of reconciling the faculty's expressed goals with the AAPT Lab Guidelines, the goals were synthesized under larger headings that aimed to combine and connect seemingly disconnected goals. In addition, through a series of informational meetings that roughly 10-12 faculty attended regularly, this evolving synthesis was reflected back to interested faculty. Additional critiques and refinements of these goals were collected through notes taken during these meetings. Through several revisions, a set of four broad goals was finalized that faculty agreed reflected their views on the purpose of this laboratory course sequence; these goals are also represented in the AAPT Lab Guidelines. The finalized goals are listed in Table \ref{tab:lgs} along with a short description of each; they are enumerated (LG{\it X}) in order to refer to them in later sections.
\begin{figure*}
\includegraphics[clip, trim=15 100 15 100, width=0.8\linewidth]{DATALabWeekly.png}
\caption{Week-by-week schedule of DATA Lab I \& II.\label{weekly}}
\end{figure*}
The learning goals formed the basis for the design of course structures including materials and pedagogy. To construct these course structures, constructive alignment \cite{biggs1996enhancing} was leveraged, which helped ensure that the designed materials and enacted pedagogy were aligned with the overall learning goals for the course. These structures are described in the next section where we have included a direct reference to each learning goal that a particular course structure is supporting.
\section{Course Structures}
\label{structures}
Each laboratory section consists of twenty students and two instructors -- one graduate teaching assistant (GTA) and one undergraduate learning assistant (ULA) \cite{otero2010physics}. The students are separated into five groups of four, which they remain in for 4 to 6 weeks -- 4 to 6 class meetings. This time frame works well because it gives the students time to grow and improve as a group as well as individuals within a consistent group. In addition, when the groups are switched it requires the students to adapt to a new group of peers. The groups complete 6 (DL1) or 5 (DL2) experiments during the semester, most of them spanning two weeks -- two class meetings. Fig.~\ref{weekly} provides an overview of the two-semester sequence and will be unpacked further below. We indicate the laboratories that students complete with light green squares (introductory experiments) and dark green squares (two week labs). The students keep a written lab notebook, which they turn in to be graded at the end of each experiment.
\indent In this laboratory course, each group conducts a different experiment. This is possible because, in general, students tend to follow a similar path with respect to the learning goals and there is no set endpoint for any individual experiment. As long as students continue to work through the experimental process and complete analysis of their data, they are working towards the learning goals and can be evaluated using the aligned assessments (Sec.~\ref{assessments}). This approach also emphasizes that there is not one way to complete an experiment; this has added benefits for students' ownership and agency of the work as they must decide how to proceed through the experiment. In addition, having no set endpoint and two weeks to complete most experiments takes away the time pressure to reach a specific point in a given time. All of these aspects allow students to more fully engage with the work they are doing and, in turn, make progress toward the learning goals. Having each group conduct a different experiment addressed a significant point of discussion among the faculty; specifically, not covering the same breadth of content was a major concern. Although, through this design, students do not complete all of the experiments, they are introduced to all of the concepts through the peer evaluation of the communication projects (red squares in Fig.~\ref{weekly}, addressed in detail below).
\subsection{Laboratory Activities}
The laboratory activities were designed around the learning goals. As such, the experiments follow a similar path from the beginning of the experimental process through analysis, with communication and collaboration as central components throughout. The course structures in relation to each of the learning goals are highlighted below. The core component (i.e. lab activities) of the course sequence is outlined in Fig.~\ref{snpsht}.\\
\textbf{LG1 - Experimental Process:} The students begin each experiment by broadly exploring the relevant parameters and their relationships. Typically, students investigate how changing one parameter affects another by making predictions and connecting their observations to physics ideas (qualitative exploration in Fig.~\ref{snpsht}). From these initial investigations, students work toward designing an experiment by determining what to measure, change, and keep the same. This often requires grounding decisions on some known model or an observed relationship (quantitative exploration, experimental design, and investigation in Fig.~\ref{snpsht}).\\
\textbf{LG2 - Data Analysis:} After additional formal investigations in which data has been collected, students summarize the raw data into an interpretable result. This typically includes some form of data analysis; for example, constructing a plot to evaluate a model or determining a quantitative relationship between the different variables in the data. In this work, the students are expected to make claims that are supported by their results. This often involves the students finding the slope and/or intercept in a plot and interpreting those results with respect to their expectations (discussion and analysis in Fig.~\ref{snpsht}).\\
\textbf{LG3 - Collaboration:} Throughout the experimental work and analysis, students discuss and make decisions with their peers in their lab group. Students are encouraged to develop a consensus approach to their work -- deciding collectively where to take their experiment and analysis. Furthermore, students are expected to make these decisions by grounding their discussions in their experiment, data, and analysis.\\
\textbf{LG4 - Communication:} Overall, the entire process requires that students communicate with their group and instructors. Additionally, students communicate their experimental approach and the results of their work including their analysis in their lab notebook. Later, students provide a more formal presentation of their work in the form of the communication projects.
\begin{figure}
\includegraphics[clip, trim=110 375 100 180, width=0.8\linewidth]{experimentoverview.png}
\caption{A snapshot of an experiment from pre-class homework through the communication project.\label{snpsht}}
\end{figure}
It should be emphasized that this process is not content dependent; each laboratory activity conducted by a student group follows this process. This generalization enables the core components of the course to be repeated (see Fig.~\ref{weekly}) to help address external constraints, such as limited equipment and time to work on experiments.
\subsection{Communication Projects}
DATA Lab is also defined by the focus on authentic scientific communication through the communication projects (CPs). The CPs are a formal way for the students to present their work, and they are the one course assessment that students complete individually. CPs replace the lab practical from the traditional version of the course, in which students would conduct a smaller portion of a laboratory by themselves. CPs occur in the middle and at the end of the semester (red squares in Fig.~\ref{weekly}). In DL1, the CP is a written proposal that summarizes the work the students conducted in one of their previous experiments and proposes an additional investigation. In DL2, the students create and present a research poster on one of (or a portion of one of) their experiments. In both courses, the projects are shared with and reviewed by their primary instructor and their peers in the class.
Through the CPs, students continue to engage with the faculty consensus learning goals (Sec.~\ref{LGSection}) as described below:\\
\textbf{LG1 - Experimental Process:} Students are expected to reflect on and summarize the process through which they went to complete the experiment. In so doing, they must communicate their rationale and reasoning for following that process.\\
\textbf{LG2 - Data Analysis:} The students must show that they can turn their raw data into an interpretable result. This is often, and ideally, done in the form of a plot of their data that emphasizes a model, including a fit. Students also present and explain what the results mean in the context of the experiment and a physical model.\\
\textbf{LG3 - Collaboration:} While the experiment was completed with the student's group where they may have consulted with their group mates, the CPs themselves are not inherently collaborative. However, in DL1, the reviews that students perform on each other's projects are done collaboratively in their groups.\\
\textbf{LG4 - Communication:} The CPs are the formal communication of a student's experimental work. In both courses, a student's CP is reviewed by their peers and feedback is provided describing successes and shortcomings along with suggestions for improvements.
\subsection{Final Projects}
The course structure was designed with the intent to provide students with a variety of ways to engage in the experimental physics practices. The final projects are an additional form of communication including an analysis and interpretation of experimental results through critiquing other scientific results (DL1--Critique Project) and describing a new experimental design (DL2--Design Project).
\textit{Critique Project}: For the final project in DL1, students critique two sides of a popular science topic. In the prior week, students are arranged into new groups, and before the class meeting they must choose, as a group, from a list of possible topics such as climate change and alternative energy. In class, students collectively write up a summary and critique both sides of the scientific argument.
\textit{Design Project}: For the final project in DL2, students choose an experiment that was conducted previously and design a new experiment for a future semester of DATA Lab. Similarly to DL1, the students are sorted into new groups and they must decide, as a group, which experiment they will be working on before the class meeting. Due to the structure of the course, specifically everyone doing different experiments throughout the semester, this choice may be an experiment that individual members of the group did not complete; negotiating this decision is part of the process of the Design Project. In class, students construct two documents: (1) a document that explains the design of the new experiment and (2) a document that would aid a future DATA Lab instructor in teaching the experiment. Through this final project, DL2 students can design a project covering material that they may not have had the chance to explore during the course.
For both final projects, students turn in one assignment per group and they receive a single grade (as a group) for the assignment. Students also assess their own in-class participation, providing themselves a participation score (on a 4.0 scale) for the day. This score is submitted to their instructor along with their rationale for assigning themselves the grade.
These projects offer the final opportunity for DATA Lab students to engage with the faculty-consensus learning goals:\\
\textbf{LG1 - Experimental Process:} In DL1, students evaluate and summarize both sides of the chosen argument by reviewing the relevant data and experiments. Although students are not conducting an experiment, they are still asked to be critical of the experimental process in each side of the argument. In DL2, students must create a clear procedure for their proposed experiment. Here, they must consider the available equipment as well as how the data would be collected and why. \\
\textbf{LG2 - Data Analysis:} In DL1, the students must evaluate the evidence provided in each article. They must decide if there are obvious flaws in the way the analysis was conducted and if the analysis is compelling; that is, if the overall claims made in the article align with the data and analysis. In DL2, students must consider the kind of analysis that would fit with their experiment and the data that they would collect. In addition, students are also expected to reflect on their analysis in light of the models that are available to explain the data they would collect. \\
\textbf{LG3 - Collaboration:} In both courses, students continue to work as a group and are graded accordingly. In addition, the students have been put into new groups, which they must adjust to.\\
\textbf{LG4 - Communication:} In both courses, students continue to communicate with their group as part of the collaboration. In DL1 specifically, the final project provides an opportunity for students to communicate their own evaluation and critique of a scientific argument. Students in DL2 are expected to communicate to different audiences, including future DATA Lab students and instructors, about their newly planned experiment.
\section{Overview of Key Supports}
\label{OverviewSection}
As the students' work in this course is sufficiently open-ended, specific supports have been designed to ensure they feel capable of conducting the lab activities. Since the CPs are the main assessments in the DATA Lab course sequence and make up a large portion of the overall course grade, the key supports are intended to give students the tools they need to succeed in the projects. Each of the supports designed for DATA Lab is discussed in detail below (Secs.~\ref{sec:labs} \& \ref{sec:sup}); assessments are discussed in Sec.~\ref{assessments}.
Broadly, the key supports for the students are outlined in Fig.~\ref{snpsht}. Before each class day, students complete a pre-class homework assignment (vertical green lines). Students also have three communication project homework (CPHW) assignments during the semester (vertical pink line) to help them complete their CPs. These supports, in addition to feedback on students' in-class participation and lab notebooks, apply for any of the regular two-week experiments (green squares in Fig.~\ref{weekly}). In the following section, these will be described in detail along with the additional supports that were designed for the courses.
\subsection{Typical Experiment}\label{sec:labs}
Each two-week experiment follows a similar path, highlighted in Fig.~\ref{snpsht} and described, in part, in Sec.~\ref{structures}. In this section, details of the general course components necessary to maintain the flexibility of the path students take through each experiment will be described.
\textbf{Pre-Class Homework:} At the beginning of an experiment, students are expected to complete the pre-class homework assignment, which includes reading through the lab handout and conducting the suggested research. This assignment is usually 2-4 questions designed to have students prepare for the upcoming experiment. For example, before the first day of a new lab, students are asked what they learned during their pre-class research and if they have any questions or concerns about the lab handout. Between the first and second class meeting of the two-week experiment, students are expected to reflect on what they have already done and prepare for what they plan to do next. Typically, the 2-4 questions include reflections from the prior week, such as any issues their group ran into on the first day, and what they intend to do during the second day of the experiment. Answers to the pre-class homework serve as additional information that the instructors can draw on during the class; knowing what questions and confusions their students might have can help instructors be more responsive during class. Overall, the goal of the pre-class homework is for the students to come into class prepared to conduct their experiment, and this assignment is used to hold them accountable for that preparation.
\textbf{In-class Participation}: With the overall intent of improving students' specific laboratory skills and practices that are outlined in the course learning goals (Sec.~\ref{LGSection}), students receive in-class participation grades and feedback after every lab section (green squares in Figs.~\ref{weekly} \&~\ref{snpsht}) on their engagement with respect to these practices. As the lab handouts do not provide students with specific steps that they must take to complete the experiment, students are expected to make most of the decisions together as a group. Generally, students have control over how their investigation proceeds; however, this control varies between experiments (i.e., students choose how to set up the equipment, what to measure, how to take measurements, etc.). The in-class participation grades and feedback are where students are assessed most frequently and where they have the quickest turnaround to implement the changes. See Sec.~\ref{sec:assA} for the details of how in-class participation is assessed.
\textbf{Lab Notebooks}: For each experiment that the students engage in, they are expected to document their work in a lab notebook. In comparison to formal lab reports, lab notebooks are considered a more authentic approach to documenting experimental work. Furthermore, lab notebooks provide students with space to decide what is important and how to present it. The lab notebooks are the primary source that the students use to create their CPs. Like in-class participation, students receive lab notebook feedback much more regularly than CP feedback, so they have greater opportunity to reflect and make improvements. The specific details of the assessment of lab notebooks will be explained in Sec.~\ref{sec:assA}.
\textbf{CP Homeworks:} Three times during the semester the students complete CPHW assignments in addition to that week's pre-class homework. Each CPHW focuses on a relevant portion of the CPs (e.g., making a figure and a caption). Through the CPHWs, the aim is for students to develop experience with more of the CP components. In addition, students receive feedback on these different aspects (see Sec.~\ref{sec:assA}), which they can act upon before they have to complete their final CPs.
\textbf{Communication Projects:} Throughout each semester, the students complete two CPs, the first of which is a smaller portion of their overall course grade. This course design feature is intended to put less pressure on students during their first CP assignment while still providing them with a second opportunity to complete a CP after receiving initial feedback. Students are expected to reflect on the process, their grade, and the feedback before they have to complete another CP. The CP assessment details will be discussed further in Sec.~\ref{sec:assB}.
\subsection{Additional Supports}\label{sec:sup}
Along with the support structures for the core components of the course sequence, additional supports have been designed to ease students into the more authentic features of DATA Lab such as designing experiments and documenting progress in lab notebooks. DL1 begins with three weeks of workshops (purple squares in Fig.~\ref{weekly}), followed by the introductory experiment (light green squares in Fig.~\ref{weekly}) that all of the students complete. DL2 begins with an introductory experiment as well, under the assumption that the students already went through DL1. The workshops and introductory experiments are designed to assist the students in navigating the different requirements and expectations of the overall course sequence, and of a typical experiment within each course. The additional support structures are described in detail below.
\textbf{DL1 Workshops:} The first workshop focuses on measurement and uncertainty with a push for the students to discuss and share their ideas (LG1,3). The students perform several different measurements -- length of a metal block, diameter of a bouncy ball, length of a string, mass of a weight, and the angle of a board. Each group discusses the potential uncertainty associated with one of the measurements. Then, students perform one additional measurement and assign uncertainty to it. The second workshop also focuses on uncertainty but in relation to data analysis and evaluating models (LG2,4) using the concept of a spring constant. Students collect the necessary measurements, while addressing the associated uncertainty, and plot the measurements to analyze how the plot relates to the model of a spring. The final workshop focuses on proper documentation. The lab handouts do not contain their own procedure, so each student is expected to document the steps they take and their reasoning (LG4) in their lab notebook. In preparation for the third workshop, as a pre-class homework, students submit a procedure for making a peanut butter and jelly sandwich, which they discuss and evaluate in class. Students are then tasked with developing a procedure to determine the relationship between different parameters (length of a spring and mass added, angle of a metal strip and the magnets placed on it, or time for a ball to roll down a chute and how many blocks are under the chute). At the end of each workshop the students turn in their notebooks, just as they would at the end of any experiment.
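To make the model evaluation in the second workshop concrete, here is a minimal sketch of the analysis (the specific measurements and plots are left to each group): hanging a mass $m$ from a vertical spring and measuring the resulting extension $\Delta x$ probes Hooke's law,
\[
mg = k\,\Delta x ,
\]
so a plot of the weight $mg$ versus the extension $\Delta x$ should be linear with slope equal to the spring constant $k$, and systematic deviations beyond the measurement uncertainty signal the limits of the model.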
\textbf{Introductory Experiments:} In DL1, the introductory experiment occurs after the three workshops. All students conduct a free-fall experiment where they must determine the acceleration due to gravity and the terminal velocity for a falling object. In DL2, the introductory experiment is the first activity in the course. This is because students will have already completed DL1 prior to taking DL2; rather than being slowly introduced to what DATA Lab focuses on, students can be reminded in a single experiment. The introductory experiment for DL2 involves Ohm's Law; students must determine the resistance of a given resistor.
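As a sketch of the physics underlying these introductory experiments (the models below are plausible starting points, not prescribed analyses): in DL1, a falling object with, for example, linear drag obeys
\[
m\frac{dv}{dt} = mg - bv \quad\Rightarrow\quad v_t = \frac{mg}{b} ,
\]
so early-time data constrain $g$ while the late-time plateau gives the terminal velocity $v_t$; in DL2, plotting measured voltage against current tests Ohm's Law, $V = IR$, with the slope of the resulting line giving the resistance $R$.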
As these are the first DATA Lab experiments for either course, the instructors take a more hands-on and guiding approach than they will later in the semester. In DL1, these instructional changes represent a dramatic shift from the guidance students had during the workshops where instructors are often quite involved. In DL2, the one week lab is intended to be simple enough that students can be reminded of the expectations with respect to the overall learning goals of the course.
\textbf{CP Prep Day:} As discussed in the prior section, the CPs comprise a large portion of the students' total grade in the course. In addition to the supports that were already mentioned -- in class grades, notebooks, CPHW, and a lower stakes CP1 -- in the spring semester, the MSU academic calendar offers time for a communication project prep day (pink squares in Fig.~\ref{weekly}). This gives the students an extra day where they have time to work on their CPs in class. They can take additional measurements, seek help from their group or instructor, or work on the project itself. This prep day allows for a gentler transition into the CPs with a bit more guidance. It also reduces the amount of work that the students have to do outside of class.
\section{In Course Assessments}
\label{assessments}
The DATA Lab activities described above were designed around the overall learning goals outlined in Sec.~\ref{LGSection}. As such, the course assessments were also aligned with these overall course goals. There are two types of assessments used in DATA Lab -- formative (to help the students improve upon their work) and summative (to evaluate the students' output); these are separated for clarity. In this section, the various assessment tools are discussed with respect to the overall learning goals of the course.
\subsection{Formative Assessments} \label{sec:assA}
In DATA Lab, the formative assessments comprise students' work on their in-class activities, lab notebooks, and CPHWs. Other than the pre-class homework, which is graded on completion, there is a rubric for each activity for which students receive a score. Each is structured to ensure that any improvements students make carry over to their CPs.
\textbf{In-class Participation}: In-class participation feedback is broken into group feedback, which covers what everyone in the group or the group as a whole needs to work on, and individual feedback, which is specific to the student and not seen by other group members. The general structure of the feedback follows an evaluation rubric used in other introductory courses and focuses on something they did well, something they need to work on, and advice on how to improve \cite{irving2017p3}. It is expected that students will work on the aspects mentioned in their prior week's feedback during the next week's class. Students are graded based on their response to that feedback. Any improvements they make with respect to the learning goals in class will also likely impact how well they complete their CPs.
Students' in-class participation is assessed with respect to two components, group function and experimental design. Specifically, group function covers their work in communication, collaboration, and discussion (LG3,4). For communication they are expected to contribute to and engage in group discussions. To do well in collaboration, students should come to class prepared and actively participate in the group's activities. Discussion means working as a group to understand the results of their experiment. Experimental design evaluates the process that students take through the experiment and their engagement in experimental physics practices (LG1,2). They are expected to engage with and show competence in use of equipment, employ good experimental practices (i.e., work systematically, make predictions, record observations, and set goals) and take into account where uncertainty plays into the experimental process (i.e., reduce, record, and discuss it).
Specifically for the DL1 Workshops, instructors grade students differently than they would for a typical experiment. The emphasis for the workshops is on the group function aspect of the rubrics, communication and participation. This is because the students are being eased into the expectations that the instructors have around experimental work.
\textbf{Lab Notebooks}: Feedback and grades for lab notebooks are only provided after the experiment is completed (the two-week block in Figs.~\ref{weekly} \&~\ref{snpsht}). Students receive individual feedback on their notebook, although members of a group may receive feedback on some of the same things simply because they conducted the experiment together. As with in-class participation, it is expected that the students will work on the aspects mentioned in their feedback for the next lab notebook, and the instructor can remind them of these things in class during the experiment.
Lab notebooks are also graded over two components, experimental design and discussion. Experimental design focuses on the experimental process and how students communicate it (LG1,4). Here, instructors typically look for clearly recorded steps and results, and intentional progression through the experiment. Discussion covers uncertainty in the measurements and the models, as well as the results, with respect to any plots and conclusions (LG2,4). These evaluation rubrics for the lab notebooks were designed to be aligned with those for the CPs, so that when students work toward improving their notebooks they are also making improvements that will benefit their CPs. For example, if a student is getting better at analyzing data and communicating their results within their notebooks, instructors should expect the same improvement to transfer to their CPs.\\
\indent For the DL1 Workshops, the lab notebooks are graded on the same components but the grades and feedback are specifically focused on the parts of the rubric that the students should have addressed in each of the previous workshops. For example, as documentation is emphasized in the last workshop, the students are not heavily penalized on poorly documented procedures in the first two workshops.\\
\textbf{CPHW}: The goal of the CPHWs is to have students think about creating a more complete CP that connects their in-class work to the bigger picture. Each CPHW has a different rubric because each one addresses a different aspect of the CPs.
\textit{Figure and caption}: The students create a figure with a robust caption based on the data from one of the labs they completed. Both the figure and the caption are evaluated on communication and uncertainty (LG2,4). For the plot, the students are expected to visualize the data clearly with error bars and it should provide insight into the various parameters within the experiment. For the caption, students need to discuss what is being plotted, make comparisons to the model including deviations, and draw conclusions that include uncertainty. \\
\textit{Abstract}: For a given experiment, students write a research abstract that covers the main sections of their project including introduction, methods, results, and conclusion. These are assessed on experimental process (motivation and clarity of the experiment) (LG1,4), and discussion (results and conclusions) (LG2,4).\\
\textit{Critique (DL1 only)}: Students are given an example proposal that they must read, critique, and grade. This assignment plays two roles. First, students must examine a proposal, which should help to produce their own. Second, students must critique the proposal, which should help them provide better critiques to their peers. Students' performance is evaluated based on their identification of the different components of a proposal, and the quality of the feedback they provide (LG4).\\
\textit{Background (DL2 only)}: Students are tasked with finding three out-of-class sources related to one of their optics experiments, which they must summarize and connect back to the experiment. Students are evaluated on the quality and relevance of their sources, including the background and real-life connections (LG2,4).
\subsection{Summative Assessment} \label{sec:assB}
The CPs form the sole summative assessment of student learning in DATA Lab. As described above, each of the formative assessments is designed to align with the goals of the CPs.
\textbf{CPs}: As mentioned above, although students conduct the experiments together, the CPs are completed individually. In DL1, students' CP is a proposal that emphasizes their prior work and discusses a proposed piece of future work. As a result, the CP rubric is divided into two sections, prior and future work. Within those sections, there is a focus on experimental design and discussion. This rubric was iterated on after piloting the course for two semesters: it was found that students would often neglect either their future work or prior work when these were not directly addressed in the rubric, and the rubrics were reorganized to account for this. Experimental design, which covers methods and uncertainty, focuses on the experimental methods and the uncertainty in measurements, models, and results when students discuss their prior work (LG1,2,4). In future work, experimental design refers to the proposed experimental methodology and the reasoning behind their choices (LG1,4). For the students' discussion of prior work, the rubric emphasizes how they communicate their results (LG2,4). When students discuss their future work, the rubric emphasizes the novelty of the proposed experiment and the arguments made on the value of the project (LG1,4).
In DL2, students' CP is a poster that they present to their classmates for peer review. The rubric includes an additional component on the presentation itself, but the rubric still emphasizes the experimental design and discussion. Experimental design covers communication of the experimental process including students' reasoning and motivation. Discussion focuses on the discussion of uncertainty (i.e., in the measurements and models) and the discussion of results (i.e., in the plot and conclusions). The additional component focusing on presentation is divided into specifics about the poster (i.e., its structure, figures, layout) and the student's presentation of the project (i.e., clear flow of discussion, ability to answer questions).
\section{Example Experiment}
\label{ExpOverSection}
The preceding sections described the course structures, supports, and assessments of DATA Lab. In this section, the key supports are grounded in examples from a specific two-week experiment to better contextualize the features of the course. Additional experiments are listed in Tables \ref{DL1exp} and \ref{DL2exp} in the Appendix. The chosen experiment is from DL2 and is called ``Snell's Law: Rainbows''. In this experiment, students explore the index of refraction for different media and different wavelengths of light.
Before attending the first day of the laboratory activity, students are expected to conduct the pre-class homework assignment, including the recommended research in Fig.~\ref{snells1}. In addition, the homework questions for the first day of a new experiment address the pre-class research, as follows:
\begin{figure}
\fbox{
\parbox{0.85\columnwidth}{
{\bf Research Concepts}\\
\begin{flushleft}
To do this lab, it will help to do some research on the concepts underlying the bending of light at interfaces including:
\begin{itemize}[noitemsep,nolistsep]
\item Snell's Law (get more details than presented here)
\item Refraction and how it differs from reflection
\item Index of refraction of materials
\item Fiber optics
\item Using this simulation might be helpful: http://goo.gl/HEflDI
\item How to obtain estimates for fits in your data (e.g., the LINEST function in Excel - http://goo.gl/wiZH3p)
\end{itemize}
\end{flushleft}
}}
\caption{Pre-class research prompts for the Snell's Law lab.\label{snells1}}
\end{figure}
\begin{figure*}[t]
\fbox{
\parbox{1.8\columnwidth}{
{\bf Part 1 - Observing Light in Water}\\
\begin{flushleft}
At your table, you have a tank of water and a green laser. Turn on the green laser and point it at the water's surface.
\begin{itemize}[noitemsep,nolistsep]
\item What do you notice about the beam of light in the water?
\item What about the path the light takes from the source to the bottom of the tank?
\end{itemize}
Let's get a little quantitative with this setup. Can you measure the index of refraction of the water? You have a whiteboard marker, a ruler, and a protractor to help you. Don't worry about making many measurements; just see if you can get a rough estimate by taking a single measurement.
\begin{itemize}
\item What does your setup and procedure look like for this experiment?
\item What part(s) of your setup/procedure is(are) the main source of uncertainty for this
measurement?
\item Can you gain a sense of the uncertainty in this measurement?
\item How close is your predicted value to the ``true value'' of the index of refraction of water?\\
\end{itemize}
\end{flushleft}
\vspace{11pt}
\noindent\rule{4in}{0.4pt}
\vspace{11pt}
\begin{flushleft}
On the optical rail you have a half circle shape of acrylic that is positioned on a rotating stage, with angular measurements. You also have a piece of paper with a grid attached to a black panel (i.e., a ``beam stop''). Using this setup, you will test Snell's Law for the green laser. Your group will need to decide how to set up your experiments and what measurements you will make. You should sketch the setup in your lab notebook and it would be good to be able to explain how your measurements relate to Snell's Law (i.e., how will the laser beam travel and be bent by the acrylic block?). In conducting this experiment, consider,
\begin{itemize}[noitemsep,nolistsep]
\item What measurements do you need to make?
\item What is the path of the laser beam and how does it correspond to measurements that you are making?
\item What is a good experimental procedure for testing Snell's Law?
\item What kind of plot is a useful one to convey how the model (Snell's Law) and your measurements match up?
\item Where is the greatest source of uncertainty in your experimental setup? What does that mean about the uncertainty in your measurements?
\end{itemize}
\end{flushleft}
}}
\caption{Snell's Law: Rainbows Lab Handout. \textit{Top}: Exploring refraction, first day of Snell's Law. \textit{Bottom}: Beginning model evaluation, main Snell's Law activity. \label{snells23}}
\end{figure*}
\begin{enumerate}
\item Describe something you found interesting in your pre-class research.
\item From reading your procedure, where do you think you may encounter challenges in this lab? What can you do to prepare for these?
\item Considering your assigned lab, is there anything specific about the lab handout that is unclear or confusing?
\end{enumerate}
The first day of the lab begins with exploring refraction in a water tank. Students are asked to qualitatively explore the index of refraction of the water using a simple setup (Fig.~\ref{snells23}). The exploration is fully student-led; they investigate the laser and tank, discussing what they see with their group as they go and recording their observations in their notebooks. Students observe that the path of the light changes once the laser crosses the air-water boundary. Students are then led to a quantitative exploration by determining the index of refraction of the water; instructors expect the students to have an idea of how to do this after their pre-class research. If students are not sure how to start, they are encouraged to search for Snell's Law online where they can quickly find a relevant example. The instructors check in with the students toward the end of this work. Typically, instructors will ask about the questions outlined in the lab handout.
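A rough single-measurement estimate of the kind requested in the handout follows directly from Snell's Law: with the incidence angle $\theta_1$ measured in air ($n_1 \approx 1$) and the refracted angle $\theta_2$ measured in the water,
\[
n_1 \sin\theta_1 = n_2 \sin\theta_2
\quad\Rightarrow\quad
n_{\mathrm{water}} \approx \frac{\sin\theta_1}{\sin\theta_2} .
\]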
The next part of the experiment is where students work to gain precision in their measurements and evaluate the model of the system. This part is most similar to a traditional laboratory course. The difference is that the students are told the goal but not how to proceed (see Fig.~\ref{snells23}). There are a number of decisions they must make as a group as they progress. Students record and explain their decisions in their lab notebooks; they might also discuss them with their instructor.
Typically by the end of the first day students know how to set up their experiment and have documented that in their lab notebooks. They are unlikely to have taken more than one measurement (the design and investigation phase in Fig.~\ref{snpsht}). They will return the following week to complete their experiment. The homework questions between the first week and the week that they return emphasize students' reflections on the previous week. Students are also asked to think about the experiment outside of class. The typical homework questions prior to Week 2 are the following:
\begin{enumerate}
\item Because you will be working on the same lab this week, it is useful to be reflective on your current progress and plans. Describe where your group ended up in your current lab, and what you plan to do next.
\item Now that you are halfway through your current lab and are more familiar with the experiment, what have you done to prepare for this upcoming class?
\item Describe something that you found interesting in your current lab and what you would do to investigate it further.
\end{enumerate}
\begin{figure*}[t]
\includegraphics[clip, trim=0 0 0 0, width=0.8\linewidth]{CPExample-New.png}
\caption{Sample of a student's Communication Project for DL2. \textit{Blue}: graph with sine of the angle of incidence plotted against sine of the angle of refraction for each wavelength of light. \textit{Green}: The slope for each wavelength, which is the index of refraction of the block. \textit{Red}: Results and conclusions where they discuss the differences in the indices of refraction and how that is related to rainbows.\label{poster}}
\end{figure*}
The second week starts with setting up the experiment again and beginning the process of taking multiple measurements. At this point, students often break up into different roles: someone manipulating the equipment, one or two people taking measurements, and someone recording the data and/or doing calculations. These roles are what students appear to fall into naturally, and are not assigned to them. However, if one student is always working in Excel or always taking the measurements, instructors will address it in their feedback and encourage the students to switch roles.
The next step depends on the amount of time that students have left in the class. If there is not much time, students focus on the data from one wavelength of light. If they have more time, they can make the same measurements with lasers of different wavelengths. In both cases, students can determine the index of refraction of the acrylic block. With multiple wavelengths, students are able to see that the index of refraction depends on wavelength. This leads to a conversation with the instructor about how this relates to rainbows and a critique of the model of refraction -- Snell's Law.
Most of the analysis that students conduct in this example experiment is the same regardless of how many lasers they collected data with (discussion and analysis in Fig.~\ref{snpsht}). While considering the different variables in their experiment, students are expected to make a plot where the slope tells them something about the physical system. In this case, the design is intended for the students to plot the sine of the angle of incidence on the x-axis and the sine of the angle of refraction on the y-axis, which makes the slope the index of refraction of the acrylic block. The optics experiments occur in the second half of the semester after the students have become familiar with constructing linear plots from nonlinear functions. For this lab, students usually do not have much difficulty determining what they should plot. After they obtain the slope and the error in the slope, students will typically compare it to the known index of refraction of the acrylic block. They must research this online as it is not provided anywhere for them in the lab handout.
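For reference, the slope and its error that students extract (e.g., via LINEST) are the standard unweighted least-squares estimates; a minimal sketch for $N$ plotted points $(x_i, y_i)$ -- here the sines of the measured angles -- is
\[
m = \frac{N\sum x_i y_i - \sum x_i \sum y_i}{N\sum x_i^2 - \left(\sum x_i\right)^2} ,
\qquad
\sigma_m = \sqrt{\frac{N}{N-2}\,\frac{\sum_i \left(y_i - m x_i - c\right)^2}{N\sum x_i^2 - \left(\sum x_i\right)^2}} ,
\]
where $c$ is the fitted intercept.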
The second day of the experiment ends with a discussion of their plot. Students construct a conclusion in their notebooks that summarizes the results, what they found, what they expected, reasons for any differences, and an explanation of what it all means in the larger physics context.
After the experiment, the students may have their third and final CPHW, background/literature review. In the case of Snell's Law, students would be asked to find three additional sources where these concepts are used in some other form of research, often in the field of medicine but also in physics or other sciences. Students then summarize what they did in class and connect their experimental work to the sources that they found.
The student can choose to do their second CP on this experiment. An example of a poster can be seen in Fig.~\ref{poster}. In the figure, three key features are highlighted. First, in the blue box, is the graph where the student plotted all three wavelengths of light. In the green box, is the slope for each color, which is the index of refraction of the acrylic for each laser. Finally, in the red boxes, are their results and conclusion. In the top box, the student explained why their indices are different, that is, because of the assumption that Snell's Law is wavelength independent. In the bottom box, they make the connection to rainbows. The student would present this poster during the in-class poster session, to their peers and their instructor.
\section{Redesign Efficacy}
\label{efficacy}
To measure the efficacy of the DATA Lab course transformation, the Colorado Learning Attitudes about Science Survey for Experimental Physics (E-CLASS) \cite{zwickl2014epistemology} was implemented in the traditional laboratory course as well as the transformed courses. The E-CLASS is a research-based assessment tool used to measure students' epistemology and expectations about experimental physics \cite{wilcox2016students,hu2017qualitative,wilcox2018summary}. The well-validated survey consists of 30 items (5-point Likert scale) where students are asked to rate their level of agreement with each statement. The scoring method of this assessment was adapted from previous studies \cite{adams2006new}. First, the 5-point Likert scale is compressed into a 3-point scale; ``(dis)agree'' and ``strongly (dis)agree'' are combined into one category. Then, student responses are compared to the expert-like response; a response that is aligned with the expert-like view is assigned a $+1$ and a response that is opposite to the expert-like view is assigned a $-1$. All neutral responses are assigned a 0. For our comparison between the traditional and transformed courses, we will report the percentage of students with expert-like responses.
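Stated compactly, each response is mapped to a score $s_i \in \{+1, 0, -1\}$ as described above, and the statistic reported below is the fraction of expert-like responses,
\[
f = \frac{N_{+1}}{N} ,
\]
where $N_{+1}$ is the number of responses scored $+1$ out of $N$ total responses.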
In DL1 and DL2, the E-CLASS was administered as an online extra credit assignment both pre- and post-instruction. Throughout the course transformation, a total of 1,377 students in DL1 and 925 students in DL2 provided matched (both pretest and post-test) E-CLASS scores. Figure \ref{eclass} shows the fraction of students with expert-like responses in the traditional course and the transformed course for (a) DL1 and (b) DL2. Students in the traditional courses had a decrease of 3\% and 1\%, respectively, in their expert-like attitudes and beliefs toward experimental physics from pre- to post-instruction. However, in the transformed DATA Lab courses, the students' expert-like views of experimental physics increased by 4\% in DL1 and by 6\% in DL2.
To explore the impact of the course transformation after controlling for students' incoming epistemology and expectations about experimental physics, ANCOVA was used to compare students' attitudes and beliefs post-instruction between the traditional courses and the transformed courses. For both DL1 and DL2, results showed that there was a significant difference in E-CLASS post-test percentages between the traditional courses and the transformed courses (both $p < 0.001$). Specifically for DL1, results demonstrated a significant 7\% post-test difference in expert-like responses between the traditional course and the transformed course after controlling for E-CLASS pretest scores. For DL2, there was a significant 9\% difference in post-test responses between the traditional and transformed courses after controlling for students' incoming E-CLASS responses. Overall, the transformation in both DL1 and DL2 had a positive impact on students' epistemological views and expectations about experimental physics.
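For clarity, this analysis corresponds to the standard single-covariate ANCOVA model (a sketch of the assumed form, not the exact software specification),
\[
\mathrm{post}_i = \beta_0 + \beta_1\,\mathrm{pre}_i + \beta_2\,T_i + \varepsilon_i ,
\]
where $T_i$ indicates enrollment in the transformed course and $\beta_2$ estimates the adjusted post-test difference quoted above.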
\begin{figure}
\includegraphics[clip, trim=0 0 0 0, width=\linewidth]{ECLASS-3-26-2019.png}
\caption{Fraction of students with expert-like responses for (a) DL1 and (b) DL2.\label{eclass}}
\end{figure}
\section{Conclusion}
\label{conclusions}
In this paper, the large scale transformation of the MSU algebra-based physics labs for life science students was described. The design was divorced from the specific physics content because the learning goals developed from a faculty consensus design did not include specific content. This design means that the individual lab activities do not matter {\it per se}; instead, the structure of the course and how students work through the lab are what is important. In principle, one could adapt this design to a chemistry or biology lab by making adjustments to the kinds of lab activities, along with relevant changes to the learning goals. That being said, there are still key structures that ensure the functioning of the course, which will be covered in detail in a subsequent paper (e.g., a leadership team of four instructors -- two GTAs and two ULAs -- tasked with maintaining consistent grading and instruction across the sections).
The transformation was centered on experimental physics practices. The overall efforts were focused on this two-course series because the majority of the students taking courses in the MSU physics department are enrolled in the introductory algebra-based series, specifically 2,000 students per year. In addition, the majority of the student instructors in the MSU physics and astronomy department, nearly 80 graduate teaching assistants and undergraduate learning assistants, teach in these labs. Because of its scale, special attention was given to the voice of the physics faculty in the development of the learning goals for DATA Lab \cite{wieman2017improving}. The entire course was designed around the faculty-consensus learning goals, which are all based around physics laboratory practices (Sec.~\ref{LGSection}). From course structures to assessments, everything was intentionally aligned with the overall learning goals. Each component of the course builds upon another through the two-semester sequence, and each individual lab activity builds skills that will be valuable for each subsequent activity, from lab handouts to pre-class homework assignments. Such an effort was put into designing this course sequence in large part because of the number of MSU undergraduate students it serves. The value in physics labs for these non-majors lies in the scientific practices on which the redesign was centered; those skills and practices are what students will take with them into their future careers.
\begin{acknowledgments}
This work was generously supported by the Howard Hughes Medical Institute, Michigan State University's College of Natural Science, as well as the Department of Physics and Astronomy. The authors would like to thank the faculty who participated in the discussion of learning goals. Additionally, we would like to thank S. Beceiro-Novo, A. Nair, M. Olsen, K. Tollefson, S. Tessmer, J. Micallef, V. Sawtelle, P. Irving, K. Mahn, J. Huston who have supported the development and operation of DATA Lab. We also thank the members of the Physics Education Research Lab who have given feedback on this work and this manuscript.
\end{acknowledgments}
\section{Introduction}
\LaTeX\ is typesetting software that is widely used by mathematicians
and physicists because it is so good at typesetting equations. It is
also completely programmable, so it can be configured to produce
documents with almost any desired formatting, and to automatically
number equations, figures, endnotes, and so on.
To prepare manuscripts for the American Journal of Physics (AJP),
you should use the REV\TeX\ 4.1 format for Physical Review B
preprints, as indicated in the \texttt{documentclass} line at the top
of this article's source file. (If you're already familiar with
\LaTeX\ and have used other \LaTeX\ formats, please resist the
temptation to use them, or to otherwise override REV\TeX's formatting
conventions, in manuscripts that you prepare for AJP.)
This sample article is intended as a tutorial, template, and reference for
AJP authors, illustrating most of the \LaTeX\ and REV\TeX\ features that
authors will need. For a more comprehensive introduction to \LaTeX,
numerous books and online references are available.\cite{latexsite,
wikibook, latexbook} Documentation for the REV\TeX\ package
can be found on the APS web site.\cite{revtex}
\LaTeX\ is free software, available for Unix/Linux, Mac OS X, and Windows
operating systems. For downloading and installation instructions, follow
the links from the \LaTeX\ web site.\cite{latexsite} It is most
convenient\cite{cloudLaTeX} to install a ``complete \TeX\ distribution,''
which will include \LaTeX, the underlying \TeX\ engine, macro packages
such as REV\TeX, a large collection of fonts, and GUI tools for editing
and viewing your documents. To test your installation, try to process
this sample article.
\section{Ordinary text and paragraphs}
To typeset a paragraph of ordinary text, just type the text in your source
file like this. Put line breaks
wherever
you
want, and don't worry about extra spaces between words, which \LaTeX\ will ignore. You can almost always trust \LaTeX\ to make your paragraphs look good, with neatly justified margins.
To start a new paragraph, just leave a blank line in your source file.
A few punctuation characters require special treatment in \LaTeX. There
are no ``smart quotes,'' so you need to use the left-quote key (at the
top-left corner of the keyboard) for a left quote, and the ordinary apostrophe
key (next to the semi-colon) for a right quote. Hit either key twice for double
quotes, which are standard in American English. Don't use shift-apostrophe
to make double quotes. Use single quotes when they're nested inside a
double-quoted quotation. When a period or comma belongs at the end of
a quotation, put it inside the quotes---even if it's not part of what you're
quoting.\cite{nevermindlogic}
Your fingers also need to distinguish between a hyphen (used for
multi-word adjectives and for hyphenated names like Lennard-Jones), an
en-dash (formed by typing two consecutive hyphens, and used for ranges
of numbers like 1--100), and an em-dash (formed out of three consecutive
hyphens and used as an attention-getting punctuation symbol---preferably
not too often).
Some non-alphanumeric symbols like \$, \&, and \% have special meanings
in a \LaTeX\ source file, so if you want these symbols to appear in the output,
you need to precede them with a backslash.
There are also special codes for generating the various accents
that can appear in foreign-language words and names, such as Amp\`ere
and Schr\"odinger.\cite{FontEncodingComment}
You can switch to \textit{italic}, \textbf{bold}, and \texttt{typewriter} fonts
when necessary. Use curly braces to enclose the text that is to appear in
the special font. In general, \LaTeX\ uses curly braces to group characters
together for some common transformation.
Notice that any word or symbol preceded by the backslash character is
a special instruction to \LaTeX, typically used to produce a special
symbol or to modify the typeset output in some way. These instructions
are also called \textit{control sequences} or \textit{macros}.
After you've used \LaTeX\ for a while, the little finger of your right
hand will be really good at finding the backslash and curly-brace keys.
\section{Math symbols}
To type mathematical symbols and expressions within a paragraph, put
them between \$ signs, which indicate \textit{math mode}: $ab + 2c/d = e-3f$.
\LaTeX\ ignores spaces in math mode, using its own algorithms to determine
the right amount of space between symbols. Notice that an ordinary letter
like~$x$, when used in math mode, is automatically typeset in italics.
This is why you need to use math mode for all mathematical
expressions (except plain numerals), even when they don't contain any
special symbols. But don't use math mode to italicize ordinary \textit{words}.
Besides ordinary letters and numerals and common arithmetic symbols, math
mode provides a host of other characters that you can access via control
sequences.\cite{wikimathpage} These include Greek letters like $\pi$ and
$\Delta$ (note capitalization), symbols for operations and relations such
as $\cdot$, $\times$, $\pm$, $\gg$, $\leq$, $\sim$, $\approx$, $\propto$,
and $\rightarrow$, and special symbols like $\nabla$, $\partial$, $\infty$,
and~$\hbar$. You can decorate symbols with dots ($\dot x$ or $\ddot x$),
arrows ($\vec\mu$), bars ($\bar x$ or $\overline m$), hats ($\hat x$),
tildes ($\tilde f$ or $\widetilde w$), and radicals ($\sqrt\pi$, $\sqrt{2/3}$).
Parentheses and square brackets require no special keystrokes, but you
can also make curly braces and angle brackets: $\{\langle\ \cdots\ \rangle\}$.
To make subscripts and superscripts, use the underscore and caret
(circumflex) symbols on your keyboard: $x^\mu$, $g_{\mu\nu}$, $\delta^i_j$,
$\epsilon^{ijk}$. Notice that you need to put the subscript or superscript
in curly braces if it's longer than one character (or one control sequence).
You can even make nested subscripts and superscripts, as in $e^{-x^2}$.
If a subscript consists of an entire word or word-like abbreviation,
we usually put it in plain Roman type: $x_\textrm{max}$. If you need to
put a subscript or superscript \textit{before} a symbol, use an empty
set of curly braces: ${}^{235}_{\ 92}\textrm{U}$. (Notice the trick of using
backslash-space to put a space before the 92.)
\newcommand{\bE}{\mathbf{E}}
To make boldface letters you use the \verb/\mathbf/ control sequence, as in
$\nabla\times\mathbf{E} = -\partial\mathbf{B}/\partial t$. For bold Greek
letters like $\boldsymbol{\omega}$, you need to use \verb/\boldsymbol/
instead. You can also use calligraphic ($\mathcal{E}$), Fraktur
($\mathfrak{D}$), and blackboard bold ($\mathbb{R}$) fonts, if you need them.
If you'll be using a symbol in a special font repeatedly, you can save
some keystrokes by defining an abbreviation for it; for example, the
definition \verb/\newcommand{\bE}{\mathbf{E}}/ allows you to type simply
\verb/\bE/ to get $\bE$.
Unit abbreviations, as in $1~\mathrm{eV} = 1.6\times10^{-19}~\mathrm{J}$,
should be in the plain Roman font, not italics. You can access this font
from math mode using \verb/\mathrm/. For function names like $\sin\theta$,
$\exp x$, and $\ln N!$, \LaTeX\ provides special control sequences,
which you should use instead of \verb/\mathrm/ whenever possible because
they work better with \LaTeX's automatic spacing algorithms.
But \LaTeX\ doesn't always get the spacing right in mathematical formulas.
In the previous paragraph we had to use the \verb/~/ symbol to
manually insert a space between each number and its units. The \verb/~/
symbol actually represents an unbreakable space, where \LaTeX\ will never
insert a line break. For occasional minor adjustments to the spacing
in a \LaTeX\ expression, you can insert or remove a little
space with \verb/\,/ and \verb/\!/. Use these macros sparingly,
because \LaTeX's default spacing rules will provide more consistency
within and among AJP articles. The most common use of \verb/\,/
is in expressions like $T\,dS - P\,dV$.
\section{Displayed equations}
\label{DispEqSection}
When an equation is important and/or tall and/or complicated, you should
display it on a line by itself, with a number. To do this, you put
\verb/\begin{equation}/ before the equation and \verb/\end{equation}/
after it, as in
\begin{equation}
\int_0^\infty \! \frac{x^3}{e^x - 1} \, dx = 6\sum_{k=1}^\infty \frac1{k^4} =
6\left(\frac{\pi^4}{90}\right) = \frac{\pi^4}{15}.
\end{equation}
This example also shows how to make the sum and integral symbols, big parentheses,
and built-up fractions. (Don't put built-up fractions in a
non-displayed equation, because there won't be enough vertical space in
AJP's final, single-spaced paragraphs. Use the slashed form, $x^3/(e^x-1)$,
instead.)
If you want to refer to an equation elsewhere in your manuscript, you can
give it a label. For example, in the equation
\begin{equation}
\label{deriv}
\frac{\Delta x}{\Delta t} \mathop{\longrightarrow}_{\Delta t\rightarrow0} \frac{dx}{dt}
= \lim_{\Delta t\rightarrow0} \frac{\Delta x}{\Delta t}
\end{equation}
we've inserted \verb/\label{deriv}/ to label this equation
\texttt{deriv}.\cite{labelnames} To refer to
Eq.~(\ref{deriv}), we then type \verb/\ref{deriv}/.\cite{footnotes} Notice
that AJP's style conventions also require you to put the equation number in
parentheses when you refer to it, and to abbreviate ``Eq.''\ unless it's at
the beginning of a sentence.
Some equations require more complicated layouts. In the equation
\begin{equation}
E_n = (n + \tfrac12)\hbar\omega, \quad \textrm{where}\ n = 0, 1, 2, \ldots,
\end{equation}
we've used \verb/\quad/ to leave a wide space and \verb/\textrm/ to put ``where''
in plain Roman type. To create a matrix or column vector, as in
\begin{equation}
\begin{bmatrix}
t' \\
x' \\
\end{bmatrix}
=
\begin{pmatrix}
\gamma & -\beta\gamma \\
-\beta\gamma & \gamma \\
\end{pmatrix}
\begin{bmatrix}
t \\
x \\
\end{bmatrix},
\end{equation}
you can use the \texttt{pmatrix} and/or \texttt{bmatrix} environment,
for matrices delimited by parentheses and/or brackets. There's also
a plain \texttt{matrix} environment that omits the delimiters.
In this and other examples of \LaTeX\ tables and arrays, the \verb/&/
character serves as a ``tab'' to separate columns, while the \verb/\\/
control sequence marks the end of a row.
For a list of related equations, with nicely lined-up equals signs,
use the \texttt{eqnarray} environment:
\begin{eqnarray}
\oint \vec E \cdot d\vec\ell & = & -\frac{d\Phi_B}{dt} ; \\
\oint \vec B \cdot d\vec\ell & = & \mu_0\epsilon_0\frac{d\Phi_E}{dt} + \mu_0 I.
\end{eqnarray}
You can also use \texttt{eqnarray} to make a multi-line equation, for example,
\begin{eqnarray}
\mathcal{Z}
& = & 1 + e^{-(\epsilon-\mu)/kT} + e^{-2(\epsilon-\mu)/kT} + \cdots \nonumber \\
& = & 1 + e^{-(\epsilon-\mu)/kT} + (e^{-(\epsilon-\mu)/kT})^2 + \cdots \nonumber \\
& = & \frac{1}{1 - e^{-(\epsilon-\mu)/kT}}.
\end{eqnarray}
Here the first column of the second and third lines is empty. Note that you
can use \verb/\nonumber/ within any line to suppress the generation of
an equation number; just be sure that each multi-line equation has at least
one number.
Another commonly used structure is the \texttt{cases} environment, as in
\begin{equation}
m(T) =
\begin{cases}
0 & T > T_c \, , \\
\bigl(1 - [\sinh 2 \beta J]^{-4} \bigr)^{1/8} & T < T_c \, .
\end{cases}
\end{equation}
At AJP we require that you put correct punctuation before and after every
displayed equation, treating each equation as part of a correctly punctuated
English sentence.\cite{mermin} The preceding examples illustrate good
equation punctuation.
\section{Figures}
\LaTeX\ can import figures via the \verb/\includegraphics/ macro.
For AJP, you should embed this in the \texttt{figure} environment, which
can place the figure in various locations. This environment also lets
you add a caption (which AJP requires) and an optional label for referring
to the figure from elsewhere. See Fig.~\ref{gasbulbdata} for an example.
\begin{figure}[h!]
\centering
\includegraphics{GasBulbData.eps}
\caption{Pressure as a function of temperature for a fixed volume of air.
The three data sets are for three different amounts of air in the container.
For an ideal gas, the pressure would go to zero at $-273^\circ$C. (Notice
that this is a vector graphic, so it can be viewed at any scale without
seeing pixels.)}
\label{gasbulbdata}
\end{figure}
Most \LaTeX\ implementations can import a variety of graphics formats.
For graphs and line drawings you should use vector (i.e., resolution-independent)
graphics saved in encapsulated PostScript (.eps) or portable document
format (.pdf). Most good graphics software systems can save to one
or both of these formats. Please don't use a rasterized graphics format
(such as .jpg or .png or .tiff) for graphs or line drawings.
\begin{figure}[h!]
\centering
\includegraphics[width=5in]{ThreeSunsets.jpg}
\caption{Three overlaid sequences of photos of the setting sun, taken
near the December solstice (left), September equinox (center), and
June solstice (right), all from the same location at 41$^\circ$ north
latitude. The time interval between images in each sequence is approximately
four minutes.}
\label{sunsets}
\end{figure}
For photographs and other images that are \textit{inherently} made
of pixels (that is, rasters or bitmaps), \LaTeX\ can
(usually) handle the .jpg and .png formats as well as .eps and .pdf.
Figure~\ref{sunsets} is a .jpg example. For final production, however,
AJP prefers that raster images be in .tiff format. Most \LaTeX\ systems
can't import .tiff images, so we recommend using .png or .jpg with \LaTeX\
for your initial submission, while saving a higher-quality .tiff version
to submit as a separate file after your manuscript is conditionally accepted
for publication.
Please refer to the AJP editor's web site\cite{editorsite} for more details
on AJP's requirements for figure preparation.
\section{Tables}
Tables are somewhat similar to figures: You use the \texttt{table} environment
to let them ``float'' to an appropriate location, and to automatically number
them and format their captions. But whereas the content of a figure comes
from an external file, the content of a table is typeset directly in \LaTeX.
For that you use the \texttt{tabular} environment, which uses \verb/&/ and
\verb/\\/ for tabbing and ending rows, just like the \texttt{matrix} and
\texttt{eqnarray} environments discussed in Section~\ref{DispEqSection}.
Table~\ref{bosons} shows a fairly simple example. Notice that the caption comes
before the table itself, so it will appear above the table instead of below.
The \texttt{ruledtabular} environment, which surrounds \texttt{tabular},
provides the double horizontal lines at the top and bottom, and stretches
the table horizontally out to the margins. (This will look funny for tables
intended to fill only one column of a final journal page, but there's no
need to worry about such cosmetic details.)
\begin{table}[h!]
\centering
\caption{Elementary bosons}
\begin{ruledtabular}
\begin{tabular}{l c c c c p{5cm}}
Name & Symbol & Mass (GeV/$c^2$) & Spin & Discovered & Interacts with \\
\hline
Photon & $\gamma$ & \ \ 0 & 1 & 1905 & Electrically charged particles \\
Gluons & $g$ & \ \ 0 & 1 & 1978 & Strongly interacting particles (quarks and gluons) \\
Weak charged bosons & $W^\pm$ & \ 82 & 1 & 1983 & Quarks, leptons, $W^\pm$, $Z^0$, $\gamma$ \\
Weak neutral boson & $Z^0$ & \ 91 & 1 & 1983 & Quarks, leptons, $W^\pm$, $Z^0$ \\
Higgs boson & $H$ & 126 & 0 & 2012 & Massive particles (according to theory) \\
\end{tabular}
\end{ruledtabular}
\label{bosons}
\end{table}
Every table is a little bit different, and many tables will require
further tricks; see Refs.\ \onlinecite{wikibook} and~\onlinecite{latexbook}
for examples. Note that the AJP style does not ordinarily use lines
to separate rows and columns in the body of a table.
\section{Special formats}
\subsection{Block quotes}
If a quoted passage is long or important, you can use the \texttt{quote}
environment to typeset it as a block quote, as in this passage from The
Feynman Lectures:\cite{feynman}
\begin{quote}
A poet once said, ``The whole universe is in a glass of wine.'' We will
probably never know in what sense he meant that, for poets do not write
to be understood. But it is true that if we look at a glass of wine closely
enough we see the entire universe.
\end{quote}
\subsection{Numbered lists}
To create a numbered list, use the \texttt{enumerate} environment and start
each entry with the \verb/\item/ macro:
\begin{enumerate}
\item You can't win.
\item You can't even break even.
\item You can't get out of the game.
\end{enumerate}
\subsection{Unnumbered lists}
For a bulleted list, just use \texttt{itemize} instead of \texttt{enumerate}:
\begin{itemize}
\item Across a resistor, $\Delta V = \pm IR$.
\item Across a capacitor, $\Delta V = \pm Q/C$.
\item Across an inductor, $\Delta V = \pm L(dI/dt)$.
\end{itemize}
\subsection{Literal text}
For typesetting computer code, the \texttt{verbatim} environment reproduces
every character verbatim, in a typewriter font:
\begin{verbatim}
u[t_] := NIntegrate[
x^2 * Sqrt[x^2+t^-2] / (Exp[Sqrt[x^2+t^-2]] + 1), {x,0,Infinity}]
f[t_] := NIntegrate[
    x^2 * Log[1+ Exp[-Sqrt[x^2+t^-2]]], {x,0,Infinity}]
Plot[((11Pi^4/90) / (u[t]+f[t]+(2Pi^4/45)))^(1/3), {t,0,3}]
\end{verbatim}
There's also a \verb/\verb/ macro for typesetting short snippets of verbatim
text within a paragraph. To use this macro, pick any character that doesn't
appear within the verbatim text to use as a delimiter. Most of the examples
in this article use \texttt{/} as a delimiter, but in \verb|{a/b}| we've used
\verb/|/ instead.
\section{Endnotes and references}
This article has already cited quite a few endnotes, using the \verb/\cite/
macro. See the end of this article (and source file) for the endnotes
themselves, which are in an environment called \texttt{thebibliography}
and are created with the \verb/\bibitem/ macro. These macros require
you to give each endnote a name. The notes will be numbered in the
order in which the \verb/\bibitem/ entries appear, and AJP requires that
this order coincide with the order in which the notes are first cited in
the article. You can cite multiple endnotes in a single \verb/\cite/,
separating their names by commas. And you can cite each note as many
times as you like.
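To make this concrete, here is a minimal sketch of what such an endnote
section can look like at the end of your source file (the note names here
are made up for illustration):
\begin{verbatim}
\begin{thebibliography}{99}

\bibitem{firstnote} A bibliographic citation or an explanatory comment.

\bibitem{secondnote} Another note; cite both at once with
\cite{firstnote,secondnote}.

\end{thebibliography}
\end{verbatim}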
Notice that in the AJP (and Physical Review B) style, the citation numbers
appear as superscripts. Think carefully about the optimal placement of
each citation, and try not to attach citations to math symbols where the
numbers might be misinterpreted as exponents. Often there will be a
punctuation symbol after the word where you attach the citation; you
should then put the citation \textit{after} the punctuation, not
before.\cite{nevermindlogic}
If you want to refer directly to Ref.~\onlinecite{mermin} (or any other)
in a sentence, you can do so with the \verb/\onlinecite/ macro.
Most endnotes consist of bibliographic citations.\cite{noBIBTeX} Be sure
to learn and use the AJP styles for citing books,\cite{latexbook}
articles,\cite{dyson} edited volumes,\cite{examplevolume} and
URLs.\cite{latexsite} For example, article titles are in double quotes,
while book titles are in italics. Pay careful attention to all punctuation
symbols in citations. Note that AJP requires that all article citations
include titles as well as beginning and ending page numbers.
Please use standard abbreviations, as listed in the AIP Style
Manual,\cite{AIPstylemanual} for journal titles.
\section{Conclusion}
We hope this article will help you prepare beautifully typeset
manuscripts for the American Journal of Physics. Good typesetting requires
considerable attention to detail, but this effort will pay off by making your
manuscript easier and more enjoyable to read. Your colleagues, reviewers,
and editors will be grateful for your effort.
Of course, we encourage you to put as much care into the \textit{content}
of your manuscript as you put into its form. The AIP Style
Manual\cite{AIPstylemanual} is an indispensable reference on good physics
writing, covering everything from planning and organization to standard
spellings and abbreviations.
Most important of all, please familiarize yourself with the AJP Statement
of Editorial Policy,\cite{editorsite} which describes the types of manuscripts
that AJP publishes and the audience for which AJP authors are expected to write.
You wouldn't want to put all that care into preparing a manuscript for AJP,
only to find that AJP is the wrong journal for your manuscript.
We look forward to receiving your submission to AJP.
| {'timestamp': '2019-04-04T02:04:39', 'yymm': '1904', 'arxiv_id': '1904.01713', 'language': 'en', 'url': 'https://arxiv.org/abs/1904.01713'} |
\section{Introduction}
Among the various approaches used in meson physics, the formalism of
Bethe-Salpeter and Dyson-Schwinger equations (DSEs) plays a traditional and
indispensable role. The Bethe-Salpeter equation (BSE) provides a
field-theoretical starting point to describe hadrons as relativistic bound
states of quarks and/or antiquarks. For instance, the DSE and BSE framework has
been widely used in order to obtain nonperturbative information about the
spectra and decays of the whole lightest pseudoscalar nonet, with an
emphasis on the QCD pseudo-Goldstone boson --- the pion \cite{PSEUDO}.
Moreover, the formalism satisfactorily provides a window to the `next-scale'
meson sector, too, including vector, scalar \cite{SCALARY} and excited mesons.
Finally, electromagnetic form factors of mesons have been calculated with this
approach for space-like momenta \cite{FORMFAKTORY}.
When dealing with bound states composed of light quarks, it is unavoidable
to use the full covariant BSE framework. Nonperturbative knowledge of the
Green's functions, which form part of the BSE kernel, is required. Very often,
the problem is solved in Euclidean space, where it is more tractable, as
there are no Green's function singularities there. The physical amplitudes
can be then obtained by continuation to Minkowski space. Note that the
extraction of mass spectra is already a complicated task \cite{BHKRW2007}, not
to speak of an analytic continuation of Euclidean-space form factors.
When dealing with heavy quarkonia or mixed heavy mesons like $B_c$
(found at Fermilab by the CDF Collaboration \cite{BCmesons}),
some simplifying approximations are possible. Different
approaches have been developed to reduce the computational complexity of the
full four-dimensional (4D) BSE. The so-called instantaneous \cite{INSTA} and
quasi-potential approximations \cite{QUASI}
can reduce the 4D BSE to a 3D equation in a Lorentz-covariant manner. In
practice, such 3D equations are much more tractable, since their resolution is
less involved, especially if one exploits the considerable freedom in
performing the 3D reduction. Also note that, contrary to the BSE in the ladder
approximation, these equations reduce to the Schr\"{o}dinger equation of
nonrelativistic Heavy-Meson Effective Theory and nonrelativistic QCD
\cite{HEAVY}. However, the interaction kernels of the reduced equations
often correspond to input based on economical phenomenological models, and the
connection to the underlying theory (QCD) is less clear (if not abandoned from
the onset).
In the present paper, we extend the method of solving the full 4D BSE,
originally developed for pure scalar theories \cite{NAKAN,KUSIWI,SAUADA2003},
to theories with nontrivial spin degrees of freedom. Under a certain
assumption on the functional form of Green's functions, we develop a method
of solving the BSE directly in Minkowski space, in its original manifestly
Lorentz-covariant 4D form. In order to make our paper as self-contained as
possible, we shall next supply some basic facts about the BSE approach to
relativistic mesonic bound states.
The crucial step to derive the homogeneous BSE for bound states is the
assumption that the bound state reflects itself in a pole of the four-point
Green's function for on-shell total momentum $P$, with $P^2=M_j^2$, viz.\
\begin{equation}
G^{(4)}(p,p',P)=\sum_j\frac{-i}{(2\pi)^4}\frac{\psi_j(p,P_{os})
\bar{\psi_j}(p',P_{os})}{2E_{p_j}(P^0-E_{p_j}+i\epsilon)}+\mbox{regular terms}\;,
\end{equation}
where $E_{p_j}=\sqrt{\vec{p}\,{}^2+M_j^2}$ and $M_j$ is the (positive) mass of
the bound state characterized by the BS wave function $\psi_j$ carrying the
set of quantum numbers $j$.
Then the BSE can be conventionally written in momentum space like
\begin{eqnarray}
S_1^{-1}(p_+,P)\psi(p,P)S_2^{-1}(p_-,P)&&=-i\int\frac{d^4k}{(2\pi)^4}
V(p,k,P)\psi(k,P)\, ,
\\
p_+&&=p+\alpha P \, ,
\nonumber \\
p_-&&=p-(1-\alpha)P \, ,
\nonumber
\end{eqnarray}
or, equivalently, in terms of BS vertex function $\Gamma$ as
\begin{eqnarray} \label{wakantanka}
\Gamma(p,P)&=&-i\int\frac{d^4k}{(2\pi)^4}V(p,k,P)S_1(k_+,P)
\Gamma(k,P)S_2(k_-,P) \, ,
\end{eqnarray}
where we suppress all Dirac, flavor and Lorentz indices, and $\alpha\in(0,1)$.
The function $V $ represents the two-body-irreducible interaction kernel, and
$S_i$ ($i=1,2$) are the dressed propagators of the constituents. The free
propagators read
\begin{equation}
S_i^0(p)=\frac{\not p+m_i}{p^2-m^2_i+i\epsilon}.
\end{equation}
Concerning solutions to the BSE (\ref{wakantanka}) for pseudoscalar mesons,
they have the generic form \cite{LEW}
\begin{equation} \label{gen.form}
\Gamma(q,P)=\gamma_5[\Gamma_A+\Gamma_Bq.P\not\!q+\Gamma_C\not\!P+
\Gamma_D\not\!q\not\!P+ \Gamma_E\not\!P\not\!q] ,
\end{equation}
where
the $\Gamma_i$, with $i=A,B,C,D,E$, are scalar functions of their arguments
$ P,q$. If the bound state has a well-defined charge parity, say
${\cal{C}}=1$, then these functions are even in $q.P$, and furthermore
$\Gamma_D=-\Gamma_E$.
As was already discussed in Ref.~\cite{MUNCZEK}, the dominant contribution to
the BSE vertex function for pseudoscalar mesons comes from the first term in
Eq.~(\ref{gen.form}). This is already true, at a 15\% accuracy level, for the
light pseudoscalars $\pi,K,\eta$, while in the case of ground-state heavy
pseudoscalars, like the $\eta_c$ and $\eta_b$, the contributions from the other
tensor components in Eq.~(\ref{gen.form}) are even more negligible.
Hence, at this stage of our Minkowski calculation, we also approximate our
solution by taking $\Gamma=\gamma_5\Gamma_A$.
The interaction kernel is approximated by the dressed gluon propagator,
with the interaction gluon-quark-antiquark vertices taken in their bare forms.
Thus, we may write
\begin{equation} \label{landau}
V(p,q,P)=g^2(\kappa) D_{\mu\nu}(p-q,\kappa)\gamma^{\nu}\otimes\gamma^{\mu} \, ,
\end{equation}
where the full gluon propagator is renormalized at a scale $\kappa$. The
effective running strong coupling $\alpha_s$ is then related to $g$ through the
equations
\begin{eqnarray} \label{gluon}
g^2(\kappa)D_{\mu\nu}(l,\kappa)&&=
\alpha_s(l,\kappa)\frac{ P^T_{\mu\nu}(l)}{l^2+i\epsilon}-\xi g^2(\kappa)
\frac{l_{\mu}l_{\nu}}{l^4+i\epsilon}\, ,
\\
\alpha_s(q,\kappa)&&=\frac{g^2(\kappa)}{1-\Pi(q^2,\kappa)}\, ,
\nonumber \\
P^T_{\mu\nu}(l)&&=-g_{\mu\nu}+\frac{l_{\mu}l_{\nu}}{l^2}\, .
\nonumber
\end{eqnarray}
From the class of $\xi$-linear covariant gauges, the Landau gauge $\xi=0$ will
be employed throughout the present paper.
In the next section, we shall derive the solution for the dressed-ladder
approximation to the BSE, i.e., all propagators are considered dressed ones,
and no crossed diagrams are taken into account. The BSE for quark-antiquark
states has many times been treated in Euclidean space, even beyond the ladder
approximation. Most notably, the importance of dressing the proper vertices in
the light-quark sector was already stressed in Ref.~\cite{ACHJO}, so our
approximations are certainly expected to have a limited validity.
Going beyond the rainbow ($\gamma_{\mu}$) approximation is straightforward
but rather involved. (For comparison, see the Minkowski studies of
Schwinger-Dyson equations published in Refs.~\cite{SAULIJHEP,SAULI2},
the latter including the minimal-gauge covariant vertex instead of the
bare one.) In the present paper, we prefer to describe the computational
method rather than carrying out a BSE study with the most sophisticated
kernel known in the literature.
The set-up of this paper is as follows. In Sec.~2 we describe the method of
solving the BSE. As a demonstration, numerical results are presented in
Sec.~3. Conclusions are drawn in Sec.~4. The detailed derivation of the integral
equation that we actually solve numerically is presented in the Appendices.
\section{Integral representation and solution of the BSE}
In this section we describe our method of solving the BSE in Minkowski space.
It basically assumes that the various Green's functions appearing in the
interaction kernel can be written as weighted integrals over various
spectral functions, i.e., real distributions $\rho$.
More explicitly stated, the full quark and gluon propagators, the latter ones
in the Landau gauge, are assumed to satisfy the standard Lehmann
representation, which reads
\begin{equation} \label{srforquark}
S(l)=\int_{0}^{\infty}d\omega\frac{\rho_v(\omega)\not l+
\rho_s(\omega)}{l^2-\omega+i\epsilon}\,,
\end{equation}
\begin{equation} \label{srforgluon}
G_{\mu\nu}(l)=\int_{0}^{\infty}d\omega\frac{\rho_g(\omega)}{l^2-\omega+i\epsilon}
P^T_{\mu\nu}(l) \, ,
\end{equation}
where each $\rho$ is a real distribution.
Until now, with certain limitations, the integral representations
(\ref{srforquark}) and (\ref{srforgluon}) have been used for the
nonperturbative evaluation of Green's functions in various models
\cite{SAFBP}. However, we should note here that the true analytic structure
of QCD Green's functions is not reliably known, and some studies (see
Refs.~\cite{ALKSME,FISHER1,SABIA}) suggest that the structure given by
(\ref{srforquark}) and (\ref{srforgluon}) is not sufficient, if not excluded.
In that case, the Lehmann representation, or rather the use of a real $\rho$
in the integral representations (\ref{srforquark}) and (\ref{srforgluon}),
can be regarded as an analytic approximation of the true quark propagator.
The complexification of $\rho$ along a complex integration path is one
straightforward, though questionable, generalization \cite{ARRBRO}. The
general question of the existence of a Lehmann representation in QCD is
beyond the scope of the present paper, and we do not discuss the problem
further.
Furthermore, we generalize here the idea of the Perturbation Theory Integral
Representation (PTIR) \cite{NAKAN}, specifically for our case. The PTIR
represents a unique integral representation (IR) for an $n$-point Green's function
defined by an $n$-leg Feynman integral.
The generalized PTIR formula for the $n$-point function in a theory involving
fields with arbitrary spin is exactly the same as in the original scalar theory
considered in Ref.~\cite{NAKAN}, but the spectral function now acquires a
nontrivial tensor structure. Let us denote such a generalized weight function by
$\rho(\alpha,x_i)$. Then, it can be clearly decomposed into the sum
\begin{equation}
\rho(\alpha,x_i)_{\mbox{\scriptsize scalar theory}}\rightarrow \sum_j
\rho_j(\alpha,x_i){\cal{P}}_j ,
\end{equation}
where $\alpha,x_i$ represent the set of spectral variables, and $j$ runs over
all possible independent combinations of Lorentz tensors and Dirac matrices
${\cal{P}}_j$. The function $\rho_j(\alpha,x_i)$ just represents the PTIR weight
function of the $j$-th form factor (the scalar function by definition), since
it can obviously be written as a suitable scalar Feynman integral. Leaving
aside the question of (renormalization) scheme dependence, we refer the reader
to the textbook by Nakanishi \cite{NAKAN} for a detailed derivation of the
PTIR. The simplest examples of such ``generalized'' integral representations
are the Lehmann representations for the spin-half~(\ref{srforquark}) and
spin-one~(\ref{srforgluon}) propagators.
Let us now apply our idea to the pseudoscalar bound-state vertex function.
Keeping in mind that the singularity structure (given by the denominators)
of the r.h.s.\ of the BSE is the same as in the scalar models studied in
Refs.~\cite{KUSIWI,SAUADA2003}, the appropriate IR for the pseudoscalar
bound-state vertex function $\Gamma_A(q,P)$ should read
\begin{equation} \label{repr}
\Gamma_A(q,P)=\int_{0}^{\infty} d\omega \int_{-1}^{1}dz
\frac{\rho_A^{[N]}(\omega,z)}
{\left[F(\omega,z;P,q)\right]^N}\, ,
\end{equation}
where we have introduced a useful abbreviation for the denominator of the
IR~(\ref{repr}), viz.\
\begin{equation} \label{efko}
F(\omega,z;P,q)=\omega-(q^2+q.Pz+P^2/4)-i\epsilon \, ,
\end{equation}
with $N$ a free integer parameter.
Substituting the IRs~(\ref{repr}), (\ref{srforgluon}), (\ref{srforquark}) into
the r.h.s.\ of the BSE~(\ref{wakantanka}), one can analytically integrate over
the loop momenta. Assuming the uniqueness theorem \cite{NAKAN}, we then
arrive at the same IR~(\ref{repr}) on the l.h.s.\ of the
BSE~(\ref{wakantanka}). The derivation is given in Appendix A for the cases
$N=1,2$.
In other words, we have converted the momentum BSE (with a singular kernel)
into a homogeneous two-dimensional integral equation for the real weight
function $\rho_A^{[N]}(\omega,z)$, i.e.,
\begin{equation} \label{madrid}
\rho^{[N]}_A(\tilde{\omega},\tilde{z})=
\int_{0}^{\infty} d\omega \int_{-1}^{1}dz
V^{[N]}(\tilde{\omega}, \tilde{z};\omega,z)\rho^{[N]}_A(\omega,z) ,
\end{equation}
where the kernel $V^{[N]}(\tilde{\omega}, \tilde{z};\omega,z)$ is a regular
multivariable function.
The kernel $V^{[N]}$ also automatically determines the domain $\Omega$ on
which the function $\rho^{[N]}_A(\omega,z)$ is nontrivial. This domain is
always smaller than the infinite strip $[0,\infty)\times[-1,1]$, as is explicitly
assumed by the boundaries of the integrals over $\omega$ and $z$.
For instance, with the simplest kernel parametrized by a free gluon
propagator and constituent quarks of mass $m$, we get for the flavor-singlet
meson $\rho^{[N]}_A(\omega,z)\neq 0$ only if $\omega>m^2$.
In our approach, solving the momentum-space BSE in Minkowski space is
equivalent to finding a real solution of the real integral
equation~(\ref{madrid}). No special choice of frame is required. If one
needs the resulting vertex function, it can be obtained by numerical
integration over $\rho^{[N]}_A$ in an arbitrary reference frame.
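To illustrate this last step, the following Python sketch (purely
illustrative, not the code used for the present calculations; the grids,
quadrature weights, and discretized weight function are hypothetical
placeholders) evaluates $\Gamma_A(q,P)$ by direct quadrature of
Eq.~(\ref{repr}) with $N=2$:
\begin{verbatim}
import numpy as np

def gamma_A(q2, qP, P2, omega, z, w_omega, w_z, rho, eps=1e-8):
    # Quadrature of Eq. (repr) with N = 2:
    #   Gamma_A(q,P) = int domega dz rho(omega,z) / F(omega,z;P,q)^2,
    # with F = omega - (q^2 + q.P z + P^2/4) - i eps, cf. Eq. (efko).
    F = omega[:, None] - (q2 + qP * z[None, :] + P2 / 4.0) - 1j * eps
    integrand = rho / F**2
    # tensor-product quadrature over the (omega, z) grid
    return np.einsum('i,j,ij->', w_omega, w_z, integrand)
\end{verbatim}
Note that only the invariants $q^2$, $q\cdot P$, and $P^2$ enter, which is
precisely why no boosting of the solution is needed.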
\section{ Numerical Results}
In this section we discuss the numerical solution of the BSE with various
interaction kernels. For that purpose, we shall vary the coupling strength
as well as the effective gluon mass $m_g$. We are mainly concerned with the
range of binding energies that corresponds to heavy quarkonia, systems
which we shall study in future work. Moreover, we take a discrete set of
values for the mass $m_g$, such that it runs from zero to the value of the
constituent quark mass. These values are expected to be relevant for the case
of a true gluon propagator (when $m_g$ is replaced by the continuous spectral
variable $\omega$ (\ref{srforgluon})). Thus, in each case, the corresponding
gluon density is $\rho_g(c)=N_g\delta(c-m^2_g)$, which specifies the kernel
of the BSE to be (in the Landau gauge)
\begin{equation} \label{gluon2}
V(q-p)=g^2
\frac{-g_{\mu\nu}+\frac{(q-p)_{\mu}(q-p)_{\nu}}{(q-p)^2}}
{(q-p)^2-m_g^2+i\epsilon}
\gamma^{\nu}\otimes\gamma^{\mu}
\end{equation}
where the prefactor (including the trace of the color matrices) is simply
absorbed in the coupling constant. For our actual calculation, we use the bare
constituent propagator $S_i(p_i)$ with heavy quark mass $M\equiv m$ (see Appendix A
for this approximation).
Firstly, we follow the standard procedure: after fixing the bound-state mass
($\sqrt{P^2}$), we look for a solution by iterating the BSE for a spectral
function with fixed coupling constant $\alpha=g^2/(4\pi)$.
Very similarly to the scalar case \cite{SAUADA2003}, the choice $N=2$ for the
power of $F$ in the IR of the bound-state vertex function is the preferred one.
This choice is a reasonable compromise between limiting numerical errors on
the one hand and avoiding the computational obstacles for high $N$ on the
other. Here we note that using $N=1$ is rather unsatisfactory
(compared with the massive Wick-Cutkosky model), since then we do not find any
stable solution for a wide class of input parameters $g$, $m_g$. In contrast,
using the value $N=2$ we obtain stable results for all possible interaction
kernels considered here. This includes the cases with vanishing $m_g$, which
means that the numerical problems originally present in the scalar models
\cite{SAUADA2003} are fully overcome here. The details of our numerical
treatment are given in Appendix B.
As is more usual in the nonrelativistic case, we fix the coupling constant
$\alpha=g^2/(4\pi)$ and then look for the bound-state mass spectrum. We find
the same results in either case, whether $P$ or $\alpha$ is fixed first,
noting however that in the latter case the whole integration in the kernel
$K$ needs to be carried out in each iteration step, which makes the problem
more computer-time consuming.
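Schematically, the discretized Eq.~(\ref{madrid}) becomes a matrix
eigenvalue problem: for a fixed $P^2$, the coupling $\alpha$ is adjusted
until the dominant eigenvalue of the discretized kernel equals one. The
following Python sketch (again purely illustrative; the precomputed kernel
matrix and quadrature weights are hypothetical inputs, not the actual code
behind Tables~1 and~2) shows the underlying power iteration:
\begin{verbatim}
import numpy as np

def dominant_eigenpair(V, w, tol=1e-10, itmax=1000):
    # Power iteration for the map (K rho)[a] = sum_b V[a,b] * w[b] * rho[b],
    # the quadrature-discretized r.h.s. of Eq. (madrid). A solution of
    # Eq. (madrid) requires the dominant eigenvalue lam to equal 1; since
    # the ladder kernel is linear in alpha, alpha can be rescaled by 1/lam.
    K = V * w[None, :]
    rho = np.ones(K.shape[0])
    lam_old = 0.0
    for _ in range(itmax):
        new = K @ rho
        lam = np.linalg.norm(new)
        rho = new / lam
        if abs(lam - lam_old) < tol:
            break
        lam_old = lam
    return lam, rho
\end{verbatim}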
\begin{figure}
\centerline{ \mbox{\psfig{figure=soub3.ps,height=14.0truecm,
width=14.0truecm,angle=0}} }
\caption[99]{\label{figBSE} The rescaled weight function
$\tau=\frac{\rho^{[2]}(\omega,z)}{\omega^2}$ for the following model parameters:
$\eta=0.95$, $m_g=0.001M$, $\alpha_s=0.666$; the small mass $m_g$
approximates the one-gluon-exchange interaction kernel. }
\end{figure}
The obtained solutions for varying $\alpha$ and mass $m_g$, with
a fixed fractional binding $\eta=\sqrt{P^2}/(2M)=0.95$, are given in Table~1.
If we fix the gluon mass at $m_g=0.5$ and vary the fractional binding $\eta$,
we obtain the spectrum of Table~2.
\begin{center}
\small{\begin{tabular}{|c|c|c|c|c|}
\hline \hline
$m_g/M$ & $10^{-3}$ & 0.01 & 0.1 & 0.5 \\
\hline
$\alpha$ & 0.666 & 0.669 & 0.745 & 1.029 \\
\hline \hline
\end{tabular}}
\end{center}
\begin{center}
TABLE 1. Coupling constant $\alpha_s=g^2/(4\pi )$ for
several choices of $ m_g/M$,
with given binding fraction $\eta=\sqrt{P^2}/(2M)=0.95$.
\end{center}
\begin{center}
\small{\begin{tabular}{|c|c|c|c|c|}
\hline \hline
$\eta: $ &0.8 & 0.9 & 0.95 & 0.99 \\
\hline
$\alpha$ &1.20 & 1.12 & 1.03 & 0.816 \\
\hline \hline
\end{tabular}}
\end{center}
\begin{center}
TABLE 2. Coupling constant $\alpha=g^2/(4\pi )$ as a
function of the binding fraction $\eta=\sqrt{P^2}/(2M)$, for
an exchanged massive gluon with $m_g=0.5M$.
\end{center}
For illustration, the rescaled weight function
$\tau=\rho^{[2]}(\omega,z)/\omega^2$ is displayed in Fig.~\ref{figBSE}.
\section{ Summary and Conclusions}
The main result of the present paper is the development of a technical
framework to solve the bound-state BSE in Minkowski
space. In order to obtain the spectrum, no preferred reference frame is
needed, and the wave function can be obtained in an arbitrary frame
--- without numerical boosting --- by a simple integration of the weight
function.
The treatment is based on the use of an IR for the
Green's functions of a given theory, including the bound-state vertices
themselves. The method has been explained and checked numerically on
samples of pseudoscalar fermion-antifermion bound states. It was shown
that the momentum-space BSE can be converted into a real equation for a
real weight function $\rho$, which is easily solved numerically.
The main motivation of the author was to develop a practical tool respecting the
self-consistency of DSEs and BSEs. Generalizing this study to other mesons,
such as vectors and scalars, and considering more general flavor or isospin
structures, with the simultaneous improvement of the approximations
(correctly dressed gluon propagator, dressed vertices, etc.), will be an
essential step towards a fully Lorentz-covariant description of a plethora of
transitions and form factors in the time-like four-momentum region.
\
\
{\Large{ \bf Acknowledgments}}
\
I would like to thank George Rupp for his careful reading of the manuscript.
| {'timestamp': '2008-02-20T23:27:59', 'yymm': '0802', 'arxiv_id': '0802.2955', 'language': 'en', 'url': 'https://arxiv.org/abs/0802.2955'} |
\section{Introduction}
Wireless sensor networks (WSNs) have attracted attention from wireless network research communities for more than a decade \cite{Akyildiz:2002:WSN:Survey}. Their abilities to collect environmental data and to send those data effectively and wirelessly back to central processing nodes have been identified by research work such as \cite{senvm}. Recently, with the emerging Internet of Things (IoT) \cite{stankovic_iot}, researchers foresee its potential to bring a new generation of the Internet where things are connected to engender unprecedented intelligence that facilitates people's daily lives, enterprises' tasks, and city-wide missions. WSNs are an integral part of the IoT because they are the sources of sensed data to be processed and analyzed by computing clouds. Therefore, WSNs in the IoT are expected to handle operations with more density, diversity, and tighter integration.
WSNs often demand that sensed data be incorporated with timestamps so that the systems can fuse, distinguish, and sequence the data consistently. Therefore, several protocols have been proposed to synchronize a global time in WSNs \cite{Maroti:2004:FTS:1031495.1031501,5211944,Apicharttrisorn:2010:EGT:1900724.1901046}. However, some systems do not require a global notion of time; they demand sensor nodes to sample data at the same time to take sequential snapshots of an environment. Such systems include \cite{Werner-Allen:2005:FSN:1098918.1098934}, which is the first protocol to achieve this task of ``synchronicity''. Desynchronization is an \emph{inverse} of synchronicity because it requires nodes \emph{not} to work at the same time; hence, desynchronization can provide nodes with collision-free and even equitable access to a shared resource. A concrete example is a system using a Time Division Multiple Access or TDMA protocol, in which nodes utilize a shared medium at different time slots to avoid collision. In addition, desynchronization can schedule duty cycles of sensor nodes; in other words, nodes covering the same sensing area take turns waking up to sense the environment while others are scheduled to sleep to save the limited energy. Other potential applications of desynchronization include techniques to increase a sampling rate in multiple analog-to-digital converters, to schedule resources in multi-core processors, and to control traffic at intersections \cite{4274893}.
In this paper, we propose a stable desynchronization algorithm for multi-hop wireless sensor networks called Multi-hop Desynchronization With an ARtificial Force field or M-DWARF. We use TDMA to validate our algorithm and evaluate its performance. M-DWARF uses the basic concepts of artificial force fields in DWARF \cite{Choochaisri:2012:DAF:2185376.2185378}. However, to support multi-hop networks and to avoid their hidden terminal problems, M-DWARF adds two mechanisms called \textit{Relative Time Relaying} and \textit{Force Absorption}. With these features added, M-DWARF is able to desynchronize multi-hop networks without collision from hidden terminals while maintaining maximum channel utilization. We evaluate M-DWARF's functionality on TelosB motes \cite{telosb} and its performance on TOSSIM \cite{Levis:2003:TAS:958491.958506}, a TinyOS simulator. We compare M-DWARF with Extended Desynchronization for Multi-hop Networks or EXTENDED-DESYNC (also referred to as EXT-DESYNC) \cite{MK09DESYNC} and Lightweight coloring and desynchronization for networks or LIGHTWEIGHT \cite{5062165} on several topologies of multi-hop wireless networks. According to the simulation results, M-DWARF is the only desynchronization algorithm that has all three properties: fast convergence, high stability, and maintained fairness. Moreover, in \cite{Choochaisri:2012:DAF:2185376.2185378}, we prove that desynchronization using artificial force fields is a convex function; that is, it converges to a global minimum without local ones. In addition, our stability analysis (in a supplementary document) proves that once DWARF or M-DWARF systems reach a state of desynchrony, they become stable. In other words, DWARF or M-DWARF provides a static equilibrium of desynchronized force fields at steady states. Once nodes in the systems deviate from a state of desynchrony, they will attempt to return to the balance of the forces immediately. Our stability analysis not only suggests the stability of our desynchronization algorithms but also proves that the systems will eventually converge to an equilibrium.
In the next section, we briefly explain how our proposed desynchronization algorithm, M-DWARF, works and describe its contributions. In Section \ref{sec:related_work}, we survey related literature on desynchronization in temporal and spatial domains, as well as TDMA scheduling algorithms. Section \ref{sec:desync_algo} recalls the basic concepts of DWARF and then explains the M-DWARF algorithm in detail. Finally, Appendix \ref{sec:psuedocode_m_dwarf} shows the pseudo-code of M-DWARF.
\section{Contributions}
\label{sec:contribution}
In order to understand the contributions of the M-DWARF algorithm, it is important to first understand how it works.
M-DWARF extends DWARF with two mechanisms, named relative time relaying and force absorption, to support multi-hop topologies. The algorithmic steps of M-DWARF can be enumerated as follows.
\begin{enumerate}
\item Nodes, which are initially not desynchronized, set a timer to fire in $T$ time units and listen to all one-hop neighbors.
\item Upon receiving a firing message from its one-hop neighbor, the node marks the current time to be the relative phase reference. Then, it reads relative phases of its two-hop neighbors which are included within the firing message. After that, it calculates their two-hop neighbors' phases by using the relative phase reference as the offset. The details are explained in Section \ref{sec:relative}.
\item When the timer, $T$, expires, a node broadcasts a firing message containing relative phases of its one-hop neighbors. Then, it calculates a new time phase to move on the phase circle, which is based on the summation of artificial forces from all phase neighbors within two hops where some forces are absorbed as explained in Section \ref{sec:absorption}. Then, the node sets a new timer according to the new calculated phase.
\item Similar to DWARF, a node adjusts its phase by using Eq. \ref{eq:newphase} with $K$ calculated from Eq. \ref{eq:arbitrary_T}.
\end{enumerate}
All nodes in the artificial force field (in the period circle) iteratively run the same algorithm until the force is balanced.
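As a minimal illustration of these steps (hypothetical method names and
simplified data structures; not the full protocol):
\begin{verbatim}
# Illustrative per-node skeleton of the four steps above
# (hypothetical names; not the full protocol implementation).
import random

class MDwarfNode:
    def __init__(self, period_T):
        self.T = period_T
        self.phases = {}                        # perceived neighbor phases
        self.phi = random.uniform(0, period_T)  # step 1: random initial phase

    def on_firing(self, sender_id, relayed_phases):
        # step 2: the sender's firing instant becomes the relative
        # phase reference (simplified bookkeeping)
        self.phases[sender_id] = 0.0
        for nid, rel in relayed_phases.items(): # two-hop phases as offsets
            self.phases[nid] = rel % self.T

    def on_timer(self, total_force, K):
        # steps 3-4: after firing, move on the phase circle
        # (Eq. newphase, Section 4)
        self.phi = (self.phi + K * total_force) % self.T
\end{verbatim}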
The pseudo-code of this algorithm is shown in Appendix \ref{sec:psuedocode_m_dwarf}. The contributions of our proposed desynchronization algorithms are listed as follows.
\paragraph{Autonomous operations} M-DWARF works without any master or root nodes, so it does not need to elect or select such nodes and also does not have a single point of failure in this sense. Moreover, nodes do not need knowledge about the network topology; they only have to know their one-hop and two-hop neighbors in order for the desynchronization to function correctly. In addition, M-DWARF is easy to deploy because it works independently without any deployment setup. Finally, M-DWARF adapts itself very well to dynamics such as nodes leaving or joining the network.
\paragraph{Determinism} M-DWARF does not use any random operation, so its protocol behavior is deterministic and predictable.
\paragraph{Throughput} Thanks to its high stability, at a state of desynchrony, a node's time slot firmly stays at the same position in the next iteration, causing no or minimal interference with adjacent time slots. As a result, M-DWARF requires less \emph{guard time} between slots, allowing nodes to fully utilize the resource or medium because of its stable desynchronization. Without stability, the beginning and the end of the frames are likely to collide or interfere with the adjacent ones because time slots of the next iteration do not strictly stay at the same position. Moreover, M-DWARF requires low overhead. In each time period \textit{T}, each node only broadcasts a desynchronization message containing one-hop and two-hop neighbor information; in other words, it does not need any two-way or three-way handshakes like many other protocols.
\paragraph{Delay} A node that starts to join the desynchronization network needs to broadcast a desynchronization message to declare itself to the network and occupy its own time slot. Therefore, it has to wait one time period to send the first data frame. By determining the slot size and transmission bitrate, the node can predict how many iterations it needs to finish transmitting one packet. Consequently, all the nodes on a network route can share information regarding the end-to-end delay of a packet traversal. Therefore, upper and lower limits of such a delay can be determined.
\paragraph{Interoperability} Because M-DWARF does not assume any radio hardware specifications or MAC-layer techniques, it is highly interoperable with open standards. It assumes only that all nodes can transmit and receive data at the same spectrum. Although WSNs are a target platform of this paper, we argue that our desynchronization and TDMA algorithms can be applied to generic wireless multi-hop networks \cite{Sgora:2015:TDMA} or wireless mesh networks \cite{Vijayalayan:2013:Scheduling}.
\paragraph{Complexity} M-DWARF has low complexity for the following three reasons. First, it does not require time synchronization between nodes. Second, there are no handshakes; only one-way broadcasting is necessary. Third, its computational complexity depends only on the number of one-hop and two-hop neighbors, instead of the entire network size.
\paragraph{Energy Efficiency} Because of M-DWARF's high stability, data frames of adjacent time slots are less likely to collide or interfere with each other. Therefore, it has lower probability for packet retransmission, which is a factor for energy inefficiency in networks.
\paragraph{Channel Utilization Fairness} M-DWARF can provide equal access for all the neighbor nodes. The artificial forces are balanced between neighbor nodes, so all the nodes converge to occupy equal slot sizes.
\section{Related Work}
\label{sec:related_work}
Our related work can be mainly divided into three categories: 1) desynchronization on a temporal domain in wireless networks, 2) desynchronization on a spatial domain in robotics, and 3) TDMA scheduling algorithms. We summarize the properties of related work in the first category in Table \ref{tab:compared}.
\subsection{Desynchronization on a Temporal Domain in Wireless Networks}
\label{sec:timedesync}
To the best of our knowledge, Self-organizing desynchronization or DESYNC \cite{4379660} is the first work to introduce the desynchronization problem. In DESYNC, a node simply attempts to stay in the middle phase position between its previous and next phase neighbors. By repeating this simple algorithm, all nodes will eventually and evenly be spread out on a temporal ring. However, the error from one phase neighbor is also propagated to the other phase neighbors and is indefinitely circulated inside the network. As a consequence, DESYNC's error is quite high even after convergence. C. M. Lien et al. propose Anchored desynchronization or ANCHORED \cite{anchored}, which uses the same method as DESYNC but requires one anchored node to fix the phase of its oscillator. However, because ANCHORED uses only the phase information of the phase neighbors, it suffers from a desynchronization error similar to DESYNC's. In contrast, our work relies on all received neighbors' information and is therefore more robust to the error from one phase neighbor. In \cite{4663417}, the authors describe how DESYNC works on multi-hop networks and explain an extension for DESYNC by exchanging two-hop neighbor information.
Inversed Mirollo-Strogatz or INVERSE-MS \cite{4274893}, designed to converge faster than DESYNC, is an inverse algorithm of the synchronicity work by \cite{MS1990}.
At a steady state, INVERSE-MS maintains a dynamic equilibrium (\textit{i.e.}, nodes keep changing time positions while maintaining desynchronization). However, in INVERSE-MS, the time period is distorted whereas our algorithm does not distort the time period.
In Extended Desynchronization or EXT-DESYNC \cite{MK09DESYNC}, the authors propose a desynchronization algorithm that is similar to the extension proposed in \cite{4663417}. Each node sends its one-hop neighbors' relative time information to all of its one-hop neighbors.
Then, the one-hop neighbors relay such information to two-hop neighbors so that each node knows two-hop relative time information.
Consequently, each node can presume that there are two-hop neighbors appearing on the time circle.
Therefore, each node uses time information of both one-hop and two-hop neighbors to desynchronize with the same algorithm as in DESYNC. One mechanism in our multi-hop algorithm proposed in this paper is partly based on this notion.
M-DESYNC \cite{5062256} is a localized multi-hop desynchronization algorithm that works on a granularity of time slots. This protocol uses a graph coloring model for solving desynchronization. It starts by estimating the required number of time slots with a two-hop maximum degree or the maximum number of colors. This estimation allows nodes in M-DESYNC to immediately choose a predefined slot or color and helps M-DESYNC converge very fast. However, M-DESYNC requires that all nodes have a global notion of time in order to share a common perception of time slots. Furthermore, M-DESYNC claims that it works only on acyclic networks. On the contrary, our algorithm does not require a global notion of time and can work on both acyclic and cyclic networks.
A. Motskin et al. \cite{5062165} propose a simple, lightweight desynchronization algorithm, namely LIGHTWEIGHT, that is also based on a graph coloring model. Unlike M-DESYNC, the algorithm works on general graph networks and does not need the global time. To ensure that the selected time slot does not overlap with others', a node needs to listen to the shared medium for a full time period before claiming the slot. The listening mechanism can only avoid collision with one-hop neighbors but cannot avoid collision with two-hop neighbors (\textit{i.e.}, the hidden terminal problem). On the contrary, our algorithm works well on multi-hop networks; each node can effectively avoid collision with two-hop neighbors.
Furthermore, without a common notion of time, the starting time of each slot is quite random; as a result, several time gaps are too small to be used as time slots. This external fragmentation problem severely reduces the resource utilization of the system. Finally, to converge faster, their algorithm overestimates the number of required time slots. Hence, several large time gaps are also left unused and the resource utilization is significantly lowered. In our work, nodes gradually adapt their phases to be separated from each other as far as possible. Therefore, the external fragmentation problem is reduced and the resource utilization is maximized.
T. Pongpakdi et al. propose Orthodontics-inspired Desynchronization or DESYNC-ORT \cite{desyncort}. In their work, they use information from all one-hop neighbors and attempt to find nodes that are already in correct time positions and tie them together. This method is similar to the orthodontic method of tying together teeth that are already in correct positions.
The desynchronization errors of DESYNC-ORT are lower than those of DESYNC.
However, to calculate the correct positions, each node is required to know the total number of nodes in the system in advance. Additionally, the algorithm does not address multi-hop networks, where two-hop neighbors cannot be tied together with one-hop neighbors. In contrast, our algorithm does not require nodes to know the total number of nodes in advance but gradually and automatically adapts itself based on the current number of neighbors. Finally, our algorithm works on multi-hop networks.
Vehicular-network Desynchronization, abbreviated as V-DESYNC \cite{v-desync}, is proposed to desynchronize nodes in vehicular ad-hoc networks. Their work has a different objective; that is, the algorithm does not focus on fairness (\textit{i.e.}, nodes need not be equitably separated) because vehicular networks are highly dynamic. In our work, we focus on wireless sensor networks with static sensor nodes, and we attempt to provide fair resource utilization among sensor nodes.
Table \ref{tab:compared} summarizes the features of works in this category. Note that the overhead of the proposed algorithm depends on whether the algorithm works on the single-hop or multi-hop mode.
\begin{table*}[t]
\centering
\renewcommand{\arraystretch}{1.2}
\caption{Comparison of Desynchronization Protocols}
{
\begin{tabular}{|m{2cm}|m{1.5cm}|m{1.5cm}|m{1.2cm}|m{1.2cm}|m{1.5cm}|m{1.2cm}|m{1.5cm}|m{1cm}|}
\hline
\multirow{2}{*}{Algorithms} & \multicolumn{8}{c|}{Properties}\\
\cline{2-9}
& Period & Time sync & Fair Space & Multi- hop & Conver- gence & Error & Scalable & Over- head\\
\hline
\hline
DESYNC & Fixed & No & Yes & No & Moderate & High & Poor & Zero\\
\hline
ANCHORED & Fixed & No & Yes & No & Moderate & High & Poor & Zero\\
\hline
INVERSE-MS & Distorted & No & Yes & No & Fast & Low & Good & Zero\\
\hline
EXT-DESYNC & Fixed & No & Yes & Yes & Moderate & High & Poor & High\\
\hline
M-DESYNC & Fixed & Required & No & Yes & Fast & High & Good & Low\\
\hline
LIGHT- WEIGHT & Fixed & No & No & Yes & Fast & High & Good & Zero\\
\hline
DESYNC-ORT & Fixed & No & Yes & No & Moderate & Low & Good & Zero\\
\hline
V-DESYNC & Fixed & No & No & No & No & High & Good & Zero\\
\hline
DWARF & Fixed & No & Yes & No & Moderate & Low & Good & Very Low\\
\hline
M-DWARF (Proposed) & Fixed & No & Yes & Yes & Moderate & Low & Good & Low\\
\hline
\end{tabular}
}
\label{tab:compared}
\end{table*}
\subsection{Desynchronization on a Spatial Domain in Robotics}
\label{sec:spacedesync}
\begin{figure}
\centering
\subfloat[Robotic Close Ring]{
\label{fig:subfig:robotic_closed_ring}
\includegraphics[width=1.2in]{figure/robotic-closed-ring}
}
\hspace{1.5cm}
\subfloat[Robotic Perfect Close Ring]{
\label{fig:subfig:robotic_closed_ring_perfect}
\includegraphics[width=1.2in]{figure/robotic-closed-ring-perfect}
}
\caption{Robotic pattern formation on a closed ring. (\ref{fig:subfig:robotic_closed_ring}) Robots are randomly placed on a closed ring. (\ref{fig:subfig:robotic_closed_ring_perfect}) In the perfect configuration, robots are equitably separated from each other.}
\label{fig:robotic_ring}
\end{figure}
In robotic pattern formation, multiple robots distributedly group and align themselves in geometric patterns such as circles, rectangles, and triangles. Robotic pattern formation can be abstracted as desynchronization on a spatial domain: robots attempt to separate from each other as far as possible to form such patterns; in other words, robots desynchronize themselves spatially to avoid colliding with each other in the spatial domain.
The works \cite{suzuki-96,suzuki-99} investigate several robotic pattern formations. In particular, the pattern formation that resembles desynchronization on the temporal domain is the formation on a closed ring.
Figure \ref{fig:robotic_ring} illustrates a robotic formation on a closed ring. In Figure \ref{fig:subfig:robotic_closed_ring}, robots are initially placed at random positions on the closed ring. The perfect configuration of the formation is illustrated in Figure \ref{fig:subfig:robotic_closed_ring_perfect}; that is, robots are equitably separated on the ring.
Other papers such as \cite{defago04,cohen-08,flocchini-08} propose similar algorithms for robotic formation on a closed ring, assuming that robots have a limited visibility range. Each robot attempts to adjust its position to the middle between the two nearest robots on its left and right sides (Figure \ref{fig:closedring-desync}). In their papers, they prove that this simple algorithm eventually drives the robotic formation to the perfect configuration (Figure \ref{fig:closedring-desync-perfect}).
\begin{figure}
\centering
\subfloat[A Robotic Move]{
\label{fig:closedring-desync}
\includegraphics[width=1.2in]{figure/robotic-closed-ring-desync}
}
\hspace{1.5cm}
\subfloat[Convergence to Perfect Configuration]{
\label{fig:closedring-desync-perfect}
\includegraphics[width=1.2in]{figure/robotic-closed-ring-desync-perfect}
}
\caption{Moving to the midpoint algorithm. (a) Each robot moves to the midpoint between its two nearest visible neighbors. (b) The algorithm converges to the perfect configuration.}
\label{fig:robotic-closed-ring-desync}
\end{figure}
In \cite{4141997}, heterogeneous robots are grouped in a distributed manner into teams that are equally spread out to cover the monitored area. Each robot has no global knowledge of others' absolute positions but can detect relative positions of the others with respect to itself as well as the type of the others.
To form a circle, an artificial force is used as an abstraction for velocity adaptation of a robot.
Robots of different types have attractive forces to each other, while robots of the same type have repulsive forces. As a result, a circle of heterogeneous robots is formed, with the robots spaced out on the circle (see Figure \ref{fig:circular_formation}). This work inspires our desynchronization algorithm.
\begin{figure}
\centering
\includegraphics[width=3.0in]{figure/robo_new}
\caption{Results of Robotic Circular Formation. Robots with two different types form the circle.}
\label{fig:circular_formation}
\end{figure}
\subsection{TDMA Scheduling Algorithms}
\label{sec:tdma}
Other works that are related to desynchronization protocols are distributed Time Division Multiple Access (TDMA) protocols. Distributed TDMA protocols are similar to M-DESYNC \cite{5062256}; their protocols work on a granularity of time slots. Similar to M-DESYNC, many of the distributed TDMA protocols such as TRAMA \cite{trama}, Parthasarathy \cite{Parthasarathy}, ED-TDMA \cite{edtdma}, and Herman \cite{herman} assume time is already slotted or all nodes are synchronized to achieve the same global clock.
In our work, we do not require time synchronization and do not assume already slotted time.
S. C. Ergen et al. \cite{Ergen:2010:TDMAAlgo} propose node-based and level-based TDMA scheduling algorithms for WSNs. Their techniques mainly derive from graph coloring algorithms in which the \textit{colors} must have been predefined. In contrast, our desynchronization algorithms never predefine slots but rather allow nodes to adjust their slots with those in the neighborhood on-the-fly.
K. S. Vijayalayan et al. \cite{Vijayalayan:2013:Scheduling} survey distributed scheduling techniques for wireless mesh networks, and A. Sgora et al. \cite{Sgora:2015:TDMA} provide an extensive survey of recent advances in TDMA algorithms in wireless multi-hop networks.
Both surveys give a comprehensive overview of the TDMA scheduling algorithms and techniques that are still being investigated and further developed by wireless network researchers worldwide.
\section{DWARF and M-DWARF Desynchronization Algorithms}
\label{sec:desync_algo}
In this section, we briefly introduce the concept of an \textit{artificial force field} and our previous work on desynchronization for single-hop networks \cite{Choochaisri:2012:DAF:2185376.2185378}, because these basic concepts are necessary for understanding the proposed algorithm for multi-hop networks.
\subsection{Desynchronization Framework, Artificial Force Field and DWARF Algorithms}
\label{AFF_DWARF}
\subsubsection{Desynchronization Framework}
The desynchronization framework is depicted as a time circle in Figure \ref{fig:time_circle}.
The perimeter of a time circle represents a configurable time period $T$ of nodes' oscillators.
The time position or \textit{phase} of each node represents its turn to perform a task (\textit{e.g.}, accessing a shared resource, sampling data, or firing a message).
The system is desynchronized when all nodes are separated in the time circle. We define the terms used in the desynchronization context as follows.
\begin{figure}
\centering
\includegraphics[width=3.2in]{figure/time_circle_new}
\caption{Desynchronization framework: $\phi_1$ and $\phi_2$ are phases of node 1 and 2 respectively. While all the four nodes are phase neighbors to each other, node 2 and 4 are the previous and next phase neighbor of node 1 respectively. The left figure shows a desynchrony state that will converge to the perfect desynchrony state as in the right figure.}
\label{fig:time_circle}
\end{figure}
\begin{definition}[Phase]
A phase $\phi_i$ of node $i$ is the time position on the circle of a time period $T$, where $0 \leq \phi_i < T$ and $T \in \mathbb{R}^+$.
\end{definition}
\begin{definition}[Phase Neighbor]
Node $j$ is a phase neighbor of node $i$ if node $i$ perceives the existence of node $j$ through reception of $j$'s messages at the phase $\phi_i + \phi_{i,j}$, where $\phi_{i,j}$ is the phase difference between node $j$ and node $i$,
\begin{equation}
\phi_{i,j} = \left\{
\begin{array}{l l}
\phi_j - \phi_i & \quad \mbox{if $\phi_j \geq \phi_i$,}\\
T - (\phi_i - \phi_j) & \quad \mbox{if $\phi_j < \phi_i$.}\\ \end{array} \right.
\end{equation}
\end{definition}
\begin{definition}[Next Phase Neighbor]
Node $j$ is the next phase neighbor of node $i$ if $\phi_{i,j} = \underset{k \in S}{\min}\{\phi_{i,k}\}$, where $S$ is a set of node $i$'s neighbors.
\end{definition}
\begin{definition}[Previous Phase Neighbor]
Node $j$ is the previous phase neighbor of node $i$ if \\$\phi_{i,j} = \underset{k \in S}{\max}\{\phi_{i,k}\}$, where $S$ is a set of node $i$'s neighbors.
\end{definition}
\begin{definition}[Desynchrony State]
The system is in a desynchrony state if $\phi_i \neq \phi_j$ for all $i, j \in V$ and $i \neq j$, where $V$ is a set of nodes in a network that cannot share the same phase.
\end{definition}
\begin{definition}[Perfect Desynchrony State]
The system is in the perfect desynchrony state if it is in the desynchrony state and $\phi_{i,j} = T/N$ for all $i \in V$, $j$ is $i$'s previous phase neighbor, and $N$ is the number of nodes in a network that cannot share the same phase.
\end{definition}
We note that two nodes can share the same phase if they are not within the two-hop communication range of each other.
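These definitions translate directly into simple helper routines. The
following sketch assumes a \texttt{phases} map from node ID to perceived
phase; the function names are ours, not part of any protocol specification.
\begin{verbatim}
# Sketch of the definitions above. phase_diff implements phi_{i,j};
# the next / previous phase neighbors minimize / maximize it.
def phase_diff(phi_i, phi_j, T):
    # equals phi_j - phi_i if phi_j >= phi_i, else T - (phi_i - phi_j)
    return (phi_j - phi_i) % T

def next_phase_neighbor(i, phases, T):
    return min((j for j in phases if j != i),
               key=lambda j: phase_diff(phases[i], phases[j], T))

def previous_phase_neighbor(i, phases, T):
    return max((j for j in phases if j != i),
               key=lambda j: phase_diff(phases[i], phases[j], T))
\end{verbatim}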
\subsubsection{Artificial Force Field}
\label{AFF}
An artificial force field is an analogy to the circle of a time period where nodes have repulsive forces to each other.
Nodes are in the same force field if they can communicate with each other or share the same medium.
If node $i$ and node $j$ are on the same force field, they have repulsive forces to push one another away.
A closer pair of nodes has a higher magnitude of force than a farther pair does.
The time interval between two nodes is derived from the phase difference between them.
If two nodes have a small phase difference, they have a high magnitude of force and vice versa.
In other words, a repulsive force is an inverse of a phase difference between two nodes:
\begin{equation}
f_{i,j} = - \frac{1}{\Delta \phi_{i,j} / T}, \quad \Delta \phi_{i,j} \in (-\frac{T}{2}, \frac{T}{2}),
\label{eq:force}
\end{equation}
where $f_{i,j}$ is the repulsive force from node $j$ to node $i$ on a time period $T$ and $\Delta \phi_{i,j}$ is the phase difference between node $i$ and $j$.
We note that $\Delta \phi_{i,j}$ is not equal to 0 because if two nodes fire at the same time, their firings collide and neither node records the other's firing. Additionally, at $T/2$ or $-T/2$, a node does not repel an opposite node because the forces are balanced.
A repulsive force can be positive (clockwise repulsion) or negative (counterclockwise repulsion).
A positive force is created by a node on the left half of the circle relative to the node being considered whereas a negative force is created by a node on the right half.
Figure \ref{fig:force_field} represents a field of repulsive forces on node 1.
\begin{figure}
\centering
\includegraphics[width=1.5in]{figure/forcefield}
\caption{Artificial Force Field. Arrow lines represent repulsive forces from node 2, 3, and 4 to node 1. A shorter and thicker line is a stronger force. A force from node 4 is a positive force and two forces from node 2 and 3 are negative forces.}
\label{fig:force_field}
\end{figure}
Each node in the force field moves to a new time position or phase proportional to the total received force.
Given $n$ nodes in a force field, the total force on a node $i$ is the following:
\begin{equation}
\mathcal{F}_i = \sum_{\substack{j=1\\ j \neq i}}^{n}{f_{i,j}}.
\label{eq:fsum}
\end{equation}
Eventually, nodes reach an equilibrium where the total force of the system is close to zero and each pair of phase neighbor nodes has the same time interval.
This equilibrium state also indicates the perfect desynchrony state because all nodes are equally spaced on the time circle.
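A direct sketch of Eq. \ref{eq:force} and Eq. \ref{eq:fsum} (with assumed
function names) is the following; the phase difference is first wrapped into
$(-T/2, T/2]$, and colliding or exactly opposite nodes exert no force, per
the remarks above.
\begin{verbatim}
# Sketch of Eqs. (force) and (fsum).
def repulsive_force(phi_i, phi_j, T):
    delta = (phi_j - phi_i + T / 2) % T - T / 2  # wrap into (-T/2, T/2]
    if delta == 0 or abs(delta) == T / 2:
        return 0.0                               # collision / balanced
    return -1.0 / (delta / T)                    # Eq. (force)

def total_force(phi_i, neighbor_phases, T):
    return sum(repulsive_force(phi_i, p, T) for p in neighbor_phases)
\end{verbatim}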
\subsubsection{DWARF, the Single-Hop Desynchronization Algorithm}
\label{DWARF}
We assume that, initially, nodes are not desynchronized and each node sets a timer to fire in $T$ time units.
After setting the timer, each node listens to all its neighbors until the timer expires.
When receiving a firing message from its neighbor, the (positive or negative) repulsive force from that neighbor is calculated based on the phase difference.
When the timer expires, a node broadcasts a firing message to neighbors.
Then, the node calculates a new time phase to move on the circle based on the summation of forces from all neighbors and sets a new timer according to the new time phase.
It is reasonable now to question how far a node should move or adjust its phase.
In our work, given the total received force $\mathcal{F}_i$, the node $i$ adjusts to a new time phase $\phi_i^{'}$,
\begin{equation}
\phi_i^{'} = (\phi_i + K\mathcal{F}_i) \mod T,
\label{eq:newphase}
\end{equation}
where $\phi_i$ is the current phase of the node $i$.
Undoubtedly, the proper value of the coefficient $K$ leads to the proper new phase.
The value of $K$ is similar to a step size which is used in artificial intelligence techniques.
Therefore, if the value of $K$ is too small, the system takes much time to converge.
On the other hand, if the value of $K$ is too large, the system may overshoot the optimal value and fail to converge.
We observe that, given the same time period, fewer nodes in the system result in a bigger phase difference between two phase neighbors. To be desynchronized, nodes in sparse networks must make larger adjustments to their time phases than nodes in dense networks.
Therefore, the same total received force should have a greater impact on a node in sparse networks than on a node in dense networks.
To reflect this observation, the coefficient $K$ is inversely proportional to a power of the number of nodes $n$,
\begin{equation}
K = c_1 \times n^{-c_2}, \text{ where } c_1, c_2 \geq 0.
\end{equation}
Therefore, we have conducted an experiment to find the proper values of $c_1$ and $c_2$.
We set the time period $T$ to 1000 and vary the number of nodes.
For each specific number of nodes, we first run simulations to see the trend of the $K$ values that lead to small errors.
Then, we select a range of good $K$ values and simulate 100 times to obtain the average desynchronization error for each $K$ value.
In each simulation, we randomly set an initial phase of each node between 0 and $T$ (period value).
Finally, we select the $K$ value that results in the lowest error.
After getting the proper $K$ value for each number of nodes, we plot the relation between $K$ and the number of nodes (Figure \ref{fig:relation_k_n}) and use a mathematical tool to calculate the power regression. The obtained relation function between $K$ and $n$ (the trendline in Figure \ref{fig:relation_k_n}) consists of $c_1$ and $c_2$ values as follows:
\begin{equation}
K = 38.597 \times n^{-1.874}. \nonumber
\end{equation}
\begin{figure}
\centering
\includegraphics[width=2.2in]{figure/k-trendline}
\caption{Relation of the coefficient $K$ with the number of nodes $n$}
\label{fig:relation_k_n}
\end{figure}
However, this $K$ value is derived by setting $T$ equal to 1000.
Therefore, for an arbitrary value of $T$,
\begin{equation}
\label{eq:arbitrary_T}
K = 38.597 \times n^{-1.874} \times \frac{T}{1000}.
\end{equation}
The proof of Eq. \ref{eq:arbitrary_T} can be found in \cite{Choochaisri:2012:DAF:2185376.2185378}. Moreover, in \cite{Choochaisri:2012:DAF:2185376.2185378}, we also prove that the force function of DWARF has the convexity property; that is, it has one global minimum and no local minima. Additionally, in this paper, we provide a stability analysis of DWARF and M-DWARF in the Supplementary Material.
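Putting Eq. \ref{eq:newphase} and Eq. \ref{eq:arbitrary_T} together, the
phase-update step of DWARF can be sketched as follows (assumed function
names; the constants are the fitted values above).
\begin{verbatim}
# Sketch of the DWARF phase update, Eqs. (newphase) and (arbitrary_T).
def step_size_K(n, T):
    # fitted constants from the power regression above
    return 38.597 * n ** -1.874 * (T / 1000.0)

def adjust_phase(phi_i, force_sum, n, T):
    return (phi_i + step_size_K(n, T) * force_sum) % T
\end{verbatim}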
\subsection{M-DWARF, the Multi-hop Desynchronization Algorithm (Proposed)}
\label{sec:m_dwarf}
In this section, we extend the artificial force field concept to desynchronization in multi-hop networks. We begin with applying DWARF directly to a simple multi-hop topology and find out how the algorithm fails on such a topology in Section \ref{sec:hidden-terminal}. Then, we propose two simple yet effective resolutions, relative time relaying and force absorption, to extend DWARF for multi-hop networks in Section \ref{sec:relative} and \ref{sec:absorption} respectively. Additionally, we provide pseudo-code of M-DWARF in Appendix \ref{sec:psuedocode_m_dwarf}.
\subsubsection{The Hidden Terminal Problem}
\label{sec:hidden-terminal}
To see how DWARF works on a multi-hop network, we set up a simple 3-node chain topology as illustrated in Figure \ref{fig:3nodes-chain-hidden}. In this topology, node 1 can receive firing messages from node 2 but cannot receive them from node 3; conversely, node 3 can receive firing messages from node 2 but cannot receive them from node 1. However, node 2 can receive firing messages from both node 1 and node 3, which are not aware of each other's transmissions. This simple scenario causes messages to collide at node 2.
\begin{figure}
\centering
\includegraphics[width=2.5in]{figure/3nodes-chain-hidden}
\caption{The hidden terminal problem.}
\label{fig:3nodes-chain-hidden}
\end{figure}
\begin{figure}
\centering
\subfloat[]{
\includegraphics[width=2in]{figure/3nodes-chain-dwarf}
\label{fig:3nodes-chain-dwarf}
}
\hspace{1.0cm}
\subfloat[]{
\includegraphics[width=2in]{figure/3nodes-chain-expected}
\label{fig:3nodes-chain-expected}
}
\caption{(a) shows noisy phases due to message collision at node 2. (b) shows the expected result of the perfect desynchrony state.}
\end{figure}
We simulate DWARF by setting the time period to 1000 milliseconds with nodes starting up randomly.
The simulation result is shown in Figure \ref{fig:3nodes-chain-dwarf}. Node 2's and node 3's phases are plotted relative to node 1's phase, which is fixed at 0. The noisy vertical line is the wrapping-around phase of node 3. It shows that node 1 and node 3 fire messages approximately at the same phase, causing message collisions at node 2.
However, the expected result (\textit{i.e.}, the perfect desynchrony state) should be that the three nodes are equitably separated, because all nodes interfere with each other if they fire messages at the same phase. The expected result is shown in Figure \ref{fig:3nodes-chain-expected}, where the nodes are separated from each other by approximately 1000/3 milliseconds.
The problematic result is caused by the hidden terminal problem as demonstrated in Figure \ref{fig:3nodes-chain-hidden}; node 1 and node 3 are hidden from each other in this multi-hop topology. While node 3 is firing a message, node 1 senses the wireless channel and does not detect any signal because node 3's signal is too weak within node 1's range, and vice versa. Hence, node 1 and node 3 each perceive only two nodes in the network: itself and node 2. Therefore, node 1 and node 3 simultaneously attempt to adjust their phases to the opposite side of node 2 in their time circles, which is the same phase. As a result, messages of node 1 and node 3 recursively collide at node 2.
The hidden terminal problem affects not only the performance of DWARF but also that of DESYNC.
This is due to the fact that, in DESYNC, a node adjusts its phase based on firing messages from its perceived phase neighbors. Therefore, if a node is unaware of the presence of its two-hop neighbors, it cannot perceive their phases. In \cite{MK09DESYNC}, EXT-DESYNC, an extension of DESYNC, is proposed to solve the hidden terminal problem based on a relative time relaying mechanism. Based on a similar idea, we extend DWARF to support multi-hop topologies.
However, the relative time relaying mechanism alone does not lead DWARF to an optimal solution in some cases. Therefore, this paper also proposes a \textit{force absorption} mechanism for extending DWARF to support multi-hop networks more efficiently.
\subsubsection{Relative Time Relaying}
\label{sec:relative}
The first idea to solve the hidden terminal problem is straightforward. If a node does not know the firing times of its second-hop neighbors, its one-hop neighbors relay such information. Therefore, instead of firing only to notify others of its phase, each node includes its one-hop neighbors' firing times in its firing message.
However, due to our assumption that nodes' clocks are not synchronized, relying on second-hop neighbors' firing timestamps relayed by one-hop neighbors could lead to wrong phase adjustments. This problematic scenario is demonstrated in Figure \ref{fig:broadcast-problem}. Figure \ref{fig:broadcast-problem-topo} illustrates the firing message of node 2, which contains timestamps of its one-hop neighbors, and Figure \ref{fig:broadcast-problem-ring} displays the problem. The inner circle represents the local time of node 1 and the outer circle represents the local time of node 2. The figure indicates that the local reference times (at 0 milliseconds) of node 1 and node 2 are different. Therefore, if node 1 uses node 3's firing time relayed by node 2, which is 125 milliseconds, node 1 will misunderstand the exact time phase of node 3. The misunderstood phase of node 3 is depicted as a dashed circle.
\begin{figure}
\centering
\subfloat[]{
\label{fig:broadcast-problem-topo}
\includegraphics[width=1.8in]{figure/broadcast-problem-topo}
}
\hspace{2cm}
\subfloat[]{
\label{fig:broadcast-problem-ring}
\includegraphics[width=1.8in]{figure/broadcast-problem-ring}
}
\caption{A problem of relying on second-hop neighbors' firing timestamps from its one-hop neighbors. (a) shows node 2 firing a message. (b) displays node 1's misperception of phases.}
\label{fig:broadcast-problem}
\end{figure}
This problem can be simply solved by using relative phases instead of actual local timestamps. Each node fires a message that includes relative phases of its one-hop neighbors. A receiving node marks the firing phase of the firing node as a reference phase. Then, the receiving node perceives its second-hop neighbors' phases as relative phase offsets to the reference phase. Figure \ref{fig:broadcast-relative} shows how M-DWARF desynchronizes a three-node multi-hop chain network.
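A minimal sketch of this correction (with assumed names) is:
\begin{verbatim}
# Sketch of relative time relaying. On receiving a firing at local
# time t_now, the receiver marks that instant as the reference phase
# and places each relayed two-hop neighbor at an offset from it.
def perceive_two_hop_phases(t_now, relayed_offsets, T):
    # relayed_offsets: relative phases (in [0, T)) of the sender's
    # one-hop neighbors, measured from the sender's firing instant
    reference = t_now % T
    return {nid: (reference + off) % T
            for nid, off in relayed_offsets.items()}
\end{verbatim}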
\begin{figure*}
\centering
\subfloat[]{
\label{fig:broadcast-relative-topo}
\includegraphics[width=1.8in]{figure/broadcast-relative-topo}
}
\hfill
\subfloat[]{
\label{fig:broadcast-relative-ring}
\includegraphics[width=1.8in]{figure/broadcast-relative-ring_new}
}
\hfill
\subfloat[]{
\label{fig:broadcast-relative-perfect}
\includegraphics[width=1.8in]{figure/broadcast-relative-perfect_new}
}
\caption{M-DWARF solves the problem by using one-hop neighbors' relative phases. (a) shows node 2's firing message. (b) shows how node 1 marks the node 2's phase as a reference phase and uses it as an offset for calculating the node 3's phase. (c) Eventually, nodes are in the perfect desynchrony state.}
\label{fig:broadcast-relative}
\end{figure*}
\subsubsection{Force Absorption}
\label{sec:absorption}
As we mentioned earlier, DWARF with the relative time relaying mechanism does not handle some cases.
These are the cases in which at least two second-hop neighbors can share the same phase without interference. For example, in the 4-node chain network illustrated in Figure \ref{fig:4nodes-chain-topo}, node 2 and node 3 are physically more than two hops apart. Therefore, they can fire messages at the same time phase without causing message collisions, as shown in Figure \ref{fig:4nodes-chain-expected}.
However, with relative time relaying alone, node 0 perceives that node 2 and node 3 are at the same phase. Therefore, there are two forces from node 2 and node 3 repelling node 0 clockwise but only one force from node 1 repelling node 0 counter-clockwise. Consequently, node 0 cannot stay at the middle between node 1 and the group of nodes 2 and 3 (see Figure \ref{fig:4nodes-chain-dwarf}).
\begin{figure*}
\centerline{
\subfloat[]{\includegraphics[scale=0.3]{figure/4nodes-chain-topo
\label{fig:4nodes-chain-topo}}
\hfill
\subfloat[]{\includegraphics[scale=0.20]{figure/4nodes-chain-expected
\label{fig:4nodes-chain-expected}}
\hfill
\subfloat[]{\includegraphics[scale=0.20]{figure/4nodes-chain-dwarf
\label{fig:4nodes-chain-dwarf}}
}
\caption{The problem of the DWARF algorithm. (a) displays 4-node chain topology. (b) shows node 0's local view. (c) displays an imperfect desynchrony state.}
\label{fig:4nodes-chain}
\end{figure*}
Therefore, we propose a novel force absorption mechanism for multi-hop desynchronization based on the artificial force field. The objective of this mechanism is to absorb the excess force from two or more nodes that can fire at the same phase without interference.
The mechanism works as follows. A node receives a full repulsive force from the next/previous phase neighbor as in DWARF. However, a force from the second-next / second-previous phase neighbor is partially absorbed by the next / previous phase neighbor. The magnitude of the absorbed force depends on the phase interval between the next / previous and the second-next / second-previous phase neighbors. The closer the second-next / second-previous phase neighbor moves to the next / previous phase neighbor, the lower the magnitude of the absorbed force becomes. Eventually, when the second-next / second-previous phase neighbor moves to the same phase as the next / previous phase neighbor, the additional force from the second-next / second-previous phase neighbor is fully absorbed. Consequently, the magnitude of two forces repelling the considered node is approximately equal to only the magnitude of one force. This principle is applied recursively; that is, the force from the third-next / third-previous phase neighbors is absorbed by the second-next / second-previous phase neighbor, and the force from the fourth-next / fourth-previous phase neighbor is absorbed by the third-next/third-previous phase neighbor, and so forth.
Figure \ref{fig:4nodes-chain-dwarf-absorb} illustrates this mechanism. In Figure \ref{fig:4nodes-chain-dwarf-absorb-split}, the force from node 2 to node 0 is absorbed by node 3 (the absorbed force is displayed as a blurred line). Thus, only a small magnitude of force from node 2 remains on node 0. Eventually, in Figure \ref{fig:4nodes-chain-dwarf-absorb-perfect}, node 2 moves to the same phase as node 3 because they do not interfere with each other and the force from node 2 is fully absorbed. Consequently, the network can be in the perfect desynchrony state.
\begin{figure*}
\centerline{
\subfloat[]{\includegraphics[scale=0.2]{figure/4nodes-chain-dwarf-absorb-split
\label{fig:4nodes-chain-dwarf-absorb-split}}
\hfil
\subfloat[]{\includegraphics[scale=0.2]{figure/4nodes-chain-dwarf-absorb-perfect
\label{fig:4nodes-chain-dwarf-absorb-perfect}}
}
\caption{M-DWARF solves the problem with force absorption; the blurred line represents an absorbed force. (a) shows node 2's force being absorbed by node 3. (b) displays the perfect desynchrony state.}
\label{fig:4nodes-chain-dwarf-absorb}
\end{figure*}
Let $f_{i,j}$ be the full repulsive force from node $j$ to node $i$, $f_{i,j}^{'}$ be the absorbed force from node $j$ to node $i$, $T$ be the time period, and $\Delta \phi_{i,j}$ be the phase difference between nodes $i$ and $j$.
The force function for multi-hop networks is the following:
\begin{alignat}{2}
f_{i,j} &= \frac{1}{\Delta \phi_{i,j} / T}, \text{where }\Delta \phi_{i,j} \in (-\frac{T}{2}, \frac{T}{2}) \nonumber \\
f_{i,i + 1}^{'} &= f_{i,i + 1} \nonumber \\
f_{i, i - 1}^{'} &= f_{i,i - 1} \nonumber \\
f_{i,j}^{'} &= f_{i,x} - f_{i,j},
\label{eq:force-absorb}
\end{alignat}
where $j \notin \left\{i -1, i + 1\right\}$ and $x = (j - \frac{\Delta \phi_{i,j}}{|\Delta \phi_{i,j}|}) \mod n$.
For $f_{i,x}$, if node $j$ repels node $i$ forward, $x$ is $j + 1$. In contrast, if node $j$ repels node $i$ backward, $x$ is $j - 1$. At $T/2$ or $-T/2$, a node does not repel an opposite node because they are balanced.
For example, in Figure \ref{fig:4nodes-chain-dwarf-absorb}, node 0 calculates the force from node 2 as the following:
\begin{alignat}{2}
f_{0,2}^{'} &= f_{0,3} - f_{0,2} \nonumber \\
&= \frac{1}{\Delta \phi_{0,3} / T} - \frac{1}{\Delta \phi_{0,2} / T}. \nonumber
\end{alignat}
Noticeably, if node 2 moves close to node 3, the value of $\Delta \phi_{0,2}$ is close to the value of $\Delta \phi_{0,3}$. Then, the magnitude of force $f_{0,2}$ is reduced.
Finally, when $\Delta \phi_{0,2}$ is equal to $\Delta \phi_{0,3}$ as in Figure \ref{fig:4nodes-chain-dwarf-absorb-perfect}, the magnitude of force $f_{0,2}$ becomes 0; in other words, the force is fully absorbed.
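For illustration, the total force with absorption can be sketched as follows
(assumed names; force signs follow the convention of Eq. \ref{eq:force}, and
the two halves of the circle are handled separately).
\begin{verbatim}
# Sketch of Eq. (force-absorb): on each side of the circle the
# nearest phase neighbor exerts its full force, while each farther
# neighbor's force is absorbed by the neighbor just ahead of it:
# f' = f(nearer) - f(farther), vanishing when the phases coincide.
def total_force_absorbed(deltas, T):
    # deltas: nonzero phase differences in (-T/2, T/2) of all
    # perceived one-hop and two-hop neighbors
    total = 0.0
    for sign in (+1, -1):
        side = sorted(abs(d) for d in deltas if sign * d > 0)
        prev = None
        for d in side:
            f = 1.0 / (d / T)
            total += -sign * (f if prev is None else prev - f)
            prev = f
    return total
\end{verbatim}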
\newpage
| {'timestamp': '2017-04-25T02:08:43', 'yymm': '1704', 'arxiv_id': '1704.07002', 'language': 'en', 'url': 'https://arxiv.org/abs/1704.07002'} |
\section{Model}
We consider a collection of $N$ three-level V-type atoms located at the same position. We label the ground state as $\ket{1}$ and the two excited states as $\ket{2}$ and $\ket{3}$, and the transition frequency from level $j$ to $i$ as $\omega_{ij}$. A weak drive field which is resonantly tuned to $\omega_{21}$ prepares the atomic system in a timed-Dicke state. As the drive field is turned off, we detect the photons emitted from the cloud in the forward direction. In the experiment, the atomic cloud has a finite size, but for theoretical simplicity we can assume it to be a point-like ensemble interacting through the vacuum field modes. This is because we are measuring the forward scattering, where any phases of the emitted photons due to the atomic position distribution are exactly compensated by the phases initially imprinted on the atoms by the drive field \cite{Scully_2006}. Additionally, the transitions $\ket{1}\leftrightarrow\ket{2}$ and $\ket{1}\leftrightarrow\ket{3}$ interact with the field effectively with the same phase, considering that the atomic cloud size is much smaller than $2\pi c/\omega_{23}$. We note that while the forward-scattered field is collectively enhanced, the decay rate of the atoms arising from interaction with the rest of the modes is not cooperative \cite{Bienaime_2011}.
The atomic Hamiltonian $H_A$ and the vacuum field Hamiltonian $H_F$ are
\eqn{\begin{split}
H_A &= \sum_{m=1}^{N}\sum_{j=2,3} \hbar \omega_{j1} \hat{\sigma}_{m,j}^+ \hat{\sigma}_{m,j}^-,\\
H_F &= \sum_{k} \hbar \omega_{k} \hat{a}_{k}^{\dagger} \hat{a}_{k},
\label{eq:H-0}
\end{split}}
where $\hat{\sigma}_{m,j}^{\pm}$ is the raising/lowering operator acting on $m^\mr{th}$ atom and $j^\mr{th}$ level, $\hat{a}_k^{\dagger}$ and $\hat{a}_k$ are the field creation/annihilation operators of the corresponding frequency mode $\omega_{k}$, and $N$ refers to the effective number of atoms acting cooperatively in the forward direction.
First, we prepare the atomic system by a weak drive field. The atom-drive field interaction Hamiltonian is
\eqn{H_{\text{AD}}=-\sum_{m=1}^N\sum_{j=2,3} \hbar \Omega_j^m \bkt{ \hat{\sigma}_{m,j}^+ e^{-i \omega_D t} + \hat{\sigma}_{m,j}^- e^{i \omega_D t} }.\label{eq:H-AD}}
Here, $\omega_D$ is the drive frequency and $\Omega_j^m \equiv \vec{d}_{j1}^m\cdot\vec{\epsilon_{D}}\,E_D$ is the Rabi frequency of $j^\mr{th}$ level, where $\vec{d}_{j1}^{m}$ is the dipole moment of $\ket{j}\leftrightarrow\ket{1}$ transition of $m^\mr{th}$ atom, $\vec{\epsilon}_D$ is the polarization unit vector of the drive field, and $E_D$ is the electric field of the drive field. Given that the atomic ensemble is driven with the common field in our experiment, we will assume that the atomic dipoles are aligned with the drive and each other. We can thus omit the atomic labels to write $\Omega_j$.
The interaction Hamiltonian describing the atom-vacuum field interaction, under the rotating wave approximation, is given as
\eqn{
H_{\text{AV}} = -\sum_{m=1}^N\sum_{j=2,3}\sum_{k} \hbar g_{m,j}(\omega_k) \bkt{ \hat{\sigma}_{m,j}^+\hat{a}_{k} + \hat{\sigma}_{m,j}^-\hat{a}_k^{\dagger}}.
}
Here, the atom-field coupling strength $g_{m,j}(\omega_k) \equiv \vec{d}_{j1}^{m} \cdot \vec{\epsilon}_k\sqrt{\frac{\omega_k}{2\hbar\varepsilon_0 V}}$,
where $\vec{\epsilon}_k$ is the polarization unit vector of the field mode, $\varepsilon_0$ is the vacuum permittivity, and $V$ is the field mode volume. As justified previously, the atomic dipoles are aligned to each other and we write $g_{j}(\omega_k)$.
Also, note that the sum over $k$ refers only to the forward-scattered modes. The spontaneous emission arising from the rest of the modes is considered separately later.
\section{Driven dynamics}
\begin{figure*}[t]
\centering
\includegraphics[width = 3.5 in]{fig_laser_extinction.eps}
\caption{\textbf{(a)} The drive field intensity (red circles) at the turn-off edge, characterized as the truncated $\cos^4\bkt{\frac{\pi}{2}\frac{t-t_0}{\tau}}$ function (red solid line) bridging the on and off states of the intensity. Here, $t_0 = -4$ ns and the fall-time $\tau=3.5$ ns are assumed. While the intensity of the drive field turns off mostly within $\approx$ 3.5 ns, an additional 0.5-ns waiting time is provided before the data analysis of the collective emission begins at $t=0$, as shown in Fig.\,\ref{fig_decay}\,(b), to further remove the residual drive intensity and the transient effect from our measurement.}
\label{fig_laser_extinction}
\end{figure*}
We consider here the driven dynamics of the atoms. Moving to the rotating frame with respect to the drive frequency, and tracing out the vacuum field modes, we can write the following Born-Markov master equation for the atomic density matrix:
\eqn{
\der{\hat{\rho}_A}{t} = -\frac{i}{\hbar} \sbkt{\widehat{H}_A + \widehat H_{AD}, \hat {\rho}_A} - \sum_{m,n = 1}^N\sum_{i,j = 2,3} \frac{\Gamma_{ij,mn}^{(D)}}{2} \sbkt{ \hat \rho_A \widehat{\sigma}_{m,i} ^+ \widehat{\sigma}_{n,j} ^- + \widehat{\sigma}_{m,i} ^+ \widehat{\sigma}_{n,j} ^- \hat \rho_A - 2\widehat{\sigma}_{n,j} ^- \hat \rho_A \widehat{\sigma}_{m,i} ^+ },
}
where $\widehat H_A = - \sum_{m=1}^{N}\sum_{j=2,3} \hbar \Delta_{j} \widehat{\sigma}_{m,j}^+ \widehat{\sigma}_{m,j}^-$ is the free atomic Hamiltonian and $\widehat H_{AD} = -\sum_{m=1}^N\sum_{j=2,3} \hbar \Omega_j^m \bkt{ \widehat{\sigma}_{m,j}^+ + \widehat{\sigma}_{m,j}^- }$ is the atom-drive interaction Hamiltonian in the rotating frame, with $ \Delta_j \equiv \omega_{j1} - \omega_D$. The driven damping rates are defined as $ \Gamma_{ij,mn}^{(D) } \equiv \frac{\vec{d}^m_{i1} \cdot \vec{d}^n_{j1}\omega_{D}^3}{3\pi \varepsilon_0 \hbar c^3}$, with the indices $i,j$ referring to the atomic levels, and $m,n$ to different atoms.
Using the above master equation, one can obtain the following optical Bloch equations for the case of a single atom:
\begin{subequations}
\eqn{\label{eq:optical-Bloch-eqa}
\partial_t \rho_{33} &= i\Omega_3(\rho_{13}-\rho_{31}) - \Gamma_{33}^{(D)}\rho_{33} - \frac{\Gamma^{(D)}_{23}}{2} \rho_{23} - \frac{\Gamma^{(D)}_{23}}{2} \rho_{32} \\
\partial_t \rho_{22} &= i\Omega_2(\rho_{12}-\rho_{21}) - \Gamma_{22}^{(D)}\rho_{22}- \frac{\Gamma^{(D)}_{23}}{2} \rho_{23} - \frac{\Gamma^{(D)}_{23}}{2} \rho_{32}\\
\partial_t \rho_{11} &= -i\Omega_3(\rho_{13}-\rho_{31}) -i\Omega_2(\rho_{12}-\rho_{21}) + \Gamma^{(D)}_{33}\rho_{33}+ \Gamma^{(D)}_{22}\rho_{22} + \Gamma^{(D)}_{23}\bkt{ \rho_{23} + \rho_{32}} \\
\partial_t \rho_{31} &= -i \Omega_2 \rho_{32} - i \Omega_3(\rho_{33}-\rho_{11})-\bkt{\frac{\Gamma^{(D)}_{33}}{2}-i\Delta_3}\rho_{31}- \frac{\Gamma^{(D)}_{23}}{2} \rho_{21}\\
\partial_t \rho_{13} &= i \Omega_2 \rho_{23} + i \Omega_3(\rho_{33}-\rho_{11})-\bkt{\frac{\Gamma^{(D)}_{33}}{2}+i\Delta_3}\rho_{13}- \frac{\Gamma^{(D)}_{23}}{2} \rho_{12}\\
\partial_t \rho_{21} &= -i \Omega_3 \rho_{23} - i \Omega_2(\rho_{22}-\rho_{11})-\bkt{\frac{\Gamma^{(D)}_{22}}{2}-i\Delta_2}\rho_{21} - \frac{\Gamma^{(D)}_{23}}{2} \rho_{31}\\
\partial_t \rho_{12} &= i \Omega_3 \rho_{32} + i \Omega_2(\rho_{22}-\rho_{11})-\bkt{\frac{\Gamma^{(D)}_{22}}{2}+i\Delta_2}\rho_{12}- \frac{\Gamma^{(D)}_{23}}{2} \rho_{13}\\
\partial_t \rho_{32} &= -i \Omega_2 \rho_{31} + i\Omega_3\rho_{12}-\bkt{\frac{\Gamma^{(D)}_{22}+\Gamma^{(D)}_{33}}{2} -i\omega_{23}}\rho_{32} - \frac{\Gamma^{(D)}_{23}}{2} \bkt{\rho_{22} + \rho_{33}}\\
\partial_t \rho_{23} &= i \Omega_2 \rho_{13} - i\Omega_3\rho_{21}-\bkt{\frac{\Gamma^{(D)}_{22}+\Gamma^{(D)}_{33}}{2}+i\omega_{23}}\rho_{23} - \frac{\Gamma^{(D)}_{23}}{2} \bkt{\rho_{22} + \rho_{33}},
\label{eq:optical-Bloch-eqi}}
\end{subequations}
where we have defined the single atom driven damping rate as $\Gamma_{ij}^{(D) }\equiv \frac{\vec{d}_{i1} \cdot \vec{d}_{j1}\omega_{D}^3}{3\pi \varepsilon_0 \hbar c^3}$.
Numerically solving Eqs.\,\eqref{eq:optical-Bloch-eqa}--\eqref{eq:optical-Bloch-eqi} along with the normalization condition $\rho_{33}+\rho_{22}+\rho_{11}=1$ gives us the steady-state density matrix $\rho_S$ for the atom. Substituting our experimental parameters, we get the populations: $\rho_{S,33}\approx 0$, $\rho_{S,22}\approx 10^{-10}$, and $\rho_{S,11}\approx 1$. The absolute values of the coherences are: $|\rho_{S,23}|\approx 0$, $|\rho_{S,21}|\approx 10^{-5}$, and $|\rho_{S,31}|\approx 0$. These estimates are made for $N \approx 1-10$, assuming the collective driven damping rate to be $\Gamma_{ij}^{(D)}(N) \approx (1 + Nf) \Gamma_{ij}^{(D)}$ with the phenomenological value $f=1$, and the collective Rabi frequency to be $\Omega_{j} \approx \sqrt{N} \Omega_{j}$. Thus we can conclude that the atomic ensemble is well within the single-excitation regime in $\ket{2}$.
The 3.5-ns time window of laser extinction has a broad spectral component and may excite extra population to $\ket{2}$ and $\ket{3}$. We numerically simulate the optical Bloch equations for this time window to find the density matrix after the laser turn-off. We model the laser turn-off shape as $\cos^4$ (see Fig.~\ref{fig_laser_extinction}) and vary the Rabi frequency accordingly. Note that this is a calculation for estimation purposes and may not convey the full dynamics in the laser extinction period. Within the numerical precision limit, which is set by the evolution time step ($10^{-5}$ ns) multiplied by $\Gamma_{ij} \approx 0.01$ GHz, we obtain the following density matrix values after the turn-off: $\rho_{33}\approx 0$, $\rho_{22}\approx 0$, $\rho_{11}\approx 1$, $\rho_{23}\approx 0$, $\rho_{12}\approx 10^{-5}$, and $\rho_{13}\approx 10^{-7}-10^{-6}$. Thus the laser turn-off edge does not produce any significant excitation in $\ket{3}$.
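For reference, the simulations described above can be reproduced with a
compact numerical sketch of the single-atom master equation (the matrix form
of Eqs.\,\eqref{eq:optical-Bloch-eqa}--\eqref{eq:optical-Bloch-eqi}, with
$\hbar = 1$ and illustrative rather than experimental parameter values):
\begin{verbatim}
# Sketch: integrate the single-atom master equation in matrix form,
# hbar = 1, basis ordering |1>, |2>, |3>.
import numpy as np
from scipy.integrate import solve_ivp

SP = {2: np.array([[0,0,0],[1,0,0],[0,0,0]], complex),  # |2><1|
      3: np.array([[0,0,0],[0,0,0],[1,0,0]], complex)}  # |3><1|

def drho(rho, O2, O3, D2, D3, G22, G33, G23):
    H = (-D2*SP[2]@SP[2].conj().T - D3*SP[3]@SP[3].conj().T
         - O2*(SP[2] + SP[2].conj().T) - O3*(SP[3] + SP[3].conj().T))
    out = -1j * (H@rho - rho@H)
    for (i, j), g in {(2,2): G22, (3,3): G33,
                      (2,3): G23, (3,2): G23}.items():
        A = SP[i] @ SP[j].conj().T
        out -= (g/2)*(rho@A + A@rho - 2*SP[j].conj().T@rho@SP[i])
    return out

def rhs(t, y, *p):            # real-packed wrapper for solve_ivp
    rho = (y[:9] + 1j*y[9:]).reshape(3, 3)
    d = drho(rho, *p).ravel()
    return np.concatenate([d.real, d.imag])

rho0 = np.zeros((3, 3), complex); rho0[0, 0] = 1.0  # start in |1>
y0 = np.concatenate([rho0.ravel().real, rho0.ravel().imag])
sol = solve_ivp(rhs, (0.0, 200.0), y0,
                args=(0.02, 0.02, 0.0, 0.0, 0.01, 0.01, 0.01))
\end{verbatim}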
\section{Quantum beat dynamics}
As the drive field is turned off, the system evolves with the atom-vacuum field interaction Hamiltonian. Moving to the interaction representation with respect to $H_A+H_F$, we get the interaction Hamiltonian in the interaction picture:
\eqn{
\tilde{H}_{\text{AV}} = -\sum_{m=1}^N\sum_{j=2,3}\sum_{k} \hbar g_{j}(\omega_k)
\bkt{ \hat{\sigma}_{m,j}^+\hat{a}_{k}e^{i(\omega_{j1}-\omega_{k})t }
+ \hat{\sigma}_{m,j}^-\hat{a}_k^{\dagger}e^{-i(\omega_{j1}-\omega_{k})t}}.
\label{eq:H-AV}
}
Initially the system shares one excitation in $\ket{2}$ symmetrically, and the EM field is in the vacuum state such that
\eqn{
\ket{\Psi(0)} = \frac{1}{\sqrt{N}}\sum_{m=1}^{N}\hat{\sigma}_{m,2}^{+}\ket{11\cdots 1}\ket{\{0\}}.
\label{eq:psi-initial}
}
As the system evolves due to the atom-vacuum field interaction, it remains in the single-excitation manifold of total atom + field Hilbert space, as one can see from the interaction Hamiltonian (Eq.\,\eqref{eq:H-AV}):
\eqn{
\ket{\Psi(t)}=\bkt{\sum_{m=1}^{N}\sum_{j=2,3}c_{m,j}(t)\hat{\sigma}_{m,j}^{+}+\sum_{k}c_{k}(t)\hat{a}_k^{\dagger}}\ket{11\cdots 1}\ket{\{0\}}.
\label{eq:psi-evolved}
}
Now we solve the Schr\"odinger equation to find the time evolution of the atom + field system under the atom-field interaction using Eqs.~\eqref{eq:psi-evolved} and \eqref{eq:H-AV} to obtain
\begin{subequations}
\begin{align}
& \partial_t c_{m,j}(t) = i\sum_{k} g_j(\omega_k) e^{i(\omega_{j1}-\omega_{k})t} c_{k}(t), \\
& \partial_t c_{k}(t) = i\sum_{m=1}^{N}\sum_{j=2,3}g_j(\omega_k)e^{-i(\omega_{j1}-\omega_{k})t}c_{m,j}(t).
\end{align}
\label{eq:de-1}
\end{subequations}
Formally integrating Eq.\,\eqref{eq:de-1}(b) and plugging it into Eq.\,\eqref{eq:de-1}(a), we have
\eqn{
\partial_t c_{m,j}(t) = -\sum_{k}g_j(\omega_k) e^{i(\omega_{j1}-\omega_{k})t}\int_0^t \mathrm{d}{\tau}\sum_{n=1}^{N}\sum_{l=2,3}g_l(\omega_k) e^{-i(\omega_{l1}-\omega_k)\tau} c_{n,l}(\tau).
}
We observe that the coefficients $c_{m,2}(t)$ ($c_{m,3}(t)$) have the same initial conditions and obey the same evolution equation for every $m$; thus we can justifiably define $c_2(t) \equiv c_{m,2}(t)$ ($c_3(t)\equiv c_{m,3}(t)$).
Assuming a flat spectral density of the field and making the Born-Markov approximation, we get
\begin{subequations}
\begin{align}
\partial_t c_2(t) &= -\frac{\Gamma_{22}^{(N)}}{2} c_2(t)-\frac{\Gamma_{23}^{(N)}}{2} e^{i\omega_{23}t}c_3(t),\\
\partial_t c_3(t) &= -\frac{\Gamma_{33}^{(N)}}{2} c_3(t)-\frac{\Gamma_{32}^{(N)}}{2} e^{-i\omega_{23}t}c_2(t),
\end{align}
\label{eq:de-2}
\end{subequations}
where we have defined $\Gamma_{jl}^{(N)}\equiv \Gamma_{jl} + Nf\Gamma_{jl}$, with $\Gamma_{jl} = \frac{\vec{d}_{j1}\cdot \vec{d}_{l1}\omega_{l1}^3}{3\pi \varepsilon_0 \hbar c^3}$ as the generalized decay rate into the quasi-isotropic modes and $Nf\Gamma_{jl}$ as the collective decay rate in the forward direction \cite{Bienaime_2011, Araujo_2016}. The factor $f$ represents the geometrical factor coming from restricting the emission to the forward scattered modes. We emphasize here that the emission into all the modes (not specifically the forward direction) denoted by $\Gamma_{jl}$ is added phenomenologically and is not collective. Considering that the atomic dipole moments induced by the drive field are oriented along the polarization of the driving field, we can obtain $\Gamma_{23}=\sqrt{ \Gamma_{22}\Gamma_{33}}$, which can be extended to $\Gamma_{23}^{(N)}=\sqrt{ \Gamma_{22}^{(N)}\Gamma_{33}^{(N)}}$.
To solve the coupled differential equations, we take the Laplace transform of Eq.\,\eqref{eq:de-2}(a) and (b):
\begin{subequations}
\begin{align}
s\tilde{c}_2(s)&=c_2(0)-\frac{\Gamma_{22}^{(N)}}{2}\tilde{c}_2(s)-\frac{\Gamma_{23}^{(N)}}{2}\tilde{c}_3(s-i\omega_{23}),\\
s\tilde{c}_3(s)&=c_3(0)-\frac{\Gamma_{33}^{(N)}}{2}\tilde{c}_3(s)-\frac{\Gamma_{32}^{(N)}}{2}\tilde{c}_2(s+i\omega_{23}),
\end{align}
\end{subequations}
where we have defined $\tilde{c}_j(s) \equiv \int_0^{\infty} c_j(t) e^{-st}\, \mathrm{d}t$ as the Laplace transform of $c_j(t)$. Substituting the initial conditions, we obtain the Laplace coefficients as
\begin{subequations}\begin{align}
\tilde{c}_2(s)&=\,\frac{1}{\sqrt{N}}\frac{s+\frac{\Gamma_{33}^{(N)}}{2}-i\omega_{23}}{s^2+(\Gamma_{\text{avg}}^{(N)}-i\omega_{23})s-i\omega_{23}\frac{\Gamma_{22}^{(N)}}{2}},\\
\tilde{c}_3(s)&=-\frac{\Gamma^{(N)}_{32}}{2\sqrt{N}}\,\frac{1}{s^2+(\Gamma^{(N)}_{\text{avg}}+i\omega_{23})s+i\omega_{23}\frac{\Gamma^{(N)}_{33}}{2}}.
\end{align}\end{subequations}
The poles of the denominators are, respectively,
\begin{subequations}\begin{align}
s_{\pm}^{(2)}=&-\frac{\Gamma^{(N)}_{\text{avg}}}{2} + \frac{i\omega_{23}}{2} \pm \frac{i\delta}{2}, \\
s_{\pm}^{(3)}=&-\frac{\Gamma^{(N)}_{\text{avg}}}{2} - \frac{i\omega_{23}}{2} \pm \frac{i\delta}{2},
\end{align}\end{subequations}
where we have defined $\Gamma_{\text{avg}}^{(N)}=\frac{\Gamma_{33}^{(N)}+\Gamma_{22}^{(N)}}{2}$, $\Gamma_{\text{d}}^{(N)}=\frac{\Gamma_{33}^{(N)}-\Gamma_{22}^{(N)}}{2}$, and $\delta = \sqrt{\omega_{23}^2-\bkt{\Gamma^{(N)}_{\text{avg}}}^2+2i\omega_{23}\Gamma^{(N)}_{\text{d}}}$. The real parts of the above roots correspond to the collective decay rates of the excited states, while the imaginary parts correspond to the frequencies. The fact that $\delta$ is generally a complex number unless $\Gamma_{22}=\Gamma_{33}$ means that both the decay rates and the frequencies are modified. To see this more clearly, we can expand $\delta$ up to second order in $\Gamma_{jl}^{(N)}/\omega_{23}$, considering we are working in a spectroscopically well-separated regime ($\Gamma_{jl}^{(N)}\ll\omega_{23}$):
\eqn{
\delta \approx \omega_{23}\sbkt{1-\frac{1}{2}\bkt{\frac{\Gamma^{(N)}_{23}}{\omega_{23}}}^2}+i\Gamma^{(N)}_{d}\sbkt{1+\frac{1}{2}\bkt{\frac{\Gamma^{(N)}_{23}}{\omega_{23}}}^2},
}
and the above poles become
\begin{subequations}\begin{align}
s_{+}^{(2)}=&-\frac{\Gamma^{(N)}_{33}}{2}\bkt{1+\frac{\Gamma^{(N)}_{\text{d}}\Gamma_{22}^{(N)}}{2\omega_{23}^2}} + i\omega_{23}\sbkt{1-\bkt{\frac{\Gamma^{(N)}_{23}}{2\omega_{23}}}^2}, \\
s_{-}^{(2)}=&-\frac{\Gamma^{(N)}_{22}}{2}\bkt{1-\frac{\Gamma^{(N)}_{\text{d}}\Gamma_{33}^{(N)}}{2{\omega_{23}^2}}} + i\omega_{23}\bkt{\frac{\Gamma^{(N)}_{23}}{2\omega_{23}}}^2, \\
s_{+}^{(3)}=&-\frac{\Gamma^{(N)}_{33}}{2}\bkt{1+\frac{\Gamma^{(N)}_{\text{d}}\Gamma^{(N)}_{22}}{2\omega_{23}^2}} - i\omega_{23}\bkt{\frac{\Gamma^{(N)}_{23}}{2\omega_{23}}}^2, \\
s_{-}^{(3)}=&-\frac{\Gamma^{(N)}_{22}}{2}\bkt{1-\frac{\Gamma^{(N)}_{\text{d}}\Gamma^{(N)}_{33}}{2\omega_{23}^2}} - i\omega_{23}\sbkt{1-\bkt{\frac{\Gamma^{(N)}_{23}}{2\omega_{23}}}^2}.
\end{align}\end{subequations}
The atomic state coefficients in time domain are
\begin{subequations}
\begin{align}
c_2(t) &= \frac{1}{2\sqrt{N}\delta} e^{- \Gamma^{(N)}_{\text{avg}}t/2} e^{i\omega_{23}t/2} \sbkt{(-i\Gamma^{(N)}_{d}-\omega_{23}+\delta)e^{i\delta t/2} + (i\Gamma^{(N)}_{d}+\omega_{23}+\delta)e^{-i\delta t/2}},\\
c_3(t) & =\frac{i\Gamma^{(N)}_{32}}{2\sqrt{N}\delta} e^{-\Gamma^{(N)}_{\text{avg}}t/2} e^{-i\omega_{23}t/2} \sbkt{e^{i\delta t/2}-e^{-i\delta t/2}}.
\end{align}
\label{eq:atom-coefficients}
\end{subequations}
Again, expanding $\delta$ under the condition $\Gamma_{jl}^{(N)}\ll\omega_{23}$, we get
\begin{subequations}
\begin{align}
c_2(t) &= \frac{1}{\sqrt{N}}\sbkt{ e^{- \Gamma^{(N)}_{22}t/2} - \bkt{\frac{\Gamma^{(N)}_{23}}{2\omega_{23}}}^2\frac{\delta^*}{\delta}e^{-\Gamma^{(N)}_{33}t/2}e^{i\omega_{23}t}},\\
c_3(t) & = -\frac{i\Gamma^{(N)}_{32}}{2\sqrt{N}\delta} \sbkt{e^{-\Gamma^{(N)}_{22}t/2}e^{-i\omega_{23}t}-e^{-\Gamma^{(N)}_{33}t/2}}.
\end{align}
\label{eq:atom-coefficients-2}
\end{subequations}
Note that the collection of $N$ atoms behaves like one ``super-atom'' which decays, in the forward direction, at a rate $N$ times that of an individual atom. We note that the system is not only superradiant with respect to the transition involving the initially excited level, but also with respect to the other transition, as a result of the vacuum-induced coupling between the levels. Most of the population in $\ket{2}$ decays at the rate $\Gamma_{22}^{(N)}$, and a small amount of it decays at $\Gamma_{33}^{(N)}$ with the corresponding level shift $\omega_{23}$. In $\ket{3}$, there are equal-amplitude components decaying at $\Gamma^{(N)}_{22}$ (with level shift $-\omega_{23}$) and at $\Gamma^{(N)}_{33}$. The small but nonzero contribution of $\ket{3}$ gives rise to a beat at a frequency of about $\omega_{23}$.
\section{Field Intensity}
The light intensity at position $x$ and time $t$ (assuming the atom is at position $x=0$ and it starts to evolve at time $t=0$) is
\eqn{
I(x,t) = \frac{\epsilon_0 c}{2}\bra{\Psi(t)}\hat{E}^{\dagger}(x,t) \hat{E}(x,t) \ket{\Psi(t)},
}
where the electric field operator is
\eqn{
\hat{E}(x,t) = \int_{-\infty}^{\infty} \mathrm{d} k \, E_k \hat{a}_k e^{ikx}e^{-i\omega_k t}.
}
Plugging in the electric field operator and the single-excitation ansatz (Eq.\,\eqref{eq:psi-evolved}), we obtain the intensity up to a constant factor:
\eqn{
I(x,t) \simeq N^2 \abs{e^{-i\omega_{23}\tau}c_2(\tau) + \frac{\Gamma_{23}}{\Gamma_{22}}c_3(\tau) }^2 \Theta(\tau),
\label{eq:field-intensity}
}
where $\tau = t-\abs{x/v}$.
Substituting Eqs.~\eqref{eq:atom-coefficients}(a) and (b) into the above and approximating $\delta$ in the regime $\Gamma_{jl}^{(N)}\ll\omega_{23}$, we get
\eqn{
\frac{I(\tau)}{I_0} = e^{-\Gamma^{(N)}_{22}\tau}+\bkt{\frac{ \Gamma^{(N)}_{33}}{2\omega_{23}}}^2 e^{-\Gamma^{(N)}_{33}\tau} + \frac{\Gamma^{(N)}_{33}}{\omega_{23}} e^{-\Gamma^{(N)}_{\text{avg}}\tau} \sin(\omega_{23}\tau+\phi),
}
where $I_0$ is a normalization factor which increases as the number of atoms increases. Neglecting the small second term on the right-hand side, we get the relative beat intensity normalized to the main decay amplitude:
\eqn{
\text{beat amp.} = \frac{\Gamma^{(N)}_{33}}{\omega_{23}},
}
and the beat phase $\phi$:
\eqn{
\phi = \arctan\bkt{\frac{\Gamma^{(N)}_{22}}{\omega_{23}}}.
}
We see that even if there is no population in level 3 at the beginning, the vacuum field builds up a coherence between level 2 and level 3 to produce a quantum beat. This is in line with the quantum trajectory calculation of the single-atom case \cite{Hegerfeldt_1994}, with the individual decay rates replaced by collective decay rates. We can verify that the collective effect manifests itself in the beat size and the beat phase.
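As a numerical cross-check of the expressions above, the normalized intensity together with the beat amplitude and phase can be evaluated directly; the short sketch below does so with placeholder rates, not fitted experimental values.
\begin{verbatim}
import numpy as np

G22N, G33N = 1.0, 0.8         # collective rates Gamma^(N) (placeholders)
Gavg = 0.5 * (G22N + G33N)    # Gamma_avg^(N)
w23 = 20.0                    # beat frequency omega_23 (placeholder)
phi = np.arctan(G22N / w23)   # beat phase

tau = np.linspace(0.0, 5.0, 2000)
I = (np.exp(-G22N * tau)
     + (G33N / (2 * w23))**2 * np.exp(-G33N * tau)
     + (G33N / w23) * np.exp(-Gavg * tau) * np.sin(w23 * tau + phi))

print("relative beat amplitude:", G33N / w23)
print("beat phase (rad):", phi)
\end{verbatim}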
\section{Data analysis in Fig. \ref{fig_decay} (b)}
The modulated decay profiles of the flash after the peak are magnified in Fig.\,\ref{fig_decay}\,(b). The purpose of the figure is to visually compare the decay rates and the relative beat intensity $I_\mathrm{b}$, so we normalize each curve by the exponential decay amplitude such that the normalized intensity starts to decay from $\approx1$ at $t=0$. In practice, we fit the $I(t)$ shown in Fig.\,\ref{fig_decay}\,(a) after $t=0$ using Eq.\,\eqref{eq_intensity} to get $I_0$ for each curve, yielding the $I(t)/I_0$ curves in Fig.\,\ref{fig_decay}\,(b). Note that, more precisely, it is the fitting curve that decays from $I(t)/I_0\approx1$, not the experimental data. In fact, the plotted data tend to be lower than the fitting curves near $t=0$, due to the transient behavior around the flash peak.
The inset displays the FFT of the beat signal shown in the main figure. We first subtract from the $I(t)/I_0$ data the exponential decay profile (the first term of the fitting function, Eq.\,\eqref{eq_intensity}) as well as the dc offset. The residual, a sinusoidal oscillation with an exponentially decaying envelope, is the beat signal represented by the second term of Eq.\,\eqref{eq_intensity}. The FFT of the beat signal has a lower background at $\omega = 0$ due to the pre-removal of the exponential decay and the offset. The linewidth of each spectrum is limited by the finite lifetime of the beat signal, which corresponds to $\Gamma^{(N)}_\mathrm{avg}$ as in Eq.\,\eqref{eq_intensity}.
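The beat-extraction procedure can be summarized by the following sketch, applied here to a synthetic signal of the assumed two-term form of Eq.\,\eqref{eq_intensity}; in practice the fitted parameters are used instead of the placeholder values below.
\begin{verbatim}
import numpy as np

dt = 0.01
t = np.arange(0.0, 20.0, dt)
G22N, Gavg, w23, Ib, c0 = 1.0, 0.9, 20.0, 0.05, 0.002  # placeholders
signal = np.exp(-G22N*t) + Ib*np.exp(-Gavg*t)*np.sin(w23*t) + c0

# Remove the exponential decay (first fit term) and the dc offset.
residual = signal - np.exp(-G22N*t) - c0
spectrum = np.abs(np.fft.rfft(residual))
omega = 2*np.pi*np.fft.rfftfreq(len(t), d=dt)  # angular frequencies
print("spectral peak at omega =", omega[spectrum.argmax()])
\end{verbatim}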
\vspace{2cm}
\end{document}
\section{Introduction}
A huge amount of information flows through enterprise documents; thus, it is imperative to develop efficient information extraction techniques to extract and use this information productively. While documents comprise multiple components such as text, tables, and figures, tables are the most commonly used structural representation, organizing information into rows and columns. This representation captures structural and geometrical relationships between different elements and attributes in the data. Moreover, important facts and numbers are often presented in tables instead of verbose paragraphs. Tables in the financial domain are a good example, where different financial metrics such as ``revenue'' and ``income'' are presented for different quarters or years. Extracting the content of a table into a structured format (CSV or JSON) \cite{gao2019icdar}, \cite{gobel2013icdar}, \cite{jimeno2021icdar} is a key step in many information extraction pipelines.
Unlike traditional machine learning problems where the output is a class (classification) or a number (regression), the outcome of a table parsing algorithm is always a structure. There needs to be a way to compare one structure against another and define some measure of ``similarity/distance'' to evaluate different methods. A number of metrics quantifying this ``distance'' have been proposed in the literature and in multiple competitions. Existing metrics evaluate the performance of table parsing algorithms using structural and textual information. This paper presents the limitations of existing metrics that arise from their dependence on textual information. We emphasize that textual information introduces an additional dependency on the OCR (text detection/recognition), which is a separate area in itself and should not be included when evaluating how good the detected table structure is. This paper presents a ``true'' metric which is agnostic to textual details and accounts only for the layout of cells in terms of their row/column indices and bounding boxes.
\vspace{-6pt}
\section{Existing Metrics in Table Parsing}
Two of the existing metrics are adjacency-relation set-based F1 scores with different definitions of the set. They break down and linearize the table structure into two dimensions, one along the rows and one along the columns. Adjacency Relation (Text) \cite{gobel2013icdar} computes pair-wise relations between non-empty adjacent cells, and a relation is considered correct only if the direction (horizontal/vertical) and the text of both participating cells match. It does not take into account empty cells or multi-hop cell alignment. Adjacency Relation (IOU) \cite{gao2019icdar} is a text-independent metric where original non-empty cells are mapped to predicted cells by leveraging (multiple) IOU thresholds, and adjacency relations are then calculated. This metric takes a weighted average of the F1-scores computed at the IOU thresholds \{0.6, 0.7, 0.8, 0.9\}. Finally, the predicted relations are compared to the ground truth relations and precision/recall/F1 scores are computed.
The third metric considers the structure as an HTML encoding of the table. In this representation, the table is viewed as a tree with the rows being the children of the root $<table>$ node, and cells being the children (represented by $<td>[text]</td>$) of the individual rows. A Tree edit-distance (TEDS) metric \cite{zhong2020image} is proposed which compares two trees and reports a single number summarizing their similarity.
While there are other metrics used in the literature, such as BLEU-4 \cite{li2019tablebank} (which is more language-based), this paper only considers the above three most widely used metrics for evaluating the performance of table structure recognition.
\section{Proposed Metric}
This paper highlights the limitations of the previous metrics and also proposes a new metric, Tree-Edit-Distance Based Similarity with IOU \textit{(TEDS-IOU)}, for evaluating table structure recognition algorithms. The paper also demonstrates how \textit{TEDS-IOU} addresses the limitations of existing metrics.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.5\linewidth]{example.png}
\vspace{-10pt}
\caption{Original table and an example prediction for the same. For Adjacency relation (Text), the characters can be considered as representing the text inside cells. For Adjacency relation (IOU), characters can be considered as labels representing cells.}
\label{fig:digitalization}
\vspace{-16pt}
\end{center}
\end{figure}
Table \ref{tab1} describes the limitations of the commonly used metrics in the table structure recognition literature. For example, in figure \ref{fig:digitalization}, even though the predicted table misses one entire row and $4$ empty cells, the only extra relation in the predicted table, in terms of adjacency relations, is \{C, A, Horizontal\}, where `Horizontal' is the direction of the relation. This only affects precision, but the recall is still $100$\%, which clearly should have been penalised. Also, in the case of the IOU-based metric, let us assume label mapping, i.e. the cell represented by ``C'' in the ground truth is mapped to the ``C'' cell in the predicted table using IOU thresholds. We still have that same extra relation \{``C'', ``A'', Horizontal\}, where `Horizontal' is the direction, which demonstrates the inability to capture empty cells and mis-alignments. We should note that this metric is still better than the text-based version, since it does not rely on comparing text. Accurately detecting and recognizing text (OCR) is a separate field in itself, while in table structure recognition, we are primarily interested in localizing the cell boundaries and assigning text to them.
\begin{table}[t]
\caption{ Existing metrics in literature and their limitations}\label{tab1}
\begin{tabular}{p{0.32\textwidth}|p{0.66\textwidth}}
\hline
\textbf{Metric} & \textbf{Limitations}\\
\hline
{Adjacency Relation (Text)} & {Doesn't handle empty cells, misalignment of cells beyond immediate neighbours \& text dependent}\\
{Adjacency Relation (IOU)} & {Doesn't handle empty cells, misalignment of cells beyond immediate neighbours}\\
{TEDS (Text)} & {Text dependent but less strict due to Levenshtein distance}\\
\hline
\end{tabular}
\vspace{-16pt}
\end{table}
The TEDS (Text) metric addresses the shortcomings of previous metrics with regard to empty cells and multi-hop mis-alignments \cite{zhong2020image}. In TEDS, all cells, with or without text, are considered, thereby also including empty cells in the computation. So, TEDS (Text) will penalise the absence of a row and all the alignment mismatches when comparing the ground truth table against the predicted table in figure \ref{fig:digitalization}. However, it still depends on text: it computes the edit distance between cells' texts, as compared to the exact match required by Adjacency Relation (Text).
Table structure recognition algorithms aim at predicting the location (bounding boxes) of cells and their logical relation with one another, irrespective of the text in the cells. Therefore, the evaluation metric should not penalize an algorithm for inaccuracies in text. With this observation, this paper proposes TEDS (IOU), which replaces the string edit distance between cells' texts with the IOU distance between their bounding boxes. This effectively removes the dependency on text or OCR, while also preserving the benefits of the original TEDS (Text) metric. Specifically, we compute TEDS (IOU) as follows: the cost of an insertion or deletion operation is 1 unit; when substituting a node $n_s$ with $n_t$, the cost is 1 unit if either $n_s$ or $n_t$ is not a $<td>$, or if both $n_s$ and $n_t$ are $<td>$ but differ in column span or row span; otherwise, the cost is $1 - IOU(n_s.bbox, n_t.bbox)$ (a Python sketch of these costs is given after the metric properties listed below). Finally,
\begin{equation}
TEDS\_IOU(T_a, T_b) = 1 - \frac{EditDistIOU(T_a, T_b)}{max(|T_a|, |T_b|)}
\end{equation}
TEDS (IOU) $\in [0,1]$, the higher the better. $|\cdot|$ denotes cardinality. The IOU distance $(IOU_d = 1 - IOU)$, being a Jaccard distance \cite{kosub2019note}, is a metric as it satisfies:
\begin{enumerate}
\item $IOU_d(A, B) = 0 \iff A = B \qquad \qquad \qquad \qquad \qquad \; \;$ $Identity$
\item $IOU_d(A, B) = IOU_d(B, A) \qquad \qquad \qquad \qquad \qquad \; \; \; \; \; \;$ $Symmetry$
\item $IOU_d(A, C) \leq IOU_d(A, B) + IOU_d(B, C) \qquad \qquad \; \,$ $Triangle\;Inequality$
\end{enumerate}
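The following Python sketch makes the node costs and the final score concrete. A generic tree edit distance routine (e.g. Zhang-Shasha) is assumed to be available as \texttt{tree\_edit\_distance} and is not implemented here; the node attributes are likewise an assumed interface.
\begin{verbatim}
def iou(a, b):
    """IOU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def sub_cost(ns, nt):
    # ns/nt: nodes with .tag, .rowspan, .colspan and .bbox attributes.
    if ns.tag != "td" or nt.tag != "td":
        return 1.0
    if ns.rowspan != nt.rowspan or ns.colspan != nt.colspan:
        return 1.0
    return 1.0 - iou(ns.bbox, nt.bbox)

def teds_iou(Ta, Tb, size_a, size_b):
    # Insertions and deletions cost 1 unit; substitutions use sub_cost.
    d = tree_edit_distance(Ta, Tb, sub_cost)  # assumed external routine
    return 1.0 - d / max(size_a, size_b)
\end{verbatim}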
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.8\linewidth]{test_table.png}
\vspace{-10pt}
\caption{(a) is a table from PubTabNet dataset. In (b), red lines denote the predicted structure and blue lines depict the true structure.}
\label{test}
\vspace{-16pt}
\end{center}
\end{figure}
To demonstrate the effectiveness of the proposed TEDS (IOU) metric, we compute all four metrics for the predicted table in figure \ref{test}(b). In this example, we had known OCR issues: the $\pm$ symbol could not be recognized (it got recognized as $+$) and all the cells with ``NA'' were detected as empty. Adjacency Relation (Text) gets a very poor score of $13.7$ F1 due to the exact text match constraint. Adjacency Relation (IOU), being text independent, is more robust and achieves a Weighted Avg. F1 of $59.8$. TEDS (text) matches text through edit distances; therefore, only the ``NA'' cells give a high edit distance (of 1), and it scores $71.6$ on this table. TEDS (IOU), being text independent and computing the IOU distance between cells, assigns a higher score of $80.6$, which seems to be the most representative of the prediction quality.
\section{Discussion \& Future Work}
We proposed a new metric for table structure recognition and demonstrated its benefits against existing metrics. As future steps, we plan to compare these metrics across different datasets and models. A possible extension of this work would be to introduce different thresholds for the IOU, as in Adjacency Relation (IOU), instead of using absolute numbers.
\vspace{-10pt}
\bibliographystyle{splncs04}
\input{main.bbl}
\end{document}
\section{Introduction}
\input{sections/s1_introduction}
\section{Related Work}
\label{sec:related}
\input{sections/s2_related}
\input{sections/f2_model}
\section{Method}
\label{sec:approach}
\input{sections/s3_approach}
\input{sections/t1_scannet_reg}
\section{Experiments}
\label{sec:experiments}
\input{sections/s4_experiments}
\section{Conclusion}
\label{sec:conclusion}
\input{sections/s5_conclusion}
\vspace{6pt} \noindent
\textbf{Acknowledgments}
We would like to thank the anonymous reviewers for their valuable comments and suggestions.
We also thank Nilesh Kulkarni, Karan Desai, Richard Higgins, and Max Smith for many helpful discussions and feedback on early drafts of this work.
\clearpage
\newpage
{\small
\bibliographystyle{format/ieee}
\subsection{Pairwise Registration}
\label{sec:exp_pcreg}
We first evaluate our approach on point cloud registration.
Given two RGB-D images, we estimate the 6-DOF pose that would best align the first input image to the second.
The transformation is represented by a rotation matrix $\mathbf{R}$ and translation vector $\mathbf{t}$.
\lsparagraph{Evaluation Metrics.}
We evaluate pairwise registration using the pose prediction error as well as the chamfer distance between the estimated and ground-truth alignments.
We compute the angular and translation errors as follows:
$$
E_{\text{rotation}} = \arccos(\frac{Tr(\mathbf{R}_{pr}\mathbf{R}_{gt}^\top) - 1}{2}),
$$$$
E_{\text{translation}} = ||\mathbf{t}_{pr} - \mathbf{t}_{gt}||_2 .
$$
We report the translation error in centimeters and the rotation errors in degrees.
While pose gives us a good measure of performance, some scenes are inherently ambiguous and multiple alignments can explain the scene appearance; \eg, walls, floors, symmetric objects.
To address these cases, we compute the chamfer distance between the scene and our reconstruction.
Given two point clouds where $\mathcal{P}$ represents the correct alignment of the scene and $\mathcal{Q}$ represents our reconstruction of the scene,
we can define the closest pairs between the point clouds as the set $\Lambda_{\mathcal{P}, \mathcal{Q}} = \{(p, \argmin_{q \in \mathcal{Q}} ||\mathbf{x}_p - \mathbf{x}_q||) : p \in \mathcal{P}\}$.
We then compute the chamfer error as follows:
$$
E_{\text{cham}} =
|\mathcal{P}|^{-1} \sum_{\mathclap{({p, q) \in \Lambda_{\mathcal{P}, \mathcal{Q}}}}} ||\mathbf{x}_p - \mathbf{x}_q||
+
|\mathcal{Q}|^{-1} \sum_{\mathclap{{(q, p) \in \Lambda_{\mathcal{Q}, \mathcal{P}}}}} ||\mathbf{x}_q - \mathbf{x}_p||.
$$
For each of these error metrics, we report the mean and median errors over the dataset as well as the accuracy for different thresholds.
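For reference, the following numpy sketch shows how these metrics can be computed from predicted and ground-truth poses and from the aligned point clouds; it uses brute-force nearest neighbours and is meant as an illustration, not our evaluation code.
\begin{verbatim}
import numpy as np

def rotation_error_deg(R_pr, R_gt):
    cos = np.clip((np.trace(R_pr @ R_gt.T) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos))

def translation_error(t_pr, t_gt):
    return np.linalg.norm(t_pr - t_gt)

def chamfer_error(P, Q):
    # P, Q: Nx3 and Mx3 point clouds; symmetric chamfer distance.
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()
\end{verbatim}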
We conduct our experiments on ScanNet and report the results in Table~\ref{tab:pose_scannet}.
We find that our model learns accurate point cloud registration, outperforming prior feature descriptors and performing on par with supervised geometric registration approaches.
We next analyze our results through the questions posed at the start of this section.
\lsparagraph{Does unsupervised learning improve over off-the-shelf descriptors? }
Yes. We evaluate our approach against the traditional pipeline for registration: feature extraction using an off-the-shelf keypoint descriptor and alignment via RANSAC.
We show large performance gains over both traditional and learned descriptors.
It is important to note that FCGF and SuperPoint currently represent the state-of-the-art for feature descriptors.
Furthermore, both methods have been used directly, without further fine-tuning, to achieve the highest performance on image registration benchmarks~\cite{sarlin2020superglue} and geometric registration benchmarks~\cite{choy2020deep,gojcic2020learning}.
We also find that our approach learns features that can generalize to similar datasets. As shown in Table~\ref{tab:pose_scannet}, our model trained on 3D Match outperforms the off-the-shelf descriptors while being competitive with supervised geometric registration approaches.
\lsparagraph{Does RGB-D training alleviate the need for pose supervision? }
Yes.
We compare our approach to two recently proposed supervised point cloud registration approaches: DGR~\cite{choy2020deep} and 3D Multi-view Registration~\cite{gojcic2020learning}.
Since their models were trained on 3D Match, we also train our model on 3D Match and report the numbers.
We find that our model is competitive with supervised approaches when trained on their dataset, and can outperform them when trained on ScanNet.
However, a direct comparison is more nuanced since those two classes of methods differ in two key ways: training supervision and input modality.
We argue that the recent rise in RGB-D cameras on both hand-held devices and robotic systems supports our setup.
First, the rise in devices suggests a corresponding increase in RGB-D raw data that will not necessarily be annotated with pose information.
This increase provides a great opportunity for unsupervised learning to leverage this data stream.
Second, while there are cases where depth sensing might be the better or only option (\eg, dark environments or highly reflective surfaces), there are many cases where one has access to both RGB and depth information.
The ability to leverage both can increase the effectiveness and robustness of a registration system.
Finally, while we only learn visual features in this work, we note that our approach is easily extensible to learning both geometric and visual features since it is agnostic to how the features are calculated.
\subsection{Ablations}
\label{sec:exp_ablations}
We perform several ablation studies to better understand the model's performance and its various components.
In particular, we are interested in better understanding the impact of the optimization and rendering parameters on the overall model performance.
While some ablations can only be applied during training (\eg, rendering choice), ablations that affect the correspondence estimation and fitting can be selectively applied during training, inference, or both.
Hence, we consider all the variants.
\vspace{0.2cm}
\lsparagraph{Joint Rendering.}
Our first ablation investigates the impact of our rendering choices by rendering the output images from the joint point cloud.
In \S~\ref{sec:method_render}, we discuss rendering alternate views to force the model to align the pointclouds to produce accurate renders.
As shown in Table~\ref{tab:ablations}, we find that naively rendering the joint point cloud results in a significant performance drop.
This supports our claim that a joint render would negatively impact the features learned since the model can achieve good photometric consistency even if the pointclouds are not accurately aligned.
\vspace{0.2cm}
\lsparagraph{Ratio Test.}
In our approach, we use Lowe's ratio test to estimate the weight for each correspondence.
We ablate this component by instead using the feature distance between the corresponding points to rank the correspondences.
Since this ablation can be applied to training or inference independently, we apply it to training, inference, or both.
Our results indicate that the ratio test is critical to our model's performance, as ablating it results in the largest performance drop.
This supports our initial claims about the utility of the ratio test as a strong heuristic for filtering correspondences.
It is worth noting that Lowe's ratio test~\cite{lowe2004distinctive} shows incredible efficacy in determining correspondence weights; a function often undertaken by far more complex models in recent work~\cite{choy2020deep,gojcic2020learning,ranftl2018deepfundamental,sarlin2020superglue}.
Our approach is able to perform well using such a simple filtering heuristic since it is also learning the features, not just matching them.
\vspace{0.2cm}
\lsparagraph{Randomized Subsets.}
In our model, we estimate $t$ transformations based on $t$ randomly sampled subsets. This is inspired by RANSAC~\cite{RANSAC} as it allows us to better handle outliers.
We ablate this module by estimating a single transformation based on all the correspondences.
Similar to the ratio test, this ablation can be applied to training or inference independently.
As shown in Table~\ref{tab:ablations}, ablating this component at test time results in a significant drop in performance.
Interestingly, we find that applying it during training and removing it during testing improves performance.
We posit that this ablation acts similarly to DropOut~\cite{srivastava2014dropout} which forces the model to predict using a subset of the features and is only applied during training. As a result, the model is forced to learn better features during training, while gaining the benefits of randomized optimization during inference.
\vspace{0.2cm}
\lsparagraph{Number of subsets. }
We find that the number of subsets chosen has a significant impact on both run-time and performance.
During training, we sample 10 subsets of 80 correspondences each. During testing, we sample 100 subsets of 80 correspondences each.
For this set of experiments, we used the same pretrained weights and only varied the number of subsets used. Each subset still contains 80 correspondences.
As shown in Table~\ref{tab:runtime}, using a larger number of subsets improves the performance while also increasing the run-time.
Additionally, we find that the performance gains saturate at 100 subsets.
\subsection{Point Cloud Generation}
\label{sec:method_pcgen}
Given an input RGB-D image, $I \in \mathbb{R}^{4 \times H \times W}$, we would like to generate a point cloud $\mathcal{P} \in \mathbb{R}^{(6 + F) \times N}$.
Each point $p \in \mathcal{P}$ is represented by a 3D coordinate $\mathbf{x}_p \in \mathbb{R}^{3}$, a color $\mathbf{c}_p \in \mathbb{R}^{3}$, and a feature vector $\mathbf{f}_p \in \mathbb{R}^{F}$.
We first use a feature encoder to extract a feature map using each image's RGB channels.
The extracted feature map has the same spatial resolution as the input image.
As a result, one can easily convert the extracted features and input RGB into a point cloud using the input depth and known camera intrinsic matrix.
However, given that current depth sensors do not predict depth for every pixel, we omit pixels with missing depth from our generated point cloud.
In order to avoid heterogeneous batches, we mark points with missing depths so that subsequent operations ignore them.
\subsection{Correspondence Estimation}
\label{sec:method_corr}
Given two feature point clouds\footnote{As noted in Sec~\ref{sec:method_pcgen}, point clouds will have different numbers of valid points based on the input depth. While our method deals with this by tracking those points and omitting them from subsequent operations, we assume all the points are valid in our model description to enhance clarity.},
$\mathcal{P}$, $\mathcal{Q} \in \mathbb{R}^{(6 + F) \times N}$,
we would like to find the correspondences between the point clouds.
Specifically, for each point $p \in \mathcal{P}$, we would like to find the point $q_p$ such that
\begin{equation}
q_{p} = \operatorname*{arg\,min}_{q\in{\mathcal{Q}}} D(\mathbf{f}_p, \mathbf{f}_q),
\end{equation}
where $D(p, q)$ is a distance-metric defined on the feature space.
In our experiments, we use cosine distance to determine the closest features.
We extract such correspondences for all the points in both $\mathcal{P}$ and $\mathcal{Q}$ since correspondence is not guaranteed to be bijective.
As a result, we have two sets of correspondences, $\mathcal{C}_{\mathcal{P} \to \mathcal{Q}}$ and $\mathcal{C}_{\mathcal{Q} \to \mathcal{P}}$, where each set consists of $N$ pairs.
\paragraph{Ratio Test.}
Determining the quality of each correspondence is a challenge faced by any correspondence-based geometric fitting approach.
Extracting correspondences based on only the nearest neighbor will result in many false positives due to falsely matching repetitive structures or non-mutually visible portions of the images.
The standard approach is to estimate a weight for each correspondence that captures the quality of this correspondence.
Recent approaches estimate a correspondence weight for each match using self-attention graph networks~\cite{sarlin2020superglue}, PointNets~\cite{gojcic2020learning,yi2018learning}, and CNNs~\cite{choy2020deep}.
In our experiments, we found that a much simpler approach based on Lowe's ratio test~\cite{lowe2004distinctive} works well without requiring any additional parameters in the network.
The basic intuition behind the ratio test is that unique correspondences are more likely to be true matches.
As a result, the quality of correspondence $(p, q_p)$ is not simply determined by $D(p, q_p)$, but rather by the ratio $r$, which is defined as
\begin{equation}
r = \frac{D(p, q_{p, 1})}{D(p, q_{p, 2})},
\end{equation}
where $q_{p, i}$ is the $i$-th nearest neighbor to point $p$ in $\mathcal{Q}$.
Since $0 \leq r \leq 1$ and a lower ratio indicates a better match, we weigh each correspondence by $w = 1 - r$.
In the traditional formulation, one would define a distance ratio threshold for inlier vs outliers.
Instead, we rank the correspondences using their ratio weight and pick the top $k$ correspondences.
We pick an equal number of correspondences from $\mathcal{C}_{\mathcal{P} \to \mathcal{Q}}$ and $\mathcal{C}_{\mathcal{Q} \to \mathcal{P}}$.
Additionally, we keep the weights for each correspondence to use in the geometric fitting step.
Hence, we end up with a correspondence set $\mathcal{M} = \{(p, q, w)_i: 0 \leq i < k \}$ where $k{=}400$.
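The correspondence extraction and ratio-test weighting described above can be sketched as follows; features are assumed to be L2-normalized so that cosine distance reduces to one minus a dot product, and the brute-force search is for illustration only.
\begin{verbatim}
import numpy as np

def ratio_test_matches(Fp, Fq, k=400):
    """Fp, Fq: NxF L2-normalized features of P and Q."""
    dist = 1.0 - Fp @ Fq.T                 # cosine distance, N x N
    nn = np.argsort(dist, axis=1)[:, :2]   # two nearest neighbours in Q
    rows = np.arange(len(Fp))
    r = dist[rows, nn[:, 0]] / (dist[rows, nn[:, 1]] + 1e-8)  # ratio
    w = 1.0 - r                            # correspondence weight
    top = np.argsort(-w)[:k]               # keep k most unique matches
    return top, nn[top, 0], w[top]         # ids in P, ids in Q, weights
\end{verbatim}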
\subsection{Geometric Fitting}
\label{sec:method_fitting}
Given a set of correspondences $\mathcal{M}$, we would like to find the transformation, $\mathcal{T^{*}} \in \text{SE(3)}$ that would minimize the error between the correspondences
\begin{equation}
\mathcal{T^{*}} = \argmin_{\mathcal{T} \in~\text{SE}(3)} E(\mathcal{M}, \mathcal{T})
\label{eq:w_proc}
\end{equation}
where the error $E(\mathcal{M}, \mathcal{T})$ is defined as:
\begin{equation}
E(\mathcal{M}, \mathcal{T}) = |\mathcal{M}|^{-1} \sum_{(p, q, w) \in \mathcal{M}} w~(\mathbf{x}_p - \mathcal{T}(\mathbf{x}_q))^2
\label{eq:corr_err}
\end{equation}
This can be framed as a weighted Procrustes problem and solved using a weighted variant of Kabsch's algorithm~\cite{kabsch1976solution}.
While the original Procrustes problem minimizes the distance between a set of unweighted correspondences~\cite{gower1975generalized}, Choy \etal~\cite{choy2020deep} have shown that one can integrate weights into this optimization.
This is done by calculating the covariance matrix between the centered and weighted point clouds, followed by calculating the SVD on the covariance matrix.
For more details, see~\cite{choy2020deep, kabsch1976solution}.
Integrating weights into the optimization is important for two reasons.
First, it allows us to build robust estimators that can weigh correspondences based on our confidence in their uniqueness.
More importantly, it makes the optimization differentiable with respect to the weights, allowing us to backpropagate the losses back to the encoder for feature learning.
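For reference, a minimal numpy sketch of the weighted Kabsch solver follows; it uses the standard centered weighted covariance and SVD construction, and is not taken from any particular implementation in~\cite{choy2020deep}.
\begin{verbatim}
import numpy as np

def weighted_kabsch(X, Y, w):
    """Find R, t minimizing sum_i w_i ||R x_i + t - y_i||^2.

    X, Y: Nx3 corresponding points; w: N non-negative weights."""
    w = w / w.sum()
    mx, my = w @ X, w @ Y                  # weighted centroids
    Xc, Yc = X - mx, Y - my
    H = (Xc * w[:, None]).T @ Yc           # 3x3 weighted covariance
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # no reflections
    R = Vt.T @ S @ U.T
    t = my - R @ mx
    return R, t
\end{verbatim}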
\paragraph{Randomized Optimization. }
While this approach is capable of integrating the weights into the optimization, it can still be sensitive to outliers with non-zero weights.
We take inspiration from RANSAC and use random sampling to mitigate the problem of outliers.
More specifically, we sample $t$ subsets of $\mathcal{M}$, and use Equation~\ref{eq:w_proc} to find $t$ candidate transformations.
We then choose the candidate that minimizes the weighted error on the full correspondence set.
Since the $t$ optimizations on the correspondence subsets are all independent, we are able to run them in parallel to make the optimization more efficient.
We deviate from classic RANSAC pipelines in that we choose the transformation that minimizes a weighted error, instead of maximizing inlier count, to avoid having to define an arbitrary inlier threshold.
It is worth noting that the model can be trained and tested with a different number of random subsets.
In our experiments, we train the model with 10 randomly sampled subsets of 80 correspondences each.
At test time, we use 100 subsets with 20 correspondences each.
We evaluate the impact of those choices on performance and run time in \S~\ref{sec:exp_ablations}.
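A sketch of this randomized fitting, building on the \texttt{weighted\_kabsch} helper above, is given below; the default subset counts follow the test-time values stated in the text.
\begin{verbatim}
import numpy as np

def randomized_fit(X, Y, w, num_subsets=100, subset_size=20, rng=None):
    """Fit on random subsets, keep the candidate with the lowest
    weighted error on the full correspondence set."""
    rng = rng or np.random.default_rng()
    best, best_err = None, np.inf
    for _ in range(num_subsets):
        idx = rng.choice(len(X), size=subset_size, replace=False)
        R, t = weighted_kabsch(X[idx], Y[idx], w[idx])
        err = (w * np.linalg.norm(X @ R.T + t - Y, axis=1)**2).sum()
        if err < best_err:
            best, best_err = (R, t), err
    return best
\end{verbatim}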
\subsection{Point Cloud Rendering}
\label{sec:method_render}
The final step of our approach is to render the RGB-D images from the aligned point clouds. This provides us with our primary learning signals: photometric and depth consistency.
The core idea is that if the camera locations are estimated correctly, the point cloud renders will be consistent with the input images.
We use differentiable rendering to project the colored point clouds onto an image using the estimated camera pose and known intrinsics. Our pipeline is very similar to Wiles \etal~\cite{wiles2019synsin}.
A naive approach of simply rendering both point clouds suffers from a degenerate solution: the rendering will be accurate even if the alignment is incorrect.
An extreme case of this would be to always estimate cameras looking in opposite directions.
In that case, each image is projected in a different location of space and the output will be consistent without alignment.
We address this issue by forcing the network to render each view using only the other image's point cloud, as shown in Fig.~\ref{fig:mask_render}.
This forces the network to learn consistent alignment as a correct reconstruction requires the mutually visible parts of the scene to be correctly aligned.
This introduces another challenge: \textit{how do we handle the non-mutually visible surfaces of the scene?}
While view synthesis approaches hallucinate the missing regions to output photo-realistic imagery~\cite{wiles2019synsin}, earlier work in differentiable SfM observed that the gradients coming from the hallucinated region negatively impact the learning~\cite{zhou2017unsupervised}.
Our solution to this problem is to only evaluate the loss for valid pixels.
Valid pixels, as shown in Fig~\ref{fig:mask_render}, are ones for which rendering was possible; \ie, there were points along the viewing ray for those pixels.
This is important in this work since invalid pixels can occur due to two reasons: non-mutually visible surfaces and pixels with missing depth.
While the first reason is due to our approach, the second reason for invalid pixels is governed by current depth sensors which do not produce a depth value for each pixel.
In our experiments, we found that pose networks are very susceptible to the issues above; the network starts estimating very large poses within the first hundred iterations and never recovers.
We also experimented with rendering the features and decoding them, similar to~\cite{wiles2019synsin}, but found that this resulted in worse alignment performance.
\input{sections/f4_model_visual}
\subsection{Losses}
We use three consistency losses to train our model: photometric, depth, and correspondence.
The photometric and depth losses are the L1 losses applied between the rendered and input RGB-D frames.
Those losses are masked to only apply to valid pixels, as discussed in \S~\ref{sec:method_render}.
Additionally, we use the correspondence error calculated in Eq.~\ref{eq:corr_err} as our correspondence loss.
We weight the photometric and depth losses with a weight of 1, while the correspondence loss receives a weight of 0.1.
\section{Introduction}
\label{sec:intro}
Image fusion is frequently involved in modern \mbox{image-guided} medical interventions, typically augmenting \mbox{intra-operatively} acquired \mbox{2-D}\xspace \mbox{X-ray}\xspace images with \mbox{pre-operative} \mbox{3-D}\xspace CT or MRI images. Accurate alignment between the fused images is essential for clinical applications and can be achieved using \mbox{2-D/3-D}\xspace rigid registration, which aims at finding the pose of a \mbox{3-D}\xspace volume in order to align its projections to \mbox{2-D}\xspace \mbox{X-ray}\xspace images. Most commonly, \mbox{intensity-based} methods are employed~\cite{markelj2010review}, where a similarity measure between the \mbox{2-D}\xspace image and the projection of the \mbox{3-D}\xspace image is defined and optimized as e.\,g.\xspace~described by Kubias~et~al.\xspace~\cite{IMG08}. Despite decades of investigation, \mbox{2-D/3-D}\xspace registration remains challenging. The difference in dimensionality of the input images results in an \mbox{ill-posed} problem. In addition, content mismatch between the \mbox{pre-operative} and \mbox{intra-operative} images, poor image quality and a limited field of view challenge the robustness and accuracy of registration algorithms. Miao~et~al.\xspace~\cite{DFM17} propose a \mbox{learning-based} registration method that is built upon the intensity-based approach. While they achieve a high robustness, registration accuracy remains challenging.
The intuition of \mbox{2-D/3-D}\xspace rigid registration is to globally minimize the visual misalignment between \mbox{2-D}\xspace images and the projections of the \mbox{3-D}\xspace image.
Based on this intuition, Schmid and Ch{\^e}nes~\cite{segm2014Schmid} decompose the target structure into local shape patches and model image forces using Hooke's law of a spring from image block matching.
Wang~et~al.\xspace~\cite{DRR17} propose a \mbox{point-to-plane} \mbox{correspondence (PPC)} model for \mbox{2-D/3-D}\xspace registration, which linearly constrains the global differential motion update using local correspondences. Registration is performed by iteratively establishing correspondences and performing the motion estimation.
During the intervention, devices and implants, as well as locally similar anatomies, can introduce outliers for local correspondence search (see Fig. \ref{fig:sample:td} and \ref{fig:sample:NGC}). Weighting of local correspondences, in order to emphasize the correct correspondences, directly influences the accuracy and robustness of the registration.
An iterative reweighted scheme is suggested by Wang~et~al.\xspace~\cite{DRR17} to enhance the robustness against outliers. However, this scheme only works when outliers are a minority of the measurements.
Recently, Qi~et~al.\xspace~\cite{PND17} proposed the PointNet, a type of neural network directly processing point clouds. PointNet is capable of internally extracting global features of the cloud and relating them to local features of individual points. Thus, it is well suited for correspondence weighting in \mbox{2-D/3-D}\xspace registration.
Yi~et~al.\xspace~\cite{LFG18} propose to learn the selection of correct correspondences for wide-baseline stereo images. As a basis, candidates are established, e.\,g.\xspace~using SIFT features. Ground truth labels are generated by exploiting the epipolar constraint. This way, an outlier label is generated. Additionally, a regression loss is introduced, which is based on the error in the estimation of a known essential matrix between two images. Both losses are combined during training. While including the regression
loss improves the results, the classification loss is shown to be important to find highly accurate correspondences.
The performance of iterative correspondence-based registration algorithms
(e.\,g.\xspace~\cite{segm2014Schmid}, \cite{DRR17})
can be improved by learning a weighting strategy for the correspondences.
However, automatic labeling of the correspondences is not practical for iterative methods as even correct correspondences may have large errors in the first few iterations.
This means that labeling cannot be performed by applying a simple rule such as a threshold based on the ground truth position of a point.
In this paper, we propose a method to learn an optimal weighting strategy for the local correspondences in rigid \mbox{2-D/3-D}\xspace registration directly with the criterion of minimizing the registration error, without the need for per-correspondence ground truth annotations.
We treat the correspondences as a point cloud with extended \mbox{per-point} features and use a modified PointNet architecture to learn global interdependencies of local correspondences according to the PPC registration metric.
We choose to use the PPC model as it was shown to enable a high registration accuracy as well as robustness~\cite{DRR17}. Furthermore, it is differentiable and therefore lends itself to the use in our training objective function.
To train the network, we propose a novel training objective function, which is composed of the motion estimation according to the PPC model and the registration error computation steps. It allows us to learn a correspondence weighting strategy by minimizing the registration error.
We demonstrate the effectiveness of the learned weighting strategy by evaluating our method on \mbox{single-vertebra} registration, where we show a highly improved robustness compared to the original PPC registration.
\section{Registration and Learned Correspondence Weighting}
In the following section, we begin with an overview of the registration method using the PPC model.
Then, further details on motion estimation (see Sec.~\ref{sec:motionEstimation}) and registration error computation (see Sec.~\ref{sec:errorComputation}) are given, as these two steps play a crucial role in our objective function.
The architecture of our network is discussed in Sec.~\ref{sec:architecture}, followed by the introduction of our objective function in Sec.~\ref{sec:objective}.
At last, important details regarding the training procedure are given in Sec.~\ref{sec:training}.
\subsection{Registration Using Point-to-Plane Correspondences}
Wang~et~al.\xspace~\cite{DRR17} measure the local misalignment between the projection of a \mbox{3-D}\xspace volume $V$ and the \mbox{2-D}\xspace fluoroscopic (live \mbox{X-ray}\xspace) image $I^\mathrm{FL}$ and compute a motion which compensates for this misalignment.
Surface points are extracted from $V$ using the \mbox{3-D}\xspace Canny detector~\cite{CAE86}.
A set of contour generator points~\cite{hartley03contGen} $\set{\mathbf{w}_i}$, i.\,e.\xspace~surface points $\mathbf{w}_i\in\mathbb{R}^3$ which correspond to contours in the projection of $V$, is projected onto the image as $\set{\mathbf{p}_i}$, i.\,e.\xspace~a set of points $\mathbf{p}_i\in\mathbb{R}^3$ on the image plane.
Additionally, gradient projection images of $V$ are generated and used to perform local patch matching to find correspondences for $\mathbf{p}_i$ in $I^\mathrm{FL}$.
Assuming that the motion along contours is not detectable, the patch matching is only performed in the direction orthogonal to the contour.
Therefore, neither the displacement of $\mathbf{w}_i$ along the contour nor the displacement along the viewing direction is known. These unknown directions span the plane $\mathrm{\Pi}_i$ with the normal $\mathbf{n}_i\in\mathbb{R}^3$. After the registration, a point $\mathbf{w}_i$ should be located on the plane $\mathrm{\Pi}_i$.
To minimize the point-to-plane distances $\distance{\mathbf{w}_i}{\mathrm{\Pi}_i}$, a linear equation is defined for each correspondence under the small angle assumption.
The resulting system of equations is solved for the differential motion $\delta\mathbf{v}\in\mathbb{R}^6$, which contains both rotational components in the axis-angle representation $\delta{\boldsymbol{\omega}}\in\mathbb{R}^3$ and translational components $\delta{\boldsymbol{\nu}}\in\mathbb{R}^3$, i.\,e.\xspace~$\delta\mathbf{v}=(\delta{\boldsymbol{\omega}}^\intercal, \delta{\boldsymbol{\nu}}^\intercal)^\intercal$.
The correspondence search and motion estimation steps are applied iteratively over multiple resolution levels.
To increase the robustness of the motion estimation, the maximum correntropy criterion for regression (MCCR)~\cite{LMC15} is used to solve the system of linear equations~\cite{DRR17}.
The motion estimation is extended to coordinate systems related to the camera coordinates by a rigid transformation by Schaffert~et~al.\xspace~\cite{MVD17}.
The PPC model sets up a linear relationship between the local point-to-plane correspondences and the differential transformation, i.\,e.\xspace a linear misalignment metric based on the found correspondences.
In this paper, we introduce a learning method for correspondence weighting, where the PPC metric is used during training to optimize the weighting strategy for the used correspondences with respect to the registration error.
\subsection{Weighted Motion Estimation}
\label{sec:motionEstimation}
Motion estimation according to the PPC model is performed by solving a linear system of equations defined by $\matr{A}\in\mathbb{R}^{N\times6}$ and $\vect{b}\in\mathbb{R}^N$, where each equation corresponds to one point-to-plane correspondence and $N$ is the number of used correspondences.
We perform the motion estimation in the camera coordinate system with the origin shifted to the centroid of $\set{\mathbf{w}_i}$. This allows us to use the regularized least-squares estimation
\begin{equation}
\delta\mathbf{v} = \underset{\delta\mathbf{v}'}{\arg\min}\left(\dfrac{1}{N}\norm{\matr{A}_s\delta\mathbf{v}'-\vect{b}_s}_2^2 + \lambda \norm{\delta\mathbf{v}'}_2^2\right)
\label{eq:LS}
\end{equation}
in order to improve the robustness of the estimation.
Here, $\matr{A}_s=\matr{S}\cdot\matr{A}$, $\vect{b}_s=\matr{S}\cdot\vect{b}$ and $\lambda$ is the regularizer weight. The diagonal matrix $\matr{S}=\text{diag}(\vect{s})$ contains weights $\vect{s}\in\mathbb{R}^N$ for all correspondences. As Eq.~\eqref{eq:LS} is differentiable w.\,r.\,t.\xspace $\delta\mathbf{v}'$, we obtain
\begin{equation}
\delta\mathbf{v}=\regPPC{\matr{A}, \vect{b},\mathbf{s}}=(\matr{A}_s^\intercal\matr{A}_s+N\cdot\lambda \matr{I})^{-1}\matr{A}_s^\intercal\vect{b}_s \enspace ,
\label{eq:LSClosedForm}
\end{equation}
where $\matr{I}\in\mathbb{R}^{6\times6}$ is the identity matrix.
After each iteration, the registration $\matr{T}\in\mathbb{R}^{4\times4}$ is updated as
\begin{equation}
\matr{T} =
\begin{pmatrix}
\cos(\alpha)\matr{I}+(1-\cos(\alpha))\vect{r}\vect{r}^\intercal+\sin(\alpha)[\vect{r}]_\times & \delta{\boldsymbol{\nu}} \\ 0 & 1
\end{pmatrix}
\cdot \hat{\matr{T}}
\enspace ,
\label{eq:currReg}
\end{equation}
where $\alpha = \norm{\delta{\boldsymbol{\omega}}}$, $\vect{r} = \delta{\boldsymbol{\omega}}/\norm{\delta{\boldsymbol{\omega}}}$, $[\vect{r}]_\times\in\mathbb{R}^{3\times3}$ is a skew matrix which expresses the cross product with $\vect{r}$ as a matrix multiplication and $\hat{\matr{T}}\in\mathbb{R}^{4\times4}$ is the registration after the previous iteration~\cite{DRR17}.
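A compact numpy sketch of Eq.~\eqref{eq:LSClosedForm} and the pose update of Eq.~\eqref{eq:currReg} is given below; the matrix $\matr{A}$, the vector $\vect{b}$ and the weights are assumed to come from the correspondence search, and the regularizer weight is a placeholder.
\begin{verbatim}
import numpy as np

def estimate_motion(A, b, s, lam=1e-2):
    """Regularized weighted least squares: A is Nx6, b is N,
    s are the per-correspondence weights."""
    As, bs = s[:, None] * A, s * b
    N = len(b)
    return np.linalg.solve(As.T @ As + N * lam * np.eye(6), As.T @ bs)

def update_pose(dv, T_prev):
    """Apply the differential motion (axis-angle + translation)."""
    dw, dnu = dv[:3], dv[3:]
    alpha = np.linalg.norm(dw)
    T = np.eye(4)
    if alpha > 1e-12:
        r = dw / alpha
        K = np.array([[0, -r[2], r[1]],
                      [r[2], 0, -r[0]],
                      [-r[1], r[0], 0]])
        T[:3, :3] = (np.cos(alpha) * np.eye(3)
                     + (1 - np.cos(alpha)) * np.outer(r, r)
                     + np.sin(alpha) * K)
    T[:3, 3] = dnu
    return T @ T_prev
\end{verbatim}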
\subsection{Registration Error Computation}
\label{sec:errorComputation}
In the training phase, the registration error is measured and minimized via our training objective function.
Different error metrics, such as the mean target registration error (mTRE) or the mean re-projection distance (mRPD), can be used. For more details on these metrics, see Sec.~\ref{sec:evalMetrics}.
In this work, we choose the projection error (PE)~\cite{GBD14}, as it directly corresponds to the visible misalignment in the images and therefore roughly correlates with the difficulty of finding correspondences by patch matching in the next iteration of the registration method. The PE is computed as
\begin{equation}
e = \errorFunc{\matr{T}, \matr{T}^{\mathrm{GT}}}=\dfrac{1}{M} \sum_{j=1}^M \norm{\projection{\matr{T}}{\mathbf{q}_j}-\projection{\matr{T}^{\mathrm{GT}}}{\mathbf{q}_j}} \enspace ,
\label{eq:error}
\end{equation}
where a set of $M$ target points $\set{\mathbf{q}_j}$ is used and $j$ is the point index. $\projection{\matr{T}}{\cdot}$ is the projection onto the image plane under the currently estimated registration and $\projection{\matr{T}^{\mathrm{GT}}}{\cdot}$ the projection under the \mbox{ground-truth} registration matrix $\matr{T}^{\mathrm{GT}}\in\mathbb{R}^{4\times4}$. Corners of the bounding box of the point set $\set{\mathbf{w}_i}$ are used as $\set{\mathbf{q}_j}$.
\begin{figure}[t]
\centering
\includegraphics[width=1.0\textwidth]{architecture.pdf}
\caption{Modified PointNet~\cite{PND17} architecture used for correspondence weighting. Rectangles with dashed outlines indicate feature vectors (orange for local features, i.\,e.\xspace containing information from single correspondences, and red for global features, i.\,e.\xspace containing information from the entire set of correspondences). Sets of feature vectors (one feature vector per correspondence) are depicted as a column of feature vectors (three correspondences shown here).
MLP denotes a multi-layer perceptron, which is applied to each feature vector individually.
}
\label{fig:architecture}
\end{figure}
\subsection{Network Architecture}
\label{sec:architecture}
We want to weight individual correspondences based on their geometrical properties as well as the image similarity, taking into account the global properties of the correspondence set.
For every correspondence, we define the features
\begin{equation}
\vect{f}_i =
\begin{pmatrix}
\mathbf{w}_i^\intercal & \vect{n}_i^\intercal & \distance{\mathbf{w}_i}{\mathrm{\Pi}_i} & \text{NGC}_i
\end{pmatrix}^\intercal \enspace ,
\label{eq:features}
\end{equation}
where $\text{NGC}_i$ denotes the normalized gradient correlation for the correspondences, which is obtained in the patch matching step.
The goal is to learn the mapping from a set of feature vectors $\set{\vect{f}_i}$ representing all correspondences to the weight vector $\vect{s}$ containing weights for all correspondences, i.\,e.\xspace~the mapping
\begin{equation}
\text{M}_{{\boldsymbol{\theta}}}: \set{\vect{f}_i} \mapsto \vect{s} \enspace ,
\end{equation}
where $\text{M}_{{\boldsymbol{\theta}}}$ is our network, and ${\boldsymbol{\theta}}$ the network parameters.
To learn directly on correspondence sets, we use the PointNet~\cite{PND17} architecture and modify it to fit our task (see Fig.~\ref{fig:architecture}).
The basic idea behind PointNet is to process points individually and obtain global information by combining the points in a symmetric way, i.\,e.\xspace~independent of the order in which the points appear in the input~\cite{PND17}.
In the simplest variant, the PointNet consists of a multi-layer perceptron (MLP) which is applied to each point, transforming the respective $\vect{f}_i$ into a \mbox{higher-dimensional} feature space and thereby obtaining a local point descriptor.
To describe the global properties of the point set, the resulting local descriptors are combined by max pooling over all points, i.\,e.\xspace~for each feature, the maximum activation over all points in the set is retained.
To obtain per-point outputs, the resulting global descriptor is concatenated to the local descriptors of each point.
The resulting descriptors, containing global as well as local information, are further processed for each point independently by a second MLP.
For our network, we choose MLPs with layer sizes of $8\times64\times128$ and $256\times64\times1$, which are smaller than in the original network~\cite{PND17}.
We enforce the output to be in the range of $[0;1]$ by using a softsign activation function~\cite{elliott1993better} in the last layer of the second MLP and modify it to re-scale the output range from $(-1;1)$ to $(0;1)$.
Our modified softsign activation function $f(\cdot)$ is defined as
\begin{equation}
f(x) = \left(\dfrac{x}{1+|x|}+1\right)\cdot0.5 \enspace ,
\end{equation}
where $x$ is the state of the neuron.
Additionally, we introduce a global trainable weighting factor which is applied to all correspondences.
This allows for an automatic adjustment of the strength of the regularization in the motion estimation step.
Note that the network can process correspondence sets of variable size, so that no fixed number of correspondences is needed and all extracted correspondences can be utilized.
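As an illustration, the following PyTorch sketch (our own; the layer sizes, the modified softsign and the global weighting factor follow the text, all other details are assumptions) implements such a correspondence-weighting network for a variable-size set of 8-dimensional feature vectors:
\begin{verbatim}
import torch
import torch.nn as nn

class CorrespondenceWeighting(nn.Module):
    """Sketch of the modified PointNet: per-correspondence MLP
    (8->64->128), max pooling to a global descriptor, concatenation,
    second MLP (256->64->1), shifted softsign output in (0,1)."""
    def __init__(self):
        super().__init__()
        self.mlp1 = nn.Sequential(nn.Linear(8, 64), nn.ReLU(),
                                  nn.Linear(64, 128), nn.ReLU())
        self.mlp2 = nn.Sequential(nn.Linear(256, 64), nn.ReLU(),
                                  nn.Linear(64, 1))
        # global trainable weighting factor applied to all correspondences
        self.global_scale = nn.Parameter(torch.ones(1))

    def forward(self, f):               # f: (N, 8) feature vectors
        local = self.mlp1(f)            # (N, 128) local descriptors
        glob = local.max(dim=0).values  # (128,) global descriptor (max pool)
        x = torch.cat([local, glob.expand(len(f), -1)], dim=1)  # (N, 256)
        x = self.mlp2(x).squeeze(-1)    # (N,) raw scores
        s = (x / (1 + x.abs()) + 1) * 0.5  # modified softsign -> (0,1)
        return self.global_scale * s    # weights for all correspondences
\end{verbatim}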
\subsection{Training Objective}
\label{sec:objective}
We now combine the motion estimation, PE computation and the modified PointNet to obtain the training objective function as
\begin{equation}
\boldsymbol{\theta}=\underset{\mathbf{{\boldsymbol{\theta}}'}}{\arg\min}\dfrac{1}{K}\sum_{k=1}^K\errorFunc{\regPPC{\matr{A}_k, \vect{b}_k, \text{M}_{{\boldsymbol{\theta}}'}(\set{\mathbf{f}_i}_k)},\matr{T}^{\mathrm{GT}}_k} \enspace ,
\label{eq:Objective}
\end{equation}
where $k$ is the training sample index and $K$ the overall number of samples. Equation \eqref{eq:LSClosedForm} is differentiable with respect to $\vect{s}$, Eq.~\eqref{eq:currReg} with respect to $\delta\mathbf{v}$ and Eq.~\eqref{eq:error} with respect to $\matr{T}$.
Therefore, gradient-based optimization can be performed on Eq.~\eqref{eq:Objective}.
Note that using Eq.~\eqref{eq:Objective}, we learn directly with the objective to minimize the registration error and no per-correspondence \mbox{ground-truth} weights are needed.
Instead, the PPC metric is used to implicitly assess the quality of the correspondences during the back-propagation step of the training and the weights are adjusted accordingly. In other words, the optimization of the weights is driven by the PPC metric.
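To make this end-to-end differentiability concrete, the following schematic PyTorch sketch shows one training step; the weighted, regularized least-squares solve below is our own stand-in for the closed form of Eq.~\eqref{eq:LSClosedForm}, and \texttt{apply\_motion} and \texttt{projection\_error\_torch} are hypothetical placeholders for Eqs.~\eqref{eq:currReg} and \eqref{eq:error}.
\begin{verbatim}
import torch

def weighted_motion(A, b, s, lam=0.01):
    """Schematic closed-form solve: dv minimizes
    sum_i s_i * (a_i . dv - b_i)^2 + lam * ||dv||^2.
    The solve is differentiable w.r.t. the weights s, so the PE loss
    can backpropagate through it into the network parameters."""
    W = torch.diag(s)
    H = A.T @ W @ A + lam * torch.eye(A.shape[1])
    return torch.linalg.solve(H, A.T @ W @ b)

# Schematic training step:
#   s    = net(f)                        # correspondence weights
#   dv   = weighted_motion(A, b, s)      # motion estimation
#   T    = apply_motion(T_init, dv)      # Eq. (eq:currReg), not shown
#   loss = projection_error_torch(P, T, T_gt, q)   # Eq. (eq:error)
#   loss.backward(); optimizer.step()
\end{verbatim}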
\subsection{Training Procedure}
\label{sec:training}
To obtain training data, a set of volumes $\setV$ is used, each with one or more \mbox{2-D}\xspace images $\set{I^\mathrm{FL}}$ and a known $\matr{T}^{\mathrm{GT}}$ (see Sec.~\ref{sec:data}). For each pair of \mbox{2-D}\xspace and \mbox{3-D}\xspace images, 60 random initial transformations with a uniformly distributed mTRE are generated~\cite{SEM05}. For details on the computation of the mTRE and start positions, see Sec.~\ref{sec:evalMetrics}.
Estimation of correspondences at training time is computationally expensive.
Instead, the correspondence search is performed once and the precomputed correspondences are used during training.
Training is performed for one iteration of the registration method, and start positions with a small initial error are assumed to be representative of subsequent registration iterations at test time.
For training, the number of correspondences is fixed to 1024 to enable efficient batch-wise computations. The subset of used correspondences is selected randomly for every training step. Data augmentation is performed on the correspondence sets by applying translations, \mbox{in-plane} rotations and horizontal flipping, i.\,e.\xspace reflection over the plane spanned by the vertical axis of the \mbox{2-D}\xspace image and the principal direction. For each resolution level, a separate model is trained.
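A minimal sketch of the subset selection and the horizontal flipping is given below, assuming the feature layout of Eq.~\eqref{eq:features} with the first component of $\mathbf{w}_i$ and $\vect{n}_i$ along the horizontal image axis (translations and in-plane rotations are omitted):
\begin{verbatim}
import numpy as np

def sample_and_flip(F, n=1024, p_flip=0.5, rng=np.random.default_rng()):
    """Randomly select n correspondences (rows of F, laid out as in
    Eq. (eq:features): w in columns 0:3, n in columns 3:6, then the
    point-to-plane distance and NGC) and, with probability p_flip,
    mirror them horizontally. We assume the first coordinate is the
    horizontal axis; this is an illustrative simplification."""
    idx = rng.choice(len(F), size=n, replace=len(F) < n)
    F = F[idx].copy()
    if rng.random() < p_flip:
        F[:, 0] *= -1.0   # flip x-component of surface point w
        F[:, 3] *= -1.0   # flip x-component of normal n
    return F
\end{verbatim}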
\section{Experiments and Results}
\subsection{Data}
\label{sec:data}
\begin{figure}[t]
\centering
\hfill
\subfloat{%
\includegraphics[width=0.32\textwidth]{S1Marked.jpg}
}
\hfill
\subfloat{%
\includegraphics[width=0.32\textwidth]{S2Marked.jpg}
}
\hfill
\subfloat{%
\includegraphics[width=0.32\textwidth]{S3Marked.jpg}
}
\hfill
\\
\hfill
\subfloat{%
\includegraphics[width=0.32\textwidth]{S13D.jpg}
}
\hfill
\subfloat{%
\includegraphics[width=0.32\textwidth]{S23D.jpg}
}
\hfill
\subfloat{%
\includegraphics[width=0.32\textwidth]{S33D.jpg}
}
\hfill
\caption{Examples of \mbox{2-D}\xspace images used as $I^\mathrm{FL}$ (top row) and the corresponding \mbox{3-D}\xspace images used as $V$ (bottom row) in the registration evaluation. Evaluated vertebrae are marked by a yellow cross in the top row.}
\label{fig:data}
\end{figure}
We perform experiments for \mbox{single-view} registration of individual vertebrae.
Note that \mbox{single-vertebra} registration is challenging due to the small size of the target structure and the presence of neighboring vertebrae, which makes achieving a high robustness particularly difficult.
We use clinical C-arm CT acquisitions from the thoracic and pelvic regions of the spine for training and evaluation. Each acquisition consists of a sequence of \mbox{2-D}\xspace images acquired with a rotating C-arm. These images are used to reconstruct the \mbox{3-D}\xspace volume. To enable reconstruction, the C-arm geometry has to be calibrated with a high accuracy (the accuracy is $\leq 0.16$\,mm for the projection error at the iso-center in our case). We register the acquired \mbox{2-D}\xspace images to the respective reconstructed volume and therefore the ground truth registration is known within the accuracy of the calibration.
Vertebrae are defined by an axis-aligned volume of interest (VOI) containing the whole vertebra. Only surface points inside the VOI are used for registration. We register the projection images (resolution of $616\times480$ pixels, pixel size of 0.62\,mm) to the reconstructed volumes (containing around 390 slices with a slice resolution of $512\times512$ voxels and a voxel size of 0.49\,mm).
To simulate realistic conditions, we add Poisson noise to all \mbox{2-D}\xspace images and rescale the intensities to better match fluoroscopic images.
The training set consists of \mbox{19 acquisitions} with a total of \mbox{77 vertebrae}.
For each vertebra, \mbox{8 different} \mbox{2-D}\xspace images are used. An additional validation set of \mbox{23 vertebrae} from \mbox{6 acquisitions} is used to monitor the training process.
The registration is performed on a test set of 6 acquisitions. For each acquisition, \mbox{2 vertebrae} are evaluated and registration is performed independently for both the \mbox{anterior-posterior} and the lateral views.
Each set contains data from different patients, i.\,e.\xspace~no patient appears in two different sets. The sets were defined so that all sets are representative of the overall quality of the available images, i.\,e.\xspace~contain both pelvic and thoracic vertebrae, as well as images with more or less clearly visible vertebrae.
Examples of images used in the test set are shown in Fig.~\ref{fig:data}.
\subsection{Compared Methods}
\label{sec:comparedMethods}
We evaluate the performance of the registration using the PPC model in combination with the learned correspondence weighting strategy (PPC-L), which was trained using our proposed metric-driven learning method.
To show the effectiveness of the correspondence weighting, we compare PPC-L to the original PPC method. The compared methods differ in the computation of the correspondence weights $\vect{s}$ and the regularizer weight $\lambda$. For \mbox{PPC-L}\xspace, the correspondence weights $\vect{s}^\mathrm{L} = \text{M}_{{\boldsymbol{\theta}}}(\set{\mathbf{f}_i})$ and $\lambda = 0.01$ are used. For PPC, we set $\lambda = 0$ and the used correspondence weights $\vect{s}^\mathrm{PPC}$ are the $\text{NGC}_i$ values of the found correspondences, where any value below $0.1$ is set to $0$, i.\,e.\xspace~the correspondence is rejected. Additionally, the MCCR is used in the PPC method only. The minimum resolution level has a scaling of 0.25 and the highest a scaling of 1.0. For the PPC method, registration is first performed on the lowest resolution level without allowing motion in depth, as this was shown to increase the robustness of the method. To differentiate between the effect of the correspondence weighting and the regularized motion estimation, we also consider registration using regularized motion estimation. We use a variant where the global weighting factor, which is applied to all points, is matched to the regularizer weight automatically by using our objective function (\mbox{PPC-R}\xspace). For the different resolution levels, we obtained a data weight in the range of $[2.0 ; 2.1]$. Therefore, we use $\lambda = 0.01$ and $\vect{s}^\mathrm{R} = 2.0 \cdot \vect{s}^\mathrm{PPC}$. Additionally, we empirically set the correspondence weight to $\vect{s}^\mathrm{RM} = 0.25 \cdot \vect{s}^\mathrm{PPC}$, which increases the robustness of the registration while still allowing for a reasonable amount of motion (\mbox{PPC-RM}\xspace).
\subsection{Evaluation Metrics}
\label{sec:evalMetrics}
To evaluate the registration, we follow the standardized evaluation methodology~\cite{SEM05,ROC13}.
The following metrics are defined by van de Kraats~et~al.\xspace~\cite{SEM05}:
\begin{itemize}
\item{\it Mean Target Registration Error:}
The mTRE is defined as the mean distance of target points under $\matr{T}^{\mathrm{GT}}$ and the estimated registration $\matr{T}^\mathrm{est}\in\mathbb{R}^{4\times4}$.
\item{\it Mean Re-Projection Distance (mRPD):}
The mRPD is defined as the mean distance of target points under $\matr{T}^{\mathrm{GT}}$ and the \mbox{re-projection} rays of the points as projected under $\matr{T}^\mathrm{est}$.
\item{\it Success Rate (SR):}
The SR is the percentage of registrations with a registration error below a given threshold. As we are concerned with \mbox{single-view} registration, we define the success criterion as an mRPD $\leq$ 2\,mm.
\item{\it Capture Range (CR):}
The CR is defined as the maximum initial mTRE for which at least 95\% of registrations are successful.
\end{itemize}
Additionally, we compute the gross success rate (GSR)~\cite{DFM17} as well as a gross capture range (GCR) with a success criterion of an mRPD $\leq$ 10\,mm in order to further assess the robustness of the methods in case of a low accuracy.
We define target points as uniformly distributed points inside the VOI of the registered vertebra.
For the evaluation, we generate 600 random start transformations for each vertebra in a range \mbox{of 0\,mm - 30\,mm} initial mTRE using the methodology described by van de Kraats~et~al.\xspace~\cite{SEM05}.
We evaluate the accuracy using the mRPD and the robustness using the SR, CR, GSR and GCR.
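For illustration, the mTRE and SR can be computed as in the following NumPy sketch (helper names are our own; the mRPD additionally requires the camera geometry to form the re-projection rays and is therefore omitted):
\begin{verbatim}
import numpy as np

def mTRE(T_est, T_gt, pts):
    """Mean 3-D distance of target points under the estimated vs. the
    ground-truth transform (homogeneous 4x4 matrices, pts of shape (N,3))."""
    ph = np.hstack([pts, np.ones((len(pts), 1))])
    d = (T_est @ ph.T - T_gt @ ph.T)[:3].T
    return np.linalg.norm(d, axis=1).mean()

def success_rate(errors_mm, threshold=2.0):
    """SR: percentage of registrations with error below the threshold."""
    e = np.asarray(errors_mm)
    return 100.0 * (e < threshold).mean()
\end{verbatim}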
\subsection{Results and Discussion}
\subsubsection{Accuracy and Robustness}
The evaluation results for the compared methods are summarized in Tab. \ref{tab:resBase}. We observe that \mbox{PPC-L}\xspace achieves the best SR of 94.3\,\% and CR of 13\,mm. Compared to PPC (SR of 79.3\,\% and CR of 3\,mm), \mbox{PPC-R}\xspace also achieves a higher SR of 88.1\,\% and CR of 6\,mm. For the regularized motion estimation, the accuracy decreases for increasing regularizer influence (0.79$\pm${0.22}\,mm for \mbox{PPC-R}\xspace and 1.18$\pm${0.42}\,mm for \mbox{PPC-RM}\xspace), compared to PPC (0.75$\pm$0.21\,mm) and \mbox{PPC-L}\xspace (0.74$\pm$0.26\,mm). A sample registration result using \mbox{PPC-L}\xspace is shown in Fig.~\ref{fig:sample:res}.
\begin{table}[b]
\centering
\caption{Evaluation results for the compared methods. The mRPD is computed for the 2\,mm success criterion and is shown as mean\,$\pm$\,standard deviation.}
\begin{tabular}{l|c|c|c|c|c}
\hline
Method & mRPD {[}mm{]} & SR {[}\%{]} & CR {[}mm{]} & GSR {[}\%{]} & GCR {[}mm{]}\\
\hline
PPC & 0.75$\pm$0.21 & 79.3 & 3 & 81.8 & 3 \\
\mbox{PPC-R}\xspace & {0.79}$\pm${0.22} & 88.1 & 6 & 90.7 & 6 \\
\mbox{PPC-RM}\xspace & {1.18}$\pm${0.42} & 59.6 & 4 & 95.1 & 20 \\
\bf{\mbox{PPC-L}\xspace} & {{0.74}}$\pm$0.26 & 94.3 & 13 & 96.3 & 22 \\
\hline
\end{tabular}
\label{tab:resBase}
\end{table}
\begin{figure}[t]
\centering
\subfloat[\label{fig:sample:td}]{%
\includegraphics[width=0.23\textwidth]{Im2D-3.jpg}
}
\hfill
\subfloat[\label{fig:sample:NGC}]{%
\includegraphics[width=0.23\textwidth]{ImNGC-3.jpg}
}
\hfill
\subfloat[\label{fig:sample:W}]{%
\includegraphics[width=0.23\textwidth]{ImW-3.jpg}
}
\hfill
\subfloat[\label{fig:sample:res}]{%
\includegraphics[width=0.23\textwidth]{ImRes-3.jpg}
}
\caption{Registration example: (a) shows $I^\mathrm{FL}$ with one marked vertebra to register. Red dots depict initially extracted (b,\,c) and final aligned (d) contour points. Green lines depict the same randomly selected subset of correspondences, whose intensities are determined by $\text{NGC}_i$ (b) and learned weights (c). Final \mbox{PPC-L}\xspace registration result overlaid in yellow (d). Also see video in the supplementary material.
}
\label{fig:sample}
\end{figure}
For strongly regularized motion estimation, we observe a large difference between the GSR and the SR. While for \mbox{PPC-R}\xspace, the difference is relatively small \mbox{(88.1\% vs. 90.7\%)}, it is very high for \mbox{PPC-RM}\xspace. Here a GSR of 95.1\,\% is achieved, while the SR is 59.6\,\%. This indicates that while the method is robust, the accuracy is low. Compared to the CR, the GCR is increased for \mbox{PPC-L}\xspace (22\,mm vs. 13\,mm) and especially for \mbox{PPC-RM}\xspace (20\,mm vs. 4\,mm).
Overall, this shows that while some inaccurate registrations are present in \mbox{PPC-L}\xspace, they are very common for \mbox{PPC-RM}\xspace.
\subsubsection{Single Iteration Evaluation}
\begin{figure}[b]
\centering
\subfloat[PPC]{%
\includegraphics[width=0.32\textwidth]{BaseSingleIter.pdf}
}
\hfill
\subfloat[\mbox{PPC-R}\xspace]{%
\includegraphics[width=0.32\textwidth]{RegSingleIter.pdf}
}
\hfill
\subfloat[\mbox{PPC-L}\xspace]{%
\includegraphics[width=0.32\textwidth]{LearnedSingleIter.pdf}
}
\caption{Histograms showing initial and result projection error (PE) in pixels for a single iteration of registration on lowest resolution level (on validation set, 1024 correspondences per case). Motion estimation was performed using least squares for all methods. For PPC, no motion in depth is estimated (see Sec.~\ref{sec:comparedMethods}).}
\label{fig:singleIter}
\end{figure}
To better understand the effect of the correspondence weighting and regularization, we investigate the registration results after one iteration on the lowest resolution level. In Fig. \ref{fig:singleIter}, the PE in pixels (computed using $\set{\mathbf{q}_j}$ as target points) is shown for all cases in the validation set. As in training, 1024 correspondences are used per case for all methods. We observe that for PPC, the error has a high spread, where for some cases, it is decreased considerably, while for other cases, it is increased. For \mbox{PPC-R}\xspace, most cases are below the initial error. However, the error is decreased only marginally, as the regularization prevents large motions. For \mbox{PPC-L}\xspace, we observe that the error is drastically decreased for most cases. This shows that \mbox{PPC-L}\xspace is able to estimate motion efficiently. An example for correspondence weighting in \mbox{PPC-L}\xspace is shown in Fig.~\ref{fig:sample:W}, where we observe a set of consistent correspondences with high weights, while the remaining correspondences have low weights.
\subsubsection{Method Combinations}
\begin{figure}[t]
\centering
\subfloat[\mbox{PPC-RM+}\xspace]{%
\includegraphics[width=0.49\textwidth]{RegMPAfterFirst.pdf}
}
\hfill
\subfloat[\mbox{PPC-L+}\xspace]{%
\includegraphics[width=0.49\textwidth]{LearnedPAfterFirst.pdf}
}
\caption{Box plots for the distribution of the resulting mRPD on the lowest resolution level for successful registrations, for different initial mTRE intervals.}
\label{fig:boxFirstResLevel}
\end{figure}
We observed that while the \mbox{PPC-RM}\xspace method has a high robustness (GCR and GSR), it leads to a low accuracy. For \mbox{PPC-L}\xspace, we observed an increased GCR compared to the CR. In both cases, this demonstrates that registrations are present with an mRPD between 2\,mm and 10\,mm. As PPC works reliably for small initial errors, we combine these methods with PPC by performing PPC on the highest resolution level instead of the respective method. We denote the resulting methods as \mbox{PPC-RM+}\xspace and \mbox{PPC-L+}\xspace. We observe that \mbox{PPC-RM+}\xspace achieves an accuracy of 0.74$\pm$0.18\,mm, an SR of 94.6\,\% and a CR of 18\,mm, while \mbox{PPC-L+}\xspace achieves an accuracy of 0.74$\pm$0.19\,mm, an SR of 96.1\,\% and a CR of 19\,mm. While the results are similar, we note that for \mbox{PPC-RM+}\xspace a manual weight selection is necessary. Further investigations are needed to clarify why PPC performs better than \mbox{PPC-L}\xspace on the highest resolution level. However, this result may also demonstrate the strength of the MCCR for cases where the majority of correspondences are correct.
We evaluate the convergence behavior of \mbox{PPC-L+}\xspace and \mbox{PPC-RM+}\xspace by only considering cases which were successful. For these cases, we investigate the error distribution after the first resolution level. The results are shown in Fig. \ref{fig:boxFirstResLevel}. We observe that for \mbox{PPC-L+}\xspace, a mRPD of below 10\,mm is achieved for all cases, while for \mbox{PPC-RM+}\xspace, higher misalignment of around 20\,mm mRPD is present. The result for \mbox{PPC-L+}\xspace is achieved after an average of 7.6 iterations, while 11.8 iterations were performed on average for \mbox{PPC-RM+}\xspace using the stop criterion defined in~\cite{DRR17}. In combination, this further substantiates our findings from the single iteration evaluation and shows the efficiency of \mbox{PPC-L}\xspace and its potential for reducing the computational cost.
\section{Conclusion}
For \mbox{2-D/3-D}\xspace registration, we propose a method to learn the weighting of the local correspondences directly from the global criterion to minimize the registration error. We achieve this by incorporating the motion estimation and error computation steps into our training objective function. A modified PointNet network is trained to weight correspondences based on their geometrical properties and image similarity.
A large improvement in the registration robustness is demonstrated when using the \mbox{learning-based} correspondence weighting,
while maintaining the high accuracy. Although a high robustness can also be achieved by regularized motion estimation, registration using learned correspondence weighting has the following advantages: it is more efficient, does not need manual parameter tuning and achieves a high accuracy.
One direction of future work is to further improve the weighting strategy, e.\,g.\xspace~by including more information into the decision process and optimizing the objective function for robustness and/or accuracy depending on the stage of the registration, such as the current resolution level.
By regarding the motion estimation as part of the network and not the objective function, our model can also be understood in the framework of precision learning~\cite{PRT17} as a regression model for the motion, where we learn only the unknown component (weighting of correspondences), while employing prior knowledge to the known component (motion estimation).
Following the framework of precision learning, replacing further steps of the registration framework with learned counterparts can be investigated. One candidate is the correspondence estimation, as it is challenging to design an optimal correspondence estimation method by hand.
{\bf Disclaimer:} The concept and software presented in this paper are based on research and are not commercially available. For regulatory reasons, their future availability cannot be guaranteed.
\bibliographystyle{splncs04}
\section{Introduction}
\label{sec:intro}
This paper stems from our research of finite simple connected tetravalent graphs that admit a group of automorphisms acting transitively on vertices and edges but not on the arcs of
the graph; such groups of automorphisms are said to be {\em half-arc-transitive}. Observe that the full automorphism group $\mathrm{Aut}(\Gamma)$ of such a graph $\Gamma$
is then either arc-transitive or itself half-arc-transitive. In the latter case the graph $\Gamma$ is called {\em half-arc-transitive}.
Tetravalent graphs admitting a half-arc-transitive group of automorphisms
are surprisingly rich combinatorial objects with connections to several other areas of mathematics (see, for example,
\cite{ConPotSpa15, MarNedMaps,MarNed3, MarPis99, MarSpa08, PotSpiVerBook,genlost}). One of the most fruitful tools for analysing the structure of a tetravalent graph $\Gamma$
admitting a half-arc-transitive group $G$ is to study a certain $G$-invariant decomposition of the edge set $E(\Gamma)$ of $\Gamma$ into the
{\em $G$-alternating cycles} of some even length $2r$; the parameter $r$ is then called the {\em $G$-radius} and denoted $\mathop{{\rm rad}}_G(\Gamma)$
(see Section~\ref{sec:HAT} for more detailed definitions). Since $G$ is edge-transitive and the decomposition into $G$-alternating cycles
is $G$-invariant, any two intersecting $G$-alternating cycles meet in the same number of vertices; this number is then called the {\em attachment number}
and denoted $\mathop{{\rm att}}_G(\Gamma)$. When $G=\mathrm{Aut}(\Gamma)$
the subscript $G$ will be omitted in the above notation.
It is well known and easy to see that $\mathop{{\rm att}}_G(\Gamma)$ divides $2\mathop{{\rm rad}}_G(\Gamma)$.
However, for all known tetravalent half-arc-transitive graphs the attachment number in fact divides the radius.
This brings us to the following question that we would like to propose and address in this paper:
\begin{question}
\label{que:divides}
Is it true that the attachment number $\mathop{{\rm att}}(\Gamma)$ of an arbitrary tetravalent half-arc-transitive graph $\Gamma$ divides the radius $\mathop{{\rm rad}}(\Gamma)$?
\end{question}
By checking the complete list of all tetravalent half-arc-transitive graphs on up to $1000$ vertices (see~\cite{PotSpiVer15}), we see that the answer to the above question is affirmative for the graphs in that range. Further, as was proved in \cite[Theorem~1.2]{MarWal00}, the question has an affirmative answer in the case $\mathop{{\rm att}}(\Gamma) = 2$. In Section~\ref{sec:AT}, we generalise this result by proving the following theorem.
\begin{theorem}
\label{the:AT}
Let $\Gamma$ be a tetravalent half-arc-transitive graph. If its radius $\mathop{{\rm rad}}(\Gamma)$ is odd, then $\mathop{{\rm att}}(\Gamma)$ divides $\mathop{{\rm rad}}(\Gamma)$. Consequently, if $\mathop{{\rm att}}(\Gamma)$ is not divisible by $4$, then $\mathop{{\rm att}}(\Gamma)$ divides $\mathop{{\rm rad}}(\Gamma)$.
\end{theorem}
As a consequence of our second main result (Theorem~\ref{the:main}) we see that, in contrast to Theorem~\ref{the:AT}, there exist infinitely many arc-transitive tetravalent graphs $\Gamma$ admitting a half-arc-transitive group $G$ with $\mathop{{\rm rad}}_G(\Gamma) = 3$ and $\mathop{{\rm att}}_G(\Gamma) = 2$. In fact, in Section~\ref{sec:HAT}, we characterise these graphs completely and prove the following theorem (see Section~\ref{subsec:Dart} for the definition of the dart graph).
\begin{theorem}
\label{the:main}
Let $\Gamma$ be a connected tetravalent graph. Then $\Gamma$ is $G$-half-arc-transitive for some $G \leq \mathrm{Aut}(\Gamma)$ with $\mathop{{\rm rad}}_G(\Gamma) = 3$ and $\mathop{{\rm att}}_G(\Gamma) = 2$ if and only if $\Gamma$ is the dart graph of some $2$-arc-transitive cubic graph.
\end{theorem}
The third main result of this paper, stemming from our analysis of the situation described by Theorem~\ref{the:main}, reveals a surprising connection to the theory of covering projections of graphs. This theory has become one of the central tools in the study of symmetries of graphs. A particularly
thrilling development started with the seminal work of Malni\v{c}, Nedela and \v{S}koviera \cite{MalNedSko} who analysed the condition under which a given automorphism group of the base graph lifts along the covering projection. Recently, the question of determining the structure of the lifted group received a lot of attention (see \cite{FenKutMalMar,MaPo16,MaPo??}).
To be more precise,
let $\wp \colon \tilde{\Gamma} \to \Gamma$ be a covering projection of connected graphs and let $\mathrm{CT}(\wp)$ be the corresponding group of covering transformations (see \cite{MalNedSko}, for example, for the definitions pertaining to the theory of graph covers).
Furthermore, let $G \leq \mathrm{Aut}(\Gamma)$ be a subgroup that lifts along $\wp$. Then the lifted group $\tilde{G}$ is an extension of $\mathrm{CT}(\wp)$ by $G$.
If this extension is split then the covering projection $\wp$ is called {\em $G$-split}. The most natural way in which this can occur is that there exists a complement $\bar{G}$ of $\mathrm{CT}(\wp)$ in
$\tilde{G}$ and a $\bar{G}$-invariant subset $S$ of $V(\tilde{\Gamma})$, that intersects each fibre of $\wp$ in exactly one vertex. In such a case we say that $S$ is a {\em section} for $\bar{G}$ and that $\bar{G}$ is a {\em sectional} complement of $\mathrm{CT}(\wp)$. Split covering projections without any sectional complement are called {\em non-sectional}. These turn out to be rather elusive and hard to analyse. To the best of our knowledge, the only known infinite family of non-sectional split covers was presented in~\cite[Section 4]{FenKutMalMar}. This family of non-sectional split covers involves cubic arc-transitive graphs of extremely large order.
In this paper we show that each connected tetravalent graph $\Gamma$ admitting a half-arc-transitive group $G$ of automorphisms such that $\mathop{{\rm att}}_G(\Gamma) = 2$ and $\mathop{{\rm rad}}_G(\Gamma) = 3$
is a $2$-fold cover of the line graph of a cubic $2$-arc-transitive graph, and that in the case when $\Gamma$ is not bipartite the corresponding covering projection is non-sectional.
This thus provides a new and rather simple infinite family of the somewhat mysterious case of non-sectional split covering projections (see Section~\ref{sec:ourcover} for more details).
\section{Half-arc-transitive group actions on graphs}
\label{sec:HAT}
In the next two paragraphs we briefly review some concepts and results pertaining to half-arc-transitive group actions on tetravalent graphs that we shall need in the remainder of this section. For more details see~\cite{Mar98}, where most of these notions were introduced.
A tetravalent graph $\Gamma$ admitting a {\em half-arc-transitive} (that is vertex- and edge- but not arc-transitive) group of automorphisms $G$ is said to be {\em $G$-half-arc-transitive}. The action of $G$ induces two paired orientations of the edges of $\Gamma$ and for any one of them each vertex of $\Gamma$ is the head of two and the tail of the other two of its incident edges. (The fact that the edge $uv$ is oriented from $u$ to $v$ will be denoted by $u \to v$.) A cycle of $\Gamma$ for which every two consecutive edges either have a common head or common tail with respect to this orientation is called a {\em $G$-alternating cycle}. Since the action of $G$ is vertex- and edge-transitive all of the $G$-alternating cycles have the same even length $2\mathop{{\rm rad}}_G(\Gamma)$ and any two non-disjoint $G$-alternating cycles intersect in the same number $\mathop{{\rm att}}_G(\Gamma)$ of vertices. These intersections, called the {\em $G$-attachment sets}, form an imprimitivity block system for the group $G$. The numbers $\mathop{{\rm rad}}_G(\Gamma)$ and $\mathop{{\rm att}}_G(\Gamma)$ are called the {\em $G$-radius} and {\em $G$-attachment number} of $\Gamma$, respectively. If $G = \mathrm{Aut}(\Gamma)$ we suppress the prefix and subscript $\mathrm{Aut}(\Gamma)$ in all of the above definitions.
It was shown in~\cite[Proposition~2.4]{Mar98} that a tetravalent $G$-half-arc-transitive graph $\Gamma$ has at least three $G$-alternating cycles unless $\mathop{{\rm att}}_G(\Gamma) = 2\mathop{{\rm rad}}_G(\Gamma)$ in which case $\Gamma$ is isomorphic to a particular Cayley graph of a cyclic group (and is thus arc-transitive). Moreover, in the case that $\Gamma$ has at least three $G$-alternating cycles, $\mathop{{\rm att}}_G(\Gamma) \leq \mathop{{\rm rad}}_G(\Gamma)$ holds and $\mathop{{\rm att}}_G(\Gamma)$ divides $2\mathop{{\rm rad}}_G(\Gamma)$. In addition, the restriction of the action of $G$ to any $G$-alternating cycle is isomorphic to the dihedral group of order $2\mathop{{\rm rad}}_G(\Gamma)$ (or to the Klein 4-group in the case of $\mathop{{\rm rad}}_G(\Gamma) = 2$) with the cyclic subgroup of order $\mathop{{\rm rad}}_G(\Gamma)$ being the subgroup generated by a two-step rotation of the $G$-alternating cycle in question. In addition, if $C = (v_0, v_1, \ldots , v_{2r-1})$ is a $G$-alternating cycle of $\Gamma$ with $r = \mathop{{\rm rad}}_G(\Gamma)$ and $C'$ is the other $G$-alternating cycle of $\Gamma$ containing $v_0$ then $C \cap C' = \{v_{i\ell} \colon 0 \leq i < a\}$ where $a = \mathop{{\rm att}}_G(\Gamma)$ and $\ell = 2r/a$ (see \cite[Proposition~2.6]{Mar98} and \cite[Proposition~3.4]{MarPra99}).
\medskip
As mentioned in the Introduction one of the goals of this paper is to characterize the tetravalent $G$-half-arc-transitive graphs $\Gamma$ with $\mathop{{\rm rad}}_G(\Gamma) = 3$ and $\mathop{{\rm att}}_G(\Gamma) = 2$. The bijective correspondence between such graphs and $2$-arc-transitive cubic graphs (see Theorem~\ref{the:main}) is given via two pairwise inverse constructions: the {\em graph of alternating cycles} construction and the {\em dart graph} construction. We first define the former.
\subsection{The graph of alternating cycles}
\label{subsec:Alt}
Let $\Gamma$ be a tetravalent $G$-half-arc-transitive graph for some $G \leq \mathrm{Aut}(\Gamma)$. The {\em graph of $G$-alternating cycles} $\mathrm{Alt}_G(\Gamma)$ is the graph whose vertex set consists of all $G$-alternating cycles of $\Gamma$ with two of them being adjacent whenever they have at least one vertex in common. We record some basic properties of the graph $\mathrm{Alt}_G(\Gamma)$.
\begin{proposition}
\label{pro:gr_alt_cyc}
Let $\Gamma$ be a connected tetravalent $G$-half-arc-transitive graph for some $G \leq \mathrm{Aut}(\Gamma)$ having at least three $G$-alternating cycles. Then the graph $\mathrm{Alt}_G(\Gamma)$ is a regular graph of valence $2\mathop{{\rm rad}}_G(\Gamma)/\mathop{{\rm att}}_G(\Gamma)$ and the induced action of $G$ on $\mathrm{Alt}_G(\Gamma)$ is vertex- and edge-transitive. Moreover, this action is arc-transitive if and only if $\mathop{{\rm att}}_G(\Gamma)$ does not divide $\mathop{{\rm rad}}_{G}(\Gamma)$.
\end{proposition}
\begin{proof}
To simplify notation, denote $r = \mathop{{\rm rad}}_G(\Gamma)$ and $a = \mathop{{\rm att}}_G(\Gamma)$. Since each vertex of $\Gamma$ lies on exactly two $G$-alternating cycles and the intersection of any two non-disjoint $G$-alternating cycles is of size $a$ it is clear that each $G$-alternating cycle is adjacent to $\ell = 2r/a$ other $G$-alternating cycles in $\mathrm{Alt}_G(\Gamma)$. Moreover, since $G$ acts edge-transitively on $\Gamma$ and each edge of $\Gamma$ is contained in a unique $G$-alternating cycle, the induced action of $G$ on $\mathrm{Alt}_G(\Gamma)$ is vertex-transitive. That this action is also edge-transitive follows from the fact that $G$ acts vertex-transitively on $\Gamma$ and that the edges of $\mathrm{Alt}_G(\Gamma)$ correspond to $G$-attachment sets of $\Gamma$.
For the rest of the proof fix one of the two paired orientations of $\Gamma$ given by the action of $G$, let $C = (v_0, v_1, \ldots , v_{2r-1})$ be a $G$-alternating cycle such that $v_0 \to v_1$ and let $C'$ be the other $G$-alternating cycle containing $v_0$, so that $C \cap C' = \{v_{i\ell}\colon 0 \leq i < a\}$. Since every other vertex of $C$ is the tail of the two edges of $C$ incident to it, the vertex $v_\ell$ is the tail of the two edges of $C$ incident to it if and only if $\ell$ is even (in which case each $v_{i\ell}$ has this property).
Now, if $\ell$ is odd, then each element of $G$, mapping $v_0$ to $v_\ell$ necessarily interchanges $C$ and $C'$, proving that in this case the induced action of $G$ on $\mathrm{Alt}_G(\Gamma)$ is in fact arc-transitive. We remark that this also follows from the fact, first observed by Tutte~\cite{Tutte66}, that a vertex- and edge-transitive group of automorphisms of a graph of odd valence is necessarily arc-transitive. To complete the proof we thus only need to show that the induced action of $G$ on $\mathrm{Alt}_G(\Gamma)$ is not arc-transitive when $\ell$ is even. Recall that in this case each vertex $v_{i\ell} \in C \cap C'$ is the tail of the two edges of $C$ incident to it. Therefore, since any element of $G$, mapping the pair $\{C, C'\}$ to itself of course preserves the intersection $C \cap C'$ it is clear that any such element fixes each of $C$ and $C'$ setwise, and so no element of $G$ can interchange $C$ and $C'$. This proves that the induced action of $G$ on $\mathrm{Alt}_G(\Gamma)$ is half-arc-transitive.
\end{proof}
\subsection{The dart graph and its relation to $\mathrm{Alt}_G(\Gamma)$}
\label{subsec:Dart}
The dart graph of a cubic graph was investigated in~\cite{HilWil12} (we remark that this construction can also be viewed as a special kind of the {\em arc graph} construction from~\cite{GR01book}). Of course the dart graph construction can be applied to arbitrary graphs but here, as in~\cite{HilWil12}, we are only interested in dart graphs of cubic graphs. We first recall the definition. Let $\Lambda$ be a cubic graph. Then its {\em dart graph} $\mathop{{\rm Dart}}(\Lambda)$ is the graph whose vertex set consists of all the arcs (called darts in~\cite{HilWil12}) of $\Lambda$ with $(u,v)$ adjacent to $(u', v')$ if and only if either $u' = v$ but $u \neq v'$, or $u = v'$ but $u' \neq v$. In other words, the edges of $\mathop{{\rm Dart}}(\Lambda)$ correspond to the $2$-arcs of $\Lambda$. Note that this enables a natural orientation of the edges of $\mathop{{\rm Dart}}(\Lambda)$ where the edge $(u,v)(v,w)$ is oriented from $(u,v)$ to $(v,w)$.
Clearly, $\mathrm{Aut}(\Lambda)$ can be viewed as a subgroup of $\mathrm{Aut}(\mathop{{\rm Dart}}(\Lambda))$ preserving the natural orientation. Furthermore, the permutation $\tau$ of $V(\mathop{{\rm Dart}}(\Lambda))$, exchanging each $(u,v)$ with $(v,u)$, is an orientation reversing automorphism of $\mathop{{\rm Dart}}(\Lambda)$.
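As a computational illustration, the following Python sketch (assuming the \texttt{networkx} package; \texttt{dart\_graph} is our own helper) builds $\mathop{{\rm Dart}}(\Lambda)$ for the Petersen graph, a cubic $2$-arc-transitive graph, and confirms that the resulting graph is tetravalent:
\begin{verbatim}
import networkx as nx

def dart_graph(cubic):
    """Dart graph of a cubic graph: vertices are darts (ordered pairs
    along edges); (u,v) ~ (u',v') iff u'=v and u != v', or u=v' and
    u' != v, i.e. edges correspond to the 2-arcs of the base graph."""
    darts = ([(u, v) for u, v in cubic.edges()]
             + [(v, u) for u, v in cubic.edges()])
    D = nx.Graph()
    D.add_nodes_from(darts)
    for (u, v) in darts:
        for w in cubic.neighbors(v):
            if w != u:          # (u,v)(v,w) realises the 2-arc (u,v,w)
                D.add_edge((u, v), (v, w))
    return D

L = nx.petersen_graph()                      # cubic, 2-arc-transitive
G = dart_graph(L)
assert all(d == 4 for _, d in G.degree())    # Dart(L) is tetravalent
print(G.number_of_nodes())                   # 30 darts = 2 * |E(L)|
\end{verbatim}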
\medskip
We now establish the correspondence between the $2$-arc-transitive cubic graphs and the tetravalent graphs admitting a half-arc-transitive group of automorphisms with the corresponding radius $3$ and attachment number $2$. We do this in two steps.
\begin{proposition}
\label{pro:Dart_to_Alt}
Let $\Lambda$ be a connected cubic graph admitting a $2$-arc-transitive group of automorphisms $G$ and let $\Gamma = \mathop{{\rm Dart}}(\Lambda)$. Then $\Gamma$ is a tetravalent $G$-half-arc-transitive graph such that $\mathop{{\rm rad}}_G(\Gamma) = 3$ and $\mathop{{\rm att}}_G(\Gamma) = 2$ with $\mathrm{Alt}_G(\Gamma) \cong \Lambda$. Moreover, the natural orientation of $\Gamma$, viewed as $\mathop{{\rm Dart}}(\Lambda)$, coincides with one of the two paired orientations induced by the action of $G$.
\end{proposition}
\begin{proof}
That the natural action of $G$ on $\Gamma$ is half-arc-transitive can easily be verified (see also~\cite{HilWil12}). Now, fix an edge $(u,v)(v,w)$ of $\Gamma$ and choose the $G$-induced orientation of $\Gamma$ in such a way that $(u,v) \to (v,w)$. Since $G$ is $2$-arc-transitive on $\Lambda$, the other edge of $\Gamma$, for which $(u,v)$ is its tail, is $(u,v)(v,w')$, where $w'$ is the remaining neighbour of $v$ in $\Lambda$ (other than $u$ and $w$). It is now clear that for each pair of adjacent vertices $(x,y)$ and $(y,z)$ of $\Gamma$ the corresponding edge is oriented from $(x,y)$ to $(y,z)$, and so the chosen $G$-induced orientation of $\Gamma$ is the natural orientation of $\mathop{{\rm Dart}}(\Lambda)$.
Finally, let $v$ be a vertex of $\Lambda$ and let $u,u',u''$ be its three neighbours. The $G$-alternating cycle of $\Gamma$ containing the edge $(u,v)(v,u')$ is then clearly $C_v = ((u,v),(v,u'),(u'',v),(v,u),(u',v),(v,u''))$, implying that $\mathop{{\rm rad}}_G(\Gamma) = 3$. This also shows that the $G$-alternating cycles of $\Gamma$ naturally correspond to vertices of $\Lambda$. Since the three $G$-alternating cycles of $\Gamma$ that have a nonempty intersection with $C_v$ are the ones corresponding to the vertices $u$, $u'$ and $u''$, this correspondence in fact shows that $\mathrm{Alt}_G(\Gamma)$ and $\Lambda$ are isomorphic and that $\mathop{{\rm att}}_G(\Gamma) = 2$.
\end{proof}
\begin{proposition}
\label{pro:Alt_to_Dart}
Let $\Gamma$ be a connected tetravalent $G$-half-arc-transitive graph for some $G \leq \mathrm{Aut}(\Gamma)$ with $\mathop{{\rm rad}}_G(\Gamma) = 3$ and $\mathop{{\rm att}}_G(\Gamma) = 2$, and let $\Lambda = \mathrm{Alt}_G(\Gamma)$. Then the group $G$ induces a $2$-arc-transitive action on $\Lambda$ and $\mathop{{\rm Dart}}(\Lambda) \cong \Gamma$. In fact, an isomorphism $\Psi\colon \mathop{{\rm Dart}}(\Lambda) \to \Gamma$ exists which maps the natural orientation of $\mathop{{\rm Dart}}(\Lambda)$ to a $G$-induced orientation of $\Gamma$.
\end{proposition}
\begin{proof}
By Proposition~\ref{pro:gr_alt_cyc} the graph $\Lambda$ is cubic and the induced action of $G$ on it is arc-transitive. Since $\mathop{{\rm rad}}_G(\Gamma) = 3$ and $\mathop{{\rm att}}_G(\Gamma) = 2$ it is easy to see that $\Gamma$ and $\mathop{{\rm Dart}}(\Lambda)$ are of the same order. Furthermore, let $C = (v_0, v_1, \ldots , v_5)$ be a $G$-alternating cycle of $\Gamma$ and $C', C'', C'''$ be the other $G$-alternating cycles of $\Gamma$ containing $v_0, v_1$ and $v_5$, respectively. Then $C \cap C' = \{v_0, v_3\}$, $C \cap C'' = \{v_1, v_4\}$ and $C \cap C''' = \{v_2, v_5\}$. It is thus clear that any element of $G$, fixing $v_0$ and mapping $v_1$ to $v_5$ (which exists since $C$ is $G$-alternating and $G$ is edge-transitive on $\Gamma$), fixes both $C$ and $C'$ but maps $C''$ to $C'''$. Therefore, the induced action of $G$ on $\Lambda$ is $2$-arc-transitive.
To complete the proof we exhibit a particular isomorphism $\Psi \colon \mathop{{\rm Dart}}(\Lambda) \to \Gamma$. Fix an orientation of the edges of $\Gamma$, induced by the action of $G$, and let $C$ and $C'$ be two $G$-alternating cycles of $\Gamma$ with a nonempty intersection. Then $(C,C')$ and $(C',C)$ are vertices of $\mathop{{\rm Dart}}(\Lambda)$. Let $C \cap C' = \{u,u'\}$ and observe that precisely one of $u$ and $u'$ is the head of both of the edges of $C$ incident to it. Without loss of generality assume it is $u$. Then of course $u'$ is the head of both of the edges of $C'$ incident to it. We then set $\Psi((C,C')) = u$ and $\Psi((C',C)) = u'$. Therefore, for non-disjoint $G$-alternating cycles $C$ and $C'$ of $\Gamma$ we map $(C,C')$ to the unique vertex in $C \cap C'$ which is the head of both of the edges of $C$ incident to it. Since each pair of non-disjoint $G$-alternating cycles meets in precisely two vertices and each vertex of $\Gamma$ belongs to two $G$-alternating cycles of $\Gamma$, this mapping is injective and thus also bijective. We now only need to show that it preserves adjacency and maps the natural orientation of $\mathop{{\rm Dart}}(\Lambda)$ to the chosen $G$-induced orientation of $\Gamma$. To this end let $C$, $C'$ and $C''$ be three $G$-alternating cycles of $\Gamma$ such that $C$ has a nonempty intersection with both $C'$ and $C''$. Recall that then the edge $(C',C)(C,C'')$ is oriented from $(C',C)$ to $(C,C'')$ in the natural orientation of $\mathop{{\rm Dart}}(\Lambda)$. Denote $C = (v_0,v_1, \ldots , v_5)$ and without loss of generality assume $C \cap C' = \{v_0,v_3\}$ and $C \cap C'' = \{v_1, v_4\}$.
Suppose first that $v_0 \to v_1$. Then $v_0$ is the head of both of the edges of $C'$ incident to it, and so $\Psi((C',C)) = v_0$. Similarly, $v_1$ is the head of both of the edges of $C$ incident to it, and so $\Psi((C,C'')) = v_1$. If on the other hand $v_1 \to v_0$, then $\Psi((C',C)) = v_3$ and $\Psi((C,C'')) = v_4$. In both cases, $\Psi$ maps the oriented edge $(C',C)(C,C'')$ to an oriented edge of $\Gamma$, proving that it is an isomorphism of graphs, mapping the natural orientation of $\mathop{{\rm Dart}}(\Lambda)$ to the chosen $G$-induced orientation of $\Gamma$.
\end{proof}
Theorem~\ref{the:main} now follows directly from Propositions~\ref{pro:Dart_to_Alt} and \ref{pro:Alt_to_Dart}.
\section{Partial answer to Question~\ref{que:divides} and proof of Theorem~\ref{the:AT}}
\label{sec:AT}
In this section we prove Theorem~\ref{the:AT} giving a partial answer to Question~\ref{que:divides}. We first prove an auxiliary result.
\begin{proposition}
\label{pro:transversal}
Let $\Gamma$ be a tetravalent $G$-half-arc-transitive graph with $\mathop{{\rm att}}_G(\Gamma)$ even. Then for each vertex $v$ of $\Gamma$ and the two $G$-alternating cycles $C$ and $C'$, containing $v$, the antipodal vertex of $v$ on $C$ coincides with the antipodal vertex of $v$ on $C'$. Moreover, the involution $\tau$ interchanging each pair of antipodal vertices on all $G$-alternating cycles of $\Gamma$ is an automorphism of $\Gamma$ centralising $G$.
\end{proposition}
\begin{proof}
Denote $r = \mathop{{\rm rad}}_G(\Gamma)$ and $a = \mathop{{\rm att}}_G(\Gamma)$. Let $v$ be a vertex of $\Gamma$ and let $C$ and $C'$ be the two $G$-alternating cycles of $\Gamma$ containing $v$. Denote $C = (v_0, v_1, \ldots , v_{2r-1})$ with $v = v_0$. Recall that then $C \cap C' = \{v_{i\ell}\colon 0 \leq i < a\}$, where $\ell = 2r/a$. Since $a$ is even $v_r \in C \cap C'$. Now, take any element $g \in G_v$ interchanging $v_1$ with $v_{2r-1}$ as well as the other two neighbours of $v$ (which are of course neighbours of $v$ on $C'$). Then $g$ reflects both $C$ and $C'$ with respect to $v$. Since $v_r$ is antipodal to $v$ on $C$, it must be fixed by $g$, but since $v_r$ is also contained in $C'$, this implies that it is in fact also the antipodal vertex of $v$ on $C'$. This shows that for each $G$-alternating cycle $C$ and each vertex $v$ of $C$ the vertex $v$ and its antipodal counterpart on $C$ both belong to the same pair of $G$-alternating cycles (this implies that the $G$-transversals, as they were defined in~\cite{Mar98}, are of length $2$) and are also antipodal on the other $G$-alternating cycle containing them.
It is now clear that $\tau$ is a well defined involution on the vertex set of $\Gamma$. Since the antipodal vertex of a neighbour $v_1$ of $v = v_0$ on $C$ is the neighbour $v_{r+1}$ of the antipodal vertex $v_r$, it is clear that $\tau$ is in fact an automorphism of $\Gamma$. Since any element of $G$ maps $G$-alternating cycles to $G$-alternating cycles, it is clear that $\tau$ centralises $G$.
\end{proof}
We are now ready to prove Theorem~\ref{the:AT}. Let $\Gamma$ be a tetravalent half-arc-transitive graph. Denote $r = \mathop{{\rm rad}}(\Gamma)$ and $a = \mathop{{\rm att}}(\Gamma)$, and assume $r$ is odd. Recall that $a$ divides $2r$. We thus only need to prove that $a$ is odd. Suppose to the contrary that $a$ is even. Then the graph $\Gamma$ admits the automorphism $\tau$ from Proposition~\ref{pro:transversal}. Now, fix one of the two paired orientations of the edges induced by the action of $\mathrm{Aut}(\Gamma)$ and let $C = (v_0, v_1, \ldots , v_{2r-1})$ be an alternating cycle of $\Gamma$ with $v_0$ being the tail of the edge $v_0 v_1$. Since $v_0^\tau = v_r$ and $v_1^\tau = v_{r+1}$ it follows that $v_r$ is the tail of the edge $v_rv_{r+1}$. But since $r$ is odd this contradicts the fact that every other vertex of $C$ is the tail of the two edges of $C$ incident to it. Thus $a$ is odd, as claimed.
To prove the second part of the theorem assume that $a$ is not divisible by $4$. If $r$ is even then the fact that $a$ divides $2r$ implies that $a$ divides $r$ as well. If however $r$ is odd, we can apply the first part of the theorem. This completes the proof.
\section{An infinite family of non-sectional split covers}
\label{sec:ourcover}
As announced in the introduction, tetravalent $G$-half-arc-transitive graphs $\Gamma$ with $\mathop{{\rm rad}}_G(\Gamma) =3$ and $\mathop{{\rm att}}_G(\Gamma)=2$
yield surprising examples of the elusive non-sectional split covers. In this section, we present this connection in some detail.
\begin{theorem}
\label{the:cover}
Let $\Gamma$ be a connected non-bipartite $G$-half-arc-transitive graph of order greater than $12$
with $\mathop{{\rm rad}}_G(\Gamma) =3$ and $\mathop{{\rm att}}_G(\Gamma)=2$. Then there exists a $2$-fold covering projection $\wp \colon \Gamma \to \Gamma'$
and an arc-transitive group $H\le \mathrm{Aut}(\Gamma')$ which lifts along $\wp$ in such a way that $\Gamma$ is a non-sectional $H$-split cover of $\Gamma'$.
\end{theorem}
\begin{proof}
Since $\mathop{{\rm att}}_G(\Gamma)=2$, each $G$-attachment set consists of a pair of antipodal vertices on a $G$-alternating cycle of $\Gamma$.
Let $\mathcal{B}$ be the set of all $G$-attachment sets in $\Gamma$.
By Proposition~\ref{pro:transversal}, there exists an automorphism $\tau$ of $\Gamma$ centralising $G$,
which interchanges the two vertices in each element of $\mathcal{B}$. Let $\tilde{G} = \langle G, \tau\rangle$ and note that
$\tilde{G}$ acts transitively on the arcs of $\Gamma$. Since $\tau$ is an involution centralising $G$
not contained in $G$, we see that $\tilde{G} = G \times \langle \tau \rangle$.
Let $\Gamma'$ be the quotient graph with respect to the group $\langle \tau \rangle$, that is, the graph whose vertices are the orbits of $\langle \tau \rangle$
and with two such orbits adjacent whenever they are joined by an edge in $\Gamma$. Since $\tilde{G}$ is arc-transitive and $\langle \tau\rangle$ is normal in $\tilde{G}$, each $\langle \tau \rangle$-orbit is
an independent set. Moreover, if two $\langle \tau \rangle$-orbits $B$ and $C$ are adjacent in $\Gamma'$, then the induced subgraph $\Gamma[B\cup C]$
is clearly vertex- and arc-transitive and is thus either $K_{2,2}$ or $2K_2$. In the former case, it is easy to see that $\Gamma$ is isomorphic to
the lexicographic product of a cycle with the edge-less graph on two vertices. Since $\mathop{{\rm rad}}_G(\Gamma) = 3$ and the orbits of $\langle \tau\rangle$ coincide with the elements of $\mathcal{B}$, this implies that $\Gamma$
has only $6$ vertices, contradicting our assumption on the order of $\Gamma$. This contradiction implies that
$\Gamma[B\cup C] \cong 2K_2$ for any pair of adjacent $\langle \tau \rangle$-orbits $B$ and $C$, and hence the quotient projection
$\wp \colon \Gamma \to \Gamma'$ is a $2$-fold covering projection with $\langle \tau \rangle$ being its group of covering transformations.
Since $\tau$ normalises $G$, the group $\tilde{G}$ projects along $\wp$ and the quotient group
$H = \tilde{G}/ \langle \tau \rangle$ acts faithfully as an arc-transitive group of automorphisms on $\Gamma'$.
In particular, since the group of covering transformations $\langle \tau \rangle$ has a complement $G$ in $\tilde{G}$, the covering projection $\wp$ is
$H$-split.
By \cite[Proposition 3.3]{FenKutMalMar}, if $\wp$ had a sectional complement with respect to $H$, then $\Gamma$ would be a canonical double cover of $\Gamma'$, contradicting the assumption that $\Gamma$ is not bipartite.
\end{proof}
{\sc Remark.} In \cite[Proposition~9]{HilWil12} it was shown that a cubic graph $\Lambda$ is bipartite if and only if $\mathop{{\rm Dart}}(\Lambda)$ is bipartite. Since there exist infinitely many connected non-bipartite cubic $2$-arc-transitive graphs, Theorem~\ref{the:main} thus implies
that there are indeed infinitely many connected non-bipartite $G$-half-arc-transitive graphs $\Gamma$ with $\mathop{{\rm rad}}_G(\Gamma) =3$ and $\mathop{{\rm att}}_G(\Gamma)=2$.
In view of Theorem~\ref{the:cover}, these yield infinitely many non-sectional split covers, as announced in the introduction. Furthermore, note that
the $G$-alternating $6$-cycles in the graph $\Gamma$ appearing in the proof of the above theorem
project by $\wp$ to cycles of length $3$, implying that $\Gamma'$ is a tetravalent arc-transitive graph of girth $3$. Since
it is assumed that the order of $\Gamma$ is larger than $12$ (and thus the order of $\Gamma'$ is larger than $6$), we may now use
\cite[Theorem 5.1]{girth4} to conclude that $\Gamma'$ is isomorphic to the line graph of a $2$-arc-transitive cubic graph.
\bigskip
\noindent
{\bf Acknowledgment.} The first author was supported in part by Slovenian Research Agency, program P1-0294. The second author was supported in part by Slovenian Research Agency, program P1-0285 and projects N1-0038, J1-6720 and J1-7051.
\section{Introduction}
In the zero temperature limit, quantum fluids behave at the macroscopic scale as a single coherent quantum state, the superfluid \cite{DonnellyLivreVortices}. Compared to classical fluids, the quantum coherence of superfluids creates a strong additional constraint on the velocity field, namely to be irrotational. Rotational motion can only appear when the macroscopic coherence of the wave function is broken by topological defects called quantum vortices. In that case, the circulation of the velocity around the quantum vortex has a fixed value ($\kappa \simeq 10^{-7}$m$^2$s$^{-1}$ in $^4$He). Turbulence in superfluids can be thought of as an intricate process of distortion, reconnection and breaking of those topological singularities \cite{BarenghiSkrbekSreenivasan_IntroPNAS2014}, but in such a way that the system seems to mimic classical turbulence at large scales \cite{spectra:PNAS2014}. This has been particularly obvious in the velocity spectra probed with a variety of anemometers, in highly turbulent flows \cite{Maurer1998,salort2010turbulent,Salort:EPL2012,rusaouen2017intermittency} or in the measurement of vortex bundles using parietal pressure probes \cite{Rusaouen:parietalEPL2017}. In some sense, quantum turbulence is an irreducible model or, to say it differently, a kind of ``skeleton'' for all types of turbulence.
At finite temperature, the quantum fluid is not a pure superfluid: it behaves as if it experienced friction with a background viscous fluid, called the ``normal fluid''. The relative mass density of the superfluid $\rho_s/\rho$ (where $\rho$ is the total mass density) decreases from one at 0~K to zero at the superfluid transition temperature ($T_\lambda \simeq 2.18$~K in $^4$He). The presence of a finite normal fluid fraction allows for propagation of temperature waves - a property referred to as ``second sound"- which opens the rare opportunity to probe directly the presence of the quantum vortices \cite{DonnellyPhysicsToday1984}.
This is done in the present article, where the statistics of the superfluid vortex line density $\mathcal{L}$ are locally measured by ``second sound tweezers'' (see the description in the paragraph ``Probes''), over one and a half decades of the inertial scales, and over a wide range of $\rho_s/\rho$ spanning from 0.16 to 0.81. Surprisingly, the result does not corroborate the widespread idea that the large scales of quantum turbulence reproduce those of classical turbulence: the measured spectra of $\mathcal{L}$ (see Fig. \ref{fig:spectres}) differ from classical-like enstrophy spectra \cite{baudet1996spatial,ishihara2003spectra}. Besides, they also differ from the only\footnote{
Literature also reports experimental \cite{Bradley:PRL2008} and numerical \cite{FujiyamaJLTP2010, BaggaleyPRB2011,Baggaley_VLD:PRL2012,BaggaleyPRL2015,tsepelin2017visualization} spectra of the vortex line density spatially integrated across the whole flow. Still, spectra of such ``integral'' quantities differ in nature from the spectra of local quantities, due to strong filtering effects of spatial fluctuations.}
previous direct measurement of $\mathcal{L}$ with second sound tweezers \cite{roche2007vortex} at $\rho_s/\rho \simeq 0.84$.
The measurement of the vortex lines density provides one of the very few constraints for the disputed modeling of the small scales of quantum turbulence. Even after intense numerical \cite{salort2011mesoscale,Baggaley_Coherentvortexstructures_EPL2012} and theoretical \cite{RocheInterpretation:EPL2008,Nemirovskii:PRB2012,boue2015energyVorticity} studies, the statistics of quantum vortices show that even the large scales of quantum flows can still be surprising.
\section{Experimental setup}
\begin{figure}
\begin{centering}
\includegraphics[height=10cm]{figure1.pdf}
\par\end{centering}
\caption{Sketch of the flow and the experimental setup with
probes. \label{fig:schema_toupie}}
\end{figure}
The experimental setup has been described in details in a previous publication
\cite{rusaouen2017intermittency},
and we only review in this section the major modifications. The setup
consists in a wind tunnel inside a cylindrical cryostat (see
Fig. \ref{fig:schema_toupie}) filled with He-II.
The flow is continuously
powered by a centrifugal pump located at the top of the tunnel. At
the bottom, an optimized 3D-printed conditioner ensures a smooth
entry of the fluid, without boundary layer detachment, inside a pipe of $\Phi=76$ mm inner diameter. Spin motion is broken by radial screens built in the conditioner. The fluid
is then ``cleaned'' again by a 5-cm-long and $3$-mm-cell honeycomb. The mean flow velocity $U$ is measured with a Pitot tube located $130$ mm upstream of the pipe outlet. We allow a maximal mean velocity $U=1.3$ m/s inside the pipe to avoid any cavitation effect in the pump.
The main new element compared to the previous design
is a mono-planar grid located $177$ mm upstream of the probes to generate
turbulence. The grid has a $M=17$ mm mesh with square bars of thickness $b=4$ mm, which gives a porosity of $\beta=(1-b/M)^{2}\approx0.58$.
The choice to position the probes at a distance $\sim 10M$ downstream of the grid results from a compromise between the desire to have a ``large'' turbulence intensity and the necessity to leave enough space for turbulence to develop between the grid and the probes. According to \cite{vita2018generating}, this distance is enough to avoid near-field effects of the grid. However, we emphasize that our main experimental results (Fig. \ref{fig:spectres}-\ref{fig:histogrammes}) do not depend on perfect turbulent isotropy and homogeneity. In-situ measurements of the mean vortex line density can be used to indirectly (via Eq. \ref{eq:tau}) estimate the turbulence intensity $\tau=u^\mathrm{rms}/U \simeq 12-13\%$ (where $u^\mathrm{rms}$ is the standard deviation of the longitudinal velocity component). We present the results later in Fig. \ref{fig:scalingU}.
For comparison, Vita et al. \cite{vita2018generating} report a turbulence intensity around $\tau=9\%$ at $10M$ in a classical grid flow of similar porosity. The difference between both values of $\tau$ could originate from a prefactor uncertainty in Eq. (\ref{eq:tau}) or from differences in flow design (e.g. the absence of a contraction behind the honeycomb). This difference has no important consequences for the measurement of quantum vortex statistics.
The longitudinal integral length scale of the flow $H\simeq 5.0$~mm is assessed by fitting velocity spectra (see bottom panel of Fig. \ref{fig:spectres}) with the von K\'arm\'an formula (eg. see \cite{vita2018generating}). For comparison, the integral scale reported for the similar grid in \cite{vita2018generating}, once rescaled by the grid size, gives a nearby estimate of $7.4$ mm.
The Reynolds number $Re$ defined with $u^\mathrm{rms} H$ and the kinematic
viscosity $1.8\times10^{-8}$~m$^2$s$^{-1}$ of liquid He just above $T_\lambda$, is $Re=3.3\times10^4$ for $U=1$ m/s. Using standard homogeneous isotropic turbulence formula, the Taylor scale Reynolds number is $R_\lambda=\sqrt{15Re}\approx 700$ (for $\tau=12\%$ and $H=5$ mm). This gives an indication of turbulence intensity of the flow below $T_\lambda$.
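These values follow directly from the quoted quantities, as the following back-of-the-envelope check (our own) illustrates:
\begin{verbatim}
# Sanity check of the quoted Reynolds numbers
U, tau, H, nu = 1.0, 0.12, 5.0e-3, 1.8e-8   # m/s, -, m, m^2/s
u_rms = tau * U
Re = u_rms * H / nu                # ~3.3e4, as quoted
R_lambda = (15 * Re) ** 0.5        # ~700
print(f"Re = {Re:.2e}, R_lambda = {R_lambda:.0f}")
\end{verbatim}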
The temperature of the helium bath is set via pressure regulation gates.
The exceptional thermal conductivity of He-II ensures a homogeneous
temperature inside the bath for $T<T_{\lambda}$. Two Cernox
thermometers, one located just above the pump and the other on the
side of the pipe close to the probes, allow direct monitoring
of $T$.
\section{Probes}
Our probes are micro-fabricated second sound tweezers
of millimeter size, built according to the same principle as in \cite{roche2007vortex}.
As displayed in the inset of Fig. \ref{fig:probes}, the tweezers are composed
of one heating plate and one thermometer plate facing each other and
thus creating a resonant cavity for thermal waves. The heating
plate generates a stationary thermal wave of the order of $0.1$
mK between the plates, the amplitude of which can be recorded by the
thermometer plate. Two major improvements have been made compared
to the tweezers in \cite{roche2007vortex}: first, the length of
the arms supporting the plates has been increased to $14$
mm to avoid blockage effects due to the stack of silicon wafers (about 1.5 mm thick) downstream of the cavity. Second,
two notches are cut in the arms to avoid interference due to additional
reflections of the thermal wave on the arms. Further details will be given in a future publication.
\begin{figure}
\begin{centering}
\includegraphics[height=6cm]{figure2.png}
\par\end{centering}
\caption{Ring with probes. The inset is a zoom on the heating and the thermometer plates of a second sound tweezers. The Pitot tube is not used in the present experiment. \label{fig:probes}}
\end{figure}
In the presence of a He flow, a variation of the amplitude and phase
of the thermal wave can be observed. This variation is due
to two main physical effects. First, the presence of quantum vortex lines
inside the cavity causes an attenuation of the wave \cite{DonnellyPhysicsToday1984,varga2019} with a very
minor phase shift \cite{miller1978velocity}. This attenuation can be very accurately modeled by
a bulk dissipation coefficient inside the cavity, denoted $\xi_{L}$. The second effect is a ballistic advection
of the wave out of the cavity. It results in both an attenuation of
the temperature oscillation and an important phase shift. Depending
on the mean flow velocity $U$, the size of the tweezers, and the
frequency of the wave, one of these two effects can overwhelm the
other. We have thus designed two models of tweezers: one model that
takes advantage of the first effect to measure the vortex line density (VLD), and another
that takes advantage of the second effect to measure the velocity.
The two largest tweezers displayed in Fig. \ref{fig:probes} are designed to measure the quantum vortex
line density. The plate size is $l=1$ mm and the gaps
between the plates are $D=1.32$ mm and $D=0.83$ mm respectively.
The plates face each other with positioning accuracy of a few micrometers.
The tweezers are oriented parallel to the flow (see Fig. \ref{fig:probes}, the mean flow is directed from top to bottom)
to minimize the effect of ballistic advection of the wave.
The smallest tweezers displayed in Fig. \ref{fig:probes} are designed to be mainly sensitive to the velocity fluctuations
parallel to the mean flow. The two plates have a size $l=250$ $\mu$m,
and are separated by a gap of $D=0.431$ mm. The tweezers are oriented
perpendicular to the mean flow (see Fig. \ref{fig:probes})
with an intentional lateral shift of the heater and the thermometer
of about $l/2$. This configuration is expected to maximize the sensitivity
to ballistic advection, and thus to velocity fluctuations. To second order, however, the probe still keeps some sensitivity to the quantum vortices produced both by turbulence and by the intense heating of the plates, which is why we were not able to calibrate it reliably. The (uncalibrated) spectrum of this probe (see bottom panel of Fig. \ref{fig:spectres}) is only used to estimate the integral length scale. This probe also serves to prove that the signal statistics of the largest tweezers are not due to velocity fluctuations.
\section{Method}
Figure \ref{fig:methode} displays a resonance of one of the large tweezers
at frequency $f_{0}=15.2$ kHz, for increasing values of the mean velocity. The temperature oscillation $T$ measured by the thermometer
is demodulated by an NF LI5640 lock-in amplifier. $T$ can be accurately fitted by a classical
Fabry-Perot formula
Fabry-Perot formula
\begin{equation}
T=\frac{A}{\sinh\left(i\frac{2\pi(f-f_{0})D}{c_{2}}+\xi D\right)} \label{eq:FP formula}
\end{equation}
where $i^2=-1$, $f_{0}$ is the resonant frequency for which the wave locally reaches its maximal
amplitude, $c_{2}$ is the second sound velocity, $A$ is a parameter
to be fitted, and $\xi$ is related to the energy loss of
the wave in the cavity. The top panel of Fig. \ref{fig:methode}
displays the amplitude of the thermal wave (in mK) as a function
of the frequency, and the bottom panel shows the same signal in phase and quadrature. When the
frequency is swept, the signal follows a curve close to a circle crossing
the point of coordinates $(0,0)$. Fig. \ref{fig:methode} clearly shows that the
resonant peak progressively shrinks as $U$ increases, which is
interpreted as attenuation of the wave inside the cavity. The red points
display the attenuation of the signal at a constant value of $f$. It can
be seen in the bottom panel that the variation of the signal is close
to a pure attenuation, that is, without phase shift.
\begin{figure}
\begin{centering}
\includegraphics[height=4.5cm]{figure3a.pdf}\\
\includegraphics[height=5cm]{figure3b.pdf}
\par\end{centering}
\caption{\textbf{Top: }second sound resonance of one of the large tweezers around $15.2$
kHz. The value of $U$ increases from top curve to bottom curve. The vertical axis gives the amplitude
of the thermal wave in K. \textbf{Bottom:} representation of the
same resonance in phase and quadrature.\label{fig:methode}}
\end{figure}
$\xi$ can be decomposed as
\begin{equation}
\xi=\xi_{0}+\xi_{L}\label{eq:xi_dec}
\end{equation}
where $\xi_{0}$ is the attenuation factor when $U=0$ m/s and $\xi_{L}$
is the additional attenuation created by the presence of quantum vortex
lines inside the cavity. $\xi_{L}$ is the signal of interest, as
it can be directly related to the vortex line density (VLD) using
the relation
\begin{eqnarray}
\xi_{L} & = & \frac{B\kappa L_{\perp}}{4c_{2}},\label{eq:VLD}\\
L_{\perp} & = & \frac{1}{\mathcal{V}}\int\sin^{2}\theta(l){\rm d}l\label{eq:VLD def}
\end{eqnarray}
where $B$ is the first Vinen coefficient, $\kappa\approx9.98\times10^{-8}$
m$^{2}$/s is the quantum of circulation, $\mathcal{V}$ is the cavity
volume, $l$ is the curvilinear abscissa along the vortex line, and $\theta(l)$
is the angle between the vector tangent to the line and the direction
perpendicular to the plates. We note that the summation is weighted by the distribution of the second sound nodes and antinodes inside the cavity and does not exactly correspond to a uniform average, but we neglect this effect in the following. Our aim is to measure both the average value
and the fluctuations of $L_{\perp}$, as a function of $U$ and the superfluid fraction.
The method goes as follows: first, we choose a resonant frequency
$f_{0}$ where the amplitude of the signal has a local maximum and
we fix the frequency of the heating to this value $f_{0}$. Then we
vary the mean velocity $U$ and we record the response of the thermometer
plate in phase and quadrature. The measurements
show that the velocity-induced displacement in the complex plane follows a straight line in a direction
$\overrightarrow{e}$ approximately orthogonal to the resonant curve.
Expressions (\ref{eq:FP formula}-\ref{eq:xi_dec}) give $\xi_{L}$
from the measured amplitude $T$ by \cite{roche2007vortex}
\begin{equation}
\xi_{L}=\frac{1}{D}\,\mathrm{asinh}\left(\frac{A}{T}\right)-\xi_{0}.\label{eq:displacement}
\end{equation}
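For illustration, Eqs. (\ref{eq:displacement}) and (\ref{eq:VLD}) can be combined into a short conversion routine. Below is a minimal Python sketch; all numerical constants are illustrative placeholders, not the calibrated values of the experiment:
\begin{verbatim}
import numpy as np

D     = 1.32e-3   # gap between the plates (m)
A     = 1.0e-4    # fitted amplitude parameter of Eq. (FP formula)
xi_0  = 50.0      # attenuation at U = 0 (1/m), placeholder
c2    = 20.0      # second sound velocity (m/s), temperature dependent
B     = 1.0       # first Vinen coefficient, temperature dependent
kappa = 9.98e-8   # quantum of circulation (m^2/s)

def vld_from_amplitude(T):
    """Projected vortex line density L_perp from the wave amplitude T."""
    xi_L = np.arcsinh(A / T) / D - xi_0   # Eq. (displacement)
    return 4.0 * c2 * xi_L / (B * kappa)  # inverting Eq. (VLD)
\end{verbatim}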
The colored dots of Fig. \ref{fig:fluctuations} illustrate the fluctuations of the signal
in phase and quadrature, for different values of $U$. The average signal
moves in the direction of the attenuation axis. The figure also shows
a part of the resonant curve for $U=0$. The fluctuations have two
components in the plane, associated with different physical
phenomena. Fluctuations in the direction tangent to the resonant curve
can be interpreted as a variation of the acoustic path $\frac{2\pi(f-f_{0})D}{c_{2}}$
without attenuation of the wave. Such fluctuations can occur, for example,
because the two arms of the tweezers vibrate with submicron amplitude,
or because temperature variations modify the second sound velocity
$c_{2}$. To isolate only the fluctuations associated with attenuation by
the quantum vortices, we split the signal into a component along the attenuation axis and another along the acoustic-path axis.
We then convert the displacement along the attenuation axis into vortex line density (VLD) using
expressions (\ref{eq:VLD}) and (\ref{eq:displacement}).
\begin{figure}
\begin{centering}
\includegraphics[height=7cm]{figure4.pdf}
\par\end{centering}
\caption{Fluctuations of the thermal wave in phase and quadrature. The colored
clouds show the fluctuations of the signal, for different values of
$U$. The blue curve shows the resonance for $U=0$ m/s. The fluctuations tangent
to the resonant curve are created by a variation of the acoustic path.
The quantum vortices are associated to attenuation of the wave and create
a displacement along the attenuation axis. \label{fig:fluctuations}}
\end{figure}
\section{Results}
As a check of the validity of our approach, we measured the average
response of the second sound tweezers as a function of the mean velocity
$U$. According to the literature \cite{Babuin:EPL2014}, we expected the scaling $\left\langle L_{\perp}\right\rangle^2 \propto U^{3}$, with a prefactor related to the main characteristics of the flow. The function $\left\langle L_{\perp}\right\rangle $ was thus
measured over the range $0.4<U<1.25$ m/s with time averaging over
$300$ ms, at three different temperatures: $1.65$ K, $1.99$ K and $2.14$ K.
An effective superfluid viscosity $\nu_\mathrm{eff}$ is customarily defined in quantum turbulence by $\epsilon = \nu_\mathrm{eff} ( \kappa \mathcal{L} )^2$, where $\epsilon$ is the dissipation rate and $\mathcal{L}=3\left\langle L_{\perp}\right\rangle/2$ is the averaged VLD (we assume isotropy of the tangle) \cite{Vinen:JLTP2002}. For homogeneous isotropic flows at large $R_\lambda$, we also have $\epsilon \simeq 0.79\, U^3\tau^3/H$ (e.g. see \cite{pope:book}, p.~245), which entails
\begin{equation}
\tau^3 \simeq 2.85 \frac{ \nu_\mathrm{eff} H \kappa^2\left\langle L_{\perp}\right\rangle^2}{U^3} \label{eq:tau}
\end{equation}
Using Eq. (\ref{eq:tau}), we compute the turbulence intensity as a function of $U$ for the three temperatures considered. The result is displayed in Fig. \ref{fig:scalingU}. The figure shows that the turbulence intensity reaches a plateau of about $12\%$ above $0.8$ m/s, a value consistent with the turbulence intensity of $9\%$ reported in \cite{vita2018generating} for a grid turbulence with similar characteristics. The figure also confirms that the expected scaling $\left\langle L_{\perp}\right\rangle^2 \propto U^{3}$ is reached in our experiment for velocities $U>0.8$ m/s.
The temperature-dependent viscosity $\nu_\mathrm{eff}$ in Eq. (\ref{eq:tau}) has been measured in a number of experiments (e.g. see the compilations in \cite{Babuin:EPL2014,boue2015energyVorticity,gao2018dissipation}). Still, the uncertainty on its value exceeds a factor of 2. For the temperatures $1.65$ K and $1.99$ K, we used the average values $0.2 \kappa$ and $0.25\kappa$. For lack of a reference experimental value of $\nu_\mathrm{eff}$ above $2.1$ K, we determined it by collapsing the $\tau (U)$ dataset obtained at $2.14$ K onto the two others. We found the value $\nu_\mathrm{eff}\approx 0.5\kappa$ at $2.14$ K.
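In practice, Eq. (\ref{eq:tau}) amounts to the following one-line computation (a minimal Python sketch; $\nu_\mathrm{eff}$ is set here to the $1.65$ K value quoted above):
\begin{verbatim}
kappa  = 9.98e-8      # quantum of circulation (m^2/s)
H      = 5.0e-3       # longitudinal integral scale (m)
nu_eff = 0.2 * kappa  # effective viscosity at 1.65 K

def turbulence_intensity(L_mean, U):
    """Turbulence intensity tau from the mean projected VLD, Eq. (tau)."""
    return (2.85 * nu_eff * H * kappa**2 * L_mean**2 / U**3) ** (1.0 / 3.0)
\end{verbatim}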
Assuming isotropy of the vortex tangle, the value of $\mathcal{L}$ gives
a direct order of magnitude of the inter-vortex spacing $\delta=1/\sqrt{\mathcal{L}}$. We find $\delta\approx5$
$\mu$m at 1.65 K and a mean velocity of 1 m/s. This shows the large scale separation between the inter-vortex spacing and the flow integral scale $H$, a confirmation of an intense turbulent regime.
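For instance, $\delta\approx5$ $\mu$m corresponds to
\begin{equation*}
\mathcal{L}=\delta^{-2}\approx4\times10^{10}\ \mathrm{m}^{-2},\qquad H/\delta\approx10^{3},
\end{equation*}
i.e. three decades of scale separation between the tangle and the integral scale.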
\begin{figure}
\begin{centering}
\includegraphics[height=6cm]{figure5.pdf}
\caption{Indirect measurement of the turbulence intensity $\tau=u^{\rm{rms}}/U$ as a function of $U$ using Eq. (\ref{eq:tau}). The three different symbols correspond to three values of the mean temperature. \label{fig:scalingU}}
\par\end{centering}
\end{figure}
Fig. \ref{fig:spectres} presents the main result of this letter.
The top panel displays the VLD power spectral density $P_L(f)$ of $L_\perp/\left\langle L_{\perp}\right\rangle$. With this definition, the VLD turbulence intensity $L_{\perp}^{\rm{rms}}/\left\langle L_{\perp} \right\rangle$ is directly given by the integral of $P_L(f)$. We measured the VLD fluctuations at $T=1.65$ K and
superfluid fraction $\rho_{S}/\rho=81\%$, $T=1.99$ K and $\rho_{S}/\rho=47\%$,
and $T=2.14$ K and $\rho_{S}/\rho=16\%$. At each temperature, the measurement was done for
at least two different mean velocities.
The first striking result is the collapse of all the spectra,
independently of the temperature, when properly rescaled
using $f/U$ as coordinate (and $P_L(f)\times U$ as power spectral density, to keep the integral constant).
The VLD spectrum does not depend on the superfluid fraction,
even for vanishing superfluid fractions when $T$ comes very close to $T_{\lambda}$. Only one measurement with one of the large tweezers at $T=1.650$ K has given a slight deviation from the master curve of the VLD spectra: it is displayed as the thin grey curve in Fig. \ref{fig:spectres}. We have no explanation for this deviation, but we did not observe this particular spectrum with the second tweezers, nor at any other temperature.
Second, the VLD spectrum has no characteristic
power-law decay. We only observe that the spectrum follows an approximately exponential decay above $f/U>100$ m$^{-1}$. This strongly contrasts with the velocity spectrum obtained with the small second sound tweezers anemometer (see bottom panel),
which displays all the major features expected for a velocity
spectrum in classical turbulence: it has a sharp transition from a plateau at large scale to a power-law scaling close
to $-5/3$ in the inertial scales of the turbulent cascade. Actually, it can be seen that the spectral decrease is slightly less steep than $-5/3$, which can be due either to imperfect isotropy and homogeneity or, more likely, to second-order corrections of the signal in addition to its dependence on velocity fluctuations. A fit of the transition using the von K\'arm\'an expression (see \cite{vita2018generating}) gives the value $H=5$ mm for the longitudinal integral scale. As a side remark, the apparent cut-off above $10^3$ m$^{-1}$ is an instrumental frequency cut-off of the tweezers.
We find a VLD turbulence intensity close to 20\%,
which is significantly higher than the velocity turbulence intensity. We also checked that we obtain the same VLD spectrum using different
resonant frequencies $f_{0}$.
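As an illustration of the rescaling used for the collapse of Fig. \ref{fig:spectres}, a minimal Python sketch (assuming a regularly sampled VLD time series) reads:
\begin{verbatim}
import numpy as np
from scipy.signal import welch

def rescaled_vld_spectrum(L_signal, fs, U):
    """PSD of L/<L>, rescaled to (f/U, P*U) for the spectral collapse."""
    x = L_signal / np.mean(L_signal)
    f, P = welch(x, fs=fs, nperseg=4096)
    return f / U, P * U   # the integral of P*U over f/U is unchanged
\end{verbatim}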
Our measurements are limited by two characteristic frequencies. First,
the tweezers average the VLD over a cube of side $l$, which means
that our resolution cannot exceed $f/U=1/l$. For the large tweezers,
this sets a cut-off scale of $10^{3}$ m$^{-1}$, much larger than
the range of inertial scales presented in the top panel of Fig. \ref{fig:spectres}.
Second, the frequency bandwidth of the resonator decreases when the
quality factor of the second sound resonance increases. This sets another cut-off scale, given by
$f/U=\xi_{0}c_{2}/(2U)$. The worst configuration corresponds to the
data obtained at 2.14 K and $U=1.2$ m/s, where the cut-off scale is about
$600$ m$^{-1}$. For this reason, the VLD spectra of
Fig. \ref{fig:spectres} are conservatively restricted to $f/U<300$ m$^{-1}$, which allows us to resolve about one and a half decades of inertial scales.
Figure \ref{fig:histogrammes} displays typical PDFs of the rescaled
VLD fluctuations $L_{\perp}/\left\langle L_{\perp} \right\rangle$ in semilogarithmic scale, for the three temperatures
considered. The PDFs have been vertically shifted by one decade from each other
for readability. The figure shows a strong asymmetry at all temperatures,
with a nearly Gaussian left wing and an exponential right wing. Contrary to the VLD spectra, the PDFs do not accurately collapse on a single master curve at different velocities and temperatures; yet they remain very similar when the temperature and the mean velocity are changed, and their strongly asymmetric shape seems to be a robust feature.
\begin{figure}
\begin{centering}
\includegraphics[height=9cm]{figure6.pdf}\\
\par\end{centering}
\caption{\textbf{Top:} Power spectral density of the projected vortex line density (VLD) $L_{\perp}$, obtained
with the large second sound tweezers, for different values of $U$
and temperatures. All measured spectra collapse using the scaling
$f/U$ and $P_L(f)\times U$. The fluctuations have been rescaled by the mean
value of the VLD such that the integral of the above curves directly
give the VLD turbulence intensity.
\textbf{Bottom:} Power spectral density of the uncalibrated velocity signal obtained from the second
sound tweezers anemometer, for two values of $U$ at 1.65 K.
The spectra collapse using the scaling $f/U$ for the frequency
and $P_U(f)/U$ for the spectral density. The straight line displays
the $-5/3$ slope which is expected for a classical velocity spectrum
in the inertial range of the turbulent cascade. The dotted line is a fit using the von K\'arm\'an expression (see \cite{vita2018generating}) to find the integral scale $H$.
\label{fig:spectres}}
\end{figure}
By contrast, the dotted curve in Fig. \ref{fig:histogrammes} displays one PDF from the small tweezers anemometer at $1.65$ K, whose mean has been shifted and variance rescaled. The general shape of this latter PDF is much more symmetric and closer to a Gaussian, as expected for a PDF of velocity fluctuations.
\begin{figure}
\begin{centering}
\includegraphics[height=6cm]{figure7.pdf}
\par\end{centering}
\caption{Normalized probability distributions of the VLD fluctuations obtained
at three temperatures. The PDF have been shifted by one decade from
each other for readability. By comparison, the dotted black curve displays a rescaled PDF obtained with the small tweezers measuring velocity. \label{fig:histogrammes}}
\end{figure}
\section{Discussion and conclusion}
In the present paper, we have investigated the temperature dependence of the statistics of the local density of vortex lines (VLD) in quantum turbulence. About one and a half decades of inertial scales of the turbulent cascade were resolved. We measure the VLD mean value and deduce from Eq. (\ref{eq:tau}) the turbulence intensity (Fig. \ref{fig:scalingU}); we report the VLD power spectrum (Fig. \ref{fig:spectres}) and the VLD probability distribution (Fig. \ref{fig:histogrammes}). Whereas the VLD mean value at different temperatures confirms previous numerical \cite{salort2011mesoscale,Babuin:EPL2014} and experimental \cite{Babuin:EPL2014} studies, the spectral and PDF studies are completely new. Only one measurement of the VLD fluctuations had been done previously, around 1.6 K \cite{roche2007vortex}, but in a wind tunnel with a very specific geometry and a non-controlled turbulence production. In the present work, we have used a grid turbulence, which is recognized as a reference flow with well-documented turbulence characteristics.
To conclude, we discuss below the three main findings:
\begin{enumerate}
\item A master curve of the VLD spectra, independent of temperature and mean velocity.
\item The observed master curve does not correspond to previously reported spectra in the context of highly turbulent classical flows.
\item A globally invariant shape of the strongly skewed PDF.
\end{enumerate}
The mean VLD gives the inter-vortex spacing, and thus tells how many quantum vortices are created in the flow, whereas the PDF and spectra tell how those vortices are organized in the flow. From 2.14 K to 1.65 K, our results confirm that the inter-vortex spacing decreases only weakly, by less than 23\%, for a fivefold increase of the superfluid fraction. In other words, the superfluid fraction has a limited effect on the creation of quantum vortices. The current understanding of homogeneous isotropic turbulence in He-II is that the superfluid and normal fluid are locked together at large and intermediate scales, where they undergo a classical Kolmogorov cascade \cite{spectra:PNAS2014}. The experimental evidence is based on the observation of classical velocity statistics
using anemometers measuring the barycentric velocity of the normal and superfluid components. Here, the temperature independence of the (normalized) VLD spectra supports this general picture, by reminiscence of a similar property of He-II velocity spectra.
In contrast to velocity, the observed VLD master curve has an unexpected shape in the inertial range, at odds with the spectra reported as ``compatible with'' a $f^{-5/3}$ scaling in \cite{roche2007vortex}. The probe is sensitive to the total amount of vorticity at scales smaller than its spatial resolution, and thus keeps track of the small-scale fluctuations. A close classical counterpart of the VLD is enstrophy, because its 1-D spectrum is also related to the velocity spectrum at smaller scales (e.g. see \cite{antonia1996note}). However, the experimental \cite{baudet1996spatial} and numerical (e.g. \cite{ishihara2003spectra}) enstrophy spectra reported so far in three-dimensional classical turbulence strongly differ from the present VLD spectra. We have no definite explanation for this difference. It could originate from remanent quantum vortices pinned on the grid, causing additional energy injection in the inertial range, in which case the peculiarity of our spectra would be specific to the type of forcing. Otherwise, it could be a more fundamental property associated with the microscopic structure of the vortex tangle which, together with the observed temperature independence of the spectra, would be very constraining for the development of mathematical closures for the continuous description of He-II (e.g. see \cite{nemirovskii2020closure}).
Regarding the third statement,
we compare the PDFs with those of numerical simulations of classical turbulence. The absolute value of vorticity can be seen as a classical counterpart of the VLD. The work of Iyer {\it et al.} \cite{yeung2015extreme}, for example, displays enstrophy PDFs from high-resolution DNS that can be compared to the PDFs of Fig. \ref{fig:histogrammes}.
At small scales, the enstrophy PDFs are strongly asymmetric and ultimately converge to a Gaussian distribution when averaged over larger and larger scales. Although our tweezers average the VLD over a size much larger than the inter-vortex spacing, they are small enough to sense short-lived intense vortical events, typical of the small-scale phenomenology of classical turbulence. Thus, the strong asymmetry of the PDFs supports the analogy between the VLD and enstrophy (or its square root) and shows the relevance of VLD statistics for exploring the small scales of quantum turbulence.
A side result of the present work is to obtain the relative values of
the empirical coefficient $\nu_\mathrm{eff}=\epsilon(\kappa \mathcal{L})^{-2}$ at the three temperatures considered. Models and simulations predict that $\nu_\mathrm{eff}$ should increase steeply close to $T_\lambda$ (see \cite{Babuin:EPL2014,boue2015energyVorticity,gao2018dissipation} and references within),
in apparent contradiction with the only systematic experimental exploration \cite{stalp2002}. We found in Fig. \ref{fig:scalingU} that the effective viscosity $\nu_\mathrm{eff}$ is twice as large at 2.14 K as at 1.99 K.
To the best of our knowledge, our estimate $\nu_\mathrm{eff}(2.14\,\mathrm{K}) \simeq 2\,(\pm 0.25)\times \nu_\mathrm{eff}(1.99\,\mathrm{K})$ is the first experimental hint of such an increase of the effective viscosity.
\acknowledgments
We warmly thank B. Chabaud for support in upgrading the wind tunnel, and P. Diribarne, E. Lévêque and B. Hébral for their comments.
We thank K. Iyer and his co-authors for sharing data on the statistics of spatially averaged enstrophy analyzed in \cite{iyerNJP2019}.
We acknowledge financial support from grants ANR-16-CE30-0016 (Ecouturb) and ANR-18-CE46-0013 (QUTE-HPC).
\bibliographystyle{eplbib}
\section{Introduction}
\label{sec:introduction}
We study {\sc greedy}\xspace routing over uni-dimensional metrics\footnote{The
principles of this work can be extended to higher dimensional
spaces. We focus on one-dimension for simplicity.} defined over
$n$ nodes lying in a ring. {\sc greedy}\xspace routing is the strategy of
forwarding a message along that out-going edge that minimizes the
{\it distance} remaining to the destination:
\begin{mydefinition}{Greedy Routing}
In a graph $(V,E)$ with a given distance function $\delta: V \times
V \rightarrow \mathcal{R}^+$, {\sc greedy}\xspace routing entails the following
decision: Given a target node $t$, a node $u$ with neighbors $N(u)$
forwards a message to its neighbor $v \in N(u)$ such that
$\delta(v,t) = \min_{x \in N(u)} \delta(x,t)$.
\end{mydefinition}
\noindent
Two {\it natural} distance metrics over $n$ nodes placed in a circle
are the clockwise-distance and the absolute-distance between pairs of
nodes:
\begin{eqnarray*}
\delta_{clockwise}(u, v) & = &
\begin{cases}
v-u & v\geq u\\
n+v-u & \text{otherwise}
\end{cases}\\
\delta_{absolute}(u, v) & = &
\begin{cases}
\min \{ v-u, n+u-v \} & v\geq u\\
\min \{ u-v, n+v-u \} & \text{otherwise}
\end{cases}
\end{eqnarray*}
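In code, the two metrics and the {\sc greedy}\xspace forwarding rule read as follows (a minimal Python sketch; \texttt{neighbors} stands for whatever link structure the graph defines):
\begin{verbatim}
def d_clockwise(u, v, n):
    return (v - u) % n

def d_absolute(u, v, n):
    return min((v - u) % n, (u - v) % n)

def greedy_step(u, t, neighbors, dist, n):
    """Forward to the neighbor of u that minimizes the distance to t."""
    return min(neighbors(u), key=lambda v: dist(v, t, n))
\end{verbatim}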
\noindent
In this paper, we study the following related problems for the
above distance metrics:
\begin{center}
\begin{minipage}{0.95\textwidth} \it
\squishlist
\item[I.] Given integers $d$ and $\Delta$, what is the largest
graph that satisfies two constraints: the out-degree of any node
is at most $d$, and the length of the longest {\sc greedy}\xspace route is
at most $\Delta$ hops?
\item[II.] Given integers $d$ and $n$, design a network in which
each node has out-degree at most $d$ such that the length of the
longest {\sc greedy}\xspace route is minimized.
\squishend
\end{minipage}
\end{center}
\subsection*{Summary of results}
\begin{enumerate}
\item We construct a family of network topologies, the {\em
Papillon\xspace}\footnote{Our constructions are variants of the
well-known butterfly family, hence the name Papillon\xspace.}, in which
{\sc greedy} routes are asymptotically optimal. For both
$\delta_{clockwise}$ and $\delta_{absolute}$, Papillon\xspace has
{\sc greedy}\xspace routes of length $\Delta = \Theta(\log n / \log d)$ hops
in the worst-case when each node has $d$ out-going links.
Papillon\xspace is the first construction that achieves asymptotically
optimal worst-case {\sc greedy}\xspace routes.
\item Upon further investigation, two properties of Papillon\xspace emerge:
(a) {\sc greedy}\xspace routing does not send messages along shortest paths,
and (b) Edge congestion with {\sc greedy}\xspace routing is not uniform --
some edges are used more often than others. We exhibit the first
property by identifying routing strategies that result in paths
shorter than those achieved by {\sc greedy} routing. In fact,
one of these strategies guarantees uniform edge-congestion.
\item Finally, we consider another distance function
$\delta_{xor}(u, v)$, defined as the number of bit-positions in
which $u$ and $v$ differ. $\delta_{xor}$ occurs naturally, e.g., in
hypercubes, and {\sc greedy}\xspace routing with $\delta_{xor}$ routes along
shortest paths in them. We construct a variant of Papillon\xspace that
supports asymptotically optimal routes of length $\Theta(\log n /
\log d)$ in the worst-case, for {\sc greedy}\xspace routing with distance
function $\delta_{xor}$.
\end{enumerate}
\section{Related Work}
\label{sec:related}
{\sc greedy}\xspace routing is a fundamental strategy in network theory. It
enjoys numerous advantages. It is completely decentralized, in that
any node takes routing decisions locally and independently. It is
oblivious, thus message headers need not be written along the
route. It is inherently fault tolerant, as progress toward the target
is guaranteed so long as some links are available. And it has good
locality behavior in that every step decreases the distance to the
target. Finally, it is simple to implement, yielding robust
deployments. For these reasons, {\sc greedy} routing has long
attracted attention in the research of network design. Recently,
{\sc greedy}\xspace routing has witnessed increased research interest in the
context of decentralized networks. Such networks arise in modeling
social networks that exhibit the ``small world phenomenon'', and in
the design of overlay networks for peer-to-peer (P2P) systems. We now
summarize known results pertaining to {\sc greedy}\xspace routing on a circle.
\subsection*{The Role of the Distance Function}
Efficient graph constructions are known that support {\sc greedy}\xspace routing
with distance function other than $\delta_{clockwise}$,
$\delta_{absolute}$ and $\delta_{xor}$.
For de Bruijn networks, the traditional
routing algorithm (which routes almost always along shortest paths)
corresponds to {\sc greedy}\xspace routing with $\delta(u, v)$ defined as the
longest suffix of $u$ that is also the prefix of $v$. For a 2D grid,
shortest paths correspond to {\sc greedy}\xspace routing with $\delta(u, v)$
defined as the Manhattan distance between nodes $u$ and $v$.
For {\sc greedy}\xspace routing on a circle, the best-known constructions have $d
= \Theta(\log n)$ and $\Delta = \Theta(\log n)$. Examples include:
Chord~\cite{chord:sigcomm01} with distance-function
$\delta_{clockwise}$, a variant of Chord with ``bidirectional
links''~\cite{ganesan:soda04} and distance-function
$\delta_{absolute}$, and the hypercube with distance function
$\delta_{xor}$. In this paper, we improve upon all of these
constructions by showing how to route in
$\Theta(\log n / \log d)$ hops in the worst case with
$d$ links per node.
\subsection*{{\sc greedy}\xspace Routing in Deterministic Graphs}
The \textsf{Degree-Diameter Problem}, studied in extremal graph
theory, seeks to identify the largest graph with diameter $\Delta$,
with each node having out-degree at most $d$ (see Delorme~\cite{ddp}
for a survey). The best constructions for large $\Delta$ tend to be
sophisticated~\cite{bermond:92,comellas:92,exoo:01}. A well-known
upper bound is $N(d, \Delta) = 1 + d + d^2 + \cdots + d^\Delta =
\frac{d^{\Delta+1} - 1}{d-1}$, also known as the Moore bound. A
general lower bound is $d^\Delta + d^{\Delta-1}$, achieved by Kautz
digraphs~\cite{kautz:68,kautz:69}, which are slightly superior to de
Bruijn graphs~\cite{debruijn:46} whose size is only
$d^\Delta$. Thus it is possible to route
in $O(\log n / \log d)$ hops in the worst-case with $d$ out-going
links per node. Whether {\sc greedy}\xspace routes with distance functions
$\delta_{clockwise}$ or $\delta_{absolute}$ can achieve the same
bound, is the question we have addressed in this paper.
{\sc greedy}\xspace routing with distance function $\delta_{absolute}$ has been
studied for Chord~\cite{ganesan:soda04}, a popular topology for P2P
networks. Chord has $2^b$ nodes, with out-degree $2b-1$ per node.
The longest {\sc greedy}\xspace route takes $\Floor{b/2}$ hops. In terms of $d$
and $\Delta$, the largest-sized Chord network has $n = 2^{2\Delta +
1}$ nodes. Moreover, $d$ and $\Delta$ cannot be chosen independently
-- they are functionally related. Both $d$ and $\Delta$ are
$\Theta(\log n)$. Analysis of {\sc greedy}\xspace routing of Chord leaves open
the following question:
\smallskip
\centerline{\it For {\sc greedy}\xspace routing on a circle, is
$\Delta = \Omega(\log n)$ when $d = O(\log n)$?}
\smallskip
Xu {\it et al.}\xspace~\cite{xu:infocom03} provide a partial answer to the above question
by studying {\sc greedy}\xspace routing with distance function
$\delta_{clockwise}$ over \emph{uniform} graph topologies. A graph
over $n$ nodes placed in a circle is said to be uniform if the set of
clockwise offsets of out-going links is identical for all
nodes. Chord is an example of a uniform graph. Xu {\it et al.}\xspace show that
for any uniform graph with $O(\log n)$ links per node, {\sc greedy}\xspace
routing with distance function $\delta_{clockwise}$ necessitates
$\Omega(\log n)$ hops in the worst-case.
Cordasco {\it et al.}\xspace~\cite{fchord:sirocco04} extend the result of Xu
{\it et al.}\xspace~\cite{xu:infocom03} by showing that {\sc greedy}\xspace routing with
distance function $\delta_{clockwise}$ in a uniform graph over $n$
nodes satisfies the inequality $n \leq F(d + \Delta + 1)$, where $d$
denotes the out-degree of each node, $\Delta$ is the length of the
longest {\sc greedy}\xspace path, and $F(k)$ denotes the $k^{th}$ Fibonacci
number. It is well-known that $F(k) = [\phi^k / \sqrt{5}]$, where
$\phi = 1.618\ldots$ is the Golden ratio and $[x]$ denotes the
integer closest to real number $x$. It follows that $1.44 \log_2 n
\leq d + \Delta + 1$. Cordasco {\it et al.}\xspace show that the inequality is
strict if $|d - \Delta| > 1$. For $|d - \Delta| \leq 1$, they
construct uniform graphs based upon Fibonacci numbers which achieve
an optimal tradeoff between $d$ and $\Delta$.
\medskip
The results in~\cite{ganesan:soda04,xu:infocom03,fchord:sirocco04}
leave open the question whether there exists any graph construction
that permits {\sc greedy}\xspace routes of length $\Theta(\log n / \log d)$ with
distance function $\delta_{clockwise}$ and/or $\delta_{absolute}$.
Papillon\xspace provides an answer to the problem by constructing a
non-uniform graph --- the set of clockwise offsets of out-going links
is different for different nodes.
\subsection*{{\sc greedy}\xspace Routing in Randomized Graphs}
{\sc greedy}\xspace routing over nodes arranged in a ring with distance
function $\delta_{clockwise}$ has recently been studied for certain
classes of {\it randomized} graph constructions. Such graphs arise in
modeling social networks that exhibit the ``small world phenomenon'',
and in the design of overlay networks for P2P systems.
In the seminal work of Kleinberg \cite{kleinberg:stoc00}, a
randomized graph was constructed in order to explain the ``small
world phenomenon'', first identified by
Milgram~\cite{milgram:pt67}. The phenomenon refers to the observation
that individuals are able to route letters to unknown targets on the
basis of knowing only their immediate social contacts. Kleinberg
considers a set of nodes on a uniform two-dimensional grid. He
proposes a link model in which each node is connected to its
immediate grid neighbors and, in addition, has a single long-range
link drawn from a normalized harmonic distribution with power $2$.
In the resulting graph, {\sc greedy}\xspace routes have length at most $O(\log^2
n)$ hops in expectation; this complexity was later shown to be tight
by Barri{\`e}re {\it et al.}\xspace in \cite{barriere:disc01}.
Kleinberg's construction has found applications in the design of
overlay routing networks for Distributed Hash Tables.
Symphony~\cite{symphony:usits03} is an adaptation of Kleinberg's
construction in a single dimension. The idea is to place $n$ nodes
in a virtual circle and to equip each node with $d \geq 1$ out-going
links. In the resulting network, the average path length of {\sc greedy}\xspace
routes with distance function $\delta_{clockwise}$ is
$O(\frac{1}{d}\log^2 n)$ hops. Note that unlike Kleinberg's network,
the space here is virtual and so are the distances and the sense of
{\sc greedy}\xspace routing. The same complexity was achieved with a slightly
different Kleinberg-style construction by Aspnes
{\it et al.}\xspace~\cite{aspnes:podc02}. In the same paper, it was also shown
that any symmetric, randomized degree-$d$ network has
$\Omega(\frac{\log^2 n}{d\log\log n})$ {\sc greedy}\xspace routing complexity.
Papillon outperforms all of the above randomized constructions, using
degree $d$ and achieving $\Theta(\log n/\log d)$ routing. It should
be possible to randomize Papillon along principles similar to the
Viceroy~\cite{viceroy:podc02} randomized construction of the butterfly
network, though we do not pursue this direction here.
\subsection*{Summary of Known Results}
With $\Theta(\log n)$ out-going links per node, several graphs over
$n$ nodes in a circle support {\sc greedy}\xspace routes with $\Theta(\log n)$
{\sc greedy} hops. Deterministic graphs with this property include:
(a) the original Chord~\cite{chord:sigcomm01} topology with distance
function $\delta_{clockwise}$, (b) Chord with edges treated as
bidirectional~\cite{ganesan:soda04} with distance function
$\delta_{absolute}$. This is also the known lower bound on any
uniform graph with distance function $\delta_{clockwise}$
\cite{xu:infocom03}. Randomized graphs with the same tradeoff
include randomized-Chord~\cite{gummadi:sigcomm03,zhang:sigmetrics03}
and Symphony~\cite{symphony:usits03} -- both with distance function
$\delta_{clockwise}$. With degree $d \le \log n$, Symphony
\cite{symphony:usits03} has {\sc greedy}\xspace routes of length $\Theta((\log^2
n)/ d)$ on average. The network of \cite{aspnes:podc02} also
supports {\sc greedy}\xspace routes of length $O((\log^2 n)/d)$ on average, with
a gap to the known lower bound on their network of
$\Omega(\frac{\log^2 n}{d\log\log n})$.
The above results are somewhat discouraging, because routing that is
\textbf{non}-{\sc greedy} can achieve much better results. In
particular, networks of degree $2$ with hop complexity $O(\log n)$
are well known, e.g., the Butterfly and the de Bruijn (see for
example \cite{leighton:92} for exposition material). And networks of
logarithmic degree can achieve $O(\log n/ \log\log n)$ routing
complexity (e.g., take the degree-$\log_2 n$ de Bruijn). Routing in
these networks is non-{\sc greedy} according to any one of our
metrics ($\delta_{clockwise}$, $\delta_{absolute}$, and
$\delta_{xor}$).
The Papillon\xspace\ construction demonstrates that we can indeed design
networks in which {\sc greedy} routing along these metrics has
asymptotically optimal routing complexity. Our contribution is a
family of networks that extends the Butterfly network family, so as
to facilitate efficient {\sc greedy} routing. With $d$ links per
node, {\sc greedy} routes are $\Theta(\log n/\log d)$ in the
worst-case, which is asymptotically optimal. For $d = o(\log n)$,
this beats the lower bound of \cite{aspnes:podc02} on symmetric,
randomized greedy routing networks (and it meets it for $d=O(\log
n)$). In the specific case of $d=\log n$, our greedy routing achieves
$O(\log n/\log \log n)$ average route length.
\subsection*{{\sc greedy}\xspace with {\sc lookahead}\xspace}
Recent work~\cite{manku:stoc04} explores the surprising advantages of
{\sc greedy}\xspace with {\sc lookahead}\xspace in randomized graphs over $n$ nodes in a
circle. The idea behind {\sc lookahead}\xspace is to take neighbors' neighbors
into account when making routing decisions. That work shows that {\sc greedy}\xspace with
{\sc lookahead}\xspace achieves $O(\log^2 n/ d \log d)$ expected route length in
Symphony~\cite{symphony:usits03}. For other networks which have
$\Theta(\log n)$ out-going links per node, e.g.,
randomized-Chord~\cite{gummadi:sigcomm03,zhang:sigmetrics03},
randomized-hypercubes~\cite{gummadi:sigcomm03},
skip-graphs~\cite{aspnes:soda03} and SkipNet~\cite{skipnet:usits03},
average path length is $\Theta(\log n / \log \log n)$ hops. Among
these networks, Symphony and randomized-Chord use {\sc greedy}\xspace routing with
distance function $\delta_{clockwise}$. Other networks use a different
distance function (none of them uses $\delta_{xor}$). For each of
these networks, with $O(\log n)$ out-going links per node, it was
established that plain {\sc greedy}\xspace (\emph{without} {\sc lookahead}\xspace) is
sub-optimal and achieves $\Omega(\log n)$ expected route lengths. The
results suggest that {\sc lookahead}\xspace has a significant impact on {\sc greedy}\xspace
routing.
Unfortunately, realizing {\sc greedy}\xspace routing with {\sc lookahead}\xspace on a
degree-$k$ network implies that $O(k^2)$ nodes need to be considered
at each hop, while plain {\sc greedy}\xspace needs to consider only $k$ nodes.
For $k= \log_2 n$, this implies an $O(\log n)$ overhead for {\sc lookahead}\xspace
routing at every hop.
Papillon\xspace demonstrates that it is possible to construct a graph in
which each node has degree $d$ and in which {\sc greedy}\xspace \emph{without}
{\sc lookahead}\xspace has routes of length $\Theta(\log n / \log d)$ in the
worst case, for the metrics $\delta_{clockwise}$, $\delta_{absolute}$
and $\delta_{xor}$. Furthermore, for all $d = o(\log n)$, plain {\sc
greedy} on our network design beats even the results obtained in
\cite{manku:stoc04} with $1$-{\sc lookahead}.
\subsection*{Previous Butterfly-based Constructions}
\noindent
Butterfly networks have been used in the context of routing networks
for DHTs as follows:
\begin{enumerate}
\item Deterministic butterflies have been proposed for DHT routing by
Xu {\it et al.}\xspace~\cite{xu:infocom03}, who subsequently developed their
ideas into Ulysses~\cite{ulysses:icnp03}. Papillon\xspace for distance
function $\delta_{clockwise}$ has structural similarities with
Ulysses -- both are butterfly-based networks. The key differences
are as follows: (a) Ulysses does not use $\delta_{absolute}$ as
its distance function, (b) Ulysses does not use {\sc greedy}\xspace routing,
and (c) Ulysses uses more links than Papillon\xspace for distance
function $\delta_{clockwise}$ -- additional links have been
introduced to ameliorate non-uniform edge congestion caused by
Ulysses' routing algorithm. In contrast, the {\sc congestion-free}
routing algorithm developed in \S\ref{sec:improved} obviates the
need for any additional links in Papillon\xspace (see
Theorem~\ref{thm:congestion_free_clockwise}).
\item Viceroy~\cite{viceroy:podc02} is a \emph{randomized} butterfly
network which routes in $O(\log n)$ hops in expectation with
$\Theta(1)$ links per node. Mariposa (see
reference~\cite{dipsea:2004} or~\cite{manku:podc03}) improves upon
Viceroy by providing routes of length $O(\log n / \log d)$ in the
worst-case, with $d$ out-going links per node. Viceroy and
Mariposa are different from other randomized networks in terms of
their design philosophy.
The Papillon\xspace\ topology borrows elements of the geometric embedding of the
butterfly in a circle from Viceroy \cite{viceroy:podc02} and from
\cite{manku:podc03}, while extending them for {\sc greedy} routing.
\end{enumerate}
\section{Papillon\xspace}
\label{sec:papillon}
We construct two variants of butterfly networks, one each for
distance-functions $\delta_{clockwise}$ and $\delta_{absolute}$. The
network has $n$ nodes arbitrarily positioned on a ring. We label the
nodes from $0$ to $n-1$ according to their order on the ring. For
convenience, $x \bmod n$ always represents an element lying in the
range $[0, n-1]$ (even when $x$ is negative, or greater than $n-1$).
\begin{mydefinition}{Papillon\xspace for $\delta_{clockwise}$}
${\mathcal B}_{clockwise}(\kappa,m)$ is a directed graph, defined
for any pair of integers $\kappa,m \geq 1$
\begin{enumerate}
\item Let $n = \kappa^m m$.
\item Let $\ell(u) \equiv (m-1) - (u \bmod m)$. Each node has
$\kappa$ links. For node $u$, these directed links are to nodes
$(u + x) \bmod n$, where
$ x \in \{1 + im\kappa^{\ell(u)}\ |\ i \in [0, \kappa -1 ] \}$.
We denote the link with node $(u+1) \bmod n$ as $u$'s
\emph{``short link''}. The other $\kappa-1$ links are called
$u$'s \emph{``long links''}.
\end{enumerate}
\end{mydefinition}
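As a concrete illustration, the out-going links of a node in ${\mathcal B}_{clockwise}$ can be enumerated as follows (a minimal Python sketch of the definition above):
\begin{verbatim}
def level(u, m):
    return (m - 1) - (u % m)

def links_clockwise(u, n, m, kappa):
    """Out-going links of node u in B_clockwise(kappa, m); n = kappa^m * m.
    The i = 0 link is u's short link; the others are its long links."""
    l = level(u, m)
    return [(u + 1 + i * m * kappa**l) % n for i in range(kappa)]
\end{verbatim}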
\begin{mydefinition}{Papillon\xspace for $\delta_{absolute}$}
${\mathcal B}_{absolute}(k,m)$ is a directed graph, defined for
any pair of integers $k,m \geq 1$,
\begin{enumerate}
\item Let $n =(2k+1)^m m$.
\item Let $\ell(u) \equiv (m-1) - (u \bmod m)$. Each node has
$2k+2$ out-going links. Node $u$ makes $2k+1$ links with nodes
$(u + x) \bmod n$, where
$ x \in \{1 + im(2k+1)^{\ell(u)}\ |\ i \in [-k, +k] \}$.
Node $u$ also makes an out-going link with node $(u+x) \bmod n$,
where $x = -m+1$.
We denote the link with node $(u+1) \bmod n$
as $u$'s \emph{``short link''}. The other $2k+1$ links are
called $u$'s \emph{``long links''}.
\end{enumerate}
\end{mydefinition}
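The corresponding sketch for ${\mathcal B}_{absolute}$ differs only in the offset set and the extra back edge:
\begin{verbatim}
def links_absolute(u, n, m, k):
    """Out-going links of node u in B_absolute(k, m); n = (2k+1)^m * m."""
    l = (m - 1) - (u % m)
    out = [(u + 1 + i * m * (2 * k + 1)**l) % n for i in range(-k, k + 1)]
    out.append((u - m + 1) % n)   # the back edge, x = -m + 1
    return out
\end{verbatim}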
In both ${\mathcal B}_{clockwise}$ and ${\mathcal B}_{absolute}$, all
out-going links of node $u$ are incident upon nodes with level
$(\ell(u) - 1) \bmod m$. In ${\mathcal B}_{clockwise}$, the short
links are such that each hop diminishes the remaining
\emph{clockwise} distance by at least one. Therefore, {\sc greedy}\xspace routing
is guaranteed to take a finite number of hops. In ${\mathcal
B}_{absolute}$, not every {\sc greedy}\xspace hop diminishes the remaining
\emph{absolute} distance. However, {\sc greedy}\xspace routes are still finite
in length, as we show in the proof of Theorem~\ref{thm:absolute}.
\begin{theorem} \label{thm:clockwise}
{\sc greedy}\xspace routing in ${\mathcal B}_{clockwise}$ with distance
function $\delta_{clockwise}$ takes $3m-2$ hops in
the worst-case. The average is less than $2m-1$ hops.
\end{theorem}
\begin{proof}
For any node $u$, we define
$
\text{SPAN}(u)\equiv \{ v \ |\ 0 \leq \delta_{clockwise}(u, v) < m
\kappa^{\ell(u)+1} \}.
$
Let $t$ and $u$ denote the target node and the current node,
respectively. Routing
proceeds in (at most) three phases:
\begin{center}
\begin{tabular}{lll}
Phase I: & $t \not\in \text{SPAN}(u)$ & (at most $m-1$ hops)\\
Phase II: & $t \in \text{SPAN}(u)$ and
$\delta_{clockwise}(u,t) \ge m$ & (at most $m$ hops)\\
Phase III: & $t \in \text{SPAN}(u)$ and
$\delta_{clockwise}(u,t) < m$ & (at most $m-1$ hops)
\end{tabular}
\end{center}
We now prove upper bounds on the number of hops in each phase.
\begin{enumerate}
\item[I.]
The out-going links of $u$ are incident upon nodes at
level $(\ell(u) - 1) \bmod m$. So eventually, the level of the
current node
$u$ will be $m-1$. At this point,
$t \in \text{SPAN}(u)$ because $\text{SPAN}(u)$ includes
\emph{all} the
nodes. Thus Phase I lasts for at most $m-1$ hops
($\frac{m-1}{2}$ hops on
average).
\item[II.]
{\sc greedy}\xspace will forward the
message to some node $v$ such that $t \in \text{SPAN}(v)$ and
$\ell(v)=\ell(u)-1$. Eventually, the current node $u$ will
satisfy the property $\ell(u) = 0$. This node will forward
the message to some node
$v$ with $\ell(v) = m-1$ such that $\delta_{clockwise}(v,t) < m$,
thereby terminating this phase of routing. There are at most $m$
hops in this phase (at most $m$ on average as well).
\item[III.]
In this phase, {\sc greedy}\xspace will decrease the clockwise
distance by exactly one in each hop by following the
short-links. Eventually, target $t$ will be reached. This phase
takes at most $m-1$ hops ($\frac{m-1}{2}$ hops on
average).
\end{enumerate}
The worst-case route length is $3m-2$.
On average, routes are at most $2m-1$ hops long.
\end{proof}
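The bound of Theorem~\ref{thm:clockwise} is easy to check by brute force on small instances; the following sketch (reusing the helpers above; {\sc greedy}\xspace terminates because the short link always decreases the clockwise distance) enumerates all source--target pairs:
\begin{verbatim}
def greedy_route_length(s, t, n, m, kappa):
    hops = 0
    while s != t:
        s = min(links_clockwise(s, n, m, kappa),
                key=lambda v: d_clockwise(v, t, n))
        hops += 1
    return hops

kappa, m = 2, 3
n = kappa**m * m   # 24 nodes
worst = max(greedy_route_length(s, t, n, m, kappa)
            for s in range(n) for t in range(n))
print(worst, "<=", 3 * m - 2)   # worst case should not exceed 7
\end{verbatim}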
\begin{theorem} \label{thm:absolute}
{\sc greedy}\xspace routing in ${\mathcal B}_{absolute}$ with distance
function $\delta_{absolute}$ takes $3m-2$ hops in
the worst-case. The average is less than $2m-1$ hops.
\end{theorem}
\begin{proof}
For any node $u$, we define
\[
\text{SPAN}(u)\equiv \{ v \ |\ \delta_{absolute}(u, v) = | c +
m\sum_{i=0}^{\ell(u)} (2k+1)^i d_i |,\ c \in [0, m-1],\ d_i \in [-k,
+k] \,\}.
\]
Let $t$ and $u$ denote the target node and the current node,
respectively.
Routing proceeds in (at most) three phases:
\begin{center}
\begin{tabular}{lll}
Phase I: & $t \not\in \text{SPAN}(u)$ & (at most $m-1$ hops)\\
Phase II: & $t \in \text{SPAN}(u)$ and
$\delta_{absolute}(u,t) \ge m$ & (at most $m$ hops)\\
Phase III: & $t \in \text{SPAN}(u)$ and
$\delta_{absolute}(u,t) < m$ & (at most $m-1$ hops)
\end{tabular}
\end{center}
We now prove upper bounds on the number of hops in each phase.
\begin{enumerate}
\item[I.] All out-going links of node $u$ are incident upon nodes at
level $(\ell(u) - 1) \bmod m$. So eventually, the current node
$u$ will satisfy the property $\ell(u) = m - 1$. At this point,
$t \in \text{SPAN}(u)$ because $\text{SPAN}(u)$ includes
\emph{all} nodes. Thus Phase I lasts at most $m-1$ hops
(at most $\frac{m-1}{2}$ hops on average).
\item[II.]
Phase 2 terminates if target node $t$ is reached, or if
$\delta_{absolute}(u, t) < m$.
Node
$u$ always forwards the message to some node $v$ such that
$t \in \text{SPAN}(v)$ and $\ell(v) = \ell(u) - 1$. So
eventually, either target $t$ is reached, or the current node
$u$ satisfies the property
$\ell(u) = 0$. At this point, if node $u$ forwards the message to
node $v$, then it is guaranteed that $\ell(v) =
m-1$ and $\delta_{absolute}(v,t) < m$,
thereby terminating Phase II. There are at most $m$
hops in this phase (at most $m$ on average as well).
\item[III.] The target node $t$ is reached in at most $m-1$
hops (the existence of the ``back
edge'' that connects node $u$ to node $(u + 1 - m) \bmod n$
guarantees this). This phase takes at most $m-1$ hops (at most
$\frac{m-1}{2}$ hops on average).
\end{enumerate}
The worst-case route length is $3m-2$.
On average, routes are at most $2m-1$ hops long.
\end{proof}
\bigskip
Routes in both ${\mathcal B}_{clockwise}$ and ${\mathcal B}_{absolute}$
are at most $3m-2$ hops, which is $O(\log (\kappa^m m) / \log
\kappa)$ and $O(\log ((2k+1)^m m) / \log (2k+2))$, respectively.
Given degree $d$ and diameter $\Delta$, the size of Papillon\xspace is $n =
d^{\Theta(\Delta)}\,\Delta$ nodes. Given degree $d$ and network size $n$,
the longest route has length $\Delta = O(\log n / \log d)$.
\section{Improved Routing Algorithms for Papillon\xspace}
\label{sec:improved}
{\sc greedy}\xspace routing does not route along shortest-paths in ${\mathcal
B}_{clockwise}$ and ${\mathcal B}_{absolute}$. We demonstrate this
constructively below, where we study a routing strategy called {\sc
hypercubic-routing} which achieves shorter path lengths than {\sc greedy}\xspace.
\subsection*{Hypercubic Routing}
\begin{theorem} \label{thm:fast_clockwise}
There exists a routing strategy for ${\mathcal B}_{clockwise}$ in
which routes take $2m-1$ hops in the worst-case. The average is at
most $1.5m$ hops.
\end{theorem}
\begin{proof}
Consider the following {\sc hypercubic-routing} algorithm on
${\mathcal B}_{clockwise}$. Let $s$ be the source node, $t$ the
target, and let $dist = \delta_{clockwise}(s, t) = c + m +
m\sum_{i=0}^{i=m-1} \kappa^i d_i$ with $0 \leq c <m$ and $0 \leq
d_i < \kappa$ ($dist$ has exactly one such representation, unless
$dist \leq m$ in which case routing takes $< m$ hops).
Phase I: Follow the short-links to ``fix'' the $c$-value to zero.
This takes at most $m-1$ hops (at most $0.5m$ hops on average).
Phase II: In exactly $m$ hops, ``fix'' the $d_i$'s in succession to
make them all zeros: When the current node is $u$, we fix
$d_{\ell(u)}$ to zero by following the appropriate long-link, i.e.,
by shrinking the clockwise distance by $d_{\ell(u)}
\kappa^{\ell(u)} m + 1$. The new node $v$ satisfies $\ell(v) =
(\ell(u)+m-1) (\bmod~m)$. When each $d_i$ is zero, we have reached
the target.
Overall, the worst-case route length is $2m-1$. Average route
length is at most $1.5m$.
\end{proof}
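The digit decomposition used in this proof can be written compactly as follows (a minimal Python sketch; valid when $dist \geq m$, the remaining case being handled by short links alone):
\begin{verbatim}
def decompose_clockwise(dist, m, kappa):
    """Write dist = c + m + m * sum_i d_i * kappa**i,
    with 0 <= c < m and 0 <= d_i < kappa."""
    c = dist % m
    q = (dist - c - m) // m
    digits = []
    for _ in range(m):
        digits.append(q % kappa)
        q //= kappa
    return c, digits
\end{verbatim}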
\begin{theorem} \label{thm:fast_absolute}
There exists a routing strategy for ${\mathcal B}_{absolute}$ in
which routes take $2m-1$ hops in the worst-case. The average is at
most $1.5m$ hops.
\end{theorem}
\begin{proof}
Let $s$ be the source node, $t$ the target.
Phase I: Follow the short-links in the clockwise direction, to
reach a node $s'$ such that $\ell(s') = \ell(t)$. This takes at
most $m-1$ hops (at most $0.5m$ hops on average). The remaining
distance can be expressed as $m + m\sum_{i=0}^{i=m-1} (2k+1)^i d_i$
where $-k \leq d_i \leq k$. There is a unique such representation.
Phase II: In exactly $m$ hops, ``fix'' the $d_i$'s in succession to
make them all zeros: When the current node is $u$, we fix
$d_{\ell(u)}$ by following the appropriate long-link, i.e.,
by traveling distance $1 + d_{\ell(u)} (2k+1)^{\ell(u)} m$ along
the circle (this distance is positive or negative, depending upon
the sign of $d_{\ell(u)}$). The new node $v$ satisfies $\ell(v) =
(\ell(u)-1) (\bmod~m)$. When each $d_i$ is zero, we have reached
the target.
Overall, the worst-case route length is $2m-1$. Average route
length is at most $1.5m$.
\end{proof}
\bigskip
Note that the edges that connect node $u$ to node $(u+1-m) \bmod n$
are redundant for {\sc hypercubic-routing} since they are never used.
However, these edges play a crucial role in {\sc greedy}\xspace routing in
${\mathcal B}_{absolute}$ (to guide the message to the target in
Phase 3).
\subsection*{Congestion-Free Routing}
Theorems~\ref{thm:fast_clockwise} and ~\ref{thm:fast_absolute} prove
that {\sc greedy}\xspace routing is sub-optimal in the constants. {\sc
hypercubic-routing}, as described above, is faster than
{\sc greedy}\xspace. However, it causes {\it edge-congestion}
because short-links are used more often than long-links. Let $\pi$
denote the ratio of maximum and minimum loads on edges caused by all
$n \choose 2$ pairwise routes. {\sc hypercubic-routing} for
${\mathcal B}_{clockwise}$ consists of two phases (see Proof of
Theorem~\ref{thm:fast_clockwise}). The load due to Phase II is
uniform -- all edges (both short-links and long-links) are used
equally. However, Phase I uses only short-links, due to which $\pi
\not= 1$. We now modify the routing scheme slightly to obtain $\pi =
1$ for both ${\mathcal B}_{clockwise}$ and ${\mathcal B}_{absolute}$.
\begin{theorem} \label{thm:congestion_free_clockwise}
There exists a congestion-free routing strategy in ${\mathcal
B}_{clockwise}$ that takes $2m - 1$ hops in the worst-case and at
most $1.5m$ hops on average, in which $\pi = 1$.
\end{theorem}
\begin{proof}
The theorem is proved constructively, by building a new routing
strategy called {\sc congestion-free}. This routing strategy is
exactly the same as {\sc hypercubic-routing}, with a small change.
Let $s$ be the source node, $t$ the target. Let $c = (t+m-s) \bmod
m$, the difference in levels between $\ell(s)$ and $\ell(t)$.
Phase I: For $c$ steps, follow any out-going link, chosen uniformly
at random. We thus reach a node $s'$ such that $\ell(s') =
\ell(t)$.
Phase II: The remaining distance is $dist = \delta_{clockwise}(s',
t) = m+ m\sum_{i=0}^{i=m-1} \kappa^i d_i$ with $0 \leq d_i <
\kappa$. Continue with Phase II of the {\sc hypercubic-routing}
algorithm for ${\mathcal B}_{clockwise}$ (see
Theorem~\ref{thm:fast_clockwise}).
It is easy to see that in this case, all outgoing links (short- and
long-) are used with equal probability along the route. Hence,
$\pi = 1$.
\end{proof}
\begin{theorem} \label{thm:congestion_free_absolute}
There exists a congestion-free routing strategy in ${\mathcal
B}_{absolute}$ that takes $2m - 1$ hops in the worst-case and at
most $1.5m$ hops on average, in which $\pi = 1$.
\end{theorem}
\begin{proof}
We will ignore the edges that connect node $u$ to node $(u+1-m)
\bmod n$ (recall that these edges are not used in {\sc
hypercubic-routing} described in
Theorem~\ref{thm:fast_absolute}). We will ensure $\pi = 1$ for the
remainder of the edges.
{\sc congestion-free} routing follows the same idea as that for
${\mathcal B}_{clockwise}$
(Theorem~\ref{thm:congestion_free_clockwise}): Let $s$ be the
source node, $t$ the target. Let $c = (t+m-s) \bmod m$, the
difference in levels between $\ell(s)$ and $\ell(t)$. In Phase I,
for $c$ steps, we follow any out-going link, chosen uniformly at
random. We thus reach a node $s'$ such that $\ell(s') = \ell(t)$.
In Phase II, we continue as per Phase II of the {\sc
hypercubic-routing} algorithm for ${\mathcal B}_{absolute}$
(Theorem~\ref{thm:fast_absolute}).
\medskip
An alternate {\sc congestion-free} routing algorithm for ${\mathcal
B}_{absolute}$ that routes deterministically is based upon the
following idea: We express any integer $a \in [-k, +k]$ as the sum
of two integers: $a' = \Floor{(k+a)/2}$ and $a'' =
-\Floor{(k-a)/2}$. It is easy to verify that $a = a' + a''$. Now
if we list all pairs $\langle a', a''\rangle$ for $a \in [-k, +k]$,
then each integer in the range $[-k, +k]$ appears exactly twice as
a member of some pair.
Let $s$ be the source node, $t$ the target. Let $c = (t+m-s) \bmod
m$, the difference in levels between $\ell(s)$ and $\ell(t)$. The
remaining distance is $dist = c + m+ m\sum_{i=0}^{i=m-1} (2k+1)^i
d_i$ with $-k \leq d_i \leq k$ (there is a unique way to represent
$dist$ in this fashion).
Phase I: For $c$ steps, if the current node is $u$, then we follow
the edge corresponding to $d'_{\ell(u)}$, the first member of the
pair for $d_{\ell(u)}$ in the decomposition above, i.e., the edge
that covers distance $1 + m\,d'_{\ell(u)}(2k+1)^{\ell(u)}$ (in the
clockwise or the anti-clockwise direction, depending upon the sign
of $d_{\ell(u)}'$). At the end of this phase, we reach a node $s'$
such that $\ell(s') = \ell(t)$.
Phase II: Continue with Phase II of the {\sc hypercubic-routing}
algorithm for ${\mathcal B}_{absolute}$
(Theorem~\ref{thm:fast_absolute}), for exactly $m$ steps.
Due to the decomposition of integers in $[-k, +k]$ into pairs, as
defined above, all outgoing links (short- and long-) are used
equally. Hence, $\pi = 1$.
\end{proof}
\bigskip
{\bf Notes}: In the context of the current Internet, out-going links
correspond to full-duplex TCP connections. Therefore, the undirected
graph corresponding to ${\mathcal B}_{absolute}$ is of interest. In
this undirected graph, it is possible to devise congestion-free
routing with $\pi = 1$, maximum path length $m + \Floor{m/2}$ and
average route-length at most $1.25m$. This is achieved by making at
most $\Floor{m/2}$ initial random steps either in the down or the up
direction, whichever gets to a node with level $\ell(t)$ faster.
\section{Papillon\xspace with Distance Function $\delta_{xor}$}
\label{sec:xor}
In this section, we define a variant of Papillon\xspace in which {\sc greedy}\xspace
routing with distance function $\delta_{xor}$ results in worst-case
route length $\Theta(\log n / \log d)$, with $n$ nodes, each having
$d$ out-going links. For integers $s$ and $t$, $\delta_{xor}(s, t)$
is defined as the number of bit-positions in which the binary
representations of $s$ and $t$ differ.
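Concretely, $\delta_{xor}$ is the Hamming distance between the binary
representations of the two labels; an illustrative one-line Python sketch:
\begin{verbatim}
# delta_xor(s, t): number of bit positions in which s and t differ.
def delta_xor(s, t):
    return bin(s ^ t).count("1")
\end{verbatim}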
\begin{mydefinition}{Papillon\xspace for $\delta_{xor}$}
${\mathcal B}_{xor}(\lambda, m)$ is a directed graph, defined
for any pair of integers $\lambda, m \geq 1$ where $\lambda$ is a
power of two.
\begin{enumerate}
\item The network has $n = m\lambda^m$ nodes labeled from $0$ to
$n-1$.
\item Let $u$ denote a node. Let $\ell(u)$ denote the unique
integer $x \in [0, m-1]$ that satisfies $x\lambda^m \leq u <
(x+1)\lambda^m$. The node $u$ makes links with nodes with labels
\[
((\ell(u) + 1) \bmod m)\lambda^m + i\lambda^{\ell(u)},
\quad \mathrm{where}\ \
0 \leq i < \lambda.
\]
Thus, if $(u, v)$ is an edge, then $\ell(v) =
(\ell(u) + 1) \bmod m$.
\end{enumerate}
\end{mydefinition}
\begin{theorem} \label{thm:xor}
{\sc greedy}\xspace routing in ${\mathcal B}_{xor}$ with distance function
$\delta_{xor}$ takes $2m-1$ hops in
the worst-case. The average is at most $1.5m$ hops.
\end{theorem}
\begin{proof}
Let the current node be $s$. Let $t$ denote the target node.
Then $s \oplus t$, the bit-wise exclusive-OR of $s$ and $t$, can
be expressed uniquely as $c + \sum_{i=0}^{m-1}
\lambda^i d_i$, where $c$ is a non-negative multiple of $\lambda^m$ and $0 \leq d_i < \lambda$.
Routing proceeds in two phases. In Phase I, each of the $d_i$ is
set to zero. This takes at most $m$ steps (at most $m$ on
average). In Phase II, the most significant
$\Ceiling{\log_2 m}$ bits of $s \oplus t$ are set to zero,
thereby
reaching the target. This phase takes at most $m-1$ hops (at most
$\frac{m-1}{2}$ on average).
\end{proof}
\section{Summary}
\label{summary}
We presented Papillon\xspace, a variant of multi-butterfly networks that supports
asymptotically optimal {\sc greedy}\xspace routes of length $O(\log n / \log d)$
with distance functions $\delta_{clockwise}$, $\delta_{absolute}$ and
$\delta_{xor}$,
when each node makes $d$ out-going links, in an $n$-node network.
Papillon\xspace is the first construction with this property.
\medskip
Some questions that remain unanswered:
\begin{enumerate}
\item {\it Is it possible to devise graphs in which {\sc greedy}\xspace routes
with distance function $\delta_{clockwise}$ and
$\delta_{absolute}$ are along shortest paths? } As
Theorems~\ref{thm:fast_clockwise} and~\ref{thm:fast_absolute} illustrate,
{\sc greedy}\xspace routing on Papillon\xspace does not proceed along shortest paths.
Is this property inherent in {\sc greedy}\xspace routes?
\item {\it What is the upper bound for the Problem of Greedy Routing
on the Circle? } Papillon\xspace furnishes a lower bound, which is
asymptotically optimal. However, constructing the
largest-possible graph with degree $d$ and diameter $\Delta$ is
still an interesting combinatorial problem.
\end{enumerate}
\section{Table of Political Science papers that use text classification}
\noindent Note that, to facilitate exposition, in the main text we use the labels political and non-political to describe
the problem of binary classification. Without loss of generality, in this supplemental information material,
we use the positive vs.\ negative class dichotomy instead.
\section{Detailed explanations about the EM algorithm to estimate parameters}
\label{subsec:em}
Let $\mathbf{D}^{lp}$, $\mathbf{D}^{ln}$ and $\mathbf{D}^u$ be the document feature matrices for documents
with positive labels, documents with negative labels, and unlabeled documents, respectively.
Also let $N^{lp}$, $N^{ln}$, and $N^u$ be the numbers of documents with positive labels, with negative labels, and without labels, respectively.
Likewise, let $\mathbf{C}^{lp}$ and $\mathbf{C}^{ln}$ be the vectors of positive and negative labels. Then, the observed-data likelihood is:
\begin{equation}
\begin{split}
&p(\pi, \boldsymbol{\eta} \vert \mathbf{D}, \mathbf{C}^{lp}, \mathbf{C}^{ln}) \\
&\propto p(\pi) p(\boldsymbol{\eta}) p(\mathbf{D}^{lp}, \mathbf{C}^{lp} \vert \pi, \boldsymbol{\eta}) p(\mathbf{D}^{ln}, \mathbf{C}^{ln} \vert \pi, \boldsymbol{\eta}) \Big[p(\mathbf{D}^u \vert \pi, \boldsymbol{\eta})\Big]^{\lambda} \\
&= p(\pi) p(\boldsymbol{\eta})
\times \prod_{i=1}^{N^{lp}} p(\mathbf{D}_i^{lp} \vert Z_{i} = 1, \eta) p(Z_{i} = 1 \vert \pi) \times \prod_{i=1}^{N^{ln}} \Big\{ p(\mathbf{D}_i^{ln} \vert Z_{i} = 0, \eta) p(Z_{i} = 0 \vert \pi) \Big\} \\
&\quad \times \Bigg[\prod_{i=1}^{N^{u}} \Big\{ p(\mathbf{D}_i^{u} \vert Z_{i} = 1, \boldsymbol{\eta}) p(Z_{i} = 1 \vert \pi) + p(\mathbf{D}_i^{u} \vert Z_{i} = 0, \boldsymbol{\eta}) p(Z_{i} =0\vert \pi) \Big\} \Bigg]^{\lambda} \\
&\propto \underbrace{\big\{(1-\pi)^{\alpha_0 - 1} \prod_{v=1}^V \eta_{v0}^{\beta_{0v} - 1}\big\} \times \big\{\pi^{\alpha_1 - 1} \prod_{v=1}^V \eta_{v1}^{\beta_{1v} - 1}\big\}}_\text{prior}
\times
\underbrace{\prod_{i=1}^{N^{lp}} \Big\{ \prod_{v=1}^V \eta_{v1}^{D_{iv}}\times \pi \Big\}}_\text{positive labeled doc. likelihood} \\
&\quad \times \underbrace{\prod_{i=1}^{N^{ln}} \Big\{ \prod_{v=1}^V \eta_{v0}^{D_{iv}}\times (1-\pi) \Big\}}_\text{negative labeled doc. likelihood}
\times \underbrace{\Bigg[\prod_{i=1}^{N^{u}} \Big\{ \prod_{v=1}^V \eta_{v0}^{D_{iv}}\times (1-\pi) + \prod_{v=1}^V \eta_{v1}^{D_{iv}}\times \pi \Big\}\Bigg]^{\lambda}
}_\text{unlabeled doc. likelihood}
\end{split}
\end{equation}
We weight the part of the observed likelihood that refers to the unlabeled documents by $\lambda \in [0, 1]$.
This is done because we typically have many more unlabeled documents than labeled documents.
By downweighting the information from the unlabeled documents (i.e., setting $\lambda$ to be small),
we rely more on the information from the labeled documents, which is more reliable than that from the unlabeled documents.
We estimate the parameters $\pi$ and $\eta$ using the EM algorithm~\cite{dempster1977maximum}, and our implementation is presented as pseudocode in Algorithm~\ref{alg:em}.
The Q function, the expectation of the complete-data log-likelihood, is
\begin{equation}
\begin{split}
Q &\equiv \E_{\mathbf{Z} \vert \pi^{(t)}, \boldsymbol{\eta}^{(t)}, D, C}[\log p(\pi, \boldsymbol{\eta}, \mathbf{Z} \vert \mathbf{D}, \mathbf{C})] \\
&= (\alpha_0 -1) \log (1 - \pi^{(t)}) + (\alpha_1 - 1) \log \pi^{(t)} + \sum_{v=1}^V \left\{ (\beta_{0v} - 1) \log \eta_{v0}^{(t)} + (\beta_{1v} - 1) \log \eta_{v1}^{(t)} \right\} \\
&\quad + \sum_{i=1}^{N^{lp}} \Big\{ \sum_{v=1}^V D_{iv} \log \eta_{v1}^{(t)} + \log \pi^{(t)} \Big\} + \sum_{i=1}^{N^{ln}} \Big\{ \sum_{v=1}^V D_{iv} \log \eta_{v0}^{(t)} + \log (1-\pi^{(t)}) \Big\} \\
&\quad + \lambda \left[ \sum_{i=1}^{N^{u}} p_{i0} \Big\{ \sum_{v=1}^V D_{iv} \log \eta_{v0}^{(t)} + \log (1-\pi^{(t)}) \Big\} + p_{i1} \Big\{ \sum_{v=1}^V D_{iv} \log \eta_{v1}^{(t)} + \log \pi^{(t)} \Big\}\right]
\end{split}
\end{equation}
where $p_{ik}$ is the posterior probability that document $i$ is assigned to the $k$th cluster, $k \in \{0, 1\}$, given the data and the parameters at the $t$th iteration.
If a document has a positive label, $p_{i0} = 0$ and $p_{i1} = 1$; if it has a negative label, $p_{i0} = 1$ and $p_{i1} = 0$.
\begin{algorithm}[t]
\SetAlgoLined \KwResult{Maximize
$p(\pi^{(t)}, \boldsymbol{\eta}^{(t)} \mid \mathbf{D}^l, \mathbf{Z}^l, \mathbf{D}^u, \boldsymbol{\alpha}, \boldsymbol{\beta})$}
\eIf{In the first iteration of Active learning}{ Initialize $\pi$ and $\boldsymbol{\eta}$
by Naive Bayes\; \quad $\pi^{(0)} \gets$ NB($\mathbf{D}^l$, $\mathbf{Z}^l$, $\balpha$)\; \quad
$\boldsymbol{\eta}^{(0)} \gets$ NB($\mathbf{D}^l$, $\mathbf{Z}^l$, $\bbeta$)\; }{ Inherit $\pi^{(0)}$ and
$\boldsymbol{\eta}^{(0)}$ from the previous iteration of Active learning\; }
\While{$p(\pi^{(t)}, \boldsymbol{\eta}^{(t)} \mid \mathbf{D}^l, \mathbf{Z}^l, \mathbf{D}^u, \balpha, \bbeta)$ does not
converge}{
(1) E step: obtain the probability of the class for unlabeled documents\;
\quad $p(\mathbf{Z}^u \mid \pi^{(t)}, \boldsymbol{\eta}^{(t)}, \mathbf{D}^l, \mathbf{Z}^l, \mathbf{D}^u) \gets$ E step($\mathbf{D}^u$,
$\pi^{(t)}$, $\boldsymbol{\eta}^{(t)}$)\;
(2) Combine the estimated classes for the unlabeled docs and the known
classes for the labeled docs\; \quad
$p(\mathbf{Z} \mid \pi^{(t)}, \boldsymbol{\eta}^{(t)}, \mathbf{D}^l, \mathbf{Z}^l, \mathbf{D}^u)\gets$ combine($\mathbf{D}^l$, $\mathbf{D}^u$,
$\mathbf{Z}^l$, $p(\mathbf{Z}^u \mid \pi^{(t)}, \boldsymbol{\eta}^{(t)}, \mathbf{D}^l, \mathbf{Z}^l, \mathbf{D}^u)$)\;
(3) M step: Maximize
$Q \equiv \mathbb{E}[p(\pi, \boldsymbol{\eta}, \mathbf{Z}^u \mid \mathbf{D}^l, \mathbf{Z}^l, \mathbf{D}^u, \balpha, \bbeta)]$
w.r.t $\pi$ and $\boldsymbol{\eta}$\; \quad $\pi^{(t+1)} \gets \text{argmax}\ Q$\; \quad
$\boldsymbol{\eta}^{(t+1)} \gets \text{argmax}\ Q$\;
(4) Check convergence: Obtain the value of
$p(\pi^{(t+1)}, \boldsymbol{\eta}^{(t+1)} \mid \mathbf{D}^l, \mathbf{Z}^l, \mathbf{D}^u, \balpha, \bbeta)$\; }
\caption{EM algorithm to classify text}
\label{alg:em}
\end{algorithm}
If a document has no label,
\begin{eqnarray}
\label{eq:pred}
p_{i0} &=& 1 - p_{i1} \nonumber \\
p_{i1} &=& \frac{\prod_{v=1}^V \eta_{v1}^{D_{iv}} \times \pi}{ \prod_{v=1}^V \left\{\eta_{v0}^{D_{iv}} \times (1-\pi)\right\} + \prod_{v=1}^V \left\{ \eta_{v1}^{D_{iv}} \times \pi \right\}}
\end{eqnarray}
Equation~\ref{eq:pred} also serves as the prediction equation:
the predicted class of a document $i$ is the $k$ that maximizes this posterior probability.
In the M-step, we maximize the Q function and obtain the updating equations for $\pi$ and $\eta$.
The updating equation for $\pi$ is the following.
\begin{equation}
\begin{split}
\pi^{(t+1)} = \frac{\alpha_1 - 1 + N^{lp} + \lambda \sum_{i=1}^{N^u} p_{i1} }{\left(\alpha_1 - 1 + N^{lp} + \lambda \sum_{i=1}^{N^u} p_{i1}\right) +\left(\alpha_0 - 1 + N^{ln} + \lambda \sum_{i=1}^{N^u} p_{i0}\right)}
\end{split}
\end{equation}
The updating equation for $\eta$ is the following.
\begin{equation}
\begin{split}
\hat{\eta}_{v0}^{(t+1)} &\propto (\beta_{v0} -1) + \sum_{i=1}^{N^{ln}} D_{iv} + \lambda \sum_{i=1}^{N^{u}} p_{i0} D_{iv}, \quad v = 1, \ldots, V \\
\hat{\eta}_{v1}^{(t+1)} &\propto (\beta_{v1} -1) + \sum_{i=1}^{N^{lp}} D_{iv} + \lambda \sum_{i=1}^{N^{u}} p_{i1} D_{iv}, \quad v = 1, \ldots, V
\end{split}
\end{equation}
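To make the updates concrete, the following Python sketch (illustrative only, using numpy; the variable and function names are ours, not the \textit{activeText} API) implements one EM iteration of the two-class model with symmetric hyperparameters. The E step computes the posterior of Equation~\ref{eq:pred} in log space for numerical stability, and the M step applies the two updating equations above.
\begin{verbatim}
import numpy as np

def e_step(D_u, pi, eta):
    # D_u: (N_u, V) counts; eta: (V, 2) word-class probs; pi: P(Z = 1).
    log_p1 = D_u @ np.log(eta[:, 1]) + np.log(pi)
    log_p0 = D_u @ np.log(eta[:, 0]) + np.log(1 - pi)
    m = np.maximum(log_p0, log_p1)  # stabilize before exponentiating
    return np.exp(log_p1 - m) / (np.exp(log_p0 - m) + np.exp(log_p1 - m))

def m_step(D_lp, D_ln, D_u, p1, lam, alpha=2.0, beta=2.0):
    # pi update: (pseudo-)counts of positive vs. negative documents.
    pos = alpha - 1 + D_lp.shape[0] + lam * p1.sum()
    neg = alpha - 1 + D_ln.shape[0] + lam * (1 - p1).sum()
    pi = pos / (pos + neg)
    # eta updates: word counts, with unlabeled documents weighted by lam.
    eta1 = beta - 1 + D_lp.sum(0) + lam * (p1[:, None] * D_u).sum(0)
    eta0 = beta - 1 + D_ln.sum(0) + lam * ((1 - p1)[:, None] * D_u).sum(0)
    eta = np.column_stack([eta0 / eta0.sum(), eta1 / eta1.sum()])
    return pi, eta
\end{verbatim}
Iterating these two steps until the observed-data likelihood stabilizes reproduces the loop in Algorithm~\ref{alg:em}.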
\newpage
\section{EM algorithm for binary classification with multiple clusters}
\label{sec:multiple_cluster_model}
\subsection{Summary}
The model outlined above assumes that there are two latent clusters,
linked to the positive and the negative class, respectively.
However, this assumption can be relaxed to link multiple clusters to the negative class.
In the world of mixture models, the simplest setup is to let $K=2$ since the classification
goal is binary, and we can link each latent cluster to the final classification categories.
A more general setup is to use $K>2$ even when the goal is binary classification.
If $K>2$, but our focus is to uncover the identity of one cluster,
we can choose one of the latent clusters to be linked to the ``positive'' class
and let all other latent clusters be linked to the ``negative'' class (see e.g., \citealt{lars:rubi:01} for
a similar idea in the realm of record linkage).
In other words, we collapse the $K-1$ latent clusters into one class for the classification purpose.
Using $K>2$ makes sense if the ``negative'' class consists of multiple sub-categories.
For instance, suppose researchers are interested in classifying news articles into political news or not.
Then, it is reasonable to assume that the non-political news category consists of multiple sub-categories, such as technology, entertainment, and sports news.
\subsection{Model}
This section presents a model and inference algorithm when we use more than 2 latent clusters
in estimation but the final classification task is binary. In other words, we impose a hierarchy
where many latent clusters are collapsed into the negative class. In contrast, the positive class
consists of a single cluster. The model is as follows:
\begin{equation}
\begin{split}
\bpi &\sim Dirichlet(\balpha) \\
Z_i &\stackrel{i.i.d}{\sim} Categorical(\bpi) \\
\boldsymbol{\eta}_{\cdot k} &\stackrel{i.i.d}{\sim} Dirichlet(\boldsymbol{\beta}_k), \quad k \in \{1,\ldots, K\} \\
\mathbf{D}_{i\cdot} \vert Z_{i} = k &\stackrel{i.i.d}{\sim} Multinomial(n_i, \boldsymbol{\eta}_{\cdot k}) \\
\end{split}
\end{equation}
Note that $\bpi$ is now a probability vector of length $K$, and it is drawn from a Dirichlet distribution.
Let $k^*$ be the index of the cluster linked to the positive class.
The observed likelihood is the following.
\begin{equation}
\begin{split}
&p(\bpi, \boldsymbol{\eta} \vert \mathbf{D}, \mathbf{C}^{lp}, \mathbf{C}^{ln}) \\
&\propto p(\bpi) p(\boldsymbol{\eta}) p(\mathbf{D}^{lp}, \mathbf{C}^{lp} \vert \bpi, \boldsymbol{\eta}) p(\mathbf{D}^{ln}, \mathbf{C}^{ln} \vert \bpi, \boldsymbol{\eta}) \Big[p(\mathbf{D}^u \vert \bpi, \boldsymbol{\eta})\Big]^{\lambda} \\
&= p(\bpi) p(\boldsymbol{\eta})
\times \prod_{i=1}^{N^{lp}} p(\mathbf{D}_i^{lp} \vert Z_{i} = k^*, \eta) p(Z_{i} = k^* \vert \bpi) \\
&\quad \times \prod_{i=1}^{N^{ln}}\sum_{k\neq k^*} \Big\{ p(\mathbf{D}_i^{ln} \vert Z_{i} = k, \eta) p(Z_{i} = k \vert \bpi) \Big\} \times \Bigg[\prod_{i=1}^{N^{u}}\sum_{k=1}^K \Big\{ p(\mathbf{D}_i^{u} \vert Z_{i} = k, \boldsymbol{\eta}) p(Z_{i} = k \vert \bpi)\Big\} \Bigg]^{\lambda} \\
&\propto \underbrace{\prod_{k=1}^K\left\{\pi_k^{\alpha_k - 1} \prod_{v=1}^V \eta_{vk}^{\beta_{kv} - 1}\right\}}_\text{prior}
\times
\underbrace{\prod_{i=1}^{N^{lp}} \Big\{ \prod_{v=1}^V \eta_{vk^*}^{D_{iv}}\times \pi_{k^*} \Big\}}_\text{positive labeled doc. likelihood} \\
&\quad \times \underbrace{\prod_{i=1}^{N^{ln}}\sum_{k\neq k^*} \Big\{ \prod_{v=1}^V \eta_{vk}^{D_{iv}}\times \pi_k \Big\}}_\text{negative labeled doc. likelihood}
\times \underbrace{\Bigg[\prod_{i=1}^{N^{u}} \sum_{k=1}^K \Big\{\prod_{v=1}^V \eta_{vk}^{D_{iv}}\times \pi_k \Big\}\Bigg]^{\lambda}
}_\text{unlabeled doc. likelihood}
\end{split}
\end{equation}
The Q function (the expectation of the complete log likelihood) is
\begin{equation}
\begin{split}
Q &\equiv \E_{\mathbf{Z} \vert \bpi^{(t)}, \boldsymbol{\eta}^{(t)}, D, C}[\log p(\bpi, \boldsymbol{\eta}, \mathbf{Z} \vert \mathbf{D}, \mathbf{C})] \\
&= \sum_{k=1}^K \left[(\alpha_k - 1) \log \pi_k^{(t)} + \sum_{v=1}^V \left\{ (\beta_{kv} - 1) \log \eta_{vk}^{(t)} \right\}\right] \\
&\quad + \sum_{i=1}^{N^{lp}} \Big\{ \sum_{v=1}^V D_{iv} \log \eta_{vk^*}^{(t)} + \log \pi_{k^*}^{(t)} \Big\}
+ \sum_{i=1}^{N^{ln}} \sum_{k\neq k^*} p_{ik}\Big\{ \sum_{v=1}^V D_{iv} \log \eta_{vk}^{(t)} + \log \pi_k^{(t)} \Big\} \\
&\quad + \lambda \left[ \sum_{i=1}^{N^{u}}\sum_{k=1}^K p_{ik} \Big\{ \sum_{v=1}^V D_{iv} \log \eta_{vk}^{(t)} + \log \pi_k^{(t)} \Big\}\right]
\end{split}
\end{equation}
The posterior probability of $Z_i = k$, $p_{ik}$, is
\begin{equation}
\begin{split}
p_{ik} &= \frac{\prod_{v=1}^V \eta_{vk}^{D_{iv}} \times \pi_k}{\sum_{k=1}^K \left[\prod_{v=1}^V \eta_{vk}^{D_{iv}} \times \pi_k \right]}
\end{split}
\end{equation}
The M-step estimators are as follows. The updating equation for $\pi$ is:
\begin{equation}
\begin{split}
\hat{\pi}_k \propto
\begin{cases}
\alpha_k - 1 + \sum_{i=1}^{N^{ln}} p_{ik} + \lambda \sum_{i=1}^{N^u} p_{ik} &\text{if}\ k \neq k^*\\
\alpha_k - 1 + N^{lp} + \lambda \sum_{i=1}^{N^u} p_{ik^*} &\text{if}\ k = k^*
\end{cases}
\end{split}
\end{equation}
The updating equation for $\eta$ is the following.
\begin{equation}
\begin{split}
\hat{\eta}_{vk} \propto
\begin{cases}
(\beta_{kv} - 1) + \sum_{i=1}^{N^{ln}} p_{ik} D_{iv} +
\lambda \sum_{i=1}^{N^{u}} p_{ik} D_{iv} & \text{if}\ k \neq k^* \\
(\beta_{kv} - 1) + \sum_{i=1}^{N^{lp}} D_{iv} +
\lambda \sum_{i=1}^{N^{u}} p_{ik^*} D_{iv} & \text{if}\ k = k^*
\end{cases}
\end{split}
\end{equation}
Note that we downweight the information from the unlabeled documents by $\lambda$, to utilize more reliable information from labeled documents.
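To illustrate the collapsing step, a minimal Python sketch (our notation, not the package API): given the $N \times K$ posterior matrix from the E step, the binary prediction compares the $k^*$ column against the combined mass of all other clusters.
\begin{verbatim}
import numpy as np

# Sketch: collapse K latent clusters into a binary prediction.
def collapse_to_binary(p, k_star):
    p_pos = p[:, k_star]                # posterior of the positive cluster
    p_neg = p.sum(axis=1) - p_pos       # all other clusters -> negative
    return (p_pos > p_neg).astype(int)  # 1 = positive class
\end{verbatim}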
\subsection{Results}
\label{subsec:add_res}
Figure \ref{fig:cluster} shows the results of a model with just two latent clusters vs. a model with 5 latent clusters but
only two final classes (positive vs. negative).
The darker lines show the results with 5 latent clusters and the lighter lines show the results with 2 latent clusters.
Overall, the model with 5 clusters performs as well as or better than the model with 2 clusters.
The gain from using 5 clusters is the highest when the proportion of positive labels is small and when the size of labeled data is small.
\begin{figure}[t!]
\includegraphics[width=\linewidth]{fig_cluster.pdf}
\caption{\textbf{Classification Results with 2 and 5 Clusters.}
The darker lines show the results with 5 latent clusters and the lighter lines show 2 latent clusters.
The columns correspond to various proportions of positive labels in the corpus.
The y-axis indicates the out-of-sample F1 score and the x-axis shows the number of sampling steps.
Using multiple clusters improves the classification performance when the number of latent clusters matches the data generating process.
}
\label{fig:cluster}
\end{figure}
Figure~\ref{fig:fig_cluster_keywords} shows the results when the multiple cluster approach and keyword upweighting approaches are combined.
\begin{figure}[p!]
\centering
\includegraphics[scale=0.85]{fig_cluster_keywords.pdf}
\caption{\textbf{Classification Results with Multiple Clusters and Keywords.}
The rows correspond to different datasets and the columns correspond to various proportions of positively labeled documents in the corpus.
The y-axis indicates the out-of-sample F1 score and the x-axis shows the number of sampling steps.
The line types show whether keywords are supplied: the solid lines show the results with keywords and the dashed lines without keywords.
The colors show the number of latent clusters in the mixture model: the darker lines show the results with 5 latent clusters and the lighter lines with 2 latent clusters.
Using 5 clusters leads to as good or slightly better performance than using 2 clusters.
The performance improvement is the largest with the BBC corpus, which consists of 5 news topic categories.
Likewise, our mixture models with keywords lead to performance as good as or better than the models without keywords.
The improvement is the largest with the human rights corpus, where the number of words per document is the smallest.
}
\label{fig:fig_cluster_keywords}
\end{figure}
\clearpage
\section{Multiclass Classification}
\label{sec:multiple_class}
\subsection{Model}
This section presents a model and inference algorithm for multiclass classification.
Let $K$ be the number of clusters, which is equal to the number of classes to be classified,
with $K \geq 2$. Unlike in SI~\ref{sec:multiple_cluster_model}, we do not impose any hierarchy:
the model is a true multiclass mixture model, where the end goal is to classify documents
into $K \geq 2$ classes. In other words, the model presented below is a generalization
of the model presented in the main text.
\begin{equation}
\begin{split}
\bpi &\sim Dirichlet(\balpha) \\
Z_i &\stackrel{i.i.d}{\sim} Categorical(\bpi) \\
\boldsymbol{\eta}_{\cdot k} &\stackrel{i.i.d}{\sim} Dirichlet(\boldsymbol{\beta}_k), \quad k \in \{1,\ldots, K\} \\
\mathbf{D}_{i\cdot} \vert Z_{i} = k &\stackrel{i.i.d}{\sim} Multinomial(n_i, \boldsymbol{\eta}_{\cdot k}) \\
\end{split}
\end{equation}
Note that $\bpi$ is now a probability vector of length $K$, and it is drawn from a Dirichlet distribution.
The observed likelihood is the following.
\begin{equation}
\begin{split}
p(\bpi, \boldsymbol{\eta} \vert \mathbf{D}, \mathbf{C}^l)
&\propto p(\bpi) p(\boldsymbol{\eta}) p(\mathbf{D}, \mathbf{C} \vert \bpi, \boldsymbol{\eta}) \Big[p(\mathbf{D}^u \vert \bpi, \boldsymbol{\eta})\Big]^{\lambda} \\
&= p(\bpi) p(\boldsymbol{\eta})
\times \prod_{k=1}^K \prod_{i=1}^{N^k} p(\mathbf{D}_i^{l} \vert Z_{i} = k, \eta) p(Z_{i} = k \vert \bpi) \\
&\quad \times \Bigg[\prod_{i=1}^{N^{u}}\sum_{k=1}^K \Big\{ p(\mathbf{D}_i^{u} \vert Z_{i} = k, \boldsymbol{\eta}) p(Z_{i} = k \vert \bpi)\Big\} \Bigg]^{\lambda} \\
&\propto \underbrace{\prod_{k=1}^K\left\{\pi_k^{\alpha_k - 1} \prod_{v=1}^V \eta_{vk}^{\beta_{kv} - 1}\right\}}_\text{prior}
\times
\underbrace{\prod_{k=1}^K \prod_{i=1}^{N^k} \Big\{ \prod_{v=1}^V \eta_{vk}^{D_{iv}}\times \pi_k \Big\}}_\text{labeled doc. likelihood}
\times \underbrace{\Bigg[\prod_{i=1}^{N^{u}} \sum_{k=1}^K \Big\{\prod_{v=1}^V \eta_{vk}^{D_{iv}}\times \pi_k \Big\}\Bigg]^{\lambda}
}_\text{unlabeled doc. likelihood}
\end{split}
\end{equation}
The Q function (the expectation of the complete log-likelihood) is
\begin{equation}
\begin{split}
Q &\equiv \E_{\mathbf{Z} \vert \bpi^{(t)}, \boldsymbol{\eta}^{(t)}, D, C}[\log p(\bpi, \boldsymbol{\eta}, \mathbf{Z} \vert \mathbf{D}, \mathbf{C})] \\
&= \sum_{k=1}^K \left[(\alpha_k - 1) \log \pi_k^{(t)} + \sum_{v=1}^V \left\{ (\beta_{kv} - 1) \log \eta_{vk}^{(t)} \right\}\right] \\
&\quad + \sum_{k=1}^K \sum_{i=1}^{N^k} \Big\{ \sum_{v=1}^V D_{iv} \log \eta_{vk}^{(t)} + \log \pi_{k}^{(t)} \Big\} \\
&\quad + \lambda \left[ \sum_{i=1}^{N^{u}}\sum_{k=1}^K p_{ik} \Big\{ \sum_{v=1}^V D_{iv} \log \eta_{vk}^{(t)} + \log \pi_k^{(t)} \Big\}\right]
\end{split}
\end{equation}
The posterior probability of $Z_i = k$, $p_{ik}$, is
\begin{equation}
\begin{split}
p_{ik} &= \frac{\prod_{v=1}^V \eta_{vk}^{D_{iv}} \times \pi_k}{\sum_{k=1}^K \left[\prod_{v=1}^V \eta_{vk}^{D_{iv}} \times \pi_k \right]}
\end{split}
\end{equation}
The M-step estimators are as follows. The updating equation for $\pi$ is:
\begin{equation}
\begin{split}
\hat{\pi}_k \propto \alpha_k - 1 + N^k + \lambda \sum_{i=1}^{N^u} p_{ik}
\end{split}
\end{equation}
The updating equation for $\eta$ is the following.
\begin{equation}
\begin{split}
\hat{\eta}_{vk} \propto (\beta_{kv} - 1) + \sum_{i=1}^{N^k} D_{iv} + \lambda \sum_{i=1}^{N^{u}} p_{ik} D_{iv}
\end{split}
\end{equation}
Note that we downweight the information from the unlabeled documents by $\lambda$, to utilize more reliable information from labeled documents.
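For reference, the posterior $p_{ik}$ above is conveniently computed in log space. A minimal numpy sketch (illustrative; \texttt{D\_u} is the document-feature matrix, \texttt{pi} a length-$K$ vector, and \texttt{eta} a $V \times K$ matrix):
\begin{verbatim}
import numpy as np

# Sketch: posterior cluster probabilities p_ik, computed in log space.
def posterior(D_u, pi, eta):
    log_p = D_u @ np.log(eta) + np.log(pi)     # (N, K) log scores
    log_p -= log_p.max(axis=1, keepdims=True)  # stabilize
    p = np.exp(log_p)
    return p / p.sum(axis=1, keepdims=True)    # normalize over clusters
\end{verbatim}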
\subsection{Results}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.85]{fig1_multi.pdf}
\caption{\textbf{Multiclass Classification Results.}\\
The darker lines show the results with \aT and the lighter lines show the results with SVM.
The solid lines use active sampling to decide the next set of documents to be labeled, and the dashed lines use random (passive) sampling.
The y-axis indicates the out-of-sample F1 score and the x-axis shows the number of sampling steps.
The left column shows the results on the BBC corpus, where the target classes are ``Politics,'' ``Entertainment,'' ``Business,'' ``Sports,'' and ``Technology.''
The ``Politics'' class accounts for 5\% of the total dataset, and the remaining 95\% is evenly split across the other classes.
The right column shows the results on the Supreme Court corpus, where the target classes are ``Criminal Procedure'' (32.4\% of the corpus), ``Civil Rights'' (21.4\%), ``Economic Activity'' (22.2\%), ``Judicial Power'' (15.4\%), and ``First Amendment'' (8.6\%).
In our model, we set the number of latent clusters to be the same as the classification categories and linked each latent cluster to one classification category.
\aT performs the best across the four specifications on both corpora.
}
\label{fig:multiclass}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.85]{fig1_multi_time_linear.pdf}
\caption{\textbf{Time comparison of Multiclass Classification Results.}\\
The darker lines show the results with \aT and the lighter lines show the results with SVM.
The solid lines use active sampling to decide the next set of documents to be labeled, and the dashed lines use random (passive) sampling.
The y-axis indicates the average cumulative computational time and the x-axis shows the number of sampling steps.
The left column shows the results on BBC corpus, and the right column shows the results on the Supreme Court corpus.
\aT is much faster than SVM in multiclass classification.
This is because multiclass classification with SVM requires fitting the model at least as many times as there are target classes.
By contrast, \aT requires fitting the model only once, regardless of the number of target classes.
}
\label{fig:multiclass_time_linear}
\end{figure}
\clearpage
\section{Model Specifications and Description of the Datasets in the Validation Performance}
\label{sec:validation_specification}
We explain our decisions regarding pre-processing steps, model evaluation, and model specifications, followed by a detailed discussion of the results for each dataset.
\subsection{Pre-processing}
We employ the same pre-processing step for each of the four datasets using the \textit{R} package \textit{Quanteda}.\footnote{See \url{https://quanteda.io}}
For each dataset, we construct a \textit{document-feature matrix} (DFM), where each row is a document and each column is a feature.
Each feature is a stemmed unigram. We remove stopwords, features that occur extremely infrequently, as well as all features under 4 characters.
To generate a dataset with positive-class proportion $p$ (e.g., 5\% or 50\%), we randomly sample documents from the original dataset so that the resulting corpus has positive-class proportion $p$.
Suppose the number of documents in the original dataset is $N$, with $N_{pos}$ and $N_{neg}$ the numbers of positive and negative documents, respectively.
We compute $M_{pos}=\text{floor}(Np)$ and $M_{neg}=N - M_{pos}$ as the ideal numbers of positive and negative documents.
While $M_{pos} > N_{pos}$ or $M_{neg} > N_{neg}$, we decrement $M_{pos}$ and $M_{neg}$, keeping the positive proportion at $p$.
Once $M_{pos} \leq N_{pos}$ and $M_{neg} \leq N_{neg}$, we sample $M_{pos}$ positive documents and $M_{neg}$ negative documents from the original dataset.
Finally, we combine the sampled positive and negative documents to obtain the final dataset.
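A hedged Python sketch of this subsampling procedure is below (illustrative only, not the replication code; our loop shrinks the total $N$ and recomputes $M_{pos}$ and $M_{neg}$, which keeps the positive share at $p$).
\begin{verbatim}
import numpy as np

# Sketch: subsample a corpus to a target positive-class proportion p.
# pos_idx / neg_idx: arrays of indices of positive / negative documents.
def subsample(pos_idx, neg_idx, p, seed=0):
    rng = np.random.default_rng(seed)
    N = len(pos_idx) + len(neg_idx)
    M_pos = int(np.floor(N * p))
    M_neg = N - M_pos
    while M_pos > len(pos_idx) or M_neg > len(neg_idx):
        N -= 1  # shrink the target size, keeping the share at p
        M_pos = int(np.floor(N * p))
        M_neg = N - M_pos
    pos = rng.choice(pos_idx, size=M_pos, replace=False)
    neg = rng.choice(neg_idx, size=M_neg, replace=False)
    return np.concatenate([pos, neg])
\end{verbatim}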
\subsection{Datasets}
\paragraph{BBC News}
The BBC News Dataset is a collection of 2,225 documents from 2004 to 2005 available at the BBC news website \citep{greene06icml}.
This dataset is divided equally into five topics: business, entertainment, politics, sport, and technology.
The classification exercise is to correctly predict whether or not an article belongs to the `politics' topic.
\paragraph{Wikipedia Toxic Comments}
The Wikipedia Toxic Comments dataset is a dataset made up of conversations between Wikipedia editors in Wikipedia's internal forums.
The dataset was made openly available as part of a Kaggle competition,\footnote{See \url{https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge}} and was used as a principal dataset of investigation by \cite{millerActiveLearningApproaches2020}.
The basic classification task is to label a given comment as toxic or not, where toxicity is defined as including harassment and/or abuse of other users.\footnote{While the dataset also contains finer gradations of `types' of toxicity, we, like \cite{millerActiveLearningApproaches2020}, stick to the binary toxic-or-not classification task.}
The complete dataset comprises roughly 560,000 documents, roughly 10 percent of which are labeled as toxic.
\paragraph{Supreme Court Cases}
The Supreme Court Rulings dataset is a collection of the text of 2000 US Supreme Court rulings between 1946 and 2012.
We use the majority opinion of each case; the text was obtained through the Caselaw Access Project.\footnote{\url{https://case.law}}
For the classification label, we use the categories created by the Supreme Court Database.\footnote{For a full list of categories, see \url{http://www.supremecourtdatabase.org/documentation.php?var=issueArea}.}
The classification exercise here is to correctly identify rulings that are categorized as `criminal procedure', which is the largest category in the corpus (26\% of all rulings).
\paragraph{Human Rights Allegation}
The Human Rights Allegation dataset contains more than 2 million sentences from human rights reports on 196 countries between 1996 and 2016, produced by Amnesty International, Human Rights Watch, and the US State Department \citep{farissphysical}.
The classification goal is to identify sentences with physical integrity rights allegations (16\% of all reports).
Example violations of physical integrity rights include torture, extrajudicial killing, and arbitrary arrest and imprisonment.
\clearpage
\section{Additional Results on Classification Performance}\label{sec_si:add_results}
To complement the results presented in Figure~1 in the main text, Table~\ref{tab:comparison.ran.ent}
presents the results (across datasets) of fitting our model at the initial (iteration 0) and last active step (iteration 30).
The table makes clear the improvements \aT
brings in terms of F1-score, precision, and recall. Furthermore, after labeling
600 documents (20 per iteration), uncertainty sampling
outperforms random sampling across evaluation metrics, which empirically validates the promise of
active learning for text classification.
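For intuition, one standard uncertainty criterion, consistent with the 20 documents labeled per step above, selects the unlabeled documents whose posterior is closest to $0.5$; a minimal Python sketch (illustrative, not necessarily the package's exact rule):
\begin{verbatim}
import numpy as np

# Sketch: pick the n most uncertain unlabeled documents.
def next_batch(p1, n=20):
    # p1: posterior P(Z_i = 1) for each unlabeled document
    return np.argsort(np.abs(p1 - 0.5))[:n]
\end{verbatim}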
\begin{table}[h!]
\centering
\caption{\textbf{Classification Performance: Uncertainty vs Random Sampling with $\lambda = 0.001$}}
\label{tab:comparison.ran.ent}
\footnotesize
\begin{tabular}{l l *{6}{c}}
\toprule
Dataset & Active Step & \multicolumn{3}{c}{Uncertainty Sampling} & \multicolumn{3}{c}{Random Sampling} \\
\cmidrule(lr){3-5} \cmidrule(lr){6-8}
& & Precision & Recall & F1-score & Precision & Recall & F1-score \\ \midrule
\multirow{2}{*}{\texttt{Wikipedia}}
& 0 & 0.71 & 0.13 & 0.22 & 0.71 & 0.13 & 0.22 \\
& 30 & 0.71 & 0.54 & 0.61 & 0.45 & 0.56 & 0.50 \\
\midrule
\multirow{2}{*}{\texttt{BBC}}
& 0 & 0.33 & 0.86 & 0.48 & 0.33 & 0.86 & 0.48\\
& 30 & 0.92 & 0.96 & 0.94 & 0.92 & 0.94 & 0.93 \\
\midrule
\multirow{2}{*}{\texttt{Supreme Court}}
& 0 & 0.46 & 0.98 & 0.63 & 0.46 & 0.98 & 0.63\\
& 30 & 0.85 & 0.91 & 0.88 & 0.75 & 0.96 & 0.84 \\
\midrule
\multirow{2}{*}{\texttt{Human Rights}}
& 0 & 0.61 & 0.01 & 0.02 &0.61 & 0.01 & 0.02 \\
& 30 & 0.53 & 0.42 & 0.47 & 0.46 & 0.44 & 0.45 \\
\bottomrule
\end{tabular}
\end{table}
Similarly, and as noted in the main text, our results appear not to be
too sensitive to the selection of the weighting parameter $\lambda$, provided that its
value remains small. Figure~\ref{fig:lambda_1} confirms this finding:
after 30 active steps, the performance of \aT is better in terms
of F1-score when $\lambda = 0.001$ than when $\lambda = 0.01$.
\begin{figure}[t!]
\includegraphics[width=\linewidth]{lambda_compare_1.pdf}
\caption{\textbf{Classification Results with 2 Clusters and $\lambda = 0.01$ vs $\lambda = 0.001$.}
The darker lines show the results with $\lambda = 0.001$ and the lighter lines show $\lambda = 0.01$.
The columns correspond to various proportions of positive labels in the corpus.
The y-axis indicates the out-of-sample F1 score and the x-axis shows the number of sampling steps.
The smaller the value of $\lambda$, the better the performance of our model.
}
\label{fig:lambda_1}
\end{figure}
\clearpage
\section{Main Results when Varying Positive Class Rate}
\label{sec:main_results_appdx}
\begin{figure}[h!]
\centering
\includegraphics[scale=1]{svm_em_comparison_appdx.pdf}
\caption{\textbf{Replication of F1 performance from Figures 2 and 3 with 0.05, 0.5, and population positive class rate}}
\label{sifig:F1comparison}
\end{figure}
\clearpage
\section{Visual Demonstration of Active Keyword}
\label{sec:keywords_visual}
Figure \ref{fig:keywords_eta} illustrates how the word-class matrix $\boldsymbol{\eta}$ is updated with and without keywords across iterations.
A subset of the keywords supplied is labeled and highlighted by black dots.
The x-axis shows the log of $\eta_{v1} / \eta_{v0}$, where $\eta_{v1}$ is the probability of observing the word $v$ in a document with a positive label and $\eta_{v0}$ the corresponding probability for a document with a negative label.
A high value on the x-axis means that a word is more strongly associated with positive labels.
The y-axis is the log of word frequency.
A word with high word frequency has more influence in shifting the label probability.
In the generative model for \aT, words that appear often and whose ratio of $\eta_{vk^*}$ vs $\eta_{vk}$ is high play a central role in the label prediction.
By shifting the value of $\boldsymbol{\eta}$ of those keywords, we can accelerate the estimation of $\boldsymbol{\eta}$ and improve the classification performance.
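The two plotted quantities can be computed directly from the fitted parameters; a short, illustrative sketch in our notation (\texttt{eta} is the $V \times 2$ word-class matrix and \texttt{D} the document-feature matrix):
\begin{verbatim}
import numpy as np

# Sketch: axes of the keyword diagnostic plot.
def plot_axes(eta, D):
    log_ratio = np.log(eta[:, 1] / eta[:, 0])  # x: association w/ positive
    log_freq = np.log(D.sum(axis=0))           # y: log word frequency
    return log_ratio, log_freq
\end{verbatim}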
\begin{figure}[p!]
\includegraphics[width=\linewidth]{hr_kw_fig.png}
\caption{\textbf{Update of the Word-class Matrix ($\boldsymbol{\eta}$) with and without Keywords}
}
\label{fig:keywords_eta}
\end{figure}
\clearpage
\section{Classification Performance with Mislabels}
\label{sec:mislabel}
\subsection{Mislabeled Keywords}\label{subsec:mislabel-keywords}
Figure~\ref{fig:keyword_mislabel} shows how mislabeled keywords affect classification performance. The rows correspond to different datasets and the columns correspond to various values of $\gamma$, which controls the degree of keyword upweighting.
The y-axis indicates the out-of-sample F1 score and the x-axis shows the number of sampling steps.
At each sampling step, 20 documents are labeled.
We use $\lambda = 0.001$ to downweight information from unlabeled documents.
The lines correspond to different levels of mislabels at the keyword labeling.
At each iteration, 10 candidate keywords are proposed, and a hypothetical oracle decides if they are indeed keywords or not.
`True' keywords are defined in the same way as in Section~\nameref{subsec:keyword_results}.
In other words, a candidate keyword $v$ for the positive class is a `true' keyword, if the value of $\eta_{v,k}/\eta_{v,k'}$ is above 90\% quantile, where $k$ is the positive class and $k'$ is the negative class, and this $\boldsymbol{\eta}$ is what we obtain by training the model with the full labels.
The same goes for the negative class.
When the probability of mislabeling keywords is $p$\%, an oracle makes a mistake in the labeling with probability $p$.
Specifically, if a candidate keyword $v$ is a `true' keyword, the oracle fails to label $v$ as a keyword with probability $p$.
Likewise, if a candidate keyword $v$ is not a `true' keyword, the oracle labels $v$ as a keyword with probability $p$.
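The oracle's error process amounts to flipping the correct keyword decision with probability $p$; a minimal, illustrative Python sketch:
\begin{verbatim}
import numpy as np

# Sketch: an oracle that flips its correct keyword decision w.p. p.
def noisy_oracle(is_true_keyword, p, rng):
    flip = rng.random() < p
    return bool(is_true_keyword) != flip  # XOR: wrong answer w.p. p
\end{verbatim}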
\begin{figure}[t!]
\centering
\includegraphics[width=0.9\textwidth]{fig_keywords_mislabel_all.pdf}
\caption{\textbf{Classification Results with Mislabels in Active Keywords}\\
}
\label{fig:keyword_mislabel}
\end{figure}
\clearpage
\subsection{Mislabeled Documents}
\label{subsec:mislabel_documents}
In this section, we present results about the effect of `honest' (random) mislabeling of
documents on the mapping of documents to classes. As Figure~\ref{fig:doc_mislabel} shows,
as the proportion of mislabels increases, the classification performance of \aT decreases.
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{fig_doc_mislabel_all.pdf}
\caption{\textbf{Classification Results with Mislabels in Active Document Labeling}\\
The rows correspond to different datasets.
The y-axis indicates the out-of-sample F1 score and the x-axis shows the number of sampling steps.
20 documents are labeled at each sampling step.
The colors correspond to different levels of mislabels in the labeling of documents.
We find that as the proportion of mislabels increases, the classification performance of \aT decreases.
}
\label{fig:doc_mislabel}
\end{figure}
\clearpage
\section{Comparison of the predictions between \aT and xgboost predictions for the \citet{gohdes2020repression} data}
Table \ref{tbl:gohdes_confusion_matrix} shows the confusion matrix between the prediction based on \aT and the prediction by xgboost used in the original paper.
Most observations fall in the diagonal cells of the matrix, and the correlation between the two predictions is quite high (0.93).
One difference is that \aT classifies more documents as targeted killings compared to the original predictions.
Note that neither prediction is the ground truth; both are the outputs of different classifiers.
\begin{table}[h!]
\begin{tabular}{cc|ccc}
& & \multicolumn{3}{c}{Original}\\
& & untargeted & targeted & non-government\\
\hline
\multirow{3}{*}{\aT} & untargeted & 50327 & 411 & 135\\
& targeted & 1630 & 10044 & 31\\
& non-government & 382 & 34 & 2280\\
\end{tabular}
\caption{Confusion matrix between \aT and xgboost predictions}
\label{tbl:gohdes_confusion_matrix}
\end{table}
\begin{figure}[t!]
\includegraphics[width=\linewidth]{prop_target_original_active.pdf}
\caption{\textbf{Scatter plot of the dependent variable constructed by \aT vs. the original}\\
The author performs a binomial logit regression where the dependent variable is the ratio of the number of targeted killings to the total number of government killings.
We compare the dependent variable used in the original paper vs. the one we constructed using \aT.
The 45-degree line (in red) corresponds to equality between measures.
We can see that most observations lie around the 45-degree line while there are some values in the upper triangle.
This suggests that \aT yields a dependent variable similar to the original one, although \aT may somewhat overestimate the proportion of targeted killings.
}
\label{fig:scatter_predict_compare}
\end{figure}
\clearpage
\section{Regression Table in \citet{gohdes2020repression}}
\label{sec:syria_regression}
Table~\ref{tbl:gohdes_table_original} is the original regression table reported in \cite{gohdes2020repression} while Table~\ref{tbl:gohdes_table_active} is a replication of the original table using \aT.
In both tables, the coefficients on the Internet access variable are positive and statistically significant, which match the author's substantive conclusion.
One may wonder why the absolute values of the coefficients on IS control and its interactions with Internet access are larger in Table~\ref{tbl:gohdes_table_active}.
However, we believe that this is because the number of observations under IS control is small (51) and there is almost no variation in the Internet access variable within those observations, as shown in Figure~\ref{fig:internet_is}.
\begin{table}[h!]
\adjustbox{max width=\textwidth}{
\begin{tabular}{l c c c c c c c}
\hline
& I & II & III & IV & V & VI & VII \\
\hline
Intercept & $-2.340^{***}$ & $-2.500^{***}$ & $-0.899^{*}$ & $-0.410$ & $-0.019$ & $-1.308$ & $-3.013^{**}$ \\
& $(0.205)$ & $(0.267)$ & $(0.403)$ & $(0.521)$ & $(0.357)$ & $(1.057)$ & $(1.103)$ \\
Internet access (3G) & $0.224^{*}$ & $0.231^{*}$ & $0.200^{*}$ & $0.205^{*}$ & $0.265^{*}$ & $0.313^{**}$ & $0.909^{***}$ \\
& $(0.095)$ & $(0.094)$ & $(0.085)$ & $(0.087)$ & $(0.113)$ & $(0.116)$ & $(0.124)$ \\
\% Govt control & & & & & & & $0.016^{***}$ \\
& & & & & & & $(0.004)$ \\
Internet (3G) * \% Govt control & & & & & & & $-0.014^{***}$ \\
& & & & & & & $(0.001)$ \\
Govt control & $0.774^{*}$ & $0.803^{**}$ & $1.167^{***}$ & $1.180^{***}$ & $0.080$ & $0.856^{**}$ & $0.811^{***}$ \\
& $(0.332)$ & $(0.272)$ & $(0.284)$ & $(0.288)$ & $(0.344)$ & $(0.313)$ & $(0.237)$ \\
IS control & $2.027^{***}$ & $1.644^{***}$ & $1.045^{*}$ & $-0.324$ & $0.432$ & $0.787$ & $-0.663^{**}$ \\
& $(0.435)$ & $(0.462)$ & $(0.421)$ & $(0.209)$ & $(0.414)$ & $(0.418)$ & $(0.221)$ \\
Kurd control & $0.386$ & $-0.243$ & $-0.506$ & $-1.331$ & $-0.402$ & $0.033$ & $-0.616$ \\
& $(0.594)$ & $(0.843)$ & $(0.760)$ & $(1.134)$ & $(0.745)$ & $(0.802)$ & $(0.432)$ \\
Opp control & $1.160^{***}$ & $1.252^{***}$ & $0.727^{*}$ & $0.759^{*}$ & $-0.700^{*}$ & $-0.281$ & $-0.176$ \\
& $(0.298)$ & $(0.317)$ & $(0.293)$ & $(0.296)$ & $(0.283)$ & $(0.342)$ & $(0.164)$ \\
Internet (3G) * Govt control & $-0.163$ & $-0.182$ & $-0.327^{**}$ & $-0.324^{**}$ & $-0.104$ & $-0.358^{**}$ & \\
& $(0.132)$ & $(0.117)$ & $(0.119)$ & $(0.122)$ & $(0.133)$ & $(0.120)$ & \\
Internet (3G) * IS control & $-1.798^{***}$ & $-1.525^{***}$ & $-1.377^{***}$ & & $-1.391^{***}$ & $-1.336^{***}$ & \\
& $(0.220)$ & $(0.281)$ & $(0.251)$ & & $(0.264)$ & $(0.261)$ & \\
Internet (3G) * Kurd control & $-0.133$ & $0.336$ & $0.093$ & $0.895$ & $-0.052$ & $-0.202$ & \\
& $(0.444)$ & $(0.649)$ & $(0.569)$ & $(0.936)$ & $(0.553)$ & $(0.527)$ & \\
Internet (3G) * Opp. control & $-0.605^{***}$ & $-0.722^{***}$ & $-0.511^{**}$ & $-0.533^{***}$ & $0.316^{*}$ & $0.286$ & \\
& $(0.159)$ & $(0.173)$ & $(0.157)$ & $(0.158)$ & $(0.151)$ & $(0.186)$ & \\
\# Killings (log) & & & $-0.273^{***}$ & $-0.271^{***}$ & $-0.354^{***}$ & $-0.412^{***}$ & $-0.584^{***}$ \\
& & & $(0.054)$ & $(0.055)$ & $(0.051)$ & $(0.072)$ & $(0.074)$ \\
Govt gains & & & & $0.643$ & & & \\
& & & & $(0.385)$ & & & \\
Govt losses & & & & $0.632$ & & & \\
& & & & $(0.413)$ & & & \\
Christian & & & & & $0.068$ & $0.345^{**}$ & $0.398^{***}$ \\
& & & & & $(0.111)$ & $(0.116)$ & $(0.110)$ \\
Alawi & & & & & $1.479^{**}$ & $-1.167^{***}$ & $-0.812^{***}$ \\
& & & & & $(0.522)$ & $(0.177)$ & $(0.176)$ \\
Druze & & & & & $-0.634^{***}$ & $-0.302$ & $0.135$ \\
& & & & & $(0.191)$ & $(0.191)$ & $(0.190)$ \\
Kurd & & & & & $-0.659^{***}$ & $-0.542^{*}$ & $-0.580^{**}$ \\
& & & & & $(0.194)$ & $(0.237)$ & $(0.212)$ \\
Internet (3G) * Alawi & & & & & $-0.909^{***}$ & & \\
& & & & & $(0.163)$ & & \\
Pop (log) & & & & & & $0.196$ & $0.408^{**}$ \\
& & & & & & $(0.149)$ & $(0.150)$ \\
Unempl. (\%) & & & & & & $-0.016$ & $-0.002$ \\
& & & & & & $(0.012)$ & $(0.012)$ \\
\hline
AIC & $11956.847$ & $9993.704$ & $9665.749$ & $9495.591$ & $7671.979$ & $7873.915$ & $7327.796$ \\
BIC & $12001.524$ & $10239.427$ & $9915.941$ & $9744.552$ & $7944.509$ & $8150.913$ & $7595.858$ \\
Log Likelihood & $-5968.424$ & $-4941.852$ & $-4776.875$ & $-4691.796$ & $-3774.990$ & $-3874.958$ & $-3603.898$ \\
Deviance & $9519.651$ & $7466.508$ & $7136.554$ & $7026.891$ & $5132.784$ & $5332.720$ & $4790.601$ \\
Num. obs. & $640$ & $640$ & $640$ & $626$ & $640$ & $640$ & $640$ \\
\hline
\multicolumn{8}{l}{\scriptsize{$^{***}p<0.001$; $^{**}p<0.01$; $^{*}p<0.05$. Reference category: Contested control. Governorate-clustered SEs.}}
\end{tabular}}
\caption{Table 1 in Gohdes 2020: Original table}
\label{tbl:gohdes_table_original}
\end{table}
\begin{table}[h!]
\adjustbox{max width=\textwidth}{
\begin{tabular}{l c c c c c c c}
\hline
& I & II & III & IV & V & VI & VII \\
\hline
Intercept & $-2.196^{***}$ & $-2.428^{***}$ & $-0.795^{*}$ & $-0.351$ & $-0.037$ & $-1.141$ & $-2.695^{*}$ \\
& $(0.197)$ & $(0.242)$ & $(0.390)$ & $(0.490)$ & $(0.348)$ & $(1.229)$ & $(1.227)$ \\
Internet access (3G) & $0.277^{**}$ & $0.282^{***}$ & $0.242^{**}$ & $0.250^{**}$ & $0.342^{***}$ & $0.369^{***}$ & $0.853^{***}$ \\
& $(0.091)$ & $(0.081)$ & $(0.075)$ & $(0.077)$ & $(0.103)$ & $(0.107)$ & $(0.118)$ \\
\% Govt control & & & & & & & $0.015^{***}$ \\
& & & & & & & $(0.004)$ \\
Internet (3G) * \% Govt control & & & & & & & $-0.013^{***}$ \\
& & & & & & & $(0.001)$ \\
Govt control & $0.625^{*}$ & $0.672^{**}$ & $1.048^{***}$ & $1.058^{***}$ & $0.151$ & $0.843^{**}$ & $0.559^{*}$ \\
& $(0.319)$ & $(0.255)$ & $(0.269)$ & $(0.273)$ & $(0.358)$ & $(0.300)$ & $(0.249)$ \\
IS control & $15.157^{***}$ & $15.688^{***}$ & $15.072^{***}$ & $-0.275$ & $14.551^{***}$ & $14.877^{***}$ & $-0.600^{**}$ \\
& $(1.123)$ & $(1.148)$ & $(1.136)$ & $(0.200)$ & $(1.132)$ & $(1.134)$ & $(0.209)$ \\
Kurd control & $0.795$ & $0.099$ & $-0.227$ & $-0.440$ & $-0.157$ & $0.334$ & $-0.369$ \\
& $(0.516)$ & $(0.729)$ & $(0.671)$ & $(1.119)$ & $(0.677)$ & $(0.744)$ & $(0.405)$ \\
Opp control & $0.978^{***}$ & $1.134^{***}$ & $0.594^{*}$ & $0.634^{*}$ & $-0.606^{*}$ & $-0.197$ & $-0.278$ \\
& $(0.294)$ & $(0.304)$ & $(0.284)$ & $(0.289)$ & $(0.270)$ & $(0.322)$ & $(0.155)$ \\
Internet (3G) * Govt control & $-0.169$ & $-0.190$ & $-0.334^{**}$ & $-0.335^{**}$ & $-0.183$ & $-0.408^{***}$ & \\
& $(0.126)$ & $(0.103)$ & $(0.108)$ & $(0.111)$ & $(0.131)$ & $(0.111)$ & \\
Internet (3G) * IS control & $-14.829^{***}$ & $-15.506^{***}$ & $-15.351^{***}$ & & $-15.392^{***}$ & $-15.330^{***}$ & \\
& $(1.080)$ & $(1.096)$ & $(1.090)$ & & $(1.091)$ & $(1.091)$ & \\
Internet (3G) * Kurd control & $-0.400$ & $0.138$ & $-0.080$ & $0.134$ & $-0.240$ & $-0.366$ & \\
& $(0.324)$ & $(0.514)$ & $(0.463)$ & $(0.940)$ & $(0.473)$ & $(0.460)$ & \\
Internet (3G) * Opp. control & $-0.542^{***}$ & $-0.688^{***}$ & $-0.468^{**}$ & $-0.497^{**}$ & $0.181$ & $0.149$ & \\
& $(0.159)$ & $(0.164)$ & $(0.150)$ & $(0.152)$ & $(0.145)$ & $(0.176)$ & \\
\# Killings (log) & & & $-0.278^{***}$ & $-0.274^{***}$ & $-0.356^{***}$ & $-0.415^{***}$ & $-0.567^{***}$ \\
& & & $(0.053)$ & $(0.054)$ & $(0.051)$ & $(0.071)$ & $(0.073)$ \\
Govt gains & & & & $0.512$ & & & \\
& & & & $(0.349)$ & & & \\
Govt losses & & & & $0.730^{*}$ & & & \\
& & & & $(0.334)$ & & & \\
Christian & & & & & $0.092$ & $0.352^{**}$ & $0.369^{***}$ \\
& & & & & $(0.115)$ & $(0.113)$ & $(0.105)$ \\
Alawi & & & & & $1.329^{*}$ & $-0.928^{***}$ & $-0.585^{***}$ \\
& & & & & $(0.528)$ & $(0.167)$ & $(0.168)$ \\
Druze & & & & & $-0.628^{**}$ & $-0.310$ & $0.063$ \\
& & & & & $(0.196)$ & $(0.197)$ & $(0.209)$ \\
Kurd & & & & & $-0.565^{**}$ & $-0.502^{*}$ & $-0.615^{**}$ \\
& & & & & $(0.204)$ & $(0.227)$ & $(0.207)$ \\
Internet (3G) * Alawi & & & & & $-0.782^{***}$ & & \\
& & & & & $(0.164)$ & & \\
Pop (log) & & & & & & $0.185$ & $0.391^{*}$ \\
& & & & & & $(0.167)$ & $(0.168)$ \\
Unempl. (\%) & & & & & & $-0.019$ & $-0.007$ \\
& & & & & & $(0.012)$ & $(0.012)$ \\
\hline
AIC & $12050.644$ & $10116.531$ & $9739.975$ & $9570.556$ & $8038.596$ & $8197.433$ & $7735.527$ \\
BIC & $12095.321$ & $10362.255$ & $9990.166$ & $9819.517$ & $8311.125$ & $8474.431$ & $8003.589$ \\
Log Likelihood & $-6015.322$ & $-5003.266$ & $-4813.988$ & $-4729.278$ & $-3958.298$ & $-4036.717$ & $-3807.763$ \\
Deviance & $9500.059$ & $7475.946$ & $7097.391$ & $6986.658$ & $5386.011$ & $5542.849$ & $5084.942$ \\
Num. obs. & $640$ & $640$ & $640$ & $626$ & $640$ & $640$ & $640$ \\
\hline
\multicolumn{8}{l}{\scriptsize{$^{***}p<0.001$; $^{**}p<0.01$; $^{*}p<0.05$. Reference category: Contested control. Governorate-clustered SEs.}}
\end{tabular}}
\caption{Table 1 in Gohdes 2020: Reanalysis with \aT}
\label{tbl:gohdes_table_active}
\end{table}
\begin{figure}[t!]
\includegraphics[width=\linewidth]{internet_is.pdf}
\caption{\textbf{Histogram of the Internet (3G) variable by IS control in the original data}\\
The left histogram shows the distribution of the Internet (3G) variable for the observations under IS control, and the right one for the observations not under IS control.
The number of observations with IS control is only 51 out of 640 in total.
In addition, among those with IS control, all observations except one take the same value for the Internet access variable.
This suggests that the regression coefficient on the interaction of IS control and Internet access can be highly unstable.
}
\label{fig:internet_is}
\end{figure}
\clearpage
\section{Effect of Labeling More Sentences for the \citet{park:etal:2020} Reanalysis}\label{sec:colaresi}
In this section, we present additional results mentioned in the main text about our reanalysis of \citet{park:etal:2020}.
\begin{figure}[h!]
\centering
\includegraphics[width=0.8\textwidth]{figs/colaresi_fig2.pdf}
\caption{\textbf{Using the Difference in Out of Sample F1 Score to Decide a Stopping Point.}}
\label{fig:colaresi_fig2}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.8\textwidth]{figs/colaresi_fig3.pdf}
\caption{\textbf{Replication of Figure 1 in \cite{park:etal:2020}: The Relationship Between Information Density and Average Sentiment Score Across Different Settings for the Total Number of Labeled Documents.}}
\label{fig:colaresi_fig3}
\end{figure}
\clearpage
\section{Conclusion}
\label{sec:conclusion}
Human labeling of documents is the most labor-intensive part of social science research that uses text data.
For automated text classification to work, a machine classifier needs to be trained on the relationship between text features and class labels, and the labels in the training data are assigned manually.
In this paper we have described a new active learning algorithm that combines a mixture model with active learning to incorporate information from both labeled and unlabeled documents and to better select which documents a human coder should label.
Our validation study showed that the proposed algorithm performed at least as well as state-of-the-art methods such as BERT while reducing computational costs dramatically.
We replicated two published political science studies to show that our algorithm leads to the same conclusions as the original papers while requiring far fewer labeled documents.
In sum, our algorithm enables researchers to save their manual labeling efforts without sacrificing quality.
Machine learning techniques are becoming increasingly popular in political science, but the barrier to entry remains too high for researchers without a technical background to make use of advances in the field.
As a result, there is an opportunity to democratize access to these methods.
To this end, we continue to work toward publishing the R package \textit{activeText} on CRAN.
We believe that our model will provide applied researchers with a tool that they can use to efficiently categorize documents in corpora of varying sizes and topics.
\section{Discussion}
\label{sec:discussion}
\subsection{Tuning the value of $\lambda$}
As noted above, we downweight the information from unlabeled documents as we typically have more unlabeled than labeled documents. Moreover, since the labeled documents have been classified by an expert, we want to rely more on the information they bring for prediction.
An important practical consideration is how to select the value of $\lambda$ that maximizes performance.
One possible approach would be to adopt popular model selection methods (e.g., cross-validation) to choose the appropriate $\lambda$ value during the model initialization process.\footnote{Indeed, it may be beneficial to tune the $\lambda$ value \textit{across} active learning iterations.}
However, cross-validation may not be practical when the labeled data is scarce (or absent at the beginning of the process). With our active learning approach in particular,
we have observed across a variety of applications that very small values (e.g., 0.001 or 0.01) work best on the corpora we used (see SI~\ref{si-sec_si:add_results}). However, more work is needed to clearly understand the optimality criteria for selecting $\lambda$. We leave this question for future research.
\subsection{Labeling Error}
While our empirical applications assume that labelers are correct, human labelers do make mistakes.
In SI~\ref{si-sec:mislabel}, we examine how mislabeling keywords and documents affect classification performance.
Our results show that, if compared to the no-keyword approach, a small amount of random noise (classical measurement error) on keyword labeling does not hurt the classification performance. In contrast, random perturbations from true document labels do hurt the classification performance.
A promising avenue for future research is to develop new active learning algorithms that assign labelers based on their labeling ability and/or are robust to more pervasive forms of labeling error (differential and non-differential measurement error).
For instance, assigning the most uncertain or difficult documents to the most competent labelers and easier documents to the least competent labelers can optimize the workload of the labelers.
At the same time, we note that in practical settings users may be able to improve the quality of human labeling by other means, such as refining category concepts and better training of coders.
\section{Introduction}
As the amount and diversity of available information have rapidly increased, social scientists are increasingly resorting to multiple forms of data to answer substantive questions.
In particular, the use of text-as-data in social science research has exploded over the past decade.\footnote{See e.g.,~\cite{grim:stew:2013} for an excellent overview of these methods in political science.}
Document classification has been the primary task in political science, with researchers classifying documents such as legislative speeches \citep{peterson2018classification, moto:2020}, correspondences to administrative agencies \citep{lowande2018polices,lowande2019politicization}, public statements of politicians \citep{airoldi2007whose, stewart2009use}, news articles \citep{boydstun2013making}, election manifestos \citep{catalinac2016electoral}, social media posts \citep{king2017chinese}, treaties \citep{spirling2012us}, religious speeches \citep{nielsen2017deadly}, and human rights text \citep{farissphysical, greene2019machine} into two or more categories.
Researchers use the category labels of documents produced by the classification task as the outcome or predictive variable to test substantive hypotheses.
Statistical methods are used for document classification.
Although text data in political science is typically smaller than data in some other fields (where millions of documents are common), the cost of having human coders categorize all documents is still prohibitively high.
Relying on automated text classification allows researchers to avoid classifying all documents in their data set manually.
Broadly speaking, there are two types of classification methods: supervised and unsupervised algorithms. Supervised approaches use labels from a set of hand-coded documents to categorize unlabeled documents, whereas unsupervised methods cluster documents without needing labeled documents. Both of these methods have downsides, however: in the former, hand-coding documents is labor-intensive and costly; in the latter, the substantive interpretation of the categories discovered by the clustering process can be difficult.
Supervised methods are more popular in political science research because substantive interpretability is important when using category labels to test substantive hypotheses, and this importance justifies the cost associated with labeling many documents manually.
For example, \citet{gohdes2020repression} hand-labeled about $2000$ documents, and \citet{park:etal:2020} used $4000$ human-coded documents. These numbers are much smaller than the size of their entire data sets ($65,274$ and $2,473,874$, respectively); however, having human coders label thousands of (potentially long and complicated) documents still requires a large amount of researchers' time and effort.
We propose \aT, a new algorithm that augments a probabilistic mixture model with active learning.
We use the mixture model of \citet{nigam2000text} to combine the information from both labeled and unlabeled documents, making use of all available information.
In the model, latent classes are observed as labels for labeled documents and estimated as a latent variable for unlabeled documents.
Active learning is a technique that reduces the cost of hand-coding.
It uses measures of label uncertainty to iteratively flag highly informative documents to reduce the number of labeled documents needed to train an accurate classifier, particularly when the classification categories are imbalanced.
Our validation study shows that our model outperforms Support Vector Machines (SVM), a popular supervised learning model, when both models use active learning.
We also show that our algorithm performs favorably in terms of classification accuracy when compared to an off-the-shelf version of Bidirectional Encoder Representations from Transformers (BERT), a state-of-the-art classification model in natural language processing, using several orders of magnitude less computational resources.
Furthermore, because our model is generative, it is straightforward to use a researcher's domain expertise, such as keywords associated with a category, to improve text classification.
We also use \aT\ to replicate two published political science studies and show that the authors of these papers could have reached the same substantive conclusions with fewer labeled documents.
The first study is \citet{gohdes2020repression}, which focuses on the relationship between internet access and the form of state violence.
The second study is \citet{park:etal:2020}, which analyzes the association (or the lack thereof) between information communication technologies (ICTs) and the U.S.~Department of State's reports on human rights.
For both studies, we replicate their text classification tasks using \aT\ and conduct the same empirical analyses using the document labels.
Our replication analysis recovers their original conclusions---a higher level of internet access is associated with a larger proportion of targeted killings, and ICTs are not associated with the sentiment of the State Department's human rights reports, respectively---using far fewer labeled documents.
These replication exercises demonstrate that \aT\ performs well on complex documents commonly used in political science research, such as human rights reports.
We provide an \textbf{R} package called \aT\ that aims to give researchers from all backgrounds easily accessible tools to minimize the amount of hand-coding of documents and to improve the performance of classification models in their own work.
Before proceeding to a description of our algorithm and analysis, we first offer an accessible primer on the use of automated text classification.
We introduce readers to several basic concepts in machine learning:
tokenization, preprocessing, and the encoding of a corpus of text data into a matrix;
the difference between supervised and unsupervised learning, between discriminative and generative models, and between active and passive learning;
and a set of tools for the evaluation of classification models.
Readers who are already well acquainted with these concepts may prefer to skip directly to the description of our model in Section~\nameref{sec:method}.
\section{The Method}
\label{sec:method}
In this section, we present our modeling strategy and describe our active learning algorithm.
For the probabilistic model (a mixture model for discrete data) at the heart of the algorithm, we build on the work of~\cite{nigam2000text}, who show
that probabilistic classifiers can be augmented by combining the information coming from labeled and unlabeled data.
In other words, our model makes the latent classes for the unlabeled data interpretable
by connecting them to the hand-coded classes from the labeled data. It also takes advantage of the fact that
the unlabeled data provides more information about the features used to predict
the classes for each document. As we will discuss below, we insert our model into
an active learning algorithm and use the Expectation-Maximization (EM) algorithm \citep{demp:etal:1977}
to maximize the observed-data log-likelihood function and estimate the model
parameters.
\subsection{Model}
Consider the task of classifying $N$ documents as one of two classes (e.g., political vs non-political).
Let $\mathbf{D}$ be an $N \times V$ document-feature matrix, where $V$ is the number of features (the size of the vocabulary).
We use $\mathbf{Z}$, a vector of length $N$, where each entry represents the latent class assigned to the corresponding document.
If document $i$ is assigned to the $k$th class, we have $Z_i = k$, where $k \in \{0, 1\}$; e.g., in our running example,
$k = 1$ represents the class of documents about politics, and $k = 0$ those that are non-political.
Because we use a semi-supervised approach, it can be the case that some documents
are already hand-labeled. This means that the value of $Z_i$ is known for the labeled
documents and is unknown for unlabeled documents.
To facilitate exposition, we assume that the classification goal is binary; however,
our approach can be extended to accommodate
1) a multiclass classification setting, where there are more than two classes and each document needs to be classified into one of them (e.g., classifying news articles into three classes: politics, business, and sports); and
2) modeling more than two classes while keeping the final classification binary; in other words, a hierarchy that maps multiple sub-classes into one class, e.g., collapsing documents about business and sports into a larger class (non-politics) and letting the remaining documents be about politics (the main category of interest).
(For more details, see SI~\ref{si-subsec:em}, \ref{si-sec:multiple_cluster_model}, and \ref{si-sec:multiple_class}.)
The following sets of equations summarize the model:\\
\begin{tcolorbox}[colback=blue!0,colframe = cobalt,title= {\bf Labeled Data}]
{\small
\vspace{-3mm}
\begin{center}
\begin{eqnarray*}
Z_i = k &\sim& \textrm{hand-coded}, \quad k \in \{0,1\} \\
\mathbf{\eta}_{\cdot k} &\stackrel{i.i.d}{\sim} &Dirichlet(\boldsymbol{\beta}_k) \\
\mathbf{D}_{i\cdot} \vert Z_{i} = k &\stackrel{i.i.d}{\sim}& Multinomial(n_i, \boldsymbol{\eta}_{\cdot k}) \\
\end{eqnarray*}
\end{center}
}
\end{tcolorbox}
\vspace{-10mm}
\begin{center}
\Huge{+}
\end{center}
\vspace{-3mm}
\begin{tcolorbox}[colback=blue!0,colframe=cobalt, title= {\bf $\lambda$ $\cdot$ Unlabeled Data}]
{\small
\vspace{-3mm}
\begin{center}
\begin{eqnarray*}
\pi &\sim& Beta(\alpha_0, \alpha_1) \\
Z_i = k&\stackrel{i.i.d}{\sim} &Bernoulli(\pi), \quad k \in \{0,1\} \\
\mathbf{\eta}_{\cdot k} &\stackrel{i.i.d}{\sim}& Dirichlet(\boldsymbol{\beta}_k) \\
\mathbf{D}_{i\cdot} \vert Z_{i} = k &\stackrel{i.i.d}{\sim}& Multinomial(n_i, \boldsymbol{\eta}_{\cdot k}) \\
\end{eqnarray*}
\end{center}
}
\end{tcolorbox}
If document $i$ is unlabeled, we first draw $\pi = p(Z_i = 1)$, the overall probability that any given document belongs to the first class
(e.g., political documents), from a Beta distribution with hyperparameters $\alpha_0$ and $\alpha_1$.
Similarly, for the other class (e.g., non-political documents), we have that $1 - \pi = p(Z_i = 0)$.
Given $\pi$, for each document indexed by $i$, we draw the latent cluster assignment indicator $Z_i$ from a
Bernoulli distribution.
Then, we draw features for document $i$ from a multinomial distribution governed by
the vector $\boldsymbol{\eta}_{\cdot k}$, where $\eta_{v k} = p(D_{i v} | Z_i = k)$, whose prior is the Dirichlet distribution.
If document $i$ is labeled, the main difference with the unlabeled data case is that
$Z_i$ has been hand-coded, and as a result, we do not draw it from a Bernoulli distribution but
the rest of the model's structure remains the same.
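To make the generative process concrete, the following R snippet simulates unlabeled documents from the model. It is an illustrative sketch only, with arbitrary hyperparameter values (a flat Dirichlet prior, a $Beta(2,2)$ prior on $\pi$, and a common document length), not code from our package.
\begin{verbatim}
# Simulate the generative process for unlabeled documents (sketch).
set.seed(1)
V <- 5; N <- 3; n_i <- 20     # vocabulary size, documents, words per doc

pi_ <- rbeta(1, 2, 2)         # pi ~ Beta(alpha0, alpha1)
eta <- cbind(k0 = rgamma(V, 1), k1 = rgamma(V, 1))
eta <- sweep(eta, 2, colSums(eta), "/")  # columns ~ Dirichlet(1, ..., 1)

Z <- rbinom(N, 1, pi_)        # latent class of each document
D <- t(sapply(Z, function(k) rmultinom(1, n_i, eta[, k + 1])))
D                             # N x V document-feature matrix
\end{verbatim}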
It is worth emphasizing that one of the most notorious problems with the implementation of supervised and semi-supervised approaches is the scarcity of labeled data, especially compared to the abundance of unlabeled data.
Due to this imbalance problem, for any classifier to be able to extract signal from the labeled data and not be informed by unlabeled data alone, it is key to devise ways to increase the relative importance of the labeled data. Otherwise, the unlabeled data will mute the signal coming from the labeled data.
Following \cite{nigam2000text}, we down-weight information from unlabeled documents by $\lambda \in [0, 1]$.
Note that when \(\lambda\) is equal to 1, the model treats each document equally, regardless of whether the document is labeled deterministically by a human or probabilistically by the algorithm.
As \(\lambda\) moves from 1 towards 0, the model increasingly down-weights the information that the probabilistically labeled documents contribute to the estimation of \(\boldsymbol{\eta}\) and \(\pi\), such that when \(\lambda\) is 0, the model \textit{ignores} all information from the probabilistically labeled documents and therefore becomes a supervised algorithm (see SI~\ref{si-subsec:em}).
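Schematically, and omitting the prior terms, the resulting objective is a weighted observed-data log-likelihood of the form
\[
\ell_{\lambda}(\pi, \boldsymbol{\eta}) \;=\; \sum_{i \in \mathcal{L}} \log p\left(\mathbf{D}_{i\cdot}, Z_i \mid \pi, \boldsymbol{\eta}\right) \;+\; \lambda \sum_{i \in \mathcal{U}} \log \sum_{k \in \{0,1\}} p\left(\mathbf{D}_{i\cdot}, Z_i = k \mid \pi, \boldsymbol{\eta}\right),
\]
where $\mathcal{L}$ and $\mathcal{U}$ denote the sets of labeled and unlabeled documents, respectively; the exact objective, including the Beta and Dirichlet prior terms, is derived in SI~\ref{si-subsec:em}.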
Finally, because the observed data log-likelihood of our model is difficult to maximize, we use the EM algorithm to
estimate the parameters.\footnote{For a full derivation of the EM algorithm, see SI~\ref{si-subsec:em}.}
\subsection{Active Learning}
\label{subsec:active}
Our active learning algorithm (see Algorithm~\ref{alg:active_learning}) can be split into the following steps: \textit{estimation} of the probability that each unlabeled document belongs to the positive class, \textit{selection} of the unlabeled documents whose predicted class is most uncertain, and \textit{labeling} of the selected documents by human coders.
The algorithm iterates until a stopping criterion is met (Section~\nameref{subsec:active-learning}).
We also describe an optional keyword upweighting feature, where a set of user-provided keywords provides the model with prior information about the likelihood that a word is generated by a given class.
These keywords can either be provided at the outset of the model or identified during the active learning process.
\subsubsection{Estimation}
\label{subsubsec:estimation}
In the first iteration, the model is initialized with a small number of labeled documents.\footnote{While we assume that these documents are selected randomly, the researcher may choose any subset of labeled documents with which to initialize the model.}
The information from these documents is used to estimate the parameters of the model: the probability of a document being of class 1 ($\pi$), and the probability of generating each word given a class, the $V \times 2$ matrix $\boldsymbol{\eta}$.
From the second iteration on, we use information from both labeled and unlabeled documents to estimate the parameters using the EM algorithm, with the log-likelihood of unlabeled documents being down-weighted by $\lambda$, and
with the \(\boldsymbol{\eta}\) and \(\pi\) values from the previous iteration as the initial values.
Using the estimated parameters, we compute the posterior probability that each unlabeled document belongs to class 1.
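As an illustration, the posterior for a single unlabeled document can be computed in log space for numerical stability. The following R sketch uses our notation, but the function and object names are ours and are not the package's internal code:
\begin{verbatim}
# Posterior probability that one unlabeled document is of class 1.
# d: length-V vector of word counts; eta: V x 2 matrix of word-class
# probabilities (columns: class 0, class 1); pi_: estimate of P(Z = 1).
posterior_class1 <- function(d, eta, pi_) {
  log_p1 <- log(pi_)     + sum(d * log(eta[, 2]))
  log_p0 <- log(1 - pi_) + sum(d * log(eta[, 1]))
  1 / (1 + exp(log_p0 - log_p1))   # logistic of the log-odds
}

eta <- cbind(c(0.7, 0.2, 0.1), c(0.1, 0.3, 0.6))  # toy values, V = 3
posterior_class1(d = c(2, 0, 5), eta = eta, pi_ = 0.5)  # about 0.99
\end{verbatim}
The multinomial coefficient cancels in the ratio, so only the count-weighted log-probabilities are needed.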
\subsubsection{Selection}
Using the predicted probability that each unlabeled document belongs to class 1, we use Shannon entropy to determine which probabilistically labeled documents the model is least certain about.
In the binary classification case, this is equivalent to ranking documents by the absolute distance between their class 1 probability and 0.50, with smaller distances indicating greater uncertainty.
Using this criterion, the model ranks all probabilistically labeled documents in descending order of uncertainty.
The \(n\) most uncertain documents are then selected for human labeling, where \(n\) is the number of documents to be labeled by humans at each iteration.
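A minimal R sketch of this selection step (variable names are ours, for illustration):
\begin{verbatim}
# p1: posterior probability of class 1 for each unlabeled document.
p1 <- c(0.97, 0.52, 0.10, 0.45, 0.88)

# Binary Shannon entropy, with the convention 0 * log(0) = 0.
entropy <- function(p) {
  q <- 1 - p
  -ifelse(p > 0, p * log(p), 0) - ifelse(q > 0, q * log(q), 0)
}

n <- 2                                  # documents to label this round
to_label <- order(entropy(p1), decreasing = TRUE)[seq_len(n)]
to_label                                # here: documents 2 and 4
\end{verbatim}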
\subsubsection{Labeling}
A human coder reads each document selected by the algorithm and assigns the ``correct'' label.
For example, the researcher may be asked to label as political or non-political each of the following sentences:
\begin{quotation}
The 2020 Presidential Election had the highest turnout in US history.
Qatar is ready to host the FIFA World Cup this coming November.
\end{quotation}
These newly-labeled documents are then added to the set of human-labeled documents, and the process is repeated from the estimation stage.
\subsubsection{Stopping Rule}
Our method is highly modular and supports a variety of stopping rules.
This includes an internal stability criterion, where stopping is triggered when the change in the internal model parameters becomes sufficiently small, as well as the use of a small held-out validation set to assess the marginal benefit of labeling additional documents on measures of model evaluation such as accuracy or F1.
With either rule, the researcher specifies a bound such that if the change in model parameters or out-of-sample performance is less than that pre-specified bound, the labeling process ends.
We use the out-of-sample validation stopping rule with a bound of 0.01 for the F1 score in our reanalyses in Section~\nameref{sec:reanalysis}.
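The following sketch illustrates the out-of-sample rule in R, with hypothetical F1 values (not results from any of our experiments):
\begin{verbatim}
bound <- 0.01                           # pre-specified threshold
f1_history <- c(0.62, 0.71, 0.76, 0.781, 0.786)  # one value per iteration
iter <- length(f1_history)
stop_now <- iter > 1 &&
  (f1_history[iter] - f1_history[iter - 1]) < bound
stop_now   # TRUE: the last improvement (0.005) is below the bound
\end{verbatim}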
\begin{algorithm}[t!]
\SetAlgoLined \KwResult{Obtain predicted classes of all documents.}
Randomly select a small subset of documents, and ask humans to label them\;
\textcolor{gray}{[\textbf{Active Keyword}]: Ask humans to provide initial keywords}\;
\While{Stopping conditions are not met yet}{
\textcolor{gray}{(1) [\textbf{Active Keyword}]: Up-weight the importance of keywords associated with a class;}\
(2) Predict labels for unlabeled documents using EM algorithm\;
(3) Select documents with the highest uncertainty among unlabeled documents, and ask humans to label them\;
\textcolor{gray}{(4) [\textbf{Active Keyword}]:
Select words most strongly associated with each class, and ask humans to label them;}\
(5) Update sets of labeled and unlabeled documents for the next iteration\;
}
\caption{Active learning with EM algorithm to classify text}
\label{alg:active_learning}
\end{algorithm}
\subsubsection{Active Keyword Upweighting}
\label{subsubsec:keywords}
The researcher also has the option to use an active keyword upweighting scheme, where a set of keywords is used to provide additional information.
This is done by incrementing elements of \(\boldsymbol{\beta}\) (the prior of $\boldsymbol{\eta}$) by \(\gamma\), a scalar value chosen by the researcher.
In other words, we impose a tight prior on the probability that a given keyword is associated with each class.\footnote{See \citet{eshima2020keyword} for a similar approach for topic models.}
To build the set of keywords for each class, 1) \aT\ proposes a set of candidate words, 2) the researcher decides whether they are indeed keywords or not,\footnote{The researcher may also provide an initial set of keywords, and then iteratively adds new keywords.} and 3) \aT\ updates the parameters based on the set of keywords.
To select a set of candidate keywords, \aT\ calculates, for each word, the ratio of the probabilities that it was generated by each class, using the \(\boldsymbol{\eta}\) parameter.
Specifically, it computes $\eta_{vk}/\eta_{vk'}$ for $k \in \{0, 1\}$, with $k'$ the opposite class of $k$, and chooses the top \(m\) words with the highest $\eta_{vk}/\eta_{vk'}$ as candidate keywords to be queried for class $k$.\footnote{Words are excluded from candidate keywords if they are already in the set of keywords or if they have already been judged to be non-keywords. Thus, no word is proposed twice as a candidate keyword.}
Intuitively, words closely associated with the classification classes are proposed as candidate keywords.
For example, words such as ``vote,'' ``election,'' and ``president,'' are likely to be proposed as the keywords for the political class of documents in the classification between political vs. non-political documents.
After \aT\ proposes candidate keywords, the researcher decides whether they are indeed keywords.
This is where the researcher can use her expertise to provide additional information.
For example, she can designate the names of legislators and the acronyms of bills as keywords for the political class.\footnote{See SI~\ref{si-subsec:mislabel-keywords} for a discussion of what happens if the researcher mislabels keywords.}
Using the set of keywords for each class, \aT\ creates a \(V \times 2\) keyword matrix \(\boldsymbol{\kappa}\) where each element \(\kappa_{v,k}\) takes the value of \(\gamma\) if word \(v\) is a keyword for class \(k\), otherwise \(0\).
Before we estimate parameters in each active iteration, we perform a matrix sum \(\boldsymbol{\beta} \gets \boldsymbol{\kappa} + \boldsymbol{\beta}\) to incorporate information from keywords.
The keyword approach therefore effectively upweights our model with prior information about words that the researcher thinks are likely to be associated with one class rather than another.
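The following R sketch illustrates one active keyword iteration with made-up values for $\boldsymbol{\eta}$; the object names are ours and do not correspond to the package's interface:
\begin{verbatim}
vocab <- c("vote", "election", "president", "soccer", "recipe")
eta <- cbind(k0 = c(0.05, 0.05, 0.10, 0.40, 0.40),   # non-political
             k1 = c(0.30, 0.25, 0.30, 0.05, 0.10))   # political
rownames(eta) <- vocab

# Propose the m words with the largest eta_{v,1} / eta_{v,0} ratio.
m <- 2
ratio <- eta[, "k1"] / eta[, "k0"]
candidates <- names(sort(ratio, decreasing = TRUE))[seq_len(m)]
candidates   # "vote" "election": queried to the human coder

# Suppose the coder confirms both; build kappa and upweight the prior.
gam <- 50                                 # the scalar gamma
kappa <- matrix(0, length(vocab), 2,
                dimnames = list(vocab, c("k0", "k1")))
kappa[candidates, "k1"] <- gam
beta_prior <- matrix(1, length(vocab), 2) # flat Dirichlet prior beta
beta_prior <- beta_prior + kappa          # prior now favors the keywords
\end{verbatim}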
\section{Using Machine Learning for Text Classification}
\label{sec:teaching}
\subsection{Encoding Text in Matrix Form}
Suppose that a researcher has a collection of social media text data,
called a corpus, and wishes to classify whether each text in a corpus is
political (e.g., refers to political protest, human rights violations, unfavorable views of a given candidate,
targeted political repression, etc.) or not solely based on the words used in a given observation.\footnote{For
simplicity, the exposition here focuses on a binary classification task, however,
our proposed method can be extended to multiple classes e.g., classifying a
document as either a positive, negative, or neutral position about a candidate.
See Sections \nameref{sec:method} and \nameref{sec:reanalysis}, and Supplementary Information (SI)~\ref{si-sec:multiple_class} for more details.}
Critically, the researcher does not yet know which of the texts are
political or not at this point.
The researcher must first choose how to represent text as a series of \textit{tokens}, and decide which tokens to include in their analysis.
This involves a series of sub-choices, such as whether each token represents an individual word (such as ``political'') or a combination of words (such as ``political party''), whether words should be stemmed or not (e.g., reducing both ``political'' and ``politics'' to their common stem ``politic''), and whether to remove stop-words (such as ``in'', ``and'', ``on'', etc.), choices that are collectively referred to as \textit{pre-processing.}\footnote{For a survey of pre-processing techniques and their implications for political science research, see \citet{denny2018text}.}
The researcher must then choose how to encode information about these tokens in matrix form.
The most straightforward way to accomplish this is using a \textit{bag-of-words} approach, where the corpus is transformed into a document-feature matrix (DFM) \(\mathbf{X}\) with \(n\) rows and \(m\) columns,
where \(n\) is the number of documents and \(m\) is the number of tokens, which are more generally referred to as
features.\footnote{Note that in the machine learning literature, the concept
typically described by the term ``variable'' is communicated using the term ``feature.''}
Each element of the DFM encodes the frequency that a token occurs in a given document.\footnote{An alternative to the bag-of-words approach is to encode tokens as \textit{word embeddings}, where in addition to the matrix summarizing the incidences of words in each document, neural network models are used to create vector representations of each token. In this framework, each token is represented by a vector of some arbitrary length, and tokens that are used in similar contexts in the corpus (such as ``minister'' and ``cabinet'') will have similar vectors. While this approach is more complicated, it yields considerably more information about the use of words in the corpus than the simple count that the bag-of-words approach does. For an accessible introduction to the construction and use of word embeddings in political science research, see \citet{rodriguez2022word}. For a more technical treatment, see \citet{pennington2014glove}.}
Once the researcher chooses how to encode their corpus as a matrix, she is left with a set of features corresponding to each document \(\mathbf{X}\) and an unknown vector of true labels \(Y\), where each element of \(Y\) indicates
whether a given document is political or not.
Then, we can restate the classification question as follows: given \(\mathbf{X}\), how might we best learn \(Y\), that is, whether each document
is political or not?
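As an illustration, a corpus can be turned into a DFM with a few lines of R; the snippet below uses the quanteda package as one reasonable tool choice, and the pre-processing decisions shown are examples rather than recommendations:
\begin{verbatim}
library(quanteda)

corp <- c(doc1 = "The 2020 Presidential Election had the highest turnout.",
          doc2 = "Qatar is ready to host the FIFA World Cup.")

toks <- tokens(corp, remove_punct = TRUE, remove_numbers = TRUE)
toks <- tokens_remove(toks, stopwords("en"))   # drop stop-words
toks <- tokens_wordstem(toks)                  # "election" -> "elect"

X <- dfm(toks)   # n x m document-feature matrix of token counts
X
\end{verbatim}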
\subsection{Supervised vs. Unsupervised Learning}
\label{subsec:sup-unsup}
A researcher must then choose whether to use a
supervised or unsupervised approach to machine learning.\footnote{For a comprehensive discussion on supervised and unsupervised algorithms for the analysis of text as data, we refer the interested reader to \citet{grimm:etal:2022}.}
The supervised approach to this problem would be to (1) obtain true labels
for some of the documents using human coding (e.g., an expert classifies documents such as the following news headline by CNN:
``White House says Covid-19 policy unchanged despite President Biden's comments that the `pandemic is over''' as political or not);
(2) learn the relationship between the text features encoded in the matrix $\mathbf{X}$
and the true label encoded in the vector \(Y\) for the documents with
known labels. In other words, it learns the importance of words such as ``policy'', ``President'', ``Biden'', ``pandemic'' in explaining
whether a document refers to politics or not;\footnote{That is, learn \(P(Y_{\text{labeled}}| \mathbf{X}_{\text{labeled}})\).
This can be accomplished with a variety of models, including e.g. linear or logistic
regression, support vector machines (SVM), Naive Bayes, $K$-nearest neighbor, etc.}
and (3) using the learned association between the text data and the known labels, predict
whether the remaining documents in the corpus (that is, those that were not coded by
a human) are political or not.
In contrast, an unsupervised approach would \textit{not} obtain the true
labels of some of the documents.
Rather, a researcher using an unsupervised approach would choose a
model that \textit{clusters} documents from the corpus that have common patterns
of word frequency.\footnote{Examples of clustering algorithms include \(K\)-means
and Latent Dirichlet Allocation (LDA).}
Using the assignment of documents to clusters, the researcher would then use some
scheme to decide which of the clusters corresponds to the actual outcome of
interest: whether a document is political or not.
The main advantage of a supervised approach over an unsupervised approach is the direct interpretability of results, since an unsupervised approach requires translating clusters into substantive classifications. A supervised approach also allows for a more straightforward evaluation of model performance in terms of the distance between the predictions made by the supervised learning algorithm and the true values of $Y$. Because such an objective measure does not exist in unsupervised learning, the researcher needs to rely on heuristics to assess the adequacy of the algorithm \citep{hast:etal:2009}.\footnote{In most political science
applications of unsupervised learning techniques, the author either is
conducting an exploratory analysis and is therefore uninterested in classification,
or performs an \textit{ad hoc} interpretation of the clusters by reading top examples
of a given cluster, and on that basis infers the classification from the
clustering \citep{knox:etal:2022}.}
On the other hand, the main disadvantage of a supervised approach is that
obtaining labels for the documents in the corpus is often time-consuming and costly.
For example, it requires expert knowledge to classify each document as either political
or non-political. Researchers using an unsupervised approach instead
will avoid this cost since they do not require a set of
labels \textit{a priori}.
Semi-supervised methods combine the strengths of supervised and unsupervised approaches to improve classification \citep{mille:uyar:1997, nigam2000text}.
These methods
are particularly useful in situations where there is a large amount of
unlabeled data, and acquiring labels is costly.
A semi-supervised model proceeds similarly to the supervised
approach, with the difference being that the model learns the
relationship between the matrix of text data $\mathbf{X}$ and the
classification outcome \(Y\) using information from both the labeled
and unlabeled data.\footnote{While $Y$ is not observed for
the unlabeled data, these observations do contain information
about the joint distribution of the features $\mathbf{X}$,
and as such can be used with labeled data to increase the accuracy of a text
classifier \citep{nigam2000text}.}
Since a supervised approach learns the
relationship between the labels and the data solely from
the labeled documents, a classifier trained with a supervised approach
may be less accurate than if it were provided information from
both the labeled and unlabeled documents \citep{nigam2000text}.
\subsection{Discriminative vs. Generative Models}
\label{subsec:disc-gen}
In addition to choosing a supervised, unsupervised, or semi-supervised approach,
a researcher must also choose whether to use a discriminative or generative model.
As noted by \citet{ng:jord:2001} and \citet{bish:lass:2007},
when using a discriminative model (e.g., logistic regression, SVM, etc.), the
goal is to directly estimate the probability of the classification outcomes \(Y\)
given the text data $\mathbf{X}$ i.e., directly estimate \(p(Y| \mathbf{X})\).
In contrast, when using a generative model (e.g., Naive Bayes), learning the
relationship between the \(Y\) and $\mathbf{X}$ is a two-step process.
In the first step, the likelihood of the matrix of text data $\mathbf{X}$ and
outcome labels \(Y\) is estimated given the data and a set of parameters \(\theta\)
that indicate structural assumptions about how the data is generated
i.e., \(p(\mathbf{X}, Y | \theta)\) is directly estimated.
In the second step, the researcher uses Bayes' rule to calculate the probability of
the outcome vector given the features and the learned distribution of
the parameters i.e., \(p(Y| \mathbf{X}; \theta)\).
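Schematically, the two steps combine via Bayes' rule as
\[
p(Y \mid \mathbf{X}; \theta) \;=\; \frac{p(\mathbf{X}, Y \mid \theta)}{\sum_{Y'} p(\mathbf{X}, Y' \mid \theta)} \;\propto\; p(\mathbf{X} \mid Y; \theta)\, p(Y \mid \theta),
\]
so that the class probabilities follow directly from the estimated joint distribution.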
In addition to allowing for the use of unlabeled data (which reduces labeling costs),
one of the main benefits of a generative rather than a discriminative model is that
the researcher can include information they know about the data
generating process by choosing appropriate functional forms.\footnote{This is
particularly true when e.g., the researcher knows that the data has a complicated
hierarchical structure since the hierarchy can be incorporated directly into the generative model.}
This can help prevent overfitting when the amount of data in a corpus is small.\footnote{
Overfitting occurs when a model learns to predict classification outcomes based on patterns
in the training set (i.e., the data used to fit the model) that do not generalize to the broader universe of cases to be classified. A
model that is overfitted may predict the correct class with an extremely high degree of accuracy
for items in the training set, but will perform poorly when used to predict the class for items
that the model has not seen before.
}
Conversely, because it is not necessary to model the data generating process directly, the
main benefit of a discriminative rather than generative model is simplicity (in general
it involves estimating fewer parameters).
Discriminative models are therefore appropriate in situations where the amount of data
in a corpus is very large, and/or when the researcher is unsure about the data-generating
process, which could lead to mis-specification \citep{bish:lass:2007}.\footnote{Another benefit of generative
models is that they can yield better estimates
of how certain we are about the relationship between the outcome and the features. This is the case
when a researcher uses an inference algorithm like Markov Chain Monte Carlo (MCMC) that
learns the entire distribution for each of the parameters, rather than only point estimates.}
\subsection{Model Evaluation}
\label{subsec:model-eval}
A researcher must also decide when she is satisfied with the predictions generated by the model.
In most circumstances, the best way to evaluate the performance of a classification algorithm is to reserve a subset of the corpus, sometimes referred to as a validation and/or test set.
At the very beginning of the classification process, the researcher puts aside and labels a set of randomly chosen documents that the active learning algorithm does not have access to.\footnote{It is important to use a set-aside validation set for testing model performance, rather than a subset of the documents used to train the model, to avoid \textit{overfitting}.}
Then, after training the model on the remainder of the documents (often called the training set), the researcher should generate predictions for the documents in the validation set using the trained model.
By comparing the predicted labels generated by the model to the actual labels, the researcher can evaluate how well the model does at predicting the correct labels.
A common tool for comparing the predicted labels to the actual labels is a \textit{confusion matrix}.
In a binary classification setting, a confusion matrix is a 2-by-2 matrix whose rows correspond to the actual label and whose columns correspond to the predicted label. Returning to our running example, where the task is to predict whether documents are political or not, Table~\ref{tab:conf-mat} shows the corresponding confusion matrix. In this scenario, True Positives (TP) are the number of documents that the model predicts to be about politics and that are in fact labeled as such.
Correspondingly, True Negatives (TN) are the number of documents that the model predicts to be non-political and that are labeled as such in the validation set.
A False Negative (FN) occurs when the model classifies a document as non-political, but according to the validation set, the document is about politics.
Similarly, a False Positive (FP) occurs when the model classifies as political a document that is non-political.
\begin{table}
\centering
\begin{tabular}{cc|cc}
\multicolumn{2}{c}{}
& \multicolumn{2}{c}{Predicted Label} \\
& & Political & Non-political \\
\cline{2-4}
\multirow{2}{*}{Actual Label}
& Political & True Positive (TP) & False Negative (FN) \\
& Non-political & False Positive (FP) & True Negative (TN) \\
\end{tabular}
\caption{\textbf{Confusion Matrix: Comparison of the Predictions of a Classifier to Documents' True Labels}}
\label{tab:conf-mat}
\end{table}
Using the confusion matrix, the researcher can calculate a variety of evaluation statistics.
Some of the most common of these are accuracy, precision, and recall.
Accuracy is the proportion of documents that have been correctly classified.
Precision is used to evaluate the false positivity rate and is the proportion of the model's positive classifications that are true positives.
As the number of false positives increases (decreases), precision decreases (increases).
Recall is used to evaluate the false negativity rate, and is the proportion of the actual positive documents that are true positives.
As the number of false negatives increases, recall decreases, and \textit{vice-versa}.
Accuracy, precision, and recall can be formally calculated as:
\[
\text{Accuracy} = \frac{\text{TP} + \text{TN}}{\text{TP} + \text{TN} + \text{FP} + \text{FN}} \qquad
\text{Precision} = \frac{\text{TP}}{\text{TP} + \text{FP}} \qquad
\text{Recall} = \frac{\text{TP}}{\text{TP} + \text{FN}}
\]
When the proportion of political and non-political documents in a corpus is balanced,
accuracy is an adequate measure of model performance.
However, it is often the case in text classification that the corpus is unbalanced, and the proportion of
documents associated with one class is low.
When this is the case, accuracy does a poor job at model evaluation.
Consider the case when 99 percent of documents are non-political, and 1 percent are about politics.
A model that simply predicts that all documents belong to the non-politics class would have an accuracy score of 0.99 but would be poorly suited to the actual classification task. In contrast, recall would be 0 (and precision would be 0 or undefined, since the model makes no positive predictions), which would signal to the researcher that the model does a poor job
at classifying documents as political. Precision and recall are not perfect measures of model performance, however.
There is a fundamental trade-off involved in controlling the false positivity and false negativity rates: you can have few false positives if you are content with an extremely high number of false negatives, and you can have few false negatives if you are content with an extremely high number of false positives.
Recognizing this trade-off, researchers often combine precision and recall scores to find a model that has the optimal balance of the two.
One common way of combining the two is an F1 score, which is the harmonized mean of precision and recall.
Formally, the F1 score is calculated as:
\[\text{F1} = 2 \cdot \frac{\text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}\]
The F1 score evenly weights precision and recall, and so a high F1 score would indicate that both the false negativity and false positivity rate are low.
It is worth noting that these evaluation measures (accuracy, precision, recall, and the F1 score) are computed using labeled data (``ground truth''), which in practice
is available only for a limited subset of the records.
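To fix ideas, the following R sketch computes these quantities from a hypothetical confusion matrix for an imbalanced corpus (the counts are invented for illustration):
\begin{verbatim}
TP <- 40; FN <- 10; FP <- 20; TN <- 930

accuracy  <- (TP + TN) / (TP + TN + FP + FN)                # 0.97
precision <- TP / (TP + FP)                                 # 0.667
recall    <- TP / (TP + FN)                                 # 0.80
f1        <- 2 * precision * recall / (precision + recall)  # 0.727

round(c(accuracy = accuracy, precision = precision,
        recall = recall, f1 = f1), 3)
\end{verbatim}
Note how the high accuracy masks the much weaker precision and recall, echoing the imbalance problem discussed above.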
\subsection{Active vs. Passive Learning}
\label{subsec:active-learning}
Finally, if the researcher in our running example decides to use a supervised or
semi-supervised approach for predicting whether documents in their corpus are political
or not, the next step is to decide how many documents to label, and how to choose them.
Since labeling is the bottleneck of any classification task of this kind, it is critical that she also selects an approach to label observations that minimizes the number of documents to be labeled in order to produce an accurate classifier.
There are two popular strategies for retrieving cases to be labeled: 1) passively
and 2) actively. The difference amounts to whether the researcher chooses which documents to label at random (i.e., chooses documents
\textit{passively}) or according to some selection scheme (i.e., chooses documents
\textit{actively}). Ideally, an active approach requires fewer labeled documents than a passive approach to achieve the same level of
accuracy.
\citet{cohn:etal:1994} and \citet{lewi:gale:1994}
established that a good active learning algorithm should be fast, and should reliably
choose documents for labeling that provide more information to
the model than a randomly chosen document, particularly in situations when the amount
of labeled data is scarce.\footnote{See also \cite{dasg:2011, settles2011active, hann:2014, hino:2021} and the references therein.}
One of the most studied active learning approaches
is called \textit{uncertainty sampling} \citep{lewi:gale:1994, yang:2015}, a process
where documents are chosen for labeling based on how uncertain the model
is about the correct classification category
for each document in the corpus.\footnote{This is just one of many possible approaches.
Other uncertainty-based approaches to active learning include query-by-committee, variance reduction,
expected model change, etc. We refer the interested reader to~\citet{settles2011active} for an
accessible review on active learning and \citet{hann:2014} for a more technical exposition.
}
As noted above, an active learning process using uncertainty sampling alternates between estimating
the probability that each document belongs to a particular classification outcome, sampling
a subset of the documents that the model is most uncertain about for labeling,\footnote{
While our presentation has focused on labeling one observation per iteration,
exactly how many observations to select and label at each
active iteration is also an important practical consideration for any researcher.
As noted by \citet{hoi:etal:2006}, to reduce the cost of retraining the model per instance of labeling, labeling many
documents per iteration (as a batch) is the best approach. This is especially
important when working with a large amount of data.}
then estimating the probabilities again using the information from the
newly labeled documents. In our running example, a researcher is interested in
classifying documents as political (P) or non-political (N), and needs to decide how to prioritize
her labeling efforts.
As shown in Figure~\ref{fig:active:passive} (Panel A), imagine
there are two new data points to be labeled (denoted by ``$\circ$'' and ``$\ast$'').
A passive learning algorithm would give equal labeling priority to both (Panel B). However, an active approach
would give priority to ``$\circ$'', as the classifier is more uncertain about the
label of ``$\circ$'' than about that of ``$\ast$'' (which is surrounded by many non-political documents).
\begin{figure}
\begin{center}
\includegraphics[width = 16cm, height = 6.5cm]{./inputs/figs/ActivePassive.pdf}
\end{center} \vspace{-16mm}
\caption{\textbf{Passive vs. Active Learning.} For a classifier defined in two dimensions, Panel A illustrates
the task: classify unlabeled documents (denoted by $\circ$ and $\ast$) as Political (P) or Non-political (N). A
passive learning algorithm will request the labels of $\circ$ and $\ast$ with equal probability (Panel C). In contrast, in an
active learning approach, $\circ$ will be prioritized for labeling, as it is located in the region where the classifier is most
uncertain (shaded region).} \label{fig:active:passive}
\end{figure}
A critical question for a researcher using an iterative algorithm is when to stop labeling.
Many active learning algorithms resort to
heuristics such as a fixed-budget approach, which stops when
the number of newly labeled data points reaches a predetermined size. The problem with
such an approach is that it may lead to under- or over-sampling.\footnote{This is due to the fact the fixed budget
has not been set using an optimality criterion other than to stop human coding at some point. See \citet{ishi:hide:2020} for further discussion of this point.}
One popular strategy is to randomly label a subset of documents at the beginning of the process, which is then used for assessing the performance of the classifier on data that the model has not seen.\footnote{For a discussion of this approach in our own application, see Section~\nameref{subsec:model-eval}.}
With this approach, the process stops when the difference in measures of out-of-sample accuracy
between two consecutive iterations does not surpass a certain threshold pre-established by the researcher (e.g.,
the F1 score does not improve in more than 0.01 units from iteration to iteration) \citep{alts:bood:2019}.
If labeled data do not exist or cannot be set aside for testing due to their scarcity, the researcher can instead adopt a stopping rule under which
the algorithm stops once the in-sample predictions generated by the model (i.e., using the documents that have been labeled by the researcher during the active learning process) do not change from
one iteration to the next.
This is often referred to as a stability-based method \citep{ishi:hide:2020}.
With all these concepts in mind, in the next section we describe our proposed approach, with a special
focus on the flexibility it affords a researcher both to balance the tradeoffs of working with labeled and unlabeled data and to use existing domain expertise to improve classification through keyword upweighting.
\section{Validation Performance}
\label{sec:performance}
This section compares the performance of \aT\ and other classification methods.
First, we compare active vs. passive learning as well as semi-supervised vs. supervised learning.
For semi-supervised learning, we use \aT\ with $\lambda = 0.001$.
For supervised learning, we use active Support Vector Machines (SVM) from \citet{millerActiveLearningApproaches2020} with margin sampling.
Then, we compare classification and time performance between \aT\ and an off-the-shelf version of BERT, a state-of-the-art text classification model.
Furthermore, we show how keyword upweighting can improve classification accuracy.
We compare the classification performance on the following documents: internal forum conversations of Wikipedia editors (class of interest: toxic comment), BBC News articles (political topic), the United States Supreme Court decisions (criminal procedure), and Human Rights allegations (physical integrity rights allegation).\footnote{More information about preprocessing and descriptions of the datasets are in SI~\ref{si-sec:validation_specification}.}
We use 80\% of each dataset for the training data and hold out the remaining 20\% for evaluation.
Documents to be labeled are sampled only from the training set, and documents in the test set are never used to train the classifier, even in our semi-supervised approach.
The out-of-sample F1 score is calculated using the held-out testing data.
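The following R sketch shows this evaluation protocol on a toy corpus (object names and label counts are illustrative):
\begin{verbatim}
set.seed(123)
n <- 1000                          # documents in a toy corpus
y <- rbinom(n, 1, 0.2)             # hypothetical ground-truth labels
train_idx <- sample(n, floor(0.8 * n))
y_train <- y[train_idx]            # labels available to the coder
y_test  <- y[-train_idx]           # held out for the F1 evaluation
# Documents to be labeled are drawn only from the training portion;
# the out-of-sample F1 score is computed on the held-out 20%.
\end{verbatim}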
\subsection{Comparison between \aT\ and Active SVM}
Figure \ref{fig:main} shows the results from four model specifications, each representing one of the combinations of active or passive learning, and semi-supervised or supervised learning.
The first choice is between active learning (solid lines) vs passive learning (dashed lines).
With active sampling, the next set of documents to be labeled is selected based on the entropy of the predicted class probabilities when we use our mixture model, and based on margin sampling when we use SVM as the underlying classification method.
The second choice is between our semi-supervised learning (darker lines) and off-the-shelf supervised learning (lighter lines).
For supervised learning, we replicate the results from \citet{millerActiveLearningApproaches2020}, which uses SVM as the classifier.
Each panel represents model performance in one of four datasets, with the number in parentheses indicating the proportion of documents associated with the class of interest using ground-truth labels in each dataset.
The y-axis indicates the average out-of-sample F1 score across 50 Monte Carlo iterations, and the x-axis shows the total number of documents labeled, with 20 documents labeled at each sampling step.\footnote{We simulate human coders who label all documents correctly at the labeling stage; in practice, humans can make mistakes. SI~\ref{si-subsec:mislabel_documents} shows that honest (random) mistakes in the labeling of documents can hurt classification performance.}
Among the four models, the combination of active learning with the mixture model (\aT\ in Figure \ref{fig:main}) performs the best with most of the specifications.
The gain from active learning tends to be higher when the proportion of documents in the class of interest is small.
On the Wikipedia corpus with the proportion of the positive labels being 9\%, active learning outperforms passive learning, particularly when the number of documents labeled is smaller.
In SI~\ref{si-sec:main_results_appdx}, we further examine how the class-imbalance influences the benefit of active learning, by varying the proportion of the positive class between 5\% and 50\%.\footnote{See SI~\ref{si-sec:validation_specification} for how we generate data with class-imbalance.}
It shows that active learning consistently performs better than passive learning when the proportion of one class is 5\%.
One limitation is that \aT\ did not perform better than SVM on the human rights corpus when the number of documents labeled is small (less than 200 in Figure~\ref{fig:main}).
We examine how the optional keyword labeling can assist such a situation in \nameref{subsec:keyword_results}.
\begin{figure}[p!]
\includegraphics[width=\linewidth]{svm_em_comparison.pdf}
\caption{\textbf{Comparison of Classification Results across Random and Active Versions of \aT\ and SVM}
}
\label{fig:main}
\end{figure}
\subsection{Comparison between \textit{activeText} and BERT}
In Figure \ref{fig:main_bert}, we compare both classification performance and computational time for \aT, Active SVM, and BERT, a state-of-the-art text classification model.\footnote{For a technical overview of BERT, and the Transformers technology underpinning it, see \citet{devlin2018bert} and \citet{vaswani2017attention}, respectively.}
We trained two sets of models for the F1 and time comparisons, respectively.
The left-hand column of panels shows F1 (the y-axis) as a function of the number of documents labeled (the x-axis), as with the results shown in Figure~\ref{fig:main}.
We trained models using 50 random initializations for the \aT\ and Active SVM models.
We trained the BERT models using 10 random initializations using V100 GPUs on a cluster computing platform.
The F1 comparison in the left-hand column of Figure~\ref{fig:main_bert} shows that for all four of our corpora, \aT\ performs favorably in comparison to our off-the-shelf implementation of the BERT language model.
We show that for each of the BBC, Supreme Court, and Wikipedia corpora (the first, third, and fourth rows of panels), \aT\ significantly outperforms BERT when there are very few documents labeled.
As the number of labeled documents increases, BERT as expected performs well and even exceeds the F1 score of \aT\ in the case of Wikipedia.
As shown in the results for the Human Rights corpus (the second row of panels), however, BERT outperforms \aT\ at all levels of documents labeled.
\begin{figure}[p!]
\includegraphics[width=\linewidth]{bert_combined_comparison.pdf}
\caption{\textbf{Comparison of Classification and Time Results across \aT, Active SVM, and BERT}}
\label{fig:main_bert}
\end{figure}
The right-hand column of panels in Figure~\ref{fig:main_bert} shows computational time, rather than F1, as a function of documents labeled.
For this analysis, our goal was to compare how long it would take a researcher without access to a cluster computing platform or a high-powered GPU to train these models.
To this end, we re-trained the \aT, Active SVM, and BERT models on a base model M1 Macbook Air with 8 GB of RAM and 7 GPU cores.
While the Active SVM and \aT\ models were trained using a single CPU, we used the recent implementation of support for the GPU in M1 Macs in PyTorch\footnote{See \url{https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/}.} to parallelize the training of the BERT model using the M1 Mac's GPU cores.\footnote{Specifically, we trained a \textit{DistilBERT} model (see \citet{sanh2019distilbert}) for three epochs (the number of passes of the entire training dataset BERT has completed) using the default configuration from the Transformers and PyTorch libraries for the Python programming language and used the trained model to predict the labels for the remaining documents for each corpus.}
We also computed the time values \textit{cumulatively} for the \aT\ and Active SVM models, since these models are expected to be fit repeatedly as part of the active learning process, whereas a model like BERT would typically be run only once; as such, we do not calculate its run-time cumulatively.
For the Human Rights and Wikipedia corpora, which each have several hundred thousand entries, we used a random subsample of 50,000 documents. For the Supreme Court and BBC corpora, we used the full samples. Finally, we present the time results in logarithmic scale to improve visual interpretation.
The right-hand panel of Figure~\ref{fig:main_bert} shows that the slight advantages of the BERT models come at a cost of several orders of magnitude of computation time. Using the Wikipedia corpus as an example, at 500 documents labeled the baseline \aT\ would have run to convergence 25 times, and the sum total of that computation time would have amounted to just under 100 seconds. With BERT, however, training a model with 500 documents and labeling the remaining 45,500 on an average personal computer would take approximately 10,000 seconds (2.78 hours).
\subsection{Benefits of Keyword Upweighting}
\label{subsec:keyword_results}
In Figure \ref{fig:main}, active learning did not improve the performance on the human rights corpus, and the F1 score was lower than for the other corpora in general.
One reason for the early poor performance of \aT\ may be the length of the documents.
Because each document in the human rights corpus consists of a single sentence, the average document length is shorter than in the other corpora.\footnote{With the population data, the average length of each document is 121 (BBC), 17 (Wikipedia), 1620 (Supreme Court), and 9 (Human Rights) words.}
This means that the model can learn less from each labeled document than in corpora with longer documents.
In situations like this, providing keywords in addition to document labels can improve classification performance because doing so directly shifts the values of the word-class probability matrix, $\boldsymbol{\eta}$, even when the provided keywords do not appear in the already labeled documents.
\begin{figure}[t!]
\includegraphics[width=\linewidth]{fig_keywords.pdf}
\caption{\textbf{Classification Results of \aT\ with and without Keywords}
}
\label{fig:keywords}
\end{figure}
Figure \ref{fig:keywords} compares the performance with and without providing keywords.
The darker lines show the results with keywords and the lighter lines without.
The columns specify the proportion of documents associated with the class of interest: 5\%, 50\%, and the population proportion (16\%).
As in the previous exercises, 20 documents are labeled at each sampling step, and 100 Monte Carlo simulations are performed to average over the randomness induced by the initial set of documents to be labeled.
We simulated the process of a user starting with no keywords for either class, and then being queried with extreme words indexed by $v$ whose $\eta_{vk}/\eta_{vk'}$ is the highest for each class $k$, with up to 10 keywords for each class being chosen based on the estimated $\boldsymbol{\eta}$ at a given iteration of the active process.
To determine whether a candidate keyword should be added to the list of keywords or not, our simulated user checked whether the word under consideration was among the set of most extreme words in the distribution of the `true' $\boldsymbol{\eta}$ parameter, which we previously estimated by fitting our mixture model with the complete set of labeled documents.\footnote{Specifically, the simulated user checked whether the word in question was in the top 10\% of most extreme words for each class using the `true' $\boldsymbol{\eta}$ parameter. If the candidate word was in the set of `true' extreme words, it was added to the list of keywords and upweighted accordingly in the next active iteration.}
The results suggest that providing keywords improves the performance when the proportion of documents is markedly imbalanced across classes.
The keyword scheme improved performance when the number of labeled documents was small on the corpora with 5\% or 16\% (population) of labels associated
with the class of interest. By contrast, it did not on the corpus where both classes were evenly balanced.
These results highlight that our active keyword approach benefits the most when the dataset suffers from serious class-imbalance problems.\footnote{SI~\ref{si-sec:keywords_visual} demonstrates how active keyword works by visualizing the word-class matrix, $\boldsymbol{\eta}$, at each active iteration.}
One caveat is that we provided `true' keywords, in the sense that we used the estimated $\boldsymbol{\eta}$ from a fully labeled dataset.
In practice, researchers have to decide whether candidate keywords are indeed keywords using their substantive knowledge.
In this exercise, we believe that the keywords supplied to our simulation are ones that researchers with substantive knowledge
of physical integrity rights could confidently adjudicate.
For example, keywords such as ``torture,'' ``beat,'' and ``murder'' match our substantive understanding of physical integrity rights violations.
Nevertheless, humans can make mistakes, and some words may be difficult to judge.
Thus, we examined the classification performance under varying amounts of error at the keyword labeling step.
In SI~\ref{si-subsec:mislabel-keywords}, we show that the active keyword approach still improves the classification performance compared to the no-keyword approach -- even in the presence of small amounts (less than 20\%) of ``honest'' (random) measurement error in keyword labeling.
\section{Reanalysis with Fewer Human Annotations}\label{sec:reanalysis}
To further illustrate our proposed approach for text classification, in this section, we reanalyze the results in \cite{gohdes2020repression} and \cite{park:etal:2020}.
We show that via \aT, we arrive at the same substantive conclusions advanced by these authors but using only a small fraction of the labeled data they originally used.
\subsection{Internet Accessibility and State Violence \citep{gohdes2020repression}}\label{subsec:gohdes}
In the article ``Repression Technology: Internet Accessibility and State Violence,'' \cite{gohdes2020repression} argues that higher levels of Internet accessibility are associated with increases in targeted repression by the state. The rationale behind this hypothesis is that through the rapid expansion of the Internet, governments have been able to improve their digital surveillance tools and target more accurately those in the opposition. Thus, even when digital censorship is commonly used to diminish the opposition's capabilities, \cite{gohdes2020repression} claims that digital surveillance remains a powerful tool, especially in areas where the regime is not fully in control.
To measure the extent to which killings result from government targeting operations, \cite{gohdes2020repression} collects 65,274 reports related to lethal violence in Syria. These reports contain detailed information about the person killed, date, location, and cause of death. The period under study goes from June 2013 to April 2015. Among all the reports, 2,346 were hand-coded by Gohdes, and each hand-coded report can fall under one of three classes: 1) government-targeted killing, 2) government-untargeted killing, and 3) non-government killing. Using a document-feature matrix (based on the text of the reports) and the labels of the hand-coded reports, \cite{gohdes2020repression} trained and tested a state-of-the-art supervised decision tree algorithm (extreme gradient boosting, \texttt{xgboost}). Using the parameters learned at the training stage, \cite{gohdes2020repression} predicts the labels for the remaining reports for which the hand-coded labels are not available. For each one of the 14 Syrian governorates (the second largest administrative unit in Syria), \cite{gohdes2020repression} calculates the proportion of biweekly government targeted killings. In other words, she collapses the predictions from the classification stage at the governorate-biweekly level.
We replicate the classification tasks in \cite{gohdes2020repression} using \aT. In terms of data preparation, we adhere to the very same decisions made by \cite{gohdes2020repression}. To do so, we use the same 2,346 hand-labeled reports (1,028 referred to an untargeted killing, 705 to a targeted killing, and 613 to a non-government killing), of which 80\% were reserved for training and 20\% to assess classification performance. In addition, we use the same document-feature matrices.\footnote{\cite{gohdes2020repression} removed stopwords, punctuation, and words that appear in at most two reports, resulting in 1,342 features and a document-feature matrix that is 99\% sparse. The median number of words across documents is 13.} As noted in \nameref{subsec:active}, because \aT\ selects (at random) a small number of documents to be hand-labeled to initialize the process, we conduct 100 Monte Carlo simulations and present the average performance across initializations. As in \nameref{sec:performance}, we set $\lambda = 0.001$. The performance of \aT\ and \texttt{xgboost} is evaluated in terms of the out-of-sample F1 score. Following the discussion in \nameref{subsec:active-learning}, we stopped the active labeling process at the 30th iteration, when the out-of-sample F1 score stopped increasing by more than 0.01 units (our pre-specified threshold). Table \ref{tbl:syria} presents the results.\footnote{The values in the bottom row are based on \cite{gohdes2020repression}, Table A9.} Overall, we find that as the number of active learning steps increases, the classification performance of \aT\ approaches that reported in \cite{gohdes2020repression}. However, the number of hand-labeled documents required by \aT\ is significantly smaller (around one-third) compared to the number used by \cite{gohdes2020repression}.
\begin{table}[h!]
\centering
\caption{Classification Performance: Comparison with \cite{gohdes2020repression} results}
\label{tbl:syria}
\footnotesize
\begin{tabular}{l l l c c c}
& & & \multicolumn{3}{c}{Out-of-sample F1 Score per class} \\
\cmidrule(lr){4-6}
Model & Step & Labels& {Untargeted} & {Targeted} & {Non-Government} \\
\midrule
\aT & 0 & 20 & 0.715 & 0.521 & 0.800 \\
& 10 & 220 & 0.846 & 0.794 & 0.938 \\
& 20 & 420 & 0.867 & 0.828 & 0.963 \\
& \textbf{30} & \textbf{620} & \textbf{0.876} & \textbf{0.842} & \textbf{0.963} \\
& 40 & 820 & 0.879 & 0.845 & 0.961 \\
\midrule
\cite{gohdes2020repression} & & 1876 & 0.910 & 0.890 & 0.940 \\
\end{tabular}
\end{table}
In social science research, text classification is often not the end goal but a means to quantify
a concept that is difficult to measure and to make inferences about the relationship between
this concept and other constructs of interest. In that sense,
to empirically test her claims, \cite{gohdes2020repression} conducts regression analyses
where the proportion of biweekly government targeted killings is the dependent variable and Internet accessibility is the main
independent variable -- both covariates are measured at the governorate-biweekly level. \cite{gohdes2020repression} finds that
there is a positive and statistically significant relationship between Internet access and the proportion of targeted killings by the
Syrian government. Using the predictions from \aT, we construct the main dependent variable and
replicate the main regression analyses in \cite{gohdes2020repression}.
The tables in SI~\ref{si-sec:syria_regression} report the estimated coefficients across the
same model specifications as in \cite{gohdes2020repression}. The point estimates and the standard errors
are almost identical whether we use \texttt{xgboost} or \aT.
Moreover, Figure~\ref{fig:gohdes_fig3} presents the expected proportion of targeted killings by region and Internet accessibility.
\cite{gohdes2020repression} finds that in the Alawi region (known to be loyal to the regime) when Internet access is
at its highest, the expected proportion of targeted killings is significantly smaller compared to other regions of Syria.
In the absence of the Internet, however, there is no discernible difference across regions (see Figure~\ref{fig:gohdes_fig3}, right panel).
Our reanalysis does not change the substantive conclusions of \cite{gohdes2020repression} (Figure~\ref{fig:gohdes_fig3}, left panel);
however, it comes at just a fraction of the labeling effort (620 labeled reports instead of 1,876).
As noted above, these gains come from our active sampling scheme, which selects the most informative
documents to be labeled.
\begin{figure}[t!]
\centering
\includegraphics[width=0.8\textwidth]{inputs/figs/Figure3_expected_proportion_Alawi_combined.pdf}
\caption{\textbf{Replication of Figure 3 in \cite{gohdes2020repression}: Expected Proportion of Targeted Killings, Given
Internet Accessibility and Whether a Region is Inhabited by the Alawi Minority.} The results from \aT\ are presented
in the left panel and those of \cite{gohdes2020repression} are on the right.
}
\label{fig:gohdes_fig3}
\end{figure}
\subsection{Human Rights are Increasingly Plural \citep{park:etal:2020}}
The question that drives the work of \cite{park:etal:2020} is as follows: how has the rapid growth (in the last four decades) of information and communication technologies (ICTs) changed the composition of texts referring to human rights? \cite{park:etal:2020} make the observation that the average sentiment with which human rights reports are written has not drastically changed over time. Therefore, \cite{park:etal:2020} advance the argument that to really understand the effect of changes in access to information on the composition of human rights reports, one must internalize the fact that human rights are plural (bundles of related concepts). In other words, the authors argue that having access to new information has indeed changed the taxonomy of human rights over time, even when the tone has not.
To empirically test this proposition, \cite{park:etal:2020} take a two-step approach. First, via an SVM for text classification with three classes (negative, neutral, and positive sentiment), the authors show that the average sentiment of human rights reports has indeed remained stable even in periods when the amount of available information has grown.\footnote{As explained in Appendix A1 of \cite{park:etal:2020}, negative sentiment refers to text about a clear ineffectiveness in protecting human rights or about violations of human rights; positive sentiment refers to text about clear support (or no restrictions) of human rights; and neutral sentiment refers to stating a simple fact about human rights.} Second, they use a network modeling approach to show that while the average sentiment of these reports has remained constant over time, the taxonomy has drastically changed. In this section, using \aT, we focus on replicating the text classification task of \cite{park:etal:2020} (which is key to motivating their puzzle).
As in the replication of \cite{gohdes2020repression}, we adhere to the same pre-processing decisions made by \cite{park:etal:2020} when working with their corpus of Country Reports on Human Rights Practices from 1977 to 2016 by the US Department of State. In particular, we use the same 4,000 hand-labeled human rights reports (1,182 positive, 1,743 negative, and 1,075 neutral) and the same document-feature matrices (which contain 30,000 features, a combination of unigrams and bigrams). Again, we conduct 100 Monte Carlo simulations and present the average performance across initializations. We stopped the active labeling process at the 25th iteration of our algorithm, as the out-of-sample F1 score (from an 80/20 training/test split) did not increase by more than 0.01 units (see Figure~\ref{si-fig:colaresi_fig2} in SI~\ref{si-sec:colaresi}).\footnote{The only point where we depart from \cite{park:etal:2020} is that we use an 80/20 training/test split, while they use $k$-fold cross-validation. Conducting $k$-fold cross-validation for an active learning algorithm would require over-labeling and would be computationally more expensive (the process would have to be repeated $k$ times). Because of this difference, we refrain from comparing our model performance metrics to theirs.} Using the results from the classification task via \aT, we predict the sentiment scores of 2,473,874 documents. With those predictions, we explore the evolution of the average sentiment of human rights reports per average information density score.\footnote{Information density is a proxy for ICTs based on a variety of indicators related to the expansion of communications and access to information; see Appendix B in \citet{park:etal:2020}.}
Figure~\ref{fig:colaresi_fig1} shows that by labeling only 500 documents with \aT, instead of the 4,000 labeled documents used by \citet{park:etal:2020} to fit their SVM classifier, we arrive at the same substantive conclusion: the average sentiment of human rights reports has remained stable and almost neutral over time. In Figure~\ref{si-fig:colaresi_fig3} of SI~\ref{si-sec:colaresi}, we also show that this result is not an artifact of our stopping rule and is robust to the inclusion of additional labeled documents (e.g., labeling 1,000, 1,500, or 2,000 documents instead of just 500).
\begin{figure}[t!]
\centering
\includegraphics[width=0.8\textwidth]{figs/colaresi_fig1.pdf}
\caption{\textbf{Replication of Figure 1 in \cite{park:etal:2020}: The Relationship Between Information Density and Average Sentiment Score.}}
\label{fig:colaresi_fig1}
\end{figure}
\section{Introduction}
Over the past few years, deep reinforcement learning has gained much popularity as it has been shown to perform better than previous methods on domains with very large state-spaces.
In one of the earliest deep reinforcement learning papers (hereafter the DQN paper), \citet{mnih2015human} presented a method for learning to play Atari 2600 video games, using the Arcade Learning Environment (ALE)~\citep{bellemare13arcade}, from image and performance data alone using the same deep neural network architecture and hyper-parameters for all the games.
DQN outperformed previous reinforcement learning methods on nearly all of the games and recorded better than human performance on most.
As many researchers tackle reinforcement learning problems with deep reinforcement learning methods and propose alternative algorithms, the results of the DQN paper are often used as a benchmark to show improvement.
Thus, implementing the DQN algorithm is important for both replicating the results of the DQN paper for comparison and also building off the original algorithm.
One of the main contributions of the DQN paper was finding ways to improve stability in their artificial neural networks during training.
There are, however, a number of other areas in the implementation of this method that are crucial to its success, which were only mentioned briefly in the paper.
We implemented a Deep Q-Network (DQN) to play the Atari games and replicated the results of \citet{mnih2015human}.
Our implementation, available freely online,\footnote{\url{www.github.com/h2r/burlap_caffe}} runs around 4x faster than the original implementation.
Our implementation is also designed to be flexible with respect to different neural network architectures and problem domains outside of ALE.
In replicating these results, we found a few key insights into the process of implementing such a system.
In this paper, we highlight key techniques that are essential for good performance and replicating the results of \citet{mnih2015human}, including termination conditions and gradient descent optimization algorithms, as well as expected results of the algorithm, namely the fluctuating performance of the network.
\section{Related Work}
The Markov Decision Process (MDP) \citep{bellman1957markovian} is the typical formulation used for reinforcement learning problems.
An MDP is defined by a five-tuple $(\mathcal{S, A, T, R, E})$;
$\mathcal{S}$ is the agent's state-space;
$\mathcal{A}$ is the agent's action-space;
$\mathcal{T}(s, a, s')$ represents the transition dynamics, which returns the probability that taking action $a$ in state $s$ will result in the state $s'$;
$\mathcal{R}(s, a, s')$ is the reward function, which returns the reward received when transitioning to state $s'$ after taking action $a$ in state $s$;
and $\mathcal{E} \subset \mathcal{S}$ is the set of terminal states, which once reached prevent any future action or reward.
The goal of planning in an MDP is to find a policy $\pi : S \rightarrow A$, a mapping from states to actions, that maximizes the expected future discounted reward when the agent chooses actions according to $\pi$ in the environment. A policy that maximizes the expected future discounted reward is an optimal policy and is denoted by $\pi^*$.
A key concept related to MDPs is the Q-function, $Q^\pi : S \times A \rightarrow \mathbb{R}$, that defines the expected future discounted reward for taking action $a$ in state $s$ and then following policy $\pi$ thereafter. According to the Bellman equation, the Q-function for the optimal policy (denoted $Q^*$) can be recursively expressed as:
\begin{equation}
Q^*(s, a) = \sum_{s' \in S} T(s, a, s') \left [ R(s, a, s') + \gamma \max_{a'} Q^*(s', a') \right ]
\end{equation}
where $0 \leq \gamma \leq 1$ is the discount factor that defines how valuable near-term rewards are compared to long-term rewards.
Given $Q^*$, the optimal policy, $\pi^*$, can be trivially recovered by greedily selecting the action in the current state with the highest Q-value: $\pi^*(s) = \argmax_a Q^*(s, a)$. This property has led to a variety of learning algorithms that seek to directly estimate $Q^*$, and recover the optimal policy from it. Of particular note is Q-Learning~\citep{watkins1989learning}.
In Q-Learning, an agent begins with an arbitrary estimate ($Q_0$) of $Q^*$ and iteratively improves its estimate by taking arbitrary actions in the environment, observing the reward and next state, and updating its Q-function estimate according to
\begin{equation}
Q_{t+1}(s_t, a_t) \gets Q_t(s_t, a_t) + \alpha_t \left[ r_{t+1} + \gamma \max_{a'} Q_t(s_{t+1}, a') - Q_t(s_t, a_t) \right],
\end{equation}
where $s_t$, $a_t$, $r_t$ are the state, action, and reward at time step $t$, and $\alpha_t \in (0, 1]$ is a step size smoothing parameter.
Q-Learning is guaranteed to converge to $Q^*$ under the following conditions: the Q-function estimate is represented tabularly (that is, a value is associated with each unique state-action pair), the agent visits each state and action infinitely often, and $\alpha_t \rightarrow 0$ as $t \rightarrow \infty$.
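For concreteness, a minimal tabular Q-learning sketch is given below. The environment interface (\texttt{reset}, \texttt{step}, \texttt{actions}) is a hypothetical stand-in, but the update is exactly the rule above, combined with $\varepsilon$-greedy exploration.
\begin{verbatim}
import random
from collections import defaultdict

def q_learning(env, episodes, alpha=0.1, gamma=0.99, epsilon=0.1):
    Q = defaultdict(float)  # tabular: one value per (state, action)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # epsilon-greedy behavior keeps visiting all actions
            if random.random() < epsilon:
                a = random.choice(env.actions)
            else:
                a = max(env.actions, key=lambda x: Q[(s, x)])
            s2, r, done = env.step(a)
            target = r if done else \
                r + gamma * max(Q[(s2, a2)] for a2 in env.actions)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q
\end{verbatim}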
When the state-space of a problem is large (or infinite), Q-learning's $Q^*$ estimate is often implemented with function approximation, rather than a tabular function, which allows generalization of experience.
However, estimation errors in the function approximation can cause Q-learning, and other ``off policy'' methods, to diverge~\citep{baird1995residual}, requiring careful use of function approximation.
\section{Deep Q-Learning}
\begin{algorithm}[t]
\begin{algorithmic}
\State Initialize replay memory $D$ to capacity $N$
\State Initialize action-value function $Q$ with random weights $\theta$
\State Initialize target action-value function $\hat Q$ with weights $\theta^{-} = \theta$
\For{episode 1, $M$}
\State Initialize sequence $s_1 = \{ x_1 \}$ and preprocessed sequence $\phi_1 = \phi(s_1)$
\For{$t = 1, T$}
\State With probability $\varepsilon$ select a random action $a_t$
\State otherwise select $a_t = \argmax_a Q(\phi(s_t), a; \theta)$
\State Execute action $a_t$ in the emulator and observe reward $r_t$ and image $x_{t+1}$
\State Set $s_{t+1} = s_t, a_t, x_{t+1}$ and preprocess $\phi_{t+1} = \phi(s_{t+1})$
\State Store experience $(\phi_t, a_t, r_t, \phi_{t+1})$ in $D$
\State Sample random minibatch of experiences $(\phi_j, a_j, r_j, \phi_{j+1})$ from $D$
\State Set $y_j = \begin{cases}
r_j & \text{if episode terminates at step $j+1$}\\
r_j + \gamma \max_{a'} \hat Q(\phi_{j+1}, a'; \theta^{-}) & \text{otherwise}
\end{cases}$
\State Perform a gradient descent step on $(y_j - Q(\phi_j, a_j ; \theta))^2$ with respect to the weights $\theta$
\State Every $C$ steps reset $\hat Q = Q$
\EndFor
\EndFor
\end{algorithmic}
\caption{Deep Q-learning with experience replay}
\label{alg:dqn}
\end{algorithm}
Deep Q-Learning (DQN)~\citep{mnih2015human} is a variation of the classic Q-Learning algorithm with 3 primary contributions: (1) a deep convolutional neural net architecture for Q-function approximation; (2) using mini-batches of random training data rather than single-step updates on the last experience; and (3) using older network parameters to estimate the Q-values of the next state.
Pseudocode for DQN, copied from \citet{mnih2015human}, is shown in Algorithm~\ref{alg:dqn}.
The deep convolutional architecture provides a general purpose mechanism to estimate Q-function values from a short history of image frames (in particular, the last 4 frames of experience). The latter two contributions concern how to keep the iterative Q-function estimation stable.
In supervised deep-learning work, performing gradient descent on mini-batches of data is often used as a means to efficiently train the network. In DQN, it plays an additional role.
Specifically, DQN keeps a large history of the most recent experiences, where each experience is a five-tuple $(s, a, s', r, T)$, corresponding to an agent taking action $a$ in state $s$, arriving in state $s'$ and receiving reward $r$; and $T$ is a boolean indicating if $s'$ is a terminal state.
After each step in the environment, the agent adds the experience to its memory.
After some small number of steps (the DQN paper used 4), the agent randomly samples a mini-batch from its memory on which to perform its Q-function updates.
Reusing previous experiences in updating a Q-function is known as {\em experience replay}~\citep{lin1992self}.
However, while experience replay in RL was typically used to accelerate the backup of rewards, DQN's approach of taking fully random samples from its memory for mini-batch updates helps decorrelate the samples from the environment, which otherwise can cause bias in the function approximation estimate.
The final major contribution is using older, or ``stale,'' network parameters when estimating the Q-value for the next state in an experience and only updating the stale network parameters on discrete many-step intervals. This approach is useful to DQN, because it provides a stable training target for the network function to fit, and gives it reasonable time (in number of training samples) to do so. Consequently, the errors in the estimation are better controlled.
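The sketch below, a simplified illustration rather than our actual Java implementation, shows how these two mechanisms fit together: a fixed-capacity memory that is sampled fully at random, and targets computed with the stale parameters $\theta^-$, which are refreshed only every $C$ steps.
\begin{verbatim}
import random
from collections import deque

class ReplayMemory:
    """Fixed-capacity experience store; the oldest experiences are
    evicted automatically once the capacity is reached."""
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def add(self, s, a, r, s2, terminal):
        self.buffer.append((s, a, r, s2, terminal))

    def sample(self, batch_size):
        # fully random sampling decorrelates the mini-batch from
        # the most recent trajectory
        return random.sample(self.buffer, batch_size)

def td_targets(batch, target_q, gamma=0.99):
    # target_q(s) is a hypothetical handle to the stale network
    # Q(., .; theta^-)
    return [r if terminal else r + gamma * max(target_q(s2))
            for (s, a, r, s2, terminal) in batch]
\end{verbatim}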
Although these contributions and the overall algorithm are straightforward conceptually, there are a number of important details to achieving the same level of performance reported by \citet{mnih2015human}, as well as important properties of the learning process that a designer should keep in mind. We describe these details next.
\subsection{Implementation Details}
Large systems, such as DQN, are often difficult to implement since original scientific publications are not always able to describe in detail every important parameter setting and software engineering solution.
Consequently, some important low-level details of the algorithm are not explicitly mentioned or fully clarified in the DQN paper.
Here we highlight some of these key additional implementation details, which are provided in the original DQN code.\footnote{\url{www.github.com/kuz/DeepMind-Atari-Deep-Q-Learner}}
Firstly, every episode is started with a random number, between $0$ and $30$, of ``No-op'' low-level Atari actions (in contrast to the agent's actions, which are repeated for $4$ frames) in order to offset which frames the agent sees, since the agent only sees every $4$th Atari frame.
Similarly, the $m$ frame history used as the input to the CNN is the last $m$ frames that the agent sees, not the last $m$ Atari frames.
Additionally, before any gradient descent steps, a random policy is run for $\num{50000}$ steps to fill in some experiences in order to avoid over-fitting to early experiences.
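A sketch of the episode initialization with random no-ops is shown below, written against ALE's Python interface purely for illustration (our implementation uses the FIFO interface from Java); the $\num{50000}$-step warm-up simply runs a random policy on top of the same reset procedure.
\begin{verbatim}
import random

NO_OP = 0        # ALE's low-level "do nothing" action
MAX_NO_OPS = 30  # random offset of the frames the agent sees

def reset_with_no_ops(ale):
    """Start an episode with 0-30 no-op Atari frames so that the
    4-frame grid the agent observes is offset differently each time."""
    ale.reset_game()
    for _ in range(random.randint(0, MAX_NO_OPS)):
        ale.act(NO_OP)
\end{verbatim}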
Another parameter worth noting is the network update frequency.
The original DQN implementation chose to take a gradient descent step only every $4$ environment steps of the algorithm, as opposed to every step as Algorithm \ref{alg:dqn} might suggest.
Not only does this greatly increase the training speed (since learning steps on the network are far more expensive than forward passes), it also causes the experience memory to more closely resemble the state distribution of the current policy (since 4 new frames are added to the memory between training steps as opposed to 1) and may prevent the network from over-fitting.
\subsection{The Fluctuating Performance of DQN}
A common belief for new users of DQN is that performance should improve fairly stably as more training time is given. Indeed, average Q-learning curves in tabular settings typically improve fairly stably, and supervised deep-learning problems also tend to show fairly steady average improvement as more data becomes available. However, it is not uncommon in DQN to observe ``catastrophic forgetting,'' in which the agent's performance drastically drops after a period of learning. For example, in Breakout, the DQN agent may reach a point of averaging a high score over $400$ and then, after another large batch of learning, it might average a score of only around $200$. The solution \citet{mnih2015human} propose to this problem is to simply save the network parameters that resulted in the best test performance.
One of the reasons this forgetting occurs is the inherent instability of approximating the Q-function over a large state-space using these Bellman updates.
One of the main contributions of \citet{mnih2015human} was fighting this instability using experience replay and stale network parameters, as mentioned above.
Additionally, \citet{mnih2015human} found that clipping the gradient of the error term to be between $-1.0$ and $1.0$ further improved the stability of the algorithm by not allowing any single mini-batch update to change the parameters drastically.
These additions, and others, to the DQN algorithm improve its stability significantly, but the network still experiences catastrophic forgetting.
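Concretely, the clipping is a single operation applied to the per-sample error term before backpropagation; in gradient terms it is equivalent to switching from a squared loss to a Huber-style loss:
\begin{verbatim}
import numpy as np

def clipped_td_error(q_pred, target):
    # no single mini-batch update can move the parameters drastically
    return np.clip(np.asarray(target) - np.asarray(q_pred), -1.0, 1.0)
\end{verbatim}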
Another reason this catastrophic forgetting occurs is that the algorithm is learning a proxy, the Q-values, for a policy instead of approximating the policy directly.
A side effect of this method of policy generation is that a learning update can increase the accuracy of the Q-function approximator while decreasing the performance of the resulting policy.
For example, say the true Q-values for some state $s$ and actions $a_1$ and $a_2$ are $Q^*(s, a_1) = 2$ and $Q^*(s, a_2) = 3$, so the optimal policy at state $s$ would be to choose action $a_2$.
Now say the Q-function approximator with the current parameters, $\theta$, estimates $\hat Q(s, a_1; \theta) = 0$ and $\hat Q(s, a_2; \theta) = 1$, so the policy chosen by this approximator will also be $a_2$.
But, after some learning updates, we arrive at a set of parameters $\theta'$ where $\hat Q(s, a_1; \theta') = 2$ and $\hat Q(s, a_2; \theta') = 1$.
These learning updates clearly decreased the error of the Q-function approximator, but now the agent will not choose the optimal action at state $s$.
Furthermore, the Q-values for different actions of the same state can be very similar whenever those actions have no significant effect on near-term reward.
In such cases, the small differences are the result of longer-term rewards and are therefore critical to the optimal policy.
The consequence of trying to learn an approximator for this type of function is that very small errors in the Q-values can result in very different policies, making it difficult to learn long-term policies.
As an example of this, we will consider Breakout.
Breakout is an Atari game where the player controls a paddle with the goal of bouncing a ball to destroy all the bricks on the screen without dropping the ball.
There is an optimal strategy, which is to destroy the bricks on the side of the screen so that the ball can be bounced above the bricks.
When the ball is above the bricks, the Q-values are much higher than they are when the ball is below the bricks, so we would expect a policy that follows the true Q-values to quickly exploit this strategy.
Each time the paddle bounces the ball, the chosen direction does not affect the short-term reward, since a brick is broken on every bounce.
But the direction does affect the distant reward of bouncing the ball above the bricks by breaking the bricks on the side of the screen.
Thus, it is difficult for a Q-function approximator to learn this long-term optimal policy.
Figure \ref{fig:q_values} shows the Q-values that the best network and a network that performed poorly late into training assign to the same inputs near the beginning of a Breakout game.
The first frame illustrates a scenario where any action could be made and the agent could still prevent the ball from falling a few actions into the future.
But the actions made before bouncing the ball also allow the agent to aim the ball.
The Q-values in this case are very similar for both networks, but the chosen actions are different.
In the second scenario, if the agent does not take the left action, the ball will be dropped, which is a terminal state.
In this case, the Q-values are much more distinct.
Thus, this fluctuating performance is to be expected while running this algorithm.
\begin{figure}
\centering
\captionsetup[subfigure]{labelformat=empty}
\begin{subfigure}[b]{0.2\textwidth}
\includegraphics[width=\textwidth]{breakout_ambiguous}
\end{subfigure}
\begin{subfigure}[b]{0.35\textwidth}
\includegraphics[width=\textwidth]{q_values_best_ambiguous}
\end{subfigure}
\begin{subfigure}[b]{0.35\textwidth}
\includegraphics[width=\textwidth]{q_values_worst_ambiguous}
\end{subfigure}
\begin{subfigure}[b]{0.2\textwidth}
\includegraphics[width=\textwidth]{breakout_unambiguous}
\caption{Example Frames}
\end{subfigure}
\begin{subfigure}[b]{0.35\textwidth}
\includegraphics[width=\textwidth]{q_values_best_unambiguous}
\caption{Best Network}
\end{subfigure}
\begin{subfigure}[b]{0.35\textwidth}
\includegraphics[width=\textwidth]{q_values_worst_unambiguous}
\caption{Worst Network}
\end{subfigure}
\caption{
A comparison of calculated Q-values by the networks that received the best and worst performance during testing that had been trained for at least 30,000,000 steps.
The lighter cross-hatched bar indicates the action with the highest Q-value.
The top frame corresponds to a situation where the actions don't have a significant effect on the near-future reward, while the bottom one shows a situation where the left action must be taken to avoid losing a life.
The ``Release'' action releases the ball at the beginning of every round or does nothing (the same as ``No-op'') if the ball is already in play.
}
\label{fig:q_values}
\end{figure}
\section{Machine Learning Libraries}
Our implementation uses the Brown-UMBC Reinforcement Learning and Planning (BURLAP) Java code library \citep{burlap}.
This library makes it easy to define a factored representation of an MDP and offers many well-known learning and planning algorithms as well as the machinery for creating new ones.
For running and interacting with the Atari video games, we used the Arcade Learning Environment (ALE) \citep{bellemare13arcade}.
ALE is a framework that provides a simple way to retrieve the screen and reward data from the Atari games as well as interact with the game through single actions.
We used ALE's FIFO Interface to interact with ALE through Java.
To run and train our convolutional neural net, we used Berkeley's Caffe (Convolutional Architecture for Fast Feature Embedding) library \citep{jia2014caffe}.
Caffe is a fast deep learning framework for constructing and training neural network architectures.
To interact with Caffe through our Java library, we used the JavaCPP library provided by Bytedeco.\footnote{\url{www.github.com/bytedeco/javacpp}}
\section{Results}
To measure our performance against that of \citet{mnih2015human}, we followed the same evaluation procedure as their paper on three games: Pong, Breakout, and Seaquest.
We trained the agent for $\num{50000000}$ steps (each step is 4 Atari frames) and tested performance every $\num{250000}$ steps.
We saved the network parameters that resulted in the best test performance.
We then evaluated the trained agent with the best-performing network parameters on 30 games with an $\varepsilon$-greedy policy where $\varepsilon = 0.05$.
Each game was also initialized with a random number of ``No-op'' low-level Atari actions between $0$ and $30$.
We then took the average score of those games.
The comparison of our results and those of the DQN paper on Pong, Breakout, and Seaquest are shown in Table \ref{results}.
Each training process took about 3 days for our implementation and about 10 and a half days for the original implementation on our setup.
The differences in performance stem from the differences in gradient descent optimization algorithm and learning rate.
These differences are covered in more detail in section \ref{RMS}.
\begin{table}[t]
\captionsetup{skip=8pt}
\caption{Comparison of average game scores obtained by our DQN implementation and the original DQN paper.}
\label{results}
\centering
\begin{tabular}{lll}
\toprule
Game & Our implementation & The original implementation \\
\midrule
Pong & $19.7 \ (\pm 1.1)$ & $18.9 \ (\pm 1.3)$ \\
Breakout & $339.3 \ (\pm 86.1)$ & $401.2 \ (\pm 26.9)$ \\
Seaquest & $6309 \ (\pm 1027)$ & $5286 \ (\pm 1310)$ \\
\bottomrule
\end{tabular}
\end{table}
\section{Key Training Techniques}
While implementing our DQN, we found there were a couple of methods that were only mentioned briefly in the DQN paper but are critical to the overall performance of the algorithm.
Here we present these methods and explain why they have such a strong impact on training the network.
\subsection{Termination on the Loss of Lives}
In most of the Atari games, there is a notion of ``lives'' for the player, which correspond to the number of times the player can fail (such as dropping the ball in Breakout or running into a shark in Seaquest) before the game is over.
To increase performance, \citet{mnih2015human} chose to count the loss of a life (in the games involving lives) as a terminal state in the MDP during training.
This termination condition was not mentioned in much detail in the DQN paper, but is essential for achieving their performance.
Figure \ref{fig:lives-vs-no-lives} illustrates the difference between training with and without counting losing lives as terminal states in both Breakout and Seaquest.
In Breakout, the average score of the learner that uses the end of lives as terminal states increases much faster than that of the other learner.
However, around halfway through training, the other learner is achieving similar performance, but with much higher variance.
Seaquest is a much more complex game with many more moving sprites and longer episode lengths.
In this game the learner that uses lives as terminal states performs significantly better than the other learner throughout training.
These figures illustrate that this additional prior information greatly benefits early training and stability and, in the more complex games, significantly improves the overall performance.
\begin{figure}
\centering
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{lives-vs-no-lives}
\caption{Breakout}
\end{subfigure}
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{seaquest_lives-vs-no-lives}
\caption{Seaquest}
\end{subfigure}
\caption{The average training test score received for Breakout and Seaquest at each test set when using lost lives as terminal states and when using the end of a game as terminal states (epoch = 250,000 steps).}
\label{fig:lives-vs-no-lives}
\end{figure}
A terminal state in an MDP, as mentioned above, signifies to the agent that no more reward can be obtained.
Almost all the Atari games give positive rewards (Pong is a notable exception, where a reward of $-1$ is received when the enemy scores a point), and thus this addition essentially informs the agent that losing a life should be avoided at all costs.
This additional information given to the agent does seem reasonable: many human players know the first time they play that losing a life in an Atari game is bad, and it is difficult to imagine situations where the optimal policy would be to lose a life.
There are, however, a few theoretical issues with enforcing this
constraint. The first is that this process is no longer Markovian,
as the initial state distribution depends on the current policy. An
example of this is in Breakout: If the agent performed well and broke many bricks before
losing a life, the new initial state for the next life will have many
fewer bricks remaining than if the agent performed poorly and broke very few bricks in the
previous life. The other issue is that this signal gives strong
additional information to the DQN, making it challenging to extend to
domains where such strong signals are not available (e.g., real-world
robotics or more open-ended video games).
Although ALE stores the number of lives remaining for each game, it does not provide this information to all the interfaces.
To work around this limitation, we modified ALE's FIFO Interface to provide the number of lives remaining along with the screen, reward, and terminal state boolean.
Our fork that provides this data to the FIFO interface is available freely online.\footnote{\url{www.github.com/h2r/arcade-learning-environment}}
\subsection{Gradient Descent Optimization} \label{RMS}
One potential issue in using the hyper-parameters provided by \citet{mnih2015human} is that they do not use the same RMSProp definition that many deep learning libraries (such as Caffe) provide.
The RMSProp gradient descent optimization algorithm was originally proposed by Geoffrey Hinton.\footnote{\url{www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf}}
Hinton's RMSProp keeps a running average of the gradient with respect to each parameter.
The update rule for this running average can be written as:
\begin{equation}
MeanSquare(w, t) = \gamma \cdot MeanSquare(w, t-1) + (1 - \gamma) \cdot (\frac{\partial E}{\partial w}(t))^2
\end{equation}
Here, $w$ corresponds to a single network parameter, $\gamma$ is the decay factor, and $E$ is the loss.
The parameters are then updated by:
\begin{equation}
w_t = w_{t-1} - \frac{\alpha}{\sqrt{MeanSquare(w, t) + \varepsilon}} \cdot \frac{\partial E}{\partial w}(t)
\end{equation}
where $\alpha$ corresponds to the learning rate, and $\varepsilon$ is a small constant to avoid division by $0$.
Although \citet{mnih2015human} cite Hinton's RMSProp, they use a
slight variation on the algorithm. The implementation of this can be
seen in their GitHub
repository\footnote{\url{www.github.com/kuz/DeepMind-Atari-Deep-Q-Learner}}
in the NeuralQLearner.lua file on lines 266-273. This variation adds
a momentum factor to the RMSProp algorithm that is updated as follows:
\begin{equation}
Momentum(w, t) = \eta \cdot Momentum(w, t-1) + (1 - \eta) \cdot \frac{\partial E}{\partial w}(t)
\end{equation}
Here, $\eta$ is the momentum decay factor.
The parameter update rule is then modified to:
\begin{equation}
w_t = w_{t-1} - \frac{\alpha}{\sqrt{MeanSquare(w, t) - (Momentum(w, t))^2 + \varepsilon}} \cdot \frac{\partial E}{\partial w}(t)
\end{equation}
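Below is a numpy sketch of this variant, matching the two update rules above; the hyper-parameter values shown are placeholders rather than the ones used in the original code, and setting $\eta = 1$ (so the momentum estimate stays at zero) recovers Hinton's plain RMSProp.
\begin{verbatim}
import numpy as np

def rmsprop_variant_step(w, grad, state, alpha=5e-5,
                         gamma=0.95, eta=0.95, eps=0.01):
    # running mean square of the gradient (Hinton's RMSProp term)
    state["ms"] = gamma * state["ms"] + (1 - gamma) * grad ** 2
    # running mean of the gradient (the added "momentum" term)
    state["mom"] = eta * state["mom"] + (1 - eta) * grad
    # ms - mom^2 approximates a running variance of the gradient
    denom = np.sqrt(state["ms"] - state["mom"] ** 2 + eps)
    return w - alpha * grad / denom

# one running estimate per parameter, initialized to zero
state = {"ms": np.zeros(10), "mom": np.zeros(10)}
\end{verbatim}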
To account for this change in optimization algorithm, we had to set the learning rate much lower than that of the \citet{mnih2015human} implementation (we used $0.00005$ as opposed to their $0.00025$).
We chose not to implement this variant of RMSProp, as it was not trivial to implement with the Java-Caffe bindings and Hinton's version produced similar results.
\section{Speed Performance}
Our implementation trains a bit less than 4x faster than the original implementation, written in Lua using Torch.
The setup on which we tested these implementations uses two NVIDIA GTX 980 TI graphics cards along with an Intel i7 processor.
Our implementation runs at around $985$ Atari frames per second (fps) during training and $1584$fps during testing, while the Lua implementation runs at $271$fps during training and $586$fps during testing on our hardware (note that the algorithm only looks at every $4$th frame, so only one fourth of this number of frames is processed by the algorithm per second).
We attribute a large portion of this performance increase to cuDNN.
The NVIDIA CUDA Deep Neural Network library (cuDNN) is a proprietary NVIDIA library for running forward and backward passes on common neural network layers optimized specifically for the NVIDIA GPUs.
For both Torch and Caffe, cuDNN is available, but not used by default.
We compiled Caffe using cuDNN for our experiments, while the Lua implementation did not use this feature in Torch.
For comparison, when using Caffe without cuDNN, our implementation runs at around $268$fps during training and $485$fps during testing, which is a bit slower than the Lua implementation.
Another area that significantly increased the speed performance of our implementation was preallocating memory before training, which was also done in the original DQN implementation.
Allocating large amounts of memory is an expensive operation, so preallocating memory for large vectors, such as the experience memory and mini-batch data, and reusing it at each iteration significantly decreases this overhead.
\section{Conclusion}
In this paper we have presented a few key areas in the implementation of the DQN proposed by \citet{mnih2015human} that were essential to the overall performance of the algorithm but were not covered in great detail in the original paper, in order to make it easier for researchers to implement their own versions of this algorithm.
We also highlighted some of the difficulties in approximating a Q-function with a CNN over such large state-spaces, namely catastrophic forgetting.
Our implementation is freely available online,\footnote{\url{www.github.com/h2r/burlap_caffe}} and we encourage researchers to use it as a tool for implementing novel algorithms as well as for comparing performance with that of \citet{mnih2015human}.
\subsubsection*{Acknowledgments}
This material is based upon work supported by the National Science Foundation under grant numbers IIS-1637614 and IIS-1426452, and DARPA under grant numbers W911NF-10-2-0016 and D15AP00102.
\medskip
{
\bibliographystyle{plainnat}
\section{Experiments} \label{sec:exp}
We conduct experiments on various kinds of tasks to demonstrate the effectiveness of RGP. We first examine the utility of models trained with RGP. To this end, we apply RGP to the wide ResNet \citep{zagoruyko2016wide} and BERT \citep{devlin2018bert} models, which are representative models for computer vision and natural language modeling. The results are presented in Sections~\ref{subsec:exp_resnet} and~\ref{subsec:exp_bert}. The source code of our implementation is publicly available\footnote{\url{https://github.com/dayu11/Differentially-Private-Deep-Learning}}.
\begin{table}
\small
\renewcommand{\arraystretch}{1.2}
\centering
\caption{Validation accuracy (in \%) of WRN28-4 on vision tasks.} \label{tbl:tbl_resnet}
\begin{tabular}{l|l|l}
\hline
\hline
Method & SVHN & CIFAR10 \\
\hline
Full (N.P.) & 97.2 & 93.3 \\\cline{1-3}
Linear (N.P.) & 41.1 & 39.8 \\\cline{1-3}
RGP (N.P.) & 97.1 & 91.2 \\\cline{1-3}
PowerSGD (N.P.) & 97.1 & 91.9 \\\cline{1-3}
DP-SGD ($\epsilon=8$) & 91.6 & 55.9 \\\cline{1-3}
DP-PowerSGD ($\epsilon=8$) & 91.9 & 57.1 \\\cline{1-3}
RGP-random ($\epsilon=8$) & 91.7 & 51.0 \\\cline{1-3}
RGP ($\epsilon=8$)& 94.2 & 63.4 \\\hline \hline
\end{tabular}
\end{table}
\begin{table}
\small
\centering
\caption{Validation accuracy (in \%) of RGP on vision tasks with varying $\epsilon$. The model architecture is WRN28-4. Numbers in parentheses denote the improvement over DP-SGD.} \label{tbl:vision_vary_eps}
\begin{tabular}{l|l|l|l}
\hline
\hline
Dataset & $\epsilon=2$ & $\epsilon=4$ & $\epsilon=6$ \\\hline
SVHN & 87.3 (+4.1) & 89.7 (+3.4) & 92.3 (+3.9) \\\hline
CIFAR10 & 44.0 (+6.6) & 53.3 (+6.4) & 59.6 (+7.9) \\\hline \hline
\end{tabular}
\end{table}
Moreover, we empirically evaluate the privacy risk of the models via the success rate of \emph{membership inference (MI) attack} \citep{shokri2017membership,sablayrolles2019white,yu2021how}. The results are presented in Section~\ref{subsec:exp_mi}.
\textbf{Implementation.} The number of iterations for the power method is $1$. We use an open-source moments-accountant tool to compute the privacy loss\footnote{\url{https://github.com/tensorflow/privacy}}. For a given setting of hyperparameters, we set $\sigma$ to the smallest value such that the privacy budget permits running the desired number of epochs. All experiments are run on a node with four Tesla V100 GPUs.
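For illustration, this calibration of $\sigma$ can be sketched as a bisection over the accountant; \texttt{epsilon\_spent} below is a placeholder for a moments-accountant query (assumed to be decreasing in $\sigma$), not an actual API of the tool above.
\begin{verbatim}
def smallest_sigma(target_eps, steps, sampling_rate, delta,
                   lo=0.5, hi=8.0, tol=1e-3):
    # bisection: a larger sigma spends less privacy budget, so the
    # feasible region is an interval [sigma*, infinity)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if epsilon_spent(mid, steps, sampling_rate, delta) <= target_eps:
            hi = mid   # mid fits the budget; a smaller sigma may too
        else:
            lo = mid   # too little noise; budget exceeded
    return hi
\end{verbatim}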
\textbf{Baselines.} We implement several baseline algorithms for comparison. For differentially private learning, the first baseline is \emph{DP-SGD} in \citet{abadi2016deep} and the second one is RGP with gradient carriers consisting of random orthonormal vectors, referred to as \emph{RGP-random}. We also include several non-private baselines, i.e., \textbf{(\romannumeral 1)} \emph{Full (N.P.)}: training the full model, \textbf{(\romannumeral 2)} \emph{Linear (N.P.)}: training only the linear classification layer, \textbf{(\romannumeral 3)} \emph{RGP (N.P.)}: training the model with reparametrization but without gradient clipping or adding noise.
We consider differentially private \emph{PowerSGD} \citep{vogels2019powersgd} as another baseline for vision tasks. PowerSGD approximates full gradients with low-rank matrices to reduce the communication cost. It first aggregates the individual gradients and then runs power iterations to find approximations of the principal components of the averaged gradient. Hence, for DP-PowerSGD, it is necessary to first perturb the aggregated gradient and then project it into the low-rank subspace, because otherwise the sensitivity is hard to track after projection. As a consequence, DP-PowerSGD needs to compute the individual gradients explicitly, which incurs the same large memory cost as DP-SGD. In Section~\ref{subsec:exp_resnet}, we add a DP-PowerSGD baseline with the same setting as that of RGP.
Additionally, some ablation experiments are conducted to study the influence of the residual weight and reparametrization ranks, which are relegated to the Appendix~\ref{app:sec:add-exp}.
\subsection{Experiments on Vision Tasks}\label{subsec:exp_resnet}
\textbf{Model.} We use wide ResNet models \citep{zagoruyko2016wide} for the vision tasks. The architecture is WRN28-4 with $\sim$1.5M parameters. All batch normalization layers are replaced with group normalization layers to accommodate private learning.
\textbf{Tasks.} We use two vision datasets: SVHN \citep{netzer2011reading} and CIFAR10 \citep{cifar}. SVHN contains images of $10$ digits and CIFAR10 contains images of 10 classes of real-world objects.
\textbf{Hyperparameters.} We follow the hyperparameters in \citet{zagoruyko2016wide} except using a mini-batch size 1000. This mini-batch size is larger than the default because the averaging effect of large mini-batch reduces the noise variance. The reparametrization rank $r$ is chosen from $\{1, 2, 4, 8, 16\}$. We choose the privacy parameter $\delta<\frac{1}{n}$, and set $\delta=10^{-6}$ for SVHN and $\delta=10^{-5}$ for CIFAR10. We repeat each experiment 3 times and report the average.
\textbf{Results.} The prediction accuracy with $\epsilon=8$ is presented in Table~\ref{tbl:tbl_resnet}. We can see that RGP (N.P.) achieves performance comparable to training the full model (N.P.). When trained with DP, RGP outperforms DP-SGD by a considerable margin while enjoying a much lower memory cost. We also compare RGP with DP-SGD under different privacy budgets ($\epsilon=2/4/6$) and report the results in Table~\ref{tbl:vision_vary_eps}.
\subsection{Experiments on the Downstream Tasks of BERT}\label{subsec:exp_bert}
\textbf{Model.} We use the BERT\textsubscript{BASE} model in \citet{devlin2018bert}, which is pre-trained on a massive corpus collected from the Web. The BERT\textsubscript{BASE} model has $\sim$110M parameters.
\textbf{Tasks.} We use four tasks from the General Language Understanding Evaluation (GLUE) benchmark \citep{wang2018glue}: MNLI, QQP, QNLI, and SST-2. The other GLUE tasks are excluded because their datasets are of small size (<10K) while differentially private learning requires a large amount of data \citep{tramer2021differentially}.
\textbf{Hyperparameters.} We follow the hyperparameters in \citet{devlin2018bert}
except for the mini-batch size and training epochs. The reparametrization rank $r$ is chosen from $\{1, 2, 4, 8\}$. The mini-batch size is 500 for SST-2/QNLI and 1000 for QQP/MNLI. To construct an update with desired mini-batch size, we accumulate the gradients of multiple micro-batches. We choose $\delta = 10^{-5}$ for QNLI/SST-2 and $\delta =10^{-6}$ for QQP/MNLI. The privacy parameter $\epsilon$ is chosen from $\{1, 2, 4, 6, 8\}$. The number of training epochs is 50 for $\epsilon>2$ and $20$ for $\epsilon\leq 2$. We run all experiments 5 times with different random seeds and report the average.
\textbf{Results.} The prediction accuracy of RGP and the other baselines is presented in Table~\ref{tbl:tbl_bert}. The results with varying DP parameter $\epsilon$ are plotted in Figure~\ref{fig:fig_bert}. When trained without a privacy guarantee, RGP (N.P.) achieves test accuracy comparable to fine-tuning the full model. When trained with differential privacy, RGP achieves the best performance. Its accuracy loss compared to the non-private baselines is within $5\%$. The performance of RGP-random is worse than that of RGP because a random subspace does not capture gradient information as effectively as the subspace of historical updates. DP-SGD achieves the worst performance because the high-dimensional noise overwhelms the useful signal in the gradients. We note that DP-SGD also runs the slowest because it needs to compute and store 110M floating-point numbers for each individual gradient.
\begin{figure*}
\centering
\includegraphics[width=0.9\linewidth]{imgs/bert_varying_eps.pdf}
\caption{Prediction accuracy of BERT on downstream tasks with varying $\epsilon$. For MNLI, we plot the average score of two test datasets. }
\label{fig:fig_bert}
\end{figure*}
\begin{table}
\small
\centering
\caption{Prediction accuracy of BERT on downstream tasks (in \%). For DP-SGD, RGP, and RGP-random, the same $\epsilon=8$ is used.}
\begin{tabular}{l|l|l|l|l|l}
\hline
\hline
Method & MNLI & QQP & QNLI & SST-2 & Avg. \\
\hline
Full (N.P.) & 84.8/83.7 & 90.2 & 91.6 & 93.4 & 88.7 \\\cline{1-6}
Linear (N.P.) & 51.9/50.8 & 73.2 & 63.0 & 82.1 & 64.2 \\\cline{1-6}
RGP (N.P.) & 83.6/83.2 & 89.3 & 91.3 & 92.9 & 88.1 \\\cline{1-6}
DP-SGD\tablefootnote{As shown in \citet{li2021large}, DP-SGD performs better when large batch sizes and full precision are used.} & 54.6/53.4 & 74.5 & 63.6 & 82.3 & 65.7 \\\cline{1-6}
RGP-random & 74.6/73.3 & 81.7 & 82.1 & 87.8 & 79.9 \\\cline{1-6}
RGP\tablefootnote{The performance of RGP is also better in the above setup. More details are in \url{https://github.com/dayu11/Differentially-Private-Deep-Learning}. } & 79.1/78.0 & 84.8 & 86.2 & 91.5 & 83.9
\\\hline \hline
\end{tabular}
\label{tbl:tbl_bert}
\end{table}
\subsection{Defense Against Membership Inference Attack}\label{subsec:exp_mi}
\textbf{Setup.} We use membership inference (MI) attacks to empirically evaluate the privacy risk of models trained with and without RGP. Following the membership decision rule in \citet{sablayrolles2019white}, we predict that a sample belongs to the training data if its loss value is smaller than a chosen threshold. To evaluate the MI success rate, we construct an \emph{MI dataset} that consists of the same number of training and test samples. Specifically, the MI dataset contains the whole test set and a random subset of the training set. We further divide the MI dataset evenly into two subsets. One is used to find the optimal loss threshold and the other one is used to evaluate the final attack success rate.
\textbf{Results.}
The MI success rates are presented in Table~\ref{tbl:mi_bert}. For the MNLI, QQP, QNLI, and SST-2 datasets, we conduct MI attacks on fine-tuned BERT\textsubscript{BASE} models. For the SVHN and CIFAR10 datasets, we conduct MI attacks on trained WRN28-4 models. The MI attack on the models trained with RGP ($\epsilon=8$) is no better than random guessing ($50\%$ success rate), which empirically demonstrates the effectiveness of RGP in protecting privacy. Moreover, interestingly, the models trained with low-rank reparametrization alone also achieve a much lower MI success rate than the fully trained model, which indicates a benefit of low-rank reparametrization in terms of privacy protection.
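The attack itself is straightforward to sketch: given the per-sample losses over the balanced MI dataset, one half selects the loss threshold and the other half scores the attack, with a success rate of $0.5$ corresponding to random guessing. A minimal version, assuming the losses are precomputed:
\begin{verbatim}
import numpy as np

def mi_success_rate(member_losses, nonmember_losses, seed=0):
    losses = np.concatenate([member_losses, nonmember_losses])
    member = np.concatenate([np.ones(len(member_losses)),
                             np.zeros(len(nonmember_losses))])
    idx = np.random.default_rng(seed).permutation(len(losses))
    fit, held = idx[: len(idx) // 2], idx[len(idx) // 2:]
    # threshold maximizing attack accuracy on the first half
    cands = np.unique(losses[fit])
    accs = [((losses[fit] < t) == member[fit]).mean() for t in cands]
    t_star = cands[int(np.argmax(accs))]
    # success rate on the held-out half; 0.5 = random guessing
    return ((losses[held] < t_star) == member[held]).mean()
\end{verbatim}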
\section*{Acknowledgements}
Jian Yin is supported by NSFC (U1711262, U1711261, U1811264, U1811261, U1911203, U2001211), Guangdong Basic and Applied Basic Research Foundation (2019B1515130001), Key R\&D Program of Guangdong Province (2018B010107005).
\newpage
\section{Introduction}
A recent line of work \citep{shokri2017membership,carlini2019secret,carlini2020extracting} has exposed the potential privacy risks of trained models, e.g., data extraction from language models. Theoretically, learning with \emph{differential privacy} \citep{dwork2006calibrating} is guaranteed to prevent such information leakage because differential privacy imposes an upper bound on the influence of any individual sample. Empirically, differential privacy also makes learning more resistant to attacks \citep{rahman2018membership,bernau2019assessing, zhu2019deep, carlini2019secret, ma2019data,lecuyer2019certified}.
To learn with differential privacy, many algorithms have been proposed under different settings over the past decade, e.g., \citet{chaudhuri2009privacy,song2013stochastic,agarwal2018cpsgd,wang02019differentially,wang2019differentially,yu2020gradient,phan2020scalable,vietri2020private}, to name a few. Among them, \emph{gradient perturbation} is a popular choice because of its simplicity and wide applicability \cite{abadi2016deep}. In terms of simplicity, gradient perturbation only makes two simple modifications to the standard learning process. It first clips the gradients of individual samples, referred to as individual gradients, to bound the sensitivity and then perturbs the aggregated gradient with random noise. In terms of wide applicability, it does not assume the objective to be convex and hence applies to deep neural networks.
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{imgs/repara.pdf}
\caption{The proposed reparametrization scheme. The residual weight keeps the reparametrized output identical to the normal output and makes $\partial{\bm{L}}$ and $\partial{\bm{R}}$ naturally connected with the normal gradient.}
\label{fig:repara}
\end{figure}
Despite its advantages, there are two challenges when applying gradient perturbation to cutting-edge deep models. First, one needs to compute and store individual gradients. Recent works \citep{dangel2019backpack,Opacus} have developed toolkits to compute individual gradients for a mini-batch of data through a single forward/backward pass, but storing individual gradients consumes a huge amount of memory as each individual gradient requires the same amount of memory as the model itself. Second, both theoretical and empirical utilities of gradient perturbation suffer from bad dependence on the model size \citep{bassily2014differentially, papernot2020tempered,tramer2021differentially} because the intensity of the added noise scales proportionally with the model size.
To tackle these challenges, we reparametrize each weight matrix ${\bm{W}}$ of a deep neural network with a pair of low-rank \emph{gradient carriers} $\{{\bm{L}},{\bm{R}}\}$ and a \emph{residual weight} $\tilde{{\bm{W}}}$, as illustrated in Figure~\ref{fig:repara}. With this reparametrization, the forward and backward signals propagate the same as before. We show that the gradients on ${\bm{L}}$ and ${\bm{R}}$ are naturally connected with the gradient on ${\bm{W}}$. In particular, if the gradient carriers consist of orthonormal vectors, we can construct a projection of the gradient of ${\bm{W}}$ from the gradients of ${\bm{L}}$ and ${\bm{R}}$, which are of low dimension. In other words, we can compute the projection of the gradient without computing the gradient itself. This property could save a huge amount of memory in DP-SGD, where a large batch of individual gradients is computed and stored. We note that this could also be useful in other problems involving statistics of individual gradients, e.g., computing the gradient variance \citep{zhao2015stochastic,balles2016coupling,mahsereci2017probabilistic,balles2018dissecting}, which is out of our scope.
Based on the above framework, we propose \emph{reparametrized gradient perturbation (RGP)} for differentially private learning. Specifically, after the backward process, RGP clips and perturbs the gradients of ${\bm{L}}$ and ${\bm{R}}$, which gives a certain level of privacy guarantee. Then RGP uses the noisy gradients to construct an update for the original weight. Because the gradient-carrier matrices are of much smaller dimension than the original weight matrix, the total intensity of the added noise is significantly smaller, which helps us break the notorious dependence of the utility of differentially private learning on the model dimension.
The key to the reparametrization scheme is how well the gradient projection approximates the original gradient. We argue that the approximation is good if 1) the original gradient of ${\bm{W}}$ is indeed low-rank and 2) its principal subspace aligns with ${\bm{L}}$ and ${\bm{R}}$. The first condition is empirically verified by showing that the gradient of each layer is of low stable rank when training deep neural networks, which has also been exploited for gradient compression in distributed optimization \citep{vogels2019powersgd}. The second condition is guaranteed if ${\bm{L}}$ and ${\bm{R}}$ consist of the principal singular vectors of the original gradient, which, however, would violate differential privacy. Instead, in RGP, we approximately compute a few principal vectors of the historical updates, which are already published and free to use because of the post-processing property of differential privacy, and use them as gradient carriers. We theoretically prove the optimality of this historical-update substitution for linear regression and empirically verify its efficacy for deep neural networks.
With RGP, we can easily train large models with differential privacy and achieve good utility on both the vision and language modeling tasks. For example, we use RGP to train the BERT model \citep{devlin2018bert} on downstream language understanding tasks. We establish rigorous differential privacy guarantee for such large model with a modest drop in accuracy. With a privacy budget $\epsilon=8$, we achieve an average accuracy $83.9\%$ on downstream tasks, which is within $5\%$ loss compared to the non-private baseline. We also use \emph{membership inference attack} \citep{shokri2017membership,sablayrolles2019white} to evaluate the empirical privacy risks and demonstrate that the models trained with RGP are significantly more robust to membership inference attack than the non-private ones.
Overall, our contribution can be summarized as follows.
\begin{enumerate}[itemsep=0mm]
\item We propose reparametrized gradient perturbation (RGP) that reduces the memory cost and improves the utility when applying DP on large models.
\item We give a detailed analysis of the properties of RGP. We propose using the historical updates to find the principal subspace and give theoretical arguments.
\item Empirically we are able to efficiently train BERT with differential privacy on downstream tasks, and achieve both good accuracy and privacy protection.
\end{enumerate}
\subsection{Notations}
We introduce some basic notations. Vectors and matrices are denoted with bold lowercase letters, e.g., ${\bm{v}}$, and bold capital letters, e.g., ${\bm{M}}$, respectively. Sets are denoted with double-struck capital letters, e.g., ${\mathbb{S}}$. We use $[n]$ to denote the set of positive numbers $\{1,...,n\}$. Some preliminaries on differential privacy are presented in Appendix \ref{app:sec:preliminary}.
\section{Two Properties of the Gradient Matrix} \label{sec:grad_property}
We show two properties of the gradients of modern deep neural networks to justify the design choices of Algorithm~\ref{alg:dp_lrk_repara}. The first property is that the gradient of each weight matrix is naturally low-rank, which motivates us to use low-rank reparameterization. The second property is that the gradient of a weight matrix along the optimization path could stay in the same subspace, which motivates us to use the historical updates to generate the gradient-carrier matrices.
\subsection{Gradient Matrix Is of Low Stable Rank}
\label{subsec:grad_is_lrk}
Recent works have used low-rank approximation to compress the gradients and reduce the communication cost in distributed optimization \citep{yurtsever2017sketchy, wang2018atomo, karimireddy2019error, vogels2019powersgd}. These works provide good motivation for exploiting the low stable rank property of the gradients of weight matrices.
We further verify this low-rank property, which also gives a hint about how to set the reparametrization rank $r$ in practice. We empirically compute the stable rank ($\|\cdot\|_F^2/\|\cdot\|^2_{2}$) of the gradients of the weight matrices in a BERT model and a wide ResNet model. The dataset for the BERT model is SST-2 from the GLUE benchmark \citep{wang2018glue}. The dataset for the wide ResNet model is CIFAR-10 \cite{cifar}. The experimental setup can be found in Section~\ref{sec:exp}. We plot the gradient stable rank in Figure~\ref{fig:stbl_rank}.
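The quantity we track is simple to compute; a numpy sketch:
\begin{verbatim}
import numpy as np

def stable_rank(grad):
    # ||G||_F^2 / ||G||_2^2: small when a few principal directions
    # carry most of the gradient's energy
    fro2 = np.linalg.norm(grad, "fro") ** 2
    spec2 = np.linalg.norm(grad, 2) ** 2  # top singular value, squared
    return fro2 / spec2
\end{verbatim}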
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{imgs/grad_stable_rank.pdf}
\caption{Gradient stable rank ($\|\cdot\|_F^2/\|\cdot\|^2_{2}$). For ResNet, we plot the gradient rank of the classification layer and the first residual block. For BERT, we plot the gradient rank of the first fully-connected block and the first attention block.}
\label{fig:stbl_rank}
\end{figure}
As shown in Figure~\ref{fig:stbl_rank}, the gradients of both the BERT and ResNet models are naturally of low stable rank over the training process. Hence, low-rank gradient-carrier matrices would have a small approximation error if we find the right gradient subspace. In Section \ref{subsec:historical_grad}, we argue that historical updates are a good choice for identifying the gradient subspace.
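For reference, the stable rank reported in Figure~\ref{fig:stbl_rank} can be computed with a few lines of NumPy; the following is a minimal illustrative sketch (names and sizes are our own):
\begin{verbatim}
import numpy as np

def stable_rank(grad):
    """Stable rank ||G||_F^2 / ||G||_2^2 of a gradient matrix G."""
    return np.linalg.norm(grad, 'fro') ** 2 / np.linalg.norm(grad, 2) ** 2

# a rank-8 matrix plus small noise has a small stable rank
G = np.random.randn(512, 8) @ np.random.randn(8, 512) \
    + 0.01 * np.random.randn(512, 512)
print(stable_rank(G))  # on the order of the intrinsic rank 8
\end{verbatim}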
\subsection{Historical Gradients Are Correlated}
\label{subsec:historical_grad}
Suppose that ${\bm{W}}_t$ is a weight matrix at step $t$, and $\partial {\bm{W}}_t$ is the gradient computed on a batch of data ${\mathbb{D}}$, with $r$-SVD $\partial {\bm{W}}_t = {\bm{U}}_t \Sigma_t {\bm{V}}_t^T$. For another step ${t'}$ with $t'>t$ and the same data ${\mathbb{D}}$, we have ${\bm{W}}_{t'}, \partial {\bm{W}}_{t'}$ and an $r$-SVD: $\partial {\bm{W}}_{t'} = {\bm{U}}_{t'} \Sigma_{t'} {\bm{V}}_{t'}^T$. We can project $\partial {\bm{W}}_{t'}$ onto the principal subspace of $\partial {\bm{W}}_t$ or of $\partial {\bm{W}}_{t'}$ itself and measure the projection residuals
\begin{flalign}
&\|({\bm{I}} - {\bm{U}}_t{\bm{U}}_t^T)\partial {\bm{W}}_{t'}({\bm{I}}-{\bm{V}}_t{\bm{V}}_t^T)\|_F/\|\partial {\bm{W}}_{t'}\|_F,\label{eq:proj_res}
\\
&\|({\bm{I}} - {\bm{U}}_{t'}{\bm{U}}_{t'}^T)\partial {\bm{W}}_{t'}({\bm{I}}-{\bm{V}}_{t'}{\bm{V}}_{t'}^T)\|_F/\|\partial {\bm{W}}_{t'}\|_F,\label{eq:self_proj_res}
\end{flalign}
where Eq~(\ref{eq:proj_res}) is the projection residual using historical gradient, referred to as \emph{historical projection residual}, and Eq~(\ref{eq:self_proj_res}) is the projection residual using current gradient, referred to as \emph{self projection residual}. A small difference between Eq~(\ref{eq:proj_res}) and~(\ref{eq:self_proj_res}) indicates that the principal subspace of the current gradient aligns with that of the historical gradient.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{imgs/proj_residual.pdf}
\caption{Projection residual with reparametrization rank $8$. We use a fixed mini-batch with $500$ samples. For ResNet, we use the input convolution layer. For BERT, we use the second matrix of the FC layer in the first encoder block. The definition of historical/self projection residual is in Eq~(\ref{eq:proj_res}) and~(\ref{eq:self_proj_res}). }
\label{fig:proj_res}
\end{figure}
We empirically examine the projection residual of a BERT model and a wide ResNet model. The tasks are the same as in Section~\ref{subsec:grad_is_lrk}. At the beginning of each epoch, we evaluate the projection residual between the current gradient and the gradient of the previous epoch. The results are plotted in Figure~\ref{fig:proj_res}. We can see that the difference between Eq~(\ref{eq:proj_res}) and~(\ref{eq:self_proj_res}) is small for both models.
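For completeness, the residuals in Eq~(\ref{eq:proj_res}) and~(\ref{eq:self_proj_res}) can be computed as in the following NumPy sketch (illustrative; the rank and names are our own choices):
\begin{verbatim}
import numpy as np

def projection_residual(grad_new, grad_ref, r=8):
    """Project grad_new off the rank-r principal subspaces of grad_ref."""
    U, _, Vt = np.linalg.svd(grad_ref, full_matrices=False)
    Ur, Vr = U[:, :r], Vt[:r, :].T
    resid = grad_new - Ur @ (Ur.T @ grad_new)  # (I - U U^T) grad_new
    resid = resid - (resid @ Vr) @ Vr.T        # ... (I - V V^T)
    return np.linalg.norm(resid) / np.linalg.norm(grad_new)

# historical residual: projection_residual(grad_now, grad_prev)
# self residual:       projection_residual(grad_now, grad_now)
\end{verbatim}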
To understand why historical gradients are correlated, we next use a linear regression problem to rigorously show that the gradients over time could live in the same subspace. Suppose we have a set of observations $\{({\bm{x}}_i, {\bm{y}}_i)\}_{i=1}^n$, where ${\bm{x}}_i \in {\mathbb{R}}^d$ is the feature vector and ${\bm{y}}_i\in {\mathbb{R}}^{p}$ is the target vector for all $i \in [n]$. The least-squares problem is given by
\begin{flalign}
\argmin_{\bm{W}} \frac{1}{n}\sum_{i=1}^n \|{\bm{y}}_i - {\bm{W}} {\bm{x}}_i\|^2. \label{eq:least-squares}
\end{flalign}
\begin{restatable}{proposition}{gradalign}\label{prop:grad-align}
For the least squares problem (\ref{eq:least-squares}), if the model is updated by gradient descent with step size $\eta$
\begin{flalign}
{\bm{W}}_{t+1} \leftarrow {\bm{W}}_t - \eta \cdot \partial{\bm{W}}_t, \label{eq:gd}
\end{flalign}
then the gradients $\{\partial {\bm{W}}_t\}_{t\ge 1}$ share the same range and null space. That is to say, if $\partial {\bm{W}}_1$ is of rank $r$ and has $r$-SVD $\partial {\bm{W}}_1 = {\bm{U}}_1 \Sigma_1 {\bm{V}}_1^T$, then for all $t\ge 1$, we have
\begin{flalign}
({\bm{I}} - {\bm{U}}_1{\bm{U}}_1^T) \partial {\bm{W}}_t = 0,\; \partial {\bm{W}}_t ({\bm{I}} - {\bm{V}}_1{\bm{V}}_1^T)= 0.
\end{flalign}
\end{restatable}
\begin{proof}
The proof is relegated to Appendix \ref{apd:subsec:proof_sec4}.
\end{proof}
Hence we can use the historical updates ${\bm{W}}_t-{\bm{W}}_0$ to identify gradient row/column subspaces as in Algorithm \ref{alg:dp_lrk_repara}.
This indicates that for a weight matrix ${\bm{W}}\in {\mathbb{R}}^{p \times d}$, if the gradient turns out to be of low rank $r$ due to the data $\{{\bm{x}}_i, {\bm{y}}_i\}$, we can first identify the intrinsic subspace, which has dimension $r(p+d)$ instead of the original $p\cdot d$ number of parameters. Then we can work within this subspace for differentially private empirical risk minimization. This both reduces the effect of noise and saves the memory cost of gradient perturbation, thanks to the small intrinsic dimension.
We note that identifying the low-rank subspace can be done approximately as in the algorithm, or by using some auxiliary public data as in \citet{zhou2021bypassing, yu2021do}.
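The recursion behind Proposition~\ref{prop:grad-align} is also easy to check numerically: one gradient-descent step maps $\partial{\bm{W}}_t$ to $\partial{\bm{W}}_t({\bm{I}}-\frac{\eta}{n}\sum_{i}{\bm{x}}_i{\bm{x}}_i^T)$. A minimal sketch (with the constant factor $2$ dropped, as in the proof):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, d, p = 100, 20, 10
X = rng.standard_normal((n, d))   # one feature vector per row
Y = rng.standard_normal((n, p))   # targets
W = rng.standard_normal((p, d))
eta = 0.05

grad = lambda W: (W @ X.T - Y.T) @ X / n  # least-squares gradient
G = grad(W)
W_next = W - eta * G                      # one gradient-descent step
print(np.allclose(grad(W_next),
                  G @ (np.eye(d) - (eta / n) * X.T @ X)))  # True
\end{verbatim}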
\begin{remark}\label{rem:lst-sqr}
Suppose that the least-squares objective $L({\bm{W}}):=\frac{1}{n}\sum_{i=1}^n \|{\bm{y}}_i - {\bm{W}} {\bm{x}}_i\|^2$ is $\beta$-smooth and the gradient subspace is rank $r$ and can be exactly identified. Let the optimizer of RGP be gradient descent and $\sigma$ be set as in Proposition~\ref{prop:privacy}. If $\eta=\frac{1}{\beta}$, $T=\frac{n\beta\epsilon}{\sqrt{p}}$, and $\bar{{\bm{W}}}=\frac{1}{T}\sum_{t=1}^{T}{\bm{W}}_t$, then
\[\mathbb{E}[L(\bar{{\bm{W}}})]-L({\bm{W}}_*)\leq \mathcal{O}\left(\frac{\sqrt{(p+d)r\log(1/\delta)}}{n\epsilon}\right),\]
where ${\bm{W}}_*$ is the optimal point, ${\bm{W}}_{t}$ is the output of Algorithm~\ref{alg:dp_lrk_repara} at step $t$.
\end{remark}
The proof of Remark \ref{rem:lst-sqr} can be adapted from \cite{yu2020gradient}. Although the exact low-rank property of the gradient cannot be rigorously proved for deep neural networks because of the co-adaptation across layers, we have empirically verified that the gradient matrices are still of low stable rank and stay in roughly the same subspace over iterations (see Figures \ref{fig:stbl_rank} \& \ref{fig:proj_res}). Our algorithm exploits this fact to reparameterize weight matrices, which achieves better utility and reduces the memory cost compared with DP-SGD.
\section{Related Work}
\begin{table}
\renewcommand{\arraystretch}{1.3}
\centering
\caption{Success rates of membership inference attack against fine-tuned BERT models (in \%). The closer to 50, the better.} \label{tbl:mi_bert}
\begin{adjustbox}{max width=0.45\textwidth}
\begin{tabular}{l|l|l|l|l|l|l}
\hline
\hline
Method & MNLI & QQP & QNLI & SST-2 & SVHN & CIFAR10 \\
\hline
Full (N.P.) & 60.3 & 56.1 & 55.8 & 57.7 & 56.4 & 58.1 \\\cline{1-7}
RGP (N.P.) & 52.3 & 51.5 & 51.8 & 52.6 & 52.8 & 53.3 \\\cline{1-7}
RGP ($\epsilon=8$) & 49.9 & 50.0 & 50.4 & 50.1 & 50.1 & 50.3 \\
\hline \hline
\end{tabular}
\end{adjustbox}
\end{table}
Differentially private learning has a poor dimensional dependency, i.e., the utility degrades dramatically as the model dimension grows. In the high-dimensional setting, related works usually assume a sparse structure \citep{thakurta2013differentially, talwar2015nearly, wang2019sparse, wang2019differentially, cai2019cost} or a specific problem structure \citep{chen2020locally,zheng2020locally}. However, these assumptions or specific structures do not hold for the gradients of deep neural networks. Here we emphasize the difference from our low-rank assumption: under the sparsity assumption, the bases are canonical and not private, while under the low-rank assumption, the gradient is ``sparse'' under certain bases, but these bases are unknown and private. Hence, previous algorithms for sparsity cannot be applied here.
Very recently, several works \citep{zhou2020bypassing,kairouz2020dimension, yu2021do} exploit the redundancy of the gradients of samples and suggest projecting the gradients into a low-dimensional subspace that is identified by some public data points or historical gradients, in order to reduce the noise effect when training large models. However, they all require storing and clipping whole individual gradients and hence can hardly be used to train extremely large models. Our work is orthogonal to theirs, i.e., we exploit the low-rank property of the gradient of each weight matrix, which truly breaks the barrier of applying DP to large models.
Another recent approach to training non-convex models with differential privacy is based on the knowledge transfer of machine learning models, namely \emph{Private Aggregation of Teacher Ensembles (PATE)} \citep{papernot2016semi, papernot2018scalable, jordon2019pate}. PATE first trains independent teacher models on disjoint shards of private data and then tunes a student model with privacy by distilling noisy predictions of the teacher models on some public samples; its performance suffers from the data splitting \citep{yu2021do}. It is not clear how to apply PATE to train large language models like BERT. In contrast, our algorithms do not require public data and can be used in different settings with little change.
The phenomenon that the gradients of deep models live on a very low dimensional manifold has been widely observed \citep{gur2018gradient, vogels2019powersgd, gooneratne2020low, li2020hessian, martin2018implicit, li2018algorithmic}. People have also used this fact to compress the gradient with low-rank approximation in the distributed optimization scenario \citep{yurtsever2017sketchy, wang2018atomo, karimireddy2019error, vogels2019powersgd}.
\section{Conclusion}
In this paper, we present reparametrized gradient perturbation (RGP) for applying DP to large models. The key design of RGP exploits two properties of gradients in deep neural networks: 1) the gradient of each weight matrix is of low stable rank, and 2) the principal components of historical gradients align well with those of the current gradient. We justify these designs with both theoretical and empirical evidence. Thanks to RGP, we are able to train BERT on several downstream tasks with a DP guarantee and a small accuracy loss.
\vspace{-1mm}
\section{A Reparametrization Scheme}\label{sec:lrk}
In this section, we introduce a reparametrization scheme for the neural network weight matrices so that computing and storing individual gradients are affordable for large models. Specifically, during each forward/backward process, for a layer with weight matrix ${\bm{W}}\in {\mathbb{R}}^{p\times d}$, we reparametrize it as follows (see Figure~\ref{fig:repara} for an illustration),
\begin{flalign}
{\bm{W}} \rightarrow {\bm{L}} {\bm{R}} + \tilde{{\bm{W}}}.{stop\_gradient()}, \label{eq:repara}
\end{flalign}
where ${\bm{L}}\in{\mathbb{R}}^{p\times r}, {\bm{R}}\in{\mathbb{R}}^{r\times d}$ are two low-rank gradient carriers with $r\ll p \text{ or } d$, $\tilde{{\bm{W}}} = {\bm{W}}-{\bm{L}}{\bm{R}}$ represents the residual weight and $.{stop\_gradient()}$ means that we do not collect the gradient on $\tilde{{\bm{W}}}$.
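A minimal PyTorch sketch of Eq~(\ref{eq:repara}) is given below, where \texttt{.detach()} plays the role of $.{stop\_gradient()}$; all names and shapes are illustrative:
\begin{verbatim}
import torch

p, d, r, m = 64, 32, 4, 8
W = torch.randn(p, d)                      # original weight (frozen)
L = torch.randn(p, r, requires_grad=True)  # low-rank gradient carriers
R = torch.randn(r, d, requires_grad=True)
W_tilde = (W - L @ R).detach()             # residual weight, no gradient

x = torch.randn(m, d)
out = x @ (L @ R + W_tilde).T
print(torch.allclose(out, x @ W.T, atol=1e-5))  # True: forward unchanged

out.sum().backward()                       # gradients flow only into L, R
print(L.grad.shape, R.grad.shape)          # (64, 4) and (4, 32)
\end{verbatim}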
Hence, such reparametrization changes neither the forward signal nor the backward signal, but only the gradient computation: we now obtain gradients on ${\bm{L}}$ and ${\bm{R}}$. We next unveil the connection between the gradient on ${\bm{W}}$ and the gradients on ${\bm{L}}$ and ${\bm{R}}$.
\begin{restatable}{theorem}{gradlr}\label{thm:grad_lr}
For a layer with weight matrix ${\bm{W}}$, suppose that $\partial{\bm{W}}$ is the gradient computed by back-propagation with a mini-batch data ${\mathbb{D}}$. Given two matrices ${\bm{L}}, {\bm{R}}$, we reparametrize ${\bm{W}}$ as in Eq~(\ref{eq:repara}) and compute the gradients $\partial{\bm{L}}$ and $\partial{\bm{R}}$ by running the forward and backward process with the same mini-batch ${\mathbb{D}}$, then
\begin{flalign}
\partial {\bm{L}} = (\partial{\bm{W}}){\bm{R}}^{T},\;\; \partial {\bm{R}} = {\bm{L}}^{T}(\partial{\bm{W}}).
\end{flalign}
\end{restatable}
Based on the above understanding, we can construct an update for ${\bm{W}}$ by using $\partial{\bm{L}}$ and $\partial{\bm{R}}$.
\begin{restatable}{corollary}{corogradlr}\label{corollary:grad_lr}
If the columns of ${\bm{L}}$ and the rows of ${\bm{R}}$ are orthonormal, respectively, and we use
\begin{flalign}
\label{eq:grad_lrk}
(\partial {\bm{L}}) {\bm{R}} + {\bm{L}}(\partial{\bm{R}}) - {\bm{L}}\mL^{T}(\partial{\bm{L}}){\bm{R}},
\end{flalign}
as the update for ${\bm{W}}$, then the update is equivalent to projecting $\partial{\bm{W}}$ into the subspace of matrices whose row/column spaces are spanned by ${\bm{L}}$ and ${\bm{R}}$.
\end{restatable}
\begin{proof}
The proofs of Theorem~\ref{thm:grad_lr} and Corollary~\ref{corollary:grad_lr} are relegated to Appendix~\ref{apd:subsec:proof_sec2}.
\end{proof}
We note that if ${\bm{L}}$ and ${\bm{R}}$ consist of orthonormal bases, Corollary~\ref{corollary:grad_lr} states that we can obtain the projection of $\partial{\bm{W}}$ without explicitly computing and storing $\partial{\bm{W}}$! The size of gradient on ${\bm{L}}$ or ${\bm{R}}$ is much smaller than the size of $\partial{\bm{W}}$ if the gradient carriers are chosen to be low-rank. Therefore, this reparametrization provides a convenient way to compute and store projected gradients of a large matrix. This is extremely beneficial for the scenarios where individual gradients $\{\partial_i {\bm{W}}\}_{i=1}^{m}$ are required, e.g., approximating the variance of gradients and controlling the gradient sensitivity.
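As a quick numerical sanity check of Theorem~\ref{thm:grad_lr} and Corollary~\ref{corollary:grad_lr}, the following NumPy sketch (illustrative sizes) compares the update of Eq~(\ref{eq:grad_lrk}) against the explicit projection:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
p, d, r = 30, 20, 4
L, _ = np.linalg.qr(rng.standard_normal((p, r)))  # orthonormal columns
Q, _ = np.linalg.qr(rng.standard_normal((d, r)))
R = Q.T                                           # orthonormal rows
dW = rng.standard_normal((p, d))                  # original gradient

dL, dR = dW @ R.T, L.T @ dW                       # carrier gradients
update = dL @ R + L @ dR - L @ (L.T @ dL) @ R     # Eq. (grad_lrk)
proj = L @ L.T @ dW + dW @ R.T @ R - L @ L.T @ dW @ R.T @ R
print(np.allclose(update, proj))                  # True
\end{verbatim}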
It is natural to ask how to choose ${\bm{L}}$ and ${\bm{R}}$ so that the update in Corollary~\ref{corollary:grad_lr} contains the most information of $\partial{\bm{W}}$. Ideally, we can first compute the aggregated gradient $\partial{\bm{W}}$ and run \emph{singular value decomposition} (SVD) $\partial{\bm{W}}={\bm{U}}{\bm{\Sigma}}{\bm{V}}^{T}$. Then we can choose the top few columns of ${\bm{U}}$ and ${\bm{V}}$ to serve as the gradient carriers. In this case, the update in Corollary~\ref{corollary:grad_lr} is equivalent to approximating $\partial{\bm{W}}$ with its top-$r$ principal components.
However, in the context of differential privacy, we cannot directly decompose $\partial {\bm{W}}$ as it is private. In the sequel, we give a practical reparametrization scheme for differentially private learning, where we use the historical update to find ${\bm{L}}$ and ${\bm{R}}$ and argue its optimality under certain conditions.
One may wonder why we do not simply replace ${\bm{W}}$ with ${\bm{L}}{\bm{R}}$ instead of performing the reparametrization. With the reparametrization, the forward and backward processes remain the same as before; the only change is the gradient computation of ${\bm{W}}$. In contrast, using ${\bm{L}}$ and ${\bm{R}}$ to replace the weight ${\bm{W}}$ would not only reduce the expressive power but also hurt the optimization, as the width varies dramatically across layers and the forward/backward signals cannot propagate well under common initialization strategies \cite{glorot2010understanding, he2016deep}.
\subsection{Reparametrization for Convolutional Layers}
\label{sec:lrk_conv}
In the above, we have described how to reparametrize a weight matrix, which covers the usual fully-connected layer and the attention layer in language models. In this subsection, we show the reparametrization of convolutional layers. Let ${\bm{x}}\in\mathbb{R}^{d\times w' \times h'}$ be the input feature maps of one sample and ${\bm{h}}\in\mathbb{R}^{p\times w \times h}$ be the output feature maps. We describe how to compute the elements at one spatial position ${\bm{h}}_{:,i,j}\in\mathbb{R}^{p}$ where $i\in [0,w]$ and $j\in [0,h]$.
Let ${\bm{W}}\in \mathbb{R}^{p\times d\times k\times k}$ be the convolution kernels and ${\bm{x}}^{(i,j)}\in\mathbb{R}^{d\times k\times k}$ be the features that we need to compute ${\bm{h}}_{:,i,j}$. The output feature ${\bm{h}}_{:,i,j}$ can be computed as
${\bm{h}}_{:,i,j}=\bar{\bm{W}} {\bm{x}}^{(i,j)}$,
where $\bar{\bm{W}}\in\mathbb{R}^{p\times dk^{2}}$ is obtained by flattening the channel and kernel dimensions. Hence, we can use the same way as in Eq~(\ref{eq:repara}) to reparametrize $\bar{\bm{W}}$:
\begin{flalign}
{\bm{h}}_{:,i,j} = {\bm{L}}{\bm{R}}{\bm{x}}^{(i,j)} + (\bar{\bm{W}}-{\bm{L}}{\bm{R}}){\bm{x}}^{(i,j)}.
\end{flalign}
Specifically, the operations of ${\bm{R}}$ and ${\bm{L}}$ are implemented by two consecutive convolutional layers with kernel sizes $r\times d\times k\times k$ and $p\times r\times 1\times 1$, respectively, where $r$ is the reparametrization rank. The residual weight is implemented by a convolutional layer of the original kernel size.
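A minimal PyTorch sketch of this wiring follows; the residual path is shown as a plain gradient-free convolution for illustration, whereas in RGP its kernel would be set to the reshaped $\bar{\bm{W}}-{\bm{L}}{\bm{R}}$:
\begin{verbatim}
import torch
import torch.nn as nn

d, p, k, r = 16, 32, 3, 4                   # illustrative sizes
conv_R = nn.Conv2d(d, r, k, padding=1, bias=False)    # r x d x k x k
conv_L = nn.Conv2d(r, p, 1, bias=False)               # p x r x 1 x 1
conv_res = nn.Conv2d(d, p, k, padding=1, bias=False)  # residual path
for q in conv_res.parameters():
    q.requires_grad_(False)                 # no gradient on the residual

x = torch.randn(8, d, 28, 28)
h = conv_L(conv_R(x)) + conv_res(x)         # same shape as a d->p conv
print(h.shape)                              # torch.Size([8, 32, 28, 28])
\end{verbatim}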
\section{Private Deep Learning with Reparametrized Gradient Perturbation}
\label{sec:dp_learning_lrk}
The above reparametrization strategy can significantly reduce the gradient dimension, which could help us circumvent the difficulties of applying differential privacy on large machine learning models. In this section, we propose a procedure ``reparametrized gradient perturbation (RGP)'' to train large neural network models with differential privacy. Specifically, Section \ref{subsec:dp_learning_lrk_algo} introduces the whole procedure of RGP, Section \ref{subsec:privacy_rgp} gives the privacy guarantee of RGP, and Section \ref{subsec:complexity} presents the complexity analysis.
\begin{algorithm}[tb]
\caption{Reparametrized Gradient Perturbation (RGP)}
\label{alg:dp_lrk_repara}
\begin{algorithmic}[1]
\STATE {\bfseries Input:} NN with weight matrices $\{{\bm{W}}^{(l)}\}_{l=1}^{H}$, steps $T$, probability $q$, variance $\sigma^2$, clipping threshold $C$, warm-up steps $T_{\text{warm-up}}$, Algorithm \ref{alg:decompose_pi} input $\{r, K\}$.
\STATE Randomly initialize the weights and obtain $\{{\bm{W}}^{(l)}_{0}\}_{l=1}^H$;
\FOR{$t=1$ {\bfseries to} $T$}
\medskip
\STATE Sample a minibatch $\{{\bm{x}}_{i}\}_{i\in S_t}$ with probability $q$;
\medskip
\STATE For all $l\in [H]$, compute historical updates
$$\Delta_t^{(l)} \leftarrow {\bm{W}}^{(l)}_t - {\bm{W}}^{(l)}_0 \cdot 1_{\{t>T_{\text{warm-up}}\}};$$
and run Alg.~\ref{alg:decompose_pi} with $\{\Delta_t^{(l)}, r, K\}$ to get ${\bm{L}}_t^{(l)},{\bm{R}}_t^{(l)}$;
\medskip
\STATE \textsl{//Forward/backward process with reparametrization.}
\STATE Run reparametrized forward process with Eq~(\ref{eq:repara});
\STATE Run backward process and compute individual gradients $\{\partial_i{\bm{L}}_t^{(l)},\partial_i{\bm{R}}_t^{(l)}\}_{l\in[H], i\in S_t}$;
\medskip
\STATE \textsl{//Bound gradient sensitivity and add noise.}
\STATE Clip individual gradients with $L_{2}$ norm threshold $C$;
\FOR{$l=1$ {\bfseries to} $H$}
\STATE Sum individual gradients and get $\{\partial{\bm{L}}_t^{(l)},\partial{\bm{R}}_t^{(l)}\}$;
\STATE Perturbation with Gaussian noise ${\bm{z}}_{L,t}^{(l)},{\bm{z}}_{R,t}^{(l)}$ whose elements are independently drawn from $\mathcal{N}(0,\sigma^{2}C^{2})$:
$$\tilde{\partial}{\bm{L}}_t^{(l)} \leftarrow \partial{\bm{L}}_t^{(l)} + {\bm{z}}_{L,t}^{{(l)}}, \quad \tilde{\partial}{\bm{R}}_t^{(l)} \leftarrow \partial{\bm{R}}_t^{(l)}+{\bm{z}}_{R,t}^{(l)};$$
\STATE Use $\tilde{\partial}{\bm{L}}_t^{(l)}$, $\tilde{\partial}{\bm{R}}_t^{(l)}$, and Eq~(\ref{eq:grad_lrk}) to construct $\tilde{\partial} {\bm{W}}_t^{(l)}$;
\STATE Use off-the-shelf optimizer to get ${\bm{W}}_{t+1}^{(l)}$;
\ENDFOR
\ENDFOR
\end{algorithmic}
\end{algorithm}
\subsection{Reparametrized Gradient Perturbation Algorithm}
\label{subsec:dp_learning_lrk_algo}
The pseudocode of RGP is presented in Algorithm~\ref{alg:dp_lrk_repara}. RGP proceeds layer by layer, and we omit the layer index for simplicity in the following discussion. At each update, for a layer with weight matrix ${\bm{W}}$, RGP consists of four steps: 1) generate the gradient-carrier matrices ${\bm{L}}$ and ${\bm{R}}$, 2) run the reparametrized forward/backward process and obtain the individual gradients $\{\partial_i {\bm{L}}\}_{i=1}^{m}$ and $\{\partial_i {\bm{R}}\}_{i=1}^{m}$, 3) clip and perturb the gradients, and 4) reconstruct an approximated gradient on the original weight matrix.
In the RGP procedure, \textbf{step 1)}, which is also the core challenge, is to choose ``good'' gradient-carrier matrices so that the reconstructed gradient approximates the original gradient as well as possible. First, this requires that, for a given rank $r$, the generated gradient-carrier matrices align well with the principal components of the original gradient. Moreover, reconstructing the gradient in step 4) requires the gradient carriers to have orthonormal columns/rows.
For the first requirement, we use historical updates to find the gradient carriers. The historical update is not sensitive because of the post-processing property of differential privacy. In Section~\ref{subsec:historical_grad}, we give both empirical and theoretical arguments to demonstrate that the principal subspace of the current gradient aligns with that of the historical update. In our implementation, we use a warm-up phase in which the decomposition is directly done on the weight. We approximate the principal components via the power method (Algorithm~\ref{alg:decompose_pi}) instead of the time-consuming full SVD. For the second requirement, we apply the Gram-Schmidt process to orthonormalize ${\bm{L}}$ and ${\bm{R}}$.
\begin{algorithm}[tb]
\caption{Decomposition via Power Method.}
\label{alg:decompose_pi}
\begin{algorithmic}
\STATE {\bfseries Input:} Historical update $\Delta$, reparametrization rank $r$, number of iterations $K$.
\STATE {\bfseries Output:} Gradient carriers ${\bm{L}}\in\mathbb{R}^{p\times r}$, ${\bm{R}}\in\mathbb{R}^{r\times d}$.
\medskip
\STATE Initialize ${\bm{R}}$ from standard Gaussian distribution.
\FOR{$k=1$ {\bfseries to} $K$}
\STATE ${\bm{L}} \leftarrow \Delta {\bm{R}}^{T}$
\STATE Orthonormalize the columns of ${\bm{L}}$.
\STATE ${\bm{R}}={\bm{L}}^{T}\Delta$
\ENDFOR
\STATE Orthonormalize the rows of ${\bm{R}}$.
\STATE Return ${\bm{L}}$, ${\bm{R}}$
\end{algorithmic}
\end{algorithm}
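For reference, Algorithm~\ref{alg:decompose_pi} can be sketched in a few lines of NumPy, with \texttt{np.linalg.qr} playing the role of the Gram-Schmidt orthonormalization (an illustrative, non-private sketch):
\begin{verbatim}
import numpy as np

def power_method_decompose(delta, r, K=1, seed=0):
    """Rank-r carriers L (p x r), R (r x d) from a historical update."""
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((r, delta.shape[1]))
    for _ in range(K):
        L, _ = np.linalg.qr(delta @ R.T)  # orthonormalize columns of L
        R = L.T @ delta
    Q, _ = np.linalg.qr(R.T)              # orthonormalize rows of R
    return L, Q.T

# e.g., carriers from the historical update:
# L, R = power_method_decompose(W_t - W_0, r=8, K=1)
\end{verbatim}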
\textbf{Step 2)} of RGP is the reparametrization and a round of forward/backward propagations, as presented in Section \ref{sec:lrk}.
\textbf{Step 3)} establishes the differential privacy guarantee. The individual gradients $\{\partial_i {\bm{L}}, \partial_i {\bm{R}}\}_{i=1}^{m}$ are first clipped with a pre-defined threshold so that the sensitivity is bounded. Then, Gaussian noise is added to the aggregated gradient to establish a differential privacy bound. The energy of the added noise is proportional to the dimension, i.e., the rank $r$ of the carrier matrices. Hence, keeping the noise energy small encourages the use of a smaller rank $r$. However, a smaller rank increases the approximation error in \textbf{step 1)}. In practice, we trade off these two factors to choose a proper $r$.
In \textbf{step 4)}, we use the noisy aggregated gradients of gradient-carrier matrices to reconstruct the gradients of original weights, as depicted in Corollary~\ref{corollary:grad_lr}. The reconstructed gradients can then be used by any off-the-shelf optimizer.
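Steps 3) and 4) amount to standard per-example clipping and Gaussian perturbation applied to the small carrier gradients rather than to the full gradient. A minimal sketch for a single carrier matrix follows (in Algorithm~\ref{alg:dp_lrk_repara} the clipping norm is computed over the individual gradients of all carriers):
\begin{verbatim}
import numpy as np

def clip_and_perturb(per_sample_grads, C, sigma, seed=0):
    """Clip each individual gradient to L2 norm at most C, aggregate,
    and add elementwise Gaussian noise N(0, sigma^2 C^2)."""
    rng = np.random.default_rng(seed)
    total = np.zeros_like(per_sample_grads[0])
    for g in per_sample_grads:
        total += g * min(1.0, C / max(np.linalg.norm(g), 1e-12))
    return total + sigma * C * rng.standard_normal(total.shape)
\end{verbatim}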
\subsection{Privacy Analysis of RGP}
\label{subsec:privacy_rgp}
The privacy bound of Algorithm~\ref{alg:dp_lrk_repara} is given by Proposition~\ref{prop:privacy}. The derivation of Proposition~\ref{prop:privacy} is based on the \emph{moments accountant} proposed in \citet{abadi2016deep}. The moments accountant has a tighter composition bound than the strong composition theorem in \citet{algofound}. It first tracks the privacy budget spent at each update, then composes the spent budget of all updates and casts the final privacy cost into classic $(\epsilon,\delta)$-differential privacy.
\begin{restatable}[\citet{abadi2016deep}]{proposition}{privacy}\label{prop:privacy}
There exist constants $c_1$ and $c_2$ so that given running steps $T$, for any $\epsilon<c_{1}q^{2}T$, Algorithm~\ref{alg:dp_lrk_repara} is $\left(\epsilon,\delta\right)$-differentially private for any $\delta>0$ if we choose \[\sigma\geq c_2\frac{q\sqrt{T\log\left(1/\delta\right)}}{\epsilon}.\]
\end{restatable}
\begin{proof}
The proof outline is relegated to Appendix~\ref{apd:subsec:proof_sec3}.
\end{proof}
The value of $\sigma$ in Proposition~\ref{prop:privacy} is based on an asymptotic bound on the moments of the privacy loss random variable. In practice, one can use the numerical tools \citep{wang2019subsampled,mironov2019renyi} to compute a tighter bound. So far we have depicted the overall picture of RGP. We next analyze the computational and memory costs of RGP and compare them with that of DP-SGD.
\subsection{Complexity Analysis of RGP}
\label{subsec:complexity}
For simplicity of notation, we only give the costs of one fully connected layer at one update (including the forward and backward passes) and assume that the weight matrix is square. The shape of the weight matrix, the size of the minibatch, the number of power iterations, and the rank of the reparametrization are denoted by $(d\times d)$, $m$, $K$, and $r$, respectively.
The computational overhead of RGP consists of three parts. The first part is induced by matrix multiplication of power iteration, whose complexity is $\mathcal{O}(Krd^{2})$. The second part is induced by the Gram–Schmidt process, whose complexity is $\mathcal{O}(Kr^{2}d)$. The third part of overhead is the computational cost induced by gradient carriers during the forward/backward process, which is on the order of $\mathcal{O}(mrd)$.
RGP uses much less memory than DP-SGD in practice. Although RGP needs some extra memory to store the activations produced by the gradient carriers, it has a significant advantage over DP-SGD in the memory cost of storing individual gradients, which is one of the main challenges of learning with differential privacy. For RGP, the memory cost of individual gradients only scales linearly with the model width $d$, in contrast with $d^2$ for DP-SGD. We summarize the computational cost of one update and the memory cost of storing individual gradients in Table~\ref{tbl:complexity}.
\begin{table}
\caption{Computation and memory costs of RGP (Algorithm~\ref{alg:dp_lrk_repara}) and DP-SGD \citep{abadi2016deep}, where $m$ is the size of mini-batch, $d$ is the model width, $r$ is the reparametrization rank, and $K$ is the number of power iterations.}
\label{tbl:complexity}
\centering
\small
\renewcommand{\arraystretch}{1.85}
\begin{tabular}{ P{2.45cm}|P{1.15cm}|P{3.4cm} }
\hline \hline
\backslashbox{Cost}{Method} & DP-SGD & RGP \\
\hline
Computational cost & $\mathcal{O}(md^{2})$ & $\mathcal{O}(md^{2}+Krd^2+Kr^{2}d)$ \\\hline
Memory cost & $\mathcal{O}(md^{2})$ & $\mathcal{O}(mrd)$ \\
\hline
\hline
\end{tabular}
\end{table}
The low-rank nature of the gradient permits us to choose a small $r$ without destroying utility (see Section~\ref{subsec:grad_is_lrk}). In practice, we typically choose a rank $r$ smaller than $10$. For the number of power iterations in Algorithm~\ref{alg:decompose_pi}, we find that setting $K=1$ is sufficient to obtain good performance. Hence, in practice, we always choose small $r$ and $K$ for efficiency without hurting the performance.
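To make Table~\ref{tbl:complexity} concrete, consider the assumed illustrative values $d=1024$, $m=64$, and $r=8$: the individual-gradient memory drops from $md^2$ to $mrd$, i.e., by a factor of $d/r$:
\begin{verbatim}
m, d, r = 64, 1024, 8        # assumed illustrative values
dpsgd_mem = m * d ** 2       # DP-SGD: per-sample full gradients
rgp_mem = m * r * d          # RGP: per-sample carrier gradients
print(dpsgd_mem // rgp_mem)  # 128, i.e., d / r
\end{verbatim}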
\section{Preliminary on Differential Privacy} \label{app:sec:preliminary}
Differential privacy (DP) \cite{dwork2006calibrating,dwork2014algorithmic} is widely recognized as a gold standard of privacy protection due to its mathematical rigor. It controls the maximum influence that any individual sample can produce. The definition of $(\epsilon,\delta)$-DP is given in Definition~\ref{def:dp}.
\begin{definition}[$(\epsilon,\delta)$-DP]
\label{def:dp}
A randomized mechanism $\mathcal{M}$ guarantees $(\epsilon,\delta)$-differential privacy if for any two neighboring input datasets ${\mathbb{D}}\sim {\mathbb{D}}^{'}$ and for any subset of outputs ${\mathbb{S}}$ it holds that $\text{Pr}[\mathcal{M}({\mathbb{D}})\in {\mathbb{S}}]\leq e^{\epsilon}\text{Pr}[\mathcal{M}({\mathbb{D}}^{'})\in {\mathbb{S}}]+\delta$.
\end{definition}
Two datasets are said to be neighboring if they differ only in a single sample. When applied to learning problems, DP requires that the models learned on neighboring datasets have approximately indistinguishable distributions.
\section{Missing Proofs} \label{app:sec:proof}
\subsection{Missing Proofs in Section \ref{sec:lrk}}
\label{apd:subsec:proof_sec2}
\gradlr*
\begin{proof}
The proof is based on the chain rule of back-propagation. Since the reparametrization does not change the forward and backward signals, we assume the layer inputs are ${\mathbb{D}}=\{{\bm{x}}_{i}\}_{i=1}^{m}$, the corresponding outputs are $\{{\bm{h}}_i\}_{i=1}^{m}$ with ${\bm{h}}_i = {\bm{W}} {\bm{x}}_i$ and the backward signals on the layer output are $\{\partial {\bm{h}}_i\}_{i=1}^{m}$. By back-propagation, we have
\begin{flalign*}
&\partial {\bm{W}} = \sum_{{\bm{x}}_{i}\in {\mathbb{D}}} (\partial {\bm{h}}_i) {\bm{x}}_i^T, \\
&\partial {\bm{L}} =\sum_{{\bm{x}}_{i}\in {\mathbb{D}}}\partial {\bm{h}}_i ({\bm{R}} {\bm{x}}_i)^T,\;\; \partial {\bm{R}} =\sum_{{\bm{x}}_{i}\in {\mathbb{D}}} ({\bm{L}}^{T}\partial {\bm{h}}_i) {\bm{x}}_i^T.
\end{flalign*}
The proof is completed by the associativity of matrix multiplication.
\end{proof}
\corogradlr*
\begin{proof}
If the columns of ${\bm{L}}$ and the rows of ${\bm{R}}$ are orthonormal, the projection of $\partial {\bm{W}}$ onto ${\bm{L}}$ and ${\bm{R}}$ is defined as,
\begin{flalign}
{\bm{L}}\mL^T (\partial {\bm{W}}) + (\partial {\bm{W}}) {\bm{R}}^T{\bm{R}} - {\bm{L}}\mL^T (\partial {\bm{W}}){\bm{R}}^T{\bm{R}}.
\end{flalign}
Substituting $\partial {\bm{L}} = (\partial {\bm{W}}) {\bm{R}}^T$ and $\partial {\bm{R}} = {\bm{L}}^T (\partial {\bm{W}})$ from Theorem \ref{thm:grad_lr} into the above formula completes the proof.
\end{proof}
\subsection{Missing Proofs in Section \ref{sec:dp_learning_lrk}}
\label{apd:subsec:proof_sec3}
\privacy*
\begin{proof}
Although RGP releases the projected gradient instead of the whole gradient as in \citet{abadi2016deep}, the moments accountant is still applicable because it applies to any vectorized function output.
The moments accountant tracks a bound on the moments of the privacy loss random variable, which is built on the ratio of the probability density functions of the output distributions of two neighboring datasets. \citet{abadi2016deep} show that the log moments of the privacy loss random variable compose linearly. Therefore, one can compute the overall privacy cost by adding the log moments at every update. When the training is done, the moments accountant casts the accumulated log moments into $(\epsilon,\delta)$-DP via a tail bound. A detailed proof can be found in Appendix B of \citet{abadi2016deep}.
\end{proof}
\subsection{Missing Proofs in Section \ref{sec:grad_property}}
\label{apd:subsec:proof_sec4}
\gradalign*
\begin{proof}
We can compute the gradient at step $t$
\begin{flalign*}
\partial {\bm{W}}_t &= \frac{1}{n}\sum_{i=1}^n ({\bm{W}}_t {\bm{x}}_i - {\bm{y}}_i) {\bm{x}}_i^T.
\end{flalign*}
Given the gradient descent update \eqref{eq:gd}, we can compute the gradient at ${\bm{W}}_{t+1}$ as follows
\begin{flalign*}
\partial {\bm{W}}_{t+1}
&= \frac{1}{n}\sum_{i=1}^n (({\bm{W}}_t - \eta \cdot\partial{\bm{W}}_t){\bm{x}}_i - {\bm{y}}_i) {\bm{x}}_i^T\\
&= \frac{1}{n}\sum_{i=1}^n ({\bm{W}}_t{\bm{x}}_i - {\bm{y}}_i) {\bm{x}}_i^T - \eta\cdot \partial{\bm{W}}_t \cdot\frac{1}{n} \sum_{i=1}^n {\bm{x}}_i{\bm{x}}_i^T \\
& = \partial {\bm{W}}_t \left ({\bm{I}} - \frac{\eta}{n} \sum_{i=1}^n {\bm{x}}_i{\bm{x}}_i^T\right ).
\end{flalign*}
Hence we have $\partial {\bm{W}}_t = \partial {\bm{W}}_0 \left ({\bm{I}} - \frac{\eta}{n} \sum_{i=1}^n {\bm{x}}_i{\bm{x}}_i^T\right )^t$. The gradients $\partial {\bm{W}}_t$ live in the same subspace for all $t\ge 1$, as they share the same row/column spaces.
\end{proof}
\section{Additional Experiments}\label{app:sec:add-exp}
\begin{figure*} [t]
\centering
\includegraphics[width=0.9\linewidth]{imgs/show_influ_rank.pdf}
\caption{Prediction accuracy of BERT on four downstream tasks (in \%) with different choices of the reparametrization rank. We plot the average score of the two test datasets for MNLI. }
\label{fig:fig_bert_rank}
\end{figure*}
We present some ablation studies in this section to verify the effect of the residual weight and the reparametrization rank. In Section~\ref{subsec:apd_rank}, we try RGP with different rank choices. In Section~\ref{subsec:apd_residual}, we give a variant of RGP that simply discards the residual weight.
\subsection{On the Influence of Different Rank Choices} \label{subsec:apd_rank}
We present the results (see Figure \ref{fig:fig_bert_rank}) with different choices of the reparametrization rank. We consider four algorithms. The first one fine-tunes the full model and serves as the baseline. The second one is RGP (N.P.), which trains the model with reparametrization but without gradient clipping or noise addition. The third one is RGP (Algorithm~\ref{alg:dp_lrk_repara}) and the last one is RGP-random, which uses random orthogonal vectors as gradient-carrier matrices. The privacy parameter $\epsilon$ is $8$ and the other settings are the same as those in Section~\ref{sec:exp}. The results are plotted in Figure~\ref{fig:fig_bert_rank}. When the models are trained without noise, increasing the reparametrization rank makes the performance of RGP (N.P.) approach that of the baseline. When the models are trained with a privacy guarantee, increasing the rank sometimes decreases the performance because a larger rank induces more trainable parameters and hence a higher noise dimension.
\subsection{On the Importance of Residual Weight}\label{subsec:apd_residual}
Recall that our reparametrization scheme reparametrizes the weight matrix as follows:
\begin{flalign}
{\bm{W}} \rightarrow {\bm{L}} {\bm{R}} + \tilde{{\bm{W}}}.{stop\_gradient()}. \label{eq:apd_repara}
\end{flalign}
We have shown that the residual weight $\tilde{{\bm{W}}}$ keeps the forward/backward signals unchanged and makes the gradients of ${\bm{L}}$ and ${\bm{R}}$ naturally connected with the original gradient. To empirically examine the effect of $\tilde{{\bm{W}}}$, we test the following scheme:
\begin{flalign}
{\bm{W}} \rightarrow {\bm{L}} {\bm{R}}. \label{eq:apd_repara_nores}
\end{flalign}
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{imgs/show_influ_residual.pdf}
\caption{Prediction accuracy of BERT on two downstream tasks. All methods are trained without privacy guarantee. }
\label{fig:fig_bert_residual}
\end{figure}
We still use the historical update to generate ${\bm{L}}$ and ${\bm{R}}$. The other settings are the same as those in Section~\ref{sec:exp}. The results on two downstream tasks of the BERT model are presented in Figure~\ref{fig:fig_bert_residual}. Without the residual weight, the model learns almost nothing from the reconstructed update and the final accuracy is close to the accuracy at initialization.
\end{appendix}
| {'timestamp': '2021-11-05T01:10:25', 'yymm': '2106', 'arxiv_id': '2106.09352', 'language': 'en', 'url': 'https://arxiv.org/abs/2106.09352'} |
\section{Introduction}
The evolution of mobile and wireless communication networks into the fifth generation (5G) will play a significant role in improving the global economy. With the internet of things (IoT) dictating the way in which people communicate through information sharing and knowledge dissemination, internet coverage needs to be improved. The capacity to provide radio coverage over a wide geographic area is a \mbox{pre-requisite} towards meeting the \mbox{ultra-low} latency requirements demanded by mobile subscribers~\cite{ericssonreport}\cite{energymanagershow}. Through the installation of \acp{BS} and the development of mobile and wireless communications, continuous communication can be achieved. This constitutes a gigantic step towards solving the rural/remote connectivity problem; however, in such areas grid electricity might be unreliable, and it is very costly to extend grid connections to them. Therefore, the provisioning of communication services in remote areas entails the use of renewable energy. Using renewable energy, coupled with sustainable energy storage solutions, is a promising way of resolving the \mbox{remote-area} energy predicament. \\
\indent Despite the use of green energy as a potential solution, many rural and remote areas in developed or undeveloped countries around the world still face the challenge of unreliable \mbox{high-quality} Internet connectivity~\cite{remote6g}. This is because \ac{MN} operators are still skeptical about making information \& communications technology (ICT) infrastructure investments in remote areas, hence the digital divide. One of the essential reasons is the low expected revenue, calculated as \ac{ARPU}, which reduces companies' willingness to invest in these areas. However, with battery and solar module costs showing a decreasing trend, MN operators might be motivated to make investments in remote and rural areas and deploy connectivity networks. Moreover, the advent of open, programmable, and virtualized $5$G networks will enable MN operators to overcome the limitations of current \acp{MN}~\cite{energymanagershow}\cite{open5G} and make the deployment of open and programmable \acp{MN} a possibility. \\
\indent To extend network coverage to remote/rural areas, the use of terrestrial or \mbox{non-terrestrial} networks is proposed in~\cite{antenna_for}. In parallel, Sparse Terrestrial Networks (STNs) using high towers and large antenna arrays are being developed to deliver very long transmission ranges. Here, the systems are equipped with the latest emerging antenna technologies and designs, such as reconfigurable phased/inflatable/fractal antennas realized with metasurface material. Towards this, the work of~\cite{antenna_for} studies the feasibility of providing connectivity to sparse areas utilizing \mbox{massive-MIMO}, reusing the existing infrastructure of TV towers. In that work, it is observed that higher frequencies provide larger area coverage, provided that the antenna array area is the same. Another strategy for achieving good coverage as well as high capacity in remote/rural areas is to utilize two frequency bands, one low band and one high band, in an aggregated configuration. Following this strategy, the authors of~\cite{5g_rural_nr} combine the New Radio (NR) $3.5$ GHz and LTE $800$ MHz bands on a GSM grid. In addition, along the lines of \mbox{long-range} systems, the NR is expected to support high data rates with low average network energy consumption through its lean design and massive MIMO utilization. Also, the authors of~\cite{deep_rural} extend rural coverage with STNs. Here, large cells are created by using \mbox{long-range} links between \acp{BS} and \ac{UE}, where the long range is achieved by high towers combined with large antenna arrays and efficient antenna techniques creating \mbox{high-gain} narrow beams with a \mbox{line-of-sight} (LoS) or \mbox{near-LoS} connection to the UE.\\
\indent In order to end this digital divide, MNs have to \mbox{re-think} the way in which they operate and make the necessary adjustments. One workable solution is to make use of softwarization technologies such as \ac{SDN}, \ac{NFV}, and \ac{MEC} as enablers for \textit{resource sharing} and \textit{edgefication}~\cite{open5G}\cite{online_pimrc}. Furthermore, the emergence of network slicing avails new market opportunities~\cite{interdigital} for \acp{MN} to explore. In network slicing, the BS site infrastructure (\textit{resource blocks, bandwidth, computing resources}) can be shared {\it fairly} by two or more mobile operators in \mbox{real-time}, effectively maximizing the use of existing network resources while simultaneously minimizing the operational costs of remote sites. Also, the open and accessible shared infrastructure can enable more MN operators and Internet service providers to expand their footprint into \mbox{low-income} areas, increasing the availability of connectivity in these areas and contributing to bridging the digital divide. For continuous operation of the rural/remote communication sites, the BS empowered with computing capabilities can be \mbox{co-located} with \ac{EH} systems for harvesting energy from the environment, storing it in \acp{EB} (storage devices), and then powering the site.\\
\indent There are several forms of infrastructure sharing cases already in existence~\cite{mobilesharing}, such as the \mbox{roaming-based} sharing where the MN operators share the cell coverage for a prenegotiated time period. For example, using this \mbox{roaming-based} sharing, a \ac{UE} can employ the roaming procedure in order to connect to a foreign network. In these \say{classical} forms of sharing generally one MN operator still retains ownership of the mobile network.
Under shared infrastructure, new entrants no longer need to incur the \mbox{often-significant} upfront cost of building their own infrastructure and can save time and resources that would otherwise be dedicated to administrative authorization and licensing. However, potential risks to
competition, governance, and implementation need to be managed to achieve the greatest benefit from infrastructure sharing.
In this article, the sharing of the BS infrastructure and of its \mbox{co-located} computing platform (\ac{MEC} server) is considered only for handling \mbox{delay-sensitive} workloads in remote/rural areas. Here, MN operators still have control of the \mbox{delay-tolerant} workloads destined for their remote clouds. This brings the notion of \mbox{\it co-ownership} of the communication sites in remote/rural areas, within the \ac{MEC} paradigm, in which \textit{two} MN operators pool together their capital expenditure in order to share the deployed infrastructure, thus saving precious (already limited) economic resources for other types of expenses. Then, in order to effectively manage the BS sites deployed in remote/rural areas, procedures for dynamic network control (\textit{managing network resources when MN operators fairly share their network infrastructure}) and agile management are required. This will assist in efficiently delivering a \ac{QoS} in remote/rural areas comparable to that of urban areas.\\
\indent The work done in this article is an extension of~\cite{online_pimrc}, where \ac{BS} sleep modes and \ac{VM} \mbox{soft-scaling} procedures were employed towards energy saving in remote sites. In \cite{online_pimrc}, energy savings were obtained through \mbox{short-term} traffic load and harvested energy predictions, along with energy management procedures. However, the considered energy cost model does not take the caching process, tuning of transmission drivers, and the use of \mbox{container-based} virtualization into account. In addition, the considered communication site belongs to \textit{one} MN operator, i.e., the site infrastructure was not shared between multiple operators. Therefore, the \mbox{computing-plus-communication} energy cost model is the main motivation for this article, where the BS site is shared among multiple operators in order to handle \mbox{delay-sensitive} workloads only.
One application of our model (strategy) corresponds to the current situation that has been caused by the new coronavirus (COVID-19) pandemic. The pandemic has reshaped our living preferences such that rural (remote) areas are now becoming more and more attractive. This can motivate MN operators to deploy networks in such areas and then share their communication infrastructure and the computing resources that are \mbox{co-located}. The contributions of this article are summarized as follows:
\begin{itemize}
\item [1)] A \ac{BS} empowered with computing capabilities \mbox{co-located} with a \ac{EH} system is considered, whereby the MN operators share the BS site infrastructure (i.e., \textit{bandwidth, computing resources}) for handling \mbox{delay-sensitive} workloads within a remote/rural area.
\item [2)] In order to enable foresighted optimization, the \mbox{short-term} future communication site workload and harvested energy are forecast using an \ac{LSTM} neural network~\cite{lstmlearn}.
\item [3)] An online \mbox{controller-based} algorithm, {\it called} \ac{DRC-RS}, for handling infrastructure sharing and managing the communication site located in remote/rural areas is developed. The proposed algorithm is based on the \ac{LLC} approach and resource allocation procedures, with the objective of enabling infrastructure sharing (the BS and its \mbox{co-located} computing platform) and resource management within remote and rural communication sites.
\item [4)] \mbox{Real-world} harvested energy and traffic load traces are used to evaluate the performance of the proposed optimization strategy. The numerical results obtained through simulation show that the proposed optimization strategy is able to efficiently manage the remote/rural site and also allows the sharing of the network infrastructure.
\end{itemize}
\begin{table*} [h!]
\caption{Comparison with existing works.}
\label{tab_opt1}
\begin{threeparttable}
\center
\begin{tabular} {|l|l|l|l|l|}
\hline
{\bf Feature} & {\bf Edge computing} & {\bf Method Used} & {\bf Forecasting} & {\bf Objective}\\
\hline
RAN sharing~\cite{sharingRAN} & No & Linear programming & No & Max. QoS\\ \hline
Traffic load exploitation~\cite{gamebasedsharing} & No & Game theory & No & Min. spending cost\\ \hline
Contractual backup~\cite{strategicsharing} & No & Contract design under & No & Max. resource utilization\\
& & symmetric information & & and profits\\ \hline
Multiple-seller single-buyer~\cite{sanguanpuak} & No & Stochastic geometry & No & Cost minimization\\
& & & & Guarantee of QoS \\ \hline
Communication and & Yes & \ac{LSTM} & Yes & Min. energy consumption\\
Computation [\textbf{Proposed}] & & \ac{LLC} & & Guarantee of QoS\\
\hline
\end{tabular}
\begin{tablenotes}
\small
\item Yes: considered; No: not considered
\end{tablenotes}
\end{threeparttable}
\end{table*}
The remainder of this article is organized as follows: Section~\ref{sec:rel} discusses previous research works related to the one undertaken in this article. Section~\ref{sec:sys} describes the proposed system model, using detailed explanations of the operation of each network element. The mathematical problem formulation is given in Section~\ref{sec:prob}, together with the details of the optimization problem and the proposed \ac{DRC-RS} online algorithm. In Section~\ref{sec:eval}, a performance evaluation of the proposed online algorithm is presented using simulation results and statistical discussions. The conclusions of this article are then given in Section~\ref{sec:concl}.
\section{Related Work}\label{sec:rel}
\noindent MN operators generally have complete ownership and control of their networks, which are characterized by an inflexible and monolithic infrastructure. Such a rigid status quo deprives networks of the required versatility; hence, they cannot cope with dynamically changing requirements. As a result, in their current state, meeting the heterogeneity and variability of future MNs is an impossible task. As mobile and wireless networks evolve, MN operators are faced with the daunting task of keeping up with the accelerated \mbox{roll-out} of new technologies. Due to these \mbox{fast-paced} technological advancements, large and frequent investments are made in order to cope with the new services and network management phases. This proactive network operation and management consequently increases the network operating costs, which reduces the intended profits. Thus, in order to reduce the \mbox{per-MN} operator investment cost, the sharing of network infrastructure between mobile operators is an attractive solution. To this effect, the authors in~\cite{sharingRAN} proposed a \ac{RAN} sharing scheme where MN operators share a single radio infrastructure while maintaining separation and full control over the backhauling and their respective core networks. In that paper, a mixed integer linear programming (MILP) formulation is proposed for determining the sharing configurations that maximize the \ac{QoS}, and a cooperative game theory concept is used to determine stable configurations as envisioned by the MN operator. The regulatory enforcement towards offering the best service level to users and the greedy approach considered in that paper reduce the effectiveness of infrastructure sharing, as neither promotes fairness among \ac{MN} operators.
In addition, the work of~\cite{gamebasedsharing} employs an infrastructure sharing algorithm towards energy savings by exploiting the underutilization of the network during \mbox{low-traffic} periods. In that work, a \mbox{game-theoretic} framework was proposed in order to enable the MN operators to individually estimate the \mbox{switching-off} probabilities that reduce their expected financial cost. Apart from the energy efficiency benefits, the proposed scheme allows the participating MN operators to minimize their spending costs independently of the strategies of the coexisting MN operators. Despite the presented benefits, it is worth noting that infrastructure sharing should be considered for both low- and \mbox{high-traffic} periods, which is the focus of this paper. However, due to the existence of competition between the different MNs, collaboration in infrastructure sharing is a primary requisite. In order to enforce such a collaboration between competitors, the authors in~\cite{strategicsharing} proposed a strategic network infrastructure sharing framework for contractual backup reservation between a small/local network operator with limited resources and uncertain demands, and one resourceful operator with potentially redundant capacity. Here, one MN operator pays for network resources reserved for use by its subscribers in another MN operator's network, while in turn the payee guarantees the availability of the resources. Then, in~\cite{sanguanpuak}, the problem of infrastructure sharing among MN operators is presented as a \mbox{multiple-seller} \mbox{single-buyer} business. In that contribution, each \ac{BS} is utilized by subscribers from other operators; the owner of the BS is considered a seller of the BS infrastructure, while the operators whose subscribers utilize the BS are considered buyers. In the presence of multiple seller MN operators, it is assumed that they compete with each other to sell their network infrastructure resources to potential buyers. \\
\indent The aforementioned works consider BS infrastructure sharing towards lowering operational costs, e.g., by switching BSs on/off, while maintaining network control. In addition, infrastructure sharing is treated as a business case instead of a cooperative effort towards boosting connectivity in remote/rural areas. When one MN operator is treated as a seller and another as a buyer of its network resources, sharing becomes a business venture. For example, one MN operator might use the resource reservation technique, whereby it reserves resources for other small operators; again, the other party has to pay in order to use those facilities. However, it is worth mentioning that the works done in~\cite{sharingRAN}\cite{gamebasedsharing}\cite{strategicsharing}\cite{sanguanpuak} do not consider infrastructure sharing within the \ac{MEC} paradigm, and the use of green energy has been overlooked. The works that are within the \ac{MEC} paradigm share their \textit{own} network resources among themselves in order to handle spatially uneven computation workloads in the network, their objective being to avoid large computation latency at overloaded small BSs as well as to provide a high quality of service (QoS) to end users. The details of how internal infrastructure sharing is conducted cannot be covered in this article; interested readers are referred to~\cite{chen2018computation}. Table~\ref{tab_opt1} summarizes the differences of the infrastructure sharing strategy from existing works.
\section{System Model}\label{sec:sys}
\begin{figure}[h!]
\centering
\includegraphics[width = \columnwidth]{remotesite.eps}
\caption{The remote/rural BS site infrastructure consisting of the BS co-located with the MEC server both powered by green energy obtained from solar radiation and wind turbine.}
\label{fig:remotesite}
\end{figure}
In this paper, we consider a remote/rural site network scenario as illustrated in Fig.~\ref{fig:remotesite}. Each network apparatus (BS, MEC server) in the figure is mainly powered by renewable energy harvested from wind and solar radiation, and it is equipped with an \ac{EB} for energy storage. The stored energy is shared by the edge server and the BS system. The \ac{EM} is an entity responsible for selecting the appropriate energy source to replenish the \ac{EB} and for monitoring its energy level. The intelligent \mbox{electro-mechanical} switch (I-SW) aggregates the energy sources to replenish the \ac{EB}.
\noindent The proposed model in Fig.~\ref{fig:remotesite} is \mbox{cache-enabled} and TCP/IP offload capable (i.e., it enables {\it partial} offloading in the server's \ac{NIC}, such as checksum computation~\cite{sohan2010characterizing}). The virtualized MEC server, which is \mbox{co-located} with the \ac{BS}, is assumed to be hosting $C$ containers (see C1, C2 in Fig.~\ref{fig:remotesite}). Also, it has an input and an output buffer for holding the workloads. It is assumed that some of the BS functions are virtualized, as pointed out in~\cite{BS_virtualization}, and that the \ac{MEC} node comprises a virtualized access control router (ACR), which acts as an access gateway for admission control. The ACR is responsible for local and remote routing, and it is locally hosted as an application. Here, it is assumed that the remote/rural site infrastructure is shared between {\it two} MN operators through a \mbox{pre-existing} agreement, where a common microwave backhaul or a \mbox{multi-hop} wireless backhaul relaying is used for accessing remote clouds or the Internet. Moreover, a \mbox{discrete-time} model is considered, whereby time is discretized as \mbox{$t = 1,2,\dots$} time slots of a fixed duration $\tau$.
\subsection{Input Traffic and Queue Model}
\noindent In the communication site, the BS is the connection anchor point and the computing platform processes the currently assigned \mbox{delay-sensitive} tasks by \mbox{self-managing} its own local virtualized storage/computing resources. Also shown in Fig.~\ref{fig:remotesite} are an input buffer of size $L_{\rm in}$, a reconfigurable computing platform and the related switched virtual LAN, an output queue of size $L_{\rm out}$, and a controller that \mbox{re-configures} the \mbox{computing-plus-communication} resources and also performs the control of input/output traffic flows. Since the workload demand exhibits a diurnal behavior in remote/rural areas, forecasting the mobile operators' workload can help towards network infrastructure sharing. Thus, in order to emulate the remote site traffic load $L(t)$ (from $|\nu(t)|$ users), real MN traffic load traces from~\cite{bigdata2015tim} are used. It is assumed that \textit{only} operators A and B share the remote/rural BS site, and their traffic load profiles are denoted by $L_{\rm A} (t)$ and $L_{\rm B}(t)$ ([bits]), respectively. It is also assumed that a fraction $0.8$ of $L_{\rm A}(t)$ (or $L_{\rm B}(t)$) consists of \mbox{delay-sensitive} workloads $\gamma_{\rm A}(t)$ (or $\gamma_{\rm B}(t)$), while the remainder is \mbox{delay-tolerant}. The total admitted workload is denoted by $\gamma^*(t) = \gamma_{\rm A}(t) + \gamma_{\rm B}(t)$, with $\gamma^*(t) \leq L_{\rm in}$. The input/output (I/O) queues of the system are assumed to be \mbox{loss-free}, such that the time evolution of the backlog queues follows Lindley's equations. The normalized BS traffic load behavior of the two mobile operators is illustrated in Fig.~\ref{fig:trace_load}.
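For concreteness, a Lindley-type backlog recursion for the loss-free I/O queues can be sketched as follows (a minimal illustration; the variable names and service values are our own assumptions):
\begin{verbatim}
def lindley_update(backlog, admitted, served):
    """One-slot Lindley recursion for a loss-free queue (in bits)."""
    return max(0.0, backlog + admitted - served)

q = 0.0
for admitted, served in [(8e6, 5e6), (2e6, 5e6), (9e6, 5e6)]:
    q = lindley_update(q, admitted, served)  # 3e6, 0.0, 4e6
\end{verbatim}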
\begin{figure}[t]
\centering
\includegraphics[width = \columnwidth]{traffic_profiles.eps}
\caption{Normalized BS traffic loads behavior representing two MN operators represented as operator A and B.}
\label{fig:trace_load}
\end{figure}
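A minimal sketch of a one-step-ahead \ac{LSTM} load forecaster of the kind referred to in the contributions is shown below; the architecture and hyperparameters are illustrative assumptions, as only the use of an LSTM network is prescribed:
\begin{verbatim}
import torch
import torch.nn as nn

class LoadForecaster(nn.Module):
    """One-step-ahead forecaster for the normalized BS load L(t)."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, window):           # window: (batch, past_slots, 1)
        out, _ = self.lstm(window)
        return self.head(out[:, -1, :])  # forecast for the next slot

model = LoadForecaster()
past = torch.rand(16, 24, 1)             # previous 24 slots of load
next_load = model(past)                  # shape (16, 1)
\end{verbatim}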
\subsection{Communication and Computing Energy Cost Model}
\noindent For the BS system deployed in the remote/rural area, the total energy consumption $\theta_{\rm SITE}(t)$ (measured in \si{\joule}) at time slot $t$ consists of the BS communication processes, denoted by $\theta_{\rm COMM}(t)$, and the computing platform processes related to computing, caching, and communication, denoted by $\theta_{\rm COMP}(t)$. Thus, the energy consumption model at time slot $t$ is formulated as follows, inspired by~\cite{steering}:
\begin{equation}
\theta_{\rm SITE}(t) = \theta_{\rm COMM}(t) + \theta_{\rm COMP}(t).
\label{eq:siteconsupt}
\end{equation}
\noindent The \ac{BS} energy consumption $\theta_{\rm COMM}(t)$ consists of the sum of the following terms:
\begin{equation}
\theta_{\rm COMM}(t) = \sigma(t)\theta_0 + \theta_{\rm load}(t) + \theta_{\rm bk} + \theta_{\rm data}(t)\gamma^*(t)\,,
\end{equation}
\noindent where $\sigma (t)\in \{0,1\}$ is the BS switching status indicator, with $1$ representing the active mode and $0$ the power saving mode, and $\theta_0$ is the load-independent constant representing the operation energy. The term \mbox{$\theta_{\rm load} (t) = L(t) (2^{\frac{r_0}{\zeta(t) W}}-1)N_0 (K)^\alpha \beta^{-1}$} is the load-dependent transmission power to the served subscribers that guarantees low-latency services at a target rate $r_0$. The term $W$ is the channel bandwidth, $\zeta(t)$ is the fraction of the bandwidth used by the mobile users from operators A and B, while $\alpha$ and $\beta$ are the path loss exponent and the path loss constant, respectively. The term $K$ denotes the average distance between two BSs within the same region, and $N_0$ is the noise power. The parameter $\theta_{\rm bk}$ represents the constant microwave backhaul transmission energy cost, and $\theta_{\rm data}(t)$ (a fixed value in J/byte) is the \mbox{inter-communication} cost incurred by exchanging data between the BS and MEC interfaces.\\
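As a hedged sketch, the communication energy model can be translated into Python as below; it only mirrors the symbolic form of the equation above, and every numeric default that is not fixed by Table~\ref{tab_opt} is an illustrative assumption.
\begin{verbatim}
# Mirrors theta_COMM = sigma*theta_0 + theta_load + theta_bk
# + theta_data*gamma_star; defaults not fixed by the parameter
# table (alpha, beta, K, theta_data) are placeholders.

def theta_comm(sigma, L, zeta, gamma_star,
               r0=1e6, W=1e6, N0=10 ** (-20.4),  # -174 dBm/Hz in W/Hz
               K=1.0e3, alpha=3.5, beta=1.0,
               theta_0=10.6, theta_bk=50.0, theta_data=1e-9):
    theta_load = L * (2 ** (r0 / (zeta * W)) - 1) * N0 * K ** alpha / beta
    return sigma * theta_0 + theta_load + theta_bk + theta_data * gamma_star
\end{verbatim}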
\indent Next, we discuss the MEC server processes that make up $\theta_{\rm COMP}(t)$. With $\gamma^*(t)$ being the currently admitted workload to be processed, let $\gamma_c(t) \leq \gamma_{\rm max}$, $c = 1, \dots, C(t)$, denote the size of the task that the scheduler allocates to container $c$, bounded by the set maximum amount $\gamma_{\rm max}$. The constraint $\sum_{c=1}^{C(t)} \gamma_c(t) = \gamma^*(t)$ guarantees that the overall workload is partitioned into $C(t)$ parallel tasks.
This load distribution is motivated by the shares feature~\cite{migrationpower} that is inherent in virtualization technologies. This enables the resource scheduler to efficiently distribute resources amongst contending containers, thus guaranteeing the completion of the computation process within the expected time.
Thus, the set of attributes which characterize each container is $\{\psi_c(t), \theta_{{\rm idle},c}(t), \theta_{{\rm max},c}(t), \Delta, f_c(t) \}$, where $\psi_c(t) = (f_c(t)/f_{\rm max})^2$ is the container utilization function, and $f_{\rm max}$ is the maximum available processing rate per container. Here, $f_c(t) \in [f_0, f_{\rm max}]$ denotes the processing rate of container $c$, where $f_0$ is the zero speed of the container, e.g., deep sleep or shutdown. The term $\theta_{{\rm idle},c}(t)$ represents the static energy drained by container $c$ in its idle state, $\theta_{{\rm max},c}(t)$ is the maximum energy that container $c$ can consume, and $\Delta$ is the maximum \mbox{per-slot} and \mbox{per-container} processing time ([s]).\\
\indent Within the computing platform, the energy drained due to the active containers, denoted by $\theta_{\rm CP}(t)$, is induced by the \ac{CPU} share that is allocated for the workload, and it is given by:
\begin{equation}
\theta_{\rm CP}(t) = \sum_{c=1}^{C(t)}\theta_{{\rm idle}, c}(t) + \psi_{c}(t) (\theta_{{\rm max},c}(t)-\theta_{{\rm idle}, c}(t)).
\label{eq:cp}
\end{equation}
It should be noted that within the edge server there is a virtualization layer with switching capabilities (see Fig.~\ref{fig:remotesite}). Thus, the processing rates are switched from those of the previous time instance ($t-1$), denoted by $f_c(t-1)$, to those of the present instance ($t$), denoted by $f_c(t)$. This entails an energy cost, denoted by $\theta_{\rm SW}(t)$, which is defined as:
\begin{equation}
\theta_{\rm SW}(t) = \sum_{c =1}^{C(t)} k_e (f_c(t)-f_c(t-1))^2,
\label{eq:sw}
\end{equation}
where $k_e$ represents the \mbox{per-container} reconfiguration cost caused by a unit-size frequency switching, which is limited to a few hundreds of $\SI{}{\milli\joule}$ per $(\rm MHz)^2$. \\
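A short Python sketch of the two computing energy terms above follows, using the per-container constants of Table~\ref{tab_opt}; the rate vectors are illustrative.
\begin{verbatim}
# theta_CP: idle floor plus a share scaled by psi_c = (f_c/f_max)^2.
def theta_cp(f, f_max=105.0, th_idle=4.0, th_max=10.0):
    return sum(th_idle + (fc / f_max) ** 2 * (th_max - th_idle)
               for fc in f)

# theta_SW: quadratic reconfiguration cost of rate switching.
def theta_sw(f, f_prev, k_e=0.005):
    return sum(k_e * (fc - fp) ** 2 for fc, fp in zip(f, f_prev))

f_prev, f_now = [50.0, 70.0, 0.0], [70.0, 90.0, 50.0]
print(theta_cp(f_now), theta_sw(f_now, f_prev))
\end{verbatim}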
\indent The MEC server can perform TCP/IP computation processing in the network adapter in order to minimize the CPU utilization. This process drains an energy, denoted by $\theta_{\rm OF}(t)$, which is obtained as:
\begin{equation}
\theta_{\rm OF}(t) = \delta(t) \theta_{\rm idle}^{\rm nic}(t) + \theta_{\rm max}^{\rm nic}(t),
\label{eq:of}
\end{equation}
where $\theta_{\rm idle}^{\rm nic}(t)$ (a non-zero value) is the energy drained by the adapter when powered but with no data transfer in progress; switching the adapter off avails an opportunity to reduce this \mbox{non-zero} value to zero. For this, $\delta(t) \in \{0, 1\}$ is the switching status indicator, with $1$ indicating the active state and $0$ representing the idle state. Then, $\theta_{\rm max}^{\rm nic}(t)$ is the maximum energy drained by the network adapter process, and it is obtained in a similar way as in~\cite{steering}. \\
\indent In order to keep the \mbox{intra-communication} delays at a minimum, it is assumed that each container $c$ communicates with the resource scheduler through a dedicated reliable link that operates at the transmission rate of $r_c(t)$ [(bits/s)]. Thus, the power drained by the $c^{\rm th}$ \mbox{end-to-end} connection is given by:
\begin{equation}
P_c^{\rm net}(t) = \Psi_c (\overline{rtt_c} \, r_c(t))^2,
\end{equation}
where $c = 1, \dots, C(t)$, $\overline{rtt_c}$ is the average \mbox{round-trip-time} of the $c^{\rm th}$ \mbox{intra-connection}, and $\Psi_c$ (measured in $\SI{}{\watt}$) is the power consumption of the connection when the \mbox{round-trip-time}-by-communication-rate product is unit-valued. Therefore, after $\gamma_c(t)$ has been allocated to container $c$, the corresponding communication energy consumed by the $c^{\rm th}$ link, denoted by $\theta_{\rm LK}(t)$, is obtained as:
\begin{equation}
\theta_{\rm LK}(t) = P_c^{\rm net}(t) (\gamma_c(t)/r_c(t)) \equiv (2\Psi_c/(\tau - \Delta)) (\overline{rtt}_c \gamma_c(t))^2.
\label{eq:lk}
\end{equation}
\noindent In practical application scenarios, the maximum \mbox{per-slot} communication rate within the \mbox{intra-communications} is generally limited by a \mbox{pre-assigned} value $r_{\rm max}$, thus the following hard constraint must hold: $\sum_{c=1}^{C(t)} r_c(t) = \sum_{c=1}^{C(t)} (2\gamma_c(t)/ (\tau - \Delta)) \leq r_{\rm max}$. We also note that there exists a \mbox{two-way} per-task execution delay, where each link delay is denoted by $\varrho_c(t) = \gamma_c(t)/r_c(t)$. In this work, we assume that the overall delay equates to $2\,\varrho_{c}(t) + \Delta$.\\
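The rate constraint and the resulting per-task delay can be checked with a few lines of Python; the task sizes below are illustrative.
\begin{verbatim}
# Rates implied by r_c(t) = 2*gamma_c(t)/(tau - Delta); by this
# construction 2*rho_c + Delta equals the slot duration tau.
tau, Delta, r_max = 1800.0, 0.8, 1.0e6      # [s], [s], [bits/s]
gamma = [2.0e6, 1.5e6, 1.0e6]               # per-container tasks [bits]
rates = [2 * g / (tau - Delta) for g in gamma]
assert sum(rates) <= r_max, "per-slot rate budget exceeded"
delay = max(2 * g / r for g, r in zip(gamma, rates)) + Delta
\end{verbatim}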
\indent To dequeue the computational results from the output buffer, denoted by $L_{\rm out}$, tunable optical drivers are used for the data transfer processes. A \mbox{trade-off} between the transmission speed and the number of active drivers per time instance is required to reduce the energy consumption. For data transfers, $|D(t)| \leq D$ drivers are required for transferring $l_d(t) \in L_{\rm out}$. The energy drained by the data transfer process, denoted by $\theta_{\rm LS}(t)$, depends on the energy for utilizing each fast tunable driver, denoted by $m_d(t)$ [(J/s)] (a constant value), the target transmission rate $r_0$, and $L_{\rm out}$. Thus, the energy is obtained as follows:
\begin{equation}
\theta_{\rm LS}(t) = \sum_{d=1}^{D(t)} (m_d(t) l_d(t))/{r_0},
\label{eq:ls}
\end{equation}
where the parameters are obtained similarly to~\cite{steering}. \\
\indent To minimize the network traffic from the remote/rural site to the remote clouds, some of the frequently requested Internet content is cached locally, especially viral content. The caching process contributes to the energy consumption within the site, denoted by $\theta_{\rm CH}(t)$, and it is obtained as~\cite{steering}:
\begin{equation}
\theta_{\rm CH}(t) = \overline{\lambda} (t)\,(\theta_{\rm TR} (t) + \theta_{\rm CACHE}(t)),
\label{eq:cache}
\end{equation}
where $\theta_{\rm TR} (t)$ represents the power consumption due to transmission processes, $\theta_{\rm CACHE}(t)$ is the power consumption contributed by the caching process with its \mbox{intra-communication}, and $\overline{\lambda} (t)$ is the response time function for viral content~\cite{large_youtube}. \\
\indent Overall, the resulting \mbox{communication-plus-computing} processes incur an energy cost $\theta_{\rm COMP}(t)$, per slot $t$, which is given by Eqs.~(\ref{eq:cp}), (\ref{eq:sw}), (\ref{eq:of}), (\ref{eq:lk}), (\ref{eq:ls}), and (\ref{eq:cache}) as follows:
\begin{equation}
\begin{aligned}
\theta_{\rm COMP}(t) & = \theta_{\rm CP}(t) + \theta_{\rm SW}(t) + \theta_{\rm OF}(t) \\
& + \theta_{\rm LK}(t) + \theta_{\rm LS}(t) + \theta_{\rm CH}(t).
\end{aligned}
\label{eq:mec_cost}
\end{equation}
\subsection{Energy Harvesting and Demand Profiles}
\noindent The rechargeable energy storage device is characterized by its finite energy storage capacity $E_{\rm max}$, and the energy level reports are periodically pushed to the \mbox{DRC-RS} application in the \ac{MEC} server. In this case, the \ac{EB} level $E(t)$ is known, which enables the provisioning of the required communication and computing resources in the form of the required containers, transmission drivers, and the transmission power in the BS. To emulate the energy profiles, the amount of harvested energy $H(t)$ in time slot $t$ is obtained from \mbox{open-source} solar and wind traces from a farm located in Belgium~\cite{belgium}, as shown in Fig.~\ref{fig:energy_trace}.
\noindent The data in the dataset match the time slot duration ($\SI{30}{\minute}$) used in this work, and they are the result of daily environmental records.
In this work, wind energy is selected as a power source during the solar energy \mbox{off-peak} periods. The available \ac{EB} level $E(t + 1)$ at the off-grid site evolves according to the following dynamics:
\begin{equation}
\mbox{$E(t + 1) = \min\{E(t) + H(t) - \theta_{\rm SITE}(t)- a(t), E_{\rm max}\}$},
\label{eq:offgrid}
\end{equation}
where $E (t)$ is the energy level in the battery at the beginning of time slot $t$, $\theta_{\rm SITE}(t)$ represents the site energy consumption, {\it see} Eq.~\eq{eq:siteconsupt} above, and $a(t)$ is the leakage energy. It is worth noting that the energy level $E(t)$ is updated at the beginning of time slot $t$, whereas $H(t)$ and $\theta_{\rm SITE}(t)$ only become known at the end of $t$. Thus, the energy constraint at the off-grid site must be satisfied for every time slot: $\theta_{\rm SITE}(t) \leq E(t)$. For decision making, the online controller simply compares the received EB level reports with two \mbox{set-points} ($0 < E_{\rm low} < E_{\rm up} < E_{\rm max}$), the lower ($E_{\rm low}$) and upper ($E_{\rm up}$) energy thresholds. Here, $E_{\rm low}$ is the lowest EB level that the off-grid site should reach, and $E_{\rm up}$ corresponds to the desired energy buffer level at the site. If $E(t) < E_{\rm low}$, the site is said to be energy deficient, and a suitable energy source is selected at each time slot $t$ based on the forecast expectations, i.e., the expected harvested energy $\hat{H}(t)$.
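A minimal sketch of these buffer dynamics and of the set-point test, assuming the threshold values of Table~\ref{tab_opt}, is given below.
\begin{verbatim}
# E(t+1) = min{E(t) + H(t) - theta_SITE(t) - a(t), E_max}, followed
# by the controller's comparison against the two set-points.
E_MAX = 490e3
E_LOW, E_UP = 0.3 * E_MAX, 0.7 * E_MAX

def eb_step(E, H, theta_site, leak=2e-6):
    return min(E + H - theta_site - leak, E_MAX)

def eb_state(E):
    if E < E_LOW:
        return "deficient"   # select source from the forecast H_hat(t)
    return "buffered" if E < E_UP else "desired"
\end{verbatim}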
\begin{figure}[t]
\centering
\includegraphics[width = \columnwidth]{energy_profiles.eps}
\caption{Example traces of harvested solar and wind energy from~\cite{belgium}.}
\label{fig:energy_trace}
\end{figure}
\section{Problem Formulation}
\label{sec:prob}
\noindent In this section, the optimization problem is formulated to obtain energy-efficient infrastructure sharing and resource management procedures through \mbox{short-term} traffic load and harvested energy forecasting. The overall goal is to enable energy-efficient infrastructure sharing and resource management within remote and rural communication sites, in turn guaranteeing a \ac{QoS} comparable to that of urban areas with reduced energy consumption.
\subsection{Optimization Problem}
\label{opt}
\noindent Within the BS, the allocated bandwidth $W$ is shared between mobile subscribers from operator A and B, and within the computing platform, the containers (i.e., as the computing resources) and the underlying physical resources (e.g., \ac{CPU}) are shared among the users who offloaded their \mbox{delay-sensitive} workloads.
To address the aforementioned problem, two cost functions are defined: F1, defined as $\theta_{\rm SITE}(t)$, weighs the energy drained in the BS site due to transmission and computing processes; and F2, defined as $(\gamma^*(t) - L_{\rm in})^2$, accounts for the comparable \ac{QoS}. Regarding this formulation, it is worth noting that F1 tends to push the system towards \mbox{self-sustainability}, while F2 favors solutions where the delay-sensitive load is entirely admitted into the computing platform by the router application, taking into account the expected energy to be harvested. The corresponding (weighted) cost function is defined as:
\begin{equation}
\label{eq:Jfunc_2}
\begin{aligned}
J(\zeta, \sigma,C,D, t) & \stackrel{\Delta}{=} \Upsilon \, \theta_{\rm SITE}(\zeta(t), \sigma(t), C(t), D(t), t)\\
& + \overline{\Upsilon}(\gamma^*(t) - L_{\rm in}(t))^2 \, ,
\end{aligned}
\end{equation}
where $\Upsilon \in [0,1]$ is the weight used to balance the two functions, and $\overline{\Upsilon} \stackrel{\Delta}{=} 1 - \Upsilon$. Hence, starting from the current time slot $t = 1$ up to the finite horizon $T$, the time is discretized as $t = 1,2, \dots, T$, and the optimization problem is formulated as follows:
\begin{eqnarray}
\label{eq:objt_2}
\textbf{P1} & : & \min_{\mathcal{N}} \sum_{t=1}^T J(\zeta, \sigma, C,D, t) \\
&& \hspace{-1.25cm}\mbox{subject to:} \nonumber \\
{\rm A1} & : & \sigma(t) \in \{0,1\}, \nonumber \\
{\rm A2} & : & \beta \leq C(t) \leq C, \nonumber \\
{\rm A3} & : & E(t) \geq E_{\rm low} , \nonumber \\
{\rm A4} & : & 0 \leq \gamma_{c}(t) \leq \gamma_{\rm max}, \nonumber \\
{\rm A5} & : & 0 \leq f_{c}(t) \leq f_{\rm max}, \nonumber \\
{\rm A6} & : & \mbox{$r_{\rm min} \leq r_c(t) \leq r_{\rm max}$}, \nonumber\\
{\rm A7} & : & \mbox{$\theta_{\rm SITE}(t) \leq E(t)$}, \nonumber\\
{\rm A8} & : & \max \{2\,\varrho_{c}(t)\} + \Delta = \tau_{\rm max}, \quad t=1,\dots, T \, , \nonumber
\end{eqnarray}
where the set of objective variables to be configured at slot $t$ in the BS system and MEC server is defined as \mbox{$\mathcal{N} \stackrel{\Delta}{=} \{\zeta(t), \sigma(t), C(t), \{\psi_c(t)\}, \{P_c^{\rm net}(t)\}, \{\gamma_c(t)\}, \delta(t), D(t)\}$}. These settings handle the transmission and computing activities under the following constraints. Constraint A1 specifies the BS operation status (either {\it power saving} or {\it active}); A2 forces the required number of containers, $C(t)$, to be always greater than or equal to a minimum number \mbox{$\beta \geq 1$}, so as to always be able to handle mission-critical communications. Constraint A3 ensures that the \ac{EB} level is always above or equal to a preset threshold $E_{\rm low}$, to guarantee {\it energy \mbox{self-sustainability}} over time. Furthermore, A4 bounds the maximum workload of each running container $c$, with $c = 1,\dots, C(t)$, and A5 places a \mbox{hard-limit} on the corresponding \mbox{per-container} processing rate, which also bounds the \mbox{per-slot} processing time. A6 forces $r_c(t)$ to fall in a desired range $[r_{\rm min}, r_{\rm max}]$ of transmission rates, A7 ensures that the energy consumption at the site is bounded by the available energy in the EB, and A8 offers hard \ac{QoS} guarantees within the computing platform. From \textbf{P1}, it is noted that there exists a \mbox{non-convex} component $P_c^{\rm net}(t)$, from $\theta_{\rm LK}(t)$. In this case, the geometric programming (GP) concept can be used to convert $\theta_{\rm LK}(t)$ into a convex function, similarly to~\cite{steering}. Thus, in order to solve {\bf P1} in~\eq{eq:objt_2}, the \ac{LLC} approach~\cite{hayes_2004}, the GP technique, and heuristics are used to obtain the feasible system control inputs $\eta (t) = (\zeta(t), \sigma(t), C(t), \{\psi_c(t)\}, \{P_c^{\rm net}(t)\}, \{\gamma_c(t)\}, \delta(t), D(t))$ for $t=1,\dots,T$. It should be noted that~\eq{eq:objt_2} can be solved iteratively at any time slot $t \geq 1$, by simply redefining the time horizon as $t^\prime = t, t+1, \dots, t+T-1$.
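For completeness, the per-slot weighted cost of Eq.~(\ref{eq:Jfunc_2}) reduces to the following one-liner; the weight value is an arbitrary example.
\begin{verbatim}
# J = Upsilon*theta_SITE + (1 - Upsilon)*(gamma* - L_in)^2
def cost_J(theta_site, gamma_star, L_in, upsilon=0.5):
    return upsilon * theta_site + (1.0 - upsilon) * (gamma_star - L_in) ** 2
\end{verbatim}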
\subsubsection{Feasibility and QoS guarantees}
Regarding the feasibility of the problem, the following formal result holds.\\
\noindent\textbf{Proposition 1.} Feasibility conditions\\
\indent \textit{The following two inequalities:}
\begin{equation}
(r_{\rm max}/2)(\tau - \Delta) \geq L_{\rm in}
\end{equation}
\begin{equation}
\sum_{c=1}^{C(t)} f_c(t) \Delta \geq r_{\rm min}
\end{equation}
\textit{guarantee that the infrastructure sharing and resource reconfiguration problem is feasible}. \qquad\qquad\qquad\qquad\qquad\qquad $\square$
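The two inequalities translate directly into a feasibility check; the arguments are illustrative.
\begin{verbatim}
# Proposition 1: the input buffer must be drainable within a slot and
# the aggregate container capacity must cover the minimum rate.
def p1_feasible(r_max, tau, Delta, L_in, f_rates, r_min):
    drainable = (r_max / 2.0) * (tau - Delta) >= L_in
    capacity = sum(f_rates) * Delta >= r_min
    return drainable and capacity
\end{verbatim}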
Since the reported conditions assure that P1 admits a solution, we then consider the corresponding QoS properties. In this regard, A6 and A8 lead to the following hard bounds on the resulting \mbox{communication-plus-computing} delay.\\
\noindent\textbf{Proposition 2.} Hard \ac{QoS} guarantees\\
\indent\textit{Firstly, the feasibility conditions of Proposition 1 must be met. Next, we let random variables measure the following: the random queue delay of the input queue $\tau_{IQ}$, the service time of the input queue $\tau_{SI}$, the queue delay of the output queue $\tau_{OQ}$, and the service time of the output queue $\tau_{SO}$. Thus, the following QoS guarantees hold:}
\textit{the random total delay ($\tau_{\rm tot} \stackrel{\Delta}{=} \tau_{IQ} + \tau_{SI} + \tau_{OQ} + \tau_{SO}$) induced by the computing platform is limited (in a hard way) up to:}
\begin{equation}
\tau_{\rm tot} \leq ((L_{\rm in} + L_{\rm out})/ r_{\rm min}) + 2.
\end{equation}
Thus, the reported QoS guarantee leads to the conclusion that the remote/rural site can handle \mbox{delay-sensitive} workloads while meeting the bound in A8.
\subsection{Infrastructure Sharing and Resource Allocation}
\label{infra}
\noindent In this subsection, the predictions for the BS traffic load and energy consumption, the description of the remote/rural site system dynamics, and the proposed online \mbox{controller-based} algorithm are presented.
\subsubsection{Prediction of exogenous processes}
\label{predic}
\noindent Two exogenous processes are considered in this work: the harvested energy $H(t)$ and the BS traffic loads $L(t)$. In order to generate the predictions ($\hat{H}(t), \hat{L}(t)$), \ac{LSTM} neural networks~\cite{lstmlearn} were adopted. The \mbox{LSTM-based} predictor has been trained to output the forecasts for the required number of future time slots $T$. The trained LSTM network consists of an input layer, a single hidden layer of $40$ neurons, and an output layer; it was trained for $80$ epochs with a batch size of $4$. For training and testing purposes, the dataset was split as $70\%$ for training and $30\%$ for testing. As the performance measure of the model, the \ac{RMSE} is used.
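The paper does not publish the predictor's code; a minimal Keras sketch matching the stated configuration (one hidden LSTM layer of 40 units, 80 epochs, batch size 4, 70/30 split) could look as follows, where the window length and the optimizer are assumptions.
\begin{verbatim}
import numpy as np
from tensorflow import keras

def make_windows(series, lag=8):        # lag is an assumption
    X = np.stack([series[i:i + lag] for i in range(len(series) - lag)])
    return X[..., None], series[lag:]   # (samples, lag, 1), targets

series = np.random.rand(1000).astype("float32")  # stand-in for L(t)/H(t)
X, y = make_windows(series)
split = int(0.7 * len(X))

model = keras.Sequential([keras.Input(shape=(X.shape[1], 1)),
                          keras.layers.LSTM(40),
                          keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:split], y[:split], epochs=80, batch_size=4, verbose=0)
rmse = float(np.sqrt(model.evaluate(X[split:], y[split:], verbose=0)))
\end{verbatim}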
\subsubsection{Remote/Rural site system dynamics}
\label{rurdynamics}
\noindent In order to effectively manage the remote/rural site, an adaptive implementation of the controller is developed. Its purpose is to compute the solutions of both the infrastructure sharing and the resource configuration \mbox{on-the-fly}. For this purpose, an online \mbox{controller-based} algorithm is proposed and outlined in {\bf Algorithm \ref{tab:genm}} below.
\begin{small}
\begin{algorithm}[h!]
\begin{tabular}{l l}
{\bf Input:} & $s(t)$ (current state) \\
{\bf Output:} & $\eta^{*}(t)$ (control input vector)\\
01: & \hspace{-1cm} Parameter initialization\\
& \hspace{-1cm} ${\mathcal G}(t) = \{s(t)\}$ \\
02: & \hspace{-1cm} {\bf for} ($k$ within the prediction horizon of depth $T$) {\bf do}\\
& \hspace{-1cm}\quad - $\hat{L}(t+k)$:= forecast the workload \\
&\hspace{-1cm}\quad - $\hat{H}(t+k)$:= forecast the energy\\
& \hspace{-1cm}\quad - ${\mathcal G}(t+k) = \emptyset$ \\
03: & \hspace{-1cm}\quad {\bf for} (each $s(t)$ in ${\mathcal G}(t+k-1)$) {\bf do}\\
& \hspace{-1cm}\qquad - generate all reachable states $\hat{s}(t+k)$\\
& \hspace{-1cm}\qquad - ${\mathcal G}(t+k) = {\mathcal G}(t+k) \cup \{\hat{s}(t+k)\}$\\
04: & \hspace{-1.1cm} \quad\quad {\bf for} (each $\hat{s}(t+k)$ in $\mathcal G(t+k)$) {\bf do}\\
& \hspace{-1.1cm}\qquad\quad - calculate the corresponding $\theta_{\rm SITE}(\hat{s}(t+k))$\\
& \hspace{-1.1cm}\qquad\quad taking into account of $\zeta(t)$, and $l_d(t)$ from $L_{\rm out}(t)$\\
& \hspace{-1.1cm} \quad\quad {\bf end for}\\
& \hspace{-1.1cm}\quad\quad {\bf end for}\\
& \hspace{-1cm} \quad {\bf end for}\\
05: & \hspace{-1cm} - obtain a sequence of reachable states yielding\\
& \hspace{-1cm}\quad the best system input\\
06: & \hspace{-1cm} {$\eta^{*}(t):=$ control leading from $s(t)$ to $\hat{s}_{\min}$}\\
07: & \hspace{-1cm} {\bf Return $\eta^{*}(t)$}
\end{tabular}
\caption{DRC-RS Algorithm Pseudocode}
\label{tab:genm}
\end{algorithm}
\end{small}
\noindent At this point, it should be noted that at time slot $t$ the system state vector is $s(t) = (\zeta(t), \sigma(t), C(t), D(t), E(t))$, while the applied input vector that drives the system towards the desired behaviour is denoted by $\eta^*(t) = \{\zeta(t), \sigma(t), C(t), \{\psi_c(t)\}, \{P_c^{\rm net}(t)\}, \{\gamma_c(t)\}, \delta(t), D(t)\}$; the corresponding drivers perform bandwidth sharing, adaptive BS power transmission, autoscaling and reconfiguration of containers, and tuning of the optical drivers. The system behavior is described by the \mbox{discrete-time} \mbox{state-space} equation, adopting the \ac{LLC} principles~\cite{hayes_2004}:
\begin{equation}
s(t + 1) = \Phi(s(t), \eta(t)) \, ,
\end{equation}
where $\Phi(\cdot)$ is a behavioral model that captures the relationship between $(s(t),\eta(t))$ and the next state $s(t + 1)$. This relationship accounts for the amount of energy drained, $\theta_{\rm SITE}(t)$, and that harvested, $H(t)$, which together lead to the next buffer level $E(t+1)$ through Eq.~\eq{eq:offgrid}. The \ac{DRC-RS} algorithm finds the best control action vector $\eta^*(t)$ that yields the desired system behaviour within the remote/rural site. Note that $P_c^{\rm net}(t)$ is obtained using the CVXOPT toolbox, while $\gamma_c(t)$ and $C(t)$ are obtained following the procedure outlined in Remark 1 of~\cite{steering}. The entire process is repeated every time slot $t$, when the controller can adjust the behavior given the new state information. The values of $s(t)$ and $\eta(t)$ are measured and applied at the beginning of time slot $t$, whereas the offered load $L(t)$ and the harvested energy $H(t)$ are accumulated during the time slot and their values become known only at the end of it. This means that, at the beginning of time slot $t$, the system state at the next time slot $t+1$ can only be estimated, which is formally written as:
\begin{equation}
\hat{s}(t + 1) = \Phi(s(t),\eta(t)) \,.
\label{eq:state_forecast}
\end{equation}
In this regard, it is worth noting that the control actions are taken after exploring only a limited prediction horizon, yielding a limited number of possible operating states. In order to ensure system stability, we rely on the notion that a system is said to be stable under control if, for any state, it is always possible to find a control input that forces it closer to the desired state or within a specified neighborhood of it~\cite{llcprediction}.
\subsubsection{Dynamic Resource Controller for Remote/Rural Sites}
\label{alg}
\noindent The edge network management algorithm is outlined in Algorithm 1 above; it is based on the LLC principles, where the controller obtains the best control action $\eta^*(t)$. Starting from the {\it initial state}, the controller constructs, in a \mbox{breadth-first} fashion, a tree comprising all possible future states up to the prediction depth $T$. The algorithm proceeds as follows: \\
\begin{table} [t]
\caption{System Parameters.}
\centering
\begin{tabular} {|l|l|}
\hline
{\bf Parameter} & {\bf Value} \\
\hline
Microwave backhaul power, $\theta_{\rm bk}$ & $\SI{50}{\watt}$\\
BS operating power $\theta_0$, & $\SI{10.6}{\watt}$\\
Max. number of containers, $C$ & $20$\\
Min. number of containers, $\beta$ & $1$ \\
Time slot duration, $\tau$ & $\SI{30} {\minute}$\\
Container $c$ (idle state), $\theta_{{\rm idle},c}(t)$ & $\SI{4} {\joule}$\\
Container $c$ (max), $\theta_{{\rm max},c}(t)$ & $\SI{10} {\joule}$\\
Reconfiguration cost, $k_e$ & $ 0.005 \rm J/(\rm MHz)^2$\\
NIC in idle state, $\theta_{\rm idle}^{\rm nic}(t)$ & $13.1 \rm J$\\
Max. allowed processing time, $\Delta$ & $\SI{0.8} {\second}$\\
Processing rate set, $\{f_c(t)\}$ & $\{0,50,70,90,105\}$\\
Bandwidth, $W$ & $1 {\rm MHz}$\\
Max. allocated $c$ workload $\gamma_{\rm max}$ & 10 MB\\
Max. number of drivers, $D$ & $6$\\
Noise spectral density, $N_0$ & $-174 \, {\rm dBm/Hz}$\\
Driver energy, $m_d(t)$ & $1 \, \rm J/s$\\
Target transmission rate, $r_0$ & $1 \, \rm Mbps$\\
Leakage energy, $a (t)$ & $2\, \mu \rm J$\\
Energy storage capacity, $E_{\rm max}$ & $\SI{490} {\kilo\joule}$\\
Lower energy threshold, $E_{\rm low}$ & $30$\% of $E_{\rm max}$\\
Upper energy threshold, $E_{\rm up}$ & $70$\% of $E_{\rm max}$
\\
\hline
\end{tabular}
\label{tab_opt}
\end{table}
\indent A search set $\mathcal G$ consisting of the current system state is initialized (line 01), and it is accumulated as the algorithm traverses the tree (line 03), accounting for predictions, accumulated workloads at the output buffer, past outputs and controls, and operating intervals. The set of states reached at prediction depth $t+k$ is referred to as $\mathcal G(t+k)$ (line 02). Given $s(t)$, the traffic load $\hat{L}(t+k)$ and harvested energy $\hat{H}(t+k)$ are estimated first (line 02), and the next set of reachable states is generated by applying the accepted workload $\gamma^{*}(t+k)$, the harvested energy, and the shared bandwidth fraction $\zeta (t+k)$ (line 03). The cost function corresponding to each generated state $\hat{s}(t+k)$ is then computed (line 04), where $\hat{s}(t+k)$ takes into account $l_d$ as observed from $L_{\rm out}(t)$. Once the prediction horizon is explored, a sequence of reachable states yielding the minimum energy consumption is obtained (line 05). The control action $\eta^{*}(t)$ corresponding to the first state in this sequence is applied to the system, while the rest are discarded (line 06). The process is repeated at the beginning of each time slot $t$.
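A compact Python rendition of this lookahead is sketched below; it enumerates depth-$T$ action sequences, which is equivalent to the breadth-first tree of Algorithm 1 for small discrete action sets, and the model/cost callables stand in for the paper's $\Phi(\cdot)$ and $J(\cdot)$.
\begin{verbatim}
import itertools

def drc_rs(state, T, actions, step, cost):
    # step(s, a) -> s_hat plays the role of Phi(s, eta);
    # cost(s_hat) embeds the forecasts L_hat and H_hat.
    best, best_first = float("inf"), None
    for seq in itertools.product(actions, repeat=T):
        s, total = state, 0.0
        for a in seq:
            s = step(s, a)
            total += cost(s)
        if total < best:
            best, best_first = total, seq[0]
    return best_first          # eta*(t); re-planned every slot
\end{verbatim}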
\section{Performance Evaluation}
\label{sec:eval}
\noindent In this section, some selected numerical results for the scenario of Section~\ref{sec:sys} are shown. The parameters that were used in the simulations are listed in Table~\ref{tab_opt} above.
\begin{figure}[t]
\centering
\begin{subfigure}[t]{\columnwidth}
\centering
\includegraphics[width = \columnwidth]{trafficpredictions.eps}
\caption{One-step ahead predictive mean value for $L(t)$.}
\label{fig:bs_load}
\end{subfigure}
\quad
\begin{subfigure}[t]{\columnwidth}
\centering
\includegraphics[width = \columnwidth]{prediction_profiles.eps}
\caption{One-step ahead predictive mean value for $H(t)$.}
\label{fig:energy_load}
\end{subfigure}
\centering
\caption{One-step online forecasting for both $L(t)$ and $H(t)$ patterns.}
\label{fig:patterns}
\end{figure}
\subsection{Simulation setup}
A BS empowered with computation capabilities deployed in a rural/remote area is considered in this setup. Our time slot duration $\tau$ is set to $\SI{30} {\minute}$ and the time horizon is set to $T = 3$ time slots. For simulation, Python is used as the programming language.
\subsection{Numerical results}
\textit{Data preparation:} The information from the mobile and energy traces is aggregated to the chosen time slot duration. The mobile traces are aggregated from a $\SI{10}{\minute}$ observation interval to $\tau$, while the wind and solar traces are aggregated from a $\SI{15}{\minute}$ observation interval to $\tau$. The used datasets are readily available in a public repository (\textit{see}~\cite{traces}).\\
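A hedged sketch of this aggregation step with pandas is shown below; the file and column names are assumptions.
\begin{verbatim}
import pandas as pd

mobile = pd.read_csv("tim_traces.csv", parse_dates=["time"],
                     index_col="time")
energy = pd.read_csv("belgium_traces.csv", parse_dates=["time"],
                     index_col="time")

# Both loads and harvested energy accumulate within a slot.
L = mobile["load_bits"].resample("30min").sum()
H = energy["harvested_J"].resample("30min").sum()
\end{verbatim}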
In Fig.~\ref{fig:patterns}, the real and predicted values for the traffic loads of operators A and B and for the harvested energy are shown. Here, the forecasting routine tracks each value and predicts it \mbox{one-step} ahead. The selected prediction results shown are for operators A and B, solar, and wind. Table~\ref{tab:pred} below shows the average \ac{RMSE} of the normalized harvested energy and traffic load processes ($L_A, L_B$) for different time horizon values, $T \in \{1,2,3\}$. In the table, the term $H_{\rm wind} (t)$ represents the forecasted values of the energy harvested from wind turbines, and $H_{\rm solar} (t)$ those of the energy harvested from solar panels. From the obtained results, prediction variations between $H(t)$ and $L(t)$ are observed when comparing the average RMSE. The measured accuracy is deemed good enough for the proposed optimization.
\begin{table}[H]
\footnotesize
\centering
\caption{Average prediction error (RMSE) for harvested energy and
traffic load processes, both normalized in [0,1].}
\begin{tabulary}{1.0\textwidth}{|L|L|L|L|}
\hline
& {$T = 1$} & {$T = 2$} & {$T = 3$} \\
\hline
$L_A (t)$ & 0.070 & 0.090 & 0.011\\ \hline
$L_B (t)$ & 0.050 & 0.070 & 0.010\\ \hline
$H_{\rm wind}(t)$ & 0.011 & 0.013 & 0.016\\ \hline
$H_{\rm solar}(t)$ & 0.050 & 0.070 & 0.090\\
\hline
\end{tabulary}
\label{tab:pred}
\end{table}
The \mbox{DRC-RS} algorithm is benchmarked against another one, named Resource Reservation Manager (RRM), which is inspired by the backup reservation agreement of~\cite{strategicsharing}. In the RRM, the network resources are reserved per time slot based on a \mbox{set-point} threshold percentage. Both algorithms make use of the learned information.\\
\begin{figure}[h]
\centering
\includegraphics[width = \columnwidth]{bsenergy.eps}
\caption{Energy savings versus number of users connected to the BS.}
\label{fig:bsusers}
\end{figure}
Figure~\ref{fig:bsusers} shows the average energy savings obtained within the off-grid system. Here, the number of users connected to the remote site is increased from $|\nu(t)| = 5$ to $50$, with an incremental step size of $5$. The obtained energy savings are with respect to the case where the BS site is dimensioned for maximum expected capacity (maximum values of $\theta_{\rm COMM}(t)$ and $\theta_{\rm COMP}(t)$). As expected, the energy savings decrease as the number of mobile users connected to the remote site increases, and \mbox{DRC-RS} outperforms the RRM algorithm. In this regard, we note that the communication site will accept users as long as the energy harvesting projections are positive.\\
\begin{figure}[t]
\centering
\includegraphics[width = \columnwidth]{joint.eps}
\caption{Mean energy savings for the remote/rural site system.}
\label{fig:rmsite}
\end{figure}
Then, Fig.~\ref{fig:rmsite} shows the average energy savings for the edge system. Here, the BS group size is set to $|\nu(t)| = 20$, and the obtained energy savings are with respect to the case where no energy management procedures are applied, i.e., the BS is dimensioned for maximum expected capacity (maximum value of $\theta_{\rm SITE} (t)$, $\forall t$) and the MEC server provisions the computing resources for the maximum expected computation workload (maximum value of $\theta_{\rm COMP} (t)$, with $C = 20\, \text{containers}, \forall t$). The average results of \mbox{DRC-RS} ($k_e = 0.05, \gamma_{\rm max} = 10$ MB) show energy savings of $51 \%$, while RRM achieves $43 \%$ on average. The obtained numerical results show the effectiveness of the BS management procedure, the autoscaling and reconfiguration of the computing resources, and the on/off switching of the fast tunable laser drivers, coupled with foresighted optimization.
\section{Conclusions}
\label{sec:concl}
The challenge of providing connectivity to remote/rural areas will be one of the pillars for future mobile networks. To address this issue, in this paper, we present an infrastructure sharing and resource management mechanism for handling \mbox{delay-sensitive} workloads within a remote/rural site.
Numerical results, obtained with \mbox{real-world} energy and traffic load traces, demonstrate that the proposed algorithm achieves mean energy savings of $51 \%$, compared with the $43 \%$ obtained by our benchmark algorithm. Also, the energy that can be saved decreases as the number of users connected to the BS increases, with a guarantee of serving more users as long as green energy is available.
The energy saving results are obtained with respect to the case where no energy management techniques are applied in the remote site.
\section*{Data Availability}
In this paper, open-source datasets for the mobile network (MN) traffic load and for solar and wind energy have been used. The details are as follows: (1) the real MN traffic load traces used to support the findings of this study were obtained from the Big Data Challenge organized by Telecom Italia Mobile (TIM), and the data repository has been cited in this article; (2) the real solar and wind traces used to support the findings of this study have also been cited in this article.
\bibliographystyle{IEEEtran}
\scriptsize
| {'timestamp': '2021-10-06T02:17:56', 'yymm': '2110', 'arxiv_id': '2110.01910', 'language': 'en', 'url': 'https://arxiv.org/abs/2110.01910'} |
\section{Introduction}
Extreme solar storms can be defined as energetic solar events related to
large-scale disturbances in the Earth's magnetosphere, called geomagnetic events \citep{Cliver04,Koskinen06, Echer11a,Echer11b,Echer13,Gonzalez11b}.
{ Before the launch of satellites, the activity of the Sun was recorded by ground-based instruments observing in visible light
(e.g. see the Meudon data-base ''BASS2000'' with spectroheliograms registered from 1909 until today; see examples in Figure \ref{spot}). Surveys in white light, in H$\alpha$, and in the Ca II H and K lines make it possible to study solar cycle activity by tracking the sunspots and studying their size and complexity \citep{Waldmeier1955,McIntosh1990,Eren17}. The enhancement of emission was used as a good proxy for detecting flares \citep{Carrington1859}. However the detection of flares was limited by the spatial and temporal resolution of the observations.}
Recently, different approaches have succeeded in quantifying the intensity of some historical events using different magnetometer stations over the world.
The analysis of magnetic recordings made as early as the middle of the nineteenth century by
ground stations allowed us to clarify the importance of several extreme events { \citep{Tsurutani2003,Cliver04,Lakhina2008,Cid13,Cliver2013}.}
During the XX$^{th}$ century, several important events with $Dst < -700$ nT were observed after intense flares and connected to aurorae.
Exploring historical extreme events shows all the problems encountered when one aims at understanding the phenomena from one end to the other.
{ It is difficult to identify the solar source of extreme geoeffective events without continuous observations of the Sun and without quantified measurements of the energy released during the solar events.
The {\it Geostationary Operational Environmental Satellites} (GOES) have registered the global soft X-ray emission (1--8 \AA) of the Sun since the 1980s. The intensity of a flare is classified by the letters X, M, and C, which correspond to a peak flux of 10$^{-4}$, 10$^{-5}$, and 10$^{-6}$ W m$^{-2}$, respectively. The extreme historical solar events, for which only the size of the sunspots and, for example for the Carrington event, ``the magnetic crochet'' recorded on the Greenwich magnetogram or ionospheric disturbances are known, were associated with extreme geomagnetic events by comparison with recent events.
It is interesting to read the papers of \citet{Tsurutani2003,Cliver2013}, where several historical events, e.g. Sept. 1859, Oct. 1847, Sept. 1849, and May 1921, have been discussed and classified.}
With the {\it SOlar and Heliospheric Observatory }\citep[SOHO;][]{Fleck1995}, launched in 1995, with its on-board spectro-imagers and coronagraphs, and more recently with the {\it Solar TErrestrial RElations Observatory} { \citep[STEREO A and B, 2006;][]{Wuelser2004,Russel2008} } and its { COR} and { HI} coronagraphs, able to track ejecta up to the Earth for particular conjunctions (see the HI
HELCATS website),
the solar sources of geoeffective events could be identified with more accuracy. A new era was opened for { forecasting geomagnetic disturbances by being able to follow the solar events in multiple wavelengths, and particularly the coronal mass ejections from the Sun to the Earth. This is the new science called ``Space Weather''.}
Intense flares responsible for geoeffective events are commonly associated with Solar Energetic Particle (SEP) events and/or coronal mass ejections (CMEs). Several minutes after the flares, very high energy particles (SEPs) may enter the Earth's atmosphere, affecting astronauts or electronic parts in satellites.
However, concerning geomagnetic disturbances, CMEs { can be as geoeffective as the energetic particles when their trajectory is oriented towards the Earth and when their speed is large enough \citep{Gopalswamy10a,Gopalswamy10b,Wimmer14}. SEP ejections produce particle radiation }with large fluence; however, only a few SEP events occur during each solar cycle, while CMEs have an occurrence rate between { 2 and 3 per week at solar minimum and between 5 and 6 per day at solar maximum, these numbers also depending on the used coronagraphs \citep{StCyr2000,Webb2012,Lugaz2017}. They originate from highly-sheared magnetic field regions, which can be referred to as large magnetic flux ropes carrying strong electric currents.} They are statistically more likely to lead to geomagnetic disturbances when their solar sources are facing the Earth \citep{Bothmer07,Bein11,Wimmer14}. According to their speed, their interplanetary signatures (ICMEs) may reach the Earth in one to five days after the flare \citep{Yashiro06,Gopalswamy09,Bein11}.
{ Halo CMEs observed with the white-light SMM coronagraph were first named ``global CMEs'' \citep{Dere2000} and already suspected of being responsible for geoeffective events \citep{Zhang1988}. }Recent studies confirmed the geoeffectivity of halo CMEs, which generally form magnetic clouds (MC) (e.g. Bocchialini et al 2017, Solar Physics in press). The MCs are associated with extreme storms ($Dst < -200$ nT) and intense storms ($-200 < Dst < -100$ nT) \citep{Gonzalez07,Zhang07}, while the moderate storms ($-100 < Dst < -50$ nT) studied in solar cycle 23 were found to be associated with co-rotating regions by $47.9 \%$, with ICMEs or magnetic clouds (MC) by $20.6 \%$, with sheath fields by $10.8 \%$, or with combinations of sheath and ICME ($10\%$) \citep{Echer13}.
{ However, magnetic clouds can be less geoeffective if they are directed away from the Earth, like the fast ICME of July 2012 \citep{Baker2013}, or if the magnetic field of the cloud arrives at the magnetosphere with an orientation towards the North, as in the case of August 1972 \citep{Tsurutani1992}. In August 1972 a huge sunspot group, McMath region 11976 (see Figure \ref{spot}), crossed the disk and was the site of energetic flares; consequently, shocks were detected at 2.2 AU by Pioneer 10 \citep{Smith1976}. The estimated velocity of the ejecta was around 1700 km/s, which is nearly the highest transit speed on record. \citet{Tsurutani2003} estimated its magnetic field to be around 73 nT, which is also a huge number. But the Dst index indicated a relatively shallow recovery phase, like that of a moderate storm \citep{Tsurutani1992}.
Nowadays the {\it in situ} parameters of the solar wind, including the interplanetary magnetic field (IMF),
are monitored at L1 by the ACE spacecraft \citep{Chiu1998} with its magnetic field experiment (MAG) or similar instruments. They clearly indicate the passage of the satellite through an ICME or magnetic cloud by the changes of the solar wind speed and the reversed sign of the magnetic components Bx and By. The ICME is more geoeffective if the IMF-Bz component is negative, indicating a strong coupling with the magnetosphere. }
{ We can conclude that, although extreme solar storms do not necessarily initiate extreme geomagnetic events, extreme geomagnetic events are nearly always produced by extreme solar storms. And extreme solar storms most of the time issue from the biggest sunspot groups, which produce the most energetic events \citep{Sammis2000}.}
The paper is organized as follows. After an historical review of large sunspot groups observed on the Sun related to geomagnetic storms (Section 2), we present statistical results on star and Sun flares according to the characteristics of the spots (flux, size) (Section 3). Section 4 is focused on an MHD model (OHM) predicting the capability of the Sun to produce extreme events. { Finally, the conclusion is given in Section 5.}
\begin{figure}
\centering
\mbox{
\includegraphics[width=12cm]{present_1947_2003.png}
}
\hspace{0.5cm}
\mbox{
\includegraphics[width=12cm]{present_2003_Nov17_1972_3.png}
}
\hspace{-5 cm}
\caption{Full disk spectroheliograms from the Observatoire de Paris in Meudon. ({\it top panels}) The largest sunspot groups ever reported: ({\it left}) on April 4, 1947, with no geoeffective consequence; ({\it right}) on October 28, 2003. The AR 10486 in the southern hemisphere { led} to an X17 flare and consequently to a geomagnetic disturbance with Dst = $-350$ nT. ({\it bottom panels}):
({\it left}) AR 10501 on November 17, 2003, observed in Ca II K1v, with an inserted H$\alpha$ image of the active region. The huge eruptive filament surrounding the AR initiated
the largest Dst of the 23$^{\rm rd}$ solar cycle (Dst = $-427$ nT). ({\it right}) McMath region 11976, a large sunspot group, source of flares and ejected energetic particles in August 1972
(spectroheliograms from the Meudon data-base ''BASS2000''). }
\label{spot}
\end{figure}
\begin{figure}
\centering
\mbox{
\includegraphics[width=10cm]{Kilcik_figure.png}
}
\caption{{ CME number and speed per solar Carrington rotation related to sunspot number and indexes of geoeffectivity (Dst and Ap).}
The dashed line shows the sunspot number, the bold solid line the CME
speed index, the dotted line the CME number, the double line the
Dst
index, and the
thin solid line represents the
Ap
index (adapted from \citet{Kilcik11}).}
\label{CME}
\end{figure}
\begin{figure*}
\centering
\mbox{
\includegraphics[width=14cm]{Aulanier_graph.png}
}
\caption{
Magnetic flux in the dominant polarity of the bipole, and magnetic energy released during the flare, calculated as a function of the maximum
magnetic field and the size of the photospheric bipole. The x and + signs correspond to extreme solar values. The former is unrealistic and the
latter must be very rare (from \citet{Aulanier13}).}
\label{OHM}
\end{figure*}
\section{Historical view of solar sources of geoeffectivity}
The Carrington event on September 1, 1859, well known to be associated with one of the largest sunspot groups and one of the strongest
flares \citep{Carrington1859,Hodgson1859}, had the largest magnetic signature ever
observed at European latitudes, with the consequent aurora visible at low { geographic} latitude ($\pm18^\circ$) 17.5 hours later. Using the transit time, \citet{Tsurutani2003} proposed that the $Dst$ value decreased down to $-1\ 760$ nT during this event. { The Colaba (Bombay) record allowed a more precise determination of around $-1600$ nT \citep{Cliver2013,Cid13}. This value is more than twice the value of the next extreme geomagnetic events.}
Revisiting this event by analysing ice core nitrates and $^{10}Be$ data,
\citet{Cliver2013} claimed that it reached only $-900$ nT. Nevertheless it { seems} to be the strongest geoeffective event registered up to now. A correlation between solar energetic proton fluence (more than $30$ MeV) and flare size based on modern data shows that this event can be classified as an extreme solar event, with an X-ray flare having an estimated class larger than X10.
All these extreme registered events, 12 episodes since the Carrington event, are solar activity dependent \citep{Gonzalez11a} (rough association). They occurred mainly during the solar cycle maximum of activity, with its two bumps, and during a secondary peak in the declining phase of the solar cycle.
Between 1876 and 2007, the largest sunspot area overlaid by large bright flare ribbons was observed in the Meudon spectroheliograms in Ca II K1v and H$\alpha$ between July 20--26, 1946 \citep{Dodson1949}.
A well observed flare event occurred on July 25, 1946 at 17:32 UT and caused a huge geomagnetic storm 26.5 hours later.
The size of the sunspot group was equivalent to 4200 millionths of the solar hemisphere (MSH) and the ribbon surface around 3570 MSH \citep{Toriumi16}.
The Carrington AR sunspot group seemed to be smaller than that one, according to the sunspot drawings.
The next year an even larger sunspot group was visible in { the spectroheliogram of } April 5, 1947, with a size reaching 6000 MSH, but it had no geoeffective consequence (Figure \ref{spot}). The flare looked extended and powerful but
was not accompanied by any coronal mass ejection. It could be a case similar to the more recent event observed in October 2014. The AR 12192 presented a sunspot area of 2800 MSH and was the site of several flares (6 X- and 24 M-class) \citep{Sun15,Thalmann15}. These two active regions are really exceptional. The AR 12192 did not launch any CMEs. Different interpretations have been proposed: the region would possess not enough stress or free energy, or the CME eruptive flux rope would not have reached the threshold height of the torus instability \citep{Zuccarello15}.
Although there are on average two CMEs per day, only some of them are geoeffective. In October and November 2003, the largest sunspot groups (AR 10486, with an area of 3700 MSH) crossed the disk and were the sites of extreme events (Figure \ref{spot}). X17, X10 and X35 flares were reported on October 28, October 29 and November 4, respectively. However, the most extreme geomagnetic storm occurring during the whole Solar Cycle 23, with $Dst = -422$ nT, was linked to an M9.6-class flare on November 20, 2003 \citep{Gopalswamy05,Moestl08,Marubashi12}. The origin of the solar event was in the region AR 10501 and has been associated with the eruption of a large filament \citep{Chandra10} (Figure \ref{spot}).
The AR 10501 did not have the largest sunspot area; the cause of the flare and CME was merely the injection of opposite magnetic helicity by a newly emerging flux, which destabilized the large filament and led to a full halo CME (speed = 690 km/s) and a magnetic cloud in the heliosphere. The size of the sunspot group is an important parameter, but it is not sufficient to get an extreme solar storm.
Since the geoeffectivity is not straightforward, in order to forecast major storms it is important to understand the nature (magnetic strength and helicity) and the location of the solar sources, the propagation of the CMEs through the interplanetary medium, and their impacts on the magnetosphere/ionosphere system. Statistical studies of solar and magnetic activities during solar cycle 23 have made it possible to associate CMEs and geomagnetic disturbances, providing long lists of CMEs with their characteristics, i.e. their width, velocity, and solar sources \citep{Zhang07,Gopalswamy10a, Gopalswamy10b}. They showed that a CME is more likely to give rise to a geoeffective event if it is a fast halo CME (with an apparent width around $360^\circ$) with a solar source close to the solar central meridian.
In some cases, the proposed sources came from active regions close to the limb. \citet{Cid12} proposed to revisit this subset of events: in order to associate every link in the Sun-Earth chain, they not only considered the time window of each CME-ICME, but also carefully revised every candidate at the solar surface.
The result was that a CME coming from a solar source close to the limb cannot be really geoeffective (i.e., associated with an at least moderate, and a fortiori intense, storm) if it does not belong to a complex series of other events. {
Possible deflection of a CME in the corona as well as in the interplanetary space may change the geoeffectiveness of a CME \citep{Webb2012}.
Deflections of up to a few tens of degrees have been reported, even during the SMM mission \citep{Mein1982,Bosman2012,Kilpua2009,Zuccarello2012,Isavnin2013,Mostl2015}.}
In the statistical analysis of Bocchialini et al 2017, it has been shown that a CME deflected from its radial direction by more than 20 degrees produced an exceptional geoeffective event. { Moreover, the orientation of the magnetic field of the magnetic cloud ($Bz <0$) is also an important parameter to get an extreme geoeffective event (see the Introduction).}
\section{Characteristics of super flares}
Free magnetic energy stored in the atmosphere is released through global solar activity, including CMEs (kinetic energy), flares and SEPs (thermal and non-thermal energy).
There is no real physical reason to expect a relationship between the different categories of released energy.
\citet{Emslie12} estimated all energy components for 38 solar eruptive flares observed between 2002 and 2006. The maximum non-potential energy in an active region reached 3$\times 10^{33}$ erg and could therefore power all the flare activity in the region. Only 0.5 percent of CMEs have a kinetic energy reaching 3$\times 10^{32}$ erg, while the mean kinetic energy of 4133 CMEs is around 5$\times 10^{29}$ erg. They found a weak relationship between the estimations of the different energies, due to large uncertainties. However, the relationship looks more reliable for extreme events (syndrome of the big flare).
However, the systematic study of geoeffective events occurring through the solar maximum activity year (2002), already mentioned in Section 1, showed that only 2 X-class flares among the 12 X-class flares were related to Sudden Storm Commencement (SSC)-led events in the magnetosphere; the other SSCs were related to M- and even C-class flares (Bocchialini et al 2017). The solar cycle variation of the {\it Dst} does not follow the general trend of the sunspot number during the
declining phases of solar cycles, but is comparable to the
trend of CME speeds and CME numbers with the secondary peak \citep{Kilcik11} (Figure \ref{CME}). This behaviour confirmed the importance of CMEs in geoeffectivity.\\
However, statistical analysis of flare intensity showed a relationship with some categories of active regions. Flares were related to large sunspot active regions (categories A, B, F) in the Zurich classification \citep{Eren17}. The class F consists of large ARs with sunspot fragmentation, commonly indicating the existence of strong shear.
This study confirmed the finding, concerning the historical events, that large geoeffective events are linked to the existence of large sunspot groups \citep{Carrington1859,Dodson1949}. { The extreme events should be related to large sunspots, as for the ''Halloween'' events of October--November 2003 in AR 10486 (Figure \ref{spot} top right). The flare on November 4, 2003 is generally considered to be
the most intense SXR event of the space age, with an estimated
peak SXR classification ranging from X25 to X45 \citep{Gopalswamy05,Cliver2013}. However, the most geoeffective event occurred on November 20, 2003. The AR 10501 did not have a large sunspot, and the extreme solar event was a coronal mass ejection with large kinetic energy. This event shows one example of large geoeffectivity not related to the sunspot size (Figure \ref{spot} bottom row) but to the magnetic shear and magnetic helicity injection \citep{Chandra10}.}
Recently, super flares (energy $10^{34}$ to $10^{36}$ erg) have been discovered in Sun-like stars (slowly rotating stars) by the new Kepler space satellite \citep{Maehara12}. A debate started about the possibility of observing such super flares on the Sun. \citet{Shibata13} forecasted that one such super flare could occur every 800 years. Stars are suspected to have large spots, and a large sunspot on the Sun with a flux of 2 $\times$ 10$^{23}$ Mx would not be impossible and would correspond to an energy of 10$^{34}$ erg \citep{Shibata13}.
\citet{Toriumi16} made a statistical analysis of the new solar Cycle 24 flares between May 2010 and April 2016. Considering 51 flares exhibiting two flare ribbons (20 X- and 31 M-class), they determined an empirical relationship, in logarithmic scale, between the size of the sunspots (S$_{\rm spot}$) in flaring active regions and their magnetic flux $\Phi_{\rm spot}$:\\
$\log \Phi_{\rm spot} = 0.74 \times \log S_{\rm spot} + 20$, with some uncertainties. \\
Considering the largest spots ever observed on the Sun (July 1946 and October 2014), they extrapolated this relationship and estimated a maximum flux of 1.5$\times 10^{23}$ Mx. They did not take into account the fact that all the energy of the spots can be transformed into thermal and non-thermal energy and not into kinetic energy (no CME was launched in October 2014, for example).
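As a rough check, evaluating this relation (S$_{\rm spot}$ in MSH, $\Phi_{\rm spot}$ in Mx) at the spot areas quoted in this paper gives central estimates below the quoted maximum of 1.5$\times 10^{23}$ Mx, which presumably folds in the relation's uncertainties; the short Python snippet below ignores them.
\begin{verbatim}
import math

def spot_flux(S_msh):
    # log Phi = 0.74 * log S + 20  (Toriumi et al.)
    return 10 ** (0.74 * math.log10(S_msh) + 20)

for S in (2800, 4200, 6000):   # AR 12192, July 1946, April 1947 [MSH]
    print(S, f"{spot_flux(S):.1e} Mx")   # ~6.2e22 Mx at 6000 MSH
\end{verbatim}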
\begin{figure*}
\centering
\mbox{
\includegraphics[width=12cm]{present_sun_star.png}
}
\caption{Schematic representation of several modeled sunspot groups without faculae on the solar disk, with their corresponding modeled flare energies computed with the OHM simulation. { A sunspot group consists of several pairs of sunspots. In each group a pair of sunspots (surrounded by a red curve), representing 1/3 of the sunspot group area, is modeled in the simulation. The size of the grey areas is normalized to the size of the spots considered in the simulation (adapted from \citet{Aulanier13}).}}
\label{star}
\end{figure*}
\section{Prediction of extreme solar storms}
It appears that MHD simulations of emerging flux could be used for a systematic survey to investigate the process of energy storage and find the relationship between sunspot size and CME eruptive events.
The {\it Observationally driven High order scheme Magnetohydrodynamic code} (OHM) \citep{Aulanier05,Aulanier2010} simulation has been used as a tool to experiment with huge energetic events on the Sun, e.g. a large super flare ($10^{36}$ erg), by varying the characteristics of the sunspots in a large
parameter space \citep{Aulanier13}.
The model consists of a bipole with two rotating sunspots, which is equivalent to creating a strong shear with cancelling flux along the polarity inversion line. The 3D numerical simulation solved the full MHD equations for the mass density, the fluid velocity u, and the magnetic field B under the plasma $\beta$ = 0 assumption. The calculations were performed in non-dimensionalized
units, using $\mu$ = 1.
The magnetic field diffusion favored the expulsion of the flux rope. The parameter-space study led to graphs of the values of magnetic flux and energy according to the size of the sunspot in MSH units and the stress of the field (Figure \ref{OHM}).
The magnetic flux $\Phi$ and the total flare energy E are defined as follows:\\
\noindent $\Phi = 42\, \left(\frac{B_z}{8\,{\rm T}}\right) \left(\frac{L^{\rm bipole}}{5\,{\rm m}}\right)^2$ Wb \\
\noindent $E = \frac{40}{\mu} \left(\frac{B_z}{8\,{\rm T}}\right)^2 \left(\frac{L^{\rm bipole}}{5\,{\rm m}}\right)^3$ J
\\
\\
B is the strength of the magnetic field in the bipole (sunspot), L is the size of the bipole.
The problem is the estimation of the value of L.
$L^2$ can be computed as the area of an active region with faculae (L = 200 Mm). The maximum value for the flux is then $\Phi$ = 10$^{23}$ Mx and for the energy E = 3 $\times$ 10$^{34}$ erg, which falls in the range of stellar superflares \citep{Maehara12}. However, L should be reduced to 1/3 of this value because the stress of the field concerns only a small part of the PIL \citep{Aulanier13}. The maximum energy could then not exceed 10$^{34}$ erg. These results come from a self-consistent model with sheared flux leading to a CME, with no approximation.
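A numerical check of these scalings, assuming SI units with $\mu$ set to the vacuum permeability, $B_z = 0.25$ T (a strong sunspot field) and L = 200 Mm, reproduces the quoted orders of magnitude; both input values are assumptions made for illustration.
\begin{verbatim}
import math

mu0 = 4 * math.pi * 1e-7                           # assumed mu (SI)
B, L = 0.25, 200e6                                 # [T], [m]; assumed
flux = 42 * (B / 8) * (L / 5) ** 2                 # [Wb], 1 Wb = 1e8 Mx
energy = (40 / mu0) * (B / 8) ** 2 * (L / 5) ** 3  # [J], 1 J = 1e7 erg

print(f"{flux * 1e8:.1e} Mx")    # ~2e23 Mx
print(f"{energy * 1e7:.1e} erg") # ~2e34 erg
\end{verbatim}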
On the other hand, the estimations of \citet{Toriumi16} are very empirical, mixing different observations not related to one another. Each estimation has been overestimated. For example, the volume of the active region concerned by the flare has been estimated by the product of S$_{\rm ribbon}$ (the surface area of the ribbons) and the distance between the ribbons \citep{Toriumi16}. However, the uncertainty in the estimation of the magnetic field in this volume can lead to an overestimation by one to two orders of magnitude, { according to the f value introduced in their equations}. Taking unrealistic values of B and of the flux leads to unrealistic energy values never observed in our era \citep{Emslie12}.
\section{Conclusion}
{ Commonly extreme solar events are produced in active regions having a strong magnetic reservoir (high magnetic field and stress). They are defined as very powerful X-ray flares and coronal mass ejections with high kinetic energy directed toward the Earth, leading to magnetic clouds arriving at the magnetosphere with a geoeffective orientation (B$_z$ negative) and strong ejections of energetic particles (SEPs). Large sunspot groups with fragmentation are good candidates for extreme solar storms \citep{Sammis2000}.}
With our Sun as it is today, it seems impossible to get larger sunspots and super-flares with energy $>$ 10$^{34}$ erg. { Figure \ref{star} shows
different sunspot groups. In each of them a pair of sunspots surrounded by red curves represents the bipole used as the boundary condition of the OHM simulation. The energy mentioned below each pair is the result of the simulation. With huge sunspots we obtain large energies, as recorded for stars by the Kepler satellite. Such large spots
have never been observed on the Sun.}
We should not forget that the simulation concerns a bipole with rotating spots imposing a strong shear along the PIL. The shear is a necessary ingredient for the expulsion of CMEs, in the simulation as well as in the observations. In order to produce stronger flares, Sun-like stars should have a much stronger dynamo than the Sun and a faster rotation (a period of only a few days). The prediction of extreme solar storms occurring in 800 years would be very speculative.
Acknowledgements\\
The author would like to thank the organizers of the meeting, Drs. Katya Georgieva and Kazuo Shiokawa, for inviting me to Varna for the VarSITI meeting in June 2016. I want to thank G. Aulanier for his fruitful comments on this work.
| {'timestamp': '2017-08-08T02:05:44', 'yymm': '1708', 'arxiv_id': '1708.01790', 'language': 'en', 'url': 'https://arxiv.org/abs/1708.01790'} |
\section{ Introduction}
The conjectured duality between
the type IIB superstring theory on the AdS$_5\times$S$^5$ space
(AdS superstring)
and
$D=4,~{\cal N}=4$ Yang-Mills theory
\cite{M,GKP,W}
has driven not only
studies of a variety of background theories
but also studies of basic aspects such as integrability.
The approach of the pp-wave background superstring theory \cite{MTpp}
was explored
by Berenstein, Maldacena and Nastase \cite{BMN} and
developed in, for example
\cite{GKP2,FT}.
For further development, Mandal, Suryanarayan and Wadia
pointed out the relevance of integrability \cite{MSW},
and the Bethe ansatz approach was explored
by Minahan and Zarembo \cite{MZ} and, for example, in \cite{B,DNW,BFST}.
Integrability
is a powerful property expected in large-N QCD \cite{Lipatov}
and shown to exist in
the IIB superstring theory on the AdS$_5\times$S$^5$ space
by Bena, Polchinski and Roiban \cite{BPR}.
The integrability provides hidden symmetry generated by
an infinite number of
conserved ``non-local" charges \cite{LP,BIZZ}
as well as
an infinite number of conserved ``local" charges
\cite{pol2}, which are related to each other
at different values of a spectral parameter.
Related aspects on the integrability of the AdS superstring were
discussed in \cite{new18}.
Recently the conformal symmetry of AdS superstrings
was conjectured due to the $\kappa$ symmetry
\cite{polyakov}.
The classical conformal symmetry of the AdS superstring theory
also leads to an infinite number of conserved Virasoro operators.
The naive questions are
how the conformal generator is related to the infinite number of conserved
``local" currents, and how many independent conserved currents exist.
For principal chiral models the stress-energy tensor
is written by trace of the square of the conserved flat current;
for reviews see refs.
\cite{EHMM,MSW}.
For the AdS superstring theory
the Wess-Zumino term and the $\kappa$ symmetry make a difference.
Recently issues related to
the integrability and the conformal symmetry
of the AdS superstring theory have been discussed
\cite{MP,Mh,AAT}.
In this paper we will obtain the expression of the conformal generator,
which is the stress-energy tensor related to
the lowest spin ``local" current,
and we calculate the higher spin ``local" currents
to clarify their independent components.
The AdS space contains Ramond/Ramond flux, which causes
difficulty for
the standard Neveu-Schwarz-Ramond
(NSR) formulation of the superstring theory.
The AdS superstring was described in the Green-Schwarz (GS) formalism
by Metsaev and Tseytlin based on the coset
PSU(2,2$\mid$4)/[SO(4,1)$\times$SO(5)]
\cite{MT}.
Later Roiban and Siegel reformulated it in terms of the unconstrained
GL(4$\mid$4) supermatrix coordinate based on an alternative
coset GL(4$\mid$4)/[Sp(4)$\times$GL(1)]$^2$ \cite{RS}.
In this formalism the local Lorentz symmetry is gauged,
and it turns out that this treatment makes
the separation into $+/-$ modes (right/left moving modes) easier.
Furthermore the fermionic constraints, including the first class and second class
parts, are necessary for
the separation of the fermionic modes into $+/-$ modes.
As the first step toward the CFT formulation of the AdS superstring,
the affine Sugawara construction \cite{Halpern},
the Virasoro algebra and the algebra of currents carrying the
space-time indices are also listed.
The organization of this paper is as follows:
in the next section the notation is introduced.
In section 3 we analyze the superparticle in the AdS$_5\times$S$^5$ space,
and the relation between
the reparametrization constraint and the conserved right invariant (RI) current
is given.
In section 4 we analyze the superstring in the AdS$_5\times$S$^5$ space,
and the infinite number of conserved currents are presented
both from the conformal point of view and
from the integrability point of view.
We show that the stress-energy tensor
is written as
the ``supertrace" of the square of the RI current,
which is the lowest spin ``local" current.
Then we calculate higher spin ``local" currents
to clarify the independent components of the ``local" currents.
\par\vskip 6mm
\section{ GL(4${\mid}$4) covariant coset}
We review the Roiban-Siegel formulation of the AdS$_5\times$S$^5$ coset
\cite{RS} and follow the notation in \cite{HKAdS}.
The coset GL(4$\mid$4)/[GL(1)$\times$Sp(4)]$^2$ is used instead of
PSU(2,2$\mid$4)/[SO(4,1)$\times$SO(5)] for the linear realization of the
global symmetry after Wick rotations and introducing the auxiliary variables.
A coset element $Z_M{}^A$
is an unconstrained matrix defined on a world-volume
carrying indices $M=(m,\bar{m}),~A=(a,\bar{a})$ with
$m,\bar{m},a,\bar{a}=1,\cdots,4$.
The left invariant (LI) current, $J^L$, is invariant under the left action
$Z_M{}^A~\to~\Lambda_M{}^NZ_N{}^A
$ with
a global parameter GL(4$\mid$4)$\ni \Lambda$
\begin{eqnarray}
(J^L)_A{}^B=(Z^{-1}d Z)_A{}^B~~.
\end{eqnarray}
The LI current satisfies the flatness condition by definition
\begin{eqnarray}
dJ^L=-J^LJ^L~~~.
\end{eqnarray}
The right invariant (RI) current, $J^R$, is invariant under the right action
$Z_M{}^A~\to~Z_M{}^B\lambda_B{}^A
$ with a local parameter
[Sp(4)$\otimes$GL(1)]$^2$ $\ni\lambda$
\begin{eqnarray}
(J^R)_M{}^N=({\cal D}ZZ^{-1})_M{}^N~~,~~({\cal D}Z)_M{}^A\equiv
dZ_M{}^A+Z_M{}^BA_B{}^A
\end{eqnarray}
with
\begin{eqnarray}
A~\to~\lambda A\lambda^{-1}+(d\lambda) \lambda^{-1}~~,
\end{eqnarray}
and
\begin{eqnarray}
dJ^R=J^RJ^R+Z(dA-AA)Z^{-1}~~~.\label{dAAA}
\end{eqnarray}
Originally $A$ is bosonic, $A\in$ [Sp(4)$\otimes$GL(1)]$^2$,
but we will show that
the fermionic constraint, i.e. the $\kappa$ symmetry,
gives rise to fermionic components of $A$.
The conjugate momenta are introduced
\begin{eqnarray}
\{
Z_M{}^A,\Pi_B{}^N
\}=(-)^A\delta_B^A\delta_M^N~~~
\end{eqnarray}
as the graded Poisson bracket and
$\{q,p\}=-(-)^{qp}\{p,q\}$.
There are also two types of differential operators:
the global symmetry generator (left action generator), $G_M{}^N$,
and
the supercovariant derivatives (right action generator), $D_A{}^B$,
\begin{eqnarray}
G_M{}^N=Z_M{}^A\Pi_A{}^N~~,~~D_A{}^B=\Pi_A{}^M Z_M{}^B~~~.\label{DGDG}
\end{eqnarray}
In our coset approach $8\times 8=64$ variables for $Z_M{}^A$ are introduced
and auxiliary variables are eliminated by the following constraints
corresponding to the stability group [Sp(4)$\times $GL(1)]$^2$,
\begin{eqnarray}
({\bf D})_{(ab)}=(\bar{\bf D})_{(\bar{a}\bar{b})}={\rm tr}~{\bf D}=
{\rm tr}~\bar{\bf D}\equiv 0~~~\label{DSp4GL1}~~~,
\end{eqnarray}
where the bosonic components are denoted by boldfaced characters as
${\bf D}_{ab}\equiv D_{ab}$ and $\bar{\bf D}_{\bar{a}\bar{b}}\equiv
D_{\bar{a}\bar{b}}$ of \bref{DGDG}.
The number of the coset constraints is $10+10+1+1=22$,
so the number of the coset parameters is $64-22=42$,
of which $10$ are bosonic and $32$ are fermionic.
The $[Sp(4)]^2$ invariant metric is anti-symmetric,
and a matrix is decomposed into
the trace part, the anti-symmetric-traceless part and the symmetric part,
denoted by
\begin{eqnarray}
{\bf M}_{ab}=-\frac{1}{4}\Omega_{ab}{\bf M}^c{}_c+{\bf M}_{\langle ab\rangle}+{\bf M}_{(ab)}\equiv - \frac{1}{4}\Omega~{\rm tr}{\bf M}
+\langle {\bf M}\rangle+({\bf M})~~~,
\end{eqnarray}
with $M_{(ab)}=\frac{1}{2}(M_{ab}+M_{ba})$,
and similar notation for the barred sector.
Both $G_M{}^N$ and $D_A{}^B$ in \bref{DGDG} satisfy the GL(4$\mid$4) algebra.
If we focus on the AdS superalgebra part,
the global symmetry generators $G_M{}^N$ satisfy the global AdS superalgebra
\begin{eqnarray}
\left\{Q_{A\alpha},Q_{B,\beta}\right\}&=&-2\left[
\tau_3{}_{AB}P_{\alpha\beta} +\epsilon_{AB}
M_{\alpha\beta}\right]\label{QQPM}\\
Q_{1\alpha}&=&G_{m\bar{m}}+G_{\bar{m}m}
\nn\\
Q_{2\alpha}&=&G_{m\bar{m}}-G_{\bar{m}m}
\nn\\
P_{\alpha\beta}
&=&{G}_{\langle mn\rangle}\Omega_{\bar{m}\bar{n}}
-G_{\langle\bar{m}\bar{n}\rangle}\Omega_{mn}\cdots {\rm total~ momentum}\nn\\
M_{\alpha\beta}&=&-G_{(mn)}\Omega_{\bar{m}\bar{n}}
+G_{(\bar{m}\bar{n})}\Omega_{mn}\cdots{\rm total~ Lorentz}\nn~~~.
\end{eqnarray}
The right hand side of \bref{QQPM} cannot be diagonalized
by a real SO(2) rotation of the $Q_A$'s
because of the total Lorentz charge term with $\epsilon_{AB}$.
On the other hand the local AdS supersymmetry algebra is given by
\begin{eqnarray}
\left\{d_{A\alpha},d_{B,\beta}\right\}&=&2\left[
\tau_3{}
_{AB}
\tilde{p}_{\alpha\beta} +
\epsilon_{AB}m_{\alpha\beta}\right]\\
d_{1\alpha}&=&{D}_{a\bar{a}}+{\bar{D}}_{\bar{a}a}
\nn
\\
d_{2\alpha}&=&{D}_{a\bar{a}}-{\bar{D}}_{\bar{a}a}
\nn
\\
\tilde{p}_{\alpha\beta}&=&
{\bf D}
_{\langle ab\rangle}\Omega_{\bar{a}\bar{b}}
-{\bar{\bf D}}
_{\langle \bar{a}\bar{b}\rangle}\Omega_{ab}
\cdots {\rm local~LI~momentum}\nn
\\m_{\alpha\beta}&=&
-{\bf D}_{(ab)}\Omega_{\bar{a}\bar{b}}
+\bar{\bf D}
_{(\bar{a}\bar{b})}\Omega_{a{b}}\cdots{\rm local~Lorentz}\nn
~~~.
\end{eqnarray}
In our coset approach the local Lorentz generator
is a constraint \bref{DSp4GL1},
so the
local supercovariant derivatives $d_{A\alpha}$
can be separated as
\begin{eqnarray}
\left\{d_{1\alpha},d_{2\beta}\right\}=
2 m_{\alpha\beta}\equiv 0~,~~
\left\{d_{1\alpha},d_{1\beta}\right\}=
2 \tilde{p}_{\alpha\beta}~~,~~
\left\{d_{2\alpha},d_{2\beta}\right\}=
-2\tilde{p}_{\alpha\beta}
\end{eqnarray}
Although the global superalgebra cannot be separated into
irreducible algebras in the AdS background,
the local superalgebra can be separated into
irreducible sets in the GL(4${\mid}$4) covariant coset approach.
This property allows a description of the AdS superstring that is as simple
as in the flat case, at least at the level of classical mechanics.
\par\vskip 6mm
\section{ AdS Superparticle}
We begin with the action for a superparticle in the AdS$_5\times$S$^5$
\begin{eqnarray}
&S=\displaystyle\int d\tau~\displaystyle\frac{1}{2e}
\left\{-{\bf J}_\tau^{\langle ab\rangle}{\bf J}_{\tau, \langle ab\rangle}
+\bar{\bf J}_\tau^{\langle \bar{a}\bar{b}\rangle}\bar{\bf J}_{\tau, \langle \bar{a}\bar{b}\rangle}
\right\}&~~~.\label{RS}
\end{eqnarray}
Here we omit the superscript $L$ for the LI currents; their components are denoted as
\begin{eqnarray}
(J^L_{~\mu})_A{}^B
=\left(
\begin{array}{cc}
{\bf J}_{\mu,}{}_a{}^b&j_{\mu,}{}_a{}^{\bar{b}}\\\bar{j}_{\mu,}{}_{\bar{a}}{}^b&\bar{\bf J}_{\mu,}{}_{\bar{a}}{}^{\bar{b}}
\end{array}
\right)~~~.
\end{eqnarray}
From the definition of the canonical conjugates,
$
\Pi_A{}^M={\delta S}/{\delta \partial_\tau Z_M{}^A} (-)^A
$,
we have
the following primary constraints \cite{HKAdS}
\begin{eqnarray}
{\cal A}_{\rm P}=\frac{1}{2}{\rm tr}\left[
\langle{\bf D}\rangle^2-
\langle\bar{\bf D}\rangle^2\right]=0~~,~~
D_{a\bar{b}}=\bar{D}_{\bar{a}{b}}=0~~~\label{ApDD}
\end{eqnarray}
with
\begin{eqnarray}
D_A{}^B=\left(
\begin{array}{cc}{\bf D}_a{}^b&D_a{}^{\bar{b}}\\
\bar{D}_{\bar{a}}{}^b&\bar{\bf D}_{\bar{a}}{}^{\bar{b}}
\end{array}
\right)~~~.
\end{eqnarray}
The Hamiltonian is chosen as
\begin{eqnarray}
{\cal H}=-{\cal A}_{\rm P}=- \frac{1}{2}{\rm tr}\left[
\langle{\bf D}\rangle^2-
\langle\bar{\bf D}\rangle^2\right]
\end{eqnarray}
and the $\tau$-derivative is determined by
the Poisson bracket with ${\cal H}$,
$\partial_\tau{\cal O}=\{{\cal O},{\cal H}\}$.
The fact that half of the fermionic constraints are second class
requires the Dirac bracket in general.
Fortunately the Dirac bracket with the Hamiltonian is equal to
its Poisson bracket because the fermionic
constraints are ${\cal H}$ invariant.
The LI current is calculated as
\begin{eqnarray}
J^L_\tau=Z^{-1}\partial_\tau Z=\left(
\begin{array}{cc}
\langle{\bf D}\rangle&0\\0&\langle \bar{\bf D}\rangle
\end{array}
\right)~~,~~\partial_\tau J^L=0~~~.
\end{eqnarray}
The RI current, generating the global GL(4$\mid$4) symmetry,
is given as
\begin{eqnarray}
J^R_\tau\equiv Z\Pi=Z\left(J^L_\tau+A_\tau\right)Z^{-1}~~,~~
A_\tau=
\left(
\begin{array}{cc}
({\bf D})-\frac{1}{4}\Omega {\rm tr}{\bf D}&D
\\\bar{D}&(\bar{\bf D})-\frac{1}{4}\Omega {\rm tr}\bar{\bf D}
\end{array}
\right)~~~.\label{JRSP}
\end{eqnarray}
Although the stability group does not contain fermionic components originally,
the fermionic components of the gauge connection $A$ in \bref{JRSP} are induced.
It is noted that ``$A$" is the gauge connection,
to be distinguished from
the reparametrization
constraint ``${\cal A}$".
The RI current is conserved, since the Hamiltonian is written
in terms of LI currents, which are manifestly invariant under the global symmetry
\partial_\tau J^R=0~~~.
\end{eqnarray}
The $\kappa$ symmetry generators are half of the fermionic constraints,
obtained by projecting with the null vector as
\begin{eqnarray}
{\cal B}_{\rm P}{}_a{}^{\bar{b}}=\langle{\bf D}\rangle_a{}^b D_b{}^{\bar{b}}
+D_a{}^{\bar{a}}\langle\bar{\bf D}\rangle_{\bar{a}}{}^{\bar{b}}~~,~~
\bar{\cal B}_{\rm P}{}_{\bar{a}}{}^{{b}}=\langle\bar{\bf D}\rangle_{\bar{a}}{}^{\bar{b}}\bar{D}_{\bar{b}}{}^{{b}}
+\bar{D}_{\bar{a}}{}^{{a}}\langle{\bf D}\rangle_{a}{}^{b}~~~.
\end{eqnarray}
If we construct the closed algebra including these $\kappa$ generators
while keeping
the bilinears of the fermionic constraints,
the $\tau$-reparametrization constraint, ${\cal A}_{\rm P}$, is modified to \cite{HKAdS}
\begin{eqnarray}
\tilde{\cal A}_{\rm P}&=&\frac{1}{2}{\rm tr}\left[
\langle{\bf D}\rangle^2-
\langle\bar{\bf D}\rangle^2
+2D\bar{D}
\right]~~~~~.
\end{eqnarray}
This expression appears in the Poisson bracket of
${\cal B}$ with $\bar{\cal B}$,
when we keep the bilinears of the fermionic constraints.
The RR flux is responsible for the last term ``$D\bar{D}$".
A term which is bilinear in the constraints
does not change the Poisson bracket,
since its bracket with an arbitrary variable
gives terms proportional to the constraints,
which are zero on the constrained surface.
In other words ${\cal A}_{\rm P}$ has an ambiguity up to bilinears of the constraints,
and the $\kappa$ invariance fixes it.
On the original coset constrained surface \bref{DSp4GL1}
it is also rewritten as
\begin{eqnarray}
\tilde{\cal A}_{\rm P}=\frac{1}{2}~{\rm Str}~[D_A{}^B]^2=\frac{1}{2}~{\rm Str}~[J^R_\tau]^2~~~.
\end{eqnarray}
This is the zero-mode contribution of the classical Virasoro constraint
for a superstring
in the AdS$_5\times$S$^5$ background.
\par\vskip 6mm
\section{ AdS Superstring}
\subsection{Conserved currents}
We take the action for a superstring in the AdS$_5\times$S$^5$ given by
\begin{eqnarray}
&S=\displaystyle\int d^2\sigma~\frac{1}{2}\left\{
-\sqrt{-g}g^{\mu\nu}({\bf J}_\mu^{\langle ab\rangle}{\bf J}_{\nu, \langle ab\rangle}
-\bar{\bf J}_\mu^{\langle \bar{a}\bar{b}\rangle}\bar{\bf J}_{\nu, \langle \bar{a}\bar{b}\rangle}
)
+\frac{k}{2}\epsilon^{\mu\nu}(
E^{1/2}j_\mu^{a\bar{b}}j_{\nu, a\bar{b}}
-
E^{-1/2}\bar{j}_\mu^{\bar{a}{b}}\bar{j}_{\nu, \bar{a}{b}}
)\right\}&\nn\\\label{RS}
\end{eqnarray}
where ``$k$" represents the WZ term contribution with $k=1$ and
$E={\rm sdet} Z_M{}^A$.
The consistent $\tau$ and $\sigma$ reparametrization generators \cite{HKAdS} are
\begin{eqnarray}
{\cal A}_\perp&=&{\cal A}_{0\perp} +k~{\rm tr}\left[-E^{1/4}Fj_\sigma+E^{-1/4}\bar{F}\bar{j}_\sigma\right]\nn\\
{\cal A}_\parallel&=&{\cal A}_{0\parallel} +k~{\rm tr}\left[E^{-1/4}F\bar{j}_\sigma-E^{1/4}\bar{F}{j}_\sigma\right]\label{Aperp}
\end{eqnarray}
with the following primary constraints
\begin{eqnarray}
{\cal A}_{0\perp}&=&\frac{1}{2}{\rm tr}\left[
(\langle{\bf D}\rangle^2+\langle{\bf J}_\sigma\rangle^2)-
(\langle\bar{\bf D}\rangle^2+\langle\bar{\bf J}_\sigma\rangle^2)
\right]=0\nn\\
{\cal A}_{0\parallel}&=&{\rm tr}\left[
\langle{\bf D}\rangle\langle{\bf J}_\sigma\rangle-
\langle\bar{\bf D}\rangle\langle\bar{\bf J}_\sigma\rangle
\right]=0\\
F_{a\bar{b}}&=&E^{1/4}D_{a\bar{b}}
+\frac{k}{2}E^{-1/4}(\bar{j}_\sigma)_{\bar{b}a}=0\nn\\
\bar{F}_{\bar{a}{b}}&=&E^{-1/4}\bar{D}_{\bar{a}{b}}+\frac{k}{2}E^{1/4}({j}_\sigma)_{{b}\bar{a}}=0~~~.\label{FermionicF}
\end{eqnarray}
Their Poisson brackets are
\begin{eqnarray}
\left\{{\cal A}_\perp(\sigma),{\cal A}_\perp(\sigma')\right\}&=&
2{\cal A}_\parallel(\sigma)\partial_\sigma\delta(\sigma-\sigma')+
\partial_\sigma{\cal A}_\parallel(\sigma)\delta(\sigma-\sigma')\nn\\
\left\{{\cal A}_\perp(\sigma),{\cal A}_\parallel(\sigma')\right\}&=&
2{\cal A}_\perp(\sigma)\partial_\sigma\delta(\sigma-\sigma')+
\partial_\sigma{\cal A}_\perp(\sigma)\delta(\sigma-\sigma')\\
\left\{{\cal A}_\parallel(\sigma),{\cal A}_\parallel(\sigma')\right\}&=&
2{\cal A}_\parallel(\sigma)\partial_\sigma\delta(\sigma-\sigma')+
\partial_\sigma{\cal A}_\parallel(\sigma)\delta(\sigma-\sigma')\nn~~~.
\end{eqnarray}
The Hamiltonian is chosen as
\begin{eqnarray}
{\cal H}&=&-\int d\sigma {\cal A}_\perp\label{HamiltonianSUST}\\&=&
-\int d\sigma {\rm tr}\left[
\frac{1}{2}\left\{
\langle{\bf D}\rangle^2+\langle{\bf J}_\sigma\rangle^2-
\langle\bar{\bf D}\rangle^2-\langle\bar{\bf J}_\sigma\rangle^2\right\}
+\left(kE^{-1/2}\bar{D}\bar{j}_\sigma-k E^{1/2}Dj_\sigma
+j_\sigma\bar{j}_\sigma\right)
\right]
~~~.\nn
\end{eqnarray}
From now on the $E=1$ gauge is taken using the local GL(1) invariance.
The global GL(1) symmetry is broken by the WZ term.
Using the Hamiltonian in \bref{HamiltonianSUST},
the $\tau$-derivatives of ${\cal A}_{\perp}$ and ${\cal A}_\parallel$ are given as
\begin{eqnarray}
\partial_\tau {\cal A}_\perp=\partial_\sigma {\cal A}_\parallel~~,~~
\partial_\tau {\cal A}_\parallel=\partial_\sigma {\cal A}_\perp~~~.\label{dAdA}
\end{eqnarray}
Although the coset parameter $Z_M{}^A$ does not satisfy
the world-sheet free wave equation, it is essential to introduce
the world-sheet lightcone
coordinates
\begin{eqnarray}
\sigma^\pm=\tau \pm \sigma~~,~~
\partial_\pm=\frac{1}{2}(\partial_\tau \pm \partial_\sigma)~~~.
\end{eqnarray}
The differential equations \bref{dAdA} are rewritten as
\begin{eqnarray}
\partial_- {\cal A}_+=0~~,~~
\partial_+ {\cal A}_-=0~~,~~
{\cal A}_\pm={\cal A}_\perp\pm {\cal A}_\parallel~~~,
\end{eqnarray}
so there is an infinite number of conserved
currents:
\begin{eqnarray}
\partial_- \left[f(\sigma^+){\cal A}_+\right]=0~~,~~
\partial_+ \left[f(\sigma^-){\cal A}_-\right]=0~~\label{conformal}
\end{eqnarray}
with an arbitrary function $f$.
Then there exists an infinite number of conserved charges
\begin{eqnarray}
\partial_- \left[\displaystyle\int d\sigma~
f(\sigma^+){\cal A}_+\right]=0~~,~~
\partial_+ \left[\displaystyle\int d\sigma~
f(\sigma^-){\cal A}_-\right]=0~~~.
\end{eqnarray}
On the other hand the integrability of the superstring
provides an infinite number of ``local" charges as well as the
``non-local" charges written down in \cite{HY}.
The LI currents
are given by
\begin{eqnarray}
\left\{\begin{array}{ccl}
J^L_\tau&=&\left(
\begin{array}{cc}
\langle{\bf D}\rangle&-k\bar{j}_\sigma\\-kj_\sigma&\langle \bar{\bf D}\rangle
\end{array}
\right)
=\left(
\begin{array}{cc}
\langle{\bf D}\rangle&
2D-2F\\2\bar{D}-2\bar{F}&\langle \bar{\bf D}\rangle
\end{array}
\right)
\approx
\left(
\begin{array}{cc}
\langle{\bf D}\rangle&
2D\\2\bar{D}&\langle \bar{\bf D}\rangle
\end{array}
\right)\nn\\
J^L_\sigma&=&
\left(
\begin{array}{cc}
{\bf J}_{\sigma}
&j_{\sigma}\\
\bar{j}_{\sigma}&
\bar{\bf J}_{\sigma}
\end{array}
\right)
\end{array}\right.~~~
\end{eqnarray}
where the $\tau$ component is determined by \bref{HamiltonianSUST}.
The LI currents satisfy the flatness condition but
do not satisfy the conservation law.
The RI currents are obtained in \cite{HY} as
\begin{eqnarray}
\left\{\begin{array}{ccl}
J^R_\tau&=&ZDZ^{-1}=Z(J^L_\tau+A_\tau)Z^{-1}\label{RISUST}\\
J^R_\sigma&=&Z(J^L_\sigma+A_\sigma)Z^{-1}~~,~~
J^L_\sigma+A_\sigma=
\left(
\begin{array}{cc}
\langle{\bf J}_{\sigma}\rangle
&\bar{F}+\frac{1}{2}j_{\sigma}\\
F+\frac{1}{2}\bar{j}_{\sigma}&
\langle\bar{\bf J}_{\sigma}\rangle
\end{array}
\right)
\end{array}\right.
\end{eqnarray}
where the gauge connection $A_\mu$ is
\begin{eqnarray}
\left\{\begin{array}{ccl}
A_\tau&=&\left(
\begin{array}{cc}
({\bf D})-\frac{1}{4}\Omega {\rm tr}{\bf D}&-D\\-\bar{D}&(\bar{\bf D})-\frac{1}{4}\Omega {\rm tr}\bar{\bf D}
\end{array}
\right)\nn\\
A_\sigma&=&
\left(
\begin{array}{cc}
-({\bf J}_\sigma)+\frac{1}{4}\Omega {\rm tr}{\bf J}_\sigma&
\bar{F}-\frac{1}{2}j_\sigma
\\F-\frac{1}{2}\bar{j}_\sigma
&-(\bar{\bf J}_\sigma)+\frac{1}{4}\Omega {\rm tr}\bar{\bf J}_\sigma
\end{array}
\right)
\end{array}\right.~~~.
\end{eqnarray}
The fermionic components of $A_\mu$ appear again.
In this paper
the fermionic constraints, $F$ and $\bar{F}$,
in
the fermionic components of $A_\sigma$ are kept,
while they were absent in our previous paper \cite{HY},
depending on the treatment of the constraint bilinear terms.
Then the integrability of the superstring
leads to the current conservation and the flatness condition
for the RI current;
\begin{eqnarray}
\partial_\tau J^R_\tau=\partial_\sigma J^R_\sigma~~,~~
\partial_\tau J^R_\sigma-\partial_\sigma J^R_\tau=
2\left[J^R_\tau, J^R_\sigma\right]~~~\label{dJRdJR}~~~.
\end{eqnarray}
They are rewritten as
\begin{eqnarray}
\partial_-J^R_+=\left[J^R_-,J^R_+\right]~~,~~
\partial_+J^R_-=\left[J^R_+,J^R_-\right]~~,~~
J^R_\pm=J^R_\tau\pm J^R_\sigma~~~.\label{JRpm}
\end{eqnarray}
Taking the supertrace, denoted ``Str", leads to
an infinite number of conserved ``local"
currents, because the $J^R_\mu$ are supermatrices,
\begin{eqnarray}
\partial_-~{\rm Str}\left[(J^R_+)^n\right]=0~~,~~
\partial_+~{\rm Str}\left[(J^R_-)^n\right]=0~~,n=1,2,\cdots~~~.\label{JRn}
\label{integrablity}
\end{eqnarray}
This gives an infinite number of conserved ``local" charges
\begin{eqnarray}
\partial_\tau \left[\displaystyle\int d\sigma~
f(\sigma^+){\rm Str}(J^R_+)^n
\right]=0~~,~~
\partial_\tau \left[\displaystyle\int d\sigma~
f(\sigma^-){\rm Str}(J^R_-)^n
\right]=0~~~.\end{eqnarray}
In this way the classical 2-dimensional conformal symmetry and
the integrability of the AdS superstring lead to two infinite sets of
conserved currents,
\bref{conformal} and \bref{integrablity}.
In the next sections the relation between them is examined.
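The algebraic core of \bref{JRn} is that $\partial_-{\rm Str}[(J^R_+)^n]=n~{\rm Str}[(J^R_+)^{n-1}[J^R_-,J^R_+]]$ by \bref{JRpm}, which vanishes by the cyclicity of the supertrace. The following minimal numerical sketch (Python) checks this; it is restricted to the bosonic, block-diagonal part of the supermatrices, since Grassmann-odd entries cannot be modeled by ordinary complex numbers, and the random matrices are purely illustrative:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
m = 4

def supertrace(M):
    # Str M = tr A - tr D for the block-diagonal part of a supermatrix
    return np.trace(M[:m, :m]) - np.trace(M[m:, m:])

def random_even():
    # random bosonic supermatrix: off-diagonal (Grassmann) blocks set to zero
    M = np.zeros((2 * m, 2 * m), dtype=complex)
    M[:m, :m] = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
    M[m:, m:] = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
    return M

Jp, Jm = random_even(), random_even()
for n in range(2, 6):
    comm = Jm @ Jp - Jp @ Jm
    val = supertrace(np.linalg.matrix_power(Jp, n - 1) @ comm)
    assert abs(val) < 1e-8   # vanishes by cyclicity of Str
\end{verbatim}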
\subsection{Stress-energy tensor ($n=2$)}
The ``$+/-$" (right/left moving) modes of the RI currents
on the original coset constrained space
\bref{DSp4GL1}
are written as
\begin{eqnarray}
J^R_\pm=
Z\left(
\begin{array}{cc}
\langle{\bf D}_\pm \rangle&D\pm (\bar{F}+\frac{1}{2}j_\sigma)
\\
\bar{D}\pm(F+\frac{1}{2}\bar{j}_\sigma)&\langle\bar{\bf D}_\pm\rangle
\end{array}
\right)Z^{-1}=
Z\left(
\begin{array}{cc}
\langle{\bf D}_\pm\rangle&d_\pm+\frac{1}{2}j_\pm
\\
\pm(d_\pm -\frac{1}{2}j_\pm)
&\langle\bar{\bf D}_\pm\rangle
\end{array}
\right)Z^{-1}\nn\\
\end{eqnarray}
with
\begin{eqnarray}
{\bf D}_\pm={\bf D}\pm{\bf J}_\sigma~~,~~
\bar{\bf D}_\pm=\bar{\bf D}\pm\bar{\bf J}_\sigma~~,~~d_\pm=F\pm\bar{F}~~,~~
j_\pm=j_\tau\pm j_\sigma=-\bar{j}_\sigma\pm j_\sigma
\label{pmpm}
\end{eqnarray}
carrying the LI current indices, $AB$.
This is supertraceless, Str$J^R_\pm=0$, so the $n=1$ case of
\bref{JRn} gives just a trivial equation.
Let us look at the $n=2$ case of \bref{JRn},
${\rm Str}\left[(J^R_\pm)^2\right]$.
Then the ``+" sector is written as
\begin{eqnarray}
\frac{1}{2}{\rm Str}\left[(J^R_+)^2\right]&=&
\frac{1}{2}{\rm Str}\left[
\left(
\begin{array}{cc}
\langle{\bf D}_+\rangle&d_++\frac{1}{2}j_+
\\
d_+-\frac{1}{2}j_+
&\langle\bar{\bf D}_+\rangle
\end{array}
\right)^2
\right]\nn\\
&=&\frac{1}{2}{\rm tr}\left[
\langle{\bf D}_+\rangle^2-\langle\bar{\bf D}_+\rangle^2
+2(d_++\frac{1}{2}j_+
)(d_+-\frac{1}{2}j_+
)
\right]\nn\\
&=&{\rm tr}\left[\frac{1}{2}\left(
\langle{\bf D}_+\rangle^2-\langle\bar{\bf D}_+\rangle^2\right)
+j_+d_+\right]~~~.\label{419}
\end{eqnarray}
The ``$-$" sector is
\begin{eqnarray}
\frac{1}{2}{\rm Str}\left[(J^R_-)^2\right]&=&
{\rm tr}\left[\frac{1}{2}\left(
\langle{\bf D}_-\rangle^2-\langle\bar{\bf D}_-\rangle^2\right)
-j_-d_-
\right]~~~.\label{420}
\end{eqnarray}
On the other hand the conformal symmetry generator ${\cal A}_\pm$ is
rewritten from the relation \bref{Aperp} and \bref{pmpm} as
\begin{eqnarray}
{\cal A}_\pm&=&
{\rm tr}
\left[\frac{1}{2}\left(
\langle{\bf D}_\pm \rangle^2-\langle\bar{\bf D}_\pm \rangle^2\right)
\pm j_\pm d_\pm
\right]~=~
\frac{1}{2}{\rm Str}\left[(J^R_\pm)^2\right]
~~~.\label{421}
\end{eqnarray}
If we take care of the square of the fermionic constraints,
the closure of the first class constraint set
including the $\kappa$ symmetry
generators,
\begin{eqnarray}
{\cal B}_\pm&=&
\langle{\bf D}_\pm\rangle d_\pm
+d_\pm\langle\bar{\bf D}_\pm\rangle\label{kappaSUST}
~~
\end{eqnarray}
determines the ambiguity of bilinears of the constraints as
\begin{eqnarray}
\tilde{\cal A}_\pm=
{\rm tr}\left[\frac{1}{2}\left(
\langle{\bf D}_\pm \rangle^2-\langle\bar{\bf D}_\pm \rangle^2\right)
\pm(\frac{1}{2}d_\mp +j_\pm)d_\pm
\right]=
{\cal A}_\pm+{\rm tr}F\bar{F}~~
\end{eqnarray}
obtained in \cite{HKAdS} as a generator of the ${\cal ABCD}$
constraint set
known to exist for a superstring in a flat space
\cite{WSmech,ABCD}.
Then the stress-energy tensor is
\begin{eqnarray}
T_{\pm\pm}\equiv\tilde{\cal A}_\pm
\approx
{\cal A}_\pm
={\rm Str}J^R_\pm J^R_\pm
~~~.\label{423}
\end{eqnarray}
This is the $\kappa$-symmetric stress-energy tensor,
a supersymmetric generalization of the
Sugawara form.
\subsection{Supercovariant derivative algebra}
The existence of the conformal invariance suggests the following
irreducible coset components of the supercovariant derivatives
\cite{HKAdS}:
\begin{eqnarray}
\langle{\bf D}_\pm\rangle&=&\langle{\bf D}\rangle\pm\langle{\bf J}_\sigma\rangle
~~,~~
\langle\bar{\bf D}_\pm\rangle~=~\langle\bar{\bf D}\rangle\pm\langle\bar{\bf J}_\sigma\rangle\nn\\
d_\pm&=&F\pm\bar{F}=(D\pm\frac{1}{2}j_\sigma)\pm(\bar{D}\pm\frac{1}{2}\bar{j}_\sigma)
\nn~~~.
\end{eqnarray}
On the constraint surface \bref{DSp4GL1} and \bref{FermionicF}
the $+/-$ sector supercovariant derivatives are separated as
\begin{eqnarray}
\left\{\langle{\bf D}_+\rangle_{ab}(\sigma),
\langle{\bf D}_-\rangle_{cd}(\sigma')\right\}
&=&2\Omega_{\langle c|\langle b}({\bf D})_{a\rangle|d\rangle}
\delta(\sigma-\sigma')
\equiv 0\nn\\
\left\{\langle{\bf D}_+\rangle_{ab}(\sigma),
d_{-,c\bar{d}}(\sigma')\right\}
&=&\Omega_{c\langle b}d_{+,a\rangle \bar{d}}
\delta(\sigma-\sigma')=
\Omega_{c\langle b}(F+\bar{F})_{a\rangle \bar{d}}
\delta(\sigma-\sigma')
\approx 0\nn\\
\left\{d_{+,a\bar{b}}(\sigma),
d_{-,c\bar{d}}(\sigma')\right\}
&=&2\left[
\Omega_{ac}(\bar{\bf D})_{\bar{b}\bar{d}}
+\Omega_{\bar{b}\bar{d}}({\bf D})_{ac}
\right]
\delta(\sigma-\sigma')
\equiv 0\nn
\end{eqnarray}
with
analogous relation for the barred sector, $\langle\bar{\bf D}_\pm\rangle$.
The ``+" sector supercovariant derivative algebra is
\begin{eqnarray}
\left\{\langle{\bf D}_+\rangle_{ab}(\sigma),
\langle{\bf D}_+\rangle_{cd}(\sigma')\right\}
&=&2\Omega_{\langle c|\langle b}\Omega_{a\rangle|d\rangle}
\delta'(\sigma-\sigma')+4\Omega_{\langle c|\langle b}
({\bf J}_\sigma)_{a\rangle|d\rangle}
\delta(\sigma-\sigma')\nn\\
&\equiv& 2\Omega_{\langle c|\langle b} \nabla_{a\rangle|d\rangle}
\delta(\sigma-\sigma')
\nn\\
\left\{d_{+,a\bar{b}}(\sigma),
d_{+,c\bar{d}}(\sigma')\right\}
&=&2\left[
\Omega_{\bar{b}\bar{d}}\langle{\bf D}_+\rangle_{ac}
-\Omega_{ac}\langle\bar{\bf D}_+\rangle_{\bar{b}\bar{d}}
\right]
\delta(\sigma-\sigma')
\nn\\
\left\{\langle{\bf D}_+\rangle_{ab}(\sigma),
d_{+,c\bar{d}}(\sigma')\right\}
&=&
\Omega_{c\langle b}(d_-+2j_+)_{a\rangle \bar{d}}
\delta(\sigma-\sigma')\approx
2\Omega_{c\langle b}\omega_{+,a\rangle \bar{d}}
\delta(\sigma-\sigma')
\nn\\
\left\{d_{+,a\bar{b}}(\sigma),
\omega_{+,c\bar{d}}(\sigma')\right\}
&=&-2\Omega_{\bar{b}\bar{d}}\Omega_{ac}\delta'(\sigma-\sigma')
+2\left[
-\Omega_{\bar{b}\bar{d}}({\bf J}_\sigma)_{ac}
-\Omega_{ac}(\bar{\bf J}_\sigma)_{\bar{b}\bar{d}}\right]\delta(\sigma-\sigma')\nn\\
&\equiv& -2\nabla_{\bar{b}\bar{d};ac}\delta(\sigma-\sigma')\nn\\
\label{sucovder}\\
\left\{\langle{\bf D}_+\rangle_{ab}(\sigma),
\omega_{+,c\bar{d}}(\sigma')\right\}
&=&
\Omega_{c\langle b}\omega_{-,a\rangle \bar{d}}
\delta(\sigma-\sigma')
\nn\\
\left\{\omega_{+,a\bar{b}}(\sigma),
\omega_{+,c\bar{d}}(\sigma')\right\}
&=&0~~~\nn
\end{eqnarray}
where
\begin{eqnarray}
\omega_\pm&=&j_\pm=-\bar{j}_\sigma\pm j_\sigma~~~.
\end{eqnarray}
This is comparable with the flat case where
the non-local term, $\partial_\sigma\delta(\sigma-\sigma')$, is replaced by the
local Lorentz (~[Sp(4)]$^2$~) covariant non-local term,
$\nabla_\sigma\delta(\sigma-\sigma')$.
For the fifth Poisson bracket,
$\left\{\langle{\bf D}_+\rangle,
\omega\right\}
$,
it vanishes in the flat case but not in the AdS case.
For a superstring in a flat space
the consistency of the $\kappa$ symmetry constraint
requires
the first class constraint set,
namely the ``${\cal ABCD}$" constraints,
which are bilinears of the supercovariant derivatives
\cite{WSmech,ABCD}.
For the AdS case the situation is completely the same,
despite this anomalous term \cite{HKAdS}.
\par\vskip 6mm
\subsection{``Local" currents ($n\geq 3$)}
Next let us look at the $n\geq 3$ cases
of
the infinite number of conserved ``local" currents \bref{JRn}.
For simplicity we focus on the ``+" sector and replace
$``+"$ by $``~\hat{~}~"$, as
$J_+ \to \hat{J}$.
The first three powers of the RI current,
$(J^R)^n$ with $n=1,2,3$, are listed as below:
\begin{eqnarray}
\left[Z^{-1}\hat{J}^R Z\right]_{AB}&=&\left(
\begin{array}{cc}
\langle\hat{\bf D}\rangle_{\langle ab\rangle}&
(\hat{d}+\frac{1}{2}\hat{j})_{a\bar{b}}
\\
\pm(\hat{d} -\frac{1}{2}\hat{j})_{b\bar{a}}
&\langle\hat{\bar{\bf D}}\rangle_{\bar{a}\bar{b}}
\end{array}
\right)
\end{eqnarray}
\begin{eqnarray}
\left[Z^{-1}(\hat{J}^R)^2 Z\right]_{AB}&=&-\frac{1}{4}
\left(
\begin{array}{cc}
\Omega_{ab}~{\rm tr}(
\langle\hat{\bf D}\rangle^2
+\hat{j}\hat{d})
&\\&
\Omega_{\bar{a}\bar{b}}~{\rm tr}(
\langle\hat{\bar{\bf D}}\rangle^2
-\hat{j}\hat{d})
\end{array}
\right)\\
&&+
\left(
\begin{array}{cc}
(\hat{d}^2-\frac{1}{4}\hat{j}^2)_{(ab)}
+\langle \hat{j}\hat{d}\rangle_{\langle ab \rangle}
&\hat{\cal B}_{a\bar{b}}
+\frac{1}{2}(\langle \hat{\bf D}\rangle \hat{j}
+\hat{j}\langle \hat{\bar{\bf D}}\rangle
)_{a\bar{b}}
\\
\hat{\cal B}_{b\bar{a}}
-\frac{1}{2}(\langle \hat{\bf D}\rangle \hat{j}
+\hat{j}\langle \hat{\bar{\bf D}}\rangle
)_{b\bar{a}}
&(\hat{d}^2-\frac{1}{4}\hat{j}^2)_{(\bar{a}\bar{b})}
-\langle\hat{j}\hat{d}\rangle_{\langle \bar{a}\bar{b} \rangle}
\end{array}
\right)\nn
\end{eqnarray}
\begin{eqnarray}
&&\left[Z^{-1}(\hat{J}^R)^3 Z\right]_{AB}~=~
\frac{1}{4}
\left(
\begin{array}{cc}
\Omega_{ab}~{\rm tr}\left[\hat{\cal B}\hat{j}-
(\langle\hat{\bf D}\rangle \hat{d} )\hat{j}
\right]
&\\&
-\Omega_{\bar{a}\bar{b}}~{\rm tr}\left[\hat{\cal B}\hat{j}
-(\hat{d}\langle\hat{\bar{\bf D}}\rangle)\hat{j}
\right]
\end{array}
\right)\\
&&-
\left(
\begin{array}{cc}
\left[\frac{1}{4}{\rm tr}(\langle\hat{\bf D}\rangle^2+\hat{j}\hat{d})~
\langle\hat{\bf D}\rangle
-\langle\hat{\bf D}\rangle (\hat{j} \hat{d})
+\hat{\cal B}\hat{j}
\right]_{\langle ab\rangle}
&
\frac{1}{4}{\rm tr}(\langle\hat{\bf D}\rangle^2
-\langle\hat{\bar{\bf D}}\rangle^2)~(\hat{d}+\frac{1}{2}\hat{j})_{a\bar{b}}
\\
\frac{1}{4}{\rm tr}(\langle\hat{\bf D}\rangle^2
-\langle\hat{\bar{\bf D}}\rangle^2)~(\hat{d}-\frac{1}{2}\hat{j})_{b\bar{a}}
&
\left[\frac{1}{4}{\rm tr}(\langle\hat{\bar{\bf D}}\rangle^2
+\hat{j}\hat{d})~
\langle\hat{\bar{\bf D}}\rangle
-(\hat{j} \hat{d})
\langle\hat{\bar{\bf D}}\rangle -\hat{\cal B}\hat{j}
\right]_{\langle \bar{a}\bar{b}\rangle}
\end{array}
\right)\nn\\
&&+
\left(
\begin{array}{cc}
\left[
2(\hat{d}^2-\frac{1}{4}\hat{j}^2)\langle\hat{\bf D}\rangle
+\hat{d}\langle\hat{\bar{\bf D}}\rangle\hat{d}
-\frac{1}{4}\hat{j}\langle\hat{\bar{\bf D}}\rangle \hat{j}
\right]_{(ab)}&
-\frac{1}{4}{\rm tr}(\hat{j}\hat{d})
(\hat{d}+\frac{1}{2}\hat{j})_{a\bar{b}}
+
\left[\langle\hat{{\bf D}}\rangle
(\hat{d}+\frac{1}{2}\hat{j})\langle\hat{\bar{\bf D}}\rangle
\right]_{a\bar{b}}
\\
\frac{1}{4}{\rm tr}(\hat{j}\hat{d})
(\hat{d}-\frac{1}{2}\hat{j})_{b\bar{a}}
+
\left[\langle
\hat{{\bf D}}\rangle
(\hat{d}-\frac{1}{2}\hat{j})\langle\hat{\bar{\bf D}}\rangle
\right]_{b\bar{a}}
&
\left[
2(\hat{d}^2-\frac{1}{4}\hat{j}^2)\langle\hat{\bar{\bf D}}\rangle
+\hat{d}\langle\hat{{\bf D}}\rangle\hat{d}
-\frac{1}{4}\hat{j}\langle\hat{{\bf D}}\rangle \hat{j}
\right]_{(\bar{a}\bar{b})}
\end{array}
\right)\nn\\
&&+
\left(
\begin{array}{cc}
&
\left[\left\{
(\hat{d}^2-\frac{1}{4}\hat{j}^2)
+\langle\hat{j}\hat{d}
\rangle
\right\}(\hat{d}+\frac{1}{2}\hat{j})
\right]_{a\bar{b}}
\\
\left[\left\{
(\hat{d}^2-\frac{1}{4}\hat{j}^2)
-\langle\hat{j}\hat{d}
\rangle
\right\}(\hat{d}-\frac{1}{2}\hat{j})
\right]_{\bar{a}b}
&
\end{array}
\right)\nn
\end{eqnarray}
In this computation
5-dimensional $\gamma$-matrix relations are used,
for example
${\bf V}^{\langle ab\rangle}{\bf U}_{\langle bc\rangle}
+{\bf U}^{\langle ab\rangle}{\bf V}_{\langle bc\rangle}
=\frac{1}{2}\delta^a_c~{\rm tr}{\bf V}{\bf U}$ for bosonic
vectors ${\bf V},~{\bf U}$.
The conserved ``local" current with $n=3$ becomes
\begin{eqnarray}
{\rm Str}
(\hat{J}^R)^3 &=&
{\rm tr}\left[2\hat{\cal B}\hat{j}
-(\langle\hat{\bf D}\rangle \hat{d} )\hat{j}
-(\hat{d}\langle\hat{\bar{\bf D}}\rangle)\hat{j}
\right]
=~{\rm tr}
~(\hat{\cal B}\hat{j}) \label{430}
\end{eqnarray}
where $\hat{\cal B}$ is the $\kappa$ generating constraint
\bref{kappaSUST}.
The conserved ``local" current with $n=4$ becomes
\begin{eqnarray}
{\rm Str}
(\hat{J}^R)^4 &=&
-\frac{1}{2}
{\rm tr}\left(
\langle\hat{\bf D}\rangle^2+
\langle\hat{\bar{\bf D}}\rangle^2
\right)
\hat{\cal A}+\left(~\cdots~\right){\rm tr}
~(\hat{\cal B}\hat{j})~~.
\end{eqnarray}
The conserved ``local" currents with $n=5,6$ are given as:
Str$(\hat{J}^R)^5=$(
$\hat{\cal B}$ dependent terms),
Str$(\hat{J}^R)^6=$(
$\hat{\cal A}$ and $\hat{\cal B}$ dependent terms).
In general for even $n=2m$ its bosonic part is given as
\begin{eqnarray}
{\rm Str}
(\hat{J}^R)^{2m} \mid_{\rm bosonic}&=&
\left({\rm tr}\langle\hat{\bf D}\rangle^2\right)^m-
\left({\rm tr}\langle\hat{\bar{\bf D}}\rangle^2\right)^m\nn\\
&=&{\rm tr}\left(\langle\hat{\bf D}\rangle^2-\langle\hat{\bar{\bf D}}\rangle^2
\right)~\left\{\left(
{\rm tr}\langle\hat{\bf D}\rangle^2\right)^{m-1}
+\cdots +
\left({\rm tr}\langle\hat{\bar{\bf D}}\rangle^2\right)^{m-1}
\right\}\nn\\
&\Rightarrow& (\cdots)\hat{\cal A}+\left(~\cdots~\right){\rm tr}
~(\hat{\cal B}\hat{j})
\end{eqnarray}
where the last equality is guaranteed by the $\kappa$ invariance.
It was also pointed out in \cite{BPR} that
the conserved supertraces of multilinears in the currents factorize into traces of a lower number of currents, and that for an even number of currents one of the factors
is the stress tensor.
For odd $n=2m+1$ its bosonic part is given as
\begin{eqnarray}
{\rm Str}
(\hat{J}^R)^{2m+1} \mid_{\rm bosonic}~=~0
~\Rightarrow~\left(~\cdots~\right){\rm tr}
~(\hat{\cal B}\hat{j})
\end{eqnarray}
where the possible fermionic variable dependence is
a term proportional to $\hat{\cal B}$
guaranteed by the $\kappa$ invariance.
In this way, after taking the supertrace, the even
$n$-th power of $J^R$
reduces to terms proportional to ${\cal A}$ and ${\cal B}$,
and the odd $n$-th power of $J^R$
reduces to a term proportional to ${\cal B}$ only.
In this paper
the ${\cal CD}$ constraints in the ${\cal ABCD}$ first class constraint set
are not introduced, for simplicity of the argument,
and are set to zero because they are bilinears of constraints.
\par\vskip 6mm
\section{ Conclusion and discussions}
We obtained
the expression of
the conserved ``local" currents
derived from the integrability of a superstring
in the AdS$_5\times$S$^5$ background.
The infinite number of conserved ``local" currents
are written as the supertrace of the $n$-th power of the RI currents.
The lowest nontrivial case, $n=2$, is nothing but the stress-energy tensor,
which is also the Virasoro constraint,
Str$(J^R_\pm)^2$ in \bref{419} and \bref{420}.
For even $n$ the ``local" current reduces to terms proportional to the
Virasoro constraint and the $\kappa$ symmetry constraint.
For odd $n$ it reduces to a term proportional to the
$\kappa$ symmetry constraint.
In other words the integrability
reduces to
the ${\cal AB}({\cal CD})$ first class constraint set,
where ${\cal A}$ is the Virasoro generator
and ${\cal B}$ is the $\kappa$ symmetry generator.
The ${\cal ABCD}$ first class constraint set
is the local symmetry generator of superstrings both on the flat space
and on the AdS space.
It is natural that the physical degrees of freedom of
a superstring are locally the same,
independently of whether the background is flat or AdS.
It seems that the combination ${\cal B}_\pm j_\pm$
in \bref{430}
plays the role of the world-sheet supersymmetry operator,
in the sense of the grading of the conformal generator.
However it is not
straightforward
to construct the worldsheet supersymmetry operator.
As in the flat case where the lightcone gauge makes
the relation between
the GS fermion and the NSR fermion more transparent,
the $\kappa$ gauge fixing will be a clue to
make a connection to the world-sheet supersymmetry.
We leave this problem in addition to the quantization
problem for future investigations.
\par\vskip 6mm
\noindent{\bf Acknowledgments}
The author thanks K. Kamimura, S. Mizoguchi and K. Yoshida for fruitful discussions.
\par\vskip 6mm
| {'timestamp': '2005-10-24T02:52:13', 'yymm': '0507', 'arxiv_id': 'hep-th/0507047', 'language': 'en', 'url': 'https://arxiv.org/abs/hep-th/0507047'} |
\section{Introduction}
Two problems that have attracted much attention in the quantum
information community are the MUB and SIC problems for Hilbert spaces of
finite dimension $N$. In the MUB problem \cite{Ivanovic, Wootters1}
one looks for $N+1$ orthonormal bases that are mutually unbiased, in the sense
that
\begin{equation} |\langle e_m|f_n\rangle|^2 = \frac{1}{N} \ , \hspace{8mm}
0 \leq m,n \leq N-1 \ , \end{equation}
\noindent whenever the vector $|e_m\rangle$ belongs to one basis and the
vector $|f_n\rangle$ to another. In the SIC problem \cite{Zauner, Renes}
one looks for a symmetric and informationally complete POVM, which translates
to the problem of finding $N^2$ unit vectors $|\psi_i\rangle$ such that
\begin{equation} |\langle \psi_i|\psi_j\rangle |^2 = \frac{1}{N+1}
\ , \hspace{8mm} 0 \leq i,j \leq N^2 - 1 \ , \end{equation}
\noindent whenever $i \neq j$. These problems are hard. For the MUB problem
an elegant solution exists whenever $N$ is a power of a prime \cite{Wootters2}.
For the SIC problem quite ad hoc looking analytic solutions are known for
eighteen different dimensions; these are described (and in some cases derived)
by Scott and Grassl, who also give full references to the earlier literature
\cite{Grassl}. The belief in the community is that a complete set of $N+1$
MUB does not exist for general $N$, while the SICs do.
Since the problems are so easy to state, it is not surprising that they have
been posed independently in many different branches of science. One purpose
of this article is to describe what nineteenth century geometers had to say
about them. A story told by Eddington \cite{Eddington}
is relevant here:
\
\noindent {\small Some years ago I worked out the structure of this group of operators
in connection with Dirac's theory of the electron. I afterwards learned that
a great deal of what I had written was to be found in a treatise on Kummer's
quartic surface. There happens to be a model of Kummer's quartic surface in
my lecture-room, at which I had sometimes glanced with curiosity, wondering
what it was all about. The last thing that entered my head was that I had
written (somewhat belatedly) a paper on its structure. Perhaps the author
of the treatise would have been equally surprised to learn that he was
dealing with the behaviour of an electron.}
\
\noindent We will see what Eddington saw as we proceed. Meanwhile, let us
observe that when $N$ is a prime
the MUB are the eigenbases of the $N+1$ cyclic subgroups of the Heisenberg
group, while there is a conjecture (enjoying very considerable numerical
support \cite{Grassl}) that the SICs can always be chosen to be special orbits of
this group. When $N$ is a power of a prime the solution of the MUB problem
shifts a little, since the MUBs now consist of eigenvectors of the cyclic subgroups
of the Heisenberg group defined over a finite field rather than over the
ring of integers modulo $N$. Concerning SICs that are orbits under the
Heisenberg group there is a link to the MUB problem: If the dimension
$N$ is a prime the SIC Bloch vectors, when projected onto any one of the
MUB eigenvalue simplices, have the same length for all the
$N+1$ MUB \cite{Khat, ADF}.
In mathematics elliptic curves provide the natural home for the Heisenberg
group, so it seems natural to investigate if elliptic curves can be used
to illuminate the MUB and SIC problems. In dimensions 3 \cite{Hughston} and
4 they certainly can, as we will see, but in higher dimensions I am not so
sure. There will be some comments and formulas that I could not find in
the books and papers I studied, but keeping
Eddington's example in mind I do not claim originality for them.
\section{Two pieces of background information}
We had better define the Heisenberg group properly. A defining non-unitary
representation is given by the upper triangular matrices
\begin{equation} g(\gamma, \alpha, \beta) =
\left( \begin{array}{ccc} 1 & \alpha & \gamma \\ 0 & 1 & \beta \\
0 & 0 & 1 \end{array} \right) \ . \end{equation}
\noindent Here the matrix elements belong to some ring. In the original
Weyl-Heisenberg group \cite{Weyl} they are real numbers, but here we are
more interested in the case that they belong to the ring of integers
modulo $N$. We denote the resulting group by $H(N)$. It is generated
by two elements $X$ and $Z$ obeying
\begin{equation} ZX = qXZ \ , \hspace{8mm} X^N = Z^N = {\bf 1} \ ,
\hspace{8mm} q = e^{\frac{2\pi i}{N}} \ . \end{equation}
\noindent For $N = 2$ we can use the Pauli
matrices to set $X = \sigma_X$, $Z = \sigma_Z$, which makes it possible
to remember the notation. We will consider the group projectively, so for
our purposes it can often be regarded as a group of order $N^2$.
Because $q$ is a primitive $N$th root of unity the unitary representation
in which $Z$ is diagonal is unique up to permutations \cite{Weyl}.
It is known as the clock and shift representation. If the components of
any vector are denoted $x_a$ the action is given by
\begin{equation} \begin{array}{lll} X: & & x_0 \rightarrow x_{N-1} \rightarrow
x_{N-2} \rightarrow \dots \rightarrow x_1 \rightarrow x_0 \\
\ \label{group} \\ Z: & & x_a \rightarrow q^ax_a \end{array} \ ,
\hspace{8mm} 0 \leq a \leq N-1 \ . \end{equation}
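\noindent As a quick numerical sketch (an illustration, not part of the original treatment), the clock and shift matrices can be constructed in Python and the defining relations verified:
\begin{verbatim}
import numpy as np

def clock_shift(N):
    # One convention for the clock and shift representation:
    # X e_a = e_{a+1 mod N},  Z e_a = q^a e_a,  q = exp(2 pi i/N)
    q = np.exp(2j * np.pi / N)
    X = np.roll(np.eye(N), 1, axis=0)
    Z = np.diag(q ** np.arange(N))
    return X, Z, q

X, Z, q = clock_shift(3)
assert np.allclose(Z @ X, q * X @ Z)                         # ZX = qXZ
assert np.allclose(np.linalg.matrix_power(X, 3), np.eye(3))  # X^N = 1
assert np.allclose(np.linalg.matrix_power(Z, 3), np.eye(3))  # Z^N = 1
\end{verbatim}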
\noindent The unitary automorphism group of the Heisenberg group plays
prominent roles in quantum information theory \cite{Fivel, Gottesman},
and is often called the Clifford group. In the older literature the
Heisenberg group is sometimes called the Clifford collineation group,
and the Clifford group is called the Clifford transform group \cite{Horadam}.
Although we will discuss it in detail for the case $N = 3$ later on, we will
mostly be concerned with automorphisms of order 2. In the clock and shift
representation such an automorphism acts according to
\begin{equation} A: \ \ \ x_a \leftrightarrow x_{-a} \ . \label{A}
\end{equation}
\noindent Adding this generator leads us to consider an extended group which is
twice as large as $H(N)$. In quantum information language the involution $A$ is
generated by one of Wootters' phase point operators \cite{Wootters1}.
Finally there is the curious conjecture \cite{Zauner} that
the SIC vectors are always left invariant by a unitary automorphism of the
Heisenberg group having order 3. No one knows why this should be so,
but it does appear to be true \cite{Marcus, Grassl}, and in four dimensions
we will see exactly how it happens.
What is special about the case when $N$ is prime is that $H(N)$ then admits
$N+1$ cyclic subgroups of order $N$, forming a flower with $N+1$ petals
with only the unit element in common. Correspondingly there are $N+1$
eigenbases, and they necessarily form a complete set of
MUB \cite{Vatan}. In prime power dimensions $N = p^k$ the known complete set
of MUB is the
set of eigenbases of the cyclic subgroups of a Heisenberg group defined
over a Galois field. The only case we will discuss is when $N = 4$, for
which the Galois Heisenberg group is the tensor product $H(2) \otimes H(2)$.
Another piece of background information is that SICs and MUBs look
natural in Bloch space, which is the $N^2-1$ dimensional
space of Hermitean operators of trace 1, considered as a vector space with
the trace inner product and with the maximally mixed state at the origin.
Density matrices form a convex body in Bloch space. A SIC is simply a regular
simplex in Bloch space, inscribed into this convex body. But it is not easy
to rotate the simplex while keeping the body of density matrices fixed,
because the symmetry group of this body is only $SU(N)/Z_N$, a rather
small subgroup of $SO(N^2-1)$ as soon as $N > 2$. This is why the SIC
problem is hard. An orthonormal basis is a regular simplex with only
$N$ corners, spanning some $(N-1)$-plane through the origin in Bloch space.
Two bases are mutually unbiased if the corresponding $(N-1)$-planes are
totally orthogonal, from which it immediately follows that no more than
$N+1$ MUB can exist.
Any pure state corresponds to a Bloch vector of a definite length. Given a
complete set of MUB we can project this vector onto the $N+1$ different
$(N-1)$-planes defined by the MUB. Should it happen that these projected
vectors all have the same length the vector is as it were unbiased with
respect to the MUB, and is then---for some reason---called a
Minimum Uncertainty State \cite{Wootters3, Appleby}. The condition on a
state vector to be unbiased in this sense is easily worked out using the
Euclidean metric on Bloch space in conjunction with Pythagoras' theorem.
Choose any one of the MUB as the computational basis, and express the
Hilbert space components of a unit vector with respect to that basis as
\begin{equation} x_a = \sqrt{p_a}e^{i\mu_a} \ , \hspace{8mm}
\sum_{a = 0}^{N-1} p_a = 1 \ . \label{octant} \end{equation}
\noindent If the corresponding Bloch vector projected onto the $(N-1)$-plane spanned by
the computational basis has the length appropriate to a Minimum Uncertainty
State it must be true that
\begin{equation} \sum_{a = 0}^{N-1}p_a^2 = \frac{2}{N+1} \ . \label{MUS}
\end{equation}
\noindent This is simple enough, but there is the complication that this has
to be done for all the $N+1$ MUB, which will give an additional set of $N$
constraints on the phases ensuring that the vector has the appropriate
length when projected to the other MUB planes. We spare the reader from
the details, but we repeat that all Heisenberg covariant SIC vectors are
Minimum Uncertainty States whenever $N$ is a prime. Examining the
proof of this interesting statement shows that something similar
is true also when no complete set of MUB is available: In any eigenbasis
of a cyclic subgroup of $H(N)$ of order $N$ eq. (\ref{MUS}) will hold
for any vector belonging to a Heisenberg covariant SIC \cite{Khat, ADF}.
This is true regardless of how many bases of this kind there are.
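\noindent As a concrete check of eq. (\ref{MUS}) for $N = 3$, take the vector $(0,1,-1)/\sqrt{2}$, one of the Heisenberg covariant SIC vectors that will appear in the next section: then $p = (0,\frac{1}{2},\frac{1}{2})$ and $\sum_a p_a^2 = \frac{1}{2} = \frac{2}{N+1}$, as required. A short numerical version of this check (a sketch, not from the original references):
\begin{verbatim}
import numpy as np

p = np.abs(np.array([0.0, 1.0, -1.0]) / np.sqrt(2)) ** 2
assert np.isclose((p ** 2).sum(), 2 / (3 + 1))   # eq. (MUS) with N = 3
\end{verbatim}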
\section{The syzygetic Hesse pencil}
We now descend to the complex projective plane, and begin by introducing the
language used by nineteenth century geometers. Points are represented by ket
or column vectors in ${\bf C}^3$, or more precisely by one-dimensional
subspaces, while lines are represented by two-dimensional subspaces. Using
the scalar product in Hilbert space we can equally well represent the
lines by bra or row vectors orthogonal to the subspaces they represent,
so that the relation
\begin{equation} \langle Y|X\rangle = 0 \label{ett} \end{equation}
\noindent means that the point $X$ lies on the line $Y$. The two-dimensional
subspace representing the line consists of all vectors whose scalar product with
the bra vector $\langle Y|$ vanishes. Since there is a one-to-one correspondence
$|X\rangle \leftrightarrow \langle X|$ between bras and kets there is also a
one-to-one correspondence between points and lines. Clearly eq.
(\ref{ett}) implies that
\begin{equation} \langle X|Y\rangle = 0 \ , \end{equation}
\noindent which says that the point $Y$ lies on the line $X$. This is known as
the duality between points and lines in the projective plane.
We will study complex plane curves defined by homogeneous polynomials
in three variables. Linear polynomials define two-dimensional
subspaces, that is to say two real-dimensional subsets of the complex plane,
and by the above they define projective lines. Intrinsically they are
spheres, namely Bloch spheres, because ${\bf CP}^1 = {\bf S}^2$.
Quadratic polynomials or quadrics define conic sections, and over the
complex numbers the intrinsic geometry of a conic section is again that of a
sphere. The set of spin coherent states is an example \cite{BH}. To
the next order in complication we choose a cubic polynomial. We require the curve
to transform into itself under the Heisenberg group in the clock and shift
representation (\ref{group}). Up to an irrelevant overall constant the most
general solution for the cubic is then
\begin{equation} P = x^3 + y^3 + z^3 + txyz \ . \label{cubic} \end{equation}
\noindent Here $t$ is a complex number parametrising what is known as the
syzygetic Hesse pencil of cubics. Intrinsically each cubic is a torus rather
than a sphere. We observe that the polynomial is automatically invariant
under the additional involution $A$ given above in (\ref{A}).
Hesse \cite{Hesse}, and before him Pl\"ucker \cite{Plucker}, studied this family
of curves in detail. Their first object was to determine the inflection points.
They are given by those points on the curve for which the determinant of its
matrix of second derivatives---its Hessian---vanishes. In the present case this
is a cubic polynomial as well; in fact
\begin{equation} H = \det{\partial_i\partial_jP} =
(6^3 + 2t^3)xyz - 6t^2(x^3 + y^3 + z^3) \ . \end{equation}
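\noindent As a quick symbolic cross-check of this determinant (a sketch assuming the Python library sympy, not part of the original derivation):
\begin{verbatim}
import sympy as sp

x, y, z, t = sp.symbols('x y z t')
P = x**3 + y**3 + z**3 + t * x * y * z
H = sp.hessian(P, (x, y, z)).det()
target = (6**3 + 2 * t**3) * x * y * z - 6 * t**2 * (x**3 + y**3 + z**3)
assert sp.expand(H - target) == 0   # the Hessian is again a Hesse cubic
\end{verbatim}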
\noindent This is again a member of the Hesse pencil of cubics. In astronomy
a ``syzygy'' occurs when three planets lie on a line, so we can
begin to appreciate why the pencil is called ``syzygetic''. The inflection
points are given by $P = H = 0$. By B\'ezout's theorem two cubics in the
complex projective plane intersect in nine points, hence there are nine
inflection points. They coincide for all cubics in the pencil, and are
given by
\begin{equation} \left[ \begin{array}{ccccccccc} 0 & 0 & 0 & -1 & - q & - q^2 &
1 & 1 & 1 \\ 1 & 1 & 1 & 0 & 0 & 0 & -1 & - q & - q^2 \\ -1 & - q & - q^2 &
1 & 1 & 1 & 0 & 0 & 0 \end{array} \right] \ . \label{points} \end{equation}
\noindent This is recognisable as a set of nine
SIC vectors covariant under the Heisenberg group \cite{Zauner, Renes}. We can
normalise our vectors if we want to, but in the spirit of projective geometry
we choose not to.
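\noindent The SIC property, and the fact that these nine points lie on every cubic of the pencil, are easy to verify numerically; a sketch in Python (normalizing the vectors only inside the check):
\begin{verbatim}
import numpy as np
from itertools import combinations

q = np.exp(2j * np.pi / 3)
cols = [(0, 1, -1), (0, 1, -q), (0, 1, -q**2),
        (-1, 0, 1), (-q, 0, 1), (-q**2, 0, 1),
        (1, -1, 0), (1, -q, 0), (1, -q**2, 0)]

# each point lies on every cubic: x^3 + y^3 + z^3 = 0 and xyz = 0
for x, y, z in cols:
    assert np.isclose(x**3 + y**3 + z**3, 0) and np.isclose(x * y * z, 0)

# the normalized vectors form a SIC: all overlaps equal 1/(N+1) = 1/4
psi = [np.array(c) / np.sqrt(2) for c in cols]
for u, v in combinations(psi, 2):
    assert np.isclose(abs(u.conj() @ v) ** 2, 0.25)
\end{verbatim}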
There are four singular members of the Hesse pencil, defined by values of the
parameter $t$ such that there are non-zero solutions to $P = P_{,x} = P_{,y}
= P_{,z} = 0$. These values are
\begin{equation} t = \infty \hspace{5mm} \mbox{and} \hspace{5mm} t^3 = -
3^3 \ . \end{equation}
\noindent If $t = \infty$ the polynomial reduces to $xyz = 0$. In
this case the singular cubic consists of three projective lines that make
up a triangle. The remaining three singular cases will give rise to three
other triangles. Therefore the syzygetic pencil singles
out 4 special triangles in the projective plane, given by their 12 vertices
\begin{eqnarray} \triangle^{(0)} = \left[ \begin{array}{ccc} 1 & 0 & 0 \\
0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right] \ , \hspace{6mm}
\triangle^{(1)} = \left[ \begin{array}{ccc} 1 & q^2 & q^2 \\ q^2 & 1 & q^2 \\
q^2 & q^2 & 1 \end{array} \right] \ , \hspace{12mm} \nonumber \\
\label{MUB3} \\
\triangle^{(2)} = \left[ \begin{array}{ccc} 1 & q & q \\ q & 1 & q \\
q & q & 1 \end{array} \right] \ , \hspace{8mm}
\triangle^{(\infty )} = \left[ \begin{array}{ccc} 1 & 1 & 1 \\ 1 & q & q^2 \\
1 & q^2 & q \end{array} \right] \ , \hspace{12mm} \nonumber \end{eqnarray}
\noindent where $q = e^{2\pi i/3}$. The columns, labelled consecutively by
$0,1,2$, can indeed be regarded as 12 points or by duality as 12 lines.
The four triangles are referred to as the inflection triangles.
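\noindent One can likewise check numerically that, once the columns are normalized, the four inflection triangles form a complete set of mutually unbiased bases in dimension 3; a sketch:
\begin{verbatim}
import numpy as np
from itertools import combinations

q = np.exp(2j * np.pi / 3)
triangles = [
    np.eye(3, dtype=complex),
    np.array([[1, q**2, q**2], [q**2, 1, q**2], [q**2, q**2, 1]]),
    np.array([[1, q, q], [q, 1, q], [q, q, 1]]),
    np.array([[1, 1, 1], [1, q, q**2], [1, q**2, q]]),
]
bases = [T / np.linalg.norm(T, axis=0) for T in triangles]

for B in bases:  # each triangle is an orthonormal basis
    assert np.allclose(B.conj().T @ B, np.eye(3))
for B1, B2 in combinations(bases, 2):  # pairwise unbiased: 1/N = 1/3
    assert np.allclose(np.abs(B1.conj().T @ B2) ** 2, 1 / 3)
\end{verbatim}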
What gives the triangles their name is the remarkable fact that the nine
inflection points lie by threes on their twelve edges. Hesse calls this
a ``{\it sch\"onen Lehrsatz}'', and attributes it to Pl\"ucker \cite{Plucker}.
It is not hard
to verify. After a small calculation one finds that the orthogonalities
between the columns in the four triangles and the vectors representing
the inflection points are as follows:
\
{\tiny \begin{tabular}{|c||ccc|ccc|ccc|ccc|} \hline
\ & $\triangle_0^{(0)}$ & $\triangle_1^{(0)}$& $\triangle_2^{(0)}$ & $\triangle_0^{(1)}$
& $\triangle_1^{(1)}$ & $\triangle_2^{(1)}$ & $\triangle_0^{(2)}$ & $\triangle_1^{(2)}$ &
$\triangle_2^{(2)}$ & $\triangle_0^{(\infty )}$ & $\triangle_1^{(\infty )}$ &
$\triangle_2^{(\infty )}$ \\ \hline \hline
$X_0$ & $\bullet$ & \ & \ & $\bullet$ & \ & \ & $\bullet$ & \ & \ & $\bullet$ & \ & \ \\
\ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ \\
$X_1$ & $\bullet$ & \ & \ & \ & \ & $\bullet$ & \ & $\bullet$ & \ & \ & $\bullet$ & \ \\
\ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ \\
$X_2$ & $\bullet$ & \ & \ & \ & $\bullet$ & \ & \ & \ & $\bullet$ & \ & \ & $\bullet$ \\
\ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ \\
$Y_0$ & \ & $\bullet$ & \ & \ & $\bullet$ & \ & \ & $\bullet$ & \ & $\bullet$ & \ & \ \\
\ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ \\
$Y_1$ & \ & $\bullet$ & \ & $\bullet$ & \ & \ & \ & \ & $\bullet$ & \ & $\bullet$ & \ \\
\ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ \\
$Y_2$ & \ & $\bullet$ & \ & \ & \ & $\bullet$ & $\bullet$ & \ & \ & \ & \ & $\bullet$ \\
\ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ \\
$Z_0$ & \ & \ & $\bullet$ & \ & \ & $\bullet$ & \ & \ & $\bullet$ & $\bullet$ & \ & \ \\
\ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ \\
$Z_1$ & \ & \ & $\bullet$ & \ & $\bullet$ & \ & $\bullet$ & \ & \ & \ & $\bullet$ & \ \\
\ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ & \ \\
$Z_2$ & \ & \ & $\bullet$ & $\bullet$ & \ & \ & \ & $\bullet$ & \ & \ & \ & $\bullet$ \\
\hline
\end{tabular}}
\
\
\noindent Thus we have
\begin{equation} \langle \Delta_0^{(0)} |X_0\rangle =
\langle \Delta_0^{(0)} |X_1\rangle = \langle \Delta_0^{(0)} |X_2\rangle =
0 \end{equation}
\noindent and so on. Recalling the interpretation of the vanishing scalar
products we see by
inspection of the table that Hesse's beautiful theorem is true.
We have verified that there exists a configuration of 9 points and 12 lines
such that each point belongs to four lines, and each line goes through three points.
This is denoted $(9_4, 12_3)$, and is known as the Hesse configuration. Using the
duality between points and lines we have also proved the existence of the
configuration $(12_3, 9_4)$. From an abstract point of view such a configuration
is a combinatorial object known as a finite affine plane \cite{Dolgachev}. In the
language of quantum information theory the inflection triangles form a complete
set of four MUB, while the inflection points form a SIC.
We can now expand on our discussion of group theory in section 2.
First, every plane cubic can be
regarded as a commutative group in a natural way. This is not surprising,
given that the curve is intrinsically a torus---that is a group manifold.
The idea relies on B\'ezout's theorem, which this time assures us that any
line intersects the cubic in three points---two of which coincide
if the line is a tangent, and all of which coincide if the line is a line of
inflection. An arbitrary point on the cubic is taken to be the identity element,
and denoted $O$. To add two arbitrary points $A$ and $B$ on the
cubic, draw the line between them and locate its third intersection point $P$
with the cubic. Then draw the line between $O$ and $P$ and again locate the
third intersection point $C$. By definition then $A + B = C$. All the group
axioms are obeyed, although it is non-trivial to prove associativity.
Now choose the origin to sit at one of the inflection points. With Hesse's
construction in hand one sees that the nine inflection points form a
group of order nine, which is precisely the projective Heisenberg group.
This is the 3-torsion subgroup of the curve, meaning that it contains all
group elements whose order divides 3; for this reason
the inflection points are also called 3-torsion points.
Next we ask for the group of transformations transforming the cubics of the
Hesse pencil among themselves. Recall that the parameter $t$ in the Hesse
cubic (\ref{cubic}) can serve as a complex coordinate on a sphere. The
four singular members of the pencil define a tetrahedron
on that sphere. Transformations within the pencil act as M\"obius transformations
on the complex number $t$. Moreover they must permute the singular members
of the Hesse pencil among themselves. This means that they form a well known
subgroup of $SO(3)$, namely the symmetry group $A_4$ of the regular
tetrahedron. It enjoys the isomorphism
\begin{equation} A_4 \sim PSL(2, {\bf F}_3) \ , \end{equation}
\noindent where ${\bf F}_3$ is the field of integers modulo 3. The group
$SL(2, {\bf F}_3)$ consists of unimodular two by two matrices with
integer entries taken modulo three; here only its projective part enters
because the subgroup generated by the matrix $-{\bf 1}$ gives rise to
the involution $A$ and does not act on $t$, although it does
permute the inflection points among themselves. The full symmetry
group of the pencil is a semi-direct product of the Heisenberg group
and $SL(2, {\bf F}_3)$. This is the affine group on a finite affine
plane. It is known as the Hessian group \cite{Jordan}, or as the
Clifford group.
There are many accounts of this material in the literature, from geometric
\cite{Grove}, undergraduate \cite{Gibson}, and modern \cite{Artebani} points
of view. It forms a recurrent theme in Klein's history of nineteenth
century mathematics \cite{Klein}. The fact that the inflection points form
a SIC was first noted by Lane Hughston \cite{Hughston}.
\section{The elliptic normal curve in prime dimensions}
Felix Klein and the people around him put considerable effort into the
description of elliptic curves embedded into projective spaces of dimension
higher than 2. They proceeded by means of explicit parametrisations of
the curve using Weierstrass' $\sigma$-function \cite{Bianchi, Hulek}. As far
as we are concerned now, we only need to know that the symmetry group they
built into their curves is again the Heisenberg group, supplemented with
the involution $x_a \leftrightarrow x_{-a}$ coming from the Clifford group.
An analysis of this group of symmetries leads directly to ``{\it une
configuration tr\`es-remarquable}'' (``a most remarkable configuration'') originally discovered by Segre
\cite{Segre}. We will present it using some notational improvements
that were invented later \cite{Gross, ADF}.
Since $N = 2n-1$ is odd, the integer $n$ serves
as the multiplicative inverse of $2$ among the integers modulo $N$.
It is then convenient to write the Heisenberg group elements as
\begin{equation} D(i,j) = q^{nij}X^iZ^j \hspace{5mm} \Rightarrow
\hspace{5mm} D(i,j)D(k,l) = q^{n(jk-il)}D(i+k, j+l) = q^{jk-il}D(k,l)D(i,j) \ . \end{equation}
\noindent Let us also introduce explicit matrix representations of
the group generators:
\begin{equation} [D(i,j)]_{ab} = q^{nij + bj}\delta_{a,b+i} \ ,
\hspace{10mm} [A]_{ab} = \delta_{a+b,0} \ . \end{equation}
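\noindent All indices are counted modulo $N$. As a quick sanity check (ours,
not part of the original development), the multiplication rule above and the
spectrum of the involution $A$ discussed next are easily confirmed
numerically, say for the arbitrary choice $N = 5$:
\begin{verbatim}
import numpy as np

N = 5                          # any odd prime
n = (N + 1) // 2               # multiplicative inverse of 2 mod N
q = np.exp(2j * np.pi / N)

def D(i, j):
    # [D(i,j)]_{ab} = q^(nij + bj) delta_{a, b+i}, indices mod N
    M = np.zeros((N, N), dtype=complex)
    for b in range(N):
        M[(b + i) % N, b] = q ** ((n * i * j + b * j) % N)
    return M

A = np.zeros((N, N))
for a in range(N):
    A[(-a) % N, a] = 1         # [A]_{ab} = delta_{a+b,0}

# D(i,j) D(k,l) = q^(n(jk - il)) D(i+k, j+l)
i, j, k, l = 1, 2, 3, 4
lhs = D(i, j) @ D(k, l)
rhs = q ** ((n * (j * k - i * l)) % N) * D((i + k) % N, (j + l) % N)
print(np.allclose(lhs, rhs))               # True
print(np.sort(np.linalg.eigvalsh(A)))      # n ones, n-1 minus-ones
\end{verbatim}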
\noindent Note that the spectrum of the involution $A$ consists of
$n$ eigenvalues $1$ and $n-1$ eigenvalues $-1$. Hence $A$ splits the
vector space into the direct sum
\begin{equation} {\cal H}_N = {\cal H}_n^{(+)} \oplus {\cal H}_{n-1}^{(-)}
\ . \end{equation}
\noindent It is these subspaces that we should watch. In fact there
are altogether $N^2$ subspaces of dimension $n$ singled out in this
way, because there are $N^2$ involutions
\begin{equation} A_{ij} = D(i,j)AD(i,j)^\dagger
\ . \label{Aij} \end{equation}
\noindent
The eigenvectors of the various cyclic subgroups can be collected
into the $N+1$ MUB
\begin{equation} \triangle_{am}^{(k)} = \left\{\begin{array}{cll}
\delta_{am} & , & k = 0 \\ \ \\ \frac{1}{\sqrt{N}}q^{\frac{(a-m)^2}{2k}} &
, & 1 \leq k \leq N-1 \\ \ \\
\frac{1}{\sqrt{N}}q^{am} & , & k = \infty \end{array} \right. \
.\end{equation}
\noindent Here $k$ labels the basis, $m$ the vectors, and $a$ their
components. For $N = 3$ this coincides with the form (\ref{MUB3}) given
earlier. Note that $N-1$ MUB have been written as circulant matrices,
which is a convenient thing to do.
The key observation is that the zeroth columns in the MUB all obey---we
suppress the index labelling components---
\begin{equation} A\triangle_0^{(k)} = \triangle_0^{(k)} \ . \end{equation}
\noindent Hence this set of $N+1$ vectors belongs to the $n$-dimensional
subspace ${\cal H}_n^{(+)}$ defined by the involution $A$. We can go on to
show that each of the $n$-dimensional eigenspaces defined by the
$N^2$ involutions $A_{ij}$ contain $N+1$ MUB vectors. Conversely,
each MUB vector belongs to $N$ subspaces. We have found the Segre configuration
\begin{equation} \left( N(N+1)_N, N^2_{N+1} \right) \end{equation}
\noindent containing $N^2 + N$ points and $N^2$ $(n-1)$-planes in
projective $(N-1)$-space, always assuming that $N$ is an odd prime.
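These counts, together with the mutual unbiasedness of the $N+1$ bases, can
be confirmed directly on a computer. The following sketch (ours; $N = 5$ is
an arbitrary choice, and Python 3.8 or later is assumed for the modular
inverse) builds the bases from the formula above and the involutions of
eq. (\ref{Aij}):
\begin{verbatim}
import numpy as np
from itertools import product

N = 5
n = (N + 1) // 2
q = np.exp(2j * np.pi / N)

def D(i, j):
    M = np.zeros((N, N), dtype=complex)
    for b in range(N):
        M[(b + i) % N, b] = q ** ((n * i * j + b * j) % N)
    return M

A = np.zeros((N, N))
for a in range(N):
    A[(-a) % N, a] = 1

# the N+1 MUB: k = 0, the N-1 circulant bases, and k = infinity
mub = [np.eye(N, dtype=complex)]
for k in range(1, N):
    inv2k = pow(2 * k, -1, N)          # 1/(2k) mod N
    mub.append(np.array([[q ** (((a - m) ** 2 * inv2k) % N)
                          for m in range(N)]
                         for a in range(N)]) / np.sqrt(N))
mub.append(np.array([[q ** ((a * m) % N) for m in range(N)]
                     for a in range(N)]) / np.sqrt(N))

for K in range(N + 1):                 # mutual unbiasedness
    for L in range(K + 1, N + 1):
        assert np.allclose(np.abs(mub[K].conj().T @ mub[L]) ** 2, 1 / N)

vecs = [B[:, m] for B in mub for m in range(N)]       # N(N+1) vectors
invs = [D(i, j) @ A @ D(i, j).conj().T
        for i, j in product(range(N), repeat=2)]      # N^2 involutions

# psi lies in the n-dim positive eigenspace of A_ij iff A_ij psi = psi
inc = np.array([[np.allclose(Aij @ v, v) for v in vecs] for Aij in invs])
print(inc.sum(axis=1))    # N+1 = 6 MUB vectors in every eigenspace
print(inc.sum(axis=0))    # every MUB vector lies in N = 5 eigenspaces
\end{verbatim}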
The intersection properties of the Segre configuration are remarkable.
Two $n$-spaces in $2n-1$ dimensions intersect at least in a single ray.
With a total of $N^2$ such subspaces to play with we expect many vectors
to arise in this way. But let $\psi$ be such a vector. A minor calculation
shows that
\begin{equation} \psi = A_{ij}A_{kl}\psi =
q^{2(il-jk)}D(2i-2k, 2j-2l)\psi \ . \end{equation}
\noindent Thus $\psi$ must be an eigenvector of some element in the Heisenberg
group, and hence the intersection of any two $n$-spaces is always one
of the $N(N+1)$ eigenvectors in the configuration. In the other direction
things are a little more complicated. Two vectors belonging to the same
basis are never members of the same eigenspace, while two vectors of two
different MUB belong to a unique common eigenspace. Using
projective duality we obtain the dual configuration
\begin{equation} \left( N^2_{N+1}, N(N+1)_N \right) \end{equation}
\noindent consisting of $N^2$ $(n-1)$-spaces and $N^2 + N$ hyperplanes.
The intersection properties are precisely those of a finite affine plane
\cite{Dolgachev}.
These are the facts that so delighted Segre. A hundred years
later they delighted Wootters \cite{Wootters1}---although he phrased the
discussion directly in terms of the phase point
operators $A_{ij}$ rather than in terms of their eigenspaces.
A systematic study of prime power dimensions in Segre's spirit appears
not to have been made, although there are some results for $N = 9$ \cite{Horadam}.
But where is the SIC? It is hard to tell. When the dimension
$N = 2n-1 = 3$ we observe that $n-1 = 1$, so the dual Segre configuration
involves $N^2$ vectors, and these are precisely the SIC vectors (\ref{points}).
When $N \geq 5$ the Segre configuration does not even contain a candidate
set of $N^2$ vectors. But at least, as a byproduct of the construction,
we find a set of $2n$ equiangular vectors in any $n$-dimensional Hilbert
space such that $2n-1$ is an odd prime. Explicitly they are
\begin{equation} \left[ \begin{array}{ccccc} \sqrt{2n-1} & 1 & 1 & \dots
& 1 \\ 0 & \sqrt{2} & \sqrt{2}q^{1\cdot 1^2} & \dots & \sqrt{2}q^{(2n-2)\cdot 1^2} \\
0 & \sqrt{2} & \sqrt{2}q^{1\cdot2^2} & \dots & \sqrt{2}q^{(2n-2)\cdot 2^2} \\
\vdots & \vdots & \vdots & & \vdots \\
0 & \sqrt{2} & \sqrt{2}q^{1\cdot (n-1)^2} & \dots & \sqrt{2}q^{(2n-2)(n-1)^2}
\end{array} \right] \ . \label{2n} \end{equation}
\noindent Such sets are of some interest in connection with pure state
quantum tomography \cite{Flammia}.
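Equiangularity of the columns of (\ref{2n}) is again easy to confirm
numerically; the following check is ours, with $n = 6$ chosen so that
$2n - 1 = 11$ is an odd prime:
\begin{verbatim}
import numpy as np

n = 6
N = 2 * n - 1                      # must be an odd prime
q = np.exp(2j * np.pi / N)

V = np.zeros((n, 2 * n), dtype=complex)
V[0, 0] = np.sqrt(N)               # first column of the matrix
for k in range(N):                 # the remaining 2n - 1 columns
    V[0, k + 1] = 1
    for j in range(1, n):
        V[j, k + 1] = np.sqrt(2) * q ** ((k * j * j) % N)

G = np.abs(V.conj().T @ V)         # moduli of all scalar products
off = G[~np.eye(2 * n, dtype=bool)]
print(np.allclose(np.diag(G), N))        # every norm^2 equals 2n - 1
print(np.allclose(off, np.sqrt(N)))      # all cross moduli sqrt(2n-1)
\end{verbatim}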
The elliptic curve itself has not been much in evidence in this section.
It is still there in the background though, and in any dimension it
contains $N^2$ distinguished $N$-torsion points. A study of the explicit
expression for the Heisenberg covariant elliptic curve shows that each
of its torsion points belongs to one of the $N^2$ eigenspaces
${\cal H}_{n-1}^{(-)}$ \cite{Hulek}, and with the single exception of the
$N = 3$ example (\ref{points}) the known SICs never sit in such a subspace,
so the torsion points are not SICs. This is discouraging, but we will
find some consolation when we proceed to examine the $N = 4$ case.
\section{The SIC in 4 dimensions}
In an $N = 4$ dimensional Hilbert space there is a parting of the ways,
in the sense that the MUB and the SIC are defined using two different
versions of the Heisenberg group. The elliptic curve stays with $H(4)$.
Using an argument concerning line bundles and employing ingredients such
as the Riemann-Roch theorem, it can be shown that an elliptic
normal curve in projective 3-space (not confined to any projective plane)
is the non-singular intersection of two quadratic polynomials. If we insist
that it is transformed into itself by the Heisenberg group in its clock
and shift representation (\ref{group}), it follows \cite{Hulek} that
these quadratic polynomials are
\begin{equation} Q_0 = x_0^2 + x_2^2 + 2ax_1x_3 \ , \hspace{8mm} Q_1 =
x_1^2 + x_3^2 + 2ax_0x_2 \ . \end{equation}
\noindent The extra symmetry under the involution $A$, defined in (\ref{A}),
again appears
automatically. We can diagonalise these quadratic forms by means of a unitary
transformation of our Hilbert space. In the new coordinates we have
\begin{equation} Q_0 = z_0^2 + iz_1^2 + a(iz_2^2 + z_3^2) \ , \hspace{8mm}
Q_1 = iz_2^2 - z_3^2 + a(z_0^2 - iz_1^2) \ . \end{equation}
\noindent Note that $Q_0 = Q_1 = 0$ implies
\begin{equation} z_0^4 +z_1^4 + z_2^4 + z_3^4 = 0 \ . \end{equation}
\noindent Hence the elliptic curve lies on a quartic surface.
The new basis that we have introduced has a natural interpretation in
terms of the involution $A$. First of all, by acting on $A$ with the
Heisenberg group as in eq. (\ref{Aij}) we obtain only four involutions
altogether, rather than $N^2$ as in the odd prime case. Their spectra
are $(1,1,1,-1)$, and in the new basis they are all represented by
diagonal matrices. Hence each basis vector is inverted by one involution,
and left invariant by the others. In projective 3-space they correspond
to four reference points, and one can show that the 16 tangents of the 16 torsion
points on the curve divide into 4 sets of 4 each coming together at one of
the 4 reference points \cite{Hulek}. Each such set is an orbit under the
subgroup of elements of order 2.
In our preferred basis the generators of the Heisenberg group appear in the form
\begin{equation} Z = e^{\frac{i\pi}{4}} \left( \begin{array}{rrrr} 0 & 1 & 0 & 0 \\
-i & 0 & 0 & 0 \\
0 & 0 & 0 & -i \\ 0 & 0 & -1 & 0 \end{array}\right) \ , \hspace{9mm}
X = e^{\frac{i\pi}{4}} \left( \begin{array}{rrrr}
0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ -i & 0 & 0 & 0 \\ 0 & i & 0 & 0 \end{array}
\right) \ . \end{equation}
\noindent Finding a set of 16 SIC-vectors covariant under the Heisenberg group
is now a matter of simple guesswork. One answer, ignoring overall phases and
normalisation, is
\begin{equation} \left[
\begin{array}{rrrrrrrrrrrrrrrr} x & x & x & x & i & i & - i & - i & i & i
& - i & - i & i & i & - i & - i \\ 1 & 1 & - 1 & - 1 & x & x & x & x & i & -i
& i & - i & 1 & - 1 & 1 & - 1 \\ 1 & -1 & 1 & -1 & 1 & - 1 & 1 & -1 & x &
x & x & x & - i & i & i & - i \\
1 & - 1 & - 1 & 1 & -i & i & i & - i & - 1 & 1 & 1 & - 1 & x & x & x & x
\end{array} \right] \ , \label{SIC4} \end{equation}
\noindent where
\begin{equation} x = \sqrt{2 + \sqrt{5}} \ . \end{equation}
\noindent All scalar products have the same modulus because
\begin{equation} (x^2-1)^2 = |x+1 + i(x-1)|^2 \ . \end{equation}
\noindent Thanks to our change of basis, this is significantly more memorable
than the standard solutions \cite{Zauner, Renes} (and it was in fact arrived
at, without considering the Heisenberg group at all, by Belovs \cite{Belovs}).
The whole set is organised
into 4 groups, where each group sits at a standard distance from the 4 basis
vectors that are naturally singled out by the elliptic curve. The normalised vectors
obey eq. (\ref{MUS}) for a Minimum Uncertainty State, even though our basis
is unusual.
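For the sceptical reader, the SIC property of (\ref{SIC4}) can be verified
in a few lines. The check below is ours and forms no part of the argument:
\begin{verbatim}
import numpy as np

x = np.sqrt(2 + np.sqrt(5))
i = 1j
M = np.array([                      # the 16 columns of the SIC
 [ x, x, x, x,  i, i,-i,-i,  i, i,-i,-i,  i, i,-i,-i],
 [ 1, 1,-1,-1,  x, x, x, x,  i,-i, i,-i,  1,-1, 1,-1],
 [ 1,-1, 1,-1,  1,-1, 1,-1,  x, x, x, x, -i, i, i,-i],
 [ 1,-1,-1, 1, -i, i, i,-i, -1, 1, 1,-1,  x, x, x, x]])

M = M / np.linalg.norm(M, axis=0)   # normalise the columns
G = np.abs(M.conj().T @ M) ** 2
off = G[~np.eye(16, dtype=bool)]
print(np.allclose(off, 1 / 5))      # SIC condition: 1/(d+1), d = 4
\end{verbatim}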
The otherwise mysterious invariance of the SIC vectors under some element
of the Clifford group of order 3 is now easy to see. We focus
on the group of vectors
\begin{equation} \left[ \begin{array}{rrrr} x & x & x & x \\ 1 & 1 & -1 & -1 \\
1 & -1 & 1 & -1 \\ 1 & -1 & -1 & 1 \end{array} \right] \ . \end{equation}
\noindent They form an orbit under the subgroup of elements of order 2. When
we project them to the subspace orthogonal to the first basis vector we have
4 equiangular vectors in a 3 dimensional subspace. Each projected vector
will be invariant under a rotation of order 3 belonging to the symmetry
group of this tetrahedron. An example leaving the first vector invariant is
\begin{equation} R = \left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\
0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{array} \right) \ . \end{equation}
\noindent It is straightforward to check that the rotation $R$ belongs to
the Clifford group, and is indeed identical to one of ``Zauner's unitaries''
\cite{Zauner}.
Each of the four involutions $A$ admits a ``square root'' belonging to the
Clifford group, such as
\begin{equation} F = \left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 \\ 0 & 0 & 0 & i \end{array} \right) \hspace{5mm}
\Rightarrow \hspace{5mm} F^2 = A = \left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \end{array} \right)\ . \end{equation}
\noindent Acting with these unitaries on the SIC (\ref{SIC4}) will give a
set of altogether 16 different SICs, collectively forming an orbit under the
Clifford group \cite{Zauner, Marcus}.
Note that the 16 SIC points in projective space do not actually sit on
the elliptic curve. In this sense the step from $N = 3$ to $N = 4$ is
non-trivial. In an arbitrary even dimension $N = 2n$
the involution $A$, see (\ref{A}), has a spectrum consisting of $n+1$
eigenvalues $1$ and $n-1$ eigenvalues $-1$. When $N = 4$ this singles out
a unique ray but in higher dimensions this is not so, so generalising
to arbitrary even dimension will not be easy.
\section{Minimum Uncertainty States in four dimensions}
Eddington and his surface have not yet appeared. The group on whose twofold
cover his Fundamental Theory hinged was not the Heisenberg group over
the ring of integers modulo 4, but a different Heisenberg group of
the form $H(2)\otimes H(2)$ \cite{Eddington}. This group can be
represented by real matrices, and is in fact the group which gives rise
to the complete set of MUB in 4 dimensions. What can we do with it?
There does not exist a SIC which is covariant under Eddington's group.
In fact the group $H(2)^{\otimes k}$ admits such an orbit only if
$k = 1$ or $k = 3$ \cite{Godsil}.
As a substitute we can look for an orbit of 16 Minimum Uncertainty
States with
respect to the maximal set of MUB. Such an orbit does exist, and is
given by the 16 vectors
\begin{equation} \left[ \begin{array}{cccccccccccccccc} x & x & x & x &
\alpha & \alpha & - \alpha & - \alpha & \alpha & \alpha & - \alpha &
- \alpha & \alpha & \alpha & - \alpha & - \alpha \\
\alpha & \alpha & - \alpha & - \alpha & x & x & x & x &
\alpha & - \alpha & \alpha & - \alpha & \alpha &
- \alpha & \alpha & - \alpha \\
\alpha & - \alpha & \alpha & - \alpha & \alpha & - \alpha & \alpha
& - \alpha & x & x & x & x & \alpha & -\alpha & - \alpha & \alpha \\
\alpha & -\alpha & - \alpha & \alpha & \alpha & - \alpha & - \alpha
& \alpha & \alpha & - \alpha & - \alpha & \alpha & x & x & x & x
\end{array} \right] \ , \label{Edd} \end{equation}
\noindent where
\begin{equation} x = \sqrt{2 + \sqrt{5}} \ , \hspace{8mm} \alpha = e^{ia} \ ,
\hspace{8mm} \cos{a} = \frac{\sqrt{5}-1}{2\sqrt{2 + \sqrt{5}}} \ . \end{equation}
\noindent I omit the lengthy proof that these 16 vectors really are
Minimum Uncertainty States \cite{Asa}. Although this is not a SIC,
in a way it comes close to being one. Like the SICs (\ref{points}) and
(\ref{SIC4}), it can be arrived at using the following procedure: Introduce a
vector $(x, e^{i\mu_1}, \dots , e^{i\mu_{N-1}})^{\rm T}$, and adjust the value
of $x$ so that the normalised vector solves eq. (\ref{MUS}) for a Minimum
Uncertainty State. Next introduce a complex Hadamard matrix, that is to
say a unitary matrix all of whose matrix elements have the same modulus.
Such matrices exist in any dimension, although their classification problem
is unsolved if the dimension exceeds 5 \cite{Tadej}. By multiplying with
an overall factor $\sqrt{N}$, and then multiplying the columns with phase
factors, we can ensure that all matrix elements in the first row equal
1. Replace these elements with $x$. Next multiply the rows
with phase factors until one of the columns equals the vector we introduced.
The result is a set of $N$ vectors with all mutual scalar products taking
the value that characterises a SIC. Next permute the entries of the original
vector cyclically, and afterwards try to adjust the phases $\mu_a$ so that the
resulting $N$ vectors are again equiangular with the mutual scalar products
characterising a SIC. Extending the new vectors using a Hadamard matrix
in the same way as before then gives $N$ equiangular vectors each of which
belongs to a separate group of $N$ equiangular vectors. Before we can say
that we have constructed a SIC we must check that all scalar products
between pairs of vectors not belonging to the same group take the SIC values.
The vectors (\ref{Edd}) fail to form a SIC only because the last step fails.
Finally we come back to Eddington's lecture room. In the treatise that
he read \cite{Hudson}
it is explained that an orbit of $H(2)\otimes H(2)$ gives a realisation of
the Kummer configuration $16_6$, consisting of 16 points and 16 planes in
projective 3-space, such that each point belongs to 6 planes and each plane
contains 6 points. The above set of Minimum Uncertainty States realises this
configuration. As an example, the 6 vectors
\begin{equation} \left[
\begin{array}{cccccc} - \alpha & - \alpha & - \alpha & - \alpha & - \alpha
& - \alpha \\
x & x & \alpha & - \alpha
& \alpha & - \alpha \\ \alpha & -\alpha & x & x & - \alpha & \alpha \\
-\alpha & \alpha & - \alpha & \alpha & x & x
\end{array} \right] \end{equation}
\noindent are orthogonal to the row vector
\begin{equation} ( \begin{array}{ccccc} x & \alpha & \alpha &
\alpha \end{array} ) \ , \end{equation}
\noindent or in other words the corresponding 6 points belong to the
corresponding plane. This is a purely group theoretical property and does
not require the vectors to be Minimum Uncertainty States. Still, Eddington's
story suggests that our 16 special vectors may have some use, somewhere.
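The $16_6$ incidences are easy to confirm numerically as well. In the sketch
below (ours) the 16 columns of (\ref{Edd}) are read both as points and as
planes, paired with the bilinear (unconjugated) product; every column should
then pair to zero with exactly six others:
\begin{verbatim}
import numpy as np

x = np.sqrt(2 + np.sqrt(5))
a = np.arccos((np.sqrt(5) - 1) / (2 * np.sqrt(2 + np.sqrt(5))))
al = np.exp(1j * a)                 # the phase alpha defined above
M = np.array([
 [ x,  x,  x,  x, al, al,-al,-al, al, al,-al,-al, al, al,-al,-al],
 [al, al,-al,-al,  x,  x,  x,  x, al,-al, al,-al, al,-al, al,-al],
 [al,-al, al,-al, al,-al, al,-al,  x,  x,  x,  x, al,-al,-al, al],
 [al,-al,-al, al, al,-al,-al, al, al,-al,-al, al,  x,  x,  x,  x]])

B = M.T @ M                         # bilinear pairing, no conjugation
print(np.isclose(np.abs(B), 0).sum(axis=0))   # -> 6 for every column
\end{verbatim}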
\section*{Acknowledgments}
I thank Subhash Chaturvedi for telling me about the Segre configuration, at a
point in time when neither of us knew about Segre. Both of us give our best
wishes to Tony!
\section*{References}
\medskip
| {'timestamp': '2011-03-11T02:01:36', 'yymm': '1103', 'arxiv_id': '1103.2030', 'language': 'en', 'url': 'https://arxiv.org/abs/1103.2030'} |
\section{Introduction}
\label{sec:Introduction}
Manipulation of heavy and bulky objects is a challenging task for manipulators and humanoid robots. An object is considered heavy if the manipulator's joint torques are not large enough to balance the object weight while lifting it off the ground. Thus, heavy objects cannot be manipulated with the usual pick-and-place strategy due to actuator saturation.
Consider the manipulation scenario shown in Fig.~\ref{Fig:Motivation}, where a heavy object has to be moved from an initial pose $\mathcal{C}_O$ to a final pose $\mathcal{C}_F$ by a dual-armed robot. The object has to negotiate a step during the manipulation, which implies that the final pose cannot be achieved either by pick-and-place strategies or by pushing. One possible way to move the object and negotiate the step is to use a sequence of pivoting motions, which we call object gaiting; this is a common strategy used by humans to manipulate heavy objects. Therefore, the goal of this paper is to develop an algorithmic approach to compute a plan for manipulating heavy objects by a sequence of pivoting motions.
\begin{figure}[t]
\centering
\includegraphics[scale=0.4]{Figures/Motivation.pdf}
\caption{Dual-handed manipulation of a heavy object between two given poses $\mathcal{C}_O$ and $\mathcal{C}_F$ by a sequence of pivoting motions.}
\label{Fig:Motivation}
\end{figure}
In a pivoting motion, we move the object while maintaining point or line contact with the environment. A point contact acts like a spherical joint, whereas a line contact acts like a revolute joint. The location and axes of these joints change during a gaiting motion. These joints are force-closed joints and can only be implemented through adequate frictional force at the object-ground contact that prevents slippage. Thus, a plan for pivoting operations consists of (a) {\em Motion plan}: a sequence of joint angles of the manipulators (that are within joint limits) and the corresponding object poses that maintain contact with the ground, and (b) {\em Force plan}: a sequence of joint torques that are within the actuator limits and ensure that there is enough force at the object-ground contact to prevent slippage. Furthermore, we also want to ensure that the manipulator does not lose its grasp of the object and that there is no slippage at the hand-object contact. In this paper, we will focus on the motion planning problem. We have studied the force planning (or force synthesis) problem in~\cite{Patankar2020}, and we will combine it with our motion plan to generate the torques that achieve the motion.
The key challenge in solving the motion planning problem is that the kinematic constraints of the object maintaining a spherical or a revolute joint with the ground during the motion correspond to nonlinear manifold constraints in the joint space of the manipulator. In sampling-based motion planning in joint space ($\mathbb{J}$-space), these constraints are hard to deal with, although there have been some efforts in this direction~\cite{BerensonSK11,JailletP12,Stilman10,KimU16,YaoK07, bonilla2015sample, KingstonMK2019}. Furthermore, in manipulation by gaiting, where we are performing a sequence of pivoting operations, these manifold constraints are not known beforehand, since they depend on the choice of the pivot points (or lines), which has to be computed as a part of the plan. In this paper, we present a novel task-space ($\mathbb{T}$-space) based approach for generating the motion plan that exploits the fact that the kinematic constraints of a revolute or spherical joint constrain the motion of the object to a subgroup of $SE(3)$.
We present a two-step approach for computing the motion plan. In the first step, we develop an algorithm to compute a sequence of intermediate poses for the object to go from the initial to the goal pose. Two consecutive intermediate poses implicitly determine a point or line on the object and the ground that stays fixed during motion, thus encoding motion about a revolute or a spherical joint. In the second step, we use Screw Linear Interpolation (ScLERP) to determine a task-space path between two intermediate poses, along with resolved motion rate control (RMRC)~\cite{whitney1969resolved,Pieper68} to convert the task-space path to a joint-space path. The advantage of using ScLERP is that it automatically satisfies the kinematic motion constraints during the pivoting motion without explicitly encoding them~\cite{Sarker2020}.
Thus, the joint space path that we compute along with the object path automatically ensures that the kinematic contact constraints are satisfied. This computationally efficient approach for motion planning for manipulation by pivoting is the key contribution of this paper. We also show that our motion plan can be combined with the second order cone programming (SOCP) based approach to compute joint torques and grasping forces~\cite{Patankar2020}, while ensuring that all no-slip constraints at the contacts and actuator limits are satisfied. We demonstrate our approach in simulation using a dual-armed Baxter robot.
\section{Related Work}
The use of external environment contacts to enhance the in-hand manipulation capability was first studied by Chavan-Dafle in \cite{Dafle2014}. More recently, Hou \textit{et al.} have referred to the use of environment contact as \textit{shared grasping}, wherein they treat the environment as an additional finger \cite{hou2020manipulation}. They have provided a stability analysis of shared grasping by using \textit{Hybrid Force-Velocity Control} (HFVC).
Murooka \textit{et al.} \cite{Murooka2015} proposed a method for pushing a heavy object with an arbitrary region of a humanoid robot's body. Polverini \textit{et al.} \cite{Polverini2020} also developed a control architecture for a humanoid robot which is able to exploit the complexity of the environment to perform the pushing task for a heavy object.
Pivoting was first introduced by Aiyama \textit{et al.} \cite{aiyama1993pivoting} as a new method of graspless/non-prehensile manipulation. Based on this method, Yoshida \textit{et al.} \cite{yoshida2007pivoting,yoshida2008whole,yoshida2010pivoting} developed a whole-body motion planner for a humanoid robot to autonomously plan a pivoting
strategy for manipulating bulky objects. They first planned a sequence of collision-free Reeds and Shepp paths (especially straight and circular paths in $\mathbb{R}^2$), and then converted these paths into a sequence of pivoting motions. However, this method restricts the motion to Reeds and Shepp curves in order to satisfy a nonholonomic constraint that is not always required. Thus, it is not a general, efficient, or optimal way to manipulate objects between two given poses, especially when there are no obstacles in the workspace.
Hence, we propose a general gait planning method, formulated as an optimization problem over the \textit{intermediate poses}, and use ScLERP to manipulate the object by gaiting between any two arbitrary poses.
\section{Preliminaries}
\noindent
\textbf{Quaternions and Rotations}: The quaternions are the set of hypercomplex numbers, $\mathbb{H}$. A quaternion $Q \in \mathbb{H}$ can be represented as a 4-tuple $Q = (q_0, \boldsymbol{q}_r) = (q_0, q_1, q_2, q_3)$, $q_0 \in \mathbb{R}$ is the real scalar part,
$\boldsymbol{q}_r=(q_1, q_2, q_3) \in \mathbb{R}^3$ corresponds to the imaginary part.
The conjugate, norm, and inverse of a quaternion $Q$ is given by
$Q^* = (q_0, -\boldsymbol{q}_r)$, $\lVert Q \rVert = \sqrt{Q Q^*} = \sqrt{Q^* Q}$,
and $Q^{-1} = Q^*/{\lVert Q \rVert}^2$, respectively. Addition and multiplication of two quaternions
$P = (p_0, \boldsymbol{p}_r)$ and
$Q = (q_0, \boldsymbol{q}_r)$ are performed as $P+Q = (p_0 + q_0, \boldsymbol{p}_r + \boldsymbol{q}_r)$ and $PQ = (p_0 q_0 - \boldsymbol{p}_r \cdot \boldsymbol{q}_r, p_0 \boldsymbol{q}_r + q_0 \boldsymbol{p}_r + \boldsymbol{p}_r \times \boldsymbol{q}_r)$.
The quaternion $Q$ is a \textit{unit quaternion}
if ${\lVert Q \rVert} = 1$, and consequently, $Q^{-1} = Q^*$. Unit quaternions are used to represent the set of all rigid body rotations, $SO(3)$, the Special Orthogonal group of dimension $3$. Mathematically, $SO(3)=\left\{\boldsymbol{R} \in \mathbb{R}^{3 \times 3} \mid \boldsymbol{R}^{\mathrm{T}} \boldsymbol{R}=\boldsymbol{R} \boldsymbol{R}^{\mathrm{T}}=\boldsymbol{I}_3,\ \left|\boldsymbol{R}\right|=1\right\}$, where $\boldsymbol{I}_3$ is a $3\times3$ identity matrix and $\left| \cdot \right|$ is the determinant operator. The unit quaternion corresponding to a rotation is $Q_R = (\cos\frac{\theta}{2}, \boldsymbol{l} \sin\frac{\theta}{2})$, where $\theta \in [0,\pi]$ is the angle of rotation about a unit axis $\boldsymbol{l} \in \mathbb{R}^3$.
\noindent
\textbf{Dual Quaternions and Rigid Displacements}:
In general, dual numbers are defined as $d = a + \epsilon b$ where $a$ and $b$ are elements of an algebraic field, and $\epsilon$ is a \textit{dual unit} with $\epsilon ^ 2 = 0, \epsilon \ne 0$.
Similarly, a dual quaternion $D$ is defined as $D= P + \epsilon Q$
where $P, Q \in \mathbb{H}$. The conjugate, norm, and inverse of the dual quaternion $D$ is represented as $D^* = P^* + \epsilon Q^*$, $\lVert D \rVert = \sqrt{D D^*} = \sqrt{P P^* + \epsilon (PQ^* + QP^*)}$, and $D^{-1} = D^*/{\lVert D \rVert}^2$,
respectively. Another definition for the conjugate of $D$ is represented as $D^\dag = P^* - \epsilon Q^*$. Addition and multiplication of two dual quaternions $D_1= P_1 + \epsilon Q_1$ and $D_2= P_2 + \epsilon Q_2$ are performed as $D_1 + D_2 = (P_1 + P_2) + \epsilon (Q_1 + Q_2)$ and $D_1 D_2 = (P_1 P_2) + \epsilon (P_1 Q_2 + Q_1 P_2) $.
The dual quaternion $D$ is a \textit{unit dual quaternion} if ${\lVert D \rVert} = 1$, i.e., ${\lVert P \rVert} = 1$ and $PQ^* + QP^* = 0$, and consequently, $D^{-1} = D^*$. Unit dual quaternions can be used to represent the group of rigid body displacements, $SE(3) = \mathbb{R}^3 \times SO(3)$, $S E(3)=\left\{(\boldsymbol{R}, \boldsymbol{p}) \mid \boldsymbol{R} \in S O(3), \boldsymbol{p} \in \mathbb{R}^{3}\right\}$. An element $\boldsymbol{T} \in SE(3)$, which is a pose of the rigid body, can also be expressed by a $4 \times 4$ homogeneous transformation matrix as
$\boldsymbol{T} = \left[\begin{smallmatrix}\boldsymbol{R}&\boldsymbol{p}\\\boldsymbol{0}&1\end{smallmatrix}\right]$ where $\boldsymbol{0}$ is a $1 \times 3$ zero vector. A rigid body displacement (or transformation) is represented by a unit dual quaternion $D_T = Q_R + \frac{\epsilon}{2} Q_p Q_R$ where $Q_R$ is the unit quaternion corresponding to rotation and $Q_p = (0, \boldsymbol{p}) \in \mathbb{H}$ corresponds to the translation.
\noindent
\textbf{Screw Displacement}: The Chasles-Mozzi theorem states that the general Euclidean displacement/motion of a rigid body from the identity pose $\boldsymbol{I}$ to $\boldsymbol{T} = (\boldsymbol{R},\boldsymbol{p}) \in SE(3)$
can be expressed as a rotation $\theta$ about a fixed axis $\mathcal{S}$, called the \textit{screw axis}, and a translation $d$ along that axis (see Fig.~\ref{Fig:ScrewDisplacement}). Plücker coordinates can be used to represent the screw axis by $\boldsymbol{l}$ and $\boldsymbol{m}$, where $\boldsymbol{l} \in \mathbb{R}^3$ is a unit vector that represents the direction of the screw axis $\mathcal{S}$, $\boldsymbol{m} = \boldsymbol{r} \times \boldsymbol{l}$, and $\boldsymbol{r} \in \mathbb{R}^3$ is an arbitrary point on the axis. Thus, the screw parameters are defined as $\boldsymbol{l}, \boldsymbol{m}, \theta, d$.
The screw displacements can be expressed by the dual quaternions as $D_T = Q_R + \frac{\epsilon}{2} Q_p Q_R = (\cos \frac{\Phi}{2}, L \sin \frac{\Phi}{2})$ where $\Phi = \theta + \epsilon d$ is a dual number and $L = \boldsymbol{l} + \epsilon \boldsymbol{m}$ is a
dual vector.
A power of the dual quaternion $D_T$ is then defined as $D_T^{\tau} = (\cos \frac{\tau \Phi}{2}, L \sin \frac{\tau \Phi}{2})$, $\tau >0$.
\begin{figure}[!htbp]
\centering
\includegraphics[scale=0.57]{Figures/ScrewDisplacement.pdf}
\caption{Screw displacement from pose $\mathcal{C}_1$ to pose $\mathcal{C}_2$.}
\label{Fig:ScrewDisplacement}
\end{figure}
\noindent
\textbf{Screw Linear Interpolation (ScLERP)}: To perform a one degree-of-freedom smooth screw motion (with constant rotation and translation rates) between two object poses in $SE(3)$, screw linear interpolation (ScLERP) can be used. ScLERP provides a \textit{straight line} in $SE(3)$, i.e., the constant-screw path between the two given poses, which may be regarded as the shortest path between them in $SE(3)$.
If the poses are represented by unit dual quaternions $D_{1}$ and $D_{2}$, the path provided by the ScLERP is derived by $D(\tau) = D_1 (D_1^{-1}D_2)^{\tau}$ where $ \tau \in[0,1]$ is a scalar path parameter.
As $\tau$ increases from 0 to 1, the object moves between two poses along the path
$D(\tau)$ by the rotation $\tau \theta$ and translation $\tau d$. Let $D_{12} = D_1^{-1}D_2$. To compute $D_{12}^\tau$, the screw coordinates $\boldsymbol{l}, \boldsymbol{m}, \theta, d$ are first extracted from $D_{12} = P + \epsilon Q = (p_0,\boldsymbol{p}_r) + \epsilon (q_0,\boldsymbol{q}_r) = (\cos\frac{\theta}{2}, \boldsymbol{l} \sin\frac{\theta}{2}) + \epsilon Q$ by $\boldsymbol{l} = \boldsymbol{p}_r/ \lVert \boldsymbol{p}_r \lVert $, $\theta = 2 \, \mathrm{atan2}(\lVert \boldsymbol{p}_r \lVert, p_0)$, $d = \boldsymbol{p} \cdot \boldsymbol{l}$, and $\boldsymbol{m} = \frac{1}{2} (\boldsymbol{p} \times \boldsymbol{l} + (\boldsymbol{p}-d \boldsymbol{l})\cot \frac{\theta}{2})$ where $\boldsymbol{p}$ is derived from $2QP^* = (0, \boldsymbol{p})$ and $\mathrm{atan2}(\cdot)$ is the two-argument arctangent. Then, $D_{12}^\tau = (\cos \frac{\tau \Phi}{2}, L \sin \frac{\tau \Phi}{2})$ is directly derived from $\left(\cos \frac{\tau \theta}{2}, \sin \frac{\tau \theta}{2}\boldsymbol{l}\right)+\epsilon \left( -\frac{\tau d}{2}\sin \frac{\tau \theta}{2}, \frac{\tau d}{2}\cos \frac{\tau \theta}{2}\boldsymbol{l}+\sin \frac{\tau \theta}{2}\boldsymbol{m} \right) $. Note that $\theta = 0$ corresponds to a pure translation between the two poses, in which case the screw axis is at infinity and the translation is interpolated linearly.
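To make the recipe concrete, the following Python sketch (ours, a minimal transcription of the formulas above rather than production code) implements ScLERP with dual quaternions stored as pairs of 4-vectors $[w,x,y,z]$; the pure-translation case is handled separately, as noted above:
\begin{verbatim}
import numpy as np

def qmul(p, q):                    # quaternion product
    w1, v1, w2, v2 = p[0], p[1:], q[0], q[1:]
    return np.hstack([w1*w2 - v1 @ v2, w1*v2 + w2*v1 + np.cross(v1, v2)])

def qconj(q):
    return np.hstack([q[0], -q[1:]])

def dq_mul(d1, d2):                # (P1 + eps Q1)(P2 + eps Q2)
    (P1, Q1), (P2, Q2) = d1, d2
    return (qmul(P1, P2), qmul(P1, Q2) + qmul(Q1, P2))

def dq_conj(d):                    # equals the inverse for unit dual quats
    return (qconj(d[0]), qconj(d[1]))

def dq_pow(d, tau):
    # screw parameters (l, m, theta, d) of a unit dual quaternion, then
    # D^tau = (cos(tau*Phi/2), L sin(tau*Phi/2)) as in the text
    P, Q = d
    if np.linalg.norm(P[1:]) < 1e-12:          # pure translation
        t = 2 * qmul(Q, qconj(P))[1:]
        return (np.array([1., 0., 0., 0.]), np.hstack([0., tau * t / 2]))
    theta = 2 * np.arctan2(np.linalg.norm(P[1:]), P[0])
    l = P[1:] / np.linalg.norm(P[1:])
    t = 2 * qmul(Q, qconj(P))[1:]              # translation vector p
    dist = t @ l                               # translation along the axis
    m = 0.5 * (np.cross(t, l) + (t - dist * l) / np.tan(theta / 2))
    ct, st = np.cos(tau * theta / 2), np.sin(tau * theta / 2)
    return (np.hstack([ct, st * l]),
            np.hstack([-tau*dist/2 * st, tau*dist/2 * ct * l + st * m]))

def sclerp(d1, d2, tau):           # D(tau) = D1 (D1^-1 D2)^tau
    return dq_mul(d1, dq_pow(dq_mul(dq_conj(d1), d2), tau))

# example: halfway along a 120-degree rotation about the z-axis
d1 = (np.array([1., 0., 0., 0.]), np.zeros(4))
th = 2 * np.pi / 3
d2 = (np.array([np.cos(th/2), 0., 0., np.sin(th/2)]), np.zeros(4))
print(sclerp(d1, d2, 0.5)[0])      # -> rotation by 60 degrees
\end{verbatim}
\noindent For multiple end-effectors, running \texttt{sclerp} with a shared parameter $\tau$ on each pose pair reproduces the coordinated motion used later in the paper.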
\section{Problem Statement}
\label{sec:ProblemStatement}
Let us assume that we want to manipulate a heavy cuboid object quasi-statically by using $n$ manipulators, while maintaining contact with environment, from an initial pose $\mathcal{C}_O \in SE(3)$ to a final pose $\mathcal{C}_F \in SE(3)$.
We also assume that the object always remains in the manipulators' workspace.
Figure~\ref{Fig:Cube_Manipulator} shows a cuboid object in contact with the environment at the vertex $v$ and also with the $i$-th manipulator's end-effector at the contact point $c_i$ (where $i=1,..,n$). Contact coordinate frames \{$c_i$\} and \{$v$\} are attached to the object at each manipulator and environment contact, respectively, such that the $\bm{n}$-axis of each frame is normal (inward) to the object surface and the two other axes, $\bm{t}$ and $\bm{o}$, are tangent to the surface. The coordinate frame \{$b$\} is attached to the object center of mass, the coordinate frame \{$e_i$\} is attached to the $i$-th end-effector, and \{$s$\} is the inertial coordinate frame.
Let $\boldsymbol{\Theta}^i = [\theta_1^i, \theta_2^i, \cdots, \theta_{l_i}^i] \in \mathbb{R}^{l_i}$ be the vector of joint angles of the $i$-th $l_i$-DoF manipulator, which represents the \textit{joint space} ($\mathbb{J}$-space) or the \textit{configuration space} ($\mathbb{C}$-space) of the manipulator.
Moreover, $\mathcal{E}^i \in SE(3)$ is defined as the pose of the end-effector of the $i$-th manipulator where $\mathcal{E}^i = \mathcal{FK}(\boldsymbol{\Theta}^i)$ and $\mathcal{FK}(\cdot)$ is the manipulator forward kinematics map. Therefore, $\boldsymbol{\Theta}_O^i \in \mathbb{R}^{l_i}$ and $\mathcal{E}_O^i \in SE(3)$ represent the initial configuration of the $i$-th manipulator (in $\mathbb{J}$-space) and pose of $i$-th end-effector, respectively, corresponding to the object initial pose $\mathcal{C}_O$ and $\boldsymbol{\Theta}_F^i \in \mathbb{R}^{l_i}$ and $\mathcal{E}_F^i \in SE(3)$ represent the final configuration of the $i$-th manipulator (in $\mathbb{J}$-space) and pose of $i$-th end-effector, respectively, corresponding to the object final pose $\mathcal{C}_F$. We assume that the position of the manipulator-object contact $c_i$ is given and the transformation between the frames $\{e_i\}$ and $\{c_i\}$ remains constant during the manipulation, i.e., there is no relative motion at the contact interface.
Our motion planning problem is now defined as computing a sequence of joint angles $\boldsymbol{\Theta}^i(j)$, where $j=1,\cdots,m$, $\boldsymbol{\Theta}^i(1) = \boldsymbol{\Theta}^i_O$, $\boldsymbol{\Theta}^i(m) = \boldsymbol{\Theta}^i_F$, to manipulate the object while maintaining contact with the environment from its initial pose $\mathcal{C}_O$ to a final pose $\mathcal{C}_F$ when $(\mathcal{C}_O, \mathcal{E}_O^i, \boldsymbol{\Theta}_O^i)$ and $(\mathcal{C}_F, \mathcal{E}_F^i)$ ($i=1,...,n$) are given. Moreover, our force planning problem is computing the minimum contact wrenches required to be applied at $c_i$ during the object manipulation to balance the external wrenches (e.g., gravity) and also the environment contact wrenches using the method we have presented in \cite{Patankar2020}.
\begin{figure}[!htbp]
\centering
\includegraphics[scale=0.5]{Figures/Cuboid_Manipulator.pdf}
\caption{A cuboid object being tilted about one of its vertices.}
\label{Fig:Cube_Manipulator}
\end{figure}
\textbf{Solution Approach Overview}: Generally speaking, to move an object while maintaining contact we can use two primitive motions, namely, (1) \textit{sliding} on a vertex, edge, or face of the object in contact with the environment (Fig.~\ref{Fig:SRP}-\subref{Fig:SRP_S}) and
(2) \textit{pivoting} about an axis passing through a vertex, edge, or face of the object in contact with the environment (Fig.~\ref{Fig:SRP}-\subref{Fig:SRP_T},\subref{Fig:SRP_P}, Fig.~\ref{Fig:Motivation}). All other motions can be made by combining these primitive motions. Note that we consider \textit{tumbling} as a special case of pivoting when the axis of rotation passes through an object edge or face. Manipulation by sliding (or pushing) can be useful in many scenarios like picking a penny off a table. However, in heavy and bulky object manipulation scenarios, sliding may not give feasible solutions. Thus, in this paper, we will focus on manipulation using the pivoting primitive.
Our \textit{manipulation strategy} can be described briefly as follows.
(i) Given the initial and final pose of the object, we first determine if multiple pivoting moves have to be made and, if necessary, compute intermediate poses of the object. (ii) Using the dual quaternion representation of these poses, we compute paths in $SE(3)$ using ScLERP for the object and end-effectors. These paths automatically satisfy all the basic task-related constraints (without any additional explicit representation of the constraints).
(iii) We use the (weighted) pseudoinverse of the Jacobian to derive the joint angles in the $\mathbb{J}$-space from the computed $\mathbb{T}$-space path. (iv) Finally, we compute the minimum required contact wrenches and manipulators' joint torques required for object manipulation. Note that the steps (ii) to (iv) can be done either sequentially or they can be interleaved in a single discrete time-step.
\section{Pivoting}
Pivoting is a motion where an object is moved while maintaining a point or line contact with a support surface. When an object maintains a point contact, the constraints on motions are the same as those imposed by a spherical joint. Thus, the motion of the object is restricted to $SO(3)$, which is a subgroup of $SE(3)$, and the axis of rotation passes through the contact point. During pivoting with line contact (or tumbling), the constraint on the motion is the same as that imposed by a revolute joint with the axis of the joint being the line of contact. Thus, in this case, the motion of the object is restricted to $SO(2)$, which is also a subgroup of $SE(3)$. This mathematical structure of pivoting motions is key to our approach, as we discuss below.
Suppose an object can reach a goal pose from a start pose using a single pivoting motion. This can happen when the start and the goal poses are such that there is a common vertex, say $v$, between the start and goal poses that lies on the support surface (see Fig.~\ref{Fig:SRP}-\subref{Fig:SRP_P}). In such situations, when planning in $\mathbb{T}$-space, one should be careful about the interpolation scheme for generating the motion of the object. If we use linear interpolation between the end poses in the space of parameters (a popular choice being linear interpolation for position and spherical linear interpolation for orientation, using a unit quaternion parameterization of orientation), the resulting intermediate poses will not ensure that the contact between the object and the support surface is maintained. The motion obtained will also change with the choice of the coordinate frames for the initial and final pose. The advantage of using ScLERP is that it is coordinate invariant. Furthermore, since pivoting motions belong to a subgroup of $SE(3)$, ScLERP ensures that all the intermediate poses will lie in the same subgroup that contains the initial and goal pose (i.e., all intermediate poses will have the vertex $v$ fixed to the support surface). Thus, it is not necessary to explicitly enforce the pivoting constraints for motion planning. Lemma \ref{lemma:FixedPoint} formalizes this discussion.
\begin{figure}[!htbp]
\centering
\subfloat[]{\includegraphics[scale=0.36]{Figures/S.pdf}\label{Fig:SRP_S}}
\subfloat[]{\includegraphics[scale=0.36]{Figures/T.pdf}\label{Fig:SRP_T}}
\subfloat[]{\includegraphics[scale=0.36]{Figures/P.pdf}\label{Fig:SRP_P}}
\caption{Examples of the primitive motions for manipulating polyhedral objects by exploiting the environment contact, (a) sliding or pushing on a face, (b) pivoting about an edge (tumbling), (c) pivoting about a vertex.}
\label{Fig:SRP}
\end{figure}
\begin{lemma}
Let $D_1 = Q_{R1} + \frac{\epsilon}{2} Q_{p1} Q_{R1}$ and $D_2 = Q_{R2} + \frac{\epsilon}{2} Q_{p2} Q_{R2}$ be two unit dual quaternions representing two poses of a rigid body. If a point $\boldsymbol{v} \in \mathbb{R}^3$ in the rigid body has the same position in both poses, the position of this point remains the same in all the poses provided by the ScLERP $D(\tau) = D_1 (D_1^{-1}D_2)^{\tau}$ where $ \tau \in[0,1]$.
\label{lemma:FixedPoint}
\end{lemma}
\begin{proof}
Let $Q_v = (0,\boldsymbol{v}) \in \mathbb{H}$ be a pure quaternion representing the point $\boldsymbol{v}$. Since the point $\boldsymbol{v}$ has the same position in both poses $D_1$ and $D_2$, therefore
\begin{align}
D_1(1+\epsilon Q_v)D_1^\dag & = D_2(1+\epsilon Q_v)D_2^\dag, \\
\therefore \,\, Q_{p2} - Q_{p1} & = Q_{R1} Q_v Q_{R1}^* - Q_{R2} Q_v Q_{R2}^*.
\label{eq:Qp2_Qp1}
\end{align}
Therefore, the transformation from $D_1$ to $D_2$ is derived as
\begin{equation}
\begin{split}
D_{12} & = D_1^{*}D_2 = Q_{R1}^* Q_{R2} + \frac{\epsilon}{2}Q_{R1}^* (Q_{p2} - Q_{p1}) Q_{R2}\\
&= Q_{R1}^* Q_{R2} + \frac{\epsilon}{2}(Q_v Q_{R1}^* Q_{R2} - Q_{R1}^* Q_{R2} Q_v).
\end{split}
\label{eq:D_1D_2_}
\end{equation}
By representing the rotation $Q_{R1}^* Q_{R2}$ as $(\cos\frac{\theta}{2}, \boldsymbol{l} \sin\frac{\theta}{2}) \in \mathbb{H}$ (where $\boldsymbol{l}$ is a unit vector along the screw axis and $\theta$ is rotation about the screw axis), ($\ref{eq:D_1D_2_}$) can be simplified as
\begin{equation}
D_{12} = (\cos\frac{\theta}{2}, \boldsymbol{l}\sin\frac{\theta}{2}) + \epsilon (0, \boldsymbol{v} \times \boldsymbol{l} \sin\frac{\theta}{2}) = P + \epsilon Q.
\label{eq:D_12}
\end{equation}
The translation $d$ along the screw axis is determined by $d = \boldsymbol{p} \cdot \boldsymbol{l}$ where $\boldsymbol{p}$ is derived from $2QP^* = (0, \boldsymbol{p})$. By using (\ref{eq:D_12}),
\begin{equation}
\boldsymbol{p} = 2\,\boldsymbol{v} \times \boldsymbol{l} \sin\frac{\theta}{2} \cos\frac{\theta}{2} - 2\,(\boldsymbol{v} \times \boldsymbol{l}) \times \boldsymbol{l} \sin^2\frac{\theta}{2},
\label{eq:p}
\end{equation}
and $d = \boldsymbol{p} \cdot \boldsymbol{l} = 0$. Therefore, the transformation $D(\tau)$ is a pure rotation about the fixed point $\boldsymbol{v}$ on the screw axis.
\end{proof}
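Lemma \ref{lemma:FixedPoint} can also be probed numerically. Since ScLERP traces the one-parameter constant-screw motion between the two poses, the same path can be generated on the homogeneous matrices as $\boldsymbol{T}(\tau) = \boldsymbol{T}_1 \exp\big(\tau \log(\boldsymbol{T}_1^{-1}\boldsymbol{T}_2)\big)$; the sketch below (ours) uses this form to check that a common fixed point stays fixed along the whole path:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm, logm
from scipy.spatial.transform import Rotation

v = np.array([0.3, -0.2, 0.5])               # the common fixed point

def pose_fixing_v(R):                        # choose p so that R v + p = v
    T = np.eye(4); T[:3, :3] = R; T[:3, 3] = v - R @ v
    return T

T1 = pose_fixing_v(Rotation.random(random_state=1).as_matrix())
T2 = pose_fixing_v(Rotation.random(random_state=2).as_matrix())

L = logm(np.linalg.inv(T1) @ T2)             # constant screw generator
for tau in np.linspace(0, 1, 11):
    T = T1 @ expm(tau * L)
    assert np.allclose(T[:3, :3] @ v + T[:3, 3], v, atol=1e-8)
print("v is fixed along the entire ScLERP path")
\end{verbatim}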
Furthermore, when using multiple manipulators to pivot an object and we assume that there is no relative motion at the hand-object contact, the motion of each end-effector can be obtained independently by ScLERP using a shared interpolation parameter. This will ensure that the constraint that the relative end-effector poses of the manipulators are unchanged during motion is maintained without explicitly encoding it (this follows from Lemma $3$ of~\cite{Sarker2020} and so we do not repeat the formal statements and proofs here). In the next section, we use pivoting as a primitive motion for motion planning between any two given poses in $\mathbb{T}$-space.
\section{Motion Planning in Task Space}
\label{sec:MotionPlanningTS}
To manipulate a polyhedral object between any two given poses $\mathcal{C}_O$ and $\mathcal{C}_F$ while maintaining contact with the environment, multiple pivoting moves can be combined by defining a set of appropriate \textit{intermediate poses}. The set of intermediate poses $\mathcal{C}_I = \{\mathcal{C}_I^1,\mathcal{C}_I^2, \cdots, \mathcal{C}_I^h \}$ is defined in such a way that the motion between any two successive poses in $\{\mathcal{C}_O, \mathcal{C}_I, \mathcal{C}_F \}$ can be represented by a single constant-screw pivoting move.
Thus, we can conveniently represent the motion between any two given object poses $\mathcal{C}_O$ and $\mathcal{C}_F$ in $SE(3)$ by using ScLERP to ensure that the object maintains its contact with the environment continuously. The object manipulation strategies on a flat surface can be categorized into 3 cases; (\textbf{Case I}) If $\mathcal{C}_O$ and $\mathcal{C}_F$ have a contact edge or vertex in common, the final pose can be achieved by pivoting the object about the common point or edge (Fig.~\ref{Fig:SRP}-\subref{Fig:SRP_T},\subref{Fig:SRP_P}).
(\textbf{Case II}) If $\mathcal{C}_O$ and $\mathcal{C}_F$ do not have any edge or vertex in common but the same face of the object is in contact with the environment in both poses, different strategies can be considered. One strategy is to use a sequence of pivoting motions about the object edges (tumbling). In this motion, however, the object advances in discrete steps whose length depends on the object size, which may not be suitable for manipulating some objects like furniture.
In this situation, we can instead manipulate the object by \textit{object gaiting} (Fig.~\ref{Fig:IntermediateConfigs_Edges}-a), which is defined as a sequence of pivoting motions on two adjacent object vertices in contact
(see \ref{subsec:IntermediatePosesObjectGaiting} and \ref{subsec:GaitPlanning}).
(\textbf{Case III}) If
the adjacent or opposite faces of the object are in contact with the environment in the two poses, a combination of pivoting and gaiting is required to achieve the final pose, as shown in Fig.~\ref{Fig:Examples}. Depending on the manipulators' physical limitations, object gaiting is efficient only when a specific face of the object is in contact with the environment. For instance, manipulation on the longer edge of the cuboid shown in Fig.~\ref{Fig:IntermediateConfigs_Edges}-a may be more difficult than on the two other edges.
\begin{figure}[!htbp]
\centering
\subfloat[]{\includegraphics[scale=0.26]{Figures/CaseIV_A.pdf}}
\subfloat[]{\includegraphics[scale=0.26]{Figures/CaseIV_B.pdf}}
\subfloat[]{\includegraphics[scale=0.26]{Figures/CaseV_A.pdf}}
\caption{Examples of the object manipulation with primitive motions when two adjacent (a,b) or opposite (c) object faces are in contact with the environment in initial and final poses (P: Pivoting, G: Gaiting).}
\label{Fig:Examples}
\end{figure}
\subsection{Intermediate Poses in Object Gaiting}
\label{subsec:IntermediatePosesObjectGaiting}
Let us assume that the axes of the body frame $\{b\}$ are parallel to the cuboid edges and the inertia frame $\{s\}$ is attached to the supporting plane such that the $Z$-axis is perpendicular to the plane (Fig.~\ref{Fig:ObjectGaiting}). Three successive intermediate poses while pivoting about the vertex $a$ are shown in Fig.~\ref{Fig:ObjectGaiting}-a,b. The object is initially in the pose $\mathcal{C}_I^1 = (R_{1}, p_{1})$ (Fig.~\ref{Fig:ObjectGaiting}-a), holding on the contact edge $ab$. The angle $\gamma$ can be determined such that the object weight passes through the contact edge $ab$, to reduce the required contact forces during the manipulation. The pose $\mathcal{C}_I^2 = (R_{2}, p_{2})$ (Fig.~\ref{Fig:ObjectGaiting}-a) is achieved by rotating the object by a small angle $\beta$ about the edge passing through the vertex $a$; therefore, $R_{2} = R_{1} R_{x}({-\beta})$ and only the vertex $a$ is in contact. Note that the angle $\beta$ can be adjusted during the motion to allow the object to pass over small obstacles in the environment. Finally, the pose $\mathcal{C}_I^3 = (R_{3}, p_{3})$ (Fig.~\ref{Fig:ObjectGaiting}-b) is determined by rotating $\mathcal{C}_I^1$ by an angle $\alpha$ about the $Z$-axis through the vertex $a$; therefore, $R_{3} = R_Z({\alpha}) R_{1}$ and the edge $ab$ is again in contact with the environment. This procedure can also be repeated for the vertex $b$. Using these intermediate poses, ScLERP can be used to derive a smooth motion for object gaiting while maintaining contact with the environment.
\begin{figure}[!htbp]
\centering
\subfloat[]{\includegraphics[scale=0.55]{Figures/I2.pdf}} \quad \quad
\subfloat[]{\includegraphics[scale=0.55]{Figures/I3.pdf}}
\caption{Intermediate poses in object gaiting while pivoting.}
\label{Fig:ObjectGaiting}
\end{figure}
\subsection{Gait Planning}
\label{subsec:GaitPlanning}
In order to manipulate the object from an initial pose $\mathcal{C}_O$ to a final pose $\mathcal{C}_F$ by object gaiting, a sequence of rotation angles $\alpha$ between these two poses must be properly determined (Fig.~\ref{Fig:IntermediateConfigs_Edges}-a). Let $k$ be the number of required edge contacts and $\bm{\alpha} = [\alpha_1, \cdots, \alpha_k]^T \in \mathbb{R}^k$ be the angles between the contact edges as shown in Fig.~\ref{Fig:IntermediateConfigs_Edges}-b. We can find $\bm{\alpha}$ by solving the optimization problem
\begin{equation}
\begin{aligned}
&{\underset {\bm{\alpha}}{\operatorname {minimize}}}&& \lVert \bm{\alpha} \rVert \\[-8pt]
&\operatorname {subject\;to} && \boldsymbol{x} = \pm w \sum_{i=1}^k{\left( -1 \right) ^{i}\left[ \begin{array}{@{\mkern0mu} c @{\mkern0mu}}
\cos \left( \alpha _O \pm \bar{\alpha} \right)\\
\sin \left( \alpha _O \pm \bar{\alpha} \right)\\
\end{array} \right]},\\[-5pt]
&&& \alpha _{F} - \alpha _O = \pm \sum_{i=1}^k{\left( -1 \right) ^{i}\alpha_i },\\
&&& \left| \alpha _i \right| \leq \alpha_{\text{max}},\ \ i=1,...,k,
\end{aligned}
\label{eq:GaitPlanning}
\end{equation}
where $\bar{\alpha} = \sum_{j=1}^i{\left( -1 \right) ^{j}\alpha _j }$, $\alpha_{\text{max}}$ is the maximum allowed rotation angle, $w$ is the length of the contact edge, and $\alpha_{O}$ and $\alpha_{F}$ represent the orientations of the contact edges $a_O b_O$ and $a_F b_F$ relative to the $X$-axis, respectively. The negative sign corresponds to the case where the first gait begins from the vertex $a_O$, in which $\boldsymbol{x} = \boldsymbol{b}_{F} - \boldsymbol{a}_{O}$ if $k$ is an odd number and $\boldsymbol{x} = \boldsymbol{a}_{F} - \boldsymbol{a}_{O}$ if $k$ is an even number; the positive sign corresponds to the case where the first gait begins from the vertex $b_O$, in which $\boldsymbol{x} = \boldsymbol{a}_{F} - \boldsymbol{b}_{O}$ if $k$ is an odd number and $\boldsymbol{x} = \boldsymbol{b}_{F} - \boldsymbol{b}_{O}$ if $k$ is an even number. Here $\boldsymbol{a}_{O}$, $\boldsymbol{b}_{O}$, $\boldsymbol{a}_{F}$, $\boldsymbol{b}_{F} \in \mathbb{R}^2$ are the coordinates of the contact vertices in the $\mathcal{C}_O$ and $\mathcal{C}_F$ poses along the $X$- and $Y$-axes of the frame $\{s\}$. In the optimization problem (\ref{eq:GaitPlanning}), the first constraint represents the position of the last contact vertex ($a_F$ or $b_F$) relative to the first contact vertex ($a_O$ or $b_O$) in the $X$ and $Y$ directions. The second constraint represents the relative angle between the contact edges $a_O b_O$ and $a_F b_F$, and the last constraint accounts for the manipulators' limitations in rotating the object.
In order to find the feasible minimum number of edge contacts, $k$, required to manipulate the object between the two poses $\mathcal{C}_O$ and $\mathcal{C}_F$, we solve (\ref{eq:GaitPlanning}) repeatedly for increasing values of $k$ until a feasible solution is found.
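As an illustration, the branch of (\ref{eq:GaitPlanning}) in which the first gait begins from $a_O$ (the negative signs) can be transcribed for an off-the-shelf solver as sketched below. The sketch is ours: the numerical data are those of the flat-surface example of Fig.~\ref{Fig:Example_Flat}, SLSQP is an arbitrary choice of solver, and the initial guess may need tuning for other instances:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

w, alpha_max = 0.2, np.radians(35)
alpha_O, alpha_F = 0.0, np.radians(-80)
x_target = np.array([0.13, 0.13])      # here b_F - a_O (odd k)

def solve_gait(k):
    sgn = np.array([(-1) ** i for i in range(1, k + 1)])   # (-1)^i
    def residuals(al):                 # the two equality constraints
        abar = np.cumsum(sgn * al)     # running sums bar-alpha_i
        ang = alpha_O - abar           # negative-sign branch
        pos = -w * sum(s * np.array([np.cos(t), np.sin(t)])
                       for s, t in zip(sgn, ang))
        return np.hstack([pos - x_target,
                          (alpha_F - alpha_O) + sgn @ al])
    return minimize(lambda al: al @ al,          # minimise ||alpha||^2
                    x0=0.1 * np.ones(k),
                    bounds=[(-alpha_max, alpha_max)] * k,
                    constraints=[{'type': 'eq', 'fun': residuals}],
                    method='SLSQP')

for k in range(1, 9):                  # smallest feasible k wins
    res = solve_gait(k)
    if res.success:
        print(k, np.degrees(res.x)); break
\end{verbatim}
\noindent As a consistency check, the four angles reported in the caption of Fig.~\ref{Fig:Example_Flat} satisfy both equality constraints of this transcription.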
\begin{figure}[!htbp]
\centering
\subfloat[]{\includegraphics[scale=0.4]{Figures/ObjectGaiting.pdf}} \quad \,
\subfloat[]{\includegraphics[scale=0.34]{Figures/IntermediateConfigs_Edges_Obstacle.pdf}}
\caption{A sequence of contact edges for object gaiting between two poses $\mathcal{C}_O$ and $\mathcal{C}_F$ when the first gait begins from the vertex $a_O$.}
\label{Fig:IntermediateConfigs_Edges}
\end{figure}
\section{Mapping from $\mathbb{T}$-space to $\mathbb{J}$-space}
Since it is assumed that the transformation between the end-effector frame $\{e_i\}$ and contact frame $\{c_i\}$ remains constant,
after planning a path in the $\mathbb{T}$-space, we can compute the end-effector poses $\mathcal{E}_i$ for each object intermediate pose.
Then, we use the ScLERP for each of these end-effector poses individually with a shared screw parameter \cite{daniilidis1999hand,kavan2006dual}. To find the joint angles of the manipulators in $\mathbb{J}$-space, we use the (weighted) pseudoinverse of the manipulators' Jacobian \cite{Klein1983}.
Let $\boldsymbol{\Theta}_{t}$ and $\boldsymbol{\chi}_{t}$ be the vector of joint angles and the end-effector pose at step $t$, respectively.
For each manipulator, given the current end-effector pose $\boldsymbol{\chi}_{t}$ and the target end-effector pose $\boldsymbol{\chi}_{t+1}$ (obtained from ScLERP), we obtain the corresponding joint angles $\boldsymbol{\Theta}_{t+1}$ as
\begin{equation}
\boldsymbol{\Theta}_{t+1} = \boldsymbol{\Theta}_{t} + \lambda \boldsymbol{J}(\boldsymbol{\Theta}_{t}) (\boldsymbol{\chi}_{t+1} - \boldsymbol{\chi}_{t}),
\label{eq:IK}
\end{equation}
where $0 < \lambda \le 1$ is a step-length parameter and, with a slight abuse of notation, $\boldsymbol{J}$ denotes the (weighted) pseudoinverse of the manipulator Jacobian (refer to \cite{Sarker2020} for the complete algorithm). By using (\ref{eq:IK}) between any two successive poses in $\{\mathcal{C}_O, \mathcal{C}_I, \mathcal{C}_F \}$, $\boldsymbol{\Theta}^i(j)$ ($j=1,\cdots,m$) for the $i$-th manipulator is computed.
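A schematic transcription of this loop is given below. The sketch is ours: \texttt{fk} and \texttt{jacobian} are placeholders for the robot-specific forward kinematics and Jacobian, the pose $\boldsymbol{\chi}$ is treated as a vector in a local chart (glossing over the dual-quaternion bookkeeping), and the pseudoinverse is unweighted for brevity:
\begin{verbatim}
import numpy as np

def rmrc_track(theta0, waypoints, fk, jacobian, lam=0.5, tol=1e-4):
    # Follow the ScLERP task-space waypoints chi_1 ... chi_m with
    # resolved motion rate control; fk and jacobian are placeholders
    # for the robot-specific kinematics (poses as local-chart vectors).
    theta = np.asarray(theta0, dtype=float)
    path = [theta.copy()]
    for chi_target in waypoints:
        while np.linalg.norm(chi_target - fk(theta)) > tol:
            dchi = chi_target - fk(theta)
            theta = theta + lam * np.linalg.pinv(jacobian(theta)) @ dchi
        path.append(theta.copy())
    return path
\end{verbatim}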
\section{Implementation and Results}
In this section, we briefly present the simulation results for manipulating a heavy cuboid object on a flat surface and over a step.
Videos of our simulations are presented in the video attachment to the paper.
\noindent
\textbf{Manipulation on a Flat Surface}: In this example, we plan motion to reorient a heavy object from an initial pose $\mathcal{C}_O$ to a final pose $\mathcal{C}_F$, in its vicinity, by object gaiting as shown in Fig.~\ref{Fig:Example_Flat}-\subref{Fig:Example_Flat_Object}.
Existing planning algorithms~\cite{yoshida2010pivoting} cannot efficiently solve this problem, because their motion plans are essentially restricted to Reeds and Shepp curves.
By using the proposed optimization problem (\ref{eq:GaitPlanning}), we can find the minimum number of contact edges required to manipulate the object between these two poses. The simulation results are shown in Fig.~\ref{Fig:Example_Flat}-\subref{Fig:Example_Flat_Edges}. As shown, at least 3 contact edges (in total 7 intermediate poses) are required to reach the final pose when the gait starts from the vertex $a_O$.
\begin{figure}[!htbp]
\centering
\subfloat[]{\includegraphics[scale=0.44]{Figures/Example_Flat.pdf}\label{Fig:Example_Flat_Object}} \qquad
\subfloat[]{\includegraphics[scale=0.34]{Figures/Example_Flat_Edges.pdf}\label{Fig:Example_Flat_Edges}}
\caption{Object gaiting on a flat surface where $a_O = [0, \, 0]$, $\alpha_{O} = 0^{\circ}$, $a_F = [0.13, \, 0.13]$m, $\alpha_F = -80^{\circ}$, $w = 0.2$m, $\alpha_{\text{max}} = 35^{\circ}$, $\alpha_1 = -10.55^{\circ}$, $\alpha_2 = 29.56^{\circ}$, $\alpha_3 = -12.63^{\circ}$, $\alpha_4 = 27.25^{\circ}$.}
\label{Fig:Example_Flat}
\end{figure}
\noindent
\textbf{Manipulation over a Step}: In this example, we plan motion and force to manipulate a heavy object over a step (Fig.~\ref{Fig:Example_Step}) by both 7-DoF arms of Baxter robot.
The computed motion plan includes 3 stages: (1) pivoting about the object edge ($\mathcal{C}_I^1$), (2) pivoting about the vertex $v$ ($\mathcal{C}_I^2$), where only the vertex $v$ of the object face remains in contact with the environment, and (3) changing the location of the end-effectors' contacts and pivoting about the step edge ($\mathcal{C}_F$). Thus, we have two intermediate poses $\{\mathcal{C}_I^1,\mathcal{C}_I^2\}$.
We implemented $\mathbb{T}$-space planning, conversion to $\mathbb{J}$-space, and our force planning method described in \cite{Patankar2020} to find the minimum required normal forces $f_{c_{n,1}}$ and $f_{c_{n,2}}$ at both object--end-effector contacts $\{c_1\}$ and $\{c_2\}$ in each motion stage.
Fig.~\ref{Fig:contact_force_results} shows the variation of the normal contact forces over the iterations needed to reach the goal pose during the 3 stages of object manipulation over the step.
In stage 1, $f_{c_{n,1}}$ and $f_{c_{n,2}}$ first decrease and become negligible at a particular object tilting angle, where the weight of the object passes through its support edge, and then increase. In stage 2, since the motion is not symmetric, the right and left end-effector normal contact forces differ in order to balance the object weight. In stage 3, the object--environment contact points are initially located closer to the object center of mass; thus, smaller contact forces are initially required, and as the object pivots, these forces increase.
\begin{figure}[!htbp]
\centering
\includegraphics[scale=0.45]{Figures/Example_Step.pdf}
\caption{Object manipulation over a step.}
\label{Fig:Example_Step}
\end{figure}
\begin{figure}[!htbp]
\centering
\includegraphics[scale=0.29]{Figures/fc_complete.pdf}
\caption{The normal contact forces at $\{c_1\}$ and $\{c_2\}$, where the object mass is $m = 2$\,kg, the maximum joint torque for the shoulder and elbow joints is $\tau_{\text{max}} = 50$\,Nm, and the maximum joint torque for the wrist joints is $\tau_{\text{max}} = 15$\,Nm.}
\label{Fig:contact_force_results}
\end{figure}
\section{Conclusion and Future Work}
In this paper, we have proposed a novel approach for manipulating heavy objects using a sequence of pivoting motions. We have applied our proposed motion and force planning to two different scenarios: reorienting an object by gaiting, and manipulating a heavy object over a step. Given the initial and final poses of the object, we first compute the required intermediate poses. These poses are derived via an optimization problem that computes the optimal values of the rotation angles between contact edges during \textit{object gaiting}. Then, by using ScLERP, we can interpolate between these intermediate poses while satisfying all the task-related constraints.
Using RMRC, we map the task-space plan to the joint space, which allows us to compute the contact forces and the joint torques required to manipulate the object. Future work includes the relaxation of the quasi-static assumption in the force planning and experimental evaluation of the proposed approach.
\addtolength{\textheight}{-10.5cm}
\bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro}
With the development of natural language processing and deep learning, multilingual machine translation has gradually attracted the interest of researchers \citep{dabre-etal-2020-multilingual}.
Moreover, the multilingual machine translation model demands less space than multiple bilingual unidirectional machine translation models, making it more popular among developers \citep{liu2020multilingual, zhang-etal-2020-improving, fan2020beyond}.
However, existing multilingual machine translation models face imbalance problems.
On the one hand, the widely varying sizes of training corpora for different language pairs cause imbalance.
Typically, the training corpora of some high resource languages (HRLs) are hundreds or thousands of times larger than those of some low resource languages (LRLs) \citep{schwenk2019ccmatrix}, resulting in lower learning competence for the LRLs.
On the other hand, translation between different languages has different difficulty, which also leads to imbalance.
In general, translation between closely related language pairs is easier than translation between distant language pairs, even if the training corpora are of the same size \citep{barrault-etal-2020-findings}.
This leads to lower learning competencies for distant languages compared to closely related languages.
Therefore, multilingual machine translation is inherently imbalanced, and dealing with this imbalance is critical to advancing multilingual machine translation \citep{dabre-etal-2020-multilingual}.
To address the above problem, existing balancing methods can be divided into two categories, i.e., static and dynamic.
1) Among static balancing methods, temperature-based sampling \citep{arivazhagan2019massively} is the most common one, compensating for the gap between different training corpora sizes by oversampling the LRLs and undersampling the HRLs.
2) Researchers have also proposed some dynamic balancing methods \citep{jean2019adaptive, wang-etal-2020-balancing}.
\citet{jean2019adaptive} introduce an adaptive scheduling method that oversamples the languages with poorer results than their respective baselines.
In addition, MultiDDS-S \citep{wang-etal-2020-balancing} focuses on learning an optimal strategy to automatically balance the usage of training corpora for different languages at multilingual training.
Nevertheless, the above methods focus too much on balancing the LRLs, resulting in lower competencies for the HRLs compared to models trained only on bitext corpora.
Consequently, the performance of the multilingual translation model on the HRLs is inevitably worse than that of the bitext models by a large margin \citep{lin-etal-2020-pre}.
Besides, knowledge learned from related HRLs is also beneficial for LRLs \citep{neubig-hu-2018-rapid}, but it is neglected by previous approaches, limiting the performance on LRLs.
Therefore, in this paper, we try to balance the learning competencies of languages and propose a \emph{\textbf{C}ompetence-based \textbf{C}urriculum \textbf{L}earning Approach for \textbf{M}ultilingual Machine Translation}, named CCL-M.
Specifically, we firstly define two competence-based evaluation metrics to help schedule languages, which are 1) \emph{Self-evaluated Competence}, for evaluating how well the language itself has been learned; and 2) \emph{HRLs-evaluated Competence}, for evaluating whether an LRL is ready to be learned by the LRL-specific HRLs' \emph{Self-evaluated Competence}.
Based on the above two competence-based evaluation metrics, we design the CCL-M algorithm to gradually add new languages into the training set.
Furthermore, we propose a novel competence-aware dynamic balancing sampling method for better selecting training samples at multilingual training.
We evaluate our approach on the multilingual Transformer \citep{vaswani2017attention} and conduct experiments on the TED talks dataset\footnote{\url{https://www.ted.com/participate/translate}} to validate the performance in two multilingual machine translation scenarios, i.e., \emph{many-to-one} and \emph{one-to-many} ("\emph{one}" refers to English).
Experimental results show that our approach brings in consistent and significant improvements compared to the previous state-of-the-art approach \citep{wang-etal-2020-balancing} on multiple translation directions in the two scenarios.
Our contributions\footnote{We release our code on \url{https://github.com/zml24/ccl-m}.} are summarized as follows:
\begin{itemize}
\item
We propose a novel competence-based curriculum learning method for multilingual machine translation.
To the best of our knowledge, we are the first to integrate curriculum learning into multilingual machine translation.
\item
We propose two effective competence-based evaluation metrics to dynamically schedule which languages to learn, and a competence-aware dynamic balancing sampling method for better selecting training samples at multilingual training.
\item
Comprehensive experiments on the TED talks dataset in two multilingual machine translation scenarios, i.e., \emph{many-to-one} and \emph{one-to-many}, demonstrate the effectiveness and superiority of our approach,
which significantly outperforms the previous state-of-the-art approach.
\end{itemize}
\section{Background}
\subsection{Multilingual Machine Translation}
A bilingual machine translation model translates a sentence of a source language $S$ into a sentence of a target language $T$ (\citealp{sutskever2014sequence}; \citealp{cho-etal-2014-learning}; \citealp{bahdanau2014neural}; \citealp{luong-etal-2015-effective}; \citealp{vaswani2017attention}), and is trained as
\begin{equation}
\theta^* = \argmin_\theta \mathcal{L} (\theta; S, T) ,
\end{equation}
where $\mathcal{L}$ is the loss function and $\theta^*$ denotes the optimal model parameters.
A multilingual machine translation system aims to train multiple language pairs in a single model, covering \emph{many-to-one} (translation from multiple languages into one language), \emph{one-to-many} (translation from one language into multiple languages), and \emph{many-to-many} (translation from multiple languages into multiple languages) settings \citep{dabre-etal-2020-multilingual}.
Specifically, we denote the training corpora of $n$ language pairs in multilingual machine translation as $\{S_1, T_1\}$, $\{S_2, T_2\}$, $\dots$, $\{S_n, T_n\}$ and multilingual machine translation aims to train a model $\theta^*$ as
\begin{equation}
\theta^* = \argmin_\theta \frac{1}{n} \sum_{i = 1}^n \mathcal{L} (\theta; S_i, T_i) .
\end{equation}
\subsection{Sampling Methods}
Generally, the size of the training corpora for different language pairs in multilingual machine translation varies greatly.
Researchers hence developed two kinds of sampling methods, i.e., static and dynamic, to sample the language pairs at training \citep{dabre-etal-2020-multilingual}.
There are three mainstream static sampling methods, i.e., uniform sampling, proportional sampling, and temperature-based sampling \citep{arivazhagan2019massively}.
These methods sample the language pairs by the predefined fixed sampling weights $\psi$.
\paragraph{Uniform Sampling.} Uniform sampling is the most straightforward solution \citep{johnson-etal-2017-googles}. The sampling weight $\psi_i$ for each language pair $i$ of this method is calculated as follows
\begin{equation}
\psi_i = \frac{1}{\vert \mathcal{S}_\text{lang} \vert} ,
\end{equation}
where $\mathcal{S}_\text{lang}$ is the set of languages used for training.
\paragraph{Proportional Sampling.} Another method is sampling by proportion \citep{neubig-hu-2018-rapid}. This method improves the model's performance on high resource languages at the cost of reduced performance on low resource languages. Specifically, the sampling weight $\psi_i$ for each language pair $i$ is calculated as
\begin{equation}
\psi_i = \frac{\vert \mathcal{D}^i_\text{Train} \vert}{\sum_{k \in \mathcal{S}_\text{lang} } \vert \mathcal{D}^k_\text{Train} \vert} ,
\end{equation}
where $\mathcal{D}^i_\text{Train}$ is the training corpus of language pair $i$.
\paragraph{Temperature-based Sampling.} It samples the language pairs according to the corpora size exponentiated by a temperature term $\tau$ (\citealp{arivazhagan2019massively}; \citealp{conneau-etal-2020-unsupervised}) as
\begin{equation}
\psi_i = \frac{p_i^{1 / \tau}}{\sum_{k \in \mathcal{S}_\text{lang}} p_k^{1 / \tau}} \ \text{where} \ p_i = \frac{\vert \mathcal{D}^i_\text{Train} \vert}{\sum_{k \in \mathcal{S}_\text{lang} } \vert \mathcal{D}^k_\text{Train} \vert} .
\end{equation}
Note that $\tau = \infty$ recovers uniform sampling and $\tau = 1$ recovers proportional sampling; both are extreme cases.
In practice, we usually select an intermediate $\tau$ to achieve a balanced result.
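As a minimal illustration of how the temperature trades off between these two extremes, the following Python sketch computes the sampling weights from raw corpus sizes; the function name and the example corpus sizes are hypothetical.
\begin{verbatim}
import numpy as np

def temperature_sampling_weights(corpus_sizes, tau=5.0):
    # psi_i from corpus sizes |D_i|; tau=1 -> proportional,
    # tau -> infinity -> uniform sampling.
    p = np.asarray(corpus_sizes, dtype=float)
    p /= p.sum()                  # proportional distribution p_i
    w = p ** (1.0 / tau)          # exponentiate by 1/tau
    return w / w.sum()            # renormalize to a distribution

# e.g., a 500k-pair HRL vs. a 5k-pair LRL
print(temperature_sampling_weights([500000, 5000], tau=1))  # ~[0.99, 0.01]
print(temperature_sampling_weights([500000, 5000], tau=5))  # ~[0.72, 0.28]
\end{verbatim}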
On the contrary, dynamic sampling methods (e.g., MultiDDS-S \citep{wang-etal-2020-balancing}) aim to automatically adjust the sampling weights according to predefined rules.
\paragraph{MultiDDS-S.} MultiDDS-S \citep{wang-etal-2020-balancing} is a dynamic sampling method that performs differentiable data sampling.
It alternately optimizes the sampling weights of the different languages and the multilingual machine translation model, showing greater potential than static sampling methods. This method optimizes the sampling weights $\psi$ to minimize the development loss as follows
\begin{equation}
\psi^* = \argmin_\psi \mathcal{L} (\theta^*; \mathcal{D}_\text{Dev}) ,
\end{equation}
\begin{equation}
\theta^* = \argmin_\theta \sum_{i = 1}^n \psi_i \mathcal{L} (\theta; \mathcal{D}^i_\text{Train}) ,
\end{equation}
where $\mathcal{D}_\text{Dev}$ and $\mathcal{D}_\text{Train}$ denote the development corpora and the training corpora, respectively.
\section{Methodology}
In this section, we first define a directed bipartite language graph on which the languages to be trained are deployed.
Then, we define two competence-based evaluation metrics, i.e., the \emph{Self-evaluated Competence} $c$ and the \emph{HRLs-evaluated Competence} $\hat{c}$, to help decide which languages to learn.
Finally, we elaborate on the entire CCL-M algorithm.
\subsection{Directed Bipartite Language Graph}
Formally, we define a directed bipartite language graph $G(V, E)$, in which one side consists of the HRLs and the other side of the LRLs.
Each vertex $v_i$ on the graph represents a language, and the weight of each directed edge (from an HRL to an LRL) $e_{ij}$ indicates the similarity between an HRL $i$ and an LRL $j$:
\begin{equation}
e_{ij} = \text{sim}(i, j) .
\end{equation}
Inspired by TCS \citep{wang-neubig-2019-target}, we measure it using vocabulary overlap and define the language similarity between language $i$ and language $j$ as
\begin{equation}
\text{sim}(i, j) = \frac{\vert \text{vocab}_k(i) \cap \text{vocab}_k(j) \vert}{k} ,
\label{eq:sim}
\end{equation}
where $\text{vocab}_k(\cdot)$ represents the top $k$ most frequent subwords in the training corpus of a specific language.
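As a minimal sketch (assuming tokenized corpora are already available as lists of subwords), the similarity in Equation (\ref{eq:sim}) can be computed as follows; the function names are illustrative.
\begin{verbatim}
from collections import Counter

def top_k_vocab(tokens, k):
    # Top-k most frequent subwords of one language's training corpus.
    return {w for w, _ in Counter(tokens).most_common(k)}

def similarity(tokens_i, tokens_j, k=8000):
    # Vocabulary-overlap similarity sim(i, j) between languages i and j.
    return len(top_k_vocab(tokens_i, k) & top_k_vocab(tokens_j, k)) / k
\end{verbatim}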
\begin{figure*}[!t]
\centering
\includegraphics[width=0.9\textwidth]{cl.png}
\caption{Diagram of the CCL-M Algorithm. This graph shows how the LRLs are gradually added to the training set $\mathcal{S}_\text{selected}$ using graph coloring. "aze" stands for Azerbaijani, "bel" for Belarusian, etc. The number after the colon indicates the current \emph{HRLs-evaluated Competence}, and the corresponding threshold $t$ is assumed to be 0.8. Subfigure (a) represents the state before training. Subfigure (b) indicates that "slk" (Slovak) is added to the training set because its \emph{HRLs-evaluated Competence} exceeds the threshold. Subfigure (c) indicates that "aze" (Azerbaijani) and "glg" (Galician) are added to the training set, and Subfigure (d) indicates that all the LRLs have been added to the training set. Notice that we use the language abbreviation (xxx) to denote the language pairs (xxx-eng or eng-xxx), which is more general.}
\label{ccl-m}
\end{figure*}
\subsection{Competence-based Evaluation Metrics}
\paragraph{Self-evaluated Competence.} We define how well a language itself has been learned as the \emph{Self-evaluated Competence} $c$.
In the following paragraphs, we first introduce the concept of the \emph{Likelihood Score} and then give a formula for calculating the \emph{Self-evaluated Competence} in multilingual training, based on the relationship between the current \emph{Likelihood Score} and the \emph{Likelihood Score} of the model trained on the bitext corpus.
For machine translation, we usually use the label smoothed \citep{szegedy2016rethinking} cross-entropy loss $\mathcal{L}$ to measure how well the model is trained, and calculate it as
\begin{equation}
\mathcal{L} = - \sum_i p_i \log_2 q_i ,
\end{equation}
where $p$ is the label smoothed actual probability distribution, and $q$ is the model output probability distribution\footnote{We select 2 as the base number for all relevant formulas and experiments in this paper.}.
We observe that the base-2 exponential of the negative label smoothed cross-entropy loss behaves as a likelihood, which is negatively correlated with the loss.
Since neural networks are optimized by minimizing the loss, we use this likelihood as a positively correlated indicator to measure competence.
Therefore, we define a \emph{Likelihood Score} $s$ to estimate how well the model is trained as follows
\begin{equation}
s = 2^{-\mathcal{L}} = \prod_i q_i^{p_i} .
\end{equation}
Inspired by \citet{jean2019adaptive}, we estimate the \emph{Self-evaluated Competence} $c$ of a specific language by calculating the quotient of its current \emph{Likelihood Score} and baseline's \emph{Likelihood Score}.
Finally, we obtain the formula as follows
\begin{equation} \label{self-competence}
c = \frac{s}{s^*} = 2^{\mathcal{L}^* - \mathcal{L}} ,
\end{equation}
where $\mathcal{L}$ is the current loss on the development set, $\mathcal{L}^*$ is the \emph{benchmark} loss of the converged bitext model on the development set, and $s$ and $s^*$ are their corresponding \emph{Likelihood Scores}, respectively.
\paragraph{HRLs-evaluated Competence.} Furthermore, we define how well an LRL is ready to be learned as its \emph{HRLs-evaluated Competence} $\hat{c}$.
We believe that each LRL can acquire adequate knowledge from its similar HRLs before it is added to the training.
Therefore, we estimate each LRL's \emph{HRLs-evaluated Competence} by the LRL-specific HRLs' \emph{Self-evaluated Competence}.
Specifically, we propose two methods for calculating the \emph{HRLs-evaluated Competence}, i.e., \emph{maximal} ($\text{CCL-M}_\text{max}$) and \emph{weighted average} ($\text{CCL-M}_\text{avg}$).
The $\text{CCL-M}_\text{max}$ method only migrates knowledge from the HRL that is most similar to the LRL, so we calculate the \emph{maximal} \emph{HRLs-evaluated Competence} $\hat{c}_{\text{max}}$ for each LRL $j$ as
\begin{equation}
\hat{c}_{\text{max}}(j)= c_{\argmax_{i \in \mathcal{S}_\text{HRLs}} e_{ij}} ,
\end{equation}
where $\mathcal{S}_{\text{HRLs}}$ is the set of the HRLs.
In contrast, the $\text{CCL-M}_\text{avg}$ method pays attention to all the HRLs.
In general, the higher the language similarity, the more knowledge an LRL can migrate from an HRL.
Therefore, we calculate the \emph{weighted average} \emph{HRLs-evaluated Competence} $\hat{c}_{\text{avg}}$ for each LRL $j$ as
\begin{equation}
\hat{c}_{\text{avg}}(j)= \sum_{i \in \mathcal{S}_\text{HRLs}} \left ( \frac{e_{ij}}{\sum_{k \in \mathcal{S}_\text{HRLs}} e_{kj}} \cdot c_i \right ) .
\end{equation}
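A minimal sketch of the three competence computations defined above is given below; the dictionary-based interface is an illustrative assumption, with \texttt{sim\_to\_hrls} holding the edge weights $e_{ij}$ from each HRL to the given LRL.
\begin{verbatim}
def self_competence(dev_loss, benchmark_loss):
    # Self-evaluated Competence c = 2^(L* - L).
    return 2.0 ** (benchmark_loss - dev_loss)

def hrl_evaluated_max(sim_to_hrls, hrl_competence):
    # Maximal variant: competence of the single most similar HRL.
    best = max(sim_to_hrls, key=sim_to_hrls.get)
    return hrl_competence[best]

def hrl_evaluated_avg(sim_to_hrls, hrl_competence):
    # Weighted-average variant over all HRLs, weighted by e_ij.
    total = sum(sim_to_hrls.values())
    return sum(sim_to_hrls[h] / total * hrl_competence[h]
               for h in sim_to_hrls)
\end{verbatim}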
\subsection{The CCL-M Algorithm}
We now describe the \emph{\textbf{C}ompetence-based \textbf{C}urriculum \textbf{L}earning for \textbf{M}ultilingual Machine Translation}, namely the CCL-M algorithm, in detail. The algorithm consists of two parts: 1) a curriculum learning scheduling framework, guiding when to add a language to the training set; 2) competence-aware dynamic balancing sampling, guiding how to sample languages in the training set.
First, we present how to schedule which languages on the directed bipartite language graph should be added to the training set according to the two competence-based evaluation metrics as shown in Figure \ref{ccl-m} and Algorithm \ref{alg:the_alg}, where $\mathcal{S}_{\text{LRLs}}$ is the set of LRLs, and $f(\cdot)$ is the function calculating the \emph{HRLs-evaluated Competence} $\hat{c}$ for LRLs.
As initialized in Line \ref{lst:1}, we add all languages on the HRLs side to the training set $\mathcal{S}_\text{selected}$ at the beginning of training, leaving all languages on the LRLs side in the candidate set $\mathcal{S}_\text{candidate}$.
Then, we regularly sample the development corpora of the different languages and calculate the current \emph{HRLs-evaluated Competence} of the languages in the candidate set $\mathcal{S}_\text{candidate}$, as shown in Lines \ref{lst:8} and \ref{lst:9}.
Further, the "if" condition in Line \ref{lst:13} shows that an LRL is added to the training set $\mathcal{S}_\text{selected}$ once its \emph{HRLs-evaluated Competence} exceeds a predefined threshold $t$.
However, as Equation \ref{self-competence} shows, the upper bound of the \emph{Self-evaluated Competence} for a specific language may not always reach 1 during multilingual training.
This may cause some LRLs to remain outside the training set $\mathcal{S}_\text{selected}$ for certain thresholds.
To ensure the completeness of our algorithm, we directly add the languages still in the candidate set $\mathcal{S}_\text{candidate}$ to the training set $\mathcal{S}_\text{selected}$ after a sufficiently large number of steps, as described between Line \ref{lst:22} and Line \ref{lst:32}.
\begin{algorithm}[!t]
\SetAlgoLined
\KwIn{Randomly initialized model $\theta$; language graph $G$; \emph{benchmark} losses $\mathcal{L}_i^*$; training corpora $\mathcal{D}_\text{Train}$; development corpora $\mathcal{D}_\text{Dev}$;}
\KwOut{The converged model $\theta^*$;}
$\mathcal{S}_\text{selected} \gets \mathcal{S}_\text{HRLs}$, $\mathcal{S}_\text{candidate} \gets \mathcal{S}_\text{LRLs}$,
$\psi \gets 0$\; \label{lst:1}
\For{$i \in \mathcal{S}_\text{\normalfont{selected}}$}{
$\psi_i \gets \frac{1}{\vert \mathcal{S}_\text{\normalfont{selected}} \vert}$\; \label{lst:3}
}
\While{$\theta$ \normalfont{not converge}}{
train the model on $\mathcal{D}_\text{Train}$ for some steps with sampling weight $\psi$\;
\For{$i \in \mathcal{S}_\text{\normalfont{selected}} \cup \mathcal{S}_\text{\normalfont{candidate}}$}{
sample $\mathcal{D}_\text{Dev}$ and calculate $\mathcal{L}_i$\; \label{lst:8}
$c_i \gets 2^{\mathcal{L}_i^* - \mathcal{L}_i}$\; \label{lst:9}
}
\For{$i \in \mathcal{S}_\text{\normalfont{candidate}}$}{
$\hat{c}_i \gets f(G, i, c_{\mathcal{S}_\text{HRLs}})$\;
\If{$\hat{c}_i \geq t$}{ \label{lst:13}
$\mathcal{S}_\text{selected} \gets \mathcal{S}_\text{selected} \cup \{ i \}$\;
$\mathcal{S}_\text{candidate} \gets \mathcal{S}_\text{candidate} \setminus \{ i \} $\;
}
}
\For{$i \in \mathcal{S}_\text{\normalfont{selected}}$}{
$\psi_i \gets \frac{1}{c_i}$\;
}
}
\If{$\mathcal{S}_\text{\normalfont{candidate}} \neq \varnothing$}{ \label{lst:22}
$\mathcal{S}_\text{selected} \gets \mathcal{S}_\text{selected} \cup \mathcal{S}_\text{candidate}$\;
\While{$\theta$ \normalfont{not converge}}{
train the model on $\mathcal{D}_\text{Train}$ for some steps with sampling weight $\psi$\;
\For{$i \in \mathcal{S}_\text{\normalfont{selected}}$}{
sample $\mathcal{D}_\text{Dev}$ and calculate $\mathcal{L}_i$\;
$c_i \gets 2^{\mathcal{L}_i^* - \mathcal{L}_i}$\;
$\psi_i \gets \frac{1}{c_i}$\;
}
}
} \label{lst:32}
\caption{The CCL-M Algorithm}
\label{alg:the_alg}
\end{algorithm}
Then, we introduce our competence-aware dynamic balancing sampling method, which is based on the \emph{Self-evaluated Competence}.
For languages in the training set $\mathcal{S}_\text{selected}$, we randomly select samples from the development corpora and calculate their \emph{Self-evaluated Competence}.
Languages with low \emph{Self-evaluated Competence} should receive more attention; therefore, we set the sampling weight $\psi_i$ of each language $i$ in the training set to the reciprocal of its \emph{Self-evaluated Competence}, as follows
\begin{equation}
\psi_i \propto \frac{1}{c_i} = 2^{\mathcal{L} - \mathcal{L}^*} .
\end{equation}
Notice that uniform sampling is used for the training set $\mathcal{S}_\text{selected}$ at the beginning of training as a cold-start balancing strategy; the corresponding pseudo code can be found in Line \ref{lst:3}.
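The weight update, including the cold start, can be sketched as follows; the function and argument names are illustrative placeholders rather than our actual implementation.
\begin{verbatim}
def balancing_weights(selected, competence, cold_start=False):
    # Competence-aware sampling weights over the training set:
    # psi_i proportional to 1 / c_i, uniform during cold start.
    if cold_start:
        return {lang: 1.0 / len(selected) for lang in selected}
    raw = {lang: 1.0 / competence[lang] for lang in selected}
    z = sum(raw.values())
    return {lang: w / z for lang, w in raw.items()}
\end{verbatim}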
\section{Experiments}
\begin{table*}[!t]
\centering
{
\centering
\begin{tabular}{l|cc|cc}
\toprule
\multirow{2}{*}{\textbf{Method}} & \multicolumn{2}{c|}{\textbf{M2O}} & \multicolumn{2}{c}{\textbf{O2M}} \\
& \textbf{Related} & \textbf{Diverse} & \textbf{Related} & \textbf{Diverse} \\
\midrule
Bitext Models & 20.37 & 22.38 & 15.73 & 17.83 \\
Uniform Sampling $(\tau = \infty)$ & 22.63 & 24.81 & 15.54 & 16.86 \\
Temperature-Based Sampling $(\tau = 5)$ & 24.00 & 26.01 & 16.61 & 17.94 \\
Proportional Sampling $(\tau = 1)$ & 24.88 & 26.68 & 15.49 & 16.79 \\
\midrule
MultiDDS \cite{wang-etal-2020-balancing} & 25.26 & 26.65 & 17.17 & 18.40 \\
MultiDDS-S \cite{wang-etal-2020-balancing} & 25.52 & 27.00 & 17.32 & 18.24 \\
\midrule
$\text{CCL-M}_\text{max}$ (Ours) & 26.59** & 28.29** & \textbf{18.89}** & \textbf{19.53}** \\
$\text{CCL-M}_\text{avg}$ (Ours) & \textbf{26.73}** & \textbf{28.34}** & 18.74** & \textbf{19.53}** \\
\bottomrule
\end{tabular}
}
\caption{Average BLEU scores (\%) on test sets of the baselines and our methods.
$\text{CCL-M}_\text{max}$ is the CCL-M algorithm using the \emph{maximal HRLs-evaluated Competence}, and $\text{CCL-M}_\text{avg}$ is the CCL-M algorithm using the \emph{weighted average HRLs-evaluated Competence}.
Bold indicates the highest value.
"$**$" indicates significantly \citep{koehn-2004-statistical} better than MultiDDS-S with t-test $p < 0.01$.
}
\label{tab:results}
\end{table*}
\subsection{Dataset Setup}
Following \citet{wang-etal-2020-balancing}, we use the 58-languages-to-English TED talks parallel data \cite{qi-etal-2018-pre} to conduct experiments.
Two sets of language pairs with different levels of language diversity are selected: \emph{related} (language pairs with high similarity) and \emph{diverse} (language pairs with low similarity).
Both of them consist of 4 high resource languages (HRLs) and 4 low resource languages (LRLs).
For the \emph{related} language set, we select 4 HRLs (Turkish: "tur", Russian: "rus", Portuguese: "por", Czech: "ces") and their related LRLs (Azerbaijani: "aze", Belarusian: "bel", Galician: "glg", Slovak: "slk"). For the \emph{diverse} language set, we select 4 HRLs (Greek: "ell", Bulgarian: "bul", French: "fra", Korean: "kor") and 4 LRLs (Bosnian: "bos", Marathi: "mar", Hindi: "hin", Macedonian: "mkd"), following \citet{wang-etal-2020-balancing}.
Please refer to Appendix for a more detailed description.
We test two kinds of multilingual machine translation scenarios for each set: 1) \emph{many-to-one} (M2O): translating 8 languages to English; 2) \emph{one-to-many} (O2M): translating English to 8 languages.
The data is preprocessed by SentencePiece\footnote{\url{https://github.com/google/sentencepiece}} \citep{kudo-richardson-2018-sentencepiece} with a vocabulary size of 8k for each language.
Moreover, we add a target language tag before the source and target sentences in O2M, following \citet{johnson-etal-2017-googles}.
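For reference, the preprocessing above can be approximated with the SentencePiece Python bindings as sketched below; the file names, the tag format, and the example sentence are placeholders, not the exact pipeline used in our experiments.
\begin{verbatim}
import sentencepiece as spm

# Train an 8k-subword model per language (file names are placeholders).
spm.SentencePieceTrainer.train(
    input="train.aze.txt", model_prefix="spm_aze", vocab_size=8000)

sp = spm.SentencePieceProcessor(model_file="spm_aze.model")
pieces = sp.encode("salam dunya", out_type=str)

# In O2M, prepend a target-language tag to the sentence.
tagged = ["<2aze>"] + pieces
\end{verbatim}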
\subsection{Implementation Details} \label{model}
\paragraph{Baseline.} We select three static heuristic strategies, i.e., uniform sampling, proportional sampling, and temperature-based sampling ($\tau = 5$), as well as the bitext models, as baselines.
In addition, we compare our approach with the previous state-of-the-art sampling method, MultiDDS-S \citep{wang-etal-2020-balancing}. All baseline methods use the same model and the same set of hyper-parameters as our approach.
\paragraph{Model.} We validate our approach upon the multilingual Transformer \citep{vaswani2017attention} implemented by fairseq\footnote{\url{https://github.com/pytorch/fairseq}} \citep{ott-etal-2019-fairseq}.
The number of layers is 6 and the number of attention heads is 4, with an embedding dimension $d_{\text{model}}$ of 512 and a feed-forward dimension $d_{\text{ff}}$ of 1024, following \citet{wang-etal-2020-balancing}.
For training stability, we adopt Pre-LN \citep{xiong2020layer} for the layer-norm \citep{ba2016layer} module.
For M2O tasks, we use a shared encoder with a vocabulary of 64k.
Similarly, for O2M tasks, we use a shared decoder with a vocabulary of 64k.
\paragraph{Training Setup.} We use the Adam optimizer \citep{kingma2014adam} with $\beta_1 = \text{0.9}$, $\beta_2 = \text{0.98}$ to optimize the model.
Further, the same learning rate schedule as \citet{vaswani2017attention} is used, i.e., linearly increase the learning rate for 4000 steps to 2e-4 and decay proportionally to the inverse square root of the step number.
We accumulate the batch size to 9,600 and adopt half-precision training implemented by apex\footnote{\url{https://github.com/NVIDIA/apex}} for faster convergence \citep{ott-etal-2018-scaling}.
For regularization, we also use a dropout \citep{srivastava2014dropout} $p = \text{0.3}$ and a label smoothing \citep{szegedy2016rethinking} $\epsilon_{ls} = \text{0.1}$.
As for our approach, we sample 256 candidates from each language's development corpus every 100 steps to calculate the \emph{Self-evaluated Competence} $c$ for each language and the \emph{HRLs-evaluated Competence} $\hat{c}$ for each LRL.
\paragraph{Evaluation.} \label{sec:eval}
In practice, we perform a grid search for the best threshold $t$ in \{0.5, 0.6, 0.7, 0.8, 0.9, 1.0\}, and select the checkpoints with the lowest weighted loss\footnote{This loss is calculated by averaging the loss of all samples in the development corpora of all languages, which is equivalent to taking the proportionally weighted average of the per-language losses.} on the development sets to conduct the evaluation.
The corresponding early stopping patience is set to 10.
For target sentence generation, we set the beam size to 5 and the length penalty to 1.0.
Following \citet{wang-etal-2020-balancing}, we use the SacreBLEU \citep{post-2018-call} to evaluate the model performance.
In the end, we compare our results with MultiDDS-S using paired bootstrap resampling \citep{koehn-2004-statistical} for significance testing.
\subsection{Results}
\begin{table}[t]
\centering
\resizebox{\columnwidth}{!}{
\centering
\begin{tabular}{l|cc|cc}
\toprule
\multirow{2}{*}{\textbf{Method}} & \multicolumn{2}{c|}{\textbf{Related M2O}} & \multicolumn{2}{c}{\textbf{Diverse M2O}} \\
& \textbf{LRLs} & \textbf{HRLs} & \textbf{LRLs} & \textbf{HRLs} \\
\midrule
Bi. & 10.45 & \textbf{30.29} & 11.18 & \textbf{33.58} \\
MultiDDS-S & 22.51 & 28.54 & 22.72 & 31.29 \\
\midrule
$\text{CCL-M}_\text{max}$ & 23.14* & 30.04** & 23.31* & 33.26** \\
$\text{CCL-M}_\text{avg}$ & \textbf{23.30}* & 30.15** & \textbf{23.55}* & 33.13** \\
\bottomrule
\end{tabular}
}
\caption{Average BLEU scores (\%) on test sets of the HRLs and the LRLs for the best baselines and our methods in M2O tasks. Bitext models (``Bi." for short) and MultiDDS-S are selected from the baselines since ``Bi." performs better on the HRLs and MultiDDS-S performs better on the LRLs. Bold indicates the highest value.
"$*$" and "$**$" indicates significantly better than MultiDDS-S with t-test $p < 0.05$ and $p < 0.01$, respectively.
}
\label{tab:m2o hrl and lrl}
\end{table}
\paragraph{Main Results.} \label{main}
The main results are listed in Table \ref{tab:results}.
As we can see, both of our methods significantly outperform the baselines and MultiDDS, with average BLEU improvements of over +1.07 and +1.13, respectively, indicating the superiority of our approach.
Additionally, the $\text{CCL-M}_\text{avg}$ is slightly better than the $\text{CCL-M}_\text{max}$ in more cases.
This is because the $\text{CCL-M}_\text{avg}$ can use more of the information provided by the HRLs, and can more accurately estimate when to add an LRL to the training.
Moreover, we find that O2M tasks are much more complicated than M2O tasks, since a decoder shared by multiple languages might generate tokens in the wrong languages.
Consequently, the BLEU scores of O2M tasks are lower than those of M2O tasks by a large margin.
\paragraph{Results on HRLs and LRLs in M2O.}
We further study the performance of our approach on LRLs and the HRLs in M2O tasks and list the results in Table \ref{tab:m2o hrl and lrl}.
As widely known, the bitext models perform poorly on LRLs while performing well on HRLs.
We also find that our method performs much better than MultiDDS-S on both the LRLs and the HRLs.
Although our method does not strictly match the performance of the bitext models on HRLs, the gap between them is much smaller than that between MultiDDS-S and the bitext models.
All of the above proves the importance of balancing learning competencies of different languages.
\paragraph{Results on HRLs and LRLs in O2M.}
\begin{table}[t]
\centering
\resizebox{\columnwidth}{!}{
\centering
\begin{tabular}{l|cc|cc}
\toprule
\multirow{2}{*}{\textbf{Method}} & \multicolumn{2}{c|}{\textbf{Related O2M}} & \multicolumn{2}{c}{\textbf{Diverse O2M}} \\
& \textbf{LRLs} & \textbf{HRLs} & \textbf{LRLs} & \textbf{HRLs} \\
\midrule
Bi. & 8.25 & \textbf{23.22} & 7.82 & \textbf{27.83} \\
MultiDDS-S & 15.31 & 19.34 & 13.98 & 22.52 \\
\midrule
$\text{CCL-M}_\text{max}$ & \textbf{16.54}** & 21.24** & \textbf{14.36}* & 24.71** \\
$\text{CCL-M}_\text{avg}$ & 16.33** & 21.14** & 13.82 & 25.42** \\
\bottomrule
\end{tabular}
}
\caption{Average BLEU scores (\%) on test sets of the HRLs and the LRLs for the best baselines and our methods in O2M tasks.
Bitext models (``Bi." for short) and MultiDDS-S are selected from the baselines since ``Bi." performs better on the HRLs and MultiDDS-S performs better on the LRLs. Bold indicates the highest value.
"$*$" and "$**$" indicates significantly better than MultiDDS-S with t-test $p < 0.05$ and $p < 0.01$, respectively.
}
\label{tab:o2m hrl and lrl}
\end{table}
\begin{figure*}[!t]
\centering
\includegraphics[width=0.9\textwidth]{threshold.png}
\caption{Weighted losses on development sets and average BLEU scores (\%) on test sets for different thresholds (the abscissa) in four scenarios. The blue line represents $\text{CCL-M}_\text{max}$, the green line represents $\text{CCL-M}_\text{avg}$.
The yellow dotted line represents MultiDDS-S \citep{wang-etal-2020-balancing}.}
\label{grid}
\end{figure*}
As shown in Table \ref{tab:o2m hrl and lrl}, our approach also performs well in the more difficult scenario, i.e., O2M.
Apparently, our approach almost doubles the LRL performance of the bitext models.
Consistently, there is a decay of roughly -2 and -3 BLEU for the HRLs in the \emph{related} and \emph{diverse} language sets, respectively.
Compared to MultiDDS-S, our approach is significantly better on both the LRLs and the HRLs.
This again proves the importance of balancing the competencies of different languages.
Additionally, the performance on HRLs in the O2M task drops more from the bitext models than that in the M2O task.
This is because the decoder shares a 64k vocabulary across all languages in O2M tasks, while each language has only an 8k vocabulary.
Thus, it is easier for the model to output misleading tokens that do not belong to the target language during inference.
\section{Analysis}
\subsection{Effects of Different Threshold $t$}
\label{threshold}
We first conduct a grid search for the best \emph{HRLs-evaluated Competence} threshold $t$.
As we can see from Figure \ref{grid}, the longer the HRLs are trained (i.e., the larger the threshold $t$), the better the model's performance in M2O tasks.
This phenomenon again suggests that M2O tasks are easier than O2M tasks.
The curriculum learning framework performs better on the \emph{related} set than on the \emph{diverse} set in M2O tasks, because the languages in the \emph{related} set are more similar.
Still, our method is better than MultiDDS-S, as shown in Figure \ref{grid}.
This again demonstrates the positive effect of our curriculum learning framework.
Experimental results also reveal that the optimal threshold $t$ for O2M tasks may not be 1 because more training on HRLs would not produce optimal overall performance.
Furthermore, the optimal threshold for the \emph{diverse} language set is lower than that for the \emph{related} language set as the task in the \emph{diverse} language set is more complicated.
\subsection{Effects of Different Sampling Methods}
\begin{table}[t]
\resizebox{\columnwidth}{!} {
\centering
\begin{tabular}{l|cc|cc}
\toprule
\multirow{2}{*}{\textbf{Method}} & \multicolumn{2}{c|}{\textbf{M2O}} & \multicolumn{2}{c}{\textbf{O2M}} \\
& \textbf{Related} & \textbf{Diverse} & \textbf{Related} & \textbf{Diverse} \\
\midrule
$\text{CCL-M}_\text{avg}$ & 26.73 & 28.34 & \textbf{18.74} & \textbf{19.53} \\
\ \ \ $+$ Uni. & 24.59 & 27.13 & 18.29 & 18.21 \\
\ \ \ $+$ Temp. & 25.28 & 27.50 & 18.65 & 19.28 \\
\ \ \ $+$ Prop. & \textbf{27.21} & \textbf{28.72} & 18.20 & 18.80 \\
\bottomrule
\end{tabular}
}
\caption{Average BLEU scores (\%) on test sets by the $\text{CCL-M}_\text{avg}$ algorithm using our dynamic sampling method and three static sampling methods. "Uni." refers to the uniform sampling, "Temp." refers to the temperature-based sampling ($\tau = 5$), and "Prop." refers to the proportional sampling. Bold indicates the highest value.
}
\label{tab:sample}
\end{table}
We also analyze the effects of different sampling methods.
Substituting our competence-aware dynamic sampling method in the $\text{CCL-M}_\text{avg}$ with three static sampling methods, we get the results in Table \ref{tab:sample}.
Consistently, our method performs best among the sampling methods in O2M tasks, which shows the superiority of sampling by language-specific competence.
Surprisingly, we find that proportional sampling surpasses our proposed dynamic method in M2O tasks.
This also indicates that more training on the HRLs has a positive effect in M2O tasks, since proportional sampling trains more on the HRLs than our proposed dynamic sampling.
In addition, all three static sampling methods outperform their respective baselines in Table \ref{tab:results}. Some of them are even better than the previous state-of-the-art sampling method, i.e., MultiDDS-S.
This shows that our curriculum learning approach has strong generalizability.
\section{Related Work}
Curriculum learning was first proposed by \citet{bengio2009curriculum} with the idea of learning samples from easy to hard to obtain a better optimized model.
As a general method for model improvement, curriculum learning has been widely used in a variety of machine learning fields \citep{gong2016multi, kocmi-bojar-2017-curriculum, hacohen2019power, platanios-etal-2019-competence, narvekar2020curriculum}.
There are also some previous curriculum learning studies for machine translation.
For example, \citet{kocmi-bojar-2017-curriculum} divide the training corpus into smaller buckets using features such as sentence length or word frequency, and then train on the buckets from easy to hard according to the predefined difficulty.
\citet{platanios-etal-2019-competence} propose competence-based curriculum learning for machine translation, which treats the model competence as a variable in training and samples the training corpus in line with the competence.
In detail, they assume that competence is positively related to the number of training steps and experiment with linear and square root functions.
We borrow the concept of competence and redefine it in this paper in a multilingual context.
Further, we define \emph{Self-evaluated Competence} and \emph{HRLs-evaluated Competence} as the competence of each language pair to capture the model's multilingual competence more accurately.
\section{Conclusion}
In this paper, we focus on balancing the learning competencies of different languages in multilingual machine translation and propose a competence-based curriculum learning framework for this task.
The experimental results show that our approach brings significant improvements over baselines and the previous state-of-the-art balancing sampling method, MultiDDS-S.
Furthermore, the ablation study on sampling methods verifies the strong generalizability of our curriculum learning framework.
\section*{Acknowledgements}
We would like to thank anonymous reviewers for their suggestions and comments. This work was supported by the National Key Research and Development Program of China (No. 2020YFB2103402).
\section{Conclusion $\&$ Future Work}
\vspace{-0.3cm}
In this work, we propose a novel co-motion pattern, a second-order local motion descriptor, to detect whether a video is deep-faked. Our method is fully interpretable and highly robust to slight variations such as video compression and noise. We have achieved superior performance on the latest datasets under both classification and anomaly detection settings, and have comprehensively evaluated various characteristics of our method, including robustness and generalizability. In the future, an interesting direction is to investigate whether more accurate motion estimation can be achieved, as well as how temporal information can be integrated into our method.
\clearpage
\bibliographystyle{splncs04}
\section{Experiments}
\label{sect:Exp}
In this section, extensive experiments are conducted to empirically demonstrate the feasibility of our co-motion pattern, along with its advantages over other methods. We first describe the experiment protocol, followed by the choice of hyperparameters. The quantitative performance of our method on different datasets is reported and analyzed in Sect.~\ref{sec:quantitative}. Subsequently, we interpret the composition of the co-motion pattern, showing how it can be used to determine the genuineness of any given sequence or even an individual estimated motion set. Finally, we demonstrate the transferability and robustness of our method under different scenarios.
\subsubsection{Dataset}
We evaluate our method on FaceForensics++~\cite{FaceForensics} dataset which consists of four sub-databases that produce face forgery via different methods, i.e. Deepfake~\cite{deepfake}, FaceSwap~\cite{faceswap}, Face2Face~\cite{F2F} and NeuralTexture~\cite{NeuralTexture}. In addition, we utilize the real set from~\cite{Google_dataset} to demonstrate the similarity of co-motion patterns from real videos.
Since each sub-database contains 1,000 videos, we form 2,000 co-motion patterns, each composed of $N$ randomly picked $\rho$ matrices, for training and testing respectively.
We use c23 and c40 to indicate the quality of datasets, which are compressed by H.264~\cite{H264} with 23 and 40 as constant rate quantization parameters.
Unless otherwise stated, all of the results we report are achieved on c23.
The validation and testing sets are split before any experiments to ensure that no overlap would interfere with the results.
\subsubsection{Implementation}
In this section, we specify the hyperparameters and other detailed settings needed to reproduce our method. The local motion estimation procedure is accomplished by integrating \cite{opticalflow} as the estimator and \cite{Landmark} as the landmark detector, both with default parameter settings as reported in the original papers. For the facial landmarks, we only keep the last 51 landmarks out of 68 in total, as the first 17 denote the face boundary, which is usually not manipulated. During the calculation of co-motion, we constrain $K$ to be at most 8, as there are only 8 facial components, thus avoiding unnecessary computation.
Since a certain portion of frames do not contain sufficient motion, we only preserve co-motion patterns in which $p\%$ of the motion features have greater magnitude than the remaining ones, i.e., $p = 0.5$ with magnitude $\geq 0.85$, where the threshold is obtained by randomly sampling a set of 100 videos. An AdaBoost~\cite{AdaBoost} classifier is employed for all supervised classification tasks.
For Gaussian smoothing, we set $\hat{k} = 3$ for all experiments.
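To make the filtering criterion concrete, a sketch of our reading of it is given below; the function name and the exact form of the test are assumptions based on the description above.
\begin{verbatim}
import numpy as np

def keep_frame_pair(motions, p=0.5, threshold=0.85):
    # motions: (51, 2) per-landmark motion vectors for one frame pair.
    # Keep the pair only if at least a fraction p of the landmarks
    # move with magnitude >= threshold.
    mags = np.linalg.norm(motions, axis=1)
    return np.mean(mags >= threshold) >= p
\end{verbatim}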
\subsection{Quantitative Results}
\label{sec:quantitative}
\begin{table}[t!]
\caption{Accuracy of our method on all four forgery databases, with each treated as a binary classification task against the real videos. Performance of \cite{OpenWorld} is estimated from figures in the paper.
}
\begin{center}
\begin{tabular}{l|c|c|c|c|c}
\hline
Method/Dataset & Deepfakes & FaceSwap & Face2Face & NeuralTexture & Combined \\ \hline
Xception~\cite{FaceForensics} & 93.46\% & 92.72\% & 89.80\% & N/A & \textbf{95.73\%} \\
R-CNN~\cite{RCNN} & 96.90\% & 96.30\% & \textbf{94.35\%} & N/A & N/A \\
Optical Flow + CNN~\cite{OFCNN} & N/A & N/A & 81.61\% & N/A & N/A \\
FacenetLSTM~\cite{OpenWorld} & 89\% & 90\% & 87\% & N/A & N/A \\ \hline
$N$ = 1 (Ours) & 63.65\% & 61.90\% & 56.50\% & 56.65\% & 57.05\% \\
$N$ = 10 (Ours) & 82.80\% & 81.95\% & 72.30\% & 68.50\% & 71.30\% \\
$N$ = 35 (Ours) & 95.95\% & 93.60\% & 85.35\% & 83.00\% & 88.25\% \\
$N$ = 70 (Ours) & \textbf{99.10\%} & \textbf{98.30\%} & 93.25\% & \textbf{90.45\%} & 94.55\% \\ \hline
\end{tabular}
\end{center}
\end{table}
In this section, we demonstrate the quantitative results of our method under different settings. First, we show that the co-motion pattern can adequately separate forged and real videos in classification tasks, as shown in Tab.~1. Compared with other state-of-the-art forensic methods in terms of classification accuracy, we achieve competent performance and outperform them by a large margin on Deepfakes~\cite{deepfake} and FaceSwap~\cite{faceswap}, with $99.10\%$ and $98.30\%$ respectively. While the researchers in \cite{OFCNN} similarly attempted to establish a forensic pipeline on top of motion features, we outperform their method by approx. 12$\%$. It is noteworthy that \cite{RCNN,OpenWorld,FaceForensics} all exploit deep features that are learned in an end-to-end manner and consequently cannot be properly explained. By contrast, as interpretability is one of the principal factors in media forensics, our attention lies on proposing a method whose decisions can be justified, and we make no effort to deliberately outperform deep learning based methods.
Equally importantly, as forgery methods are diverse and targeting each one is expensive, we demonstrate that the proposed co-motion pattern can also be employed for anomaly detection tasks, where only the behaviors of real videos need to be modeled and forged videos can be separated if an appropriate threshold is selected. As presented in Fig.~\ref{fig:ROCs}, we show receiver operating characteristic (ROC) curves on each forgery database with increasing $N$. The real co-motion template is constructed from 3,000 randomly selected $\rho$ matrices, against which each co-motion pattern (real or fake) is compared during evaluation. In general, our method can be used for authenticating videos even without supervision. In the next section, we show that the co-motion pattern is also robust to random noise and data compression.
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.485\textwidth}
\centering
\includegraphics[width=\textwidth]{eccv2020kit/DF.jpg}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.485\textwidth}
\centering
\includegraphics[width=\textwidth]{eccv2020kit/FS.jpg}
\end{subfigure}
\vskip\baselineskip
\begin{subfigure}[b]{0.485\textwidth}
\centering
\includegraphics[width=\textwidth]{eccv2020kit/F2F.jpg}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.485\textwidth}
\centering
\includegraphics[width=\textwidth]{eccv2020kit/NT.jpg}
\end{subfigure}
\caption{Anomaly detection performance of our co-motion patterns. }
\label{fig:ROCs}
\end{figure}
\vspace{-0.3cm}
\subsection{Robustness Analysis}
\label{sec:robustness}
In this section, we demonstrate the robustness of our proposed method against noise and data compression, as well as the generalizability of co-motion patterns. Experiments on whether video compression and noise affect the effectiveness of co-motion patterns are conducted, and the results are shown in Tab. 2. Empirically, co-motion demonstrates great robustness against heavy compression (c40) and random noise, i.e., $N(\mu,\sigma^2)$ with $\mu = 0$ and $\sigma = 1$. These results verify that our proposed co-motion patterns, which exploit high-level temporal information, are much less sensitive to pixel-level variation, a property that the statistics-based methods reviewed in Sect.~2.2 do not possess.
\vspace{-0.7cm}
\begin{table}
\caption{Robustness experiment for demonstrating that co-motion can maintain its characteristics under different scenarios. All experiments are conducted on Deepfake~\cite{deepfake} with $N = 35$. Classification accuracy and area under curve (AUC) are reported respectively. }
\begin{center}
\begin{tabular}{l|c|c|c|c}
\hline
Setting / Dataset & Original &c23&c40& c23+noise \\ \hline
Binary classification & 97.80\% & 95.95\% & 91.60\% & 91.95\% \\ \hline
Anomaly detection & 98.57 & 96.14 & 93.76 & 92.60 \\ \hline
\end{tabular}
\end{center}
\end{table}
\vspace{-0.5cm}
In addition to demonstrating the robustness, we also investigate whether the modeled co-motion patterns are generalizable, as recorded in Tab.~3. It turns out that co-motion patterns constructed on relatively high-quality forgery databases such as NeuralTextures~\cite{NeuralTexture} and Face2Face~\cite{F2F} can easily be generalized to classify other low-quality databases, while the opposite direction results in inferior accuracy. This phenomenon arises because videos forged by NeuralTextures are generally more consistent, so the learned inconsistency is more narrowed down and specific, whereas the types of inconsistency vary greatly in low-quality databases and can be hard to model.
\vspace{-0.5cm}
\begin{table}
\caption{Experiments for demonstrating generalizability of co-motion patterns. Same experiment setting was employed as in Tab. 1. }
\begin{center}
\begin{tabular}{l|c|c|c|c}
\hline Test on / Train on & Deepfakes & FaceSwap & Face2Face & NeuralTexture \\ \hline
Deepfakes & N/A & 92.15\% & 93.45\% & 95.85\% \\
FaceSwap & 84.25\% & N/A & 76.75\% & 84.95\% \\
Face2Face & 70.30\% & 64.85\% & N/A & 81.65\% \\
NeuralTexture & 76.20\% & 65.15\% & 77.85\% & N/A \\ \hline
\end{tabular}
\end{center}
\end{table}
\vspace{-1cm}
\subsection{Abnormality Reasoning}
\label{sec:reasoning}
In this section, we explicitly interpret the implication of the co-motion pattern for an intuitive understanding. A co-motion example from real videos can be found in Fig.~6. As illustrated, the local motion at 51 facial landmarks is estimated as features, where the order of the landmarks is deliberately kept identical in all places for better visual understanding. It is noteworthy that the order of the landmarks does not affect the performance as long as they are aligned during experiments.
Consequently, each co-motion pattern describes the relationship of any pair of local motion features, where features from the same or highly correlated facial components naturally have greater correlation. For instance, it is apparent that the two eyes generally move in the same direction, as highlighted in the center area of Fig.~6. Similarly, a weak yet stable high correlation among the first 31 features is consistently observed in all real co-motion patterns, which conforms to the accordant movement of the facial components in the upper and middle face areas. We also observe a strong negative correlation, indicating opposite movements, between the upper lip and the lower lip. This is attributable to the dataset containing a large volume of videos of people talking, while in forged videos such a negative correlation is undermined, usually because the videos are synthesized in a frame-by-frame manner and the temporal relationship is thus not well preserved. Moreover, the co-motion is normalized to the range $[0, 1]$ for visualization purposes, which weakens the visible difference between real and fake co-motion patterns; in the original scale the difference is more pronounced, as verified by the experiments.
\begin{figure}[t!]
\centering
\includegraphics[width=0.55\textwidth, height=0.48\textwidth]{eccv2020kit/Interpret.png}
\caption{An example of interpreting co-motion patterns. }
\label{fig:interpret}
\end{figure}
For an explicit comparison, we also average 1,000 $\rho$ matrices from each source to illustrate the distinction and which specific motion patterns were not well learned, as shown in Fig.~7. Evidently, co-motion patterns from forged videos fail to model the negative correlation between the upper lip and the lower lip. Moreover, in Deepfake and FaceSwap, the positive correlation between homogeneous components (e.g., eyes and eyebrows) is also diluted, while in reality it would be difficult for them to move in uncorrelated ways. We also construct co-motion patterns on another set of real videos~\cite{Google_dataset} to illustrate the commonality of co-motion patterns across all real videos. Additionally, we show that, visually, the structure of the co-motion pattern quickly converges, as illustrated in Fig.~8, which supports our choice of building a second-order pattern, as it is less sensitive to intra-instance variation.
\begin{figure*}[t]
\centering
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=0.83\textwidth]{eccv2020kit/real_cooccurrence.jpg}
{{\small Real videos}}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=0.83\textwidth]{eccv2020kit/deepfakes_cooccurrence.jpg}
{{\small Deepfakes}}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=0.83\textwidth]{eccv2020kit/faceswap_cooccurrence.jpg}
{{\small FaceSwap}}
\end{subfigure}
\vskip\baselineskip
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=0.83\textwidth]{eccv2020kit/actor_cooccurrence.jpg}
{{\small Real videos from \cite{Google_dataset}}}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=0.83\textwidth]{eccv2020kit/face2face_cooccurrence.jpg}
{{\small Face2Face}}
\label{fig:mean and std of net44}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=0.83\textwidth]{eccv2020kit/neuraltextures_cooccurrence.jpg}
{{\small NeuralTexture}}
\end{subfigure}
\caption{Averaged co-motion pattern from different sources. Two real co-motion patterns (leftmost column) collectively present component-wise motion consistency while forged videos fail to maintain that property. }
\label{fig:Cooccurrences}
\end{figure*}
\begin{figure}[t!]
\centering
\includegraphics[width=0.68\textwidth, height=0.3\textwidth]{eccv2020kit/Convergence.png}
\caption{Co-motion pattern comparison on the same video (the original and a deep-faked version based on it). As $N$ increases, both co-motion patterns gradually converge to the same structure. }
\label{fig:framework}
\end{figure}
\section{Introduction}
Media forensics, which refers to judging the authenticity, detecting potentially manipulated regions, and reasoning about the decisions for given images/videos, plays an important role in real life by preventing media data from being edited and utilized for malicious purposes, e.g., spreading fake news~\cite{FakeNews,WorldLeader}. Unlike traditional forgery methods (e.g., copy-move and splicing), which can falsify the original content at low cost but are also easily observable, the development of deep generative models such as the generative adversarial net (GAN)~\cite{GAN} makes the boundary between realness and forgery more blurred than ever, as deep models are capable of learning the distribution of real-world data remarkably well. In this paper, among all the forensic-related tasks, we focus on exposing forged videos produced by face swapping and manipulation applications~\cite{FastFaceSwap,DVP,F2F,FSGAN,MakeAFace,NeuralTexture}. These methods, while initially designed for entertainment purposes, have gradually become uncontrollable, in particular when the faces of celebrities with great social impact, such as Obama~\cite{obama}, can be misused at no cost, leading to pernicious influence.
\begin{figure}[t!]
\centering
\includegraphics[width=0.95\textwidth, height=0.51\textwidth]{eccv2020kit/clear_comparison.png}
\caption{Example of motion analysis results by our method. \textbf{Landmarks} with the same color are considered to have analogous motion patterns, which are consistent with the facial structure in real videos but not in deep-faked videos. We compactly model such patterns and utilize them to determine the authenticity of given videos.}
\label{fig:clear_comparison}
\end{figure}
Traditional forensic methods focus on detecting specific traces inevitably left during editing (e.g., inconsistencies in re-sampling~\cite{Resampling}, shadowing~\cite{shadow}, reflection~\cite{Reflection}, compression quality~\cite{CompressionQuality} and noise patterns~\cite{Noise}), and fail to tackle indistinguishable DNN-generated images/videos produced by the powerful generative ability of existing deep models.
Therefore, the demand for forensic approaches explicitly targeting deep-faked videos is increasing.
Existing deep forensic models can be roughly categorized into three branches: real-forged binary classification-based methods~\cite{XRay,TwoStep,RCNN,MesoNet}, approaches detecting anomalous image statistics~\cite{ColorComponent,FaceArtifict,PRNU,Unmasking,AttributeGAN}, and methods driven by high-level information~\cite{headpose,exposelandmark,blinking}.
However, regardless of the category, their success relies heavily on a high-quality, uncompressed and well-labeled forensic dataset to facilitate learning. Once the given data are compressed or in low resolution, their performance is inevitably affected. More importantly, these end-to-end deep forensic methods are completely unexplainable: they provide no explicit reason to justify on what basis a real or fake decision is made.
To overcome the aforementioned issues, we propose in this paper a video forensic method based on motion features that explicitly targets deep-faked videos. Our method aims to model the conjoint patterns of local motion features from real videos, and consequently to spot the abnormality of forged videos by comparing their extracted motion patterns against the real ones. To do so, we first estimate motion features at keypoints that are commonly shared across deep-faked videos. In order to enhance the generalizability of the obtained motion features and to eliminate noise introduced by inaccurate estimation, we divide the motion features into groups, which are further reformed into a correlation matrix as a more compact frame-wise representation. A sequence of correlation matrices is then calculated from each video, with each matrix weighted by its grouping performance, to form the co-motion pattern, which describes the local motion consistency and correlation of the whole video. In general, co-motion patterns collected from real videos obey the movement pattern of facial structures and are homogeneous with each other regardless of variation in video content, while they are far less consistent across fake videos.
To sum up, our contributions are four-fold: (1) We propose the co-motion pattern, a descriptor of consecutive image pairs that effectively captures local motion consistency and correlation. (2) The proposed co-motion pattern is entirely explainable, robust to video compression and pixel noise, and generalizes well. (3) We conduct experiments under both classification and anomaly detection settings, showing that the co-motion pattern accurately reveals the motion-consistency level of given videos. (4) We also evaluate our method on datasets with different quality levels and forgery methods to demonstrate its robustness and transferability.
\begin{figure}[t!]
\centering
\includegraphics[width=\textwidth, height=0.45\textwidth]{eccv2020kit/framework.png}
\caption{The pipeline of our proposed co-motion pattern extraction method. As illustrated, we firstly estimate the motion of corresponding keypoints, which are then to be grouped for analysis. On top of that, we construct co-motion pattern as a compact representation to describe the relationship between motion features. }
\label{fig:framework}
\end{figure}
\section{Related Work}
\vspace{-0.25cm}
\subsection{Face Forgery by Media Manipulation}
\vspace{-0.15cm}
First of all, we review relevant human face forgery methods. Traditionally, methods such as copy-move and splicing, if employed for face swapping, can hardly produce convincing results due to inconsistencies in image quality~\cite{Resampling,quantization,jpeg_ghosts}, lighting~\cite{lighting,complex_lighting} and noise patterns~\cite{Noise,estimate_noise} between the tampered face region and the other regions. With the rapid development of deep generative models~\cite{GAN}, the quality of generated images has improved significantly. The success of ProGAN~\cite{pggan} makes visually determining the authenticity of generated images rather challenging if one focuses only on the face region. Furthermore, the artifacts remaining in boundary regions, whose corresponding distribution in the training data is relatively dispersed, have been progressively eliminated by \cite{StyleGANV1,StyleGANV2,glow,BigGAN}. Although these methods have demonstrated appealing generative capability, they do not target a specific identity but generate faces from random inputs.
Currently, the capability of deep neural networks has also been exploited for human-related tasks such as face swapping~\cite{deepfake,faceswap,FastFaceSwap,F2F,NeuralTexture,FSNET,FSGAN,DeformAE}, face expression manipulation~\cite{MakeAFace,F2F,x2face,NFE} and facial attribute editing~\cite{NFE,AttGAN,DA_Face_M,SMIT,MulGAN}, initially mostly for entertainment purposes (samples of deep-faked face data are shown in Fig.~\ref{fig:deepfake_samples}). However, since face swapping methods in particular have already been misused for commercial purposes, homologous techniques should be studied and devised as preventive measures before they cause irreparable adverse effects.
\vspace{-15pt}
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth, height=0.45\textwidth]{eccv2020kit/fake_samples.png}
\caption{Samples to illustrate what ``Deepfake'' is. Top left~\cite{StyleGANV2}: high fidelity generated faces. Top right~\cite{jim}: face swapping. Bottom left~\cite{MakeAFace}: face expression manipulation, original image on top and expression manipulated on bottom. Bottom right~\cite{MulGAN}: face attribute editing, original images on top and edited on bottom. }
\label{fig:deepfake_samples}
\end{figure}
\vspace{-15pt}
\subsection{Deep-faked Manipulation Detection}
While media forensics is a long-standing field, countermeasures against deep-faked images and videos are scarce. As mentioned earlier, existing methods fall into three genres: those utilizing a deep neural network~\cite{XRay,FaceForensics,RCNN,MesoNet,TwoStep,OFCNN,Incremental,DetectF2F,OpenWorld}, those exploiting unnatural low-level statistics, and those detecting abnormality in high-level information. In the first category, the task is usually treated as a binary classification problem, where a classifier learns the boundary between original and manipulated data via hand-crafted or deep features. As one of the earliest works in this branch, \cite{MesoNet} employs an Inception network~\cite{Inception} with suitable architectural modifications to directly classify each original or edited frame. Later, in order to exploit inter-frame correlation, \cite{RCNN} constructed a recurrent convolutional neural network that learns from temporal sequences. Due to the variety of video content and the characteristics of neural networks, a sufficiently large dataset is required; to alleviate this, \cite{OFCNN} used optical flow as input to train a neural network. While high classification accuracy is achieved, the features learned directly by neural networks are yet to be fully understood, so the decision of whether the input data has been manipulated cannot be appropriately elucidated.
Regarding the second category, \cite{Unmasking,PRNU,AttributeGAN,CameraFingerprint} all exploit the fact that current deep generative models can barely reproduce the natural noise carried by untampered images, and hence use the noise pattern for authentication. In \cite{ColorComponent}, the subtle difference in color components between original and manipulated images is used for classification. While effective, these methods are also highly susceptible to the quality of the dataset. Our method lies in the third category and is built upon high-level information~\cite{headpose,exposelandmark}, which is generally more explainable and more robust to the minute pixel changes introduced by compression or noise. Furthermore, as the co-motion pattern is derived from second-order statistics, it is more robust than~\cite{headpose,exposelandmark} to instance-wise variation.
\section{Methodology}
In this section, we elaborate on the details of our proposed video forensic method based on co-motion pattern extraction; the overall pipeline is illustrated in Fig.~\ref{fig:framework}. First, we obtain aligned local motion features describing the movement of specific keypoints in the input videos (Sect.~\ref{sect:LME}). To eliminate instance-wise deviation, we then design higher-order patterns on top of the extracted local motion features. Subsequently, we demonstrate how to construct co-motion patterns that describe the motion consistency over each video, as well as their usage, in Sect.~\ref{sect:CMP}.
\subsection{Local Motion Estimation}
\label{sect:LME}
The foundation of constructing co-motion patterns is the extraction of local motion features. Since each co-motion pattern is composed of multiple independent correlation matrices (explained in Sect.~\ref{sect:CMP}), we first expound on how to obtain local motion features from two consecutive frames.
Denote a pixel on image $I$ with coordinate $(x, y)$ at time $t$ as $I(x, y, t)$. According to the brightness constancy assumption, we have~\cite{HS,opticalflow}:
\begin{equation}
I(x, y, t) = I(x + \Delta x, y + \Delta y, t + \Delta t)
\end{equation}
where $\Delta x, \Delta y$ and $\Delta t$ denote the displacements along the respective dimensions; $\Delta t$ is usually set to 1 to denote two consecutive frames. This leads to the optical flow constraint:
\begin{equation}
\frac{\partial I}{\partial x} \Delta x + \frac{\partial I}{\partial y} \Delta y + \frac{\partial I}{\partial t} = 0
\end{equation}
However, such a hard constraint makes the motion estimation sensitive to even slight changes in brightness; therefore, the gradient constancy assumption has been proposed~\cite{gradient,opticalflow}:
\begin{equation}
\nabla I(x, y, t) = \nabla I(x + \Delta x, y + \Delta y, t + 1)
\end{equation}
where
\begin{equation}
\nabla = (\partial_x, \partial_y)^\intercal
\end{equation}
Based on the above constraints, the objective function can be formulated as:
\begin{equation}
\underset{\Delta x, \Delta y}{\min} E_{total}(\Delta x, \Delta y) = E_{brightness} + \alpha E_{smoothness}
\end{equation}
where:
\begin{equation}
\begin{split}
E_{brightness} = \iint & \psi(I(x, y, t) - I(x + \Delta x, y + \Delta y, t + 1)) ~ + \\
& \psi(\nabla I(x, y, t) - \nabla I(x + \Delta x, y + \Delta y, t + 1)) dxdy
\end{split}
\end{equation}
$\alpha$ denotes a weighting parameter and $\psi$ denotes a concave cost function; the $E_{smoothness}$ penalization term is introduced to favor smooth displacement fields:
\begin{equation}
E_{smoothness} = \iint \psi(|\nabla \Delta x|^2 + |\nabla \Delta y|^2) dxdy
\end{equation}
In our approach, we utilize Liu's dense optical flow~\cite{celiu} to estimate motion over frame pairs. However, while the inter-frame movement is estimable, it cannot be used directly as a motion feature, because the content of each video varies considerably, which makes direct comparison between the estimated motion of different videos unreasonable~\cite{OFCNN}. Moreover, the estimated motion cannot be pixel-wise accurate due to the influence of noise and non-linear displacements.
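To make this step concrete, the following is a minimal Python sketch, assuming OpenCV's Farneback estimator as a stand-in for Liu's variational optical flow (any dense estimator returning per-pixel displacements $(\Delta x, \Delta y)$ fits the pipeline):
\begin{verbatim}
# Minimal sketch of the dense-flow step; Farneback is a stand-in
# for Liu's variational optical flow used in the paper.
import cv2
import numpy as np

def dense_flow(frame_t, frame_t1):
    """Return an HxWx2 array of per-pixel displacements (dx, dy)."""
    g0 = cv2.cvtColor(frame_t, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(frame_t1, cv2.COLOR_BGR2GRAY)
    # pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
    return cv2.calcOpticalFlowFarneback(g0, g1, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
\end{verbatim}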
\begin{figure}[t!]
\centering
\includegraphics[width=0.8\textwidth, height=0.38\textwidth]{eccv2020kit/LME.png}
\caption{Illustration of local motion estimation step.}
\label{fig:lme}
\end{figure}
To overcome the above problems, we propose to narrow the region of interest to facial landmarks. By employing an arbitrary facial landmark detector $f_{D}$, we obtain a set of spatial coordinates $L$:
\begin{equation}
f_D(I) = L_I = \{l^i_I | l_I^i \in \mathbb{R}^2, 1 \leq i \leq n \}
\end{equation}
so that the local motion features $M_I$ can be denoted as:
\begin{equation}
M_I = \{m_I^i | m_I^i = I_{\Delta x, \Delta y} \oplus \mathcal{N}(l_I^i \pm \hat{k}), l_I^i \in L_I\}
\end{equation}
representing the Gaussian-weighted average of the estimated motion map $I_{\Delta x, \Delta y}$ centered on $l_I^i$ with window radius $\hat{k}$. The Gaussian smoothing is introduced to further mitigate the negative impact of inaccurate estimation. By doing so, we align the motion features extracted from each video for a fair comparison. An intuitive illustration of this step is presented in Fig.~\ref{fig:lme}.
Due to the lack of sufficient motion in some $I_{\Delta x, \Delta y}$, we discard those with trivial magnitude by setting a threshold hyperparameter, whose choice is discussed in Sect.~\ref{sect:Exp}.
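A possible implementation of this step is sketched below; the window radius, Gaussian width and magnitude threshold are illustrative assumptions rather than the exact values used in our experiments:
\begin{verbatim}
# Sketch: Gaussian-weighted average of the dense flow in a
# (2k+1)x(2k+1) window around each landmark, then a magnitude test.
import numpy as np

def local_motion_features(flow, landmarks, k=7, sigma=3.0):
    """flow: HxWx2 dense flow; landmarks: n x 2 (x, y) coords.
    Returns an n x 2 array of smoothed local motion vectors."""
    ax = np.arange(-k, k + 1)
    gx, gy = np.meshgrid(ax, ax)
    w = np.exp(-(gx**2 + gy**2) / (2 * sigma**2))
    feats = []
    H, W = flow.shape[:2]
    for x, y in landmarks.astype(int):
        x0, x1 = max(x - k, 0), min(x + k + 1, W)
        y0, y1 = max(y - k, 0), min(y + k + 1, H)
        patch = flow[y0:y1, x0:x1]        # window around the landmark
        ww = w[(y0 - y + k):(y1 - y + k), (x0 - x + k):(x1 - x + k)]
        feats.append((patch * ww[..., None]).sum(axis=(0, 1)) / ww.sum())
    return np.asarray(feats)

def has_sufficient_motion(feats, tau=0.1):
    # discard frame pairs whose mean motion magnitude is below tau
    return np.linalg.norm(feats, axis=1).mean() > tau
\end{verbatim}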
\subsection{Co-motion Patterns}
\label{sect:CMP}
Relying merely on the local motion features obtained above would require an extremely large-scale dataset to cover as many scenarios as possible, which is redundant and costly. Based on the observation that a human face is an articulated structure, the intra-component correlation can depict the motion in a more efficient manner. Inspired by the co-occurrence feature~\cite{Cooccurrence}, which has been frequently employed in texture analysis, we propose to further compute second-order statistics of the extracted local motion features.
\subsubsection{Grouping Intra-Correlated Motion Features} \hfill \break
\noindent In this step, we group analogous $m_I^i \in M_I$ to estimate the articulated facial structure from motion features, since motion features collected from the same facial component are more likely to share consistent movement. Meanwhile, negative correlation can also be represented: motion features with opposite directions (e.g., upper lip and lower lip) are assigned to disjoint groups.
As $m_I^i \in \mathbb{R}^2$ denotes motion on two orthogonal directions, we construct the affinity matrix $A_I$ on $M_I$ such that:
\begin{equation}
A_I^{i, j} = m_I^i \cdot m_I^j
\end{equation}
We choose the inner product over other metrics such as cosine similarity or Euclidean distance since we wish both to emphasize correlation rather than difference and to lessen the impact of noise within $M_I$. Specifically, the inner product highlights two highly correlated motions that both possess a certain magnitude, while noise with trivial magnitude has relatively little effect. Normalized spectral clustering~\cite{spectral,tutorial} is then performed, where we calculate the degree matrix $D$ such that:
\begin{equation}
D_I^{i, j} =
\begin{cases}
\sum^n_{j} A_I^{i, j} & \text{if $i = j$}\\
0 & \text{if $i \neq j$}\\
\end{cases}
\end{equation}
and the normalized Laplacian matrix $\mathcal{L}$ as:
\begin{equation}
\mathcal{L} = (D_I)^{-\frac{1}{2}}(D_I - A_I)(D_I)^{-\frac{1}{2}}
\end{equation}
In order to split $M_I$ into $K$ disjoint groups, the first $K$ eigenvectors of $\mathcal{L}$, denoted $\textbf{V} = \{\nu_k | k \in [1, K]\}$, are extracted to form the matrix $F \in \mathbb{R}^{n \times K}$. After normalizing each row of $F$ by its L2-norm, K-Means clustering is used to separate $P = \{p_i | p_i = F^i \in \mathbb{R}^{K}, i \in [1, n]\}$ into $K$ clusters $C_k = \{i | p_i \in C_k\}$. However, since $K$ is not directly available in our case, we demonstrate how to determine the optimal $K$ in the next step.
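For concreteness, the grouping step may be sketched as follows, assuming scikit-learn's KMeans; note that the inner-product affinity can be negative, so this is a sketch of the recipe above rather than textbook spectral clustering:
\begin{verbatim}
# Sketch: inner-product affinity -> normalized Laplacian ->
# first K eigenvectors -> row-normalize -> K-Means.
import numpy as np
from sklearn.cluster import KMeans

def spectral_groups(M, K):
    """M: n x 2 local motion features; returns labels in [0, K)."""
    A = M @ M.T                             # inner-product affinity
    d = np.maximum(A.sum(axis=1), 1e-12)    # guard zero/negative degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = D_inv_sqrt @ (np.diag(d) - A) @ D_inv_sqrt  # normalized Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)    # ascending eigenvalues
    F = eigvecs[:, :K]                      # first K eigenvectors
    F = F / (np.linalg.norm(F, axis=1, keepdims=True) + 1e-12)
    return KMeans(n_clusters=K, n_init=10).fit_predict(F)
\end{verbatim}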
\subsubsection{Constructing Co-motion Patterns} \hfill \break
As previously stated, determining a proper $K$ is essential for describing the motion pattern accurately. A straightforward approach is to iterate over all possible $K$ such that the Calinski-Harabasz index~\cite{CH} is maximized:
\begin{equation}
\operatorname*{arg\,max}_{K \in [2, n]} ~ f_{CH}(\{C_k | k \in [1, K]\}, K)
\end{equation}
where
\begin{equation}
f_{CH}(\{C_k | k \in [1, K]\}, K) = \frac{tr(\sum^K_y |C_y| (C_y^{\mu} - M_I^{\mu})(C_y^{\mu} - M_I^{\mu})^\intercal)}{tr(\sum^K_y \sum_{p_i \in C_y} (p_i - C_y^{\mu})(p_i - C_y^{\mu})^\intercal)} \times \frac{n - K}{K - 1}
\end{equation}
where $C_y^{\mu}$ is the centroid of $C_y$, $M_I^{\mu}$ is the center of all local motion features, and $tr$ denotes the trace of the corresponding matrix. With the grouping fixed, the motion correlation matrix $\rho_{I_t, I_{t+1}}$ of two consecutive frames $I_t$ and $I_{t+1}$ is calculated as:
\begin{equation}
\rho_{I_t, I_{t+1}}^{i, j} =
\begin{cases}
1 & \text{if $\exists C_k : m_i \in C_k \wedge m_j \in C_k$}\\
0 & \text{otherwise}\\
\end{cases}
\end{equation}
and consequently, the co-motion pattern of sequence $S = \{I_1, ..., I_T\}$ is calculated as the weighted average of all correlation matrices:
\begin{equation}
f_{CP}(S) = \sum^{T-1}_{t=1} k_{I_t, I_{t+1}} \times f_{CH}(\{C_k | k \in [1, K]\}, k_{I_t, I_{t+1}}) \times \rho_{I_t, I_{t+1}}
\end{equation}
where the weighting serves to reduce the impact of noise: the greater $f_{CH}(\{C_k | k \in [1, K]\}, K)$, the more consistent the motions naturally are; meanwhile, a co-motion pattern constructed from noisily estimated local motion scatters more sparsely and should be weighted as less important.
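A sketch of the full construction is given below, reusing the spectral_groups helper above and scikit-learn's calinski_harabasz_score; scoring the clustering on the raw motion features (instead of the spectral embedding) is a simplification made here for brevity:
\begin{verbatim}
# Sketch: pick K maximizing the CH index, build the binary
# co-cluster matrix rho, and accumulate a CH-weighted average.
import numpy as np
from sklearn.metrics import calinski_harabasz_score

def frame_pair_pattern(M, K_max=10):
    best = (None, -np.inf)
    for K in range(2, min(K_max, len(M) - 1) + 1):
        labels = spectral_groups(M, K)
        if len(set(labels)) < 2:
            continue
        score = calinski_harabasz_score(M, labels)
        if score > best[1]:
            best = (labels, score)
    labels, score = best
    rho = (labels[:, None] == labels[None, :]).astype(float)
    return rho, score

def co_motion_pattern(motion_seq):
    """motion_seq: list of n x 2 feature arrays, one per frame pair."""
    num, den = 0.0, 0.0
    for M in motion_seq:
        rho, w = frame_pair_pattern(M)   # weight = CH score
        num, den = num + w * rho, den + w
    return num / den
\end{verbatim}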
\subsubsection{Usage of Co-motion Patterns} \hfill \break
The co-motion pattern can be utilized as a statistical feature for comparison purposes. When used for supervised classification, each co-motion pattern is normalized by its L1 norm:
\begin{equation}
\dot f_{CP}(S) = \frac{f_{CP}(S)}{\sum |f_{CP}(S)|}
\end{equation}
and $\dot f_{CP}(S)$ can be used as features for arbitrary objectives.
In order to illustrate that our co-motion pattern can effectively distinguish all forgery types while being modeled on real videos only, we also conduct anomaly detection experiments, where a template co-motion pattern is first built from real videos. Co-motion patterns from real and forged databases are then compared against the template, and naturalness is determined by a threshold.
We employ the Jensen–Shannon divergence as the distance measure between two co-motion patterns:
\begin{equation}
d_{KL}(f_{CP}(S_1), f_{CP}(S_2)) = \sum_i \sum_{j < i} f_{CP}(S_1)^{i, j} \log\left(\frac{f_{CP}(S_1)^{i, j}}{f_{CP}(S_2)^{i, j}}\right)
\end{equation}
\begin{equation}
d_{JS}(f_{CP}(S_1), f_{CP}(S_2)) = \frac{1}{2} d_{KL}(f_{CP}(S_1), \overline{f_{CP}}_{S_1, S_2}) + \frac{1}{2} d_{KL}(f_{CP}(S_2), \overline{f_{CP}}_{S_1, S_2})
\end{equation}
where $\overline{f_{CP}}_{S_1, S_2} = \frac{f_{CP}(S_1) + f_{CP}(S_2)}{2}$ and $S_1, S_2$ denote two sequences.
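The comparison step can be sketched as follows; the decision threshold tau is an assumed placeholder, in practice it is tuned on held-out real videos:
\begin{verbatim}
# Sketch: L1-normalize two co-motion patterns and compute their
# Jensen-Shannon divergence for template-based anomaly detection.
import numpy as np

def js_divergence(P1, P2, eps=1e-12):
    p = P1 / (np.abs(P1).sum() + eps)    # L1 normalization
    q = P2 / (np.abs(P2).sum() + eps)
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def is_forged(pattern, template, tau=0.05):
    return js_divergence(pattern, template) > tau  # tau: assumed
\end{verbatim}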
| {'timestamp': '2020-08-12T02:18:56', 'yymm': '2008', 'arxiv_id': '2008.04848', 'language': 'en', 'url': 'https://arxiv.org/abs/2008.04848'} |
\section{Introduction}
The tremendous performance of deep learning models has led to their widespread application in practice. However, these models can be manipulated by introducing minor perturbations {\cite{szegedy2013intriguing, goodfellow2014explaining, wang2020you, wang2020adversarial, zhang2022local}}. This process is called an adversarial attack. In the case of person re-identification, for a given query input $x$, a target model $f$ and a gallery, the attack is defined as,
\begin{align}
&\lVert f(\mathbf{x}+\boldsymbol{\delta}) - f(\mathbf{x}_g)\rVert_2 > \lVert f(\mathbf{x}+\boldsymbol{\delta}) - f(\bar{\mathbf{x}}_g)\rVert_2 \;\;\;\textit{s.t.}\; \lVert \boldsymbol{\delta} \rVert_p \leq \epsilon, \nonumber\\
&\mathbf{x}_g \notin topk(\mathbf{x}+\boldsymbol{\delta}), ID(\mathbf{x}) = ID(\mathbf{x}_g) \neq ID(\bar{\mathbf{x}}_g) \nonumber
\end{align}
where $\mathbf{x}_g$ and $\bar{\mathbf{x}}_g$ are gallery samples belonging to different identity and $\boldsymbol{\delta}$ is the adversarial perturbation with an $l_p$ norm bound of $\epsilon$. {\textit{topk}($\cdot$)} refers to the top $k$ retrieved images for the given argument.
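Operationally, the mis-ranking condition can be checked as in the following sketch, assuming a feature extractor f that maps images to embeddings and a labeled gallery:
\begin{verbatim}
# Sketch: after the attack, no same-ID gallery sample should
# remain among the top-k retrievals for the perturbed query.
import torch

def attack_succeeds(f, x_adv, gallery, gallery_ids, query_id, k=10):
    with torch.no_grad():
        q = f(x_adv.unsqueeze(0))             # 1 x d query embedding
        g = f(gallery)                        # N x d gallery embeddings
        dists = torch.cdist(q, g).squeeze(0)  # Euclidean distances
    topk_ids = gallery_ids[dists.topk(k, largest=False).indices]
    return not (topk_ids == query_id).any()   # no true match in top-k
\end{verbatim}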
Adversarial attacks have been extensively investigated in the classification setting \cite{akhtar2021advances} and have recently been studied in other domains \cite{li2021concealed, li2021simple, jia20203d}. However, {to the best of our knowledge}, very few works study these attacks in the person re-identification domain. In the following, we briefly discuss some classical attacks in the classification setting. Szegedy \etal~\cite{szegedy2013intriguing} proposed the first method for generating adversarial samples for deep neural networks using L-BFGS. Goodfellow \etal~\cite{goodfellow2014explaining} proposed an efficient generation method, the fast gradient sign method (FGSM). Kurakin \etal~\cite{kurakin2016adversarial} proposed an iterative FGSM. Other prominent works include \cite{madry2017towards,carlini2017towards,papernot2016limitations,dong2018boosting,croce2020reliable,wang2021feature}.
In person re-id {\cite{zhou2019omni,chang2018multi,li2019cross,yang2021pixel}}, both white-box and black-box attacks have been proposed in \cite{yang2021learning, ding2021beyond, wang2020transferable, li2021qair}. These attacks use a labeled source dataset and show that the attacks transfer under cross-dataset or cross-model settings, or both. However, transferability of attacks in the challenging combined cross-dataset and cross-model setting remains an issue. In this work, we propose to use a mask and meta-learning for better transferability of attacks. We also investigate adversarial attacks in a completely new setting where the source dataset has no labels and the target model structure and parameters are unknown.
\section{Related Works}
In \cite{9226484}, the authors propose white-box and black-box attacks; their black-box attack assumes only that the victim model is unknown while the dataset is available. \cite{wang2019advpattern} introduces physically realizable attacks in the white-box setting by generating adversarial clothing patterns. \cite{li2021qair} proposes a query-based attack wherein the images obtained by querying the victim model are used to form triplets for a triplet loss. \cite{bouniot2020vulnerability} proposes a white-box attack using a self metric attack, wherein the positive sample is obtained by adding noise to the given input and negative samples are obtained from other images. In \cite{yang2021learning}, the authors propose a meta-learning framework using a labeled source and an extra association dataset; this method generalizes well in the cross-dataset scenario. In \cite{ding2021beyond}, Ding~\etal proposed a list-wise attack objective function along with model-agnostic regularization for better transferability. A GAN-based framework is proposed in \cite{wang2020transferable}, where the authors generate adversarial noise and a mask by training the network with a triplet loss.
In this work we use a GAN to generate adversarial samples. In order to achieve better transferability of the attack across models, we suppress the pixels that generate large gradients. Suppressing these gradients allows the network to focus on other pixels that are not explicitly salient with respect to the model used for the attack. We further use meta-learning \cite{finn2017model}, which also allows incorporation of an additional dataset to boost transferability. We refer to this attack as Meta Generative Attack (MeGA). Our work is closest in spirit to \cite{yang2021learning, wang2020transferable}; however, our mask generation and application of meta-learning within a GAN framework are quite distinct from these works.
\section{Methodology}
In this work we address both white-box and black-box attacks. We require the attack to be transferable across models and datasets. If we obtain the attack sample using a given model $f$, the attack is inherently tied to $f$ \cite{wang2021feature}. So that the attack does not over-learn $f$, we apply a mask that shifts focus away from regions that are highly salient for discrimination. This way the network can focus on less salient but still discriminative regions, thereby increasing the generalizability of the attack to other models. Meta-learning, on the other hand, has been used efficiently in adversarial attacks \cite{yuan2021meta, yang2021learning, feng2021meta} to obtain better transferability across datasets, but it has not been explored together with generative learning for attacks on PRID. We adapt the MAML meta-learning framework \cite{finn2017model} in our proposed method. While existing black-box attack works assume the presence of a labeled source dataset, we additionally present a more challenging setting wherein no labels are available during the attack.
\begin{figure}
\centering
\includegraphics[width = .45\textwidth]{prid_images/Copy_of_arch.png}
\caption{Model architecture. Mask $\mathbf{M}$ is generated using model $f$ and is used to mask the input $\mathbf{x}$. GAN is trained using a meta learning framework with an adversarial triplet loss and GAN loss.}
\label{fig:architecture}
\end{figure}
Our proposed model is illustrated in Figure \ref{fig:architecture}. In case of white-box setting,
the generator $\mathcal{G}$ is trained using the generator loss, the adversarial triplet loss and the meta-learning loss, while the discriminator $\mathcal{D}$ is trained with the classical binary cross-entropy discriminator loss. The mask is obtained via a self-supervised triplet loss. The network learns to generate adversarial images: while the GAN loss itself pushes the generated samples to look real, the adversarial triplet loss guides the network to generate samples that are closer to negative samples and farther away from positive samples.
\subsection{GAN training}
Given a clean sample $\mathbf{x}$, we use the generator $\mathcal{G}$ to create the adversarial sample $\mathbf{x}_{adv}$. The overall GAN loss is given by, $\mathcal{L}_{GAN} = E_{\mathbf{x}}\log \mathcal{D}(\mathbf{x}) + E_{\mathbf{x}}\log(1 - \mathcal{D}(\Pi(\mathcal{G}(\mathbf{x}))))$.
Here $\Pi(\cdot)$ denotes the projection onto the $l_{\infty}$ ball of radius $\epsilon$ around $\mathbf{x}$, and $\mathbf{x}_{adv} = \Pi(\mathcal{G}(\mathbf{x}))$. In order to generate adversarial samples, a deep mis-ranking loss is used \cite{wang2020transferable},
\begin{align}
\mathcal{L}_{adv-trip}(\mathbf{x}_{adv}^{a}, \mathbf{x}_{adv}^{n}, \mathbf{x}_{adv}^{p}) &= \max(\lVert \mathbf{x}_{adv}^{a} - \mathbf{x}_{adv}^n\rVert_2 \label{eq:adv-triplet} \\ \nonumber
&- \lVert \mathbf{x}_{adv}^{a} - \mathbf{x}_{adv}^p\rVert_2 + m,0)
\end{align}
where $m$ is the margin. {$\mathbf{x}_{adv}^{a}$ is the adversarial sample obtained from the anchor sample $\mathbf{x}^{a}$; similarly, $\mathbf{x}_{adv}^{p}$ and $\mathbf{x}_{adv}^{n}$ are the adversarial samples obtained from the respective positive and negative samples $\mathbf{x}^{p}$ and $\mathbf{x}^{n}$.} This loss pulls the adversarial anchor toward the negatives and pushes it away from the positives, so the network learns to generate convincing adversarial samples.
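The two ingredients around this loss, the projection $\Pi$ and the mis-ranking triplet itself, can be sketched as follows; representing $\epsilon = 16$ on a $[0,1]$ pixel scale as $16/255$ is an assumption:
\begin{verbatim}
# Sketch: l_inf projection and the adversarial triplet loss.
import torch
import torch.nn.functional as F

def project_linf(x_adv, x, eps=16.0 / 255):
    # clamp the perturbation into the eps-ball, keep valid pixels
    return (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)

def adv_triplet_loss(f, xa, xn, xp, m=1.0):
    a, n, p = f(xa), f(xn), f(xp)
    # pull adversarial anchors toward negatives, away from positives
    return F.relu((a - n).norm(dim=1) - (a - p).norm(dim=1) + m).mean()
\end{verbatim}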
\subsection{Mask Generation}
Attacks obtained using the given model $f$ alone generalize poorly to other networks. In order to obtain better transferability, we first compute the gradients with respect to the self-supervised triplet loss $\mathcal{L}_{adv-trip}(\mathbf{x},\mathbf{x}^n,\mathbf{x}^p)$, where $\mathbf{x}^p$ is obtained by augmentation of $\mathbf{x}$ and $\mathbf{x}^n$ is the sample in the batch at maximum Euclidean distance from $\mathbf{x}$. The large gradients are primarily responsible for loss convergence. Since this way of achieving convergence is clearly coupled with $f$, we mask the large gradients. The convergence then no longer depends entirely on the large gradients and shifts to smaller ones, which can also be discriminative. Thus overfitting to $f$ is reduced by using the mask. To obtain the mask, we compute,
\begin{equation}
\mathbf{grad}_{adv-triplet} = \nabla_{\mathbf{x}}\mathcal{L}_{adv-trip}(\mathbf{x},\mathbf{x}^n,\mathbf{x}^p)
\label{eq:grad}
\end{equation}
Note that, we use the real samples in Eq. \ref{eq:grad}.
The mask is given by $\mathbf{M} = sigmoid(\lvert \mathbf{grad}_{adv-triplet} \rvert)$, where $\lvert \cdot \rvert$ denotes the absolute value. We mask $\mathbf{x}$ before feeding it as input to the generator $\mathcal{G}$. The masked input is given by $\mathbf{x} = \mathbf{x}\odot (1-\mathbf{M})$, where $\odot$ denotes the Hadamard product.
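A sketch of the mask computation, reusing the adv_triplet_loss helper above; note that since the sigmoid of a non-negative input lies in $[0.5, 1)$, the masked input is attenuated everywhere, most strongly at high-gradient pixels:
\begin{verbatim}
# Sketch: gradients of the self-supervised triplet loss w.r.t.
# the input are squashed through a sigmoid; large-gradient
# (model-specific) pixels are suppressed before G sees x.
import torch

def gradient_mask(f, x, x_pos, x_neg, m=1.0):
    x = x.clone().requires_grad_(True)
    loss = adv_triplet_loss(f, x, x_neg, x_pos, m)  # defined above
    grad, = torch.autograd.grad(loss, x)
    M = torch.sigmoid(grad.abs())
    return x.detach() * (1 - M)          # masked generator input
\end{verbatim}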
{Masking techniques have also been explored in \cite{parascandolo2020learning, shahtalebi2021sand}, where the idea is to learn a model that does not overfit the training distribution. Our masking technique is motivated by the idea that an adversarial example should be transferable across different re-id models. Our technique is distinct in that it can be applied to an individual sample, whereas the masking in \cite{parascandolo2020learning, shahtalebi2021sand} seeks agreement among the gradients of all samples in a batch
and also suffers from the drawback of hyperparameter tuning. Further, the mask of \cite{parascandolo2020learning} is boolean while ours is continuous.}
\subsection{Meta Learning}
Meta-optimization allows learning from multiple datasets for different tasks while generalizing well on a given task. One of the popular meta-learning approaches, MAML \cite{finn2017model}, applies two update steps: the first happens in an inner loop on a meta-train set, while the second happens in an outer loop on a meta-test set. In our case, we perform the inner-loop update on the discriminator and generator parameters using the meta-train set, and the outer-loop update on the generator parameters using a meta-test set.
\begin{algorithm}[h]
\SetKwData{Left}{left}\SetKwData{This}{this}\SetKwData{Up}{up}
\SetKwFunction{Union}{Union}\SetKwFunction{FindCompress}{FindCompress}
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\Input{Datasets $\mathcal{T}$ and $\mathcal{A}$, model $f$}
\Output{Generator network $\mathcal{G}$ parameters $\boldsymbol{\theta}_g$}
\BlankLine
\While{not converge}{
\For{samples in $\mathcal{T}$}{
\tcc*[h]{Obtain the mask}\\
$\mathbf{M}$ $\leftarrow$ $\sigma$($\lvert \nabla_{\mathbf{x}}{\mathcal{L}_{adv-trip}(\mathbf{x},\mathbf{x}^n,\mathbf{x}^p) } \rvert$)\\
\tcc*[h]{Meta train update using $\mathcal{T}$}\\
$\boldsymbol{\theta}_d \leftarrow \argmax_{\boldsymbol{\theta}_d} E_{\mathbf{x}}\log \mathcal{D}(\mathbf{x}) + E_{\mathbf{x}}\log(1 - \mathcal{D}(\Pi(\mathcal{G}(\mathbf{x}))))$ \\
$\boldsymbol{\theta}_g \leftarrow \argmin_{\boldsymbol{\theta}_g} \mathcal{L}_{\mathcal{G}}^{\mathcal{T}} + \lambda \mathcal{L}_{adv-trip}^{\mathcal{T}}(\mathbf{x}_{adv}^a,\mathbf{x}_{adv}^n,\mathbf{x}_{adv}^p)$\\
$\boldsymbol{\delta} = \mathbf{x} - \Pi(G(\mathbf{x}))$\\
\tcc*[h]{Meta test loss using $\mathcal{A}$}\\
Sample triplets from meta-test set $\mathcal{A}$ and compute $\mathcal{L} = \mathcal{L}_{adv-trip}^{\mathcal{A}}(\mathbf{x}^a - \boldsymbol{\delta},\mathbf{x}^n,\mathbf{x}^p)$\\
}
\tcc*[h]{Meta test update}\\
$\boldsymbol{\theta}_g \leftarrow \argmin_{\boldsymbol{\theta}_g} \lambda \mathcal{L}$\\
}
\caption{{Training for MeGA}}\label{algo_disjdecomp}
\end{algorithm}
More formally, given a network $\mathcal{D}$ parametrized by $\boldsymbol{\theta}_d$ and $\mathcal{G}$ parametrized by $\boldsymbol{\theta}_g$, we perform the meta-training phase to obtain the parameters $\boldsymbol{\theta}_d$ and $\boldsymbol{\theta}_g$. The update steps are given in Algorithm \ref{algo_disjdecomp}.
We also obtain the adversarial perturbation as, $\boldsymbol{\delta} = \mathbf{x} - \Pi(G(\mathbf{x}))$.
We then apply the meta-testing update using the additional meta-test dataset ${\mathcal{A}}$. In Algorithm \ref{algo_disjdecomp},
$\mathcal{L}_{\mathcal{G}}^{\mathcal{T}} = E_{\mathbf{x}}\log(1 - \mathcal{D}(\Pi(\mathcal{G}(\mathbf{x}))))$. We distinguish the datasets using superscripts $\mathcal{T}$ for the meta-train set and $\mathcal{A}$ for the meta-test set. $\mathcal{L}_{adv-trip}^{\mathcal{A}}$ draws its samples $\mathbf{x}$ from $\mathcal{A}$. At the inference stage, we only use $\mathcal{G}$ to generate the adversarial sample.
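One meta-iteration can be sketched as below, reusing the helpers above; the optimizer objects, the batch structure, and the assumption that meta-train and meta-test batches share tensor shapes (so that $\boldsymbol{\delta}$ can be transferred) are illustrative:
\begin{verbatim}
# Sketch of one meta-iteration following Algorithm 1.
import torch

def meta_step(G, D, f, opt_g, opt_d, T_batch, A_batch,
              lam=0.01, eps=16 / 255):
    xa, xp, xn = T_batch                     # meta-train triplet (real)
    x_in = gradient_mask(f, xa, xp, xn)      # masked generator input
    adv = lambda z, ref: project_linf(G(z), ref, eps)
    xa_adv = adv(x_in, xa)
    xp_adv, xn_adv = adv(xp, xp), adv(xn, xn)

    # meta-train: discriminator step on real vs. adversarial samples
    d_loss = -(torch.log(D(xa) + 1e-8).mean()
               + torch.log(1 - D(xa_adv.detach()) + 1e-8).mean())
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # meta-train: generator step (GAN term + mis-ranking triplet)
    g_loss = (torch.log(1 - D(xa_adv) + 1e-8).mean()
              + lam * adv_triplet_loss(f, xa_adv, xn_adv, xp_adv))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    # meta-test: replay the perturbation delta on triplets from A
    ya, yp, yn = A_batch
    delta = xa - project_linf(G(x_in), xa, eps)   # fresh forward pass
    meta_loss = lam * adv_triplet_loss(f, (ya - delta).clamp(0, 1),
                                       yn, yp)
    opt_g.zero_grad(); meta_loss.backward(); opt_g.step()
\end{verbatim}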
\subsection{Training in absence of labels}
A deep mis-ranking loss can be used \cite{wang2020transferable} when labels are available for $\mathcal{T}$. Here we consider the case where no labels are available. In the absence of labels, and inspired by the unsupervised contrastive loss \cite{wang2021understanding}, we generate a positive sample $\mathbf{x}_{adv}^p$ by applying augmentation to the given sample $\mathbf{x}_{adv}^a$. The negative sample $\mathbf{x}_{adv}^n$ is obtained using a batch-hard negative sampling strategy, {that is, we consider all samples except the augmented version of $\mathbf{x}_{adv}^a$ as negative samples and choose the one closest to $\mathbf{x}_{adv}^a$}. We then use {Eq. \ref{eq:adv-triplet}} to obtain the adversarial triplet loss.
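A sketch of the label-free triplet construction follows; the specific augmentation and the use of torchvision's tensor transforms are assumptions:
\begin{verbatim}
# Sketch: positive = augmented anchor, negative = batch-hard
# (nearest other sample in embedding space).
import torch
import torchvision.transforms as T

augment = T.Compose([T.RandomHorizontalFlip(p=1.0),
                     T.ColorJitter(brightness=0.2, contrast=0.2)])

def unsupervised_triplets(f, batch):
    """batch: B x C x H x W tensor; returns (anchor, pos, neg)."""
    pos = torch.stack([augment(img) for img in batch])
    with torch.no_grad():
        emb = f(batch)                        # B x d embeddings
        d = torch.cdist(emb, emb)             # pairwise distances
        d.fill_diagonal_(float('inf'))        # exclude self
    neg = batch[d.argmin(dim=1)]              # closest other sample
    return batch, pos, neg
\end{verbatim}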
\section{Experimental Results}
\subsection{Implementation Details} We implemented the proposed method in the PyTorch framework. The GAN architecture is similar to that of the GAN used in \cite{xiao2018generating, isola2017image}. We use the models from Model Zoo \cite{modelzoo} - OSNet \cite{zhou2019omni}, MLFN \cite{chang2018multi}, HACNN \cite{li2018harmonious}, ResNet-50 and ResNet-50-FC512. We also use AlignedReID \cite{zhang2017alignedreid, AlignedReID}, LightMBN \cite{herzog2021lightweight}, and PCB \cite{sun2018beyond, PCB}.
We use an Adam optimizer with learning rate $10^{-5}$, $\beta_1 = 0.5$ and $\beta_2 = 0.999$, and train the model for 40 epochs. We set $m=1$, {$\lambda = 0.01$}, and $\epsilon = 16$. In order to stabilize GAN training, we apply label flipping with 5\% flipped labels. We first present ablations for the mask and for meta-learning.
\subsection{Effect of mask $\mathbf{M}$}
We find that when we compute the mask with Resnet50 and test on different models such as MLFN \cite{chang2018multi} and HACNN \cite{li2018harmonious}, there is a substantial gain in performance, as shown in Table \ref{tab:resnet50_mask}. In terms of R-1 accuracy, introducing the mask gives a boost of 42.10\% and 4.8\% for MLFN and HACNN respectively. This indicates that the mask provides better transferability. When evaluating on Resnet50 itself, there is only a minor change in performance, which could be because the mask is learnt using Resnet50 itself.
\begin{table}[H]
\caption{Trained on Market-1501 \cite{zheng2015scalable}. Setting Market-1501 $\rightarrow$ Market-1501. $l$ indicates Market-1501 labels are used for training. $\mathbf{M}$ indicates the incorporation of mask. ``Before'' indicates accuracy on clean samples.}
\label{tab:resnet50_mask}
\centering
{
\begin{tabular}{c|c c | c c | c c }
\hline
Model &\multicolumn{2}{c|}{Resnet50} &\multicolumn{2}{c|}{MLFN} &\multicolumn{2}{c}{HACNN} \\
& mAP &R-1 &mAP& R-1&mAP &R-1 \\
\hline
Before & 70.4& 87.9 & 74.3 &90.1 & 75.6& 90.9\\
$l$ & {0.66} & {0.41} & 3.95 &3.23 & 32.57& 42.01 \\
{$l+\text{AND}$}&{0.56} & {0.35} & 5.39 & 4.55 & 35.13 &44.20\\
{$l+\text{SAND}$}&\textbf{0.51} & \textbf{0.33} & 6.01 & 4.89 & 37.50 &45.11\\
$l+\mathbf{M}$ &0.69 & 0.50 &\textbf{2.80} & \textbf{1.87} & \textbf{31.73} & \textbf{39.99} \\
\hline
\end{tabular}
}
\end{table}
\subsection{Effect of meta learning}
We demonstrate the effect of meta-learning in Table \ref{tab:resnet50_meta}. In both the cross-dataset (Resnet50) and cross-dataset cross-model (MLFN) settings, we observe that introducing meta-learning gives a significant performance boost. In terms of R-1 accuracy, the boost is 69.87\% and 69.29\% for Resnet50 and MLFN respectively. We further observe that Resnet50 does not transfer well to HACNN. This could be for two reasons: first, Resnet50 is a basic model compared to other, superior PRID models; second, HACNN is built on Inception units \cite{szegedy2017inception}.
\begin{table}[H]
\caption{Trained on Market-1501 using MSMT-17 \cite{wei2018person} as meta test set. Setting Market-1501 $\rightarrow$ DukeMTMC-reID \cite{zheng2017unlabeled}. $\mathcal{A}$ indicates incorporation of meta learning.}
\label{tab:resnet50_meta}
\centering
{
\begin{tabular}{c|c c | c c | c c }
\hline
{Model} &\multicolumn{2}{c|}{Resnet50} &\multicolumn{2}{c|}{MLFN} &\multicolumn{2}{c}{HACNN} \\
& mAP &R-1 &mAP& R-1&mAP &R-1 \\
\hline
Before & 58.9 & 78.3 & 63.2& 81.1 & 63.2&80.1 \\
$l$ & 17.96 & 24.86 & 18.25& 24.10 & \textbf{42.75} &\textbf{58.48} \\
$l+\mathcal{A}$ &\textbf{5.80} & \textbf{7.49} & \textbf{6.15} & \textbf{7.4} & 43.12& 58.97\\
\hline
\end{tabular}
}
\end{table}
\subsection{Adversarial attack performance}
We first present the results for the cross-model attack in Table \ref{tab:aligned_source_market}. We use the AlignedReID model, Market-1501 \cite{zheng2015scalable} as the training set and MSMT-17 \cite{wei2018person} as the meta-test set. The results are reported for Market-1501 and DukeMTMC-reID \cite{zheng2017unlabeled}. In the case of Market-1501, it is clearly evident that the proposed method achieves strong transferability. Incorporating the meta-test set reduces the mAP and R-1 results to less than half of their values compared to the case when only labels are used. For instance, the mAP and R-1 of AlignedReID go down from 7.00\% and 6.38\% to 3.51\% and 2.82\% respectively. This is consistently observed for all three models. Further, the combined usage of mask and meta-learning ($l+\mathbf{M}+\mathcal{A}$), denoted MeGA, achieves the best results in the cross-model cases of PCB and HACNN, with respective R-1 improvements of 10.00\% and 9.10\%. Thus our method is extremely effective in generating adversarial samples.
\begin{table}[H]
\caption{AlignedReID trained on Market-1501 with MSMT-17 as meta test set. M is Market-1501 and D is DukeMTMC-reID. MeGA denotes $l+\mathbf{M}+\mathcal{A}$.}
\label{tab:aligned_source_market}
\centering
\resizebox{\columnwidth}{!}
{
\begin{tabular}{c|c| c c | c c |c c }
\hline
& {Model} &\multicolumn{2}{c|}{AlignedReID} &\multicolumn{2}{c|}{PCB} &\multicolumn{2}{c}{HACNN} \\
& & mAP &R-1 &mAP& R-1&mAP &R-1 \\
\hline
M $\rightarrow$ M & Before & 77.56 & 91.18 & 78.54 & 92.87 & 75.6&90.9 \\
\cline{2-8}
&$l$ & 7.00 & 6.38 & 16.46 & 29.69 & 16.39 & 20.16\\
\cline{2-8}
&$l$ + $\mathbf{M}$ & 6.62& 5.93 & 15.96 & 28.94 & 16.01 & 19.47\\ \cline{2-8}
&$l+\mathcal{A}$ & \textbf{3.51} & \textbf{2.82} & 8.07 & 13.86 & 5.44& 5.28 \\
\cline{2-8}
&MeGA& 5.50 & 5.07 & \textbf{7.39} &\textbf{12.47} & \textbf{4.85} & \textbf{4.80} \\
\hline
M $\rightarrow$ D& $l$ & 16.04 & 21.14 & 13.35 & 15.66 & 15.94 & 21.85 \\
\cline{2-8}
&$l+\mathbf{M}$ & 16.23 & 21.72 & 13.70 & 15.97 & 16.43 & 22.17 \\
\cline{2-8}
&$l+\mathcal{A}$ & \textbf{4.69} & \textbf{5.70} & \textbf{11.10} & \textbf{12.88} & 5.40 & 6.55\\
\cline{2-8}
&MeGA & 7.70 & 9.47 & 11.81 & 14.04& \textbf{4.73} & \textbf{5.40} \\
\hline
\end{tabular}
}
\end{table}
In the case of Market-1501 to DukeMTMC-reID, we observe that simply applying meta-learning ($l+\mathcal{A}$) generalizes very well. For AlignedReID, the mAP and R-1 of 4.69\% and 5.70\%, respectively, are significantly lower than the results obtained in the $l$ or $l+\mathbf{M}$ settings. The combined setting of mask and meta-learning yields better results for HACNN than for AlignedReID and PCB. This may be because the learning of the mask is still tied to the training set and can thus result in overfitting.
In Table \ref{tab:market-msmt_meta_duke} we present the results for the cross-dataset and cross-model case against more models. Here also we can see that both AlignedReID and PCB lead to strong attacks against other models on a different dataset.
In Table \ref{tab:aligned_msmt}, we present the results for MSMT-17. Here, the attack is trained with AlignedReID and PCB on Market-1501, using DukeMTMC-reID as the meta-test set. When trained and tested using AlignedReID, the R-1 accuracy drops from 67.6\% on clean samples to 17.69\%. On the other hand, when trained using PCB and tested on AlignedReID, the performance drops to 16.70\%. This shows that our attack is very effective on large-scale datasets such as MSMT-17.
\tabcolsep=4pt
\begin{table*}[tb]
\caption{AlignedReID and PCB trained on Market with MSMT-17 as meta test set. Setting Market-1501 $\rightarrow$ DukeMTMC-reID.}
\label{tab:market-msmt_meta_duke}
\centering
{
\begin{tabular}{c| c c c |c c c | c c c | c c c | c c c| c c c}
\hline
{Model} &\multicolumn{3}{c|}{OSNet} & \multicolumn{3}{c|}{{LightMBN}} & \multicolumn{3}{c|}{ResNet50} & \multicolumn{3}{c|}{MLFN} & \multicolumn{3}{c|}{ResNet50FC512} & \multicolumn{3}{c}{HACNN} \\
& mAP &R-1 &R-10 & mAP &R-1 &R-10 & mAP &R-1 &R-10 & mAP &R-1 &R-10 & mAP &R-1 &R-10 & mAP &R-1 &R-10 \\
\hline
Before & 70.2& 87.0 & - & 73.4 & 87.9 & - & 58.9 & 78.3& - & 63.2& 81.1 &-& 64.0 & 81.0& -& 63.2 & 80.1& - \\\hline
AlignedReID & 15.31 & 22.30 & 35.00 & 16.24 & 24.13 &39.65 & 5.17 & 6.64 & 13.77 & 12.28& 16.38 & 29.39 &6.97 & 9.69&19.38 & 4.77 & 5.61 & 11.98\\
\hline
PCB & 12.27 & 14.45 & 27.49 & 12.88 & 15.70 & 28.54& 7.14 & 8.55 & 20.01 & 11.95 & 16.54 & 30.92 & 9.45 & 11.46 & 23.90 & 3.97 & 4.66 & 10.00 \\
\hline
\end{tabular}
}
\end{table*}
\begin{table}[H]
\caption{Trained on Market-1501 using DukeMTMC-reID as meta test set. Setting Market-1501 $\rightarrow$ MSMT-17.}
\label{tab:aligned_msmt}
\centering
{
\begin{tabular}{c| c c c }
\hline
{Model} &\multicolumn{3}{c}{AlignedReID} \\
& mAP &R-1 &R-10 \\
\hline
MeGA (AlignedReID)& 9.37 & 17.69 & 33.42 \\
\hline
MeGA (PCB) & 8.82 & 16.70 & 31.98\\
\hline
\end{tabular}
}
\end{table}
\begin{figure}[H]
\centering
\includegraphics[width = 0.8cm,height = .9cm, cfbox=red 1pt 1pt]{prid_images/original_img.png} \hspace{4mm}
\includegraphics[width = 0.8cm,height = .9cm]{prid_images/fake_samples__epoch_5.png}
\includegraphics[width = 0.8cm,height = .9cm]{prid_images/fake_samples__epoch_5_seq_11.png}
\includegraphics[width = 0.8cm,height = .9cm]{prid_images/fake_samples__epoch_7.png}
\includegraphics[width = 0.8cm,height = .9cm]{prid_images/fake_samples__epoch_10.png}
\includegraphics[width = 0.8cm,height = .9cm]{prid_images/fake_samples__epoch_20.png}\\
\includegraphics[width = 0.8cm,height = .9cm, cfbox=blue 1pt 1pt]{prid_images/market-mask.png} \hspace{4mm}
\includegraphics[width = 0.8cm,height = .9cm]{prid_images/epoch_5_query_4.jpeg}
\includegraphics[width = 0.8cm,height = .9cm]{prid_images/epoch_5_query_11.jpeg}
\includegraphics[width = 0.8cm,height = .9cm]{prid_images/epoch_7_query_4.jpeg}
\includegraphics[width = 0.8cm,height = .9cm]{prid_images/epoch_10_query_4.jpeg}
\includegraphics[width = 0.8cm,height = .9cm]{prid_images/epoch_20_query_4.jpeg}
\caption{{Left column: red and blue boxes show the given image from Market-1501 and its mask ($1-M$), respectively.
Right column:} attacked (top) and clean (bottom) images from MSMT-17.}
\label{fig:subjective}
\end{figure}
\subsection{Comparison with SOTA models}
In Table \ref{tab:comparison_aligned_TCIAA} we present the comparison with TCIAA \cite{wang2020transferable}, UAP \cite{li2019universal} and Meta-attack \cite{yang2021learning}. We observe that our method outperforms TCIAA by a large margin. We can also see that when the mis-ranking loss is naively applied in the case of TCIAA$^\dagger$ \cite{yang2021learning}, the attack performance degrades. Our attack performs better than both TCIAA and Meta-attack.
\begin{table}[H]
\caption{AlignedReID trained on Market with MSMT-17 as meta test set. Setting Market-1501 $\rightarrow$ DukeMTMC-reID. $^\dagger$ uses PersonX \cite{sun2019dissecting} as an extra dataset. $^*$ uses PersonX for meta learning. }
\label{tab:comparison_aligned_TCIAA}
\centering
{
\begin{tabular}{c| c c c }
\hline
{Model} &\multicolumn{3}{c}{AlignedReID} \\
& mAP &R-1 &R-10 \\
\hline
Before & 67.81 &80.50 & 93.18 \\
\hline
TCIAA \cite{wang2020transferable}&14.2 & 17.7 & 32.6 \\
{MeGA$^*$ (Ours)} & {11.34} & {12.81} & {24.11} \\
MeGA (Ours) & \textbf{7.70} & \textbf{9.47} & \textbf{19.16} \\
\hline
& \multicolumn{3}{c}{PCB} \\
Before & 69.94 &84.47 & - \\
\hline
TCIAA \cite{wang2020transferable} & 31.2 & 45.4 & - \\
TCIAA$^\dagger$ \cite{wang2020transferable} & 38.0 & 51.4 & - \\
UAP \cite{li2019universal} & 29.0 & 41.9 & - \\
Meta-attack$^*$ ($\epsilon = 8$) \cite{yang2021learning} &26.9 & 39.9 & \\
\hline
{MeGA$^*$ ($\epsilon = 8$) (Ours)} & {22.91} & {31.70} & - \\
MeGA ($\epsilon = 8$) (Ours) & \textbf{18.01} & \textbf{21.85} & 44.29 \\
\hline
\end{tabular}
}
\end{table}
\subsection{Subjective Evaluation}
We show example images obtained by our algorithm in Figure \ref{fig:subjective} and top-5 retrieval results for the OSNet model in Figure \ref{fig:retrieved_results}. For clean samples, the top-3 retrieved images match the query ID; however, none of the retrieved images match the query ID in the presence of our attack.
\begin{figure}[h]
\centering
\includegraphics[width = .9cm,height = 1.1cm, cfbox=blue 1pt 1pt]{retrieved_images/query_top000_name_0458_c1s6_032271_00.jpg}
\includegraphics[width = .9cm,height = 1.1cm, cfbox=green 1pt 1pt]{retrieved_images/clean_top001_name_0458_c4s6_032891_03.jpg}
\includegraphics[width = .9cm,height = 1.1cm, cfbox=green 1pt 1pt]{retrieved_images/clean_top002_name_0458_c5s3_081437_04.jpg}
\includegraphics[width = .9cm,height = 1.1cm, cfbox=green 1pt 1pt]{retrieved_images/clean_top003_name_0458_c5s3_081637_05.jpg}
\includegraphics[width = .9cm,height = 1.1cm, cfbox=red 1pt 1pt]{retrieved_images/clean_top004_name_0001_c1s2_037091_02.jpg}
\includegraphics[width = .9cm,height = 1.1cm, cfbox=red 1pt 1pt]{retrieved_images/clean_top005_name_0001_c1s6_011741_02.jpg}\\
\includegraphics[width = .9cm,height = 1.1cm]{retrieved_images/emptimage.png}
\includegraphics[width = .9cm,height = 1.1cm, cfbox=red 1pt 1pt]{retrieved_images/fake_top001_name_0431_c5s1_105373_04.jpg}
\includegraphics[width = .9cm,height = 1.1cm, cfbox=red 1pt 1pt]{retrieved_images/fake_top002_name_0431_c2s1_104821_01.jpg}
\includegraphics[width = .9cm,height = 1.1cm, cfbox=red 1pt 1pt]{retrieved_images/fake_top003_name_0431_c2s1_104746_02.jpg}
\includegraphics[width = .9cm,height = 1.1cm, cfbox=red 1pt 1pt]{retrieved_images/fake_top004_name_0000_c3s1_081467_04.jpg}
\includegraphics[width = .9cm,height = 1.1cm, cfbox=red 1pt 1pt]{retrieved_images/fake_top005_name_0431_c5s1_105323_03.jpg}
\caption{Query image marked with a blue border. Top: top-5 {retrieved} images from OSNet for Market-1501; green boxes are correct matches and red ones are incorrect. Bottom: retrieved images after attacking the query sample.}
\label{fig:retrieved_results}
\end{figure}
\subsection{Attack using unlabelled source}
In this section we discuss the attack when the source dataset $\mathcal{T}$ is unlabeled and neither the victim model nor the dataset used to train it is available. This is a very challenging scenario, as supervised models cannot be used for the attack. Towards this, we use unsupervised models trained on Market-1501 and MSMT-17 from \cite{ge2020self}. In Table \ref{tab:train_msmt_test_market}, we present results for training on MSMT-17 and testing on Market-1501. We observe that IBN R50 obtains a mAP and R-1 accuracy of 40.7\% and 52.34\% when neither labels nor mask are used. When the mask is incorporated, the attack improves substantially, by 3.82\% in mAP and 4.81\% in R-1 accuracy in the case of OSNet. These gains are even higher for MLFN and HACNN.
For Market-1501 to MSMT-17 in Table \ref{tab:market-msmt}, we see that the attack using only the mask performs reasonably well compared to attacks using labels or both labels and mask. Due to the comparatively small size of Market-1501, even the attacks using labels are not very efficient.
\begin{table}[H]
\caption{MSMT-17 $\rightarrow$ Market-1501. R50 denotes Resnet50.}
\label{tab:train_msmt_test_market}
\centering
{
\begin{tabular}{c| c c | c c| c c}
\hline
{Model} &\multicolumn{2}{c|}{OSNet} & \multicolumn{2}{c|}{MLFN} & \multicolumn{2}{c}{HACNN} \\
& mAP &R-1 & mAP &R-1 & mAP &R-1 \\
\hline
Before &82.6 & 94.2 & 74.3 & 90.1 & 75.6 & 90.9 \\
\hline
$l$ (R50) & 30.50 & 39.45 & 26.37 & 38.03 & 31.15 & 39.34\\
$l+\mathbf{M}$ (R50) &24.50 &33.07 & 21.76 & 32.18 & 18.81&23.66 \\
$\mathbf{M}$ (R50) & 36.5 &47.56 & 34.92& 52.61 &31.15 &39.34 \\
\hline
\hline
IBN R50 & 40.7 & 52.34 & 40.62 & 61.46 & 35.44 & 44.84 \\
\hline
$\mathbf{M}$ (IBN R50) & 36.88 & 47.53 & 35.01 & 52.79 & 30.98& 38.98 \\
\hline
\end{tabular}
}
\end{table}
\begin{table}[H]
\caption{ Market-1501 $\rightarrow$ MSMT-17.}
\label{tab:market-msmt}
\centering
{
\begin{tabular}{c| c c | c c| c c}
\hline
{Model} &\multicolumn{2}{c|}{OSNet} & \multicolumn{2}{c|}{MLFN} & \multicolumn{2}{c}{HACNN} \\
& mAP &R-1 & mAP &R-1 & mAP &R-1 \\
\hline
Before & 43.8 & 74.9 & 37.2 & 66.4 & 37.2 &64.7\\
$l$ (R50) & 31.78 & 60.43 & 25.17 & 49.33 & 28.9&54.91\\
$l+\mathbf{M}$ (R50) &29.04 &56.11 & 22.02 & 43.57 &28.26 &53.53 \\
\hline
$\mathbf{M}$ (R50) & 35.16 & 66.28 &29.16 & 56.65 &29.69& 57.81 \\
\hline
\end{tabular}
}
\end{table}
\section{Conclusion}
We presented a generative adversarial attack method using masking and meta-learning. The mask enables better transferability across different networks, whereas meta-learning provides better generalizability. We presented elaborate results under various settings, and our ablations show the importance of the mask and of meta-learning. Extensive experiments on Market-1501, MSMT-17 and DukeMTMC-reID show the efficacy of the proposed method.
\bibliographystyle{IEEEtran}
| {'timestamp': '2023-01-18T02:17:49', 'yymm': '2301', 'arxiv_id': '2301.06286', 'language': 'en', 'url': 'https://arxiv.org/abs/2301.06286'} |
\section{Introduction}
While the first traffic signals were controlled completely in open loop, various approaches have been taken to adjust the green light allocation based on the current traffic situation, to mention a few, SCOOT~\cite{robertson1991optimizing}, UTOPIA~\cite{mauro1990utopia} and SCATS~\cite{sims1980sydney}. Learning-based approaches have also been taken, e.g.,~\cite{JIN20175301}.
However, these approaches lack formal stability, optimality, and robustness guarantees. In~\cite{nilsson2015entropy, nilsson2017generalized}, a decentralized feedback controller for traffic control was proposed, referred to as the Generalized Proportional Allocation (GPA) controller, which has both stability and maximal throughput guarantees. In those papers, an averaged control action for traffic signals in continuous time is given. Since the controller has several desired properties, it is well motivated to investigate how it performs in a micro-simulator with more realistic traffic dynamics. First of all, under the assumption that the controller can measure the full queue lengths at each junction, the averaged controller is throughput optimal from a theoretical perspective. By this, we mean that when the traffic dynamics are modeled as a simple system of point queues, no controller can handle larger constant exogenous inflows to the network than this one. This throughput-optimality also provides formal guarantees that the controller will not create gridlock situations in the network. As exemplified in~\cite{varaiya2013max}, feedback controllers that perform well for a single isolated junction may cause gridlock situations in a network setting.
At the same time, the controller requires very little information about the network topology and the traffic flow propagation. All the information the controller needs to determine the phase activation in a junction is the queue lengths on the incoming lanes and the static set of phases. These information requirements make the controller fully distributed, i.e., computing the control action in one junction requires no information about the state of the other junctions.
The proposed traffic signal controller also adjusts the cycle lengths depending on the demand. That cycle lengths should be longer during higher demand, so that less service time is wasted on phase shifts, has been suggested previously for open-loop traffic signal control, see e.g.,~\cite{roess2011traffic}.
Another feedback control strategy for traffic signal control is the MaxPressure controller~\cite{Varaiya:13, varaiya2013max}. The MaxPressure controller utilizes the same idea as the BackPressure controller, proposed for communication networks in~\cite{tassiulas1992stability}. While the BackPressure controller controls both the routing (to which queues packets should proceed after received service) and the scheduling (which subset of queues should be served), the MaxPressure controller only controls the latter, i.e., the phase activation but not the routing. More recently, due to the rapid development of autonomous vehicles, it has been proposed in~\cite{zaidi2018backpressure} to utilize the routing control from the BackPressure controller in traffic networks as well. The MaxPressure controller is also throughput optimal, but it requires information about the turning ratios at each junction, i.e., how the vehicles (on average) propagate from one junction to the neighboring junctions. Although various techniques for estimating those turning ratios exist, for example~\cite{coogan2017traffic}, with more and more drivers or autonomous vehicles doing their path planning through some routing service, it is likely that the turning ratios can change in an unpredictable way when a disturbance occurs in the traffic network.
If the traffic signal controller has information about the turning ratios, other control strategies are possible as well, for instance, MPC-like control as proposed in~\cite{hao2018modelI, hao2018modelII, grandinetti2018distributed} and robust control as proposed in~\cite{bianchin2018network}.
In~\cite{nilsson2018} we presented the first discretization and validation results for the GPA in a microscopic traffic simulator. Although the results were promising, the validations were only performed on an artificial network and only compared with a fixed-time traffic signal controller. Moreover, the GPA was only discretized in a way such that the full cycle is always activated. In this paper, we extend the results in~\cite{nilsson2018} by presenting another discretization that does not have to utilize the full cycle, and we also perform new validations. The new validations both compare the GPA to the MaxPressure controller on an artificial network (the reason for choosing an artificial network will be explained later), and validate the GPA controller in a realistic scenario, namely the city of Luxembourg during a whole day.
The outline of the paper is as follows: In Section~\ref{sec:problem} we present the model we are using for traffic signals, together with a problem formulation of the traffic signal control problem. In Section~\ref{sec:controllers} we present the two different discretizations of the GPA that we are using in this study, and also give a brief description of the MaxPressure controller. In Section~\ref{sec:comparision} we compare the GPA controller with the MaxPressure controller on an artificial Manhattan-like grid, and in Section~\ref{sec:lust} we investigate how the GPA controller performs in a realistic traffic scenario. The paper is concluded with some ideas for further research.
\subsection{Notation}
We let $\mathbb{R}_+$ denote the non-negative reals. For finite sets $\mathcal A, \mathcal B$, we let $\mathbb{R}_+^{\mathcal A}$ denote the non-negative vectors indexed by the elements of $\mathcal A$, and $\mathbb{R}_+^{\mathcal A \times \mathcal B}$ the non-negative matrices indexed by the elements of $\mathcal A$ and $\mathcal B$.
\section{Model and Problem Formulation}\label{sec:problem}
In this section, we describe the model for traffic signals to be used throughout the paper together with the associated control problem.
We consider an arterial traffic network with signalized junctions. Let $\mathcal J$ denote the set of signalized junctions. For a junction $j \in \mathcal J$, we let $\mathcal L^{(j)}$ be the set of incoming lanes, on which the vehicles can queue up. The set of all signalized lanes in the whole network will be denoted by $\mathcal L = \cup_{j \in \mathcal J} \mathcal L^{(j)}$. For a lane $l \in \mathcal L^{(j)}$, the queue length at time $t$, measured in number of vehicles, is denoted by $x_l(t)$.
Each junction has a predefined set of \emph{phases} $\mathcal P^{(j)}$ of size $n_{p_j}$. For simplicity, we assume that phases $p_i \in \mathcal P^{(j)}$ are indexed by $i = 1, \ldots, n_{p_j}$. A phase $p \in \mathcal P^{(j)}$ is a subset of incoming lanes to the junction $j$ that can receive green light simultaneously. Throughout the paper, we will assume that for each lane $l \in \mathcal L$, there exists only one junction $j \in \mathcal J$ and at least one phase $p \in \mathcal P^{(j)}$ such that $l \in p$.
The phases are usually constructed such that the vehicles' paths in a junction do not cross each other, in order to avoid collisions.
Examples of this will be shown later in this paper. After a phase has been activated, it is common to signal to the drivers that the traffic light is turning red, and to give vehicles that are in the middle of the junction time to leave it before the next phase is activated. Such time is usually referred to as clearance time. Throughout the paper we shall refer to those phases only containing red and yellow traffic lights as \emph{clearance phases} (in contrast to the phases that model when lanes receive a green light). We will assume that each phase activation is followed by a clearance phase activation. While we will let the phase activation time vary, we make the quite natural assumption that the clearance phases have to be activated for a fixed time.
For a given junction $j \in \mathcal J$, the set of phases can be described through a phase matrix $P^{(j)}$, where
$$P^{(j)}_{il}=\begin{cases}1&\text{if lane }l\text{ belongs to the }i\text{-th phase}\\ 0&\text{otherwise\,.}\end{cases}$$
While the phase matrix does not contain the clearance phases, to each phase $p \in \mathcal P^{(j)}$ we will associate a clearance phase, denoted $p'$. We denote by $\bar{\mathcal P}^{(j)}$ the set of phases together with their corresponding clearance phases.
The controller's task in a signalized junction is to define a \emph{signal program}, $\mathcal T^{(j)} = \{ (p, t_\text{end} ) \in \bar{\mathcal P}^{(j)} \times \mathbb{R}_+ \}$, where the phase $p$ is activated until time $t_\text{end}$. At any time $t$, the active phase is the one whose end-time is the smallest end-time in the program greater than $t$. Formally, we can define the function $c^{(j)}(t)$ that gives the phase that is activated at time $t$ as follows
\begin{align*}
c^{(j)} (t) = \{ & p \mid (p, t_\text{end}) \in { \mathcal T}^{(j)}, \\ & t_\text{end} > t \text{ and } t_\text{end} \leq t'_\text{end} \textrm{ for all } (p', t'_\text{end}) \in { \mathcal T}^{(j)} \text{ with } t'_\text{end} > t \} \, .
\end{align*}
In words, $c^{(j)}(t)$ returns the phase whose end-time is the smallest end-time greater than the current time.
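To make this lookup concrete, the following Python sketch (our own illustrative helper, not part of any released implementation) returns the active phase from a signal program represented as a list of (phase, end-time) pairs; the program used at the bottom matches Example~\ref{ex:phasesandprogram} below.
\begin{verbatim}
def active_phase(program, t):
    """Return the phase whose end-time is the smallest
    end-time strictly greater than the current time t.
    `program` is a list of (phase, t_end) pairs."""
    upcoming = [(p, t_end) for (p, t_end) in program if t_end > t]
    if not upcoming:
        return None  # program ended; a new one must be computed
    return min(upcoming, key=lambda entry: entry[1])[0]

# T = {(p1, 25), (p1', 30), (p2, 55), (p2', 60)}
program = [("p1", 25), ("p1'", 30), ("p2", 55), ("p2'", 60)]
assert active_phase(program, 10) == "p1"
assert active_phase(program, 27) == "p1'"
\end{verbatim}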
\medskip
\begin{example} \label{ex:phasesandprogram}
\begin{figure}
\centering
\begin{tikzpicture}
\begin{scope}[scale=0.5]
\draw[thick] (-3, 1) -- (-1, 1) -- (-1,3);
\draw[thick] (-3, -1) -- (-1, -1) -- (-1, -3);
\draw[thick] (1,3) -- (1,1) -- (3,1);
\draw[thick] (1, -3) -- (1,-1) -- (3, -1);
\draw [->, thick] (-1, -0.5) to [bend right] (0.3, 1);
\draw [->, thick] (-1, -0.5) to (1, -0.5);
\draw [->, thick] (-1, -0.5) to [bend left] (-0.7, -1);
\draw [->, thick] (1, 0.5) to [bend left] (0.7, 1);
\draw [->, thick] (1, 0.5) to (-1, 0.5);
\draw [->, thick] (1, 0.5) to [bend right] (-0.3, -1);
\node (l1) at (-1.5, -0.5) {$l_1$};
\node (l2) at (0.5, -1.5) {$l_2$};
\node (l3) at (1.5, 0.5) {$l_3$};
\node (l4) at (-0.5, 1.5) {$l_4$};
\end{scope}
\begin{scope}[scale=0.5, shift={(7, 0)}]
\begin{scope}[rotate=90]
\draw[thick] (-3, 1) -- (-1, 1) -- (-1,3);
\draw[thick] (-3, -1) -- (-1, -1) -- (-1, -3);
\draw[thick] (1,3) -- (1,1) -- (3,1);
\draw[thick] (1, -3) -- (1,-1) -- (3, -1);
\draw [->, thick] (-1, -0.5) to [bend right] (0.3, 1);
\draw [->, thick] (-1, -0.5) to (1, -0.5);
\draw [->, thick] (-1, -0.5) to [bend left] (-0.7, -1);
\draw [->, thick] (1, 0.5) to [bend left] (0.7, 1);
\draw [->, thick] (1, 0.5) to (-1, 0.5);
\draw [->, thick] (1, 0.5) to [bend right] (-0.3, -1);
\draw[dashed] (0, 3) -- (0,1);
\draw[dashed] (-3, 0) -- (-1, 0);
\draw[dashed] (0, -1) -- (0, -3);
\draw[dashed] (3, 0) -- (1, 0);
\end{scope}
\node (l1) at (-1.5, -0.5) {$l_1$};
\node (l2) at (0.5, -1.5) {$l_2$};
\node (l3) at (1.5, 0.5) {$l_3$};
\node (l4) at (-0.5, 1.5) {$l_4$};
\end{scope}
\begin{scope}[scale=0.5]
\draw[dashed] (0, 3) -- (0,1);
\draw[dashed] (-3, 0) -- (-1, 0);
\draw[dashed] (0, -1) -- (0, -3);
\draw[dashed] (3, 0) -- (1, 0);
\end{scope}
\end{tikzpicture}
\caption{The phases for the junction in Example~\ref{ex:phasesandprogram}. This junction has four incoming lanes and two phases, $p_1 = \{l_1, l_3\}$ and $p_2 = \{l_2, l_4\}$. Hence there is no dedicated left-turn phase.}
\label{fig:phasesexamplejunc}
\end{figure}
\begin{figure}
\centering
\begin{tikzpicture}[scale=1.2]
\draw[->] (0, 0) -- (6.5, 0) node[right] {$t$};
\draw[-] (0, 0.1) -- (0, -0.1) node[below] {$0$} ;
\draw[-] (2.5, 0.1) -- (2.5, -0.1) node[below] {$25$} ;
\draw[-] (3, 0.1) -- (3, -0.1) node[below] {$30$} ;
\draw[-] (5.5, 0.1) -- (5.5, -0.1) node[below] {$55$} ;
\draw[-] (6, 0.1) -- (6, -0.1) node[below] {$60$};
\node (c) at (0, 0.4) {$c(t)$};
\node (p1) at (1.25, 0.4) {$p_1$};
\node (p1p) at (2.75, 0.4) {$ p_1'$};
\node (p2) at (4.25, 0.4) {$p_2$};
\node (p2p) at (5.75, 0.4) {$ p_2'$};
\begin{scope}[scale=0.20, shift={(6, 8)}]
\draw[thick] (-3, 1) -- (-1, 1) -- (-1,3);
\draw[thick] (-3, -1) -- (-1, -1) -- (-1, -3);
\draw[thick] (1,3) -- (1,1) -- (3,1);
\draw[thick] (1, -3) -- (1,-1) -- (3, -1);
\fill[mygreen] (-1, -1) -- (-1.4, -1) -- (-1.4, 0) -- (-1, 0) -- cycle;
\fill[mygreen] (1, 1) -- (1.4, 1) -- (1.4, 0) -- (1, 0) -- cycle;
\fill[myred] (1, -1) -- (1, -1.4) -- (0, -1.4) -- (0, -1) -- cycle;
\fill[myred] (-1, 1) -- (-1,1.4) -- (0, 1.4) -- (0, 1) -- cycle;
\end{scope}
\begin{scope}[scale=0.20, shift={(13.75, 8)}]
\draw[thick] (-3, 1) -- (-1, 1) -- (-1,3);
\draw[thick] (-3, -1) -- (-1, -1) -- (-1, -3);
\draw[thick] (1,3) -- (1,1) -- (3,1);
\draw[thick] (1, -3) -- (1,-1) -- (3, -1);
\fill[myyellow] (-1, -1) -- (-1.4, -1) -- (-1.4, 0) -- (-1, 0) -- cycle;
\fill[myyellow] (1, 1) -- (1.4, 1) -- (1.4, 0) -- (1, 0) -- cycle;
\fill[myred] (1, -1) -- (1, -1.4) -- (0, -1.4) -- (0, -1) -- cycle;
\fill[myred] (-1, 1) -- (-1,1.4) -- (0, 1.4) -- (0, 1) -- cycle;
\end{scope}
\begin{scope}[scale=0.20, shift={(21.25, 8)}]
\draw[thick] (-3, 1) -- (-1, 1) -- (-1,3);
\draw[thick] (-3, -1) -- (-1, -1) -- (-1, -3);
\draw[thick] (1,3) -- (1,1) -- (3,1);
\draw[thick] (1, -3) -- (1,-1) -- (3, -1);
\fill[myred] (-1, -1) -- (-1.4, -1) -- (-1.4, 0) -- (-1, 0) -- cycle;
\fill[myred] (1, 1) -- (1.4, 1) -- (1.4, 0) -- (1, 0) -- cycle;
\fill[mygreen] (1, -1) -- (1, -1.4) -- (0, -1.4) -- (0, -1) -- cycle;
\fill[mygreen] (-1, 1) -- (-1,1.4) -- (0, 1.4) -- (0, 1) -- cycle;
\end{scope}
\begin{scope}[scale=0.20, shift={(28.75, 8)}]
\draw[thick] (-3, 1) -- (-1, 1) -- (-1,3);
\draw[thick] (-3, -1) -- (-1, -1) -- (-1, -3);
\draw[thick] (1,3) -- (1,1) -- (3,1);
\draw[thick] (1, -3) -- (1,-1) -- (3, -1);
\fill[myred] (-1, -1) -- (-1.4, -1) -- (-1.4, 0) -- (-1, 0) -- cycle;
\fill[myred] (1, 1) -- (1.4, 1) -- (1.4, 0) -- (1, 0) -- cycle;
\fill[myyellow] (1, -1) -- (1, -1.4) -- (0, -1.4) -- (0, -1) -- cycle;
\fill[myyellow] (-1, 1) -- (-1,1.4) -- (0, 1.4) -- (0, 1) -- cycle;
\end{scope}
\begin{scope}[scale=0.20, shift={(6, 8)}]
\draw[dashed] (0, 3) -- (0,1);
\draw[dashed] (-3, 0) -- (-1, 0);
\draw[dashed] (0, -1) -- (0, -3);
\draw[dashed] (3, 0) -- (1, 0);
\end{scope}
\begin{scope}[scale=0.20, shift={(13.75, 8)}]
\draw[dashed] (0, 3) -- (0,1);
\draw[dashed] (-3, 0) -- (-1, 0);
\draw[dashed] (0, -1) -- (0, -3);
\draw[dashed] (3, 0) -- (1, 0);
\end{scope}
\begin{scope}[scale=0.20, shift={(21.25, 8)}]
\draw[dashed] (0, 3) -- (0,1);
\draw[dashed] (-3, 0) -- (-1, 0);
\draw[dashed] (0, -1) -- (0, -3);
\draw[dashed] (3, 0) -- (1, 0);
\end{scope}
\begin{scope}[scale=0.20, shift={(28.75, 8)}]
\draw[dashed] (0, 3) -- (0,1);
\draw[dashed] (-3, 0) -- (-1, 0);
\draw[dashed] (0, -1) -- (0, -3);
\draw[dashed] (3, 0) -- (1, 0);
\end{scope}
\end{tikzpicture}
\caption{Example of a signal program for the junction in Example~\ref{ex:phasesandprogram}. In this example the signal program is $\mathcal T = \{ (p_1, 25), (p_1', 30), (p_2, 55), (p_2', 60)\}$.}
\label{fig:signaltiming}
\end{figure}
Consider the junction in Fig.~\ref{fig:phasesexamplejunc} with the incoming lanes numbered as in the figure. In this case the drivers turning left have to resolve the collision avoidance themselves. The phase matrix is
$$P = \begin{bmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \end{bmatrix} \, .$$
An example of a signal program is shown in Fig.~\ref{fig:signaltiming}. Here the program is $\mathcal T = \{ (p_1, 25), (p_1', 30), (p_2, 55), (p_2', 60)\}$, which means that both phases are activated for $25$ seconds each, and the clearance phases are activated for $5$ seconds each.
\end{example}
\medskip
Moreover, we let
$$T^{(j)} = \max\{t_\text{end} \mid (p, t_\text{end}) \in {\mathcal T}^{(j)} \}$$
denote the time when the signal program for junction $j$ ends, and hence a new signal timing program has to be determined.
\section{Feedback Controllers}\label{sec:controllers}
In this section, we present three different traffic signal controllers that all determine the signal program. The first two are discretizations of the GPA controller: the first one makes sure that all the clearance phases are activated during one cycle, while the second one only activates the clearance phases whose corresponding phase has been activated. The third controller is the MaxPressure controller.
All three controllers are feedback-based, i.e., when one signal program has reached its end, the current queue lengths are used to determine the upcoming signal program. Moreover, the GPA controllers are fully distributed, in the sense that to determine the signal program in one junction, the controller only needs information about the queue lengths on the incoming lanes of that junction. The MaxPressure controller is also distributed in the sense that it does not require network-wide information, but it does require queue length information from the neighboring junctions as well.
For all of the controllers presented in this section, we assume, for simplicity of the presentation, that after a phase has been activated, a clearance phase has to be activated for a fixed amount of time $T_w > 0$ that is independent of which phase has just been activated.
\subsection{GPA with Full Clearance Cycles} \label{sec:GPAfull}
For this controller, we assume that all the clearance phases have to be activated for each cycle. When $t = T^{(j)}$, a new signal program is computed by solving the following convex optimization problem:
\begin{equation}\label{eq:gpa}
\begin{aligned}
\optmax{\begin{matrix} \hspace{0.2em} \nu\in\mathbb{R}_+^{n_{p_j}} \\ w\in\mathbb{R}_+ \end{matrix}} & \sum_{l \in \mathcal L^{(j)}} x_l(t) \log\left( (P^T\nu)_l \right) + \kappa \log(w) \, , \\
\text{subject to}\quad & \sum_{1 \leq i \leq n_{p_j}} \nu_i + w = 1 \,, \\
& w \geq \bar{w} \, .
\end{aligned}
\end{equation}
In the optimization problem above, $\kappa > 0$ and $\bar{w} \geq 0$ are tuning parameters for the controller, and their interpretation will be discussed later.
The vector $\nu$ in the solution of the optimization problem above determines the fraction of the cycle time that each phase should be activated. The variable $w$ tells how large a fraction of the cycle time should be allocated to the clearance phases. Observe that as long as the queue lengths are finite, $w$ will be strictly greater than zero. Since we assume that each clearance phase has to be activated for a fixed amount of time, $T_w > 0$, the total cycle length $T_\text{cyc}$ for the upcoming cycle can be computed by
$$T_\text{cyc} = \frac{n_{p_j} T_w}{w} \, .$$
With the knowledge of the full-cycle length, the signal program for the upcoming cycle can be computed according to Algorithm~\ref{algo:gpafull}.
Although the optimization problem can be solved in real-time using convex solvers, it can also be solved analytically in special cases. One such case is when the phases are orthogonal, i.e., every incoming lane belongs to only one phase. If the phases are orthogonal, then $P^T \mathbbm{1} = \mathbbm{1}$. In the case of orthogonal phases and $\bar{w} = 0$, the solution to the optimization problem in~\eqref{eq:gpa} is given by
\begin{equation} \label{eq:gpaorthogonal}
\begin{aligned}
\nu_i (x(t)) &= \frac{\sum_{l \in \mathcal L^{(j)}}P_{il}x_l(t)}{\kappa+\sum_{l \in \mathcal L^{(j)}} x_l(t)}\,,\qquad i=1,\ldots,n_{p_j}\,, \\
w(x(t)) &= \frac{\kappa}{\kappa+\sum_{l \in \mathcal L^{(j)}} x_l(t)} \,.
\end{aligned}
\end{equation}
From the expression of $w$ above, a direct expression for the total cycle length can be obtained
\begin{equation*}
\displaystyle T_\text{cyc} = T_w n_{p_j} +\frac{T_w n_{p_j}}{\kappa}{\sum_{l \in \mathcal L^{(j)}} x_l(t)} \, .\label{eq:cycletime}
\end{equation*}
From the expressions above we can observe a few things. First, we see that the fraction of the cycle during which each phase is activated is proportional to the queue lengths in that phase, which explains why we call this control strategy generalized proportional allocation. Moreover, we get an interpretation of the tuning parameter $\kappa$: it determines how the cycle length $T_\text{cyc}$ scales with the current queue lengths. If $\kappa$ is small, even small queue lengths will cause long cycles, while if $\kappa$ is large the cycles will be short even for large queues. Hence, a too small $\kappa$ may give too long cycles, which can result in lanes receiving more green light than needed, so that the controller ends up giving green light to empty lanes while vehicles in other lanes are waiting for service. On the other hand, a too large $\kappa$ may make the cycle lengths so short that the fraction of the cycle during which each phase is activated becomes too short for the drivers to react to.
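As an illustration, the closed-form solution~\eqref{eq:gpaorthogonal} and the resulting cycle length can be computed in a few lines. The Python sketch below is a minimal transcription assuming orthogonal phases, $\bar w = 0$, and NumPy arrays; the variable names are ours.
\begin{verbatim}
import numpy as np

def gpa_orthogonal(P, x, kappa, T_w):
    """Closed-form GPA solution for orthogonal phases (bar_w = 0).
    P: (n_p, n_l) phase matrix with P^T 1 = 1,
    x: queue lengths, kappa: tuning parameter,
    T_w: fixed clearance-phase duration."""
    total = x.sum()
    nu = P @ x / (kappa + total)   # fraction of the cycle per phase
    w = kappa / (kappa + total)    # fraction left for clearance phases
    T_cyc = P.shape[0] * T_w / w   # total cycle length
    return nu, w, T_cyc

# Junction of Example 1 with queues x = (4, 1, 2, 3):
P = np.array([[1, 0, 1, 0], [0, 1, 0, 1]])
nu, w, T_cyc = gpa_orthogonal(P, np.array([4., 1., 2., 3.]),
                              kappa=10.0, T_w=5.0)
# nu = [0.3, 0.2], w = 0.5, T_cyc = 20.0
\end{verbatim}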
\begin{figure}[!t]
\let\@latex@error\@gobble
\begin{algorithm}[H]
\caption{GPA with Full Clearance Cycles}\label{algo:gpafull}
\DontPrintSemicolon
\KwData{Current time $t$, local queue lengths $x^{(j)}(t)$, phase matrix $P^{(j)}$, clearance time $T_w$, tuning parameters $\kappa, \bar w$}
\KwResult{Signal program $\mathcal T^{(j)}$}
$\mathcal T^{(j)} \leftarrow \emptyset$ \;
$n_{p_j} \leftarrow $ Number of rows in $P^{(j)}$ \;
$(\nu, w)$ $\leftarrow$ Solution to~\eqref{eq:gpa} given $x^{(j)}(t), P^{(j)}, \kappa, \bar w$\;
$T_\text{cyc} \leftarrow n_{p_j} \cdot T_w / w$ \;
$t_\text{end} \leftarrow t$ \;
\For{$i\leftarrow 1$ \KwTo $n_{p_j}$}{
$t_\text{end} \leftarrow t_\text{end} + \nu_i \cdot T_\text{cyc}$ \;
$\mathcal T^{(j)} \leftarrow \mathcal T^{(j)} + (p_i, t_\text{end})$ \Comment*[r]{Add phase $p_i$}
$t_\text{end} \leftarrow t_\text{end} + T_w$ \;
$\mathcal T^{(j)} \leftarrow \mathcal T^{(j)} + (p'_i, t_\text{end})$ \Comment*[r]{Add clearance phase $p_i'$}
}
\end{algorithm}
\end{figure}
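For completeness, a direct Python transcription of Algorithm~\ref{algo:gpafull} could look as follows; \texttt{solve\_gpa} is a placeholder for either the closed-form expression above (orthogonal phases) or a convex solver, and the phase labels are ours.
\begin{verbatim}
def gpa_full_program(t, x, P, T_w, kappa, w_bar, solve_gpa):
    """Signal program with full clearance cycles (Algorithm 1).
    `solve_gpa` solves problem (eq:gpa) and returns (nu, w)."""
    nu, w = solve_gpa(x, P, kappa, w_bar)
    n_p = P.shape[0]
    T_cyc = n_p * T_w / w
    program, t_end = [], t
    for i in range(n_p):
        t_end += nu[i] * T_cyc
        program.append((("phase", i), t_end))       # add p_i
        t_end += T_w
        program.append((("clearance", i), t_end))   # add p_i'
    return program
\end{verbatim}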
\begin{remark}
In~\cite{nilsson2017generalized} we showed that the averaged continuous time GPA controller can stabilize the network, and hence keep the queue lengths bounded. Moreover, this averaged version is throughput-optimal, which means that no controller can handle a larger exogenous inflow to the network than this controller.
\end{remark}
However, when the controller is discretized, the following example shows that an upper bound on the cycle length, i.e., $\bar{w} > 0$, is required to guarantee stability even for an isolated junction.
\begin{example}\label{ex:unstableunboundedcycle}
Consider a junction with two incoming lanes with unit flow capacity, both having their own phase, and let the exogenous inflows $\lambda_1 = \lambda_2 = \lambda$, $T_w = 1$, $\bar w = 0$, $x_1(0) = A > 0$, and $x_2(0) = 0$. The control signals and the cycle time for the first iteration are then given by
\begin{align*}
u_1(x(0)) &= \frac{A}{A+\kappa} \, , \\
u_2(x(0)) &= 0 \, , \\
T(x(0)) &= \frac{A+\kappa}{\kappa}.
\end{align*}
Observe that the cycle time $T(x(0))$ is strictly increasing with $A$. After one full service cycle, i.e., at $t_1 = T(x(0))$, the queue lengths are
\begin{align*}
x_1(t_1) &= A + T(x(0)) \left(\lambda - \frac{A}{A+\kappa} \right)= \overbrace{A + \lambda \frac{A+\kappa}{\kappa} - \frac{A}{\kappa}}^{f(A)} \, , \\
x_2(t_1) &= T(x(0)) \lambda = \lambda \left( \frac{A+ \kappa}{\kappa} \right).
\end{align*}
If $x_1(t_1) = 0$, then due to symmetry, the analysis of the system can be repeated in the same way with a new initial condition. To make sure that one queue always gets empty during the service cycles, it must hold that $f(A) \leq 0$. Moreover, to make sure that the other queue grows, it must also hold that $x_2(t_1) > A$, which can be equivalently expressed as
\begin{align*}
A \kappa + \lambda(A + \kappa) - A &\leq 0 \, , \\
A \kappa - \lambda(A+\kappa) &< 0 \, .
\end{align*}
The choice of $\lambda = \kappa = 0.1$ and $A= 1$ is one set of parameters satisfying the constraints above, and will hence make the queue lengths and cycle times grow unboundedly. How the queue lengths and cycle times evolve in this case is shown in Fig.~\ref{fig:unstableunboundedcycle}.
\end{example}
\begin{figure}
\centering
\input{tikzpictures/exampleblowup.tikz}
\caption{How the traffic volumes evolve over time, together with the cycle times, for the system in Example~\ref{ex:unstableunboundedcycle}. We can observe that the cycle length increases for each cycle.}
\label{fig:unstableunboundedcycle}
\end{figure}
\medskip
Imposing an upper bound on the cycle length, and hence a lower bound on $w$, will shrink the throughput region. An upper bound on the cycle length may occur naturally, due to the fact that the sensors cover a limited area and hence the measurements will saturate. However, we will later observe in the simulations that $\bar{w} > 0$ may improve the performance of the controller when it is simulated in a realistic scenario, even when saturation of the queue length measurements is possible.
\subsection{GPA with Shorted Cycles}\label{sec:GPAshorted}
One possible drawback of the controller in Section~\ref{sec:GPAfull} is that it has to activate all the clearance phases in one cycle. This property implies that if the junction is empty when the signal program is computed, it will take $n_{p_j} T_w$ seconds until a new signal program is computed. Motivated by this, we also present a version of the GPA where a clearance phase is only activated if its corresponding phase has been activated. If we let $n_{p_j}'$ denote the number of phases that will be activated during the upcoming cycle, the total cycle time is given by
$$T_\text{cyc} = \frac{n_{p_j}' T_w}{w} \, .$$
How to compute the signal program in this case, is shown in Algorithm~\ref{algo:gpashorted}.
\begin{figure}[!t]
\let\@latex@error\@gobble
\begin{algorithm}[H]
\caption{GPA with Shorted Cycles}\label{algo:gpashorted}
\DontPrintSemicolon
\KwData{Current time $t$, local queue lengths $x^{(j)}(t)$, phase matrix $P^{(j)}$, clearance time $T_w$, tuning parameters $\kappa, \bar w$}
\KwResult{Signal program $\mathcal T^{(j)}$}
$\mathcal T^{(j)} \leftarrow \emptyset$ \;
$n_{p_j} \leftarrow $ Number of rows in $P^{(j)}$ \;
$(\nu, w)$ $\leftarrow$ Solution to~\eqref{eq:gpa} given $x^{(j)}(t), P^{(j)}, \kappa, \bar w$\;
\Comment*[l]{Compute the number of phases to be activated}
$n_{p_j}' \leftarrow 0$ \;
\For{$i\leftarrow 1$ \KwTo $n_{p_j}$}{
\If{$\nu_i > 0$}{
$n_{p_j} ' \leftarrow n_{p_j}' + 1$ \;
}
}
\uIf{$n_{p_j} ' > 0$}{
\Comment*[l]{If vehicles are present on some phases, activate those}
$T_\text{cyc} \leftarrow n'_{p_j} \cdot T_w / w$ \;
$t_\text{end} \leftarrow t$ \;
\For{$i\leftarrow 1$ \KwTo $n_{p_j}$}{
\If{$\nu_i > 0$} {
$t_\text{end} \leftarrow t_\text{end} + \nu_i \cdot T_\text{cyc}$ \;
\Comment*[l]{Add phase $p_i$}
$\mathcal T^{(j)} \leftarrow \mathcal T^{(j)} + (p_i, t_\text{end})$ \;
$t_\text{end} \leftarrow t_\text{end} + T_w$ \;
\Comment*[l]{Add clearance phase $p'_i$ }
$\mathcal T^{(j)} \leftarrow \mathcal T^{(j)} + (p'_i, t_\text{end})$
}}
}
\Else{
\Comment*[l]{If no vehicles are present, hold a clearance phase for one time unit}
$\mathcal T^{(j)} \leftarrow (p'_1, t+1)$
}
\end{algorithm}
\end{figure}
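A sketch of this shorted-cycle variant differs from the previous one only in that phases with $\nu_i = 0$ are skipped, and in the fallback when the junction is empty; again, \texttt{solve\_gpa} is a placeholder, and the labels are ours.
\begin{verbatim}
def gpa_shorted_program(t, x, P, T_w, kappa, w_bar, solve_gpa):
    """Signal program with shorted cycles (Algorithm 2)."""
    nu, w = solve_gpa(x, P, kappa, w_bar)
    active = [i for i in range(P.shape[0]) if nu[i] > 0]
    if not active:
        # No vehicles present: hold a clearance phase one time unit.
        return [(("clearance", 0), t + 1)]
    T_cyc = len(active) * T_w / w
    program, t_end = [], t
    for i in active:
        t_end += nu[i] * T_cyc
        program.append((("phase", i), t_end))
        t_end += T_w
        program.append((("clearance", i), t_end))
    return program
\end{verbatim}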
\subsection{MaxPressure}
As mentioned in the introduction, the MaxPressure controller is another throughput optimal feedback controller for traffic signals. The controller computes, for each phase, the difference between the queue lengths and their downstream queue lengths to determine the phase's pressure. It then activates the phase with the most pressure for a fixed time interval. To compute the pressure, the controller needs information about where the outflow from every queue will proceed. To model this, we introduce the routing matrix $R \in \mathbb{R}_+^{\mathcal L \times \mathcal L}$, whose element $R_{ij}$ gives the fraction of vehicles that will proceed from lane $i$ in the current junction to lane $j$ in a downstream junction.
With knowledge of the routing matrix, and under the assumption that the flow rates are the same for all phases, the pressure $w_i$ for each phase $p_i \in \mathcal P^{(j)}$ can be computed as
$$w_i = \sum_{l \in p_i} \biggl( x_l(t) - \sum_k R_{lk} x_k(t) \biggr) \, .$$
The phase that should be activated is then any phase in the set $ \argmax_i w_i \,.$
Apart from the routing matrix, the MaxPressure controller has one tuning parameter, the phase duration $d > 0$. That parameter tells how long a phase should be activated, and hence how long it should take until the pressures are resampled, and a new phase activation decision is made.
How to compute the signal program with the MaxPressure controller is shown in Algorithm~\ref{algo:maxpressure}.
\begin{figure}[!t]
\let\@latex@error\@gobble
\begin{algorithm}[H] \DontPrintSemicolon
\caption{MaxPressure}\label{algo:maxpressure}
\KwData{Current time $t$, local queue lengths $x(t)$, phase matrix $P^{(j)}$, routing matrix $R$, phase duration $d$}
\KwResult{Signal program $\mathcal T^{(j)}$}
$\mathcal T^{(j)} \leftarrow \emptyset$ \;
$n_{p_j} \leftarrow $ Number of rows in $P^{(j)}$ \;
\For{$i\leftarrow 1$ \KwTo $n_{p_j} $}{
\For{$l \in \mathcal L^{(j)}$} {
\If{$l \in p_i^{(j)}$} {
$w_i \leftarrow w_i + x_l(t) - \sum_{k} R_{lk} x_k(t)$
}
}
}
$i \leftarrow \argmax_i w_i$ \;
\Comment*[l]{Add phase $p_i$}
$\mathcal T^{(j)} \leftarrow \mathcal T^{(j)} + (p_i, t + d)$ \;
\Comment*[l]{Add clearance phase $p'_i$}
$\mathcal T^{(j)} \leftarrow \mathcal T^{(j)} + (p'_i, t+ d + T_w)$ \;
\end{algorithm}
\end{figure}
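In the same style, a minimal Python sketch of Algorithm~\ref{algo:maxpressure} is given below; it assumes the routing matrix $R$ is available as a NumPy array indexed by lanes.
\begin{verbatim}
import numpy as np

def max_pressure_program(t, x, P, R, d, T_w):
    """MaxPressure signal program (Algorithm 3).
    R[l, k] is the fraction of vehicles proceeding from
    lane l to downstream lane k."""
    # Pressure of each phase: own queues minus expected
    # downstream queues, summed over the lanes in the phase.
    pressure = P @ (x - R @ x)
    i = int(np.argmax(pressure))
    return [(("phase", i), t + d),
            (("clearance", i), t + d + T_w)]
\end{verbatim}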
\section{Comparison Between GPA and MaxPressure} \label{sec:comparision}
\begin{figure}
\centering
\input{tikzpictures/manhattangrid.tikz}
\caption{The Manhattan-like network used in the comparison between GPA and MaxPressure. }
\label{fig:network}
\end{figure}
\begin{figure}
\centering
\begin{tabular}{cc}
\input{tikzpictures/2by2junction.tikz}
&
\input{tikzpictures/2by3junction.tikz} \\
2 by 2 junction & 2 by 3 junction
\\
& \\
\input{tikzpictures/3by2junction.tikz}
&
\input{tikzpictures/3by3junction.tikz} \\
3 by 2 junction & 3 by 3 junction
\end{tabular}
\caption{The four different types of junctions present in the Manhattan grid, together with their phases.} \label{fig:junction}
\end{figure}
\subsection{Simulation setting}
To compare the proposed controller and the MaxPressure controller, we simulate both controllers on an artificial Manhattan-like grid with artificial demand.
The simulator we are using is the open source micro-simulator SUMO~\cite{SUMO2012}, which simulates every single vehicle's behavior in the traffic network.
A schematic drawing of the network is shown in Fig.~\ref{fig:network}. In a setting like this, we can experiment with the turning ratios, and provide the MaxPressure controller with both correct and incorrect turning ratios. This allows us to investigate the robustness properties of both controllers.
The Manhattan grid in Fig.~\ref{fig:network} has ten bidirectional north to south streets (indexed A to J) and ten bidirectional east to west streets (indexed 1 to 10). All streets with an odd number or indexed by letter A, C, E, G or I consist of one lane in each direction, while the others consist of two lanes in each direction. The speed limit on each lane is 50 km/h. The distance between each junction is three hundred meters. Fifty meters before each junction, every street has an additional lane, reserved for vehicles that want to turn left. Due to the varying number of lanes, four different junction topologies exist, all shown in Fig.~\ref{fig:junction}, together with the set of possible phases. Each junction is equipped with sensors on the incoming lanes that can measure the number of vehicles queuing up to fifty meters from the junction. The sensors measure the queue lengths by the number of stopped vehicles.
Since the scenario is artificial, we can generate demand with prescribed turning ratios and hence let the MaxPressure controller run in an ideal setting. For the demand generation, we assume that at each junction a vehicle will turn left with probability $0.2$, go straight with probability $0.6$, and turn right with probability $0.2$. We assume that all vehicles depart from lanes connected to the boundary of the network, and all vehicles will also end their trips when they have reached the boundary of the network. In other words, no vehicles will depart or arrive inside the grid. We will study the controllers' performance for three different demands, where the demand is determined by the probability that a vehicle departs from each boundary lane each second. We denote this probability $\delta$, and the probabilities for the three different demands are $\delta = 0.05$, $\delta = 0.1$ and $\delta = 0.15$. We generate vehicles for $3600$ seconds and then simulate until all vehicles have left the network.
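A minimal sketch of this demand-generation rule, assuming a given list of boundary lanes, is shown below; the actual scenario files are produced with SUMO's own tooling, so the snippet only illustrates the stochastic model.
\begin{verbatim}
import random

def generate_departures(boundary_lanes, delta, horizon=3600, seed=0):
    """Each boundary lane spawns a vehicle with probability
    delta in every second of the generation horizon."""
    rng = random.Random(seed)
    departures = []  # list of (time, lane) pairs
    for t in range(horizon):
        for lane in boundary_lanes:
            if rng.random() < delta:
                departures.append((t, lane))
    return departures

def draw_turn(rng):
    """Turn choice at a junction: left 0.2, straight 0.6, right 0.2."""
    return rng.choices(["left", "straight", "right"],
                       weights=[0.2, 0.6, 0.2])[0]
\end{verbatim}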
We also compare the results for the GPA controller and the MaxPressure controller with a standard fixed time (FT) controller and a proportional fair (PF) controller, i.e., the GPA controller with full clearance cycles, but with $\kappa =0$ and a prescribed fixed cycle length. For the fixed time controller, the phases which contain a straight movement are activated for $30$ seconds, and phases only containing left or right turn movements are activated for $15$ seconds. The clearance time for each phase is still set to $5$ seconds. This means that the cycle length for each of the four types of junctions will be $110$ seconds. This is also the fixed cycle time we use for the proportional fair controller.
\subsection{GPA Results}
Since the phases in this scenario are all orthogonal, the expressions in~\eqref{eq:gpaorthogonal} can be used to solve the optimization problem in~\eqref{eq:gpa}. The tuning parameter $\bar{w}$ is set to $\bar{w} = 0$ for all simulations. In Table~\ref{tab:gpamanhattan} we show how the total travel time varies for the GPA controller with shorted cycles for different values of $\kappa$. For the demand $\delta = 0.15$ and $\kappa =1$ a gridlock situation occurs, probably due to the fact that vehicles back-spill into upstream junctions. We can see that $\kappa =10$ seems to be the best choice for $\delta = 0.10$ and $\delta = 0.15$, while a higher $\kappa$ slightly improves the total travel time for the lowest demand investigated. Letting $\kappa = 10$ has been shown to be reasonable for other demand scenarios in the same network setting, as observed in~\cite{nilsson2018}. How the total queue lengths vary with time for $\kappa =5$ and $\kappa = 10$ is shown in Fig.~\ref{fig:gpamanhattan}.
\begin{table}
\centering
\caption{GPA with Shorted Cycles - Manhattan Scenario}
\label{tab:gpamanhattan}
\begin{tabular}{rcc}
$\kappa$ & $\delta$ & Total Travel Time [h] \\ \hline \hline
$1$ & $0.05$ & $1398$ \\
$5$ & $0.05$ & \phantom{0}$715$ \\
$10$ & $0.05$ & \phantom{0}$699$ \\
$15$ & $0.05$ & \phantom{0}$696$ \\
$20$ & $0.05$ & \phantom{0}$690$ \\
$1$ & $0.10$ & $7636$ \\
$5$ & $0.10$ & $1898$ \\
$10$ & $0.10$ & $1992$ \\
$15$ & $0.10$ & $2263$ \\
$20$ & $0.10$ & $2495$ \\
$1$ & $0.15$ & $+\infty$ \\
$5$ & $0.15$ & $5134$ \\
$10$ & $0.15$ & $4498$ \\
$15$ & $0.15$ & $5140$ \\
$20$ & $0.15$ & $6050$ \\ \hline
\end{tabular}
\end{table}
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[ymode=log, width=8cm, height=6cm, ylabel={Total Queue Length [m] }, xlabel={Time [s]}, xmax=6000, legend style={at={(0.5,-0.25)},anchor=north}]
\addplot[mark=none, color=mycolor1, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_pf2_k10_l0.05.csv};
\addplot[mark=none, color=mycolor2, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_pf2_k10_l0.10.csv};
\addplot[mark=none, color=mycolor3, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_pf2_k10_l0.15.csv};
\addplot[mark=none, color=mycolor1, dashed, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_pf2_k5_l0.05.csv};
\addplot[mark=none, color=mycolor2, dashed, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_pf2_k5_l0.10.csv};
\addplot[mark=none, color=mycolor3, dashed, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_pf2_k5_l0.15.csv};
\legend{GPA $\kappa=10 \, \delta = 0.05$, GPA $\kappa=10 \, \delta = 0.10$, GPA $\kappa=10 \, \delta = 0.15$ , GPA $\kappa=5 \, \delta = 0.05$, GPA $\kappa=5 \, \delta = 0.10$, GPA $\kappa=5 \, \delta = 0.15$ }
\end{axis}
\end{tikzpicture}
\caption{How the queue length varies with time when the GPA with shorted cycles is used in the Manhattan grid. The GPA is tested with two different values $\kappa=5,10$ for the three demand scenarios $\delta = 0.05, 0.10, 0.15$. To improve the readability of the results, the queue lengths are averaged over $300$ second intervals.}
\label{fig:gpamanhattan}
\end{figure}
\subsection{MaxPressure Results}
The MaxPressure controller decides its control action not only based on the queue lengths on the incoming lanes, but also on the downstream lanes. It is not always clear in which downstream lane a vehicle will end up after leaving the junction. If a vehicle can choose between several lanes that are all valid for its path, the vehicle's lane choice will be determined during the simulation, and depends upon how many other vehicles are occupying the possible lanes. Because of this, we assume that if a vehicle can choose between several lanes, it will try to join the shortest one. To exemplify how the turning ratios are estimated in those situations, assume that the overall probability that a vehicle is turning right is $0.2$, and going straight is $0.6$. If a vehicle going straight can choose between lanes $l_1$ and $l_2$, but $l_2$ is also used by vehicles turning right, the probability that a vehicle going straight will queue up in lane $l_1$ is assumed to be $0.4$, and the probability that it will queue up in lane $l_2$ is estimated to be $0.2$.
To also investigate the MaxPressure controller's robustness with respect to the routing information, we perform simulations both when the controller has the correct information about the turning probabilities, i.e., that a vehicle will turn right with probability $0.2$, continue straight with probability $0.6$ and turn left with probability $0.2$, and when it has incorrect information. For the simulations where the MaxPressure controller has the wrong turning information, the controller instead has the information that the vehicle will turn right with probability $0.6$, proceed straight with probability $0.3$, and turn left with probability $0.1$. In the simulations, we consider three different phase durations, $d=10$ seconds, $d=20$ seconds and $d=30$ seconds.
How the total queue lengths vary over time for the different demands is shown in Fig.~\ref{fig:mpmanhattand0.05}, Fig.~\ref{fig:mpmanhattand0.10}, and Fig.~\ref{fig:mpmanhattand0.15}. The total travel times, both when the MaxPressure controller operates with correct and with incorrect turning ratios, are shown in Table~\ref{tab:mpmanhattan}. From these results, we can conclude that the shortest phase duration, i.e., $d = 10$, is the most efficient for all demands. This probably has to do with the fact that for a longer phase duration, the activation time becomes larger than the time it takes to empty the measurable part of the queue. Another interesting observation is that if the MaxPressure controller has wrong information about the turning ratios, its performance does not decrease significantly.
\begin{table}
\centering
\caption{MaxPressure - Manhattan Scenario. TTT denotes total travel time and TR the turning ratios.}
\label{tab:mpmanhattan}
\begin{tabular}{cccc}
$d$ & $\delta$ & TTT correct TR [h] & TTT incorrect TR [h] \\ \hline \hline
$10$ & $0.05$ & 858 & 856\\
$20$ & $0.05$ & 1 079 & 1 102 \\
$30$ & $0.05$ & 1 172 & 1 193 \\
$10$ & $0.10$ & 1 865 & 1 864 \\
$20$ & $0.10$ & 2 254 & 2 312 \\
$30$ & $0.10$ & 2 690 & 2 718 \\
$10$ & $0.15$ & 3 511 & 3 488 \\
$20$ & $0.15$ & 3 992 & 4 102 \\
$30$ & $0.15$ & 5 579 & 5 590 \\ \hline
\end{tabular}
\end{table}
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[width=8cm, height=6cm, ylabel={Total Queue Length [m] }, xlabel={Time [s]}, xmax=5000, legend pos=north west, scaled y ticks = false,
y tick label style={/pgf/number format/fixed,
/pgf/number format/1000 sep = \thinspace
}, legend style={at={(0.5,-0.25)},anchor=north}]
\addplot[mark=none, color=mycolor1, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_bp_d10_l0.05.csv};
\addplot[mark=none, color=mycolor2, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_bp_d20_l0.05.csv};
\addplot[mark=none, color=mycolor3, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_bp_d30_l0.05.csv};
\addplot[mark=none, color=mycolor1, dashed, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_bpw_d10_l0.05.csv};
\addplot[mark=none, color=mycolor2, dashed, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_bpw_d20_l0.05.csv};
\addplot[mark=none, color=mycolor3, dashed, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_bpw_d30_l0.05.csv};
\legend{MP $d =10$, MP $d=20$, MP $d=30$}
\end{axis}
\end{tikzpicture}
\caption{The total queue length over time in the Manhattan grid with the MaxPressure (MP) controller with correct turning ratios (solid) and incorrect turning ratios (dashed). The demand is $\delta = 0.05$. To improve the readability of the results, the queue lengths are averaged over $300$ second intervals.}
\label{fig:mpmanhattand0.05}
\end{figure}
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[width=8cm, height=6cm, ylabel={Total Queue Length [m] }, xlabel={Time [s]}, xmax=5000, legend pos=north west, scaled y ticks = false,
y tick label style={/pgf/number format/fixed,
/pgf/number format/1000 sep = \thinspace
}, legend style={at={(0.5,-0.25)},anchor=north}]
\addplot[mark=none, color=mycolor1, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_bp_d10_l0.10.csv};
\addplot[mark=none, color=mycolor2, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_bp_d20_l0.10.csv};
\addplot[mark=none, color=mycolor3, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_bp_d30_l0.10.csv};
\addplot[mark=none, color=mycolor1, dashed, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_bpw_d10_l0.10.csv};
\addplot[mark=none, color=mycolor2, dashed, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_bpw_d20_l0.10.csv};
\addplot[mark=none, color=mycolor3, dashed, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_bpw_d30_l0.10.csv};
\legend{MP $d =10$, MP $d=20$, MP $d=30$}
\end{axis}
\end{tikzpicture}
\caption{The total queue length over time in the Manhattan grid with the MaxPressure (MP) controller with correct turning ratios (solid) and incorrect turning ratios (dashed). The demand is $\delta = 0.10$. To improve the readability of the results, the queue lengths are averaged over $300$ second intervals.}
\label{fig:mpmanhattand0.10}
\end{figure}
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[width=8cm, height=6cm, ylabel={Total Queue Length [m] }, xlabel={Time [s]}, xmax=6000, legend pos=north west, scaled y ticks = false,
y tick label style={/pgf/number format/fixed,
/pgf/number format/1000 sep = \thinspace
}, legend style={at={(0.5,-0.25)},anchor=north}]
\addplot[mark=none, color=mycolor1, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_bp_d10_l0.15.csv};
\addplot[mark=none, color=mycolor2, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_bp_d20_l0.15.csv};
\addplot[mark=none, color=mycolor3, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_bp_d30_l0.15.csv};
\addplot[mark=none, color=mycolor1, dashed, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_bpw_d10_l0.15.csv};
\addplot[mark=none, color=mycolor2, dashed, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_bpw_d20_l0.15.csv};
\addplot[mark=none, color=mycolor3, dashed, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_bpw_d30_l0.15.csv};
\legend{MP $d =10$, MP $d=20$, MP $d=30$}
\end{axis}
\end{tikzpicture}
\caption{The total queue length over time in the Manhattan grid with the MaxPressure (MP) controller with correct turning ratios (solid) and incorrect turning ratios (dashed). The demand is $\delta = 0.15$. To improve the readability of the results, the queue lengths are averaged over $300$ second intervals.}
\label{fig:mpmanhattand0.15}
\end{figure}
\subsection{Summary of the Comparison}
To better observe the difference between the GPA and MaxPressure, we have plotted the total queue length for the GPA controller with $\kappa = 5$ and $\kappa = 10$, and for the best MaxPressure configuration, $d = 10$. The results are shown in Fig.~\ref{fig:comparisonl0.05}, Fig.~\ref{fig:comparisonl0.10} and Fig.~\ref{fig:comparisonl0.15}. In the figures we have also included, for reference, the total queue lengths for the fixed time controller and the proportional fair controller. The total travel times for those controllers are given in Table~\ref{tab:fixedmanhattan}. When the demand is $\delta = 0.15$, a gridlock situation occurs with the proportional fair controller, just as happened with the GPA controller with $\kappa = 1$. From the simulations, we can conclude that, for this scenario, the MaxPressure controller performs better than the GPA controller during high demands, while the GPA performs better during low demands. One explanation for this could be that during low demands, adapting the cycle length is critical, while during high demands, when almost all the sensors are covered, it is more important to keep the queues balanced between the current and downstream lanes. The proportional fair controller, which does not adapt its cycle length, always performs the worst, and in most of the cases the fixed time controller performs second worst. It is only for the demand $\delta = 0.15$, and during the draining phase, that the fixed time controller performs better than the GPA controller.
\begin{table}
\centering
\caption{Fixed Time and Proportional Fair Control - Manhattan Scenario}
\label{tab:fixedmanhattan}
\begin{tabular}{ccc}
Controller & $\delta$ & Total Travel Time [h] \\ \hline \hline
FT & $0.05$ & $1201$ \\
FT & $0.10$ & $2555$ \\
FT & $0.15$ & $4642$ \\
PF & $0.05$ & $1694$ \\
PF & $0.10$ & $4165$ \\
PF & $0.15$ & $+\infty$ \\ \hline
\end{tabular}
\end{table}
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[ymode=log, width=8cm, height=6cm, ylabel={Total Queue Length [m] }, xlabel={Time [s]}, xmax=5000, legend pos=north west, scaled y ticks = false,
y tick label style={/pgf/number format/fixed,
/pgf/number format/1000 sep = \thinspace
}, legend style={at={(0.5,-0.25)},anchor=north}, legend columns=2]
\addplot[mark=none, color=mycolor1, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_pf2_k5_l0.05.csv};
\addplot[mark=none, color=mycolor2, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_pf2_k10_l0.05.csv};
\addplot[mark=none, color=mycolor3, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_bp_d10_l0.05.csv};
\addplot[mark=none, color=mycolor4, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_fixed_l0.05.csv};
\addplot[mark=none, color=black, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_pf_fixed_l0.05.csv};
\legend{GPA $\kappa =5$, GPA $\kappa =10$, MP $d = 10$, Fixed Time, PF}
\end{axis}
\end{tikzpicture}
\caption{A comparison between different control strategies for the Manhattan grid with the demand $\delta = 0.05$. To improve the readability of the results, the queue lengths are averaged over $300$ second intervals.}
\label{fig:comparisonl0.05}
\end{figure}
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[ymode=log, width=8cm, height=6cm, ylabel={Total Queue Length [m] }, xlabel={Time [s]}, xmax=5500, legend pos=north west, scaled y ticks = false,
y tick label style={/pgf/number format/fixed,
/pgf/number format/1000 sep = \thinspace
}, legend style={at={(0.5,-0.25)},anchor=north}, legend columns=2]
\addplot[mark=none, color=mycolor1, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_pf2_k5_l0.10.csv};
\addplot[mark=none, color=mycolor2, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_pf2_k10_l0.10.csv};
\addplot[mark=none, color=mycolor3, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_bp_d10_l0.10.csv};
\addplot[mark=none, color=mycolor4, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_fixed_l0.10.csv};
\addplot[mark=none, color=black, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_pf_fixed_l0.10.csv};
\legend{GPA $\kappa =5$, GPA $\kappa =10$, MP $d = 10$, Fixed Time, PF}
\end{axis}
\end{tikzpicture}
\caption{A comparison between different control strategies for the Manhattan grid with the demand $\delta = 0.10$. To improve the readability of the results, the queue-lengths are averaged over $300$ seconds intervals.}
\label{fig:comparisonl0.10}
\end{figure}
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[ymode=log, width=8cm, height=6cm, ylabel={Total Queue Length [m] }, xlabel={Time [s]}, xmax=6500, legend pos=north west, scaled y ticks = false,
y tick label style={/pgf/number format/fixed,
/pgf/number format/1000 sep = \thinspace
}, legend style={at={(0.5,-0.25)},anchor=north}, legend columns=2]
\addplot[mark=none, color=mycolor1, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_pf2_k5_l0.15.csv};
\addplot[mark=none, color=mycolor2, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_pf2_k10_l0.15.csv};
\addplot[mark=none, color=mycolor3, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_bp_d10_l0.15.csv};
\addplot[mark=none, color=mycolor4, thick] table [x index=0, y index=1]{plotdata/bpvspf/queue_fixed_l0.15.csv};
\legend{GPA $\kappa =5$, GPA $\kappa =10$, MP $d = 10$, Fixed Time}
\end{axis}
\end{tikzpicture}
\caption{A comparison between different control strategies for the Manhattan grid with the demand $\delta = 0.15$. Since the proportional fair controller (PF) creates a gridlock, it is not included in the comparison. To improve the readability of the results, the queue-lengths are averaged over $300$ seconds intervals.}
\label{fig:comparisonl0.15}
\end{figure}
\section{LuST scenario} \label{sec:lust}
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{luxnetwork.png}
\caption{The traffic network of Luxembourg city}
\label{fig:lust}
\end{figure}
To test the proposed controller in a realistic scenario, we make use of the Luxembourg SUMO Traffic (LuST) scenario presented in~\cite{codeca2017luxembourg}\footnote{The scenario files are obtained from \url{https://github.com/lcodeca/LuSTScenario/tree/v2.0}}. The scenario models the city center of Luxembourg during a full day, and the authors of~\cite{codeca2017luxembourg} have made several adjustments based on the given population data when creating the scenario, to make it as realistic as possible.
The LuST network is shown in Fig.~\ref{fig:lust}. At each of the $199$ signalized junctions, we have added a lane area detector to every incoming lane. The length of each detector is $100$ meters, or the full length of the lane if it is shorter than $100$ meters. These sensors are added to give the controller real-time information about the queue lengths at each junction.
As input to the system, we use the Dynamic User Assignment demand data. In this data-set, the drivers try to take their shortest path (with respect to time) between their current position and destination. It is assumed that $70$ percent of the vehicles can recompute their shortest path while driving, and will do so every fifth minute. This rerouting possibility is introduced to model the fact that more and more drivers are using online navigation with real-time traffic state information, and will hence get updates about the optimal route choice.
In the LuST scenario, the phases are constructed in a somewhat more complex way and are not always orthogonal. For non-orthogonal phases, it is not always the case that all lanes receive a yellow light when a clearance phase is activated: if a lane receives a green light in the next phase as well, it will receive a green light during the clearance phase too. This property makes it more difficult to shorten the cycle, and for that reason, we choose to implement the controller which activates all the clearance phases in the cycle, i.e., the controller given in Section~\ref{sec:GPAfull}.
As mentioned, the phases in the LuST scenario are not orthogonal in every junction. Hence we have to solve the convex optimization problem in~\eqref{eq:gpa} to compute the phase activation. The computation is done using CVXPY\footnote{\url{https://cvxpy.org}} in Python. Although the controller can be implemented in a distributed manner, the simulations in this paper are performed on a single computer. Despite the size of the network, and the fact that the communication via TraCI between the controller written in Python and SUMO slows down the simulations significantly, the simulations still run about $2.5$ times faster than real-time. This shows that there is no problem with running this controller in a real-time setting.
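For reference, a minimal CVXPY sketch of the per-junction problem~\eqref{eq:gpa} is given below; the function name and interface are ours, and the actual experiment code additionally interfaces with SUMO through TraCI.
\begin{verbatim}
import cvxpy as cp

def solve_gpa(x, P, kappa, w_bar):
    """Solve problem (eq:gpa) for one junction.
    x: queue lengths, P: (n_p, n_l) phase matrix."""
    n_p = P.shape[0]
    nu = cp.Variable(n_p, nonneg=True)
    w = cp.Variable(nonneg=True)
    objective = cp.Maximize(
        cp.sum(cp.multiply(x, cp.log(P.T @ nu))) + kappa * cp.log(w))
    constraints = [cp.sum(nu) + w == 1, w >= w_bar]
    cp.Problem(objective, constraints).solve()
    return nu.value, w.value
\end{verbatim}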
Since the demand is high during the peak hours in the scenario, gridlock situations occur. These kinds of situations are unavoidable since there will be conflicts in the car following model. To make the simulation continue to run, SUMO has a teleporting option that is utilized in the original LuST scenario. The original LuST scenario is configured such that if a vehicle has been stuck for more than $10$ minutes, it will teleport along its route until there is free space. It is therefore important when we evaluate the control strategies that we keep track of the number of teleports, to make sure that a control strategy does not create a significantly larger number of gridlocks compared to the original fixed time controller. In Table~\ref{tab:lust} the number of teleports is reported for each controller. It is also reported how many of those teleports are caused directly by traffic jams, but one should keep in mind that, e.g., a gridlock caused by two vehicles wanting to swap lanes is often itself a consequence of congestion.
The total travel time and the number of teleports for different choices of tuning parameters are shown in Table~\ref{tab:lust}. For the fixed time controller, we keep the standard fixed time plan provided with the LuST scenario. How the queue lengths vary with time for different $\bar w$ is shown in Fig.~\ref{fig:lustkappa5} for $\kappa =5$ and in Fig.~\ref{fig:lustkappa10} for $\kappa = 10$.
From the results, we can see that any controller with $\kappa = 10$ and $\bar{w}$ within the investigated range will improve the traffic situation. However, the controller that yields the overall shortest total travel time is the one with $\kappa =5$ and $\bar{w} = 0.40$. This result suggests that tuning the GPA only with respect to $\kappa$, while keeping $\bar{w} = 0$, may not lead to the best performance with respect to total travel time, although it gives a higher theoretical throughput.
\begin{table}
\caption{Comparison of the different control strategies}
\label{tab:lust}
\centering
\begin{tabular}{lcccc}
& $\kappa$ & $\bar{w}$ & Teleports (jam) & Total Travel Time [h] \\ \hline \hline
GPA & $10$ & $0$ & 76 (6) & 49 791 \\
GPA & $10$ & $0.05$ & 65 (1) & 49 708 \\
GPA & $10$ & $0.10$ & 37 (0) & 49 519 \\
GPA & $10$ & $0.15$ & 57 (19) & 49 408 \\
GPA & $10$ & $0.20$ & 50 (10) & 49 380 \\
GPA & $10$ & $0.25$ & 35 (0) & 49 265\\
GPA & $10$ & $0.30$ & 30 (0) & 48 930\\
GPA & $10$ & $0.35$ & 25 (1) & 48 922\\
GPA & $10$ & $0.40$ & 51 (0) & 48 932 \\
GPA & $10$ & $0.45$ & 49 (5) & 49 076 \\
GPA & $10$ & $0.50$ & 42 (15) & 49 383 \\
GPA & $5$ & $0$ & 668 (76) & 57 249 \\
GPA & $5$ & $0.05$ & 234 (62) & 54 870 \\
GPA & $5$ & $0.10$ & 68 (10) & 52 038 \\
GPA & $5$ & $0.15$ & 47 (9) & 50 696 \\
GPA & $5$ & $0.20$ & 50 (6) & 49 904 \\
GPA & $5$ & $0.25$ & 41 (3) & 49 454 \\
GPA & $5$ & $0.30$ & 23 (0) & 48 964 \\
GPA & $5$ & $0.35$ & 30 (1) & 48 643 \\
GPA & $5$ & $0.40$ & 35 (5) & 48 445 \\
GPA & $5$ & $0.45$ & 39 (1) & 48 503 \\
GPA & $5$ & $0.50$ & 42 (10) & 48 772 \\
Fixed time & -- & -- & 122 (80) & 54 103\\ \hline
\end{tabular}
\end{table}
\begin{figure}
\begin{tikzpicture}
\begin{axis}[ymode=log, width=8cm, height=8cm, ylabel={Total Queue Length [m] }, xlabel={Time}, legend pos=north west, xmin=0, xmax=24.00, xtick={0, 4, 8, 12, 16, 20, 24},
x filter/.code={\pgfmathparse{#1/3600+0}},
xticklabel={
\pgfmathsetmacro\hours{floor(\tick)}%
\pgfmathsetmacro\minutes{(\tick-\hours)*0.6}%
\pgfmathprintnumber{\hours}:\pgfmathprintnumber[fixed, fixed zerofill, skip 0.=true, dec sep={}]{\minutes}%
},
legend columns=2, legend style={at={(0.5,-0.25)},anchor=north}
]
\addplot[mark=none, color=mycolor1] table [x index=0, y index=1]{plotdata/csv/queue_pf_k5_tmin0.0.csv};
\addplot[mark=none, color=mycolor2] table [x index=0, y index=1]{plotdata/csv/queue_pf_k5_tmin0.1.csv};
\addplot[mark=none, color=mycolor3] table [x index=0, y index=1]{plotdata/csv/queue_pf_k5_tmin0.2.csv};
\addplot[mark=none, color=mycolor4] table [x index=0, y index=1]{plotdata/csv/queue_pf_k5_tmin0.30.csv};
\addplot[mark=none] table [x index=0, y index=1]{plotdata/csv/queue_pf_k5_tmin0.40.csv};
\addplot[mark=none, color=black, dotted] table [x index=0, y index=1]{plotdata/csv/queue_static.csv};
\legend{GPA $\bar{w} = 0$, GPA $\bar{w} = 0.1$, GPA $\bar{w} = 0.2$, GPA $\bar{w} = 0.3$, GPA $\bar{w} = 0.4$, Fixed Time }
\end{axis}
\end{tikzpicture}
\caption{How the queue lengths vary with time when the traffic lights in the LuST scenario are controlled with the GPA controller and the standard fixed time controller. For the GPA controller, the parameter $\kappa = 5$ and different values of $\bar{w}$ are tested. To improve the readability of the results, the queue lengths are averaged over $300$ second intervals.}
\label{fig:lustkappa5}
\end{figure}
\begin{figure}
\begin{tikzpicture}
\begin{axis}[ymode=log, width=8cm, height=8cm, ylabel={Total Queue Length [m] }, xlabel={Time}, legend pos=north west, xmin=0, xmax=24.00, xtick={0, 4, 8, 12, 16, 20, 24},
x filter/.code={\pgfmathparse{#1/3600+0}},
xticklabel={
\pgfmathsetmacro\hours{floor(\tick)}%
\pgfmathsetmacro\minutes{(\tick-\hours)*0.6}%
\pgfmathprintnumber{\hours}:\pgfmathprintnumber[fixed, fixed zerofill, skip 0.=true, dec sep={}]{\minutes}%
},
legend columns=2, legend style={at={(0.5,-0.25)},anchor=north}
]
\addplot[mark=none, color=mycolor1] table [x index=0, y index=1]{plotdata/csv/queue_pf_k10_tmin0.0.csv};
\addplot[mark=none, color=mycolor2] table [x index=0, y index=1]{plotdata/csv/queue_pf_k10_tmin0.1.csv};
\addplot[mark=none, color=mycolor3] table [x index=0, y index=1]{plotdata/csv/queue_pf_k10_tmin0.2.csv};
\addplot[mark=none, color=mycolor4] table [x index=0, y index=1]{plotdata/csv/queue_pf_k10_tmin0.30.csv};
\addplot[mark=none] table [x index=0, y index=1]{plotdata/csv/queue_pf_k10_tmin0.40.csv};
\addplot[mark=none, color=black, dotted] table [x index=0, y index=1]{plotdata/csv/queue_static.csv};
\legend{GPA $\bar{w} = 0$, GPA $\bar{w} = 0.1$, GPA $\bar{w} = 0.2$, GPA $\bar{w} = 0.3$, GPA $\bar{w} = 0.4$, Fixed Time }
\end{axis}
\end{tikzpicture}
\caption{How the queue lengths vary with time when the traffic lights in the LuST scenario are controlled with the GPA controller and the standard fixed time controller. For the GPA controller, the parameter $\kappa = 10$ and different values of $\bar{w}$ are tested. To improve the readability of the results, the queue lengths are averaged over $300$ second intervals.}
\label{fig:lustkappa10}
\end{figure}
\section{Conclusions}
In this paper, we have discussed implementation aspects of the Generalized Proportional Allocation controller. The controller's performance was compared to the MaxPressure controller both on an artificial Manhattan-like grid and in a realistic scenario. It was shown that the GPA controller performs better than the MaxPressure controller when the demand is low, while the MaxPressure controller performs better during high demand. These observations hold true even if the MaxPressure controller does not have correct information about the turning ratios in each junction.
While the MaxPressure controller needs information about the turning ratios and the queue lengths at neighboring junctions, the GPA controller does not require any such information. This makes the GPA controller easier to implement in a real scenario, where the downstream junctions may not be signalized or equipped with sensors. We showed that it is possible to implement the GPA controller in a realistic scenario covering the city of Luxembourg, and that it improves the traffic situation compared to a standard fixed time controller.
In all simulations, we have used the same tuning parameters for all junctions in the LuST scenario, while the fixed time controller differs between junction settings. Hence the GPA controller's performance can be improved even further by tuning the parameters specifically for each junction. Ideally, this should be done with some auto-tuning solution, but it may also be worthwhile to take static parameters, such as the sensor lengths, into account. This is a topic for future research.
\bibliographystyle{ieeetr}%
\section{INTRODUCTION}
It is generally supposed that the high energy emission from blazars --
ie BL Lac objects and quasars which display some evidence of relativistic
jets -- arises from Compton scattering of low energy seed photons.
However the evidence for this supposition is quite weak.
There has been remarkably little progress, despite a great deal
of observational effort, in determining the details of the high energy
emission models. Various possibilities exist, all of which require
that the scattering particles are the relativistic electrons in the
jet. The most popular hypothesis is the Synchrotron Self-Compton
(SSC) model in which the seed photons are the synchrotron photons from
the jet, up-scattered by their parent electrons. Alternatively the
seed photons may arise externally to the jet (the External Compton,
EC, process) or, in a combination of the two models, photons from the
jet may be mirrored back to the jet (the Mirror Compton, MC, model)
from a gas cloud before scattering up to high energies. The various
models make slightly different predictions about the lags between the
seed and Compton-scattered variations, and about the relative
amplitudes of the two components and so, in principle, the models can
be distinguished (eg see Ghisellini and Maraschi 1996 and Marscher 1996
for summaries of the predictions of the various models). Much
observational effort has therefore been devoted to attempting to find
correlated variability in the high and low energy bands.
\begin{figure*}
\begin{center}
\leavevmode
\epsfxsize 0.8\hsize
\epsffile{xkmmmm.ps}
\end{center}
\caption{X-ray, infrared and millimetre lightcurves.
The X-ray counts are the total from 3 PCUs of the PCA.
The 1.3mm data are from the JCMT (filled circles), with some points from
OVRO (open squares). The 3mm data are all from OVRO. }
\label{fig:lcurves}
\end{figure*}
In the SSC model, it has generally been expected that, as the peak of
the synchrotron photon number spectrum lies in the mm band for most
radio-selected blazars, the mm would provide the bulk of the seed
photons and so would be well correlated with the X-ray emission.
However in the case of 3C273, one of the brightest blazars, extensive
searches have been carried out for a connection between the X-ray and
millimetre bands on both daily (M$\rm^{c}$Hardy\, 1993) and monthly (Courvoisier
\it et al.~\rm 1990; M$\rm^{c}$Hardy\, 1996) timescales but no correlation has been
found. The SSC model may, however, be saved if the flaring synchrotron
component is self-absorbed at wavelengths longer than $\sim1$ mm. We
therefore undertook a search for a correlation between the X-ray
and infrared emission in 3C273; previous observations (eg Courvoisier
\it et al.~\rm 1990; Robson \it et al.~\rm 1993) have confirmed that infrared flares in
3C273 are due to variations in a synchrotron component. In the past,
large amplitude infrared flares have been seen only rarely in 3C273
(eg Courvoisier \it et al.~\rm 1990; Robson \it et al.~\rm 1993), partially because of
limited sampling which usually could not detect flares with overall
timescales $\sim$week. Nonetheless the previous sampling was
sufficient to show that such flare activity is not a continual
occurrence. It may be relevant that the present observations,
during which large amplitude infrared variability was detected,
were made during a period when the millimetre flux from 3C273 was very
high.
Here we present what we believe is the
best sampled observation of correlated variability between the
synchrotron and Compton-scattered wavebands in any blazar. The
observations cover not just one flaring event, which could be due to
chance, unrelated, flaring in the two wavebands, but two large
variations. The observations, including the X-ray, infrared and
millimetre lightcurves, and cross-correlation of the X-ray and other
waveband lightcurves, are described in Section 2. The origin of the
X-ray seed photons is discussed in Section 3, the implications of the
observations are discussed in Section 4 and the overall
conclusions are given in Section 5.
\section{OBSERVATIONS}
\subsection{X-ray Observations}
During the 6 week period from 22 December 1996 to 5 February 1997,
X-ray observations were carried out twice a day by RXTE and nightly near
infrared service observations were made at the United Kingdom Infrared
Telescope (UKIRT).
The X-ray observations were made with the large area (0.7 m$^{2}$)
Proportional Counter Array (PCA) on RXTE (Bradt, Rothschild
and Swank 1993). Each observation lasted for
$\sim1$ksec. The PCA is a non-imaging device with a field of view of
FWHM $\sim1^\circ$ and so the background count rate was calculated
using the RXTE q6 background model. Standard selection
criteria were applied to reject data of particularly high background
contamination.
3C273 is detectable in each observation in the energy range 3-20 keV
and its spectrum is well fitted by a simple power law. As with other
PCA spectra (eg The Crab --see
http://lheawww.gsfc.nasa.gov/users/keith/pcarmf.html) the measured
energy index, $\alpha$=0.7, is 0.1 steeper than measured by previous
experiments, eg GINGA (Turner \it et al.~\rm 1990). The X-ray spectra, and
spectral variability during the present observations are discussed in
detail by Lawson \it et al.~\rm (in preparation). The average count rate of 45 counts
s$^{-1}$ (3-20 keV) (the total for 3 of the proportional counter units,
PCUs, of the PCA) corresponds to a flux of $1.5 \times 10^{-10}$ ergs
cm$^{-2}$ s$^{-1}$ (2-10 keV).
In figure~\ref{fig:lcurves} we present the count rate in the 3-20 keV
band. We see two large X-ray flares. The first flare begins on
approximately 1 January 1997, reaches a peak on 4 January and returns
to its pre-flare level on 10 January. The flare is quite smooth. The
second flare begins on 22 January and lasts until approximately 1
February. The initial rise is faster than that of the first flare, and
the overall shape indicates a superposition of a number of smaller
flares. X-ray spectral variations are seen during the flares (Lawson
\it et al.~\rm in preparation), showing that changes in the Doppler factor of the jet
cannot, alone, explain the observed variability.
\subsection{Infrared and Millimetre Observations}
In figure~\ref{fig:lcurves} we show 1.3 and 3 mm observations from the
James Clerk Maxwell Telescope (JCMT - see Robson \it et al.~\rm 1993 for
reduction details) and from the Owens Valley Radio Observatory
(OVRO); the latter data were obtained from the calibration
database. There is no evidence of flares of comparable amplitude to
those in the X-ray lightcurve, but the sampling is poorer and the
errors are larger.
We also show the K-band lightcurve derived from service observations
at the United Kingdom Infrared Telescope (UKIRT) from 1 January until
3 February 1997. The observations were made with the infrared imaging
camera IRCAM3 with typical exposures of 3 minutes. The observations
were made in a standard mosaic manner and the data were also reduced
in a standard manner. There are some gaps due to poor weather but
increases in the infrared flux at the same time as the X-ray flares
can be seen clearly. The average K error is $\sim1$mJy (ie 1 per
cent). Approximately half of the error comes from the Poisson noise
and the rest comes from calibration uncertainties.
\subsection{X-ray/Infrared Cross-Correlation}
We have cross-correlated the X-ray lightcurves with the millimetre and
K-band lightcurves using the Edelson and Krolik (1988) discrete
cross-correlation algorithm as coded by Bruce Peterson (private communication).
As found previously there is no correlation of the
X-ray emission with the millimetre emission but there is a very strong
correlation with the infrared emission (figure~\ref{fig:xcor}) with
correlation coefficient close to unity. The cross-correlation peaks
close to zero days lag but is asymmetric. Although we can rule out the
infrared lagging the X-rays by more than about one day, the X-rays
lagging the infrared by up to 5 days is possible.
The observations presented here are the first to show a definite
correlation in 3C273 between the X-ray emission and that of any
potential seed photons.
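For concreteness, the core of the discrete correlation computation for
two unevenly sampled lightcurves is sketched below (a minimal Python
transcription of the Edelson and Krolik estimator with simplified error
handling; it is illustrative only, not the Peterson implementation used
above):
\begin{verbatim}
import numpy as np

def dcf(t1, f1, e1, t2, f2, e2, lag_edges):
    # unbinned correlation coefficients for every pair (i, j)
    udcf = np.outer(f1 - f1.mean(), f2 - f2.mean())
    udcf /= np.sqrt((f1.var() - (e1**2).mean()) *
                    (f2.var() - (e2**2).mean()))
    lags = t2[None, :] - t1[:, None]   # positive: series 2 lags series 1
    out = []
    for lo, hi in zip(lag_edges[:-1], lag_edges[1:]):
        sel = (lags >= lo) & (lags < hi)
        out.append(udcf[sel].mean() if sel.any() else np.nan)
    return np.array(out)
\end{verbatim}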
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize 1.0\hsize
\epsffile{xir_0.5.ps}
\end{center}
\caption{Cross-correlation of the 3-20 keV X-ray lightcurve and
K-band lightcurves shown in figure~\ref{fig:lcurves}.
}
\label{fig:xcor}
\end{figure}
\section{THE ORIGIN OF THE X-RAY SEED PHOTONS}
An important question is whether the infrared photons are actually the
seed photons for the X-ray emission or whether they are simply tracers
of a more extended spectral continuum, with the X-rays arising from
scattering of another part of the continuum. Robson \it et al.~\rm (1993) state
that in 3C273 the onset and peak of flares occur more or less
simultaneously (ie lags of $<1$ day) from K-band to 1.1 mm.
Therefore although we have not adequately monitored at wavelengths
longer than 2.2$\mu$, we assume that the whole IR to mm continuum does
rise simultaneously.
We have therefore calculated the Compton scattered spectrum resulting
from the scattering of individual decades of seed photon energies,
from the infrared to millimetre bands. The seed photons are taken from
a typical photon distribution and are scattered by a typical electron
distribution. The resulting scattered spectra are shown in
figure~\ref{fig:scatter} and details of the photon and electron
distributions are given in the caption to figure~\ref{fig:scatter}.
It is assumed that the emission region is optically thin which, in
blazars, is true for the large majority of frequencies discussed in
figure~\ref{fig:scatter}. Note that although the electron and input
photon spectra are self-consistent as regards the SSC mechanism, the
result is general and applies to scattering of seed photons produced
by any mechanism. At the highest Compton scattered energies, ie GeV,
only the highest energy seed photons below the break in the photon
distribution (ie near infrared) are important. However at medium
energy X-rays we get approximately equal contributions from each
decade of seed photons. Thus scattered infrared photons probably
contribute about 20 per cent of the medium energy X-ray flux and the sum of
the scattered X-ray emission from lower energy seed photons exceeds
that from the infrared alone. These ratios can be altered slightly by
different choices of seed photon and electron spectral index, but the
general result is robust.
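The equal contribution per decade can be anticipated with a simple
delta-function (Thomson) estimate, in which seed photons of frequency
$\nu_s$ are scattered to $\nu \approx \gamma^2 \nu_s$; this is a
back-of-envelope check, not the full Klein-Nishina calculation used for
figure~\ref{fig:scatter}. The contribution to the scattered flux at
fixed $\nu$ from each logarithmic interval of seed frequency is
\[
\frac{dF_C(\nu)}{d\ln \nu_s} \propto \nu \, n_{\rm ph}(\nu_s)\,
\frac{N(\gamma)}{\gamma}\bigg|_{\gamma=(\nu/\nu_s)^{1/2}}
\propto \nu^{-(m-1)/2}\,\nu_s^{(m-1)/2-\alpha},
\]
where $n_{\rm ph}(\nu_s)\propto \nu_s^{-\alpha-1}$ is the seed photon
number density. This is independent of $\nu_s$ exactly when
$\alpha=(m-1)/2$, which holds for the values $\alpha=0.75$ and $m=2.5$
used in figure~\ref{fig:scatter}.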
If the infrared is indeed a tracer of the seed photon continuum, we
can extrapolate to find the expected variability in the millimetre
band. The peak and minimum observed K fluxes during our observations
are 124 and 93 mJy respectively, ie a range of 31 mJy, although we
note that we do not have K observations at either the peak or minimum
of the X-ray lightcurves and so the true range of K-band variability
may be somewhat more. If the spectral index, $\alpha$, of the seed
spectrum is 0.75 (as reported by Robson \it et al.~\rm and Stevens \it et al.~\rm 1998)
we would then expect a rise of $\sim$3.7 Jy at 1.3 mm, which we cannot
rule out in the present observations and which would not have been
easy to detect in previous, less well sampled, monitoring
observations, explaining the lack of success of previous searches for
millimetre/X-ray correlations. At 3mm the predicted variability
amplitude would be 7 Jy. Robson \it et al.~\rm state that 3mm rises lag
1mm rises by about 6 days, and that 3mm decays are substantially longer,
all of which would make them easier to detect, given our sampling
pattern. However, with the exception of the very last datapoint at
day 44, no deviations of more than 5 Jy from the mean level are
detected. The implication is that $\alpha \leq 0.75$ or that the
flaring component is self absorbed by 3mm. If the flaring component
has $\alpha=1.2$ as derived for the 1983 flare by Marscher and Gear
(1985), that component would have to be self absorbed by 1.3mm.
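The extrapolation is elementary to reproduce; assuming only
$F_{\nu}\propto\nu^{-0.75}$ between the K band and the millimetre band,
the quoted amplitudes follow directly (a check of the arithmetic in
Python):
\begin{verbatim}
from scipy.constants import c   # speed of light [m/s]

alpha = 0.75                    # seed spectral index, F_nu ~ nu**(-alpha)
dK = 0.031                      # observed K-band range [Jy]
nu_K = c / 2.2e-6               # 2.2 micron
for wavelength in (1.3e-3, 3.0e-3):              # 1.3 mm and 3 mm
    nu = c / wavelength
    print(wavelength, dK * (nu_K / nu)**alpha)   # ~3.7 Jy and ~7 Jy
\end{verbatim}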
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize 1.0\hsize
\epsffile{compton.ps}
\end{center}
\caption{Compton scattered spectrum resulting from the scattering of
a seed photon spectrum stretching from $\nu=10^{8}$ to $10^{20}$ Hz.
At low frequencies the spectral index, $\alpha$, where flux,
$F, \propto \nu^{-\alpha}$, is 0.75 and, above a break frequency
of $10^{14}$ Hz, $\alpha = 1.5$. The electron number energy spectrum,
$N(\gamma) \propto \gamma^{-m}$, where $\gamma$ is the Lorentz factor
of the individual electrons, stretches from $\gamma=10$ to $10^{7}$,
with a slope, $m$, at low energies of 2.5 and $m=4.0$ above
$\gamma=10^{4}$. The proper Klein-Nishina cross section is used.
No bulk relativistic motion is included.
The thick line represents the total scattered spectrum. The
other lines represent the result of scattering seed photons with
only one decade of energy.
Note that, in the medium energy X-ray band (4 keV= $10^{18}$ Hz), seed
photons from all decades from cm to near infrared contribute equally
to the scattered flux, with each contributing about 20 per cent.}
\label{fig:scatter}
\end{figure}
\section{DISCUSSION}
There are two major observational constraints on the X-ray emission
mechanism: the relative amplitudes of the synchrotron and Compton
scattered components, and the time lag between them. Here we
attempt to constrain these parameters by modelling the X-ray lightcurve.
\subsection{Modelling the X-ray lightcurve}
If the X-ray emission is physically related to the infrared emission,
then we can parameterise the relationship by:
\[ X_{predicted}(t)= A \, (K_{flux}(t-\delta t) - K_{quiescent})^{N}
\, + \, X_{quiescent} \]
$K_{quiescent}$ is a non-varying K-band component. Robson \it et al.~\rm (1993)
show that such a component, steady on a timescale of years, is
contributed probably by warm dust in the broad line clouds,
heated to the point of evaporation.
Following Robson \it et al.~\rm we fix $K_{quiescent}=50$mJy.
$K_{flux}(t-\delta t)$ is the total observed K-band flux at time
$t-\delta t$ and $X_{predicted}(t)$ is then the predicted total X-ray
flux at time $t$. $X_{quiescent}$ is the part of the X-ray flux which
does not come from the flaring region. The variable $\delta t$ is
included to allow for lags between the X-ray and infrared variations.
Initially we set $\delta t = 0$ but, in section 4.2, we consider the
implications of allowing $\delta t$ to vary.
$A$ is the constant of proportionality
(containing information about the electron
density, magnetic field and the various flux conversion constants)
and $N$ contains information about the emission mechanism. For
example if the X-rays arise from variations in electron density then
we expect $N=2$ in the SSC and MC processes, but in the EC model $N=1$.
We have therefore performed a $\chi^{2}$ fit, using a standard
Levenburg-Marquardt minimisation routine, comparing the predicted
X-ray flux with the observed flux, in order to determine the three
unknowns, $A$, $X_{quiescent}$ and $N$. The errors on the predicted
X-ray flux are derived from the observed errors on the infrared flux.
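A minimal version of such a fit is sketched below (Python; the starting
values are illustrative and a library Levenberg-Marquardt routine stands
in for the code actually used here):
\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

K_QUIESCENT = 50.0   # mJy, fixed following Robson et al. (1993)

def fit_xray_from_ir(k_flux, k_err, x_obs):
    # model: X_pred = A * (K - K_quiescent)**N + X_quiescent
    def residuals(p):
        A, N, Xq = p
        dk = k_flux - K_QUIESCENT
        x_pred = A * dk**N + Xq
        sigma = np.abs(A * N * dk**(N - 1)) * k_err  # errors from IR flux
        return (x_pred - x_obs) / sigma
    return least_squares(residuals, x0=[0.5, 1.0, 20.0], method="lm")
\end{verbatim}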
The present infrared lightcurve is not well enough sampled to
determine all 3 parameters independently but, if $X_{quiescent}$ could
be determined precisely from other observations, then we could
determine $N$ to $\pm0.2$. Here $N$ varies from 0.5 for
$X_{quiescent}=0$ to 1.0 for $X_{quiescent}=23$ and 2.0 for
$X_{quiescent}=35$. The minimum observed value of the total X-ray
count rate during the present observations was 35 count s$^{-1}$.
Hence as some part of those 35 count s$^{-1}$ almost certainly comes
from X-ray components which are not associated with the flaring
activity, eg a Seyfert-like nucleus or other parts of the jet, then
the maximum allowed value of $N$ is probably just below 2. Typical
RXTE count rates outside of major flaring periods are in the range
20-25 counts s$^{-1}$ and fluxes observed by previous satellites (eg
see Turner \it et al.~\rm 1990) correspond to the same flux range. If that
count rate represents the true value of $X_{quiescent}$, then $N$ is
probably nearer unity, favouring EC models, or SSC or MC models in
which variations in the magnetic field strength play an important part
in flux variations.
\subsection{Implications of lightcurve modelling for lags}
Comparison of the best-fit predicted and observed X-ray fluxes reveals
that, in the first flare, the predicted fluxes exceed the observed
fluxes on the rise and the reverse is true on the fall. A better fit,
at least for the first flare, occurs if the predicted lightcurve is
delayed by about a day (in other words, the observed IR leads the
X-rays). We therefore introduced a variety of time shifts $\delta t$
(defined above) into the IR lightcurve, and also separately considered the first
and second flares, and refitted. We applied simple linear
interpolation to estimate the IR flux at the exact (shifted) time of
the X-ray observations. The results are shown in
figure~\ref{fig:lagschi}.
When considering all of the IR data, we obtain a plot (top panel of
figure~\ref{fig:lagschi}) which is rather similar to the
cross-correlation plot (figure~\ref{fig:xcor}), which is not too
surprising as the analysis techniques are similar, although the
modelling in principle allows us to quantify the goodness of fit.
We are cautious of overinterpreting the above datasets and so we
prefer to plot figure~\ref{fig:lagschi} in terms of raw $\chi^{2}$
rather than probabilities which might be taken too literally. As in
many analyses where the errors are small, slight (real) differences in
data streams lead to low probabilities of agreement even though
overall agreement is very good. Here a minor variation in either
X-rays or IR from a region not associated with the flare could
provide that small difference. However the change in relative
goodness of fit can be easily seen from the $\chi^{2}$ plots.
When we consider separately the IR data from the first flare
(ie the 11 data points up to day 20 of 1997), or from the second flare
(the remaining 5 data points) we obtain much better fits. We find
that the first flare is best fitted if the IR leads the X-rays by
about 0.75 days. We are again cautious in ascribing exact errors
to the lag but changes of $\delta \chi^{2}$ of 6.4, corresponding
to 40 per cent confidence, occur in the first flare at 0.25 days from
the minimum value. A lag of the X-rays by the IR by less
than 0.25 days is ruled out at the 99.97 per cent confidence level.
The more limited data of the second flare is, however,
best fitted by simultaneous IR and X-ray variations.
Again with caution, we note that Lawson \it et al.~\rm (in preparation) find
different X-ray spectral behaviour between the two flares. In the
first flare the spectrum hardens at the flare onset but, at the peak,
the spectrum is softer than the time averaged spectrum; in the second
flare the hardness tracks the flux quite closely with the hardest
emission corresponding to the peak flux. Thus there do appear to be
differences between the two flares. However whether the observed
differences are due to differences in, for example the physical
parameters of the emitting region (eg density, magnetic field
strength), the strength of any exciting shock, or the geometry of the
emitting regions, is not yet clear but is an interesting subject
for future investigations.
Although not really intended for such analysis, blind application of
Keith Horne's Maximum Entropy Echo Mapping software to the whole
dataset also leads to the conclusion that the IR leads the X-rays by
0.75 days (Horne, private communication).
As an example we show, in figure~\ref{fig:xpred}, the observed X-ray
lightcurve and the predicted lightcurve, based on parameters derived
from fitting to just the first flare with the IR leading by 0.75 days
(the best fit). We see that such a lag does not fit the second flare
well. In particular the predicted X-ray fluxes for the second flare
all lie above the observed fluxes by about 4 counts s$^{-1}$ and the
predicted fluxes now slightly lag (by about half a day) the observed
fluxes. One possible explanation of the excess is that $X_{quiescent}$
is lower during the second flare. From our long term weekly
monitoring (in preparation) we note that the two flares shown here are
actually superposed on a slowly decreasing trend of the correct slope
to explain the excess. Inclusion of such a trend into our fitting
procedure does produce a slightly better fit for the overall dataset,
but the different lags between the first and second flare still
prevent a good overall fit from being obtained. We therefore favour
the explanation that the long term lightcurve is actually made
up of a number of short timescale (week) flares, superposed on a more
slowly varying (months) `quiescent' component, rather than proposing
that the lightcurve is made up entirely of short flares, with no
underlying `quiescent' component.
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize 1.0\hsize
\epsffile{lagschi.ps}
\end{center}
\caption{
Results of comparing the observed X-ray lightcurve with that predicted
from the infrared variations, with all parameters allowed to remain
free apart from the X-ray/infrared lag. The numbers of degrees of
freedom are 13 (both flares), 8 (first flare) and 2 (second flare).
Note that it is impossible
to obtain a good fit to both X-ray flares simultaneously but acceptable
fits can be obtained to each flare individually. However the lags are
different for the two flares with the X-rays lagging the infrared by
$\sim0.75$ days in the first flare but the X-rays and infrared being
approximately simultaneous in the second flare.
}
\label{fig:lagschi}
\end{figure}
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize 1.0\hsize
\epsffile{xpredict.ps}
\end{center}
\caption{Observed X-ray lightcurve (histogram) and the best fit
predicted X-ray flux (filled squares) based on the parameters derived
from fitting the infrared observations to the first flare (11 data points)
only. The best-fit parameters are $A=0.47$, $N=0.98$ and $X_{quiescent}=22.1$.
Following from figure~\ref{fig:lagschi} the lead of the
IR over the observed X-rays is fixed at 0.75 days,
the best-fit value for the first flare.
The observed X-ray errorbars (see figure~\ref{fig:lcurves})
are not repeated here to avoid cluttering the diagram.
Note how the predicted X-ray fluxes for the second `flare' are then
systematically overestimated and also slightly lag the observed
X-ray fluxes.}
\label{fig:xpred}
\end{figure}
\subsection{The X-ray Emission Mechanism}
The similarity between the present infrared variations and those of
previous infrared variations where the whole IR to mm continuum varied
together (Robson \it et al.~\rm 1993), and the lack of any other likely source
of rapidly variable infrared radiation, means that the varying
component of the infrared flux is almost certainly synchrotron
radiation from the jet. The very strong correlation between the X-ray
and infrared lightcurves shows that the same electrons which produce
the infrared synchrotron emission must also produce the scattered
X-ray emission. The original version of the EC
model (Dermer and Schlickeiser 1993)
in which the high energy variations are
caused by variations in the external seed photons is thus ruled out.
The next version, in which the electrons in the jet which produce the
infrared synchrotron emission also scatter an all-pervading ambient
nuclear photon field (Sikora, Begelman and Rees 1994)
is also ruled out, at least for the first flare,
as we would then expect exactly simultaneous X-ray and
infrared variations.
The remaining possible emission mechanisms are the SSC process,
which must occur at some level, and the MC process. In the SSC process
we expect, for moderate variations such as those observed here where
the emission region probably remains optically thin, that the X-ray
flares will lag the IR flares (in the source frame) by approximately
the light travel time across the radius of the emission region. The lag is
because most photons will not be scattered where they
are produced but will typically travel the radius of the emission
region before being scattered. In this model we can therefore deduce
the radius if we know the bulk Lorentz factor of the jet.
In the MC model the low energy photons also lead the high energy
photons, in this case by approximately the light travel time between
the emission region in the jet and the cloud.
If the cloud forms part of the broad line region we
might reasonably expect lags of order days.
The EC model is ruled out by the IR/X-ray lag but both the SSC and MC
models are consistent with the lag. The parameter $N$ is not yet well
defined but the present indications are that it is closer to 1 than to
2, which, for the SSC and MC models, implies that changes in magnetic
field strength are at least partially responsible for the observed
variations. The MC Compton-scattered flux has a stronger dependence on
the bulk Lorentz factor of the jet than does the SSC mechanism, but
that factor is very hard to measure.
\section{CONCLUSIONS}
We have demonstrated, for the first time, a strong relationship between
the X-ray emission and that in any other lower frequency band in
3C273. We have shown that the IR and X-ray emission in 3C273 are very
strongly correlated. By means of a simple calculation we have shown that
each decade of the synchrotron spectrum from the cm to IR bands probably
contributes equally (at about 20 per cent per decade) to the Compton scattered
X-ray flux. Overall the lag between the IR and X-ray bands is very small
but, in at least the first flare, the IR
leads the X-ray emission by $\sim0.75\pm0.25$ days.
This lag rules out the EC model but is consistent with either the
SSC or MC model.
We have attempted to measure the parameter $N$ which determines the
relationship between the seed photon and Compton
scattered flux. The present data do not greatly constrain $N$ although
they indicate that 2 is the absolute upper limit and that a lower
value is probable. In terms of the SSC or MC models the implication is
that changes in the magnetic field strength are responsible for
at least part of the observed variations and, for $N=1$, could
be responsible for all of the variations.
Because of their intrinsic similarity, the SSC and MC models
are hard to distinguish. However if it were possible to measure
IR/X-ray lags for a number of flares, of similar amplitude, in the
same source, then in the SSC model one would expect broadly similar
lags in each case, assuming that the emission comes from physically
similar emission regions. However in the MC model the reflecting
clouds will probably be at a variety of different distances and
so the lags should be different in each case.
We may also examine variations in optical and UV emission line
strength. If synchrotron radiation from the jet is irradiating
surrounding clouds (MC process), then we would expect the resultant
recombination line radiation to vary with similar amplitude to, and
simultaneously with, the synchrotron emission. However in
the SSC process we would expect no change in emission line
strength.
Further X-ray/IR observations with $\sim$few hour time resolution
are required to refine the lag found here and to determine whether
the lag is different in different flares.
\\
{\bf Acknowledgements} We are very pleased to thank the management and
operational staff of both RXTE and UKIRT for their cooperation in
scheduling and carrying out these observations.
We thank Keith Horne for running our data through his MEMECHO software.
IM$\rm ^{c}$H thanks PPARC for grant support
and APM was
supported in part by NASA Astrophysical Theory Grant NAG5-3839.
\subsection*{Abstract.}
``Dual composition'', a new method of constructing energy-preserving
discretizations of conservative PDEs, is introduced. It extends
the summation-by-parts approach to arbitrary differential operators
and conserved quantities. Links to pseudospectral, Galerkin,
antialiasing, and Hamiltonian methods are discussed.
\medskip
\subsection*{1. Introduction}
For all $v,w\in C^1([-1,1])$,
$$
\int_{-1}^1 v \partial_x w\, dx =
-\int_{-1}^1 w \partial_x v\, dx + [vw]_{-1}^1,
$$
so the operator $\partial_x$ is skew-adjoint on $\{v\in C^1([-1,1]):
v(\pm1)=0\}$ with respect to the $L^2$ inner product $\ip{}{}$. Take $n$
points $x_i$, a real function $v(x)$, and
estimate $v'(x_i)$ from the values $v_i := v(x_i)$. In vector
notation, ${\mathbf v}' = D {\mathbf v}$, where $D$ is a differentiation matrix.
Suppose that
the differentiation matrix has the form $D = S^{-1}A$, in which $S$
induces a discrete approximation
$$\ip{{\mathbf v}}{{\mathbf w}}_S := {\mathbf v}^{\mathrm T} S {\mathbf w}\approx \int vw\,dx=\ip{v}{w},$$
of the inner product. Then
\begin{equation}
\label{byparts}
\ip{{\mathbf v}}{D{\mathbf w}}_S + \ip{D{\mathbf v}}{{\mathbf w}}_S = {\mathbf v}^{\mathrm T} S S^{-1} A {\mathbf w} + {\mathbf v}^{\mathrm T} A^{\mathrm T}
S^{-\mathrm T} S {\mathbf w} = {\mathbf v}^{\mathrm T}(A+A^{\mathrm T}){\mathbf w},
\end{equation}
which is zero if $A$ is antisymmetric
(so that $D$ is skew-adjoint with respect to $\ip{\,}{}_S$),
or equals $[vw]_{-1}^1$ if $x_1=-1$, $x_n=1$, and
$A+A^{\mathrm T}$ is zero except for $A_{nn}=-A_{11}=\frac{1}{2}$.
Eq. (\ref{byparts}) is known as a ``summation by parts'' formula;
it affects the energy flux of methods built from $D$.
More generally, preserving structural features such as skew-adjointness
leads to natural and robust methods.
Although factorizations $D=S^{-1}A$ are ubiquitous in finite element
methods, they have been less studied elsewhere. They were introduced
for finite difference methods in \cite{kr-sc} (see \cite{olsson} for
more recent developments) and for spectral methods in \cite{ca-go}, in which
the connection between spectral collocation and Galerkin methods was used
to explain the skew-adjoint structure of some differentiation matrices.
Let ${\operator H}(u)$ be a continuum conserved quantity, the {\em energy.}
We consider PDEs
\begin{equation}
\label{eq:hamilt_pde}
\dot u = {\operator D}(u)\frac{\delta\H}{\delta u}
\mbox{,}
\end{equation}
and corresponding ``linear-gradient'' spatial discretizations
\cite{mclachlan2,mclachlan1,mqr:prl}, ODEs of
the form
\begin{equation}
\label{eq:lin_grad}
\dot {\mathbf u} = L({\mathbf u}) \nabla H({\mathbf u})
\end{equation}
with appropriate discretizations of $u$, ${\operator D}$, ${\operator H}$, and
$\delta/\delta u$. For a PDE of the form (\ref{eq:hamilt_pde}), if
${\operator D}(u)$ is formally skew-adjoint, then $d{\operator H}/dt$ depends only on the
total energy flux through the boundary; if this flux
is zero, ${\operator H}$ is an integral. Analogously, if
(\ref{eq:lin_grad}) holds, then
$\dot H = \frac{1}{2}(\nabla H)^{\mathrm T} (L+L^{\mathrm T}) \nabla H$,
so that $H$ cannot increase if the symmetric part of $L$ is
negative definite, and $H$ is an integral if $L$ is antisymmetric.
Conversely, all systems with an integral can be written in
``skew-gradient'' form ((\ref{eq:lin_grad}) with $L$ antisymmetric)
\cite{mqr:prl}.
Hamiltonian systems are naturally in the form
(\ref{eq:hamilt_pde}) and provide examples.
This paper summarizes \cite{mc-ro}, which contains
proofs and further examples.
\subsection*{2. Discretizing conservative PDEs}
In (\ref{eq:hamilt_pde}), we want to allow constant operators such as
${\operator D}=\partial_x^n$ and ${\operator D} = \left(
\begin{smallmatrix}0 & 1 \\ -1 & 0\\ \end{smallmatrix}
\right)$, and nonconstant ones such as
${\operator D}(u) = u\partial_x + \partial_x u$.
These differ in the class of functions and boundary conditions which make
them skew-adjoint, which suggests Defn. 1 below.
Let $({\functionspace F},\ip{}{})$ be an inner product space.
We use two subspaces ${\functionspace F}_0$ and ${\functionspace F}_1$ which can be infinite dimensional
(in defining a PDE) or finite dimensional (in defining a discretization).
We write $\{f_j\}$ for a basis of
${\functionspace F}_0$, $\{g_j\}$ for a basis of ${\functionspace F}_1$, and expand $u=u_j f_j$,
collecting the coefficients $(u_j)$ into a vector ${\mathbf u}$.
A cardinal basis is one in which $f_j(x_i) = \delta_{ij}$, so that
$u_j = u(x_j)$.
\begin{definition}
A linear operator
$${\operator D}: {\functionspace F}_0\times {\functionspace F}_1 \to {\functionspace F},\quad {\operator D}(u)v\mapsto w\mbox{,}$$
is {\em formally skew-adjoint} if there is a functional $b(u,v,w)$,
depending only on the boundary values of $u$, $v$, and $w$ and their
derivatives up to a finite order, such that
$$
\ip{v}{{\operator D}(u)w} = -\ip{w}{{\operator D}(u)v}+b(u,v,w)\quad \forall\, u\in {\functionspace F}_0
,\ \forall\, v,w\in {\functionspace F}_1 .
$$
${\functionspace F}_1$ is called a {\em domain of interior skewness} of ${\operator D}$.
If $b(u,v,w) = 0$ $\forall\,u\in{\functionspace F}_0$, $\forall\,v,w\in{\functionspace F}_1$,
${\functionspace F}_1$ is called a {\em domain of skewness} of ${\operator D}$,
and we say that ${\operator D}$ is skew-adjoint.
\end{definition}
\begin{example}\rm Let ${\functionspace F}^{\rm pp}(n,r) = \{u\in C^r([-1,1]):u|_{[x_i,x_{i+1}]}
\in {\functionspace P}_n\}$ be the piecewise polynomials of degree $n$ with $r$ derivatives.
For ${\operator D}=\partial_x$,
${\functionspace F}^{\rm pp}(n,r)$, $n,\ r\ge 0$, is a domain of interior
skewness, i.e., continuity suffices,
and $\{u\in{\functionspace F}^{\rm pp}(n,r):u(\pm 1)=0\}$ is a domain of skewness.
\end{example}
\begin{example}\rm
With $D(u) = 2(u\partial_x + \partial_x u) + \partial_{xxx}$, we have
$$
\ip{v}{{\operator D}(u)w}+\ip{w}{{\operator D}(u)v} = [w_{xx}v - w_x v_x + w v_{xx} + 2 uvw],$$
so suitable domains of interior skewness are ${\functionspace F}_0 = {\functionspace F}^{\rm
pp}(1,0)$, ${\functionspace F}_1={\functionspace F}^{\rm pp}(3,2)$, i.e., more smoothness is required
from $v$ and $w$ than from $u$.
A boundary condition which makes ${\operator D}(u)$ skew is $\{v:
v(\pm 1)=0,\ v_x(1)=v_x(-1) \}$.
\end{example}
\begin{definition} ${\functionspace F}_0$ is
{\em natural for ${\operator H}$} if $\forall u \in {\functionspace F}_0$ there exists
$\frac{\delta {\operator H}}{\delta u}\in{\functionspace F}$ such that
\[
\lim_{\varepsilon\rightarrow 0}
\frac{ {\operator H}(u+\varepsilon v) - {\operator H}(u) }{ \varepsilon }
= \ip{v}{\frac{\delta {\operator H}}{\delta u}}
\quad \forall\, v\in{\functionspace F}
\mbox{.}
\]
\end{definition}
The naturality of ${\functionspace F}_0$ often follows from the vanishing of the
boundary terms, if any, which appear in the first variation of ${\operator H}$,
together with mild smoothness assumptions.
We use appropriate
spaces ${\functionspace F}_0$ and ${\functionspace F}_1$ to generate spectral, pseudospectral, and
finite element discretizations which have discrete energy
$H:={\operator H}|_{{\functionspace F}_0}$ as a conserved quantity. The discretization of the
differential operator ${\operator D}$ is a linear operator $\overline{\operator D}
:{\functionspace F}_1\to{\functionspace F}_0$, and the discretization of the variational derivative
$\frac{\delta\H}{\delta u}$ is $\overline{\frac{\delta\H}{\delta u}}\in{\functionspace F}_1$.
Each of $\overline {\operator D}$ and $\overline\frac{\delta\H}{\delta u}$ is a weighted residual
approximation \cite{finlayson}, but each uses spaces of
weight functions different from its space of trial functions.
\begin{definition}
$S$ is the matrix of $\ip{}{}|_{{\functionspace F}_0\times{\functionspace F}_1}$, i.e.
$S_{ij} := \ip{f_i}{g_j}$.
$A(u)$ is the matrix of the linear operator ${\operator A}:(v,w)\mapsto\ip{v}{{\operator D}(u)w}$,
i.e. $A_{ij}(u) := \ip{g_i}{{\operator D}(u)g_j}$.
\end{definition}
\begin{proposition}
Let ${\functionspace F}_0$ be natural for ${\operator H}$ and let $S$ be nonsingular. Then for
every $u\in{\functionspace F}_0$ there is a unique element
$\overline{\frac{\delta {\operator H}}{\delta u}}\in{\functionspace F}_1$ such that
\[
\ip{w}{\overline{\frac{\delta {\operator H}}{\delta u}}} =
\ip{w}{\frac{\delta {\operator H}}{\delta u}} \quad \forall\, w\in{\functionspace F}_0
\mbox{.}
\]
Its coordinate representation is $S^{-1}\nabla H$ where $H({\mathbf u}):={\operator H}(u_i f_i)$.
\end{proposition}
\begin{proposition}
\label{prop:D}
Let $S$ be nonsingular. For every $v\in{\functionspace F}_1$, there exists a
unique element $\overline{\D}v\in{\functionspace F}_0$ satisfying
\[
\ip{\overline{\D}v}{w} = \ip{{\operator D} v}{w} \quad \forall\, w\in{\functionspace F}_1 \mbox{.}
\]
The map $v\mapsto\overline{\D}v$ is linear, with matrix representation $D:=S^{-\mathrm T} A$.
\end{proposition}
\begin{definition}
$\overline{{\operator D}}\overline{\frac{\delta{\operator H}}{\delta u}}:{\functionspace F}_0\to{\functionspace F}_0$
is the {\em dual composition discretization} of
${\operator D}\frac{\delta{\operator H}}{\delta u}$.
\end{definition}
Its matrix representation is $S^{-\mathrm T} A S^{-1} \nabla H$.
The name ``dual composition'' comes from the dual roles played
by ${\functionspace F}_0$ and ${\functionspace F}_1$ in defining $\overline{{\operator D}}$
and $\overline{\frac{\delta\H}{\delta u}}$; this duality is necessary so that their composition has the required
linear-gradient structure.
Implementation and accuracy of
dual composition and Galerkin discretizations are similar. Because
they coincide in simple cases, such methods are widely used already.
\begin{proposition}
If ${\functionspace F}_1$ is a domain of skewness, the matrix $S^{-\mathrm T} A S^{-1}$
is antisymmetric, and the system of ODEs
\begin{equation}
\label{eq:disc}
\dot{\mathbf u}
=
S^{-\mathrm T} A S^{-1} \nabla H
\end{equation}
has $H$ as an integral. If, in addition, ${\operator D}$ is constant---i.e.,
does not depend on $u$---then the system (\ref{eq:disc}) is Hamiltonian.
\end{proposition}
The method of dual compositions also yields
discretizations of linear differential operators ${\operator D}$ (by taking
${\operator H}=\frac{1}{2}\ip{u}{u}$), and discretizations of variational
derivatives (by taking ${\operator D}=1$).
It also applies to formally {\em self}-adjoint
${\operator D}$'s and to mixed (e.g. advection-diffusion) operators, where
preserving symmetry gives control of the energy.
The composition of two weighted residual discretizations is not
necessarily itself of weighted residual type. The simplest case is
when ${\functionspace F}_0={\functionspace F}_1$ and we compare the dual composition to the
{\em Galerkin discretization}, a weighted
residual discretization of ${\operator D} \frac{\delta {\operator H}}{\delta u}$ with
trial functions and weights both in ${\functionspace F}_0$. They are the same when
projecting $\frac{\delta\H}{\delta u}$ to ${\functionspace F}_0$, applying ${\operator D}$, and
again projecting to ${\functionspace F}_0$, is equivalent to directly projecting
${\operator D}\frac{\delta\H}{\delta u}$ to ${\functionspace F}_0$.
For brevity, we assume ${\functionspace F}_0={\functionspace F}_1$ for the rest of Section 2.
\begin{proposition}
\label{prop:galerkin}
$\overline{{\operator D}}\overline{\frac{\delta\H}{\delta u}}
$ is the Galerkin approximation of
${\operator D} \frac{\delta\H}{\delta u}$ if and only if
$ {\operator D} \big( \overline{\frac{\delta\H}{\delta u}} - \frac{\delta\H}{\delta u} \big) \perp {\functionspace F}_0.$
This occurs if
(i) ${\operator D}({\functionspace F}_0^\perp)\perp{\functionspace F}_0$, or
(ii) $\overline{\operator D}$ is exact and applying ${\operator D}$ and orthogonal
projection to ${\functionspace F}_0$ commute, or
(iii) $\overline{\frac{\delta {\operator H}}{\delta u}}$ is exact,
i.e., $\frac{\delta\H}{\delta u}\in{\functionspace F}_0$.
\end{proposition}
Fourier spectral methods with ${\operator D}=\partial_x^n$ satisfy (ii), since
then ${\functionspace F}$ has an orthogonal
basis of eigenfunctions ${\mathrm e}^{ijx}$ of ${\operator D}$, and differentiating
and projecting (dropping the high modes) commute. This is illustrated
later for the KdV equation.
The most obvious situation in which $\frac{\delta\H}{\delta u}\in{\functionspace F}_0$ is when
${\operator H}=\frac{1}{2}\ip{u}{u}$, since then
$\frac{\delta\H}{\delta u}=u\in{\functionspace F}_0$ and ${\operator D}\frac{\delta {\operator H}}{\delta u}={\operator D} u$,
and the discretization of ${\operator D}$ is obviously the Galerkin one!
When the functions $f_j$ are nonlocal, $D$ is often called the
spectral differentiation matrix. The link to standard pseudospectral
methods is that some Galerkin methods are pseudospectral.
\begin{proposition}
\label{prop:pseudo}
If ${\operator D}({\functionspace F}_1)\subseteq{\functionspace F}_1$, then $\overline{{\operator D}}v={\operator D} v$,
i.e., the Galerkin approximation of the derivative is exact.
If, further, $\{f_j\}$ is a cardinal basis,
then $D$ is the standard pseudospectral differentiation matrix,
i.e. $D_{ij} = {\operator D} f_j(x_i)$.
\end{proposition}
We want to emphasize that although $A$, $S$, and $D$ depend on the basis,
$\overline{\operator D}$ depends only on ${\functionspace F}_0$ and ${\functionspace F}_1$, i.e., it is
basis and grid independent.
In the factorization $D=S^{-\mathrm T} A$, the (anti)symmetry of $A$ and $S$ is basis
independent, unlike that of $D$. These points are well known in
finite elements, less so in pseudospectral methods.
\begin{example}[\bf Fourier differentiation\rm]\rm
Let ${\functionspace F}_1$ be the trigonometric polynomials of degree $n$, which is
closed under differentiation (so that Prop. \ref{prop:pseudo}) applies,
and is a domain of skewness of ${\operator D}=\partial_x$. In any basis, $A$ is
antisymmetric. Furthermore, the two popular bases, $\{\sin(j x)_{j=1}^n,
\cos(j x)_{j=0}^n\}$, and the cardinal basis on equally-spaced grid
points, are both orthogonal, so that $S=\alpha I$ and $D=S^{-1}A$ is
antisymmetric in both cases.
\end{example}
\begin{example}[\bf Polynomial differentiation\rm]\rm
\label{sec:cheb}
${\functionspace F}_1={\functionspace P}_n([-1,1])$ is a domain of interior skewness which is
closed under ${\operator D}=\partial_x$, so pseudospectral differentiation
factors as $D=S^{-1}A$ in any basis. For a cardinal
basis which includes $x_0=-1$, $x_n=1$, we have $(A+A^{\mathrm T})_{ij}=-1$
for $i=j=0$, $1$ for $i=j=n$, and 0 otherwise, making obvious
the influence of the boundary.
For the Chebyshev points $x_i = -\cos(i
\pi/n)$, $i=0,\dots,n$, $A$ can be evaluated first in a basis
$\left\{ T_i \right\}$ of Chebyshev polynomials:
one finds
$A_{ij}^{\rm cheb} = 2 j^2/(j^2-i^2)$ for $i-j$ odd, and
$S_{ij}^{\rm cheb} = -2(i^2+j^2-1)/
[((i+j)^2-1)((i-j)^2-1)]$ for $i-j$ even, with other entries 0.
Changing to a cardinal basis by
$F_{ij} = T_j(x_i) = \cos(i j \pi/n)$, a
discrete cosine transform, gives $A=F^{-1} A^{\rm cheb} F^{-\mathrm T}$.
For example, with $n=3$
(so that $(x_0,x_1,x_2,x_3)=(-1,-\frac{1}{2}, \frac{1}{2},1)$), we have
$$ D =
{\scriptstyle \frac{1}{6}}
\left(
\begin{smallmatrix}
-19 & 24 & -8 & 3 \\
-6 & 2 & 6 & -2 \\
2 & -6 & -2 & 6 \\
-3 & 8 & -24 & 19 \\
\end{smallmatrix}
\right)
= S^{-\mathrm T} A =
{\scriptstyle \frac{1}{512}}
\left(
\begin{smallmatrix}
4096 & -304 & 496 & -1024\\
-304 & 811 & -259 & 496\\
496 & -259 & 811 & -304\\
-1024 & 496 & -304 & 4096\\
\end{smallmatrix}
\right)
{\scriptstyle \frac{1}{270}}
\left(
\begin{smallmatrix}
-135 & 184 & -72 & 23 \\
-184 & 0 & 256 & -72\\
72 & -256 & 0 & 184 \\
-23 & 72 & -184 & 135
\end{smallmatrix}
\right).
$$
$S$ and $A$ may be more amenable to study than $D$ itself.
All their eigenvalues are very well-behaved; none are spurious. The
eigenvalues of $A$ are all imaginary and, as $n\to\infty$, uniformly
fill $[-i\pi,i\pi]$ (with a single zero eigenvalue corresponding
to the Casimir of $\partial_x$).
The eigenvalues of $S$ closely approximate the
quadrature weights of the Chebyshev grid.
\end{example}
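The factorization is easy to verify numerically from the definitions
alone; the following sketch (Python) rebuilds $S$, $A$ and $D$ for $n=3$
in the cardinal basis by exact integration, reproducing the matrix $D$
above and the corner structure of $A+A^{\mathrm T}$:
\begin{verbatim}
import numpy as np
from numpy.polynomial import polynomial as P

x = np.array([-1.0, -0.5, 0.5, 1.0])     # Chebyshev points, n = 3

# cardinal basis l_j (l_j(x_i) = delta_ij) as coefficient arrays
card = []
for j in range(len(x)):
    c = np.array([1.0])
    for k in range(len(x)):
        if k != j:
            c = P.polymul(c, np.array([-x[k], 1.0])) / (x[j] - x[k])
    card.append(c)

def integral(c):                         # integral over [-1, 1]
    ci = P.polyint(c)
    return P.polyval(1.0, ci) - P.polyval(-1.0, ci)

S = np.array([[integral(P.polymul(ci, cj)) for cj in card]
              for ci in card])
A = np.array([[integral(P.polymul(ci, P.polyder(cj))) for cj in card]
              for ci in card])
D = np.linalg.solve(S, A)                # S symmetric, so S^{-T} = S^{-1}
print(np.round(6 * D, 10))               # rows of D above, times 6
print(np.round(A + A.T, 10))             # zero except corners -1 and +1
\end{verbatim}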
For ${\operator D}\ne\partial_x$, $\overline{{\operator D}}$ may be quite expensive
and no longer pseudospectral. (There is in general no
$S$ with respect to which the pseudospectral approximation of
${\operator D} v$ is skew-adjoint.) However, $\overline{{\operator D}}v$ can
be computed quickly if fast transforms between cardinal and
orthonormal bases exist. We evaluate ${\operator D} v$ exactly for
$v\in{\functionspace F}_1$ and then project $S$-orthogonally to ${\functionspace F}_1$.
\begin{example}[\bf Fast Fourier Galerkin method\rm]\rm
\label{fastfourier}
Let ${\operator D}(u)$ be linear in $u$, for example, ${\operator D}(u) = u\partial_x
+ \partial_x u$. Let $u,\ v\in{\functionspace F}_1$, the trigonometric
polynomials of degree $n$. Then ${\operator D}(u)v$ is
a trigonometric polynomial of degree $2n$, the first $n$ modes of
which can be evaluated exactly using antialiasing and Fourier
pseudospectral differentiation. The approximation whose error
is orthogonal to ${\functionspace F}_1$ is just these first $n$ modes, because $S=I$
in the spectral basis. That is, the antialiased
pseudospectral method is here identical to the Galerkin method, and hence
skew-adjoint. Antialiasing makes pseudospectral methods conservative.
This is the case of the linear ${\operator D}$'s of the Euler fluid equations.
\end{example}
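A minimal sketch of this antialiased product (Python; the factors of $2$
come from the FFT normalization when padding from $n$ to $2n$ points)
is:
\begin{verbatim}
import numpy as np

def dealiased_product(u, v):
    # Galerkin projection of the product of two trigonometric
    # polynomials sampled on n equispaced points: form the exact
    # product on a 2n-point grid, keep only the first n modes.
    n = len(u)
    k = (np.fft.fftfreq(n) * n).astype(int)   # signed mode numbers
    Up = np.zeros(2 * n, complex)
    Vp = np.zeros(2 * n, complex)
    Up[k] = np.fft.fft(u)
    Vp[k] = np.fft.fft(v)
    w_fine = 4 * np.fft.ifft(Up) * np.fft.ifft(Vp)  # exact product
    W = np.fft.fft(w_fine)
    return np.fft.ifft(W[k] / 2).real         # first n modes only
\end{verbatim}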
\begin{example}[\bf Fast Chebyshev Galerkin method\rm]\rm
Let ${\operator D}(u)$ be linear in $u$ and let $u,\ v\in{\functionspace F}_1={\functionspace P}_n$.
With respect to the cardinal basis on the Chebyshev grid with $n+1$
points, $\overline{{\operator D}}(u)v$ can be computed in time ${\mathcal O}(n \log n)$ as follows:
(i)
Using an FFT, express $u$ and $v$ as Chebyshev polynomial
series of degree $n$;
(ii) Pad with zeros to get Chebyshev polynomial series of formal
degree $2n$;
(iii) Transform back to a Chebyshev grid with $2n+1$ points;
(iv) Compute the pseudospectral approximation of ${\operator D}(u)v$ on the
denser grid. Being a polynomial of degree $\le 2n$, the
corresponding Chebyshev polynomial series is exact;
(v) Convert ${\operator D}(u)v$ to a Legendre polynomial series using a fast
transform \cite{al-ro};
(vi) Take the first $n+1$ terms. This produces
$\overline{{\operator D}}(u)v$, because the Legendre
polynomials are orthogonal.
(vii) Convert to a Chebyshev polynomial series with $n+1$ terms
using a fast transform;
(viii) Evaluate at the points of the original Chebyshev grid using an FFT.
\end{example}
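The projection steps can be transcribed directly: given the values of
${\operator D}(u)v$ on the denser grid from step (iv), the following
sketch (Python; exact but $O(n^2)$ basis conversions through the power
basis stand in for the fast transforms of steps (v) and (vii), so it is
only suitable for moderate $n$) returns $\overline{{\operator D}}(u)v$
on the original grid:
\begin{verbatim}
import numpy as np
from numpy.polynomial import chebyshev as Ch, legendre as Le

def project_to_Pn(w_fine, n):
    # w_fine: values of a polynomial of degree <= 2n on the
    # (2n+1)-point Chebyshev grid; returns values of its
    # L^2-orthogonal projection onto P_n on the (n+1)-point grid
    x_fine = -np.cos(np.arange(2 * n + 1) * np.pi / (2 * n))
    cheb = Ch.chebfit(x_fine, w_fine, 2 * n)  # exact interpolant
    leg = Le.poly2leg(Ch.cheb2poly(cheb))     # (v) Chebyshev -> Legendre
    leg = leg[:n + 1]                         # (vi) truncate: orthogonal
    cheb_n = Ch.poly2cheb(Le.leg2poly(leg))   # (vii) back to Chebyshev
    x = -np.cos(np.arange(n + 1) * np.pi / n)
    return Ch.chebval(x, cheb_n)              # (viii) evaluate on grid
\end{verbatim}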
\subsection*{3. Examples of the dual composition method}
\begin{example}[\bf The KdV equation\rm]\rm
$ \dot u + 6 u u_x + u_{xxx}=0$ with
periodic boundary conditions has features which can be used to illustrate
various properties of the dual composition method. Consider two of its
Hamiltonian forms,
$$
\dot u = {\operator D}_1\frac{\delta\H_1}{\delta u}\mbox{, } {\operator D}_1 =
\partial_x\mbox{, } {\operator H}_1 = \int\big( -u^3+\frac{1}{2} u_x^2\big)\,dx\mbox{,}$$
and
$$
\dot u = {\operator D}_2\frac{\delta {\operator H}_2}{\delta u}\mbox{, } {\operator D}_2 =
-(2u\partial_x + 2\partial_x u + \partial_{xxx})\mbox{, } {\operator H}_2 =
\frac{1}{2}\int u^2\,dx\mbox{.}$$
In the case ${\functionspace F}_0={\functionspace F}_1={\functionspace F}^{\rm trig}$, $v:=\overline{\frac{\delta\H_1}{\delta u}}$
is the orthogonal projection to ${\functionspace F}_0$ of $\frac{\delta\H_1}{\delta u}=-3u^2-u_{xx}$; this can be
computed by multiplying out the Fourier series and dropping all but
the first $n$ modes, or by antialiasing.
Then $\overline{{\operator D}}_1 v = v_x$, since
differentiation is exact in ${\functionspace F}^{\rm trig}$. Since ${\operator D}_1$
is constant, the discretization is a Hamiltonian system, and since
$\overline{{\operator D}}_1$ is exact on constants, it also
preserves the Casimir ${\mathcal C}=\int u\,dx$.
In this formulation, Prop. \ref{prop:galerkin} (ii) shows that
the dual composition and Galerkin approximations of ${\operator D}_1\frac{\delta\H_1}{\delta u}$ coincide,
for differentiation does not map high modes to lower modes, i.e.,
${\operator D}_1({\functionspace F}^{{\rm trig}\perp})\perp{\functionspace F}^{\rm trig}$.
In the second Hamiltonian form, $H_2 = \frac{1}{2}{\mathbf u}^{\mathrm T} S {\mathbf u}$, $\frac{\delta\H_2}{\delta u} =
S^{-1}\nabla H_2 = {\mathbf u},$ and the Galerkin approximation of $\frac{\delta\H_2}{\delta u}$ is exact,
so that Prop. \ref{prop:galerkin} (iii) implies that the composition
$\overline{\operator D}_2\overline{\frac{\delta\H_2}{\delta u}}$ {\em also} coincides with the Galerkin
approximation. $\overline{{\operator D}}_2v$ can be evaluated using antialiasing
as in Example \ref{fastfourier}. $\overline{{\operator D}}_2$ is
not a Hamiltonian operator, but still generates a skew-gradient
system with integral $H_2$. Thus in this (unusual) case,
the Galerkin and antialiased pseudospectral methods coincide and have
three conserved quantities,
$H_1$, $H_2$, and ${\mathcal C}|_{{\functionspace F}^{\rm trig}}$.
The situation for finite element methods with
${\functionspace F}_0={\functionspace F}_1={\functionspace F}^{\rm pp}(n,r)$ is different.
In the first form, we need $r\ge 1$ to ensure
that ${\functionspace F}_0$ is natural for ${\operator H}_1$; in the second form, naturality is
no restriction, but we need $r\ge2$ to ensure that ${\functionspace F}_1$ is a domain
of interior skewness. The first dual composition method
is still Hamiltonian with
integral $H_1$ and Casimir $C=u_i\int f_i\, dx$, but because
$\overline{\operator D}_1$ does not commute with projection to ${\functionspace F}_1$, it is {\em
not} a standard Galerkin method.
In the second form, $\frac{\delta\H_2}{\delta u}=u$ is still exact, so the
dual composition and Galerkin methods still coincide.
However, they are not Hamiltonian.
\end{example}
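For the Fourier case just described, the dual composition (equivalently
Galerkin) right-hand side of the first Hamiltonian form can be assembled
from the antialiased product sketched earlier (Python; $2\pi$-periodic
grid, illustrative only):
\begin{verbatim}
import numpy as np

def kdv_rhs(u):
    # u_t = D_1 dH_1/du with D_1 = d/dx, dH_1/du = -3u^2 - u_xx,
    # projected onto F^trig; u holds values on n equispaced points
    n = len(u)
    ik = 1j * np.fft.fftfreq(n) * n                # d/dx symbol
    u_hat = np.fft.fft(u)
    u_xx = np.fft.ifft(ik**2 * u_hat).real         # exact in F^trig
    v = -3 * dealiased_product(u, u) - u_xx        # projected dH_1/du
    return np.fft.ifft(ik * np.fft.fft(v)).real    # exact derivative
\end{verbatim}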
\begin{example}[\bf An inhomogeneous wave equation\rm]\rm
When natural and skew boundary conditions conflict, it is necessary
to take ${\functionspace F}_0\ne{\functionspace F}_1$. Consider
$ \dot q = a(x)p$, $\dot p = q_{xx}$, $q_x(\pm1,t)=0$.
This is a canonical Hamiltonian system with
$$ {\operator D} = \left(\begin{matrix}0 & 1 \\ -1 & 0 \\\end{matrix}\right),\
{\operator H} = \frac{1}{2}\int_{-1}^1 \big(a(x)p^2 + q_x^2\big)\, dx,\
\frac{\delta {\operator H}}{\delta q} = -q_{xx},\
\frac{\delta {\operator H}}{\delta p} = a(x)p.$$
Note that (i) the boundary condition is
natural for ${\operator H}$, and (ii)
no boundary conditions are required for ${\operator D}$ to be skew-adjoint in $L^2$.
Since $\overline{\frac{\delta\H}{\delta u}}$ is computed with trial functions in ${\functionspace F}_1$, we
should not include $q_x(\pm1)=0$ in ${\functionspace F}_1$, for this would be to
enforce $(-q_{xx})_x=0$.
In \cite{mc-ro} we show that a spectrally accurate dual composition method is
obtained with
$ {\functionspace F}_0 = \{ q\in {\functionspace P}_{n+2}: q_x(\pm 1)=0 \} \times {\functionspace P}_n$ and
$ {\functionspace F}_1 = {\functionspace P}_n\times {\functionspace P}_n$.
\end{example}
\subsection*{4. Quadrature of Hamiltonians}
\label{sec:quadrature}
Computing $\nabla H$, where $H({\mathbf u}) = {\operator H}(u_j f_j)$, is not always possible in closed form.
We would like to approximate ${\operator H}$ itself by quadratures in real space.
However, even if the discrete $H$ and its gradient are spectrally accurate
approximations, they cannot always be used to construct spectrally
accurate Hamiltonian discretizations.
In a cardinal basis,
let ${\operator H}=\int h(u)dx$ and define the
quadrature Hamiltonian $H_q:= h( u_j) w_j = {\mathbf w}^{\mathrm T} h({\mathbf u})$
where $w_j = \int f_j dx$ are the quadrature weights.
Since $\nabla H_q = W h'({\mathbf u})$, where $W$ is the diagonal matrix of the
quadrature weights $w_j$, we have $\frac{\delta\H}{\delta u}\approx W^{-1}\nabla H_q$. Unfortunately,
$DW^{-1}\nabla H_q$ is not a skew-gradient system, while
$D S^{-1} \nabla H_q$ is skew-gradient, but is not an accurate approximation.
$D W^{-1} \nabla H_q$ can only be a skew-gradient
system if $DW^{-1}$ is antisymmetric, which occurs in three general cases.
(i) On a constant grid, $W$ is a multiple of the identity, so
if $D$ is antisymmetric, $D W^{-1}$ is too.
(ii) On an arbitrary grid with $D=\left(
\begin{smallmatrix}
0 & I \\
-I & 0\\
\end{smallmatrix}\right)$,
$DW^{-1}$ is antisymmetric.
(iii) On a Legendre grid with ${\functionspace F}_0={\functionspace F}_1$,
$S=W$, and $D W^{-1} = W^{-1} A W^{-1}$ is antisymmetric.
The required compatibility between
$D$ and $W$ remains an intriguing and frustrating obstacle to the
systematic construction of conservative discretizations of strongly
nonlinear PDEs.
\section{Introduction}
Researchers studying
the amenability of Thompson's group $F$ will be familiar with a distrust of experimental methods applied to this problem.
Part of this scepticism stems from the fact that (if it is amenable) $F$ is known to have a very quickly growing \emph{F\o lner function} \cite{Moore-Folner}.
However, experimental algorithms investigating amenability are rarely based on F\o lner's criteria directly, and
to date
no mechanism has been identified in the literature by which a quickly growing F\o lner function could interfere with a given experimental method.
In this paper we identify such a mechanism for a recent algorithm proposed by the first author, A. Rechnitzer, and E.
J. Janse van Rensburg \cite{ERR}, which was designed to experimentally detect amenability via the Grigorchuk-Cohen
characterisation in terms
of the cogrowth function. We will refer to this as the ERR algorithm in the sequel.
We show that, in the ERR algorithm, estimates of the asymptotic cogrowth rate
are compromised by sub-dominant behaviour in the reduced-cogrowth function.
However,
even though sub-dominant behaviour in the cogrowth function may interfere with estimates of the asymptotic growth rate, the ERR algorithm can still be used to estimate other properties of the cogrowth function to high levels of accuracy.
In particular we are able to re-purpose the algorithm to quickly estimate initial values of the cogrowth function even for groups for which the determination of the asymptotic growth rate is not
possible (for example groups with unsolvable word problem).
The present work started out as an independent verification by the second author
of the experimental results in \cite{ERR}, as part of his PhD research.
More details can be found in \cite{CamPhD}.
The article is organised as follows.
In Section~\ref{sec:prelim} we give the necessary background on amenability, random walks and cogrowth, followed by a summary of previous experimental work on the amenability of $F$. In Section~\ref{sec:R} a function quantifying
the sub-dominant properties of the reduced-cogrowth function is defined.
In Section~\ref{sec:ERRsection} the ERR algorithm is summarised, followed by an analysis of two types of pathological behaviour in Section~\ref{sec:pathological_behaviour}. The first of these is easily handled, while the second is shown to depend on sub-dominant terms in the reduced-cogrowth function.
In Section~\ref{sec:appropriation} the ERR method is modified to provide estimates of initial cogrowth values. Using this, the first 2000 terms of the cogrowth function of Thompson's group $F$ are estimated.
\section{Preliminaries}\label{sec:prelim}
We begin with a definition of terms and a quick survey of experimental work done on estimating amenability.
\subsection{Characterisations of amenability}
The following characterisation of amenability is due to Grigorchuk \cite{grigorchuk1980Cogrowth} and Cohen \cite{cohen1982cogrowth}. A shorter proof of the equivalence of this criteria with amenability was provided by Szwarc \cite{Szwarc_on_grig_cohen}.
\begin{defn}
\label{defn:amenabilityCogrowth}
Let $G$ be a finitely generated non-free group with symmetric generating set $S$.
Let $c_n$ denote the number of freely reduced words of length $n$
over $S$
which are equal to the identity in $G$.
Then $G$ is amenable if and only if
$$\limsup_{n\rightarrow\infty} c_n^{1/n}=|S|-1.$$
Equivalently, let $d_n$ denote the number of words (reduced and unreduced) of length $n$ over $S$
which are equal to the identity. Then $G$ is amenable if and only if
$$\limsup_{n\rightarrow\infty} d_n^{1/n}=|S|.$$
The function $n\mapsto c_n$ is called the {\em reduced-cogrowth function} for $G$ with respect to $S$, and $n\mapsto d_n$ the {\em cogrowth function}.
\end{defn}
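As a sanity check of these definitions on an amenable example, the
cogrowth terms $d_n$ of $\Z^2$ with the standard symmetric generating
set can be computed by dynamic programming over walk endpoints (a sketch
in Python; for $\Z^2$ one has $d_{2k}=\binom{2k}{k}^2$, and
$d_n^{1/n}\to 4=|S|$ slowly):
\begin{verbatim}
def cogrowth_Z2(n_max):
    # d_n = number of words of length n over S = {a, A, b, B}
    # equal to the identity in Z^2, i.e. closed walks of length n
    # on the Cayley graph
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    counts = {(0, 0): 1}       # walks of length 0, by endpoint
    d = [1]
    for _ in range(n_max):
        new = {}
        for (x, y), c in counts.items():
            for dx, dy in steps:
                p = (x + dx, y + dy)
                new[p] = new.get(p, 0) + c
        counts = new
        d.append(counts.get((0, 0), 0))
    return d

d = cogrowth_Z2(20)
print(d[:7])               # [1, 0, 4, 0, 36, 0, 400]
print(d[20] ** (1 / 20))   # about 3.36, creeping towards |S| = 4
\end{verbatim}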
Kesten's criteria for amenability is given in terms of the probability of a random walk on the group returning to
its starting point.
\begin{defn}\label{defn:amenabilityKesten}
Let $G$ be a finitely generated group, and let $\mu$ be a symmetric
measure on $G$. The random walk motivated by $\mu$
is a Markov chain on the group starting at the identity where the probability of moving from $x$ to $y$ is $\mu(x^{-1}y)$.
Note the distribution after $n$ steps is given by the $n$-fold
convolution power of $\mu$, which we denote as $\mu_n$. That is, $\mu_n(g)$ is the probability that an $n$-step walk starting at $e$ ends at $g$.
By Kesten's criterion \cite{Kesten} a group is amenable
if and only if $$\limsup_{n\rightarrow\infty} (\mu_n(e))^{1/n}=1.$$
\end{defn}
Pittet and Saloff-Coste proved that the asymptotic
decay rate of the probability of return function is independent
of the measure chosen, up to the usual equivalence \cite{stabilityRandomWalk}.
For finitely generated groups we can choose the
random walk motivated by the uniform probability measure on a finite generating set. This random walk is called a \emph{simple
random walk} and corresponds exactly with a random walk on the Cayley graph.
For this measure the probability of return is given by
\begin{equation}\label{eqn:mu-d}
\mu_n(e) =\frac{d_n}{|S|^n},\end{equation}
where the (reduced and non-reduced) cogrowth terms $d_n$ are calculated with
respect to the support of the measure.
Thus the cogrowth
function arises from a special case of return probabilities.
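As a quick sanity check of Equation~\ref{eqn:mu-d} (a standard computation which we include for orientation): for $\Z$ with $S=\{a,a^{-1}\}$, a word of even length $n$ equals the identity precisely when half its letters are $a$, so $d_n=\binom{n}{n/2}$ and $\mu_n(e)=\binom{n}{n/2}/2^n$. The $n$-th root tends to $1$, recovering the amenability of $\Z$; numerically, in python:
\begin{verbatim}
from math import comb

# mu_n(e) = d_n / |S|^n for Z with S = {a, a^{-1}}, where d_n = C(n, n/2)
for n in [10, 100, 1000, 10000]:
    mu = comb(n, n // 2) / 2**n
    print(n, mu ** (1 / n))   # tends to 1, so Z is amenable
\end{verbatim}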
F\o lner's characterisation of amenability \cite{Folner} can be phrased in several
ways. Here we give the definition for finitely generated
groups.
\begin{defn}
\label{defn:amenabilityFolner}
Let $G$ be a group with finite generating set $S$. For each
finite subset $F\subseteq G$, we denote by $|F|$ the number of
elements in $F$. The {\em boundary} of a finite set $F$ is defined to be
$$\partial F=\lbrace
g\in G\;:\;g\notin F, gs\in F \text{ for some }s\in S
\rbrace.$$
A finitely generated group $G$ is amenable if and only if there exists
a sequence of finite subsets $F_n$ such that
$$
\lim_{n\rightarrow\infty} \frac{\vert \partial F_n \vert}{\vert F_n\vert}
=0.$$
\end{defn}
Vershik \cite{Vershik-folner-function} defined the following function as a way to quantify how much of the Cayley graph must be considered before sets with a given isoperimetric profile can be found.
\begin{defn}
The F\o lner function of a group is
$$f(n)=\min\left\lbrace |F|\;:\;\frac{|\partial F|}{|F|}<\frac{1}{n} \right\rbrace.$$
\end{defn}
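For instance (a standard computation, included here for orientation), for $\Z$ with $S=\{a,a^{-1}\}$ and $F=\{1,\dots,L\}$ we have $\partial F=\{0,L+1\}$, so $|\partial F|/|F|=2/L<1/n$ exactly when $L>2n$; since intervals minimise the boundary, $f(n)=2n+1$.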
Significant literature exists on F\o lner functions. It is known that
there exist finitely
presented amenable groups with F\o lner functions
growing faster than $n^{n^n}$
(\cite{KrophollerMartino} Corollary~6.3)
and finitely generated groups (iterated wreath product of $k$ copies of $\Z$) with F\o lner functions
growing faster than $\displaystyle n^{n^{\iddots}}$ of height $k$ for arbitrary $k$ \cite{ershler2003isoperimetric}.
\subsection{Experimental work on the amenability of $F$}
Richard Thompson's group $F$ is the group with
presentation
\begin{equation}
\langle a,b \mid [ab^{-1},a^{-1}ba],[ab^{-1},a^{-2}ba^2] \rangle
\label{eqn:Fpresentation}
\end{equation}
where $[x,y]=xyx^{-1}y^{-1}$ denotes the commutator of two elements. See for example \cite{CFP} for a more detailed introduction to this group.
Whether or not $F$ is amenable has attracted a large amount of interest, and has so far evaded many different attempts at a proof of both positive and negative answers.
The following is a short summary of experimental work previously done on Thompson's group $F$.
\begin{itemize}
\item[\cite{ComputationalExplorationsF}]
Burillo, Cleary and Wiest 2007.
The authors randomly choose words and reduce them to a normal form to test if they represent the identity element. From this they estimate the proportion of words of length $n$ equal to the identity, as a way to compute the asymptotic growth rate of the cogrowth function.
\item[\cite{Arzhantseva}]
Arzhantseva, Guba, Lustig, and Pr{\'e}aux 2008.
The authors study the {\em density} or least upper bound for the average vertex degree of any finite subgraph of the Cayley graph; an $m$-generated group is amenable if and only if the density of the corresponding Cayley graph is $2m$ (considering inverse edges as distinct). A computer program is run and data is collected on a range of amenable and non-amenable groups. They find a finite subset
in $F$ with density $2.89577$ with respect to the $2$ generator presentation above. (To be amenable one would need to find sets whose density approaches $4$).
Subsequent theoretical work of Belk and Brown gives sets with density approaching $3.5$ \cite{BelkBrown}.
\item[\cite{ElderCogrowthofThompsons}]
Elder, Rechnitzer and Wong 2012.
Lower bounds on the cogrowth rates of various groups are obtained by computing the dominant eigenvalue of the adjacency matrix of truncated Cayley graphs. These bounds are extrapolated to estimate the cogrowth rate.
As a byproduct the first 22 coefficients of the cogrowth series are computed exactly.
\item[\cite{Haagerup}]
Haagerup, Haagerup, and Ramirez-Solano 2015.
Precise lower bounds of certain norms of elements in the group ring of $F$ are computed, and
coefficients of the first 48 terms of the cogrowth series are computed exactly.
\item[\cite{ERR}]
Elder, Rechnitzer and van Rensburg 2015.
The {\em Metropolis Monte Carlo} method from statistical mechanics is adapted to estimate the asymptotic growth rate of the cogrowth function by running random walks on the set of all trivial words in a group. The results obtained for Thompson's group $F$ suggest it to be non-amenable.
We describe their method in more detail in Section~\ref{sec:ERRsection} below.
\end{itemize}
Justin Moore \cite{Moore-Folner} (2013) has shown that if $F$ were amenable then its F\o lner function would increase
faster than a tower of $n-1$ twos,
$$2^{2^{2^{\iddots}}}.$$
This result has been proposed as an obstruction to all computational methods for approximating amenability; a computationally infeasibly large portion of the Cayley graph must be considered before sets with small boundaries can be found.
However, in all but one of the experimental algorithms listed above computing F\o lner sets was not the principal aim.
In order to understand how a bad F\o lner function affects the performance of these methods, we need to understand the connection between convergence properties of the respective limits in the various characterisations of amenability.
\section{Quantifying sub-dominant cogrowth behaviour}
\label{sec:R}
The F\o lner function
quantifies the rate of convergence of the limit in Definition~\ref{defn:amenabilityFolner}. We consider the following definitions as an attempt to quantify the rate of convergence of the limits in Definition~\ref{defn:amenabilityCogrowth}.
\begin{defn}\label{defn:R}
Let $G$ be a finitely generated group with symmetric generating set
$S$.
Let
$c_n$ be the number of all reduced
trivial words of length $n$ and let $C=\limsup c_n^{1/n}.$
Define
$$\RR(n)=\min
\left\lbrace k \;:\;\frac{c_{2k+2}}{c_{2k}}>C^2-\frac{1}{n}
\right\rbrace$$
\end{defn}
Definition \ref{defn:R} uses only even word lengths (and hence $C^2$ instead of $C$). This is necessary because group presentations with only even length relators have no odd length trivial words.
For this paper we will only consider the function $\RR$ for amenable groups, in which case $C=|S|-1$ except when the group is free (infinite cyclic).
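When initial cogrowth terms are available, Definition \ref{defn:R} can be evaluated directly. A minimal python helper (our own illustration; it assumes the terms are supplied as a list \texttt{c} of values $c_0,c_1,\dots$ with $c_{2k}>0$ for the indices inspected):
\begin{verbatim}
def RR(n, c, C):
    # least k with c_{2k+2}/c_{2k} > C^2 - 1/n  (Definition defn:R)
    k = 0
    while c[2*k + 2] / c[2*k] <= C**2 - 1/n:
        k += 1
    return k
\end{verbatim}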
A similar definition may be made for the cogrowth function.
\begin{defn}\label{defn:Rprime}
For $G$ a finitely generated group with symmetric generating set
$S$ we may define
$$\RR'(n)=\min
\left\lbrace k \;:\;\frac{d_{2k+2}}{d_{2k}}>D^2-\frac{1}{n}
\right\rbrace$$
where
$d_n$ is the number of all (reduced and non-reduced)
trivial words of length $n$ and $D=\limsup d_n^{1/n}.$
\end{defn}
Literature already exists studying the convergence properties of return probabilities, and we suspect that
the function $\RR'$ is a reformulation of the {\em $L^2$-isoperimetric
function} \cite{BendikovPittetSauer}.
\begin{example}
For the trivial group with some finite symmetric generating set $S$ we have $c_0=1, c_k=|S|(|S|-1)^{k-1}$ for $k\geq 1$ so
$\frac{c_{2k+2}}{c_{2k}}\geq (|S|-1)^2$ and $\RR(n)=0$.
Similarly, since $d_k=|S|^k$, we have
$\RR'(n)=0$ as well.
\end{example}
Aside from the trivial group, it is usually easier to compute $\RR'$ (or its asymptotics) than it is to obtain $\RR$.
For this reason we first consider $\RR'$ functions for various groups, and then prove that for infinite, amenable, non-free groups $\RR'$ and $\RR$ have the same asymptotic behaviour.
\begin{example}
For any finite group
the rate of growth of $d_n$ is the dominant eigenvalue of the adjacency matrix of the
Cayley graph, and some simple analysis shows that $\RR'(n)$ is at most logarithmic in $n$.
\end{example}
Define $f\precsim g$ if there exist constants $a, b > 0$, such that for $x$ large enough, $f(x) \leq ag(bx)$. Then $f\sim g$ ($f$ and $g$ are asymptotic) if $f\precsim g$ and $g\precsim f$.
Table \ref{tab:differentRn} provides a sample of amenable groups for which the asymptotics of $\RR'(n)$, the F\o lner function and probabilities of return are known \cite{ershler2003isoperimetric,randomWalkWreathProducts,PittetSCsolvable2003}.
\begin{table}
\begin{center}
\renewcommand{\arraystretch}{2.5}
\begin{tabular}
{
|>{\centering\arraybackslash}p{2.7cm}
|>{\centering\arraybackslash}p{2.7cm}
|>{\centering\arraybackslash}p{3.3cm}
|>{\centering\arraybackslash}p{2.7cm}
|}
\hline
Example & $\mathcal{F}(n)$ & $\mu_n (e)$ & $\mathcal{R}'(n)$ \\
\hline \hline
trivial & $\sim$ constant & $\sim$ constant & $\sim$ constant \\
\hline
$\Z^k$ & $\sim n^{k}$ & $\sim n^{-k/2}$ & $\sim n$ \\
\hline
$BS(1,N)$ & $\sim e^n$ & $\sim e^{-n^{1/3}}$
& $\sim n^{3/2}$ \\
\hline
$\Z\wr\Z$ & $n^n$ & $\sim e^{-n^{1/3}(\ln n)^{2/3}}$ & $\sim\ln(n) n^{3/2}$\\
\hline
$\Z\wr\Z\wr\dots \wr\Z$ $(d-1)$-fold wreath product & $n^{n^{n^{\iddots^n}}}$ (tower of $d-1$ $n$'s) & $\sim e^{-n^{\frac{d}{d+2}}(\ln n)^{\frac{2}{d+2}}}$ & $\sim\ln(n) n^{(d+2)/2}$\\
\hline
\end{tabular}
\end{center}
\caption{Comparing asymptotics of the probabilities of return, the F\o lner function $\mathcal{F}$, and $\RR'$ for various
groups.
\label{tab:differentRn}}
\end{table}
The results for the asymptotics of $\RR'(n)$ were derived directly from the known asymptotics for $\mu_n$. A discussion of these methods will appear in \cite{CamPhD}. In practice however it proved quicker to guess the asymptotics and then refine using the following method.
\begin{prop}\label{prop:ProvingRn}
The asymptotic results for $\RR'(n)$ in Table \ref{tab:differentRn} are correct.
\end{prop}
\begin{proof}
For a given group suppose
$\mu_n(e)\sim g(n)$ where $g$ is a continuous real valued function, as in Table \ref{tab:differentRn}.
Then $d_n\sim |S|^n g(n)$.
Finding $\RR'(n)$ requires solving the equation
\begin{equation}\frac{d_{2k+2}}{d_{2k}}=|S|^2-\frac{1}{n}
\label{eqn:Rnkn}
\end{equation} for $k=k(n)$.
This is equivalent to solving
$$1=n\left(
|S|^2-\frac{d_{2k+2}}{d_{2k}}
\right)$$
for $k$.
Suppose $f(n)$ is a function where
\begin{equation}\label{eqn:methodRn}
L =\lim_{n\rightarrow\infty}
n
\left(
|S|^2-\frac{d_{2f(n)+2}}{d_{2f(n)} }
\right)
\end{equation}
exists and is non-zero.
If $L=1$ then
$$\left(
|S|^2-\frac{d_{2f(n)+2}}{d_{2f(n)} }
\right) \sim \frac{1}{n}$$
and so
$$\frac{d_{2f(n)+2}}{d_{2f(n)}} \sim |S|^2-\frac{1}{n}.$$
Then
$k(n)\sim f(n)$ satisfies Equation \ref{eqn:Rnkn}. Therefore $\RR'(n)$ is asymptotic to $f(n).$
If $L$ exists and is non-zero then
$$\left(
|S|^2-\frac{d_{2f(n)+2}}{d_{2f(n)} }
\right) \sim \frac{L}{n}.$$
Then $$\left(
|S|^2-\frac{d_{2f(Ln)+2}}{d_{2f(Ln)} }
\right) \sim \frac{L}{Ln}=\frac{1}{n}$$
and so
$\RR'(n)\sim f(L n)$.
The derivations of candidates for $f(n)$ in each case in Table \ref{tab:differentRn} are performed in \cite{CamPhD}. The results in the table do not include the constant $L$ since the probabilities of return used as input are only correct up to scaling.
We leave the calculation of Equation \ref{eqn:methodRn} for the results from
Table \ref{tab:differentRn} as an exercise.
\end{proof}
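As an elementary illustration of these asymptotics (our own computation): for $\Z$ with $S=\{a,a^{-1}\}$ we have $d_{2k}=\binom{2k}{k}$, so
$$\frac{d_{2k+2}}{d_{2k}}=\frac{(2k+2)(2k+1)}{(k+1)^2}=4-\frac{2}{k+1},$$
and $4-\frac{2}{k+1}>4-\frac{1}{n}$ first holds at $k=2n$. Thus $\RR'(n)=2n\sim n$, in agreement with the $\Z^k$ row of Table~\ref{tab:differentRn}.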
\subsection{Converting from cogrowth to reduced-cogrowth}
We now prove an equivalence between the sub-dominant behaviour of
the cogrowth and reduced-cogrowth functions. This allows us to borrow the previously listed results for $\RR'$ when discussing $\RR$ and the ERR method.
The dominant and sub-dominant cogrowth behaviour can be
analysed from the generating functions for these sequences.
\begin{defn}
Let $d_n$ denote the number of trivial words of length $n$ in
a finitely generated group. The \emph{cogrowth series} is
defined to be $$D(z)=\sum_{n=0}^\infty d_n z^n.$$
Let $c_n$ denote the number of reduced trivial words. Then
$$C(z)=\sum_{n=0}^\infty c_n z^n$$ is said to be the
\emph{reduced-cogrowth series}.
\end{defn}
$D$ and $C$ are the generating functions for $d_n$ and $c_n$ respectively, and are
related in the following way.
Let $|S|=2p$ be the size of a symmetric generating set.
Then from \cite{KouksovRationalCogrowth,cogrowthConvertWoess}
\begin{equation}
C(z)=\frac{1-z^2}{1+(2p-1)z^2}D\left(
\frac{z}{1+(2p-1)z^2}
\right)
\label{eqn:cFromD}
\end{equation}
and
\begin{equation}
D(z)=\frac{1-p+p\sqrt{1-4(2p-1)z^2}}{1-4p^2z^2}
C\left(
\frac{1-\sqrt{1-4(2p-1)z^2}}{2(2p-1)z}
\right).
\label{eqn:dFromC}
\end{equation}
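As a sanity check of Equation~\ref{eqn:dFromC} (our own verification): for the infinite cyclic group the empty word is the only reduced trivial word, so $C(z)=1$, and with $p=1$ the formula gives
$$D(z)=\frac{\sqrt{1-4z^2}}{1-4z^2}=(1-4z^2)^{-1/2}=\sum_{k\geq 0}\binom{2k}{k}z^{2k},$$
which indeed counts the words over $\{a,a^{-1}\}$ equal to the identity.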
The dominant and sub-dominant growth properties of the cogrowth functions may be analysed by considering the
singularities of these generating functions.
For a detailed study of the relationship between singularities
of generating functions
and sub-dominant behaviours of coefficients see \cite{flajolet2009analytic}.
We now outline an example of how the composition of functions (as in Equations~\ref{eqn:cFromD} and \ref{eqn:dFromC}) affects
the growth properties of the series coefficients.
\begin{example}\label{ex:CvsD}
Consider $$f(z)=\left(
1-\frac{z}{r}
\right)^{-p}.$$
Then (for positive $p$) $f(z)$ has a singularity at $z=r$, and this defines the
radius of convergence of $f(z)$ and the asymptotic
growth rate of the series coefficients of the
expansion of $f(z)$. It
also determines the principal sub-dominant term contributing
to the growth of the coefficients.
In this example, the coefficients will grow like $ n^{p-1}r^{-n}.$
We wish to investigate what happens to this growth behaviour
when we compose the function $f$ with a function $g$.
Consider $f(g(z))$ for some function $g$ for which $g(0)=0$.
The singularities of $g$ are inherited by $f(g(z))$; if $g$ is
analytic everywhere then the only singularities of $f(g(z))$
will occur when $g(z)=r$. In this case, the new radius of convergence
will be the minimum $|z|$ such that $g(z)=r$. Importantly, however, the principal sub-dominant growth term of the
series coefficients will remain polynomial of degree $p-1$.
A variation on this behaviour will occur if there is an
$r_0$ for which $g(z)$ is
analytic on the ball of radius $r_0$, and $g(z)=r$ for some
$z$ in this region. Again, when this occurs, the new radius of
convergence is obtained by solving $g(z)=r$ and the type
of the principal sub-dominant term in the growth of the
coefficients remains unchanged.
If there does not exist such an $r_0$, the principal singularity
of $g(z)$ will dominate the growth properties of the
coefficients.
\end{example}
\begin{prop}\label{prop:nonFreeCvsD}
Let $G$ be an {infinite} amenable group generated by $p$ elements and their inverses.
Then the principal sub-dominant terms contributing to the growth of $d_n$ and $c_n$ are asymptotically equivalent, except when the group is infinite cyclic.
\end{prop}
\begin{proof}
For an amenable group generated by $p$ elements and their inverses the radius of convergence for $D(z)$ is exactly
$1/2p$. This follows immediately from Definition \ref{defn:amenabilityCogrowth}.
Now from Equation~\ref{eqn:cFromD}, the reduced-cogrowth
series is obtained by composing the cogrowth series with
$$\varphi(z)=\frac{z}{1+(2p-1)z^2}$$ and then multiplying by
$$\psi(z)=\frac{1-z^2}{1+(2p-1)z^2}$$
(we write $\varphi$ and $\psi$ since $p$ already denotes the number of generator pairs).
Both of these functions are analytic inside the ball of radius
$1/\sqrt{2p-1}$.
Now
\begin{equation}\label{eq:alternateCogrowthProof}
\varphi\left(\frac{1}{2p-1}\right)=\frac{1}{2p},
\end{equation}
the singularity of $D(z)$. Hence $1/(2p-1)$ is a singularity of $D(\varphi(z))$, and hence of $C(z)$. Note that if the group is infinite cyclic, then $p=1$ and $1/(2p-1)$ and $1/\sqrt{2p-1}$ are equal. In this scenario the radius of convergence of $\varphi(z)$ is reached at the same moment that
$\varphi(z)$ reaches the radius of convergence of $D(z)$. This means that both $\varphi$ and $\psi$ contribute to the principal singularity, which explains why the reduced and non-reduced cogrowth functions
for the infinite cyclic group exhibit such different behaviour.
If $p>1$ then $1/(2p-1)$
lies inside the ball of radius $1/\sqrt{2p-1}$ (i.e., inside the region of convergence of $\varphi$ and $\psi$). Thus the singularity of $D$ is reached before $z$ approaches the singularities of $\varphi$ and $\psi$.
In this case the substitutions in Equation~\ref{eqn:cFromD} change the location of the principal singularity, but do not change the type of the singularity, or the form of the principal sub-dominant
term contributing to the growth of the series coefficients.
\end{proof}
\begin{cor}\label{cor:RvsRr}
Suppose $G$ is a finitely generated, infinite amenable group that is not the infinite cyclic group. Then $\RR$ is
asymptotically
equivalent to $\RR'$.
\end{cor}
\begin{rmk}
An alternate proof of the Grigorchuk/Cohen characterisation of amenability is easily constructed from an analysis of the singularities of $C(z)$ and $D(z)$. For example, Equation \ref{eq:alternateCogrowthProof} proves the first result from Definition \ref{defn:amenabilityCogrowth}.
This argument also picks up that the infinite cyclic group presents a special case. Though amenable, it satisfies $\limsup_{n\rightarrow\infty}c_n^{1/n}\neq |S|-1$.
For this group we have $\RR(n)=0$ while $\RR'(n)\sim n$.
\end{rmk}
\subsection{Sub-dominant behaviour in the cogrowth of $F$}
\label{sec:subDomInF}
The groups $BS(1,N)$ limit to $\Z\wr\Z$ in the space of marked groups. This implies that the growth of the function $\mathcal{R}'$ and hence $\RR$ for $BS(1,N)$ increases with $N$. This is consistent with Table \ref{tab:differentRn}, since these results do not include scaling constants. This leads to the following result.
\begin{prop}\label{prop:connectBStoThompsons}
If Thompson's group $F$ is amenable, its $\mathcal{R}$ function grows faster than the $\mathcal{R}$ function for any $BS(1,N)$. In particular, it is asymptotically super-polynomial.
\end{prop}
\begin{proof}
By the convergence of $BS(1,N)$ to $\Z\wr\Z$ in the space of marked groups we have that, for any $N$, the function $\RR'$ for $BS(1,N)$ grows slower than the corresponding function for $\Z\wr\Z$. In \cite{stabilityRandomWalk} it is proved that, for finitely generated groups, the probability of return cannot asymptotically exceed the probability of return of any finitely generated subgroup. This implies that, for finitely generated amenable groups, the $\RR'$ function of the group must grow faster than the $\RR'$ function of any finitely generated subgroup. Since there is a subgroup of $F$ isomorphic to
$\Z\wr\Z$, $\RR'(n)$ for $F$ must grow faster than $\RR'(n)$ for $\Z\wr\Z$ and hence $BS(1,N)$.
Since $F$ contains every finite-depth iterated wreath product of $\Z$ (\cite{GubaSapir} Corollary 20),
the probability of return for $F$ decays faster than
$$e^{-n^{\frac{d}{d+2}}(\ln n)^{\frac{2}{d+2}}}$$ for any $d$.
Taking the limit as $d$ approaches infinity of the corresponding values for $\RR'$ and then doing the conversion from $\RR'$ to $\RR$ gives the final result.
\end{proof}
Note that if $F$ is non-amenable, then even though it still contains these subgroups, they do not affect the $\RR'$ function. In this scenario it is still true that the return probability for $F$ decays faster than that of the iterated wreath product, because $F$ would have exponentially decaying return probability. For non-amenable groups the return probability does not identify the principal sub-dominant term in $d_n$, and hence does not correlate directly with $\RR'$.
\section{The ERR algorithm}
\label{sec:ERRsection}
We start by summarising the original work
by the first author, Rechnitzer and van Rensburg. Only the details
directly pertinent to the present paper are discussed here, for a more
detailed analysis of the random walk
algorithm and a derivation
of the stationary distribution we refer the
reader to \cite{ERR}. For the sake of brevity the
random walk performed by the algorithm will be referred to as the ERR random walk.
Recall that a group presentation, denoted $\langle S \mid R \rangle$, consists
of a set $S$ of formal symbols (the generators) and a set $R$ of words written
in $S^{\pm 1}$ (the relators) and corresponds to the quotient
of the free group on $S$ by the normal closure of the
relators $R$. In our paper, as in \cite{ERR}, all groups
will be finitely presented: both $S$ and $R$ will be finite.
Furthermore, the implementation of the algorithm assumes both
$S$ and $R$ to be symmetric, that is, $S=S^{-1}$ and $R=R^{-1}$.
In addition, for convenience $R$ is enlarged to be closed under cyclic permutation.
Recall that
$c_n$ counts the
number of reduced words in $S$ of length $n$ which represent
the identity in the group (that is, belong to the normal
closure of $R$ in the free group).
\subsection{The ERR random walk}
The ERR random walk is not a random walk on the Cayley graph of a group, but instead a random walk on
the set of trivial words for the group presentation.
This makes the algorithm extremely easy to implement, since it does not require an easily computable normal
form or even a solution to the word problem.
The walk begins at the empty word, and constructs
new
trivial words from the current trivial word
using one of two moves:
\begin{itemize}
\item (conjugation by $x\in S$).
In this move an element is chosen from
$S$ according to a predetermined probability distribution.
The current word is conjugated
by the chosen generator and then freely reduced to produce
the new candidate word.
\item (insertion of a relator). In this move a relator is
chosen from $R$ according to a predetermined
distribution and inserted into the current word at a position
chosen uniformly at random. In order to maintain the detailed
balance criteria (from which the stationary distribution
is derived)
it is necessary to allow only those insertions which
can be immediately reversed by inserting the inverse of the
relator at the same position. To this end a notion of \emph{left insertion} is introduced;
after relators are inserted free reduction is done on only the left hand side of the relator.
If after this the word is not freely reduced the move is rejected.
\end{itemize}
Transition probabilities are defined which determine whether
or not the trivial word created with these moves is accepted as the new state. These probabilities involve parameters $\alpha\in\mathbb{R}$ and
$\beta\in (0,1)$ which may be adjusted
to control the distribution of the walk.
Let the current word be $w$ and the candidate word be $w'$.
\begin{itemize}
\item If $w'$ was obtained from $w$ via a conjugation it is accepted as the new
current state with probability
$$\min \left\lbrace 1,
\left(\frac{\left\vert w'\right\vert+1}
{\left\vert w\right\vert+1}\right)^{1+\alpha}
\beta^{\left\vert w'\right\vert-\left\vert w\right\vert}
\right\rbrace.$$
\item If $w'$ was obtained from $w$ via an insertion it is accepted as the new
state with probability
$$\min \left\lbrace 1,
\left(\frac{\left\vert w'\right\vert+1}
{\left\vert w\right\vert+1}\right)^{\alpha}
\beta^{\left\vert w'\right\vert-\left\vert w\right\vert}
\right\rbrace.$$
If $w'$ is not accepted the new
state remains as $w$.
\end{itemize}
These probabilities are chosen so that the distribution
on the set of all trivial words given by
$$\pi\left(u\right)=\frac{\left(\left|u\right|+1\right)^{1+\alpha}
\beta^{\left|u\right|}}
{Z},$$
(where $Z$ is a normalizing constant) can be proved to be the unique stationary
distribution of the Markov chain, and the limiting
distribution of the random walk.
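For concreteness, the core of the walk can be sketched in a few lines of python (our own illustrative code, in the spirit of the implementations described in Remark~\ref{rmk:implementation_details} below; the uniform choice between the two move types, and among generators, relators and positions, is a simplifying assumption):
\begin{verbatim}
import random

def free_reduce(w):
    # freely reduce a word given as a list of letters; inverses are
    # encoded by case, so 'a' and 'A' are mutually inverse
    out = []
    for x in w:
        if out and out[-1] == x.swapcase():
            out.pop()
        else:
            out.append(x)
    return out

def err_step(w, S, R, alpha, beta):
    # one attempted move of the ERR walk; S is a symmetric generating
    # set and R the symmetrised relators (closed under inversion and
    # cyclic permutation), each given as a list of letters
    if random.random() < 0.5:
        # conjugation move, acceptance exponent 1 + alpha
        x = random.choice(S)
        w2 = free_reduce([x.swapcase()] + w + [x])
        exponent = 1 + alpha
    else:
        # insertion move, acceptance exponent alpha: insert a relator
        # at a uniform position, freely reducing only on its left
        r = random.choice(R)
        i = random.randrange(len(w) + 1)
        w2 = free_reduce(w[:i] + r) + w[i:]
        if any(w2[j] == w2[j+1].swapcase() for j in range(len(w2) - 1)):
            return w   # result not freely reduced: reject the move
        exponent = alpha
    ratio = ((len(w2) + 1) / (len(w) + 1))**exponent \
            * beta**(len(w2) - len(w))
    return w2 if random.random() < min(1.0, ratio) else w
\end{verbatim}
For example, for $\Z^2=\langle a,b\mid aba^{-1}b^{-1}\rangle$ one would take \texttt{S = list("abAB")}, let \texttt{R} consist of the cyclic permutations of \texttt{"abAB"} together with their inverses, and iterate \texttt{err\_step} starting from the empty word.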
The following result is then given.
\begin{prop}[\cite{ERR}]
As $\beta$ approaches $$\beta_c = \frac1{\limsup_{n\rightarrow\infty} (c_n)^{1/n}}$$
the expected value
of the word lengths visited approaches infinity.
\end{prop}
This result leads to the following method for estimating the value of $\beta_c$. For each presentation,
random walks are run with different values of $\beta$. Average
word length is plotted against $\beta$. The results obtained for Thompson's group $F$ are reproduced in Figure \ref{fig:ERRpaperThompsons}. The values of $\beta$ at which the data points diverge give an indication of
$\beta_c$, and hence of the amenability or otherwise of the group.
\begin{figure}
\includegraphics[width=110mm]{graph_thomp1.pdf}
\caption{The results from \cite{ERR} of the ERR algorithm applied to the standard presentation of Thompson's group $F$. Each data point plots the average word length of an ERR random walk against the parameter $\beta$ used.
\label{fig:ERRpaperThompsons}
}
\end{figure}
Random walks were run on presentations for a selection of amenable and non-amenable groups, including Baumslag-Solitar groups, some free product examples whose cogrowth series are known \cite{KouksovFreeProduct}, the genus 2 hyperbolic surface group, a finitely presented group related to the basilica group, and Thompson's group $F$.
The data in Figure \ref{fig:ERRpaperThompsons} appears to show fairly convincingly that the location of $\beta_c$
is a long way from the value of $\frac13$ expected were the group amenable.
It is noted in \cite{ERR}
that a long
random walk may be split into shorter segments,
and the variation in average word lengths of the segments gives an
estimation of the error in the estimated expected word length.
\begin{rmk}\label{rmk:implementation_details}
In the original work reported in \cite{ERR}, the algorithm was coded in C++, words were stored as linked lists,
the GNU Scientific Library was used to generate pseudo-random numbers, and {\em parallel tempering} was
used to speed up the convergence
of the random walk. For independent verification the second author
coded the algorithm in python, kept words as strings, used the python package \emph{random}, and no
tempering was used. Results obtained were consistent with those in \cite{ERR}. The experimental analysis
and modifications described in this paper use the python version of the second author.
\end{rmk}
\section{Investigating Pathological Behaviour\label{sec:pathological_behaviour}}
The theory underpinning the ERR random walk is
complete --- the random walk is certain to converge to the stationary
distribution. This does not preclude, however, convergence
happening at a computationally
undetectable rate.
Since there are finitely presented groups with unsolvable word problem, there is no chance of deriving bounds on the
rates of convergence of the walk in
any generality.
In the process of independently verifying the results in \cite{ERR}, however, we were able to identify two properties of
group presentations which appear to slow the rate of convergence.
The first of these is unconnected with the F\o lner function, and
does not pose any problem to the implementation of the ERR algorithm to Thompson's group $F$.
It does, however, refute the claim in (\cite{ERR} Section 3.7) that the method can be successfully applied to infinite presentations.
\subsection{Walking on the wrong group}\label{subsec:wrong_group}
It is easy to see from the probabilistic selection criteria
used by the ERR random walk that moves which increase the word
length by a large amount are rejected with high probability.
This poses a problem for group presentations containing long relators, since insertion moves that attempt to insert a long relator will be
accepted much less often than moves which attempt to insert a shorter relator.
The following example makes this explicit.
\begin{lem}
All presentations of the form
$$\left\langle a,b\mid abab^{-1}a^{-1}b^{-1},\;a^n b^{-n-1}\right\rangle$$
describe the trivial group.
\end{lem}
\begin{proof}
Since $a^n=b^{n+1}$ we have $a^nba= b^{n+1}ba=bb^{n+1}a=ba^{n+1}=bab^{n+1}.$
Since $aba=bab$ we have
$a^iba=a^{i-1}bab$ so $a^nba=a^{n-1}bab=a^{n-2}bab^2=\dots =bab^{n}$.
Putting these results together gives $bab^n=bab^{n+1}$ and hence $b$ is trivial. The result follows.
\end{proof}
By increasing $n$ we can make the second relator arbitrarily large
without affecting the group represented by the presentation, or the
group elements represented by the generators. This implies that
ERR random walks for each of these presentations should converge
to the same stationary distribution.
Changing the presentation, however, does change the number of
steps in the ERR random walk needed to reach certain trivial words (such as the word `$a$').
ERR random walks were performed on these presentations for
$n= 1, 2, \dots,19$. As well as recording the average
word length of words visited, the number of \emph{accepted}
insertions of each relator was recorded.
\begin{table}
\begin{center}
\begin{tabular}
{|>{\centering\arraybackslash}p{1cm}|>{\centering\arraybackslash}p{2cm}|>
{\centering\arraybackslash}p{3cm}|>{\centering\arraybackslash}p{3cm}|}
\hline
$n$ & number of steps & number of accepted insertions of the short
relator & number of accepted insertions of the long relator \\
\hline
1 & $2.0\times 10^8$ & $2977228$ & $7022772$ \\
\hline
2 & $3.6\times 10^8$ & $4420185$ & $5579815$ \\
\hline
3 & $6.1\times 10^8$ & $6323376$ & $3676624$ \\
\hline
4 & $9.0\times 10^8$ & $8016495$ & $1983505$ \\
\hline
5 & $1.2\times 10^9$ & $9088706$ & $911294$ \\
\hline
6 & $1.4\times 10^9$ & $9621402$ & $378598$ \\
\hline
7 & $1.5\times 10^9$ & $9850251$ & $149749$ \\
\hline
8 & $1.7\times 10^9$ & $9943619$ & $56381$ \\
\hline
9 & $1.8\times 10^9$ & $9977803$ & $22197$ \\
\hline
10 & $1.9\times 10^9$ & $9991680$ & $8320$ \\
\hline
11 & $2.1\times 10^9$ & $9997122$ & $2878$ \\
\hline
12 & $2.2\times 10^9$ & $9998720$ & $1280$ \\
\hline
13 & $2.2\times 10^9$ & $9999585$ & $415$ \\
\hline
14 & $2.3\times 10^9$ & $9999938$ & $62$ \\
\hline
15 & $2.4\times 10^9$ & $10000000$ & $0$ \\
\hline
16 & $2.6\times 10^9$ & $10000000$ & $0$ \\
\hline
17 & $2.7\times 10^9$ & $10000000$ & $0$ \\
\hline
18 & $2.8\times 10^9$ & $10000000$ & $0$ \\
\hline
19 & $2.9\times 10^9$ & $10000000$ & $0$ \\
\hline
\end{tabular}
\end{center}
\caption{\label{tab:trivial_relator_acceptences} The ERR
algorithm applied to the trivial group with presentation
$\left\langle a,b \mid aba=bab,\;a^n=b^{n+1} \right\rangle$ for various $n$. As $n$ increases, the longer relator is successfully
inserted less frequently.}
\end{table}
Table \ref{tab:trivial_relator_acceptences} shows the sharp decline in the number
of accepted insertions of the second relator as $n$ increases.
Indeed, for $n>14$ there were no instances in which the longer relator
was successfully inserted. Unsurprisingly, walks for large $n$ did not converge to the same distribution as those where $n$ was small, and for large $n$ the data did not accurately predict the asymptotic growth rate of the cogrowth function. For these $n$ the ERR random walk was actually
taking place on $\langle a,b\mid abab^{-1}a^{-1}b^{-1} \rangle$, a presentation of the 3-strand braid group, which is non-amenable.
Note that, given enough time, the longer relator would be
successfully sampled, and that an infinite random walk is still
guaranteed to converge to the theoretical distribution for the trivial group.
Such convergence,
however, may take a computationally infeasible amount of time.
\begin{claim}
The presence of long relators in the input presentation slows
the rate at which an ERR random walk converges to the stationary distribution. Therefore, the ERR method cannot be reliably extended to accept infinite
presentations.
\end{claim}
This result is not surprising.
In \cite{BenliGrigHarpe} an infinitely presented
amenable group is given for which every truncated presentation
(removing all but a finite number of relators) defines a non-amenable group.
The ERR method could not expect to succeed on this group even if
long relators were sampled often; since the ERR random walk can only be run for a finite time there can
never be a representative sampling of an infinite set of relators, so ERR would incorrectly conclude this group is non-amenable.
The pathological presentations of the trivial group
studied here form
a sequence of presentations for amenable (trivial) groups which
approach a non-amenable group in the space of marked groups.
The failure of the ERR method to predict amenability for
these groups suggests that one does not need
particularly elaborate or large presentations to produce pathological behaviour.
However, we remark that this behaviour is easily monitored. In addition to counting
the number of attempted moves of the walk, one should record the relative number of successful insertions of each relator.
In the case of Thompson's group $F$ the two relators have similar lengths, and in our experiments both were sampled with comparable frequency.
Further analysis of this phenomenon appears in \cite{CamPhD}.
\subsection{Sub-dominant behaviour in cogrowth.\label{subsec:subdom}}
Recall that the solvable Baumslag-Solitar groups $BS(1,n)=\langle a,t\mid tat^{-1}a^{-n}\rangle$
are the only two generator, single-relator, amenable groups \cite{OneRelatorAmenable}; for each of these groups $\beta_c=1/3$.
In \cite{ERR} walks were run on $BS(1,1)=\Z^2,\;BS(1,2)$ and $BS(1,3)$ and for these groups
the random walk behaved as predicted with divergence occurring at the moment when $\beta$ exceeded $\beta_c$.
It may be surprising then to see the output of some ERR walks run on $BS(1,7)$, shown in Figure \ref{fig:ERR-BS17}.
\begin{figure}
\includegraphics[width=110mm]{ERRgraphBS17.pdf}
\caption{A graph (as in \cite{ERR}), of average word length of ERR random walks plotted against the parameter $\beta$. The orange points come from walks where $\alpha=3$, and the blue points come from walks where $\alpha=0$. The vertical line at $1/3$ marks the expected asymptote.
\label{fig:ERR-BS17}
}
\end{figure}
It is clear that, for this group, the divergence for $\beta>\beta_c$ predicted by the theory is not occurring. This is further seen in Figure \ref{fig:bs17-distribution}, which shows the progression over time of one of the random walks used to generate Figure \ref{fig:ERR-BS17}.
\begin{figure}
\includegraphics[width=110mm]{BS1N_over_time.pdf}
\caption{\label{fig:bs17-distribution}
The distribution of ERR random walks on $BS(1,7)$ with $\alpha=3$ and $\beta =0.34$:
word length is plotted against the number of steps taken, for ten walks overlaid on top of each other.
Each dot represents the average word length over 10000 accepted relator insertions.
None of the walks diverged at this $\beta$ value, even though the group is amenable.
}
\end{figure}
The results in Figure \ref{fig:bs17-distribution} show the
word lengths visited for ten ERR random walks (superimposed) performed on $BS(1,7)$, with $\alpha=3$ and $\beta=0.34$.
Since the group has only a single relator, which was successfully inserted many times (each dot in the figure represents 10000 accepted insertions), this is not an instance of the behaviour identified in Subsection~\ref{subsec:wrong_group}.
The ERR method relies on the divergence of the average word length to identify $\beta_c$, so application of the method in this case will not accurately identify the amenability of
$BS(1,7)$.
Divergence of the ERR random walk (when $\beta>\beta_c$) relies on the abundance of long trivial words. For most presentations, at all points in an ERR walk there are always more moves which lengthen the word than shorten it, but the probabilistic selection criterion ensures balance. More specifically, the parameter $\beta$ imposes a probabilistic barrier which increases exponentially with the attempted increase in word length.
When $\beta >\beta_c$ this exponential cap is insufficient, and the word length diverges.
Recall that the function $\RR(n)$ quantifies the word length that must be reached before the local ratio $c_{2k+2}/c_{2k}$ of the reduced-cogrowth function comes within $1/n$ of its asymptotic value $C^2$.
The results in Table \ref{tab:differentRn} imply that, for many groups, large word lengths must be reached before the asymptotic growth rate is reflected by a local abundance of longer trivial words.
We have noted in Section \ref{sec:subDomInF} that the
convergence properties of $BS(1,N)$ in the space of marked groups require $\RR(n)$ to grow more quickly as $N$ increases. We now show that the growth rate of
$\RR(n)$ is sufficient to cause the pathological
behaviour noted above.
To this end we postulate a hypothetical cogrowth function for which
we can explicitly identify and control $\RR(n)$.
\begin{example}
\label{ex:fictional_cogrowth}
Suppose that for some group on two generators and $q>0,\;p\in (0,1)$, the reduced-cogrowth function is known to be exactly $$c_n=3^{n-qn^p}.$$
Then $
\limsup_{n\rightarrow\infty} c_n^{1/n}
= 3$
and so the group is amenable.
It may easily be verified by the methods outlined in Proposition
\ref{prop:ProvingRn} that
$$\RR(n)=\left( 9\log(3)q p2^p n\right)^{\frac{1}{1-p}}.$$
Note that as $p$ approaches $1$, the exponent ${\frac{1}{1-p}}$ approaches infinity. This increases both the degree of the polynomial in $n$, and the coefficient $ \left(9\log(3)q p2^p\right)^{\frac{1}{1-p}}$.
Even though we do not know a group presentation with
precisely this cogrowth function,
by varying $p$ and $q$
this hypothetical example models the groups listed in Table~\ref{tab:differentRn}.
Figure \ref{fig:pathological1}
shows the effect of increasing the parameter $p$ on the ERR random walk
distribution. Note that this figure is not the output of any computer simulation; rather, it models the distributions for an ERR random walk on an amenable group with the hypothetical cogrowth function, for $\alpha=0,\;\beta=0.335$ and $q=1$.
\begin{figure}
\includegraphics[width=110mm]{pathologicalExample.png}
\caption{
\label{fig:pathological1}
Graphs of $c_n(n+1)\,0.335^n$ for the hypothetical cogrowth function $c_n=3^{n-n^p}$ and several values of $p$.
}
\end{figure}
Recall that for $\beta<\beta_c$ the theoretical distribution of word lengths visited by the ERR random walk is
$$\Pr(@n)=\frac{c_n(n+1)^{\alpha+1}\beta^n}{Z}$$ where $Z$ is a normalizing
constant.
For $\beta>\beta_c$ the distribution cannot be normalised. In this case the function $c_n(n+1)^{\alpha+1}\beta^n$ still
contains information about the behaviour of the walk. If the
random walk reaches a word of length $x$ then the relative heights of $c_n(n+1)^{\alpha+1}\beta^n$ either side of $x$
describe the relative probabilities of increasing or decreasing
the word length in the next move.
From Figure \ref{fig:pathological1} we see that, for $p=0.3$,
the slope of $c_n(n+1)^{\alpha+1}\beta^n$ is always positive, so at all word lengths probabilities are uniformly in favour of increasing the word length.
However, as $p$ increases (and the growth rate for $\RR(n)$ increases) a `hump' appears at short word lengths. A random walk for such a group would tend to get stuck in the `hump'.
Indeed, for $p=0.39$ the distribution looks much less like
a walk diverging towards infinite word lengths and much more like the
distributions for $BS(1,7)$ used to produce Figure~\ref{fig:ERR-BS17}, where the average word length in the ERR walk remained finite.
\end{example}
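The curves in Figure~\ref{fig:pathological1} are straightforward to reproduce; a minimal python sketch follows (our own code; the particular values of $p$ shown are assumptions for illustration, and the curves are normalised to comparable height):
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

beta = 0.335
n = np.arange(0.0, 4000.0)
for p in [0.30, 0.33, 0.36, 0.39]:
    # log of c_n (n+1) beta^n with c_n = 3^(n - n^p) and alpha = 0
    logw = (n - n**p) * np.log(3) + np.log(n + 1) + n * np.log(beta)
    plt.plot(n, np.exp(logw - logw.max()), label="p = %.2f" % p)
plt.xlabel("word length n")
plt.legend()
plt.show()
\end{verbatim}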
The distributions in Figure \ref{fig:pathological1} exhibit a
mechanism which can explain
anomalous behaviour previously observed.
When $\RR(n)$ increases quickly the ERR random walk may adhere to the behaviour predicted by the theory and simultaneously give anomalous results about the asymptotics of the cogrowth function. In this sense, if \cite{ERR} contains incorrect conclusions, it is because the algorithm as originally proposed asks the wrong question: the ERR walk does not measure asymptotic properties of the cogrowth function, but provides information about it only for the word lengths actually visited by the walk. This observation forms the basis of Section
\ref{sec:appropriation}.
Note that increasing the parameter $\alpha$ pushes the algorithm towards
longer word lengths. Thus, any pathological behaviour caused
by the growth of $\RR(n)$ could theoretically be
overcome by increasing $\alpha$.
If $\RR(n)$ is known, then it may be used to calculate
how large words have to get before divergence occurs. A method to do this is outlined by the following example.
Suppose that ERR
random walks are run on a two generator group with $\beta=0.34$ (as in Figure \ref{fig:bs17-distribution}). If we eliminate the $\alpha$ term of the stationary distribution (which, being polynomial, becomes insignificant for long word lengths) the divergence
properties are controlled by the contest between
$0.34^n$ and $c_n$. That is, divergence will occur when
$c_{2k+2}/c_{2k}>1/0.34^2\approx 9-\frac{1}{2.9}$; the word length at which divergence will occur is therefore approximately
$\RR(2.9)$.
If this value is known $\alpha$ may be increased
until the walk visits words of this length.
This process, however, requires specific information about $\RR(n)$ including all scaling constants. It is hard to imagine a group for which the sub-dominant cogrowth behaviour was known to this level of precision, but dominant cogrowth behaviour (and hence the amenability question for the group) was still unknown.
\subsection{Reliability of the ERR results for Thompson's group $F$}
In Proposition~\ref{prop:connectBStoThompsons} we saw that the $\mathcal{R}$ function for $F$ grows faster than that of any iterated wreath product of $\Z$'s, and certainly faster than that of any $BS(1,N)$ group. The ERR method fails to predict the amenability of these groups for $N$ as low as $7$, and this failure is consistent with the pathological behaviour caused by $\RR$. We conclude that the data encoded in Figure \ref{fig:ERRpaperThompsons} does not imply the non-amenability of $F$, and that the conclusion in \cite{ERR} that $F$ appears to be non-amenable, based on this data, is unreliable.
\section{Appropriation of the ERR algorithm \label{sec:appropriation}}
The original implementation of the ERR random walk uses only the
mean length of words visited in an attempt to estimate asymptotic behaviour of the cogrowth function.
In this section we show that,
using the full
distribution of word lengths visited, it is possible to
estimate specific values
of the cogrowth function.
When doing a long random walk, the probability of arriving at a word of
length $n$
can be estimated by multiplying the number of words of that length by the asymptotic probability $\pi(n)$ that the walk is at any particular word of length $n$.
That is,
\[
\Pr(@n)\approx c_n\pi(n)=c_n\frac{\left(n+1\right)^{\alpha+1}\beta^{n}}{Z}.
\]
The proportion of the time that the walk spends at words of length
$n$, however, gives us another estimate of $\Pr(@n)$. If we let
$W_n$ be the number of times the walk visits a word of
length $n$ then we have that
\[
\Pr(@n)\approx\frac{W_n}{Y},
\]
where $Y$ is equal to the length of the walk. From this we obtain
\[
\frac{W_n}{Y}\approx c_n\frac{\left(n+1\right)^{\alpha+1}\beta^{n}}{Z}.
\]
For two different values, $n$ and $m$, we obtain the result
\begin{eqnarray*}
\frac{W_m}{W_n} & \approx & \frac{c_m\left(m+1\right)^{\alpha+1}\beta^{m}}{c_n\left(n+1\right)^{\alpha+1}\beta^{n}}.
\end{eqnarray*}
Thus,
\begin{equation}
c_m\approx c_n
\frac{W_m}{W_n}
\left(\frac{n+1}{m+1}\right)^{\alpha+1}\beta^{n-m}.
\label{eqn:cogrowth_estimate}
\end{equation}
Equation~\ref{eqn:cogrowth_estimate} provides a method of estimating the value
of $c_m$ using some known or previously estimated value of $c_n$
and the distribution
of word lengths visited from an ERR random
walk.
Let us try a quick implementation of this for Thompson's group $F$,
for which the first 48 cogrowth terms are known \cite{Haagerup}.
We ran an ERR random walk of length $Y=1.8\times 10^{11}$ steps
on the standard presentation (Equation~\ref{eqn:Fpresentation}) for $\alpha=3$ and $\beta=0.3$. The frequencies of word
lengths visited are shown in Table~\ref{table:singleThompsonsGroupDist}.
\begin{table}
\[\begin{array}{|c|r|}
\hline
n & W_n \\
\hline
0 & 32547326274\\
10 & 56273373521 \\
12 & 31613690578\\
14 & 26477475739\\
16 & 13576713156\\
18 & 9684082360\\
20 & 5444250723\\
22 & 3360907182\\
24 & 1905434239\\
26 & 1121735814\\
28 & 638093341\\
30 & 367320461\\
32 & 208025510\\
34 & 118432982\\
36 & 65983874\\
38 & 37210588\\
40 & 20642387\\
42 & 11332618\\
44 & 6243538\\
46 & 3421761\\
48 & 1863477\\
\hline\end{array}\]
\caption{Data collected from an ERR random walk of length $Y=1.8\times 10^{11}$ with $\alpha=3$ and $\beta=0.3$ on the standard presentation
for Thompson's group $F$.
\label{table:singleThompsonsGroupDist}
}
\end{table}
\begin{table}[h]
\[\begin{array}{|c|r|r|c|}
\hline
n & \text{exact} & \text{estimate} & \text{\% error}\\
\hline
10 & 20 & 19.9988 & .006 \\
12 & 64 & 63.9928 & .01\\
14 & 336 & 335.969 & .01\\
16 & 1160 & 1160.23 & .02\\
18 & 5896 & 5893.13 & .05\\
20 & 24652 & 24667.2 & .06\\
22 & 117628 & 117588 & .03\\
24 & 531136 & 530650 & .09\\
26 & 2559552 & 2551340 & .3\\
28 & 12142320 & 12116600 & .2\\
30 & 59416808 & 59353400 & .1\\
32 & 290915560 & 290848000 & .02\\
34 & 1449601452 & 1453990000 & .3\\
36 & 7269071976 & 7206930000 & .8\\
38 & 36877764000 & 36583500000 & .8\\
40 & 1.8848\times 10^{11} & 1.8461\times 10^{11} & 2\\
42 & 9.7200\times 10^{11} & 9.3078\times 10^{11} & 4\\
44 & 5.0490\times 10^{12} & 4.7504\times 10^{12} & 6\\
46 & 2.6423\times 10^{13} & 2.4308\times 10^{13} & 8\\
48 & 1.3920\times 10^{14} & 1.245\times 10^{14} & 10\\
\hline\end{array}\]
\caption{Estimate of the first 48 terms of the cogrowth function
for Thompson's group $F$, constructed from an ERR random walk of $Y=1.8\times 10^{11}$ steps with $\alpha=3$ and $\beta=0.3$. Exact values from \cite{Haagerup}.
\label{tab:unsophisticatedThompsons48}
}
\end{table}
We used Equation~\ref{eqn:cogrowth_estimate} and the data in Table~\ref{table:singleThompsonsGroupDist} to estimate
$c_{10}$ from $c_0$, and then this estimate was used to estimate
$c_{12}$. (Note that the shortest non-empty trivial words are of length 10, and since the relators in the standard presentation of $F$ have even length there are no odd length trivial words.)
Using the data and the previous estimate for $c_{n-2}$, estimates were made of the first 48 terms, and these are compared to the correct values in
Table \ref{tab:unsophisticatedThompsons48}.
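The recursion is elementary enough to reproduce directly; the following python fragment (our own re-computation, with the visit counts of Table~\ref{table:singleThompsonsGroupDist} typed in) recovers the estimates above:
\begin{verbatim}
W = {0: 32547326274, 10: 56273373521, 12: 31613690578,
     14: 26477475739, 16: 13576713156, 18: 9684082360,
     20: 5444250723, 22: 3360907182, 24: 1905434239,
     26: 1121735814, 28: 638093341, 30: 367320461,
     32: 208025510, 34: 118432982, 36: 65983874,
     38: 37210588, 40: 20642387, 42: 11332618,
     44: 6243538, 46: 3421761, 48: 1863477}

alpha, beta = 3, 0.3
lengths = [0] + list(range(10, 50, 2))
c = {0: 1.0}
for n, m in zip(lengths, lengths[1:]):
    # Equation (cogrowth_estimate): c_m from the previous estimate c_n
    c[m] = (c[n] * (W[m] / W[n])
            * ((n + 1) / (m + 1)) ** (alpha + 1) * beta ** (n - m))

print(c[10], c[12])   # approximately 20 and 64 (exact values: 20, 64)
\end{verbatim}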
This implementation of Equation~\ref{eqn:cogrowth_estimate} may be refined in several ways. Firstly, in many groups we have exact
initial values of $c_n$ for more than the trivial result $c_0=1$. In this
case these initial values can be used to estimate subsequent terms. In this paper we are primarily concerned with testing the efficacy of this
method for determining cogrowth, and so do not make use of such data.
Secondly, in the above implementation the only cogrowth value
used to estimate $c_n$ was $c_{n-2}$. Instead, estimates
for $c_n$ may be made from $c_k$ for any $k<n$. These estimates
may then be averaged to form an estimate for $c_n$. Note,
however, that if only one ERR random walk is used, and each of the $c_k$ is itself estimated from previous values of the same distribution there may be issues with interdependence.
This leads naturally to the following refinement --- to obtain several independent estimates for a given cogrowth value
several ERR random walks can be run with different
values for the parameters $\alpha $ and $\beta$.
\subsection{The ERR-R algorithm.}
The ERR-R algorithm accepts as input a group presentation and
the initial cogrowth value
$c_0=1$. As above, recursive application of
Equation~\ref{eqn:cogrowth_estimate} is used to produce estimates for longer
word lengths. However, in each step previous estimates for a range of $c_n$ are used to produce new estimates.
A detailed analysis of the error incurred with each application of Equation \ref{eqn:cogrowth_estimate} is performed in
Section \ref{subsec:error_analysis}. All error bounds which appear in subsequent graphs are constructed using these techniques.
Unsurprisingly, the error analysis in Section \ref{subsec:error_analysis} predicts that the largest errors are incurred
when data is used from the tails of
random walk distributions. Ideally then, a separate random walk should be run for each $c_n$, with parameters $\alpha$ and
$\beta$ chosen so that the sampled word lengths occupy the peaks
of the distribution.
If many estimates are to be made this is computationally infeasible. Instead we performed ERR random walks
using a range of
$\alpha$ and $\beta$ values, which can be chosen so that all word lengths of interest are visited often.
When estimating $c_m$, one estimate was made from each random walk distribution and from each $c_n$ with $m-100<n<m$. To avoid using the tails of distributions, only data points which were greater than 10\% of the maximum height were used.
Using Equation~\ref{eqn:errorEstimate} each estimate was assigned a weight equal to the inverse of the estimated error.
The final value for $c_m$ was taken as the weighted average of the estimates, and the error in $c_m$ was taken to be the
weighted average of the individual error estimates.
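In code the combination step is simple; a sketch (our own illustration of the weighting just described):
\begin{verbatim}
def combine(estimates):
    # estimates: list of (value, error) pairs for c_m, one per walk
    # and per previously estimated c_n; weights are inverse errors
    ws = [1.0 / err for _, err in estimates]
    Z = sum(ws)
    c_m = sum(w * val for w, (val, _) in zip(ws, estimates)) / Z
    dc_m = sum(w * err for w, (_, err) in zip(ws, estimates)) / Z
    return c_m, dc_m
\end{verbatim}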
Random walk data was obtained as before using the python code of the second author as described in Remark~\ref{rmk:implementation_details}.
\subsection{Application to the examples in Section~\ref{sec:pathological_behaviour}}
The ERR-R algorithm can also be used to analyse in more detail the pathological behaviours identified in this paper.
Unsurprisingly, for the presentations of the trivial group given in Subsection~\ref{subsec:wrong_group}, where the walk effectively ignores the long relator, the ERR-R estimates for the cogrowth values align closely with those of the three-strand braid group.
For $BS(1,N)$ we can use estimates of initial cogrowth values to analyse how $\RR$ increases with $N$. This is shown, for example, in Figure \ref{fig:nthRootBS1N}, which exhibits the behaviour predicted by the convergence to $\Z\wr\Z$ in the space of marked groups.
Further analysis of these presentations will appear in \cite{CamPhD}.
\begin{figure}
\includegraphics[width=110mm]{data_processing_BS1N.png}
\caption{
Estimates for $c_n^{1/n}$ for the groups $BS(1,N)$, $N=2\dots 7$. As $N$ increases the curves take longer to approach the asymptote.
\label{fig:nthRootBS1N}
}
\end{figure}
\subsection{Application to a surface group}
The fundamental group of a surface of genus 2 has presentation $\langle
a,b,c,d\mid [a,b][c,d]
\rangle$.
The cogrowth of this group has received a lot of attention, and good upper and lower bounds are known for the asymptotic rate of growth \cite{Gouezel,Nag}.
ERR random walks were run on this surface group with $\alpha=3,\;30,\;300$
and $\beta=0.281,\;0.286,\;0.291,\dots,0.351$. Estimates were made for $c_n$ as well as the error $\Delta c_n$.
The resultant upper and lower bounds for $c_n^{1/n}$ are shown in
Figure \ref{fig:nthRootSurface}.
\begin{figure}
\includegraphics[width=110mm]{surface_group_estimates.png}
\caption{Upper and lower bounds for the $n$-th root of the cogrowth function
for the fundamental group of a surface of genus 2 as calculated from ERR random walks. The horizontal lines (indistinguishable at this scale) identify the known upper and lower bounds. Note that after 12000 recursive applications of Equation~\ref{eqn:cogrowth_estimate} the error in the $n$-th root is still only approximately 0.01. \label{fig:nthRootSurface}}
\end{figure}
\subsection{Application to Thompson's group $F$}
We now apply the more sophisticated implementation of the method to $F$. Recall that the first 48 values can be compared with the exact values obtained by Haagerup {\em et al.} \cite{Haagerup}; our method, however, allows us to go much further than this.
ERR random walks were run on $F$ with $\alpha=3,13,23,33,53,63$ and $\beta=0.28,0.29,\dots 0.37$.
Collection of experimental data is ongoing. Table \ref{tab:sophisticatedThompsons48} shows comparisons between estimates for $c_n^{1/n}$ and the actual values, for $n\leq 48$, as well as the estimates for the error obtained from the experimental data.
\begin{table}
\[\begin{array}{|c|r|r|c|c|}
\hline
n & \text{exact} & \text{estimate} & \text{error (\%)} & \text{predicted error (\%)}\\
\hline
10 & 20 & 19.9996 & 0.002 & .03\\
12 & 64 & 63.9981 & 0.003 & 0.06\\
14 & 336 & 335.999 & 0.0002& 0.07\\
16 & 1160 & 1159.96 & 0.003& 0.1\\
18 & 5896 & 5895.98 & 0.0003& 0.1\\
20 & 24652 & 24653.1 & 0.005& 0.1\\
22 & 117628 & 117625 & 0.003& 0.2\\
24 & 531136 & 531098 & 0.007& 0.2\\
26 & 2559552 & 2558950 & 0.02& 0.2\\
28 & 12142320 & 12138200 & 0.03& 0.3\\
30 & 59416808 & 59408300 & 0.01& 0.3\\
32 & 290915560 & 290861000 & 0.02& 0.3\\
34 & 1449601452 & 1449260000 & 0.02& 0.3\\
36 & 7269071976 & 7268550000 & 0.007& 0.4\\
38 & 36877764000 & 36876700000 & 0.003& 0.5\\
40 & 1.8848\times 10^{11} & 1.88491 \times 10^{11} & 0.003& 0.5\\
42 & 9.7200\times 10^{11} & 9.7205 \times 10^{11} & 0.005& 0.5\\
44 & 5.0490\times 10^{12} & 5.05097\times 10^{12} & 0.04& 0.6\\
46 & 2.6423\times 10^{13} & 2.64353\times 10^{13} & 0.05& 0.6\\
48 & 1.3920\times 10^{14} & 1.39246\times 10^{14} & 0.03& 0.7\\
\hline\end{array}\]
\caption{Estimate of the first 48 terms of the cogrowth function
for Thompson's group $F$, constructed from 60 ERR random walks. Exact values from \cite{Haagerup}.
\label{tab:sophisticatedThompsons48}
}
\end{table}
\begin{rmk}
Table \ref{tab:sophisticatedThompsons48} shows a marked increase in the accuracy of the estimates over those of Table \ref{tab:unsophisticatedThompsons48}. This suggests the method of using multiple distributions and weighted averages is effective. Note that there are approximately $10^{14}$ trivial words of length 48, so the walks could not possibly have visited each one. The sample of words visited by the walk seems to reflect the space as a whole reasonably accurately.
\end{rmk}
Figure \ref{fig:thompsonsNthRoot} shows our estimates for upper and lower bounds of $c_n^{1/n}$ for $n\leq 2000$.
\begin{figure}
\includegraphics[width=110mm]{2000Thompsons.pdf}
\caption{
Estimates of $c_n^{1/n}$ for Thompson's group $F$ for $n\leq 2000$, using the ERR-R method.
The figure
includes upper and lower bounds, but at this scale the estimated error
is too small for the bounds to be distinguished.
\label{fig:thompsonsNthRoot}}
\end{figure}
\subsection{Error analysis}\label{subsec:error_analysis}
Here we describe a method by which the error in cogrowth estimates may be estimated. We stress that this is a statistical measure of error, rather than a theoretical bound.
Recall Equation~\ref{eqn:cogrowth_estimate}.
Suppose that $c_n$ is known up to $\pm\Delta c_n$,
and that the error in the measurements $W_m$ and $W_n$ are
$\pm\Delta W_m$ and $\pm\Delta W_n$ respectively. Then,
from elementary calculus, the error in
$c_m$ is given by
\begin{align}
\nonumber
\Delta c_m
\approx &\frac{W_m}{W_n}
\left(\frac{n+1}{m+1}\right)^{\alpha+1}\beta^{n-m} \Delta c_n\\
\nonumber
&+ \frac{c_n}{W_n}
\left(\frac{n+1}{m+1}\right)^{\alpha+1}\beta^{n-m} \Delta W_m\\
\nonumber
&+ c_n\frac{W_m}{W_n^2}
\left(\frac{n+1}{m+1}\right)^{\alpha+1}\beta^{n-m} \Delta W_n\\
\nonumber
=&c_n
\frac{W_m}{W_n}
\left(\frac{n+1}{m+1}\right)^{\alpha+1}\beta^{n-m}
\left(
\frac{\Delta c_n}{c_n}
+\frac{\Delta W_m}{W_m}
+\frac{\Delta W_n}{W_n}
\right)\\
\approx&c_m
\left(
\frac{\Delta c_n}{c_n}
+\frac{\Delta W_m}{W_m}
+\frac{\Delta W_n}{W_n}
\right).\label{eqn:errorEstimate}
\end{align}
Hence the proportional error in the estimate of $c_m$ is
approximately equal to the sum of the proportional errors in $c_n,\,W_m$ and $W_n$. It is clear from this that if Equation~\ref{eqn:cogrowth_estimate} is used recursively (building new
estimates based on previously estimated cogrowth values) the proportional
error in $c_n$ is certain to increase. Note that the factor controlling the rate of growth of the proportional error in the
estimates is the proportional error $\Delta W_n/W_n$. If this
is constant as $n$ increases, the proportional error in $c_n$ will grow linearly with $n$.
To calculate useful error margins for $c_n$ it is necessary to quantify $\Delta W_n$. Here we employ the same method
used in the ERR paper; walks are split into $M$ segments and
the number of times the walk visits words of length $n$ is recorded
for each segment. Let $x_{i,n}$ denote the number of times the walk
visited words of length $n$ in the $i$th segment. Then $W_n$ is taken to be the average of $x_{i,n}$ for $i=1\dots M$ and the error in
$W_n$ is calculated from the statistical variance of these values,
\begin{equation}\label{eqn:errorInWn}
\Delta W_n=\sqrt{\frac{\var\lbrace x_{i,n}\rbrace_{1\leq i\leq M}}{M-1}}.
\end{equation}
\begin{example}
Equations \ref{eqn:errorEstimate} and \ref{eqn:errorInWn} were used to produce the estimates of the error in the estimates contained in
Table~\ref{tab:sophisticatedThompsons48}.
Note that the estimated error is much larger than the actual error.
\end{example}
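To make the segment method concrete, the following Python sketch is ours and purely illustrative: the function names and the per-segment count array are hypothetical, and the propagation step assumes that the estimate has the form $c_m \approx c_n \frac{W_m}{W_n}\bigl(\frac{n+1}{m+1}\bigr)^{\alpha}\beta^{n-m}$, with fitted parameters $\alpha$ and $\beta$, as implied by the error expansion in Equation~\ref{eqn:errorEstimate}.
\begin{verbatim}
import numpy as np

def estimate_Wn(counts):
    # counts[i][n]: visits to words of length n in segment i (shape M x N)
    counts = np.asarray(counts, dtype=float)
    M = counts.shape[0]
    W = counts.mean(axis=0)
    # Delta W_n = sqrt(var{x_{i,n}} / (M - 1)), cf. Equation (errorInWn);
    # with NumPy's population variance this is the standard error of the mean.
    dW = np.sqrt(counts.var(axis=0) / (M - 1))
    return W, dW

def propagate(c_n, dc_n, W, dW, m, n, alpha, beta):
    # Estimate c_m and its error from c_n, per the expansion above.
    c_m = c_n * (W[m] / W[n]) * ((n + 1) / (m + 1))**alpha * beta**(n - m)
    dc_m = c_m * (dc_n / c_n + dW[m] / W[m] + dW[n] / W[n])
    return c_m, dc_m
\end{verbatim}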
\subsection{Error in the $n$-th root of $c_n$}
We have noted that recursive uses of Equation~\ref{eqn:cogrowth_estimate} will result in an increasing proportional error in $c_n$.
However, it is the $n$-th root of $c_n$ which reflects the amenability of a group. Let
$\gamma_n=c_n^{1/n}$ and
$\Delta \gamma_n$ denote the error of the estimate for $\gamma_n$.
Once again from elementary calculus we obtain that for a
given $n$
\begin{align}
\nonumber
\Delta \gamma_n
&\approx\frac{1}{n}c_n^{\frac{1}{n}-1}\Delta c_n\\
\nonumber
&=\frac{1}{n}c_n^{\frac{1}{n}}\frac{\Delta c_n}{c_n}\\
\nonumber
&=\gamma_n\frac{1}{n}\frac{\Delta c_n}{c_n}\\
\text{and so }\frac{\Delta \gamma_n}{\gamma_n} &\approx\frac{1}{n}\frac{\Delta c_n}{c_n}.\label{eqn:errorInNthRoot}
\end{align}
Thus, if $\frac{\Delta c_n}{ c_n}$ increases at most linearly, $\frac{\Delta \gamma_n}{\gamma_n}$ can be expected to remain constant.
The values of $c_n$ grow exponentially, so a linearly increasing proportional error in $c_n$ corresponds to a rapidly growing absolute error in $c_n$. In contrast, $\gamma_n$ approaches a constant, so its proportional error depends linearly on its absolute error.
Thus it is not surprising that our experimental results show that even when the error in
cogrowth estimates grows large, the error in
the $n$-th root grows very slowly.
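For instance, by Equation~\ref{eqn:errorInNthRoot} a proportional error of $50\%$ in $c_{2000}$ corresponds to a proportional error of only $0.025\%$ in $\gamma_{2000}$.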
\section{Conclusion}
Several ideas emerge from this study.
Firstly, researchers performing experimental mathematics to determine the amenability of a group need to take care that their algorithm is not susceptible to interference from sub-dominant behaviours. For the reduced-cogrowth function the sub-dominant behaviour is identified by $\RR$. Amenability is an asymptotic property, and
the interference of sub-dominant behaviours on experimental algorithms can be subtle and nuanced. In particular, we have shown that, if Thompson's group $F$ is amenable, its function $\RR$ grows faster than any polynomial. This implies that the prediction of non-amenability of $F$ in \cite{ERR} is unreliable.
We have also shown that, despite potential inaccuracies in estimates of asymptotics, the ERR-R method can produce accurate results for initial cogrowth values.
These
are interesting in their own right. Indeed, if Thompson's group is not amenable, then its $\RR$ function need not be super-polynomial and results from experimental methods might well inform the construction of conjectures regarding cogrowth.
In this context the original benefits of the ERR algorithm still stand:
it requires no group theoretic computational software, no solution to the word problem, and remains a computationally inexpensive way to quickly gain insight into the cogrowth function of a finitely presented group.
\section*{Acknowledgements}
The authors wish to thank Andrew Rechnitzer and
Andrew Elvey-Price
for helpful feedback on this work.
\section{Iterated finite element method}\label{app:iter-fem}
The rational approximation $u_{h,m}^R$
of the solution $u$ to~\eqref{e:Lbeta} introduced in \S\ref{subsec:rat-approx}
is defined in terms of the discrete operators $P_{\ell,h} = p_\ell(L_h)$
and $P_{r,h} = p_r(L_h)$ via~\eqref{e:uhr}.
Since the differential operator $L$
\kk{in~\eqref{e:L-div}} is of
second order,
their continuous counterparts
$P_\ell = p_\ell(L)$ and $P_r=p_r(L)$ in~\eqref{e:ur} are
differential operators
of order $2(m+m_\beta)$ and $2m$, respectively.
Using a standard Galerkin approach for solving~\eqref{e:ur} would
therefore require finite element basis functions~$\{\varphi_j\}$
in the Sobolev space $H^{m+m_\beta}(\mathcal{D})$,
which are difficult to construct in more than one space dimension.
This can be avoided by using
a modified version of
the iterated Hilbert space approximation method
by \cite{lindgren11},
and in this section we give the details of this procedure.
Recall from \S\ref{subsec:discrete} that
$V_h \subset V$ is a finite element space with continuous
piecewise \kk{linear} basis functions $\{\varphi_j\}_{j=1}^{n_h}$
defined with respect to
a regular triangulation~$\mathcal{T}_h$ of the domain $\overline{\mathcal{D}}$
with mesh \kk{width} $h := \max_{T\in\mathcal{T}_h} \operatorname{diam}(T)$.
For computing the finite element approximation, we start by factorizing
the polynomials~$q_1$ and $q_2$ in the rational approximation
$\hat{r}$ of $\hat{f}(x) = x^{\beta-m_\beta}$
in terms of their roots,
\begin{align*}
q_1(x) = \sum_{i=0}^m c_i x^i = c_m \prod_{i=1}^{m} (x - r_{1i} )
\quad
\text{and}
\quad
q_2(x) = \sum_{j=0}^{m+1} b_j x^j = b_{m+1} \prod_{j=1}^{m+1} (x - r_{2j} ).
\end{align*}
We use these expressions to reformulate~\eqref{e:xbeta} as
\begin{align*}
x^{-\beta} = f(x^{-1}) \approx \hat{r}(x^{-1}) x^{- m_\beta}
= \frac{c_m \prod_{i=1}^{m} (1 - r_{1i}x )}{ b_{m+1} x^{m_\beta-1} \prod_{j=1}^{m+1} (1 - r_{2j}x ) },
\end{align*}
where, again, we have expanded the fraction with $x^m$.
This representation shows that we can equivalently
define the rational SPDE approximation $u_{h,m}^R$
as the solution to~\eqref{e:uhr}
with $P_{\ell,h},P_{r,h}$ redefined as
$P_{\ell,h} = b_{m+1} L_h^{m_\beta-1} \prod_{j=1}^{m+1} (\kk{\mathrm{Id}_{h}} - r_{2j} L_h)$
and
$P_{r,h} = c_m \prod_{i=1}^{m} ( \kk{\mathrm{Id}_{h}} - r_{1i} L_h )$,
\kk{where $\mathrm{Id}_{h}$ denotes the identity on $V_h$}.
We use the formulation of \eqref{e:uhr}
as a system outlined in \eqref{e:nested-discrete}:
We first solve $P_{\ell,h} x_{h,m} = \cW_h$
and compute then
$u_{h,m}^R = P_{r,h} x_{h,m}$.
To this end,
we define the functions $x_k \in L_2(\Omega;V_h)$
for $k\in\{1,\ldots,m+m_\beta\}$
iteratively by
\begin{align*}
b_{m+1} ( \kk{\mathrm{Id}_{h}} - r_{21} L_h ) x_1 &= \cW_h, \\
( \kk{\mathrm{Id}_{h}} - r_{2k} L_h ) x_k &= x_{k-1}, \qquad k = 2,\ldots,m+1, \\
L_h x_k &= x_{k-1}, \qquad k = m+2,\ldots,m+m_\beta,
\end{align*}
\kk{noting} that $x_{m+m_\beta} = x_{h,m}$.
{\allowdisplaybreaks By
\kk{recalling the bilinear form $a_L$ from~\eqref{e:a-L}
and}
expanding $x_k = \sum_{j=1}^{n_h} x_{kj} \varphi_j$
with respect to the finite element basis,
we find that the stochastic weights
\kk{$\mv{x}_{k} = (x_{k1}, \ldots, x_{k{n_h}})^{\ensuremath{\top}}$
satisfy
\begin{gather*}
\sum_{j=1}^{n_h} x_{1j} \, b_{m+1}
\left(\scalar{\varphi_j, \varphi_i}{L_2(\mathcal{D})} - r_{21} \, a_L(\varphi_j, \varphi_i) \right)
= \scalar{\cW_h, \varphi_i}{L_2(\mathcal{D})}, \\
\sum_{j=1}^{n_h} x_{kj}
\left( \scalar{\varphi_j, \varphi_i}{L_2(\mathcal{D})} - r_{2k} \, a_L(\varphi_j, \varphi_i) \right)
= \sum_{j=1}^{n_h} x_{k-1,j} \, \scalar{\varphi_j, \varphi_i}{L_2(\mathcal{D})},
\quad 2\leq k\leq m+1 \\
\sum_{j=1}^{n_h} x_{kj} \,
a_L(\varphi_j, \varphi_i)
= \sum_{j=1}^{n_h} x_{k-1,j} \, \scalar{ \varphi_j, \varphi_i}{L_2(\mathcal{D})},
\quad m+2 \leq k \leq m+m_\beta,
\end{gather*}
where} each of these equations holds for $i = 1,\ldots, n_h$.
Recall from \kk{\S\ref{subsec:discrete}} that
$\cW_h$ is white noise in $V_h$.
This entails the distribution
\kk{$\bigl(\scalar{\cW_h, \varphi_i}{L_2(\mathcal{D})} \bigr)_{i=1}^{n_h} \sim \proper{N}(\mv{0},\mv{C})$},
where $\mv{C}$ is the mass matrix with elements
$C_{ij} = \scalar{\varphi_j, \varphi_i}{L_2(\mathcal{D})}$
and, therefore,
\kk{$\mv{x}_k \sim
\proper{N}\bigl(\mv{0},
\mv{P}_{\ell,k}^{-1} \mv{C} \mv{P}_{\ell,k}^{-\ensuremath{\top}}
\bigr)$}
for every $k \in\{ 1,...,m+m_\beta\}$.
Here, the matrix \kk{$\mv{P}_{\ell,k}$ is defined by
\begin{align*}
\mv{P}_{\ell,k} =
\begin{cases}
b_{m+1} \mv{C} \, \mv{L}_{k},
& k=1,\ldots, m+1, \\
b_{m+1} \mv{C} \left( \mv{C}^{-1} \mv{L} \right)^{k-m-1}
\mv{L}_{m+1},
& k = m+2, \ldots , m+m_\beta,
\end{cases}
\end{align*}
where $\mv{L}_{k}
:= \prod_{j=1}^{k} \left( \mv{I} - r_{2j}\mv{C}^{-1}\mv{L} \right)$,
with identity matrix $\mv{I} \in \mathbb{R}^{n_h \times n_h}$},
and the entries of $\mv{L}$ are given \kk{by
\begin{align*}
L_{ij}
:=
a_L(\varphi_j, \varphi_i)
=
\scalar{\mv{H} \nabla\varphi_j, \nabla\varphi_i}{L_2(\mathcal{D})}
+
\left( \kappa^2 \varphi_j, \varphi_i \right)_{L_2(\mathcal{D})},
\qquad
i,j = 1, \ldots, n_h,
\end{align*}
cf.~\eqref{e:L-div}--\eqref{e:a-L}}.
In particular, the weights $\mv{x}$ of $x_{h,m}$ have distribution
\begin{align}\label{e:xdistribution}
\mv{x} \sim
\proper{N}\left( \mv{0}, \mv{P}_{\ell}^{-1} \mv{C} \mv{P}_{\ell}^{-\ensuremath{\top}} \right),
\qquad
\text{where}
\qquad
\mv{P}_{\ell} := \mv{P}_{\ell,m+m_\beta}.
\end{align}
Note also that for the Mat\'ern \kk{case, i.e., $L = \kappa^{2} - \Delta$},
we have $\mv{L} = \kappa^{2}\mv{C} + \mv{G}$,
where $\mv{G}$ is the stiffness matrix with elements
$G_{ij} = \scalar{\nabla\varphi_j, \nabla\varphi_i}{L_2(\mathcal{D})}$.}
To calculate the final approximation
$u_{h,m}^R = P_{r,h} x_{h,m}$,
we apply a similar iterative procedure.
Let $u_1,\ldots,u_m$ be defined by
\begin{align*}
u_1 &= c_m (\kk{\mathrm{Id}_h} - r_{11}L_h) x_{h,m}, \\
u_k &= (\kk{\mathrm{Id}_h} - r_{1k}L_h) u_{k-1}, \qquad\quad k = 2,\ldots,m.
\end{align*}
Then $u_{h,m}^R = c_m \bigl(\prod_{i=1}^m (\mathrm{Id} - r_{1i} L_h) \bigr)x_{h,m} = u_m$
and the weights $\mv{u}_{k}$ of $u_k$
can be obtained from the weights of $x_{h,m}$ via
\[
\mv{u}_{k} = \mv{P}_{r,k} \, \mv{x},
\quad
\text{where}
\quad
\mv{P}_{r,k}
:= c_m \prod_{i=1}^{k} \left( \mv{I} - r_{1i} \mv{C}^{-1} \mv{L} \right).
\]
By~\eqref{e:xdistribution}, the distribution
of the weights $\mv{u}$ of the final rational approximation $u_{h,m}^R$
is thus given by
\begin{align*}
\mv{u} \sim
\proper{N}\left( \mv{0},
\mv{P}_{r} \mv{P}_{\ell}^{-1} \mv{C}
\mv{P}_{\ell}^{-\ensuremath{\top}} \mv{P}_{r}^{\ensuremath{\top}} \right),
\qquad
\text{where}
\qquad
\mv{P}_{r} := \mv{P}_{r,m}.
\end{align*}
To obtain sparse matrices $\mv{P}_{\ell}$
and $\mv{P}_{r}$,
we approximate the mass matrix $\mv{C}$ by a diagonal matrix
$\widetilde{\mv{C}}$ with diagonal elements
$\widetilde{C}_{ii} = \sum_{j=1}^{n_h} C_{ij}$.
The effect of this ``mass lumping''
was motivated theoretically by \cite{lindgren11},
and was empirically shown to be small by \cite{bolin13comparison}.
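To summarize the computations of this appendix, the following Python sketch is ours and illustrative only (it is not taken from the rSPDE package; the function name, signature and inputs are assumptions). It draws one sample of the weights $\mv{u}$ in the Mat\'ern case $L = \kappa^2 - \Delta$, given the lumped mass matrix, the stiffness matrix, and the roots and leading coefficients of $q_1$ and $q_2$.
\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import spsolve

def sample_rational_field(C, G, kappa2, r1, r2, b_top, c_top, m_beta, rng):
    # C: lumped (diagonal) mass matrix, G: stiffness matrix (both sparse)
    # r1: roots of q_1 (length m), r2: roots of q_2 (length m+1)
    # b_top, c_top: leading coefficients b_{m+1} and c_m
    L = kappa2 * C + G                 # matrix of the bilinear form a_L
    c_diag = C.diagonal()
    w = np.sqrt(c_diag) * rng.standard_normal(C.shape[0])  # w ~ N(0, C)
    # Solve P_{l,h} x = W_h factor by factor:
    x = spsolve((b_top * (C - r2[0] * L)).tocsc(), w)
    for r in r2[1:]:
        x = spsolve((C - r * L).tocsc(), C @ x)
    for _ in range(m_beta - 1):        # remaining L_h factors
        x = spsolve(L.tocsc(), C @ x)
    # Apply P_{r,h} factor by factor: u = c_m prod_i (I - r_{1i} C^{-1} L) x
    u = c_top * x
    for r in r1:
        u = u - r * (L @ u) / c_diag
    return u
\end{verbatim}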
\section{Convergence analysis}\label{app:convergence}
In this section we give the details of the convergence result
stated in Theorem~\ref{thm:strong}.
As mentioned in \S\ref{subsec:error},
we choose $\hat{r}=\hat{r}_h$ as the $L_\infty$-best
rational approximation of $\hat{f}(x) = x^{\beta - m_\beta}$
on the interval $J_h$ for each $h$. We furthermore assume that
the operator $L$ \kk{in~\eqref{e:L-div}}
is normalized such that $\lambda_1 \geq 1$
and, thus, $J_h \subset J \subset [0,1]$.
Recall that Proposition~\ref{prop:uh}
provides a bound for $\norm{u - u_h}{L_2(\Omega; L_2(\mathcal{D}))}$.
Therefore, it remains to estimate the strong error
between $u_{h,m}^R$ and $u_h$
induced by the rational approximation of $f(x) = x^\beta$.
To this end, recall the construction
of the rational approximation $u_{h,m}^R$
\kk{from} \S\ref{subsec:rat-approx}:
We first decomposed $f$
as $f(x) = \hat{f}(x) x^{m_\beta}$,
where $\hat{f}(x) = x^{\beta-m_\beta}$, and
then used a rational approximation
$\hat{r} = \frac{q_1}{q_2}$ of $\hat{f}$
on the interval $J_h= \bigl[ \lambda_{n_h,h}^{-1}, \lambda_{1,h}^{-1} \bigr]$
with $q_1 \in \mathcal{P}^m(J_h)$ and $q_2 \in \mathcal{P}^{m+1}(J_h)$
to define the approximation $r(x) := \hat{r}(x) x^{m_\beta}$ of $f$.
Here, $\mathcal{P}^m(J_h)$ denotes the set of polynomials
$q\colon J_h \to \mathbb{R}$ of degree $\deg(q) = m$.
In the following, we assume that $\hat{r} = \hat{r}_h$ is the best rational approximation
of $\hat{f}$ of this form, i.e.,
\begin{align*}
\norm{\hat{f} - \hat{r}_h}{C(J_h)}
=
\inf\left\{
\norm{\hat{f} - \hat{\rho}}{C(J_h)}
:
\hat{\rho} = \tfrac{q_1}{q_2}, \
q_1\in\mathcal{P}^m(J_h), \
q_2\in \mathcal{P}^{m+1}(J_h) \right\},
\end{align*}
where $\norm{g}{C(J)} := \sup_{x\in J} |g(x)|$.
For the analysis, we treat the two cases
$\beta\in(0,1)$ and $\beta \geq 1$ separately.
If $\beta \geq 1$, then $\hat{\beta} := \beta - m_\beta \in [0,1)$.
Thus, if $\hat{r}_{*}$ denotes the best rational approximation
of~$\hat{f}$ on the interval $[0,1]$,
we find
\citep[][Theorem~1]{stahl2003rational}
\begin{align*}
\norm{ \hat{f} - \hat{r}_h }{C(J_h)}
\leq \sup_{x\in[0,1]} | \hat{f}(x) - \hat{r}_{*}(x) |
\leq \hat{C} e^{-2\pi \sqrt{\hat{\beta}m}},
\end{align*}
where the constant $\hat{C}>0$ is continuous in $\hat{\beta}$ and independent of $h$ and the degree~$m$.
Since $x^{m_\beta} \leq 1$ for all $x\in J_h$, we obtain for $r_h(x) := \hat{r}_h(x) x^{m_\beta}$
the same bound,
\begin{align}\label{e:r-beta>1}
\norm{ f - r_h }{C(J_h)}
\leq \sup_{x\in J_h} | \hat{f}(x) - \hat{r}_h(x) |
\leq \hat{C} e^{-2\pi \sqrt{\hat{\beta}m}}.
\end{align}
If $\beta\in(0,1)$, then $\hat{\beta} \in (-1,0)$ and
we let $\widetilde{r}$
be the best approximation of $\widetilde{f}(x) := x^{|\hat{\beta}|}$
on $[0,1]$.
A rational approximation of $\widetilde{f}$
on the different interval
$\widetilde{J}_h := [\lambda_{1,h},\lambda_{n_h,h}]$
is then given by
$\widetilde{R}_h(\widetilde{x}) := \lambda_{n_h,h}^{|\hat{\beta}|} \widetilde{r}(\lambda_{n_h,h}^{-1} \widetilde{x})$
with error
\begin{align*}
\sup_{\widetilde{x}\in \widetilde{J}_h} | \widetilde{f}(\widetilde{x}) - \widetilde{R}_h(\widetilde{x}) |
\leq \lambda_{n_h,h}^{|\hat{\beta}|} \sup_{x\in[0,1]} | \widetilde{f}(x) - \widetilde{r}(x) |
\leq \widetilde{C} \lambda_{n_h,h}^{|\hat{\beta}|} e^{-2\pi \sqrt{|\hat{\beta}|m}},
\end{align*}
where the constant $\widetilde{C} > 0$ depends only on $|\hat{\beta}|$.
On $J_h = \bigl[ \lambda_{n_h,h}^{-1}, \lambda_{1,h}^{-1} \bigr]$
the function $\widetilde{R}_h(x^{-1})$ is
an approximation of $\hat{f}(x) = x^{\hat{\beta}} = \widetilde{f}(x^{-1})$ and
\begin{align*}
\norm{\hat{f} - \hat{r}_h}{C(J_h)}
\leq \sup_{x\in J_h} | \hat{f}(x) - \widetilde{R}_h(x^{-1}) |
\leq \sup_{\widetilde{x} \in \widetilde{J}_h} | \widetilde{f}(\widetilde{x}) - \widetilde{R}_h(\widetilde{x}) |
\leq \widetilde{C} \lambda_{n_h,h}^{|\hat{\beta}|} e^{-2\pi \sqrt{|\hat{\beta}|m}}.
\end{align*}
Finally, we use again the estimate $x^{m_\beta} \leq 1$ on $J_h$
to derive
\begin{align}\label{e:r-beta<1}
\norm{ f - r_h }{C(J_h)}
\leq \norm{\hat{f} - \hat{r}_h}{C(J_h)}
\leq \widetilde{C} \lambda_{n_h,h}^{|\hat{\beta}|} e^{-2\pi \sqrt{|\hat{\beta}|m}}.
\end{align}
Proposition~\ref{prop:uh} and the estimates \eqref{e:r-beta>1}--\eqref{e:r-beta<1}
yield Theorem~\ref{thm:strong}, which is proven below.
\begin{proof}[Proof of Theorem~\ref{thm:strong}]
By Proposition~\ref{prop:uh}, it suffices
to bound $\mathbb{E} \norm{u_h - u_{h,m}^R}{L_2(\mathcal{D})}^2$.
To this end, let $\cW_h = \sum_{j=1}^{n_h} \xi_j e_{j,h}$
be a Karhunen--Lo\`{e}ve expansion of $\cW_h$,
where $\{e_{j,h}\}_{j=1}^{n_h}$ are the
$L_2(\mathcal{D})$-orthonormal eigenvectors of $L_h$
corresponding to the eigenvalues
$\{\lambda_{j,h}\}_{j=1}^{n_h}$.
By construction and owing to boundedness and invertibility of $L_h$,
we have for~$u_{h,m}^R$ in~\eqref{e:uhr}
that $u_{h,m}^R = P_{\ell,h}^{-1} P_{r,h} \cW_h = r_h(L_h^{-1}) \cW_h$
and we estimate
\begin{align*}
\mathbb{E} \norm{u_h - u_{h,m}^R}{L_2(\mathcal{D})}^2
&=
\mathbb{E} \sum_{j=1}^{n_h} \xi_j^2 \left( \lambda_{j,h}^{-\beta} - r_h(\lambda_{j,h}^{-1}) \right)^2
\leq
n_h \max_{1\leq j \leq n_h} \bigl| \lambda_{j,h}^{-\beta} - r_h(\lambda_{j,h}^{-1}) \bigr|^2.
\end{align*}
By~\eqref{e:r-beta>1} and~\eqref{e:r-beta<1}, we can bound the last term \kk{by
\begin{align*}
\max_{1\leq j \leq n_h} \bigl| \lambda_{j,h}^{-\beta} - r_h(\lambda_{j,h}^{-1}) \bigr|^2
&\leq
\left( \sup_{x\in J_h}
|f(x) - r_h(x)| \right)^2
\lesssim
\lambda_{n_h,h}^{2\max\{(1 - \beta),0\}} e^{-4\pi \sqrt{|\beta-m_\beta| m}} .
\end{align*}
By \cite[Theorem~6.1]{strang2008}
we have $\lambda_{n_h,h} \lesssim \lambda_{n_h} \lesssim n_h^{2/d}$,
for sufficiently small $h\in(0,1)$,
where the last bound follows
from the Weyl asymptotic~\eqref{e:lambdaj}.
Finally, $n_h\lesssim h^{-d}$
by quasi-uniformity of the triangulation~$\mathcal{T}_h$.
Thus, we conclude
\begin{align*}
\mathbb{E} \norm{u_h - u_{h,m}^R}{L_2(\mathcal{D})}^2
&\lesssim h^{-4\max\{(1 - \beta),\, 0\} - d} e^{-4\pi \sqrt{|\beta-m_\beta| m}},
\end{align*}
which combined with Proposition~\ref{prop:uh} proves Theorem~\ref{thm:strong}}.
\end{proof}
\section{A comparison to the quadrature approach}\label{subsec:rat-comparequad}
\cite{bolin2017numerical} proposed another method
which can be applied to
\kk{simulate} the solution~$u$ to \eqref{e:Lbeta}
numerically.
The approach therein is
to express the discretized equation \eqref{e:uh}
as $L_h^{\tilde{\beta}}L_h^{\lfloor \beta \rfloor} u_h = \cW_h$,
where $\tilde{\beta} = \beta-\lfloor\beta\rfloor \in [0,1)$.
Since $L_h^{\lfloor \beta \rfloor} u_h = f$ can be solved
by using non-fractional methods,
the focus was on the case $\beta\in(0,1)$
when constructing the approximative solution.
From the Dunford--Taylor calculus \citep[{\S}IX.11]{yosida1995}
one has in this case the following representation
of the discrete inverse,
\begin{align*}
L_h^{-\beta}
&=
\frac{\sin(\pi\beta)}{\pi}
\int_0^{\infty} \lambda^{-\beta}
\left(\lambda \, \kk{\mathrm{Id}_h} + L_h \right)^{-1} \, \mathrm{d} \lambda.
\end{align*}
\cite{bonito2015} introduced a quadrature approximation $Q_{h,k}^\beta$
of this integral
after a change of variables $\lambda = e^{-2y}$
and based on an equidistant grid
for $y$ with step size~$k>0$, \kk{i.e.,
\begin{align*}
Q_{h,k}^\beta := \frac{2 k \sin(\pi\beta)}{\pi}
\sum_{j=-K^{-}}^{K^{+}} e^{2\beta y_j}
\left( \kk{\mathrm{Id}_h} + e^{2 y_j} L_h \right)^{-1},
\quad\text{where}\quad
y_j := j k.
\end{align*}
Exponential convergence
of order $\mathcal{O}\bigl( e^{-\pi^2/(2k)} \bigr)$}
of the operator $Q_{h,k}^\beta$
to the discrete fractional inverse $L_h^{-\beta}$
was proven for
$K^{-} := \left\lceil \frac{\pi^2}{4\beta k^2} \right\rceil$
and
$K^{+} := \left\lceil \frac{\pi^2}{4(1-\beta) k^2} \right\rceil$.
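As an illustration, the following minimal Python sketch of the quadrature is ours (it assumes that a sparse matrix representation of $L_h$, e.g.\ $\mv{C}^{-1}\mv{L}$ with a lumped mass matrix, is available; the function name is hypothetical):
\begin{verbatim}
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def quadrature_apply(L_h, v, beta, k):
    # Apply Q_{h,k}^beta, the quadrature approximation of L_h^{-beta}, to v.
    I = sp.identity(L_h.shape[0], format="csc")
    K_minus = int(np.ceil(np.pi**2 / (4 * beta * k**2)))
    K_plus = int(np.ceil(np.pi**2 / (4 * (1 - beta) * k**2)))
    out = np.zeros(L_h.shape[0])
    for j in range(-K_minus, K_plus + 1):
        y = j * k                      # equidistant nodes y_j = j k
        out += np.exp(2 * beta * y) * spsolve(I + np.exp(2 * y) * L_h, v)
    return (2 * k * np.sin(np.pi * beta) / np.pi) * out
\end{verbatim}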
By calibrating the number of quadrature nodes with the
number of basis functions in the FEM,
an explicit rate of convergence for the strong error
of the approximation
$u_{h,k}^Q = Q_{h,k}^\beta \cW_h$
was derived \citep[][Theorem 2.10]{bolin2017numerical}.
Motivated by the asymptotic convergence of the method,
\kk{it was suggested to choose
$k\leq - \tfrac{\pi^2}{4\beta\ln(h)}$
in order to balance the errors
induced by the quadrature and by a FEM of mesh size $h$
\citep[][Table~1]{bolin2017numerical}.
This corresponds} to a total number of
$K = K^- + K^+ + 1 > \tfrac{4\beta\ln(h)^2}{\pi^2(1-\beta)}$
quadrature nodes.
The analogous result for the degree $m$
of the approximation $u_{h,m}^R$ is given
in Remark~\ref{rem:calibrate-h-m},
suggesting the lower bound $m \geq \tfrac{\ln(h)^2}{\pi^2 (1-\beta)}$,
i.e., $K = 4\beta m$ asymptotically.
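For example, for $h = 0.01$ and $\beta = 3/4$ (the exponential covariance on $\mathbb{R}^2$), these bounds give $m \geq \ln(h)^2/(\pi^2(1-\beta)) \approx 8.6$, i.e., $m = 9$ for the rational approximation, whereas the quadrature requires $K \approx 4\beta m = 27$ nodes for a comparable accuracy.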
Furthermore, if we let \kk{$c_j := e^{2 y_j}$ and
\begin{align*}
P_{\ell,h}^Q := \prod_{j=-K^{-}}^{K^{+}} c_j^{-\beta} \left( \mathrm{Id}_h + c_j L_h \right),
\quad
P_{r,h}^Q := \frac{2 k \sin(\pi\beta)}{\pi}
\sum_{i=-K^{-}}^{K^{+}} \prod_{j \neq i} c_j^{-\beta}
\left( \mathrm{Id}_h + c_j L_h \right),
\end{align*}
we} find that the quadrature-based
approximation $u_{h,k}^Q$
can equivalently be defined
as the solution to the non-fractional SPDE
\begin{align}\label{e:quad-rat}
P_{\ell,h}^Q u_{h,k}^Q = P_{r,h}^{Q} \cW_h
\quad
\text{in } \mathcal{D}.
\end{align}
\begin{remark}\label{rem:quad}
A comparison of~\eqref{e:quad-rat} with~\eqref{e:uhr}
illustrates that $u_{h,k}^Q$
can be seen as a rational approximation
of degree $K^{-} + K^{+}$,
where the specific choice of the coefficients
is implied by the quadrature.
In combination with the remark above
that $K=4\beta m$ quadrature nodes are needed
to balance the errors,
this shows that the computational cost
for achieving a given accuracy
with the rational approximation from \S\ref{subsec:rat-approx}
is lower than with the quadrature method,
since $\beta > d/4$.
\end{remark}
\section{Parameter identifiability}\label{sec:measureproof}
This section contains the proof of Theorem~\ref{thm:measure}. For the proof, we will use the following theorem from \citet{stuart2010}, which we restate here for convenience.
\begin{theorem}[Stuart, 2010]\label{thm:stuart}
Two Gaussian measures $\mu_i = \proper{N}(m_i,\mathcal{C}_i)$,
$i\in\{1,2\}$, on a Hilbert space~$\mathcal{H}$
are either singular or equivalent.
They are equivalent if and only if the following three conditions
are satisfied:
\begin{enumerate}[label = \normalfont\Roman*.]
\item\label{stuart-1} $\operatorname{Im}\bigl(\mathcal{C}_1^{1/2}\bigr)
= \operatorname{Im}\bigl(\mathcal{C}_2^{1/2}\bigr) := E$,
\item\label{stuart-2} $m_1-m_2 \in E$,
\item\label{stuart-3} the operator $T:=
\bigl(\mathcal{C}_1^{-1/2}\mathcal{C}_2^{1/2}\bigr)
\bigl(\mathcal{C}_1^{-1/2}\mathcal{C}_2^{1/2}\bigr)^* - I$
is Hilbert--Schmidt in $\bar{E}$, where $\,^*$ denotes the $\mathcal{H}$-adjoint operator,
and $I$ the identity on $\mathcal{H}$.
\end{enumerate}
\end{theorem}
\begin{proof}[Proof of Theorem~\ref{thm:measure}]
Since the two Gaussian measures have the same mean,
we only have to verify conditions~\ref{stuart-1} and~\ref{stuart-3}
of Theorem~\ref{thm:stuart}.
We first prove that condition~\ref{stuart-1}
can hold only if $\beta_1=\beta_2$.
To this end, we use the equivalence of
condition~\ref{stuart-1} with the existence
of two constants $c', c''>0$ such that
\begin{align}\label{eq:stuart-lemma}
\scalar{v, \mathcal{C}_1 v}{L_2(\mathcal{D})}
\leq c'
\scalar{v, \mathcal{C}_2 v}{L_2(\mathcal{D})}
\quad
\text{and}
\quad
\scalar{v, \mathcal{C}_2 v}{L_2(\mathcal{D})}
\leq c''
\scalar{v, \mathcal{C}_1 v}{L_2(\mathcal{D})},
\end{align}
where $\mathcal{C}_i := \mathcal{Q}_i^{-1} = \tau_i^{-2}(\kappa_i^2 - \Delta)^{-2\beta_i}$,
$i\in\{1,2\}$,
see Lemma 6.15 of \citet{stuart2010}.
In what follows, let $\lambda_j^\Delta$, $j\in\mathbb{N}$,
denote the positive eigenvalues
(in nondecreasing order)
of the Dirichlet or Neumann
Laplacian $-\Delta\colon\mathscr{D}(\Delta)\to L_2(\mathcal{D})$,
where the type of homogeneous boundary conditions
is the same
as for $L_1$ and $L_2$.
By Weyl's law~\eqref{e:lambdaj}, there exist
constants $\underline{c},\bar{C}>0$ such that
\[
\underline{c} \, j^{2/d} \leq \lambda_j^{\Delta} \leq \bar{C} j^{2/d}
\qquad
\forall j \in \mathbb{N}.
\]
Furthermore, we let $\{e_j\}_{j\in\mathbb{N}}$
denote a system of eigenfunctions
corresponding to $\bigl\{\lambda_j^\Delta \bigr\}_{j\in\mathbb{N}}$
which is orthonormal in $L_2(\mathcal{D})$.
Now assume that $\beta_2 > \beta_1$
and let $j_0\in\mathbb{N}$ be sufficiently large so that
$\kappa_1^2 < \bar{C} j_0^{2/d}$.
Then, we have
\[
\frac{(\kappa_2^2 + \lambda_j^{\Delta})^{2\beta_2}}{
(\kappa_1^2 + \lambda_j^{\Delta})^{2\beta_1}}
>
\frac{\underline{c}^{2\beta_2}}{(2\bar{C})^{2\beta_1}} \,
j^{4(\beta_2-\beta_1)/d}
\qquad
\forall j\in\mathbb{N}, \
j \geq j_0.
\]
For any $N\in\mathbb{N}$,
we can thus
choose $j_* = j_*(N)\in\mathbb{N}$ sufficiently large
such that
\[
\scalar{e_{j_*}, \mathcal{C}_1 e_{j_*}}{L_2(\mathcal{D})}
=
\tau_1^{-2} (\kappa_1^2 + \lambda_{j_*}^{\Delta})^{-2\beta_1}
>
N
\tau_2^{-2} (\kappa_2^2 + \lambda_{j_*}^{\Delta})^{-2\beta_2}
=
N
\scalar{e_{j_*}, \mathcal{C}_2 e_{j_*}}{L_2(\mathcal{D})},
\]
in contradiction with the
first relation in~\eqref{eq:stuart-lemma}, and
$\mu_1, \mu_2$ are not equivalent
if $\beta_1\neq \beta_2$.
Furthermore, condition~\ref{stuart-1}
is satisfied if $\beta_1=\beta_2 = \beta>d/4$, since then,
for all $v\in L_2(\mathcal{D})$,
\begin{align*}
\scalar{v, \mathcal{C}_1 v}{L_2(\mathcal{D})}
&=
\sum_{j\in\mathbb{N}} \tau_1^{-2}
(\kappa_1^2 +\lambda_j^\Delta)^{-2\beta} \scalar{v,e_j}{L_2(\mathcal{D})}^2 \\
&\leq
\tau_1^{-2} \tau_2^2
\left(\min\left\{ 1, \kappa_1^{2}\kappa_2^{-2} \right\} \right)^{-2\beta}
\sum_{j\in\mathbb{N}} \tau_2^{-2}
(\kappa_2^2 +\lambda_j^\Delta)^{-2\beta} \scalar{v,e_j}{L_2(\mathcal{D})}^2 \\
&=
\tau_1^{-2} \tau_2^2
\max\left\{ 1, \kappa_1^{-4\beta}\kappa_2^{4\beta} \right\}
\scalar{v, \mathcal{C}_2 v}{L_2(\mathcal{D})},
\end{align*}
and, similarly,
$\scalar{v, \mathcal{C}_2 v}{L_2(\mathcal{D})}
\leq
\tau_2^{-2} \tau_1^2
\max\bigl\{ 1, \kappa_2^{-4\beta}\kappa_1^{4\beta} \bigr\} \scalar{v, \mathcal{C}_1 v}{L_2(\mathcal{D})}$.
Thus,~\eqref{eq:stuart-lemma} and condition~\ref{stuart-1}
of Theorem~\ref{thm:stuart} hold.
Assuming that $\beta_1=\beta_2 = \beta>d/4$,
it remains now to show that condition~\ref{stuart-3}
of Theorem~\ref{thm:stuart}
is satisfied if and only if $\tau_1=\tau_2$.
To this end, we first note that the operator
$T := \mathcal{C}_1^{-1/2}\mathcal{C}_2
\mathcal{C}_1^{-1/2} - I$ has eigenfunctions
$\{e_j\}_{j\in\mathbb{N}}$ and eigenvalues
\[
\tau_1^2 \tau_2^{-2}
(\kappa_1^2 + \lambda_j^\Delta)^{2\beta}
(\kappa_2^2 + \lambda_j^\Delta)^{-2\beta}
-1 ,
\qquad
j\in\mathbb{N}.
\]
Therefore, $T$ is Hilbert--Schmidt in $\bar{E}$
if and only if
\begin{equation}\label{eq:app:HS-T}
\sum_{j\in\mathbb{N}}
\left(
\tau_1^2 \tau_2^{-2}
(\kappa_1^2 + \lambda_j^\Delta)^{2\beta}
(\kappa_2^2 + \lambda_j^\Delta)^{-2\beta}
-1
\right)^2 < \infty.
\end{equation}
Since $x\mapsto (1+x)^{1/(2\beta)}$
is monotonically increasing in $x>0$,
again by the Weyl asymptotic,
for any $\varepsilon_0 > 0$, we can find
an index
$j_0\in\mathbb{N}$ such that
\begin{equation}\label{eq:app:j0}
\frac{\kappa_2^2}{\lambda_j^\Delta} + 1
\leq
(1+\varepsilon_0)^{1/(2\beta)}
\qquad
\forall j\in\mathbb{N}, \
j \geq j_0.
\end{equation}
Assume that $\tau_1\neq\tau_2$ and without loss of generality
let $\tau_1> \tau_2$.
Then pick $\varepsilon_0>0$ such that
$\tau_1^{2} \tau_2^{-2} \geq 1 + 2\varepsilon_0$,
and $j_0\in\mathbb{N}$ such that~\eqref{eq:app:j0} holds.
These choices give
\begin{align*}
\tau_1^2 \tau_2^{-2}
\left(
\frac{\kappa_1^2 + \lambda_j^\Delta}{\kappa_2^2 + \lambda_j^\Delta}
\right)^{2\beta}
\geq
\tau_1^2 \tau_2^{-2}
(\kappa_2^2/ \lambda_j^\Delta +1 )^{-2\beta}
\geq
(1 + 2\varepsilon_0)
(1 + \varepsilon_0)^{-1}>1,
\end{align*}
for all $j\in\mathbb{N}$ with $j \geq j_0$.
Thus, the series in~\eqref{eq:app:HS-T}
is unbounded,
\begin{align*}
\sum_{j\in\mathbb{N}}
\left(
\tau_1^2 \tau_2^{-2}
(\kappa_1^2 + \lambda_j^\Delta)^{2\beta}
(\kappa_2^2 + \lambda_j^\Delta)^{-2\beta}
-1
\right)^2
&\geq
\sum_{j\geq j_0}
\left(
(1 + 2\varepsilon_0)
(1 + \varepsilon_0)^{-1}
-1
\right)^2 \\
&=
\sum_{j\geq j_0}
\varepsilon_0^2
\left( 1 + \varepsilon_0
\right)^{-2}
=\infty .
\end{align*}
We conclude that condition~\ref{stuart-3}
of Theorem~\ref{thm:stuart}
is not satisfied if $\tau_1\neq\tau_2$.
Finally, let $\beta_1=\beta_2=\beta$, $\tau_1=\tau_2$
and assume without loss of generality that
$\kappa_2 > \kappa_1$ (if $\kappa_1=\kappa_2$,
\eqref{eq:app:HS-T} is evident).
By the mean value theorem, applied for the
function $x\mapsto x^{2\beta}$, for every $j\in\mathbb{N}$,
there exists
$\widetilde{\kappa}_j \in (\kappa_1, \kappa_2)$
such that
\[
(\kappa_2^2 + \lambda_j^\Delta)^{2\beta} - (\kappa_1^2 + \lambda_j^\Delta)^{2\beta}
=
2\beta (\widetilde{\kappa}_j^2 + \lambda_j^\Delta)^{2\beta-1} (\kappa_2^2-\kappa_1^2).
\]
Hence, we can bound the series in~\eqref{eq:app:HS-T}
as follows,
\begin{align*}
\sum_{j\in\mathbb{N}}
&\left(
\frac{ (\kappa_1^2 + \lambda_j^\Delta)^{2\beta} -
(\kappa_2^2 + \lambda_j^\Delta)^{2\beta} } {
(\kappa_2^2 + \lambda_j^\Delta)^{2\beta} }
\right)^2
=
4 \beta^2 (\kappa_2^2-\kappa_1^2)^2
\sum_{j\in\mathbb{N}}
\left(
\frac{
(\widetilde{\kappa}_j^2 + \lambda_j^\Delta)^{2\beta-1} }{
(\kappa_2^2 + \lambda_j^\Delta)^{2\beta}}
\right)^2 \\
&\leq
4 \beta^2 (\kappa_2^2-\kappa_1^2)^2
\sum_{j\in\mathbb{N}}
(\widetilde{\kappa}_j^2 + \lambda_j^\Delta)^{-2}
\leq
4 \beta^2 (\kappa_2^2-\kappa_1^2)^2
\underline{c}^{-2}
\sum_{j\in\mathbb{N}}
j^{-4/d} < \infty.
\end{align*}
Here, $\sum_{j\in\mathbb{N}} j^{-4/d}$ converges,
since $4/d > 1$ for $d\in\{1,2,3\}$.
This proves equivalence of
the Gaussian measures if
$\beta_1=\beta_2$ and $\tau_1=\tau_2$.
\end{proof}
\end{appendix}
\section{Introduction}
One of the main challenges in spatial statistics is to handle large data sets.
A reason for this is that the computational cost for likelihood evaluation and spatial prediction
is in general cubic in the number $N$ of observations of a Gaussian random field.
A tremendous amount of research has been devoted
to coping with this problem
and various methods have been suggested
\citep[see][for a recent review]{heaton2017methods}.
A common approach is to define an approximation $u_h$ of a Gaussian random field $u$
on a spatial domain $\mathcal{D}$ via a basis expansion,
\begin{equation}\label{e:basisexp}
u_h(\mv{s}) = \sum_{j=1}^n u_j \, \varphi_j(\mv{s}),
\qquad
\mv{s}\in\mathcal{D},
\end{equation}
where
$\varphi_j \colon \mathcal{D} \to \mathbb{R}$ are fixed basis functions
and $\mv{u} = (u_1,\ldots, u_n)^{\ensuremath{\top}} \sim \proper{N}(\mv{0},\mv{\Sigma}_{\mv{u}})$
are stochastic weights.
The computational effort can then be reduced by choosing \kk{$n\ll N$.
However,} methods based on such low-rank
approximations tend to remove fine-scale variations of the process.
For this reason, methods which instead exploit sparsity
for reducing the computational cost have gained popularity in recent years.
One can construct sparse approximations either of the covariance matrix of the measurements \citep{furrer2006covariance},
or of the inverse of the covariance matrix \citep{datta2016hierarchical}.
Alternatively, one can let the precision matrix~$\mv{\Sigma}_{\mv{u}}^{-1}$
of the weights in~\eqref{e:basisexp} be sparse,
as in the stochastic partial differential equation (SPDE) approach
by \cite{lindgren11}, where usually $n\approx N$.
To increase the accuracy \db{further}, several combinations of
the methods mentioned above have been considered \citep[e.g.,][]{sang2012full}
and multiresolution approximations of the process
have been exploited
\citep{nychka2015multiresolution, katzfuss2017multi}.
However, \db{theoretical error bounds have not been derived for most of these methods},
which necessitates tuning these approximations for each specific model.
In this work we propose a new class of approximations,
\kk{whose members we refer to as \emph{rational
stochastic partial differential equation approximations}
or \emph{rational approximations} for short}.
Our approach is similar to some of the above methods
in the sense that an expansion \eqref{e:basisexp}
with compactly supported basis functions is exploited.
The main novelty is that neither the covariance matrix $\mv{\Sigma}_{\mv{u}}$
nor the precision matrix $\mv{\Sigma}_{\mv{u}}^{-1}$ of the weights $\mv{u}$
\kk{is} assumed to be sparse.
The covariance matrix is instead a product
\kk{$\mv{\Sigma}_{\mv{u}} = \mv{P} \mv{Q}^{-1}\mv{P}^{\ensuremath{\top}}$,
where $\mv{P}$ and $\mv{Q}$ are sparse matrices
and the sparsity pattern of $\mv{P}$} is a subset
of \kk{that} of~$\mv{Q}$.
We show that the resulting approximation
facilitates inference and prediction
at the same computational cost as a Markov approximation
with $\mv{\Sigma}_{\mv{u}}^{-1} = \mv{Q}$, and at a higher accuracy.
For the theoretical framework of our approach,
we consider a Gaussian random field on a bounded domain $\mathcal{D} \subset \mathbb{R}^d$
which can be represented as the solution $u$ to the SPDE
\begin{align}\label{e:Lbeta}
L^\beta u = \cW \quad\text{in }\mathcal{D},
\end{align}
where $\cW$ is Gaussian white noise on $\mathcal{D}$,
and $L^{\beta}$ is a fractional power of
a second-order differential operator $L$
which determines the covariance \kk{structure} of $u$.
Our \kk{rational approximations are}
based on two components:
\begin{enumerate*}[label=(\roman*)]
\item a finite element method (FEM)
with continuous and piecewise polynomial basis functions $\{\varphi_j\}_{j=1}^n$, and
\item a rational approximation of the function $x^{-\beta}$.
\end{enumerate*}
We explain how to perform these two steps in practice
in order to explicitly compute the matrices $\mv{P}$ and $\mv{Q}$.
Furthermore, we derive an upper bound
for the strong mean-square error of the rational approximation.
This bound provides an explicit rate of convergence in terms
of the mesh size of the finite element discretization,
which facilitates tuning the approximation
without empirical tests
for each specific model.
Examples of random fields
which can be expressed as solutions
to SPDEs of the form \eqref{e:Lbeta}
include approximations of Gaussian Mat\'ern fields \citep{matern60}.
Specifically, if $\mathcal{D}:=\mathbb{R}^d$
a zero-mean Gaussian Mat\'ern field can be viewed
as a solution $u$ to
\begin{equation}\label{e:statmodel}
(\kappa^2 - \Delta)^{\beta} \, (\tau u) = \cW,
\end{equation}
where $\Delta$ denotes the Laplacian \citep{whittle63}.
The constant parameters $\kappa, \tau > 0$
determine the practical correlation range
and the variance of $u$.
The exponent~$\beta$ defines the smoothness parameter
$\nu$ of the Mat\'ern covariance function
via the relation $2\beta= \nu + d/2$
and, thus, the differentiability of the field.
\kk{For applications, variance, range and differentiability
typically are the most important properties
of the Gaussian field.
For this reason, the Mat\'ern model is
highly popular in spatial statistics
and has become the} \db{standard choice
for Gaussian process priors
in machine learning \citep{Rasmussen2006}.}
\kk{Since \eqref{e:statmodel}
is a special case of \eqref{e:Lbeta}
we believe that
the outcomes of this work
will be of great relevance
for many statistical applications,
see also \S\ref{sec:application}}.
In contrast to covariance-based models,
\db{the SPDE} \kk{approach additionally} has the advantage
that it allows for a number of generalizations
of stationary Mat\'ern fields including
\begin{enumerate*}[label=(\roman*)]
\item non-stationary fields
\kk{generated by non-stationary}
differential operators
\citep[e.g.,][]{fuglstad2015non-stationary},
\item fields on more general domains
such as the sphere
\citep[e.g.,][]{lindgren11}, and
\item non-Gaussian fields
\citep{wallin15}.
\end{enumerate*}
\cite{lindgren11} showed
that, if $2\beta\in\mathbb{N}$, one can construct accurate approximations of the form \eqref{e:basisexp}
for Gaussian Mat\'ern fields
on bounded domains $\mathcal{D}\subsetneq\mathbb{R}^d$,
such that $\mv{\Sigma}_{\mv{u}}^{-1}$ is sparse.
To this end, \eqref{e:statmodel} is considered on $\mathcal{D}$
and the differential operator $\kappa^2-\Delta$ is augmented
with appropriate boundary conditions.
The resulting \kk{SPDE} is then solved
approximately by means of a FEM.
\kk{Due to}
\db{the implementation in the R-INLA software,
this \kk{approach} has become \kk{widely used},
see \citep{bakka2018} for a \kk{comprehensive}
list of recent applications}.
However, the restriction $2\beta\in\mathbb{N}$ implies
a significant limitation for the
flexibility of the method.
In particular, it is therefore not
directly applicable to the important special case
of exponential covariance ($\nu=1/2$) on $\mathbb{R}^2$,
where $\beta=3/4$.
In addition, restricting the value of $\beta$
complicates \kk{estimating}
the smoothness of the process from data. \kk{In fact, $\beta$ typically is fixed
when the method is used in practice,
since identifying the value of $2\beta \in \mathbb{N}$
with the highest likelihood
requires a separate estimation of
all the other parameters in the model
for each value of $\beta$}.
A common justification for fixing $\beta$ is to argue
that it is not practicable to estimate
the smoothness of a random field from data.
\db{However, there are certainly applications for which
it is feasible to estimate the smoothness. We provide an example of this in~\S\ref{sec:application}}.
\db{Furthermore, having the correct smoothness of the model is particularly important for interpolation, and the fact that the Mat\'ern model allows for estimating the smoothness from data was the main reason why
\cite{stein99} recommended the model.}
The rational SPDE approach
\kk{presented in this work}
facilitates an estimation of $\beta$
from data by providing an approximation of $u$
which is computable for all fractional powers \kk{$\beta > d/4$ (i.e., $\nu>0$),
where $d\in\mathbb{N}$ is the dimension of the spatial
domain $\mathcal{D}\subset\mathbb{R}^d$}.
It thus makes it possible to include this parameter in
likelihood-based (or Bayesian) parameter estimation
for both stationary and non-stationary models.
Although the SPDE approach has been considered
in the non-fractional case also for non-stationary models,
\cite{lindgren11} showed convergence of the approximation
only for the stationary model \eqref{e:statmodel}.
Our analysis \kk{in \S\ref{sec:rational}} closes this gap
since we consider the general model \eqref{e:Lbeta}
which covers the non-stationary case
\db{and several other previously proposed
\kk{generalizations} of the Mat\'ern model}.
The structure of this article is as follows:
We briefly review existing methods
for the SPDE approach in the fractional case in \S\ref{sec:SPDE}.
In \S\ref{sec:rational} the rational SPDE approximation
is introduced and a result on its strong convergence is stated.
The procedure of \kk{applying
the rational SPDE approach to} statistical inference
is addressed in \S\ref{sec:comp}.
\S\ref{sec:numerics} contains numerical experiments
which illustrate the accuracy of the proposed method.
\db{The identifiability of the parameters in the Mat\'ern SPDE model \eqref{e:statmodel} is discussed in \S\ref{sec:inference}, where we derive necessary and sufficient conditions for equivalence of the induced Gaussian measures.}
In \S\ref{sec:application} we present an application
to climate data,
where we consider fractional and non-fractional models
for both stationary and non-stationary covariances.
We conclude with a discussion in \S\ref{sec:discussion}.
Finally, the article contains
four appendices providing details
about \begin{enumerate*}[label=(\Alph*)]
\item the finite element discretization,
\item the convergence analysis,
\item a comparison with the quadrature method by \cite{bolin2017numerical}, and
\item the equivalence of Gaussian measures.
\end{enumerate*}
The method developed in this work has been implemented
in the R \citep{Rlanguage} package rSPDE, \db{available online at \url{https://bitbucket.org/davidbolin/rspde}}.
\section{The SPDE approach in the fractional case until now}\label{sec:SPDE}
A reason for why the approach by \cite{lindgren11}
only works for integer values of $2\beta$
is given by \cite{rozanov1977markov},
who showed that a Gaussian random field
on $\mathbb{R}^d$ is Markov
if and only if its spectral density
can be written as the reciprocal of a polynomial,
$\widetilde{S}(\mathbf{k}) = (\sum_{j=0}^m b_j \|\mathbf{k}\|^{2j})^{-1}$.
Since the spectrum of a Gaussian Mat\'ern field is
\begin{equation}\label{eq:maternSpec}
S(\mathbf{k}) = \frac{1}{(2\pi)^d}\frac{1}{(\kappa^2 + \|\mathbf{k}\|^2)^{2\beta}},
\qquad
\mathbf{k}\in\mathbb{R}^d,
\end{equation}
the precision matrix $\mv{Q}$ will therefore
not be sparse unless $2\beta\in\mathbb{N}$. For $2\beta\notin\mathbb{N}$,
\cite{lindgren11} suggested to compute a Markov approximation
by choosing $m = \ceil{2\beta}$ and selecting the coefficients
$\mv{b} = (b_1, \ldots, b_m)^{\ensuremath{\top}}$
so that the deviation between the spectral densities
$\int_{\mathbb{R}^d} w(\mathbf{k})(S(\mathbf{k}) - \widetilde{S}(\mathbf{k}))^2 \,\mathrm{d}\mathbf{k}$
is minimized.
For this measure of deviation, $w$ is some suitable weight function
which should be chosen to get a good approximation
of the covariance function.
For the method to be useful in practice,
the coefficients $b_j$ should be given explicitly in terms
of the parameters~$\kappa$ and $\nu$.
Because of this, \cite{lindgren11}
proposed a weight function that enables an analytical evaluation of the integral,
\begin{align*}
\int_{\kappa^2}^{\infty}\Bigl[ z^{2\beta} - \sum_{j=0}^m b_j (z-\kappa^2)^{j} \Bigr]^2 z^{-2m -1 - \theta} \, \mathrm{d} z,
\end{align*}
where $\theta > 0$ is a tuning parameter.
By differentiating this integral with respect to the parameters and
setting the differentials equal to zero,
a system of linear equations is obtained,
which can be solved to find the coefficients $\mv{b}$.
The resulting approximation depends strongly on $\theta$,
and one could use numerical optimization to find a good value of $\theta$
for a specific value of $\beta$, or use the choice $\theta = 2\beta - \floor{2\beta}$,
which approximately minimizes the maximal distance between the covariance functions \citep{lindgren11}.
This method was used for the comparison in \citep{heaton2017methods}, and we will use it
as a baseline method when analyzing the accuracy of the rational SPDE approximations in later sections.
Another Markov approximation based on the spectral density
was proposed by \cite{roininen2014sparse}.
These Markov approximations may be sufficient in certain applications;
however,
any approach based on the spectral density
or the covariance function is difficult to generalize
to models on more general domains than~$\mathbb{R}^d$,
non-stationary models, or non-Gaussian models.
Thus, such methods cannot be used if the full potential
of the SPDE approach is to be retained for fractional values of $\beta$.
There is a rich literature on methods for solving deterministic fractional PDEs
\citep[e.g.,][]{bonito2015, gavrilyuk2004, jin2015, nochetto2015},
and some of the methods that have been proposed could be used
to compute approximations of the solution to the SPDE~\eqref{e:statmodel}.
However, any deterministic problem becomes
more challenging when randomness is included.
Even methods developed specifically for sampling solutions to
SPDEs like~\eqref{e:statmodel} may be
\db{difficult to use directly for statistical applications, \kk{when}
likelihood evaluations, spatial predictions \kk{or} posterior sampling
are needed}.
\kk{For instance, it has been unclear
if the sampling approach by \cite{bolin2017numerical}, which is
based on a quadrature approximation
for an integral representation of the
fractional inverse $L^{-\beta}$,
could be used for statistical inference.}
\db{In Appendix~\ref{subsec:rat-comparequad} we show
that it can be viewed as a
(less computationally efficient) version
of the rational SPDE approximations developed in this work.
\kk{Consequently}, the results in \S\ref{sec:comp}
on how to use the rational SPDE approach
for inference \kk{apply} also to that method.
In \S\ref{subsec:numerics-matern}
we compare the performance of
the two methods in practice within the scope of a numerical experiment.}
\section{Rational approximations for fractional SPDEs}\label{sec:rational}
In this section we propose an explicit scheme for
approximating solutions to a class of SPDEs including~\eqref{e:statmodel}.
Specifically, in
\kk{\S\ref{subsec:fractional}--\S\ref{subsec:discrete}}
we introduce
the fractional order equation of interest
as well as its finite element discretization.
In \S\ref{subsec:rat-approx}
we propose a non-fractional equation, whose solution
after specification of certain coefficients
approximates the random field of interest.
For this approximation, we provide
\kk{a rigorous error bound}
in \S\ref{subsec:error}.
Finally, in \S\ref{subsec:rat-coeff} we address the computation of the
coefficients in the rational approximation.
\subsection{The fractional order equation}\label{subsec:fractional}
\kk{With the objective of allowing}
for more general Gaussian random fields
than the Mat\'ern \kk{class},
we consider the fractional order equation \eqref{e:Lbeta},
where $\mathcal{D} \subset \mathbb{R}^d$, $d\in\{1,2,3\}$,
is \kk{an open}, bounded, convex \kk{polytope},
\kk{with closure $\overline{\mathcal{D}}$}, and
$\cW$ is Gaussian white noise in $L_2(\mathcal{D})$.
Here and below, $L_2(\mathcal{D})$ is the Lebesgue space
of square-integrable real-valued functions,
which is equipped with the inner product
$\scalar{w,v}{L_2(\mathcal{D})} := \int_{\mathcal{D}} w(\mv{s}) v(\mv{s}) \, \mathrm{d} \mv{s}$.
The Sobolev space of order $k\in\mathbb{N}$ is denoted by
$H^k(\mathcal{D}) := \left\{ w \in L_2(\mathcal{D}) :
D^{\gamma} w \in L_2(\mathcal{D}) \ \forall \, |\gamma|\leq k \right\}$
and $H^1_0(\mathcal{D})$ is the subspace of $H^1(\mathcal{D})$ containing
functions with vanishing trace.
We assume that the operator
$L\colon\mathscr{D}(L) \to L_2(\mathcal{D})$
is a linear second-order
differential operator in divergence form,
\begin{equation}\label{e:L-div}
L u = - \nabla \cdot(\mv{H} \nabla u) + \kappa^2 u,
\end{equation}
whose domain of definition $\mathscr{D}(L)$
depends on the choice of boundary conditions
on $\partial\mathcal{D}$.
Specifically, we impose homogeneous Dirichlet
or Neumann boundary conditions
and set $V=H^1_0(\mathcal{D})$ or $V=H^1(\mathcal{D})$,
respectively.
Furthermore, we let
the functions $\mv{H}$ and $\kappa$
in~\eqref{e:L-div}
satisfy the following assumptions:
\begin{enumerate}[label=\Roman*.]
\item\label{ass:coeff-H}
$\mv{H}\colon\overline{\mathcal{D}}\to\mathbb{R}^{d\times d}$
is symmetric,
Lipschitz continuous on the closure $\overline{\mathcal{D}}$, i.e.,
there exists a constant
$C_{\operatorname{Lip}} > 0$ such that
\[
| H_{ij}(\mv{s}) - H_{ij}(\mv{s'}) |
\leq
C_{\operatorname{Lip}} \norm{\mv{s} - \mv{s'}}{}
\quad
\forall \mv{s}, \mv{s'}\in \overline{\mathcal{D}},
\quad
\forall i,j\in\{1,\ldots,d\},
\]
%
and
uniformly positive definite, i.e.,
\[
\exists C_0 > 0 :
\quad
\operatorname{ess} \inf_{\mv{s}\in\mathcal{D}} \boldsymbol{\xi}^\ensuremath{\top} \mv{H}(\mv{s}) \boldsymbol{\xi}
\geq C_0 \|\boldsymbol{\xi} \|^2
\qquad
\forall \boldsymbol{\xi} \in \mathbb{R}^d;
\]
\item\label{ass:coeff-kappa}
$\kappa \colon \mathcal{D} \to \mathbb{R}$
is bounded,
$\kappa\in L_{\infty}(\mathcal{D})$.
\end{enumerate}
If \ref{ass:coeff-H}--\ref{ass:coeff-kappa}
are satisfied, the differential operator
$L$ in~\eqref{e:L-div} induces a symmetric,
continuous and coercive
bilinear form $a_L$ on $V$,
\begin{equation}\label{e:a-L}
a_L \colon V\times V \to \mathbb{R},
\qquad
a_L (u,v)
:=
(\mv{H}\nabla u, \nabla v)_{L_2(\mathcal{D})}
+
(\kappa^2 u, v)_{L_2(\mathcal{D})},
\end{equation}
and its domain
is given by
$\mathscr{D}(L) =
H^2(\mathcal{D})\cap V$.
\kk{Furthermore,
Weyl's law
\citep[see, e.g.,][Thm.~6.3.1]{Davies1995}
shows that the eigenvalues $\{\lambda_j\}_{j\in\mathbb{N}}$
of the elliptic differential operator~$L$
in~\eqref{e:L-div},
in nondecreasing order,
satisfy the spectral asymptotics
\begin{align}\label{e:lambdaj}
\lambda_j \eqsim j^{2/d}
\qquad
\text{as }j\to\infty.
\end{align}
Thus}, \db{existence and uniqueness of the solution $u$
to \eqref{e:Lbeta}
\kk{readily} follow from Proposition~2.3 and Lemma~2.1
of \cite{bolin2017numerical}. We formulate this as a proposition.}
\begin{proposition}\label{prop:regularity}
\db{Let $L$ be given by \eqref{e:L-div} where $\mv{H}$ and $\kappa$
\kk{satisfy} the assumptions \ref{ass:coeff-H}--\ref{ass:coeff-kappa}
above and assume $\beta > d/4$.
Then \eqref{e:Lbeta} has a \kk{unique solution $u$
in $L_2(\Omega;L_2(\mathcal{D}))$}.}
\end{proposition}
\kk{The assumptions \ref{ass:coeff-H}--\ref{ass:coeff-kappa}
on the differential operator $L$
are satisfied, e.g., by the
Mat\'ern operator $L = \kappa^2 - \Delta$,
in which case the condition $\beta>d/4$
on the fractional exponent in~\eqref{e:Lbeta}
corresponds to a positive smoothness parameter $\nu$,
i.e., to a non-degenerate field.
Moreover, the equation~\eqref{e:Lbeta}
as considered in our work
includes several previously proposed non-fractional
non-stationary models as special cases,
such as} \db{the non-stationary Mat\'ern models
by \cite{lindgren11},
the models with locally varying anisotropy
by \cite{fuglstad2015non-stationary},
and the barrier models by \cite{bakka2019}.
Thus, Proposition~\ref{prop:regularity} shows existence
and uniqueness of the fractional versions of all these models,
which can be treated \kk{in practice by}
using the results of the following sections}.
\subsection{\kk{The discrete model}}\label{subsec:discrete}
In order to discretize the problem,
we assume that
$V_h \subset V$ is a finite element space with continuous
piecewise \kk{linear} basis functions $\{\varphi_{j}\}_{j=1}^{n_h}$
defined with respect to
a triangulation $\mathcal{T}_h$ of the domain $\overline{\mathcal{D}}$
of mesh \kk{width}
$h := \max_{T\in\mathcal{T}_h} h_T$, where $h_T := \operatorname{diam}(T)$
\db{is the diameter of the element $T\in\mathcal{T}_h$}.
Furthermore, the family $(\mathcal{T}_h)_{h\in(0,1)}$ of triangulations
inducing the finite-dimensional subspaces $(V_h)_{h\in(0,1)}$ of $V$
is supposed to be quasi-uniform, i.e.,
there exist constants $C_1, C_2 > 0$ such that
$\rho_T \geq C_1 h_T$ and
$h_T \geq C_2 h$ for all
$T\in\mathcal{T}_h$ and $h \in (0,1)$.
Here,
\kk{$\rho_T>0$} is the radius of the largest ball inscribed in $T\in\mathcal{T}_h$.
The \kk{discrete operator $L_h\colon V_h \to V_h$ is defined
in terms of the bilinear form $a_L$ in~\eqref{e:a-L} via
the relation
$\scalar{L_h \phi_h, \psi_h}{L_2(\mathcal{D})}
=
a_L( \phi_h, \psi_h )$
which holds for all
$\phi_h,\psi_h \in V_h$}.
We then consider the following SPDE
on the \kk{finite-dimensional} state space $V_h$,
\begin{align}\label{e:uh}
L_h^\beta u_h = \cW_h
\quad
\text{in }\mathcal{D},
\end{align}
where $\cW_h$ is Gaussian white noise in
$V_h$, i.e.,
$\cW_h = \sum_{j=1}^{n_h} \xi_j e_{j,h}$
for a basis $\{e_{j,h}\}_{j=1}^{n_h}$ of $V_h$ which
is orthonormal in $L_2(\mathcal{D})$ and
$\xi_j \sim \proper{N}(0,1)$ for all $j=1,\ldots,n_h$.
\kk{We note that the
assumptions~\ref{ass:coeff-H}--\ref{ass:coeff-kappa}
from \S\ref{subsec:fractional}
on the functions $\mv{H}$ and $\kappa$
combined with the convexity of $\mathcal{D}$
imply that the operator
$L$ in~\eqref{e:L-div} is
$H^2(\mathcal{D})$-regular, i.e.,
if $f\in L_2(\mathcal{D})$, then the solution
$u\in V$ to $Lu=f$ satisfies $u\in H^2(\mathcal{D}) \cap V$,
see, e.g.,~\cite[Thm.~3.2.1.2]{Grisvard:2011}
for the case of Dirichlet boundary conditions.
By combining this observation
with the spectral asymptotics~\eqref{e:lambdaj}
we see that the assumptions in Lemmata
3.1 and 3.2 of \cite{bolin2017numerical} are satisfied
(since then, in their notation, $r=s=q=2$ and $\alpha=2/d$)
and we obtain an
error estimate for the finite element approximation
$u_h = L_h^{-\beta} \cW_h$
in~\eqref{e:uh} for all $\beta\in(d/4,1)$.
Furthermore, since their derivation
requires only that $\beta>d/4$,
we can formulate this result for all such
values of $\beta$ in the following proposition.}
\begin{proposition}\label{prop:uh}
\kk{Suppose that $\beta > d/4$ and
that $L$ is given by \eqref{e:L-div}
where $\mv{H}$ and $\kappa$ satisfy the
assumptions~\ref{ass:coeff-H}--\ref{ass:coeff-kappa}
from \S\ref{subsec:fractional}.
Let $u$, $u_h$ be the solutions
to~\eqref{e:Lbeta} and~\eqref{e:uh}, respectively.
Then, there exists a constant $C>0$ such that,
for sufficiently small $h$,}
\begin{align*}
\kk{\norm{u - u_h}{L_2(\Omega;L_2(\mathcal{D}))}
\leq C h^{\min\{ 2\beta-d/2, \,2 \}}.}
\end{align*}
\end{proposition}
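For example, for the exponential covariance on $\mathbb{R}^2$, i.e., $d=2$ and $\beta = 3/4$, Proposition~\ref{prop:uh} yields the rate $2\beta - d/2 = 1/2$, whereas the rate saturates at $2$ once $\beta \geq 3/2$.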
\subsection{The rational approximation}\label{subsec:rat-approx}
Proposition~\ref{prop:uh}
shows that the mean-square error between $u$ and $u_h$ in $L_2(\mathcal{D})$
converges to zero as $h\to 0$.
It remains to describe
how an approximation of the random field $u_h$
with values in the finite-dimensional state space $V_h$
can be constructed.
For $\beta\in\mathbb{N}$ one can
use, e.g., the iterated finite element
method presented in Appendix~\ref{app:iter-fem}
to compute $u_h$ in~\eqref{e:uh} directly.
In the following, we construct approximations of
$u_h$ if $\beta\not\in\mathbb{N}$ is a fractional exponent.
For this purpose, we aim at finding a
non-fractional equation
\begin{align}\label{e:uhr}
P_{\ell,h} u_{h,m}^R = P_{r,h} \cW_h
\quad
\text{in }\mathcal{D},
\end{align}
such that $u_{h,m}^R$ is a
good approximation of $u_h$, and where
the operator $P_{j,h} := p_j(L_h)$
is defined in terms of a polynomial
$p_{j}$ of degree \kk{$m_j\in\mathbb{N}_0$, for $j\in\{\ell,r\}$}.
Since the so-defined operators $P_{\ell,h}$, $P_{r,h}$
commute, this will lead to a nested
SPDE model of the form
\begin{equation}\label{e:nested-discrete}
\begin{split}
P_{\ell,h} x_{h,m} &= \cW_h \hspace*{1.15cm} \text{in }\mathcal{D}, \\
u^R_{h,m} &= P_{r,h} x_{h,m} \quad \text{in }\mathcal{D},
\end{split}
\end{equation}
which facilitates efficient computations,
see \S\ref{sec:comp} and Appendix~\ref{app:iter-fem}.
Comparing the initial equation~\eqref{e:Lbeta} with
\begin{align}\label{e:ur}
P_\ell u_m^R = P_r \cW
\quad
\text{in }\mathcal{D},
\end{align}
where $P_j := p_j(L)$, $j\in\{\ell,r\}$,
motivates the choice $m_\ell - m_r \approx \beta$
in order to obtain a similar smoothness of
$u_m^R = (P_r^{-1} P_\ell)^{-1} \cW$
and $u = L^{-\beta} \cW$ in~\eqref{e:Lbeta}.
In practice, we \kk{first} choose a degree $m\in\mathbb{N}$
and \db{then set}
\begin{align}\label{e:betac}
m_r := m
\qquad
\text{and}
\qquad
m_\ell := m + m_{\beta},
\qquad
\text{where}
\qquad
m_{\beta} := \max\{ 1, \floor{\beta} \}.
\end{align}
In this case, the solution $u_{m}^R$ of~\eqref{e:ur}
has the same smoothness as the solution
$v$ of the non-fractional equation
$L^{\floor{\beta}} v = \cW$, if $\beta\geq 1$, and as $v$ in
$L v = \cW$, if $\beta<1$.
Furthermore, for fixed $h$, the degree $m$ controls
the accuracy of the approximation~$u_{h,m}^R$.
We now turn to the problem of defining
the non-fractional operators $P_{\ell,h}$ and~$P_{r,h}$
in~\eqref{e:uhr}.
In order to compute $u_h$ in~\eqref{e:uh} directly,
one would have to apply the discrete fractional inverse
$L_h^{-\beta}$ to the noise term $\cW_h$ on the right-hand side.
Therefore, a first idea would be to approximate
the function $x^{-\beta}$ on the spectrum of $L_h$
by a rational function $\widetilde{r}$ and
to use $\widetilde{r}(L_h) \cW_h$
as an approximation of $u_h$.
This is, in essence, the approach
proposed by \cite{harizanov2016optimal}
to find optimal solvers for the problem $\mv{L}^{\beta}\mv{x} = \mv{f}$,
where $\mv{L}$ is a sparse symmetric positive definite matrix.
However, the spectra of $L$ and of $L_h$ as $h\to 0$
\kk{(considered as operators on $L_2(\mathcal{D})$)}
are unbounded and, thus, it would be necessary
to normalize the spectrum of $L_h$ for every $h$,
since it is not feasible to construct the rational approximation $\widetilde{r}$
on an unbounded interval.
We aim at an approximation
$L_h^{-\beta} \approx p_\ell(L_h)^{-1} p_r(L_h)$,
where in practice the \kk{choice} of $p_\ell$ and $p_r$
can be made independent of $L_h$ and $h$.
Thus, we pursue another idea.
In contrast to the
\kk{operator $L$ in~\eqref{e:L-div},
its inverse $L^{-1}\colon L_2(\mathcal{D}) \to L_2(\mathcal{D})$
is compact and, thus}, the spectra
of $L^{-1}$ and of $L_{h}^{-1}$ are bounded subsets of
the compact intervals $J:=\bigl[ 0,\lambda_{1}^{-1} \bigr]$
and
$J_h := \bigl[ \lambda_{n_h,h}^{-1}, \lambda_{1,h}^{-1} \bigr] \subset J$,
respectively,
where $\lambda_{1,h}, \lambda_{n_h,h} > 0$
are the smallest and the largest eigenvalue of $L_h$.
\kk{This motivates
a rational approximation $r$
of the function $f(x) := x^\beta$
on~$J$ and to deduce
the non-fractional equation~\eqref{e:uhr}
from $u_{h,m}^R = r(L_h^{-1}) \cW_h$}.
\kk{In order to achieve our envisaged choice \eqref{e:betac}
of different polynomial degrees $m_\ell$ and~$m_r$,
we decompose $f$ via $f(x) = \hat{f}(x) x^{m_\beta}$,
where $\hat{f}(x):=x^{\beta-m_\beta}$.
We approximate $\hat{f}\approx\hat{r} := \frac{q_1}{q_2}$ on $J_h$,
where $q_1(x) := \sum_{i=0}^m c_i x^i$ and
$q_2(x) := \sum_{j=0}^{m+1} b_j x^j$ are polynomials of degree $m$ and $m+1$, respectively,
and use
$r(x) := \hat{r}(x) x^{m_\beta}$ as an approximation for $f$.
This construction leads
(after expanding the fraction with~$x^m$)
to a rational approximation
$\frac{p_r}{p_\ell}$ of $x^{-\beta}$,
\begin{align}\label{e:xbeta}
x^{-\beta} = f(x^{-1})
\approx \hat{r}(x^{-1}) x^{-m_\beta}
= \frac{q_1(x^{-1})}{q_2(x^{-1}) x^{m_\beta}}
= \frac{\sum_{i=0}^{m} c_i x^{m-i}}{\sum_{j=0}^{m+1} b_j x^{m + m_\beta -j}},
\end{align}
where the polynomials
$p_r(x) := \sum_{i=0}^{m} c_i x^{m-i}$
and
$p_\ell(x) := \sum_{j=0}^{m+1} b_j x^{m + m_\beta -j}$
are of
degree $m$
and $m+m_\beta$, respectively,
i.e., \eqref{e:betac} is satisfied}.
The operators $P_{\ell,h}$, $P_{r,h}$ in \eqref{e:uhr}
are defined accordingly,
\begin{align}\label{e:loprop}
P_{\ell,h} := p_\ell(L_h) = \sum_{j=0}^{m+1} b_j L_h^{m+m_\beta-j},
\qquad
P_{r,h} := p_r(L_h) = \sum_{i=0}^{m} c_i L_h^{m-i}.
\end{align}
Their continuous counterparts in~\eqref{e:ur} are
$P_\ell := p_\ell(L)$ and $P_r := p_r(L)$.
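For example, for $\beta\in(d/4,1)$ and degree $m=1$, we have $m_\beta = 1$ and \eqref{e:loprop} reduces to $P_{r,h} = c_0 L_h + c_1\,\mathrm{Id}_h$ and $P_{\ell,h} = b_0 L_h^{2} + b_1 L_h + b_2\,\mathrm{Id}_h$, so that $u_{h,1}^R$ solves a non-fractional equation which is of second order in $L_h$.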
We note that, for \eqref{e:betac} to hold,
any choice $m_2 \in \{0,1,\ldots, m+m_\beta\}$
would have been permissible
for the polynomial degree of $q_2$,
if $m$ is the degree of $q_1$.
The reason for setting $m_2 = m+1$ is
that this is the maximal choice which
is universally applicable
for all values of
\kk{$\beta > d/4$}.
In the following we refer to $u_{h,m}^R$ in \eqref{e:uhr}
with $P_{\ell,h}$, $P_{r,h}$ defined by \eqref{e:loprop}
as the rational \kk{SPDE} approximation of degree $m$.
We emphasize that this approximation
relies (besides the finite element discretization)
only on the rational approximation of the function~$\hat{f}$.
In particular, no information about the operator~$L$
except for a lower bound of the eigenvalues
is needed.
In the Mat\'ern case, we have $L = \kappa^2 - \Delta$
(with certain boundary conditions)
and an obvious lower bound of the eigenvalues
is therefore given by $\kappa^2$.
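For illustration, consider $\beta = 3/4$
(the exponential covariance on $\mathbb{R}^2$, for which $m_\beta = 1$)
and degree $m = 1$:
then $p_r(x) = c_0 x + c_1$ and $p_\ell(x) = b_0 x^2 + b_1 x + b_2$,
so that \eqref{e:uhr} is the non-fractional equation
\begin{equation*}
\bigl( b_0 L_h^2 + b_1 L_h + b_2 \bigr)\, u_{h,1}^R
= \bigl( c_0 L_h + c_1 \bigr)\, \cW_h,
\end{equation*}
with the five coefficients given in the row $m=1$ of Table~\ref{tab:coeffs}
in \S\ref{subsec:rat-coeff}.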
\subsection{An error bound for the
rational approximation}\label{subsec:error}
In this \kk{subsection} we justify the
\kk{approach proposed
in \S\ref{subsec:discrete}--\S\ref{subsec:rat-approx}}
by providing an upper bound for the strong mean-square error
$\norm{u - u_{h,m}^R}{L_2(\Omega; L_2(\mathcal{D}))}$.
Here $u$ and $u_{h,m}^R$ are the solutions of~\eqref{e:Lbeta} and \eqref{e:uhr}
and the rational approximation $u_{h,m}^R$
is constructed as described in \S\ref{subsec:rat-approx},
assuming that $\hat{r}=\hat{r}_h$ is the $L_\infty$-best
rational approximation of $\hat{f}(x) = x^{\beta - m_\beta}$
on the interval $J_h$ for each $h$.
This means that \kk{$\hat{r}_h$}
minimizes the error in the supremum norm
on $J_h$ among all
rational approximations of the chosen degrees in numerator and denominator.
How such approximations can be computed is discussed in \S\ref{subsec:rat-coeff}.
The theoretical analysis presented in Appendix~\ref{app:convergence}
results in the following theorem,
showing strong convergence of the rational approximation $u_{h,m}^R$
to the exact solution $u$.
\begin{theorem}\label{thm:strong}
\kk{Suppose that $\beta > d/4$ and
that $L$ is given by \eqref{e:L-div}
where $\mv{H}$ and $\kappa$ satisfy the
assumptions~\ref{ass:coeff-H}--\ref{ass:coeff-kappa}
from \S\ref{subsec:fractional}.
Let $u$, $u_{h,m}^R$ be the solutions
to~\eqref{e:Lbeta} and~\eqref{e:uhr}, respectively.
Then, there is a constant $C>0$,
independent of $h, m$,
such that,
for sufficiently small $h$,}
\begin{align*}
\kk{\norm{u - u_{h,m}^R}{L_2(\Omega;L_2(\mathcal{D}))}
\leq C \Bigl( h^{\min\{ 2\beta-d/2, \, 2 \}}
+ h^{\min\{2(\beta-1),\, 0\}-d/2}
e^{-2\pi \sqrt{|\beta-m_\beta|m}} \Bigr).}
\end{align*}
\end{theorem}
\begin{remark}\label{rem:calibrate-h-m}
In order to calibrate the accuracy of the rational approximation
with the finite element error,
we choose $m\in\mathbb{N}$ such that
\kk{$ e^{-2\pi \sqrt{|\beta - m_\beta| m}}
\propto h^{2 \max\{\beta,\, 1\}}$.
The strong rate of mean-square convergence is then
$\min\{ 2\beta - d/2, \, 2 \}$.}
\end{remark}
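For illustration, taking the proportionality constant in Remark~\ref{rem:calibrate-h-m} to be one
and solving $e^{-2\pi \sqrt{|\beta - m_\beta| m}} = h^{2\max\{\beta,\,1\}}$ for $m$ gives
\begin{equation*}
m = \left( \frac{\max\{\beta,1\}\,\ln(1/h)}{\pi\sqrt{|\beta-m_\beta|}} \right)^{2}.
\end{equation*}
For instance, for $\beta = 3/4$ (so that $m_\beta = 1$) and $h = 1/40$,
this gives $m = \bigl(2\ln(40)/\pi\bigr)^2 \approx 5.5$, i.e., $m = 6$;
cf.\ the values $m \approx 6, 7, 8$ quoted for the three finite element meshes
in Section~\ref{sec:numerics}.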
\begin{remark}\label{rem:p-fem}
\kk{If the functions $\mv{H}$ and $\kappa$ of the
operator $L$ in~\eqref{e:L-div} are smooth,
$\mv{H}\in C^\infty(\overline{\mathcal{D}})^{d\times d}$ and
$\kappa\in C^\infty(\overline{\mathcal{D}})$
(as, e.g., in the Mat\'ern case)
and if the domain $\mathcal{D}$ has a smooth boundary,
the higher-order strong mean-square convergence rate
$\min\{ 2\beta - d/2, \, p+1 \}$
can be proven for a finite element method
with continuous basis functions
which are piecewise polynomial
of degree at most $p\in\mathbb{N}$.
Thus, for $\beta>1$,
finite elements with
$p > 1$ may be meaningful}.
\end{remark}
\subsection{Computing the coefficients of the rational approximation}\label{subsec:rat-coeff}
As explained in \S\ref{subsec:rat-approx}, the coefficients
$\{c_i\}_{i=0}^{m}$ and $\{b_j\}_{j=0}^{m+1}$ needed for defining the operators
$P_{\ell,h}$, $P_{r,h}$ in~\eqref{e:loprop}
are obtained from a rational approximation $\hat{r} = \hat{r}_h$
of $\hat{f}(x) = x^{\beta - m_\beta}$ on $J_h$.
For each $h$, this approximation can, e.g., be computed
with the second Remez algorithm \citep{remez1934determination},
which generates the coefficients
of the $L_\infty$-best approximation.
The error analysis for the resulting
approximation $u_{h,m}^R$ in~\eqref{e:uhr} was performed
in \S\ref{subsec:error}.
Despite the theoretical benefit of generating the
$L_\infty$-best approximation, the Remez algorithm
is often unstable in computations
and, therefore, we use a different method in our simulations.
However, versions of the Remez scheme were used, e.g., by \cite{harizanov2016optimal}.
A simpler and computationally more stable way
of choosing the rational approximation is, for instance,
the Clenshaw--Lord Chebyshev--Pad\'e algorithm \citep{baker1996pade}.
\db{To \kk{further} improve the stability of the method,
we will rescale the operator \kk{$L$} so that
\kk{its eigenvalues are bounded from below by one},
which for the Mat\'ern case corresponds
to reformulating the SPDE~\eqref{e:statmodel} as
$(\mathrm{Id} - \kappa^{-2}\Delta)^{\beta} (\widetilde{\tau} u) = \cW$
\kk{and using $L = \mathrm{Id} - \kappa^{-2}\Delta$,
where $\mathrm{Id}$ denotes the identity on $L_2(\mathcal{D})$
and $\widetilde{\tau} := \kappa^{2\beta} \tau$}. }
In order to avoid computing
a different rational approximation $\hat{r}$
for each finite element mesh \kk{width} $h$,
in practice we compute the approximation $\hat{r}$
only once on the interval $J_{*} := [\delta,1]$,
where $\delta\in(0,1)$ should ideally be chosen
such that $J_h \subset J_{*}$ for all considered
mesh sizes $h$.
For the numerical experiments later,
we will use $\delta = 10^{-(5+m)/2}$
when computing rational approximations of order $m$,
which gives acceptable results for all values of $\beta$.
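In Matlab, this one-off computation can be sketched as follows;
this is a minimal illustration (not the script used for the experiments),
assuming that Chebfun and its \texttt{chebpade} routine,
which uses the Clenshaw--Lord mode by default, are available:
\begin{verbatim}
% Coefficients of rhat ~ fhat(x) = x^(beta - m_beta) on J* = [delta, 1].
beta = 3/4; m_beta = 1; m = 2;      % exponential covariance on R^2
delta = 10^(-(5 + m)/2);            % lower end of J*
fhat = chebfun(@(x) x.^(beta - m_beta), [delta, 1]);
[p, q] = chebpade(fhat, m, m + 1);  % rhat = p/q of type [m / m+1]
c = poly(p); b = poly(q);           % monomial coefficients, leading first
b = b / c(1); c = c / c(1);         % normalize so that c_m = 1, cf. Table 1
\end{verbatim}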
As an example, the coefficients
computed with the Clenshaw--Lord Chebyshev--Pad\'e algorithm
on $J_{*}$
for the case of exponential covariance on $\mathbb{R}^2$
are shown in Table~\ref{tab:coeffs}.
\begin{table}
\centering
\begin{tabular}{lccccccccc}
\toprule
$m$ & $b_0$ & $c_0$ & $b_1$ & $c_1$ & $b_2$ & $c_2$ & $b_3$ & $c_3$ & $b_4$ \\
\cmidrule(r){2-10}
1 & 1.69e-2 & 7.69e-2 & 8.06e-1 & 1 & 2.57e-1 & & & & \\
2 & 8.08e-4 & 5.30e-3 & 1.98e-1 & 4.05e-1 & 1.07 & 1 & 1.41e-1 & & \\
3 & 3.72e-5 & 3.27e-4 & 3.03e-2 & 8.57e-2 & 6.84e-1 & 1.00 & 1.28 & 1 & 9.17e-2 \\
\bottomrule
\end{tabular}
\vspace{0.5\baselineskip}
\caption{\label{tab:coeffs}Coefficients of the rational approximation for $\beta = 3/4$
(exponential cov.\ on~$\mathbb{R}^2$) for $m=1,2,3$,
normalized so that $c_{m} = 1$.}
\end{table}
\section{Computational aspects of the rational approximation}
\label{sec:comp}
In the non-fractional case,
the sparsity of the precision matrix for the weights~$\mv{u}$ in \eqref{e:basisexp}
facilitates
fast computation of samples, likelihoods,
and other quantities of interest
for statistical inference.
The purpose of this section
is to show that the rational SPDE approximation
proposed in \S\ref{sec:rational}
preserves these good computational properties.
\kk{The representation~\eqref{e:nested-discrete} shows that $u_{h,m}^R$
can be seen as a Markov random field $x_{h,m}$,
transformed by the operator $P_{r,h}$.
Solving this latent model
as explained in Appendix~\ref{app:iter-fem}
yields an approximation of the form \eqref{e:basisexp}},
where $\mv{\Sigma}_{\mv{u}} = \mathbf{P}_r\mv{Q}^{-1}\mathbf{P}_r^{\ensuremath{\top}}$.
Here $\mathbf{P}_\ell, \mathbf{P}_r \in~\mathbb{R}^{n_h\times n_h}$ correspond to
the discrete operators~$P_{\ell,h}$ and $P_{r,h}$ in~\eqref{e:loprop}, respectively.
The matrix $\mv{Q} := \mathbf{P}_\ell^{\ensuremath{\top}} \mv{C}^{-1} \mathbf{P}_\ell$
is sparse if the mass matrix $\mv{C}$
with respect to the finite element basis $\{\varphi_j\}_{j=1}^{n_h}$
is replaced by the diagonal lumped mass matrix $\widetilde{\mv{C}}$,
see Appendix~\ref{app:iter-fem}.
By defining $\mv{x} \sim \proper{N}(\mv{0},\mv{Q}^{-1})$,
we have $\mv{u} = \mathbf{P}_r\mv{x}$,
which is a transformed Gaussian Markov random field (GMRF).
Choosing $\mv{x}$ as a latent variable instead of $\mv{u}$
thus enables us to use all computational methods
that are available for GMRFs \citep[see][]{rue05}
also for the rational SPDE approximation.
As an illustration, we consider the following hierarchical model,
with a latent field $u$ which is a rational approximation of \eqref{e:Lbeta},
\begin{equation}\label{e:model-yi}
\begin{split}
y_i &= u(\mathbf{s}_i) + \varepsilon_i, \quad i=1,\ldots, N, \\
P_\ell u &= P_r \cW \qquad\quad\, \,\, \text{in }\mathcal{D},
\end{split}
\end{equation}
where $u$ is observed under
i.i.d.~Gaussian measurement noise
$\varepsilon_i \sim \proper{N}(0,\sigma^2)$.
Once this basic case can be handled,
the method is easily adapted for inference in combination with MCMC or INLA \citep{rue09}
for models with more sophisticated likelihoods.
Defining the matrix $\mv{A}$ with entries
$A_{ij}= \varphi_j(\mathbf{s}_i)$
and the vector $\mv{y} = (y_1,\ldots, y_N)^{\ensuremath{\top}}$
gives us the discretized model
\begin{equation}\label{e:model-ygivenx}
\begin{split}
\mv{y}|\mv{x} &\sim \proper{N}(\mv{A} \mathbf{P}_r \mv{x},\sigma^2\mv{I}), \\
\mv{x} &\sim \proper{N}(\mv{0}, \mv{Q}^{-1}).
\end{split}
\end{equation}
In this way, the problem has been reduced
to a standard latent GMRF model
and a sparse Cholesky factorization
of $\mv{Q}$ can be used for sampling $\mv{x}$
from \db{$\proper{N}(\mv{0}, \mv{Q}^{-1})$}
as well as to evaluate \db{its log-density}~$\log\pi_x(\mv{x})$.
Samples of $\mv{u}$ can then be obtained
from samples of $\mv{x}$
via $\mv{u} = \mathbf{P}_r \mv{x}$.
For evaluating \db{the log-density of $\mv{u}$}, $\log\pi_u(\mv{u})$,
the relation $\log\pi_u(\mv{u}) = \log\pi_x(\mathbf{P}_r^{-1}\mv{u})$
can be exploited.
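As a minimal Matlab sketch of these operations, assuming that the sparse matrices
\texttt{Pl} and \texttt{Pr} and the lumped mass matrix \texttt{Ct} have already been
assembled (in practice a fill-reducing reordering would be combined with the
Cholesky factorization):
\begin{verbatim}
Q = Pl' * (Ct \ Pl);           % sparse precision matrix of x
R = chol(Q);                   % Q = R' * R with R upper triangular
x = R \ randn(size(Q, 1), 1);  % draw x ~ N(0, inv(Q))
u = Pr * x;                    % sample of the rational approximation
logpi_x = sum(log(diag(R))) - 0.5 * (x' * Q * x);  % up to a constant
\end{verbatim}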
Furthermore, the posterior distribution of $\mv{x}$ is
\kk{given by}
$\mv{x}|\mv{y} \sim \proper{N}\bigl( \mv{\mu}_{\mv{x}|\mv{y}}, \mv{Q}_{\mv{x}|\mv{y}}^{-1} \bigr)$,
where $\mv{\mu}_{\mv{x}|\mv{y}} = \sigma^{-2} \mv{Q}_{\mv{x}|\mv{y}}^{-1}
\mathbf{P}_r^{\ensuremath{\top}} \mv{A}^{\ensuremath{\top}} \mv{y}$
and
$\mv{Q}_{\mv{x}|\mv{y}}
= \mv{Q} + \sigma^{-2} \mathbf{P}_r^{\ensuremath{\top}} \mv{A}^{\ensuremath{\top}} \mv{A} \mathbf{P}_r$
is a sparse matrix.
Thus, simulations from \db{the distribution of $\mv{x}|\mv{y}$},
and evaluations
of \db{the corresponding log-density}
$\log\pi_{x|y}(\mv{x})$, can be performed efficiently
via a sparse Cholesky factorization of $\mv{Q}_{\mv{x}|\mv{y}}$.
Finally, up to an additive constant, the marginal data log-likelihood is given by
\begin{align*}
\log| \mathbf{P}_\ell | - \frac{1}{2} \log| \mv{Q}_{\mv{x}|\mv{y}} |
- N \log\sigma
- \frac{1}{2} \left( \mv{\mu}_{\mv{x}|\mv{y}}^{\ensuremath{\top}} \mv{Q} \, \mv{\mu}_{\mv{x}|\mv{y}}
+ \sigma^{-2} \left\|\mv{y} - \mv{A} \mathbf{P}_r \mv{\mu}_{\mv{x}|\mv{y}} \right\|^2\right).
\end{align*}
We therefore conclude that all
computations needed for statistical inference
can be facilitated by sparse Cholesky factorizations of
$\mathbf{P}_\ell$ and $\mv{Q}_{\mv{x}|\mv{y}}$.
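A Matlab sketch of these computations, continuing the notation of the previous
listing and assuming that \texttt{A}, \texttt{y}, and \texttt{sigma} are given
(the additive constant of the log-likelihood is dropped):
\begin{verbatim}
B = A * Pr;                                 % observation matrix for x
Qpost = Q + sigma^(-2) * (B' * B);          % posterior precision of x | y
Rp = chol(Qpost);
mu = sigma^(-2) * (Rp \ (Rp' \ (B' * y)));  % posterior mean of x | y
[Lf, Uf] = lu(Pl);                          % for log |det Pl|
loglik = sum(log(abs(diag(Uf)))) - sum(log(diag(Rp))) ...
    - numel(y) * log(sigma) ...
    - 0.5 * (mu' * Q * mu + sigma^(-2) * norm(y - B * mu)^2);
\end{verbatim}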
\begin{remark}\label{rem:compcosts}
From the specific form of the matrices $\mathbf{P}_\ell$ and $\mathbf{P}_r$
addressed in Appendix~\ref{app:iter-fem},
we can infer that the number of non-zero elements in $\mv{Q}_{\mv{x}|\mv{y}}$
for a rational SPDE approximation of degree $m$
will be the same as the number of non-zero elements in $\mv{Q}_{\mv{x}|\mv{y}}$
for the standard (non-fractional) SPDE approach with $\beta = m+m_\beta$.
Thus, also the computational cost
will be comparable for these two cases.
\end{remark}
\begin{remark}
The matrix $\mv{Q}_{\mv{x}|\mv{y}}$ can be
ill-conditioned for $m>1$ if a FEM approximation
with piecewise \kk{linear} basis functions is used.
The numerical stability for large values of $m$ can likely
be improved by increasing the polynomial degree of the FEM basis functions, see also Remark~\ref{rem:p-fem}.
\end{remark}
\section{Numerical experiments}\label{sec:numerics}
\subsection{The Mat\'ern covariance on $\mathbb{R}^2$}\label{subsec:numerics-matern}
As a first test, we investigate the performance of the rational SPDE approach
for Gaussian Mat\'ern fields, without including the finite element discretization in space.
The spectral density $S$ of the solution to \eqref{e:statmodel} on $\mathbb{R}^2$
is given by \eqref{eq:maternSpec},
whereas the spectral density for the non-discretized rational SPDE approximation $u_m^R$ in~\eqref{e:ur} is
\begin{align}\label{e:S-rat}
S_R(\mv{k}) \propto \kappa^{4\beta}\left(\frac{\sum_{i=0}^m c_i(1+\kappa^{-2}\|\mv{k}\|^2)^{m-i}}{\sum_{j=0}^{m+1} b_j(1+\kappa^{-2}\|\mv{k}\|^2)^{m+m_\beta-j}}\right)^2.
\end{align}
We compute the coefficients as described in \S\ref{subsec:rat-coeff}.
To this end, we apply an implementation of the
Clenshaw--Lord Chebyshev--Pad\'e algorithm
provided by the Matlab package Chebfun \citep{driscoll2014chebfun}.
By performing a partial fraction decomposition of \eqref{e:S-rat},
expanding the square, transforming to polar coordinates, and using the equality
\begin{align*}
\int_0^{\infty} \frac{\omega J_0(\omega h)}{(\omega^2+a^2)(\omega^2+b^2)} \,\mathrm{d}\omega
= \frac{1}{(b^2-a^2)}(K_0(ah)-K_0(bh))
\end{align*}
we are able to compute the corresponding covariance function $C_R(h)$ analytically.
Here, $J_0$ is a Bessel function of the first kind
and $K_0$ is a modified Bessel function of the second kind.
To measure the accuracy of the approximation,
we compare $C_R(h)$ to the true Mat\'ern covariance function $C(h)$
for different values of $\nu$, where $\kappa=\sqrt{8\nu}$ is chosen such that
the practical correlation range
\kk{$r=\sqrt{8\nu}/\kappa$}
equals one in all cases.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.49\linewidth]{rational_L2_error}
\includegraphics[width=0.49\linewidth]{rational_max_error}
\includegraphics[width=0.4\linewidth]{fig1_legend}
\end{center}
\caption{\label{fig:rational_errors}
The $L_2$- and $L_{\infty}$-errors of the covariance functions for different values of $\nu$ for the different approximation methods. When $\nu=1$, all methods are exact.
}
\end{figure}
To put the accuracy of the rational approximation in context,
the Markov approximation by \cite{lindgren11} and the quadrature method by \cite{bolin2017numerical} are also shown.
For the quadrature method, $K=12$ quadrature nodes are used,
which results in an approximation with the same computational cost
as a rational approximation of degree $m=11$, see Appendix~\ref{subsec:rat-comparequad}.
Figure~\ref{fig:rational_errors} shows the normalized error in the $L_2$-norm
and the error with respect to $L_\infty$-norm
for different values of $\nu$,
both with respect to the
interval~$[0,2]$ of length twice the practical correlation range, i.e.,
\begin{align*}
\left( \frac{\int_0^2 (C(h)-C_a(h))^2 \, \mathrm{d} h}{\int_0^2 C(h)^2 \, \mathrm{d} h} \right)^{1/2}
\quad
\text{and}
\quad
\sup_{h\in[0,2]} |C(h) - C_a(h)|.
\end{align*}
Here,
$C_a$ is the covariance function
obtained by the respective approximation method.
Already for $m=3$, the rational approximation performs better than both
the Markov approximation and the quadrature approximation for all values of $\nu$.
It also decreases the error for the case of an exponential covariance by several orders of magnitude.
All methods are exact when $\nu=1$,
since this is the non-fractional case.
The Markov and rational methods
show errors decreasing to zero as $\nu\to 1$,
whereas the error of the quadrature method has
\kk{a singularity at $\nu=1$}.
The performance of the quadrature method can be improved (although not the behaviour near $\nu=1$)
by increasing the number of quadrature nodes, see Appendix~\ref{subsec:rat-comparequad}.
This is reasonable if the method is needed only for sampling from the model,
but implementing this method for statistical applications,
which require kriging or likelihood evaluations, is not feasible
since the computational costs then are comparable
to those of the standard SPDE approach with $\beta=K$.
Finally, it should be noted that the Markov method
also is exact at $\nu=2$ ($\beta = 1.5$)
since the spectrum of the process then is the reciprocal of a polynomial.
The rational and quadrature methods
cannot \kk{exploit} this fact, since
\kk{these approximations
are based on the corresponding differential operator
instead of the spectral density}.
This is the price that has to be paid
in order to \kk{formulate} a method which works
not only for the stationary Mat\'ern fields
but also for non-stationary and non-Gaussian models.
\subsection{Computational cost and the finite element error}
From the study in the previous subsection,
we infer that the rational SPDE approach performs well
for Mat\'ern fields with arbitrary smoothness.
However, as for the standard SPDE approach, we need to discretize
the problem in order to be able to use the method in practice, e.g., for inference.
This induces an additional error source,
which means that one should balance the two errors
by choosing the degree $m$ of the rational approximation
appropriately with respect to the FEM error.
\kk{A calibration
based on the theoretical results has been suggested in
Remark~\ref{rem:calibrate-h-m}}.
In this section we address this issue in practice
and investigate the computational cost of the rational SPDE approximation.
As a test case, we compute approximations
of a Gaussian Mat\'ern field
with unit variance and
practical correlation range $r=0.1$
on the unit square in $\mathbb{R}^2$.
We assume homogeneous Neumann boundary conditions
for the Mat\'ern operator $\kappa^2-\Delta$ in \eqref{e:statmodel}.
For the discretization, we use
a FEM with a nodal basis of continuous piecewise \kk{linear} functions
with respect to a mesh induced by a Delaunay triangulation
of a regular lattice on the domain, with a total of $n_h$ nodes.
We consider three different meshes
with $n_h = 57^2, 85^2, 115^2$,
which correspond to $h \approx r/4, r/6, r/8$.
In order to measure the accuracy,
we compute the covariances between the
midpoint of the domain $\tilde{\mv{s}}_{*}$
and all other nodes in the lattice $\{\tilde{\mv{s}}_{j}\}_{j=1}^{n_h}$
for the Mat\'ern field
and the rational SPDE approximations
and calculate the error similarly
to the $L_2$-error in \S\ref{subsec:numerics-matern},
\begin{align*}
\left( \frac{ \sum_{j=1}^{n_h} ( C(\| \tilde{\mv{s}}_{*} - \tilde{\mv{s}}_{j}\|) - \Sigma^{\mv{u}}_{j,*} )^2 }{ \sum_{j=1}^{n_h} C(\| \tilde{\mv{s}}_{*} - \tilde{\mv{s}}_{j}\|)^2 } \right)^{1/2},
\end{align*}
where $\mv{\Sigma}^{\mv{u}} = \mathbf{P}_r\mathbf{P}_\ell^{-1} \mv{C} \mathbf{P}_\ell^{-\ensuremath{\top}} \mathbf{P}_r^{\ensuremath{\top}}$
is the covariance matrix of $\mv{u}$, see Appendix~\ref{app:iter-fem}.
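A single column of $\mv{\Sigma}^{\mv{u}}$, which is all that is required here,
can be computed without forming the dense covariance matrix.
A minimal Matlab sketch, with \texttt{estar} denoting the indicator vector of the
midpoint node:
\begin{verbatim}
v = Pl' \ (Pr' * estar);    % solve Pl' * v = Pr' * e_star
col = Pr * (Pl \ (C * v));  % col = Sigma_u * e_star
\end{verbatim}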
As a consequence of imposing boundary conditions,
the error of the covariance is larger close
to the boundary of the domain.
However, we compare this error to the error of
the non-fractional SPDE approach, which has the same boundary effects.
As measures of the computational cost,
we consider the time it takes to sample $\mv{u}$
and to evaluate $\log|\mv{Q}_{\mv{x}|\mv{y}}|$
for the model \eqref{e:model-ygivenx} with $\sigma=1$,
when~$\mv{y}$ is a vector of noisy observations
of the latent field at $1000$ locations, drawn at random in the domain
(a similar computation time is needed to evaluate $\mv{\mu}_{\mv{x}|\mv{y}}$).
The results for rational SPDE approximations of different degrees
for the case $\beta = 3/4$ (exponential covariance)
are shown in Table~\ref{tab:femerrors}.
Furthermore,
we perform the same experiment
when the standard (non-fractional) SPDE approach is used for $\beta = 2,3,4$.
As previously mentioned in Remark~\ref{rem:compcosts},
the computational cost of the rational SPDE approximation of degree~$m$
should be comparable to the standard SPDE approach with $\beta= m+1$.
Table~\ref{tab:femerrors} validates this claim.
One can also note that the errors of the rational SPDE approximations
are similar to those of the standard SPDE approach,
and that the reduction in error
when increasing from $m=2$ to $m=3$
is small for all cases,
indicating that the error induced by the rational approximation
is small compared to the FEM error, even for a low degree~$m$.
This is also the reason why, in particular in the pre-asymptotic region,
one can in practice choose the degree $m$ smaller than the value suggested in
Remark~\ref{rem:calibrate-h-m},
which gives $m\approx 6, 7, 8$ for $\beta=3/4$ and the three considered finite element meshes.
\begin{table}[t]
\centering
\begin{tabular}[b]{llcccccc}
\toprule
&& \multicolumn{3}{c}{Rational SPDE approximation} & \multicolumn{3}{c}{\db{Standard SPDE approach}}\\
\cmidrule(r){3-5} \cmidrule(r){6-8}
$n$ & & $m=1$ & $m=2$ & $m=3$ & $\beta = 2$ & $\beta = 3$ & $\beta = 4$\\
\cmidrule(r){1-8}
\multirow{2}{*}{$57^2$} & Error & 1.849 & 1.339 & 1.415 & 2.259 & 2.173 & 2.147\\
& Time & 1.5 (3.2) & 1.8 (5.2) & 2.7 (8.7) & 1.7 (2.6) & 1.7 (3.9) & 2.2 (6.3)\\
\cmidrule(r){2-8}
\multirow{2}{*}{$85^2$} & Error & 1.720 & 0.757 & 0.807 & 0.953 & 0.928 & 0.921\\
& Time & 3.1 (8.4) & 5.0 (14) & 7.6 (25) & 3.0 (8.2) & 5.8 (13) & 7.9 (22)\\
\cmidrule(r){2-8}
\multirow{2}{*}{$115^2$} & Error & 1.559 & 0.526 & 0.501 & 0.509 & 0.498 & 0.494\\
& Time & 7.6 (22) & 11 (34) & 18 (57) & 6.3 (18) & 11 (35) & 18 (53)\\
\bottomrule
\end{tabular}
\vspace{0.5\baselineskip}
\caption{\label{tab:femerrors}Covariance errors ($\times 100$) and
computing times in seconds ($\times 100$)
for sampling from the rational SPDE approximation~$\mv{u}$
\db{(with $\beta=3/4$)}
and, in parentheses,
for evaluating $\log|\mv{Q}_{\mv{x}|\mv{y}}|$.
For reference, these values are also given
for the standard SPDE approach with $\beta=2,3,4$.}
\end{table}
\section{Likelihood-based inference of Mat\'{e}rn parameters}\label{sec:inference}
The computationally efficient evaluation of the likelihood
of the rational SPDE approximation facilitates
likelihood-based inference for all parameters of the Mat\'ern model,
including~$\nu$ which until now had to be fixed
when using the SPDE approach.
In this section we first discuss the identifiability of the model parameters and then investigate the accuracy of this approach
within the scope of a simulation study.
\subsection{Parameter identifiability}\label{subsec:measure}
\kk{A common reason for fixing the smoothness
in Gaussian Mat\'ern models is the result by \citet{Zhang2004}
which shows that all three Mat\'ern parameters
cannot be estimated consistently under infill asymptotics.
More precisely, for a fixed smoothness parameter $\nu$,
one cannot estimate both the variance of the field,
$\phi^2$, and the scale parameter, $\kappa$, consistently.
However, the quantity $\phi^2\kappa^{2\nu}$ can be estimated consistently.
The derivation of this result relies on the
equivalence of Gaussian measures corresponding
to Mat\'ern fields \citep[Theorem~2]{Zhang2004}.
The following theorem provides the analogous result
for the Gaussian measures induced by the class
of random fields specified via \eqref{e:statmodel}
on a bounded domain}.
The proof can be found in Appendix~\ref{sec:measureproof}.
\begin{theorem}\label{thm:measure}
Let $\mathcal{D}\subset\mathbb{R}^d$, $d\in\{1,2,3\}$,
be bounded, open and connected.
For $i\in\{1,2\}$,
let $\beta_i > d/4$, $\kappa_i, \tau_i > 0$, and
let
$\mu_i:=\proper{N}(m_i,\mathcal{Q}_i^{-1})$
be a Gaussian measure
on $L_2(\mathcal{D})$
with mean $m_i := 0$ and precision operator
$\mathcal{Q}_i := \tau_i^{2}L_i^{2\beta_i}$,
where, for $i\in\{1,2\}$, the operators $L_i:=\kappa_i^2 - \Delta$
are augmented with the same homogeneous Neumann or Dirichlet boundary
conditions.
Then, $\mu_1$ and $\mu_2$ are equivalent
if and only if $\beta_1=\beta_2$ and $\tau_1 = \tau_2$.
\end{theorem}
\kk{Note that, for $\mathcal{D}:=\mathbb{R}^d$,
the parameter $\tau$
is related to the variance
of the Gaussian random field
via
$\phi^2 = \Gamma(\nu)(\tau^2\Gamma(2\beta)(4\pi)^{d/2}\kappa^{2\nu})^{-1}$.
Thus, $\tau^{-2}\propto \phi^2\kappa^{2\nu}$,
which means that Theorem~\ref{thm:measure}
is in accordance with the result by \citet{Zhang2004}.
Since the Gaussian measures induced by the operators
$\tau(\kappa_1^2-\Delta)^{\beta}$
and $\tau(\kappa_2^2-\Delta)^{\beta}$ are equivalent,
we will not be able to consistently estimate $\kappa$
under infill asymptotics.
Yet, Theorem~\ref{thm:measure}
suggests that it is possible
to estimate $\tau$ and $\beta$ consistently.
In fact, with Theorem~\ref{thm:measure} available,
it is straightforward to show
that $\tau$ can be estimated consistently for a fixed $\nu$
by exploiting the same arguments
as in the proof of \citep[Theorem 3]{Zhang2004}.
However, it is beyond the scope
of this article
to show that both $\nu$ and $\tau$
can be estimated consistently
which would also extend the results
by \citet{Zhang2004}}.
\subsection{Simulation study}\label{subsec:estimationstudy}
\db{To numerically investigate the accuracy of likelihood-based parameter estimation with the rational SPDE approach,}
we again assume homogeneous Neumann boundary conditions for
the Mat\'ern operator in \eqref{e:statmodel} and consider
the standard latent model \eqref{e:model-yi}
from \S\ref{sec:comp}.
We take the unit square as the domain of interest,
set $\sigma^2=0.1$, $\nu=0.5$ and choose $\kappa$ and $\tau$
so that the latent field has variance $\phi^2=1$ and
practical correlation range~\kk{$r=0.2$}.
For the FEM, we take a mesh based on a regular lattice on the domain,
extended by twice the correlation range in each direction
to reduce boundary effects, yielding a mesh with approximately $3500$ nodes.
As a first test case, we use simulated data from the discretized model.
We simulate $50$ replicates of the latent field,
each with corresponding noisy observations at $1000$ measurement locations
drawn at random in the domain.
This results in a total of $50000$ observations,
which we use to estimate the parameters of the model.
We draw initial values for the parameters at random
and then numerically optimize the likelihood of the model
with the function \texttt{fminunc} in Matlab.
This procedure is repeated $100$ times, each time with a new simulated data set.
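In Matlab, this estimation loop can be sketched as follows, where
\texttt{negloglik} denotes a hypothetical wrapper that assembles
$\mathbf{P}_\ell$, $\mathbf{P}_r$, and $\mv{Q}$ for a given (log-scale)
parameter vector and returns the negative marginal log-likelihood of
\S\ref{sec:comp}:
\begin{verbatim}
obj = @(th) negloglik(th, y, obsLoc, mesh);  % negloglik is hypothetical
theta0 = randn(4, 1);                        % random initial values
opts = optimoptions('fminunc', 'Display', 'off');
thetaHat = fminunc(obj, theta0, opts);       % estimated log-parameters
\end{verbatim}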
As a second test case, we repeat the simulation study,
but this time we simulate the
data from a Gaussian Mat\'ern field with an exponential covariance function
instead of from the discretized model.
For the estimation,
we compute the rational SPDE approximation for the same
finite element mesh as in the first test case.
To investigate the effect of the mesh resolution on the parameter estimates,
we also estimate the parameters using a uniformly refined mesh
with twice as many nodes.
The average computation time for
evaluating the likelihood is approximately $0.16s$
for the coarse mesh and $0.4s$ for the fine mesh.
The computation time grows linearly (with a fixed overhead)
in the number of replicates; with only one replicate
it is $0.09s$ for the coarse mesh and $0.2s$ for the fine mesh.
\begin{table}
\centering
\begin{tabular}{ccccc}
\toprule
& & Rational samples & \multicolumn{2}{c}{Mat\'ern samples}\\
\cmidrule(r){3-3} \cmidrule(r){4-5}
& Truth & Estimate & Coarse mesh & Fine mesh \\
$\kappa$ & 10 & 10.026 (0.5661) & 10.966 (1.8060) & 10.864 (0.4414)\\
$\phi^2$ & 1.0 & 1.0014 (0.0228) & 1.1089 (0.6155) & 0.9743 (0.0210)\\
$\sigma^2$ & 0.1 & 0.1001 (0.0009) & 0.3016 (0.0036) & 0.2320 (0.0044)\\
$\nu$ & 0.5 & 0.5011 (0.0168) & 0.5554 (0.0991) & 0.5462 (0.0138)\\
\bottomrule
\end{tabular}
\vspace{0.5\baselineskip}
\caption{\label{tab:estimation}Results of the parameter estimation.
For each parameter estimate, the mean of 100 different estimates is shown, with the corresponding standard deviation in parentheses.}
\end{table}
The results of the parameter estimation can be seen in Table~\ref{tab:estimation},
where the true parameter values are shown
together with the mean and standard deviations
of the $100$ estimates for each case.
Notably, we are able to estimate all parameters
accurately in the first case.
For the second case, the finite element discretization
seems to induce a small bias, especially for the nugget estimate ($\sigma^2$)
that depends on the resolution of the mesh.
The bias in the nugget estimate is not surprising
since the increased nugget compensates
for the FEM error.
The bias could be decreased
by choosing the mesh more carefully,
also taking the measurement locations into account.
In practice, however, this bias will
not be of great importance,
since it is the optimal nugget for the discretized model
that should be used.
It should be noted that there are several other methods
for decreasing the computational cost of likelihood-based inference for stationary Mat\'ern models.
The major advantage of the rational SPDE approach is
that it is directly applicable to more complicated
non-stationary models, which we will use in the next section when analyzing real data.
\section{Application}\label{sec:application}
In this section we illustrate
for the example of a climate reanalysis data set
how the rational SPDE approach can be used for spatial modeling.
Climate reanalysis data is generated
by combining a climate model with observations
in order to obtain a description of the recent climate.
We use reanalysis data generated with
the Experimental Climate Prediction Center Regional Spectral Model (ECPC-RSM)
which was originally prepared for
the North American Regional Climate Change Assessment Program (NARCCAP)
by means of NCEP/DOE Reanalysis \citep{mearns2007,mearns2009regional}.
As variable we consider average summer precipitation
over the conterminous U.S.\ for a 26 year period from 1979 to 2004.
The average value for each grid cell and year is computed
as the average of the corresponding daily values
for the days in June, July, and August.
\db{In order to \kk{obtain} data
which can be modelled by a Gaussian distribution,
we follow \cite{genton2015cross}
and transform the data by taking the cube root.
We then subtract the mean over the 26 years
from each grid cell so that
we can assume that the data has zero mean
and focus on the correlation structure of the residuals.}
The resulting residuals for the year 1979
are shown in Figure~\ref{fig:application_data}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.75\linewidth]{new_application_data.png}
\end{center}
\caption{\label{fig:application_data}Average summer precipitation residuals (in cm)
for 1979 and the \kk{FEM mesh}.
}
\end{figure}
The 4106 observed residuals for each year are modelled as independent realizations
of a zero-mean Gaussian random field with a nugget effect.
That is, the measurement $Y_{ij}$ at spatial location $\mv{s}_i$
for year $j$ is modelled as
$Y_{ij} = u_j(\mv{s}_i) + \varepsilon_{ij}$,
where $\varepsilon_{ij} \sim \proper{N}(0,\sigma^2)$ are independent,
and \kk{$\{u_j(\mv{s})\}_j$} are independent realizations of a
zero-mean Gaussian random field $u(\mv{s})$.
\db{The analysis of \cite{genton2015cross}
revealed that an exponential covariance model
is suitable for a subset of this data set.
Because of this, a natural first choice is
to use a stationary Mat\'ern model \eqref{e:statmodel},
either with $\beta = 0.75$ (exponential covariance)
or with a general $\beta$ which we estimate from the data.
However, since we have data for a larger spatial region
than \cite{genton2015cross}, one would suspect
that a non-stationary model for $u(\mv{s})$ might be needed.
The standard non-stationary model for the SPDE approach,
as \kk{first} suggested by \cite{lindgren11} and used in many applications since then, is
\begin{equation}\label{eq:nonstat_model}
(\kappa(\mv{s})^2 - \Delta)^{\beta} \, (\tau(\mv{s})u(\mv{s})) = \cW(\mv{s}),
\quad
\mv{s}\in\mathcal{D},
\end{equation}
where \kk{$\beta=1$ is fixed}.
Until now, it has not been possible to use
\kk{the model} \eqref{eq:nonstat_model}
with fractional smoothness.
\kk{Therefore, our main question is now:
What is more important for this
data---the fractional smoothness
$\beta$ or the non-stationary parameters?}
We thus consider four different SPDE models for $u(\mv{s})$.
\kk{Two of them are non-fractional models,
where $\beta=1$ is fixed,
and for the other two (fractional)
models, we estimate the fractional order
$\beta$ jointly with the other parameters
from the data.
For both cases,
we consider stationary Mat\'ern
and non-stationary models,
where the latter are formulated via \eqref{eq:nonstat_model}
with}}
\begin{align*}
\log\kappa(\mv{s})
=
\kappa_0
+ \kappa_{\rm a} \psi_{\rm a}(\mv{s})
+ \sum_{i,j=1}^{2}\sum_{k,\ell=1}^{2}
\kappa_{ij}^{k\ell} \,
\psi_{i}^{k}(\widetilde{s}_1) \,
\psi_{j}^{\ell}(\widetilde{s}_2),
\end{align*}
and the same model is used for $\tau(\mv{s})$.
Here,
$\psi_{j}^{1}(\widetilde{s}) := \sin(j\pi\widetilde{s})$,
$\psi_{j}^{2}(\widetilde{s}) := \cos(j\pi\widetilde{s})$,
$\psi_{\rm a}(\mv{s})$ is the altitude at location $\mv{s}$,
and $\widetilde{\mv{s}} = (\widetilde{s}_1,\widetilde{s}_2)$
denotes the spatial coordinate after rescaling
so that the observational domain is mapped to the unit square.
Thus, $\log\kappa(\mv{s})$ and $\log\tau(\mv{s})$ are modelled
by the altitude covariate and 16 additional Fourier basis functions
to capture large-scale trends in the parameters.
The altitude covariate and the eight Fourier basis functions
$\left\{\psi_1^{k}(\widetilde{s}_1) \psi_j^{\ell}(\widetilde{s}_2)
: j,k,\ell=1,2\right\}$
are shown
in Figure~\ref{fig:basis_functions}.
\begin{figure}[t]
\begin{center}
\begin{minipage}{0.25\linewidth}
\includegraphics[width=\linewidth]{app_cov1}
\includegraphics[width=\linewidth]{app_cov2}
\includegraphics[width=\linewidth]{app_cov3}
\end{minipage}
\begin{minipage}{0.25\linewidth}
\includegraphics[width=\linewidth]{app_cov4}
\includegraphics[width=\linewidth]{app_cov5}
\includegraphics[width=\linewidth]{app_cov6}
\end{minipage}
\begin{minipage}{0.25\linewidth}
\includegraphics[width=\linewidth]{app_cov7}
\includegraphics[width=\linewidth]{app_cov8}
\includegraphics[width=\linewidth]{app_cov9}
\end{minipage}
\begin{minipage}{0.05\linewidth}
\includegraphics[width=\linewidth]{alt_cb}
\includegraphics[width=\linewidth]{cov_cb}
\end{minipage}
\end{center}
\caption{\label{fig:basis_functions}\kk{Nine basis functions
modeling the parameters for the non-stationary models}.}
\end{figure}
We discretize each model with respect to
the finite element mesh shown in Figure~\ref{fig:application_data},
assuming homogeneous Neumann boundary conditions.
The mesh has $5021$ nodes and was computed using R-INLA \citep{lindgren2015software}.
For the fractional models, we set $m=1$ in the rational approximation
and, for each model, the model parameters are estimated
by numerical optimization of the log-likelihood
as described in \S\ref{sec:comp}.
The log-likelihood values for the four models
can be seen in Table~\ref{tab:application}.
The parameter estimates for the stationary
non-fractional ($\beta=\nu=1$) model are
$\kappa = 0.67$, $\tau = 5.44$, and $\sigma = 0.014$,
which implies a standard deviation $\phi = 0.077$
and a practical range $\rho = 4.21$.
The estimates for the fractional model
are $\kappa = 0.20$, $\tau = 10.58$, $\sigma = 0.012$,
and $\beta= 0.72$, corresponding
to $\phi = 0.081$, $\rho = 9.21$,
and a smoothness parameter $\nu = 0.44$.
We note that the fractional model has a longer correlation range.
\kk{This is likely to be caused
by the non-fractional model
underestimating the range $\rho$
in order to compensate for the wrong local behaviour
of the covariance function
induced by the smoothness parameter $\nu=1$}.
\kk{Figure~\ref{fig:application_vars} shows
the estimated marginal standard deviation $\phi(\mv{s})$
for the two non-stationary models
(computed using the estimates of
the parameters for $\kappa(\mv{s})$ and $\tau(\mv{s})$)
and $0.7$ contours of the correlation function
for selected locations in the domain.
The estimate of $\beta$
for the non-stationary fractional model is $0.723$.
Also for the non-stationary models,
we observe a slightly longer
practical correlation range
$\rho(\mv{s})$ for the fractional model}.
\begin{figure}[t]
\begin{center}
\begin{minipage}{0.45\linewidth}
\begin{center}
\textcolor{white}{$\beta$}Fractional model\textcolor{white}{$\beta$}\\
\end{center}
\includegraphics[width=\linewidth]{nf_std}
\includegraphics[width=\linewidth]{nf_correlations}
\end{minipage}
\begin{minipage}{0.45\linewidth}
\begin{center}
$\beta=1$ model\\
\end{center}
\includegraphics[width=\linewidth]{n1_std}
\includegraphics[width=\linewidth]{n1_correlations}
\end{minipage}
\begin{minipage}{0.065\linewidth}
\raisebox{2cm}{\includegraphics[width=\linewidth]{std_cb}}
\end{minipage}
\end{center}
\caption{\label{fig:application_vars}Estimated marginal standard deviations (top row) and contours of $0.7$ correlation of the correlation function for selected locations marked with red crosses (bottom row), for the fractional (left column) and $\beta=1$ (right column) models.}
\end{figure}
\begin{table}[t]
\centering
\begin{tabular}{lccccc}
\toprule
& Log-likelihood & RMSE & CRPS & LS & Time\\
Stationary $\beta=1$ & 219773 & 4.206 & 2.295 & 177.2 & 0.125\\
Stationary fractional & 220255 & 4.167 & 2.274 & 178.2 & 0.412\\
Non-stationary $\beta=1$ & 225969 & 4.194 & 2.266 & 182.1 & 0.121\\
Non-stationary fractional & 226095 & 4.170 & 2.254 & 182.4 & 0.416\\
\bottomrule
\end{tabular}
\vspace{0.5\baselineskip}
\caption{\label{tab:application}\kk{Model-dependent results for
(i) the log-likelihood,
(ii) the pseudo-crossvalidation scores
(RMSE, CRPS, LS, each $\times 100$) averaged over ten replicates, and
(iii) the computational time for one evaluation of the likelihood
averaged over $100$ computations}.}
\end{table}
To investigate the predictive accuracy of the models,
a pseudo-crossvalidation study is performed.
We choose $10\%$ of the spatial observation locations at random,
and use the corresponding observations for each year
to predict the values at the remaining locations.
The accuracy of the four models is measured
by the root mean square error~(RMSE),
the average continuous ranked probability score~(CRPS),
and the average log-score (LS).
This procedure is repeated ten times,
where in each iteration new locations are chosen at random
to base the predictions on.
\kk{The average scores for the ten iterations
are shown in Table~\ref{tab:application}.
Recall that low RMSE and CRPS values
and high LS values correspond to good scores}.
\kk{We observe}
\db{that the predictive performance
of the non-stationary non-fractional ($\beta=1$) model
is similar to the stationary fractional model
in terms of CRPS, and actually worse in terms of RMSE.
This clearly indicates that the data should be analysed
by a fractional model.
Although the non-stationary fractional model
has a better performance in terms of CRPS and LS
than the stationary fractional model,
the difference is quite small
given that the non-stationary model has $38$ parameters,
compared to $4$ for the stationary model.
Thus, the fractional smoothness seems to be the most important aspect for this data.
The fact that the rational SPDE approach allows us
to make these comparisons and
\kk{to verify the smoothness parameter,
for stationary and non-stationary models,
is one of its most important features}}.
\section{Discussion}\label{sec:discussion}
We have introduced the rational SPDE approach
providing a new type of computationally efficient approximations
for a class of Gaussian random fields.
These are based on an extension of the SPDE approach by \cite{lindgren11}
to models with differential operators of general fractional orders.
For these approximations, explicit rates of strong convergence have been derived
and we have shown how to calibrate the degree of the rational approximation
with the mesh size of the FEM to achieve these rates. \db{These results can also be combined with those of \citet{bolin2018weak} to obtain explicit rates of weak convergence (convergence of functionals of the random field).}
Our approach can, e.g., be used to approximate
stationary Mat\'ern fields with general smoothness, and it is also directly
applicable
to more complicated non-stationary models,
where the covariance function may be unknown.
A general fractional order of the differential operator
opens up for new applications of the SPDE approach,
such as to Gaussian fields with exponential covariances on $\mathbb{R}^2$.
For the Mat\'ern model and its extensions,
it furthermore facilitates likelihood-based (or Bayesian)
inference of all model parameters.
The specific structure of the approximation
then in turn enables a combination with INLA or MCMC
in situations where the Gaussian model is a part
of a more complicated hierarchical model.
We have illustrated the rational SPDE approach
for stationary and for non-stationary Mat\'ern models.
A topic for future research is to apply the approach
to other random field models in statistics
which are difficult to approximate by GMRFs,
such as to models with long-range dependence \citep{lilly2017fractional}
based on the fractional Brownian motion.
Another topic for future research is to modify the
rational SPDE approach by replacing the FEM basis
by a multiresolution basis and to compare this approach
to other multiresolution approaches such as that of \citet{katzfuss2017multi}.
Finally, it is also of interest to extend the method to non-Gaussian versions
of the SPDE-based Mat\'ern models \citep{wallin15},
since the Markov approximation considered by \cite{wallin15} is only computable
under the restrictive requirement $\beta \in \mathbb{N}$.
| {'timestamp': '2019-04-19T02:24:52', 'yymm': '1711', 'arxiv_id': '1711.04333', 'language': 'en', 'url': 'https://arxiv.org/abs/1711.04333'} |
\section{\boldmath Axion solution of the strong $CP$ puzzle and the axion mass}
Already in the early days of Quantum Chromodynamics (QCD) it was realised that the most generic
Lagrangian of QCD contained also a term of the form
${\mathcal L}_{\rm QCD} \supset -
\frac{\alpha_s}{8\pi}\, \bar\theta \,G_{\mu\nu}^b \tilde{G}^{b,\mu\nu}$,
where $\alpha_s$ is the strong coupling, $G_{\mu\nu}^b$ is the gluonic field strength, $\tilde{G}^{b,\mu\nu}$
its dual, and $\bar\theta \in [-\pi,\pi]$ an angular parameter. This term violates parity ($P$) and time-reversal ($T$)
invariances
and, due to the $CPT$ conservation theorem, also $CP$ invariance. Consequently, it induces $CP$ violation in
flavor-diagonal strong interactions,
notably non-zero electric dipole moments of nuclei. However, none has been detected to date. The best constraint
currently comes from the electric dipole moment of the neutron, which is bounded by $|d_n|< 2.9\times 10^{-26} e$\,cm.
A comparison with the
prediction, $d_n \sim e\, \bar\theta\, m^\ast_q/m_n^2 \sim 6\times 10^{-17}\,\bar\theta\; e$\,cm, where $m^\ast_q \equiv m_u m_d/(m_u+m_d)$ is the reduced quark mass and $m_n$ the neutron mass, leads to the conclusion that $|\bar\theta | \lesssim 2.9\times 10^{-26}/(6\times 10^{-17}) \approx 5\times 10^{-10}$, i.e., $|\bar\theta |< 10^{-9}$.
This is the strong $CP$ puzzle.
In Peccei-Quinn (PQ) extensions~\cite{Peccei:1977hh} of the Standard Model (SM), the symmetries of the latter
are extended by a global $U(1)_{\rm PQ}$ symmetry which is
spontaneously broken by the vacuum expectation value (VEV)
of a new complex singlet scalar
field, $\langle{|\sigma |^2}\rangle=v_{\rm PQ}^2/2$, which is assumed to be much larger than the Higgs VEV.
SM quarks or new exotic quarks are supposed to carry PQ charges such that
$U(1)_{\rm PQ}$ is also broken by the gluonic triangle anomaly,
$\partial_\mu J_{U(1)_{\rm PQ}}^\mu \supset
-\frac{\alpha_s}{8\pi}\,N_{\rm DW}\, G_{\mu\nu}^a \tilde G^{a\,\mu\nu}$,
where $N_{\rm DW}$ is a model-dependent integer.
Under these circumstances and at energies above the confinement scale $\Lambda_{\rm QCD}$
of QCD, but far below $v_{\rm PQ}$, the PQSM
reduces to the SM plus a pseudo Nambu-Goldstone boson~\cite{Weinberg:1977ma,Wilczek:1977pj} -- the axion $A$ --
whose field, $\theta (x) \equiv A(x)/f_A\in [-\pi,\pi]$, corresponding to the angular degree of freedom
of $\sigma$, acts as a space-time dependent $\bar\theta$
parameter,
${\mathcal L}_\theta \supset
\frac{f_A^2}{2} \,\partial_\mu \theta \partial^\mu \theta
- \frac{\alpha_s}{8\pi}\,\theta(x)\,G_{\mu\nu}^c {\tilde G}^{c,\mu\nu}$,
with $f_A \equiv v_{\rm PQ}/N_{\rm DW}$.
Therefore, the $\overline\theta$-angle can be eliminated by a shift $\theta (x) \to \theta (x) -\overline\theta$.
At energies below $\Lambda_{\rm QCD}$,
the effective potential of the shifted field, which for convenience we again denote by $\theta(x)$, will then coincide
with the vacuum energy of QCD as a function of $\overline\theta$, which, on general
grounds, has an absolute
minimum at $\theta =0$, implying that there is no strong $CP$ violation: $\langle \theta\rangle =0$. In particular,
$V(\theta ) = \frac{1}{2} \chi \theta^2 + {\mathcal O}(\theta^4) $,
where $\chi\equiv \int d^4x\, \langle q(x)\,q(0)\rangle$, with $q(x)\equiv \frac{\alpha_s}{8\pi}\,G_{\mu\nu}^c(x) {\tilde G}^{c,\mu\nu}(x)$, is the topological susceptibility.
A recent lattice determination found~\cite{Borsanyi:2016ksw}
$\chi = [75.6(1.8)(0.9) {\rm MeV}]^4$, which agrees well with the result from NLO chiral perturbation theory~\cite{diCortona:2015ldu},
$\chi = [75.5(5) {\rm MeV}]^4$, leading to the following prediction of the axion mass in terms of the
axion decay constant $f_A$,
\begin{equation}
\label{zeroTma}
m_A\equiv \frac{1}{f_A}\sqrt{\frac{d^2 V}{d\theta^2}}{|_{\theta = 0}}= \frac{\sqrt{\chi}}{f_A} =
57.0(7)\, \left(\frac{10^{11}\rm GeV}{f_A}\right)\mu \textrm{eV}.
\end{equation}
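As a check on the numbers, $\sqrt{\chi} = (75.6\,\textrm{MeV})^2 \simeq 5.7\times 10^{-3}\,\textrm{GeV}^2$,
so that $f_A = 10^{11}$\,GeV indeed gives
$m_A = \sqrt{\chi}/f_A \simeq 5.7\times 10^{-14}$\,GeV $\simeq 57\,\mu$eV.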
\section{Axion cold dark matter in the case of post-inflationary PQ symmetry breaking}
In a certain range of its decay constant, the axion not only solves the strong $CP$ puzzle, but is also a cold dark matter
candidate~\cite{Preskill:1982cy,Abbott:1982af,Dine:1982ah}. The extension of this range depends critically on the cosmological history. It is particularly constrained in the case on which we concentrate here: post-inflationary PQ symmetry restoration and subsequent breaking.\footnote{Remarkably, this case is strongly favored in the case of
saxion (modulus of $\sigma$) or saxion/Higgs inflation~\cite{Ballesteros:2016xej}.}
In the early universe, after the PQ phase transition, the axion field takes on
random initial values in domains of the size of the causal horizon.
Within each domain, the axion field evolves according to
\begin{equation}
\label{KG}
\ddot \theta + 3 H(T) \dot\theta + \frac{\chi(T)}{f_A^2} \sin\theta= 0 ,
\end{equation}
with temperature dependent Hubble expansion rate $H(T)\sim T^2/M_P$ and topological
susceptibility~\cite{Pisarski:1980md} $\chi (T)\propto T^{-(7 + n_f/3)}$, for temperatures far above the QCD
quark-hadron crossover, $T_c^{\rm QCD}\simeq 150$\,MeV ($n_f$ is the number of active quark flavors).
At very high temperatures, $v_{\rm PQ} > T\gg T_c^{\rm QCD}$, the Hubble friction term is much larger than the potential term in (\ref{KG}), $3 H(T)\gg \sqrt{\chi(T)}/f_A$, and the axion field is frozen at its initial value. At temperatures around a GeV,
however, when $\sqrt{\chi(T)}/f_A \simeq 3 H(T)$, the field starts to
evolve towards the minimum of the potential and to oscillate around the $CP$ conserving ground state.
Such a spatially coherent oscillation has an equation of state
like cold dark matter, $w_A \equiv p_A/\rho_A \simeq 0$ (here $p_A$ and $\rho_A$ are the pressure and the
energy density of the axion field, respectively).
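The two regimes can be visualized by integrating \eqref{KG} in a toy, dimensionless setting.
The following Matlab sketch (with purely illustrative numbers) assumes radiation domination,
$H = 1/(2t)$, and a mass $m_A(T) = \sqrt{\chi(T)}/f_A$ that grows as $t^2$
(corresponding to $\chi \propto T^{-8}$ with $T \propto t^{-1/2}$)
until a crossover time $t_c$, after which it is frozen:
\begin{verbatim}
m1 = 20; tc = 2; theta0 = 1;   % toy parameters, illustrative only
mA = @(t) m1 * min(t, tc)^2;   % m ~ t^2, frozen for t > tc
rhs = @(t, z) [z(2); -(3/(2*t))*z(2) - mA(t)^2 * sin(z(1))];
[t, z] = ode45(rhs, [0.01, 10], [theta0; 0], odeset('RelTol', 1e-8));
plot(t, z(:, 1));   % theta frozen early, oscillating once m ~ 3H
\end{verbatim}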
Averaging over the initial values of the axion field in the many domains filling our universe -- at temperatures around
a GeV the size of a domain is around a milliparsec -- one
obtains~\cite{Borsanyi:2016ksw,Ballesteros:2016xej}
$\Omega_A^{\rm (VR)}h^2 =
(3.8\pm 0.6 )\times 10^{-3} \,\left(f_A \over { 10^{10}\, {\rm GeV}}\right)^{1.165}$,
for the fractional contribution of axion cold dark matter to the energy density of the universe
from this so-called vacuum realignment (VR) mechanism~\cite{Preskill:1982cy,Abbott:1982af,Dine:1982ah}.
Here, the exponent, $1.165$, arises from the temperature dependence of
$\chi(T)$ at $T\sim $\,GeV, which has recently been determined quite precisely
from lattice QCD~\cite{Borsanyi:2016ksw}. Requiring, that the axion dark matter abundance
should not exceed the observed one, this result implies a lower limit on the axion
mass~\cite{Borsanyi:2016ksw}:
\begin{equation}
m_A > 28(2)\,\mu{\rm eV} \,.
\end{equation}
However, so far we have neglected that the domain-like structure discussed above comes along with a network of one and two dimensional topological defects -- strings~\cite{Davis:1986xc} and domain walls~\cite{Sikivie:1982qv} -- which are formed at the boundaries of the domains. Their collapse will also produce axions.
Axion strings are formed at the same time when the domain-like structure appears, cf. at the PQ phase transition.
In the string cores, of typical radius $1/m_\rho$, where $m_\rho \equiv \sqrt{2\lambda_\sigma} v_{\rm PQ}$ is the mass of the saxion (the particle excitation of the modulus of $\sigma$),
topology hinders the breaking of the PQ symmetry and a huge energy density
is stored. As the network evolves, the overall string length decreases by straightening and collapsing loops.
Moreover, some energy is radiated in the form of low-momentum axions. The energy density in the network of global strings is expected to reach a scaling behaviour,
$\rho_{\rm S} = \zeta \frac{\mu_{\rm S}}{t^2}$,
with string tension $\mu_{\rm S} \equiv \pi v_{\rm PQ}^2 \ln\left(\frac{m_\rho t }{\sqrt{\zeta}}\right)$,
where $\zeta$ is independent of time.
This scaling behavior implies that the number density of axions radiated from strings (S) can be
estimated as
\begin{equation}
\label{Nastring}
{n_{A}^{\rm (S)}(t)}
\simeq
\frac{\zeta}{\epsilon}\frac{v_{\rm PQ}^2}{t}\left[\ln\left(\frac{m_\rho t}{\sqrt{\zeta}}\right)-3\right] ,
\end{equation}
where the dimensional parameter $\epsilon$ gives a measure of the average energy of the radiated axions in units
of the Hubble scale, $\epsilon \equiv \langle E_A\rangle/(2\pi /t)$.
A number of field theory simulations have indicated that the network of strings evolves indeed toward the scaling solution
with $\zeta = {\mathcal O}(1)$ and $\epsilon ={\mathcal O}(1)$.
The latter value implies that most of the axions produced from strings become non-relativistic during the radiation-dominated era and contribute to the cold dark matter abundance. Adopting the values~\cite{Kawasaki:2014sqa}
$\zeta = 1.0 \pm 0.5$ and $\epsilon = 4.02 \pm 0.70$, one finds from (\ref{Nastring}) for the contribution of strings to today's dark matter abundance~\cite{Ballesteros:2016xej}
$\Omega_A^{\rm (S)} h^2
\approx
7.8^{+6.3}_{-4.5} \times 10^{-3} \times N_{\rm DW}^2
\left( \frac{f_A}{10^{10}\ {\rm GeV}}\right)^{1.165}$,
where the upper and lower end correspond to the maximum and minimum values obtained by using the above
error bars on $\zeta$ and $\epsilon$. They do not take into account a possible large theoretical error due to the fact that
the field theory simulations can only be performed at values of $\ln ( m_\rho t )\sim$\,a few, much smaller than the realistic value, $\sim 50$, and thus require an extrapolation.
Domain walls appear at temperatures of the order of a GeV, when the axion field, in any of the causally connected domains at this epoch, relaxes into one of the $N_{\rm DW}$ distinct but degenerate minima of the
effective potential, $V(A,T) = \chi (T) \left[ 1- \cos ( N_{\rm DW} A/v_{\rm PQ} )\right]$, in the
interval $-\pi v_{\rm PQ}\leq A \leq +\pi v_{\rm PQ}$. Between the domains, there appear two dimensional
topological defects dubbed domain walls, whose thickness and stored energy density are controlled by $\chi (T)$. Importantly, each string is attached to $N_{\rm DW}$ domain walls,
because the phase of the PQ field $\sigma$ must vary from $-\pi$ to $\pi$ around the string core.
Therefore, hybrid networks of strings and domain walls, so-called string-wall systems, are formed at $T={\mathcal O}(1)$\,GeV.
Their evolution strongly depends on the model-dependent value of $N_{\rm DW}$.
For $N_{\rm DW} = 1$, each string is pulled by a single domain wall, which causes the system to disintegrate into smaller pieces, each a wall bounded by a string~\cite{Vilenkin:1982ks}.
String-wall systems are short-lived in this case, and their collapse (C) contributes an amount~\cite{Kawasaki:2014sqa}
$\Omega_A^{\rm (C)} h^2
\approx
3.9^{+2.3}_{-2.1} \times 10^{-3} \times
\left( \frac{f_A}{10^{10}\ {\rm GeV}}\right)^{1.165}$
to dark matter,
resulting in a total abundance
\begin{equation}
\Omega_A h^2
\approx \left( \Omega_A^{\rm{(VR)}} + \Omega_A^{\rm{(S)}} + \Omega_A^{\rm{(C)}}\right) h^2
\approx 1.6^{+1.0}_{-0.7}\times 10^{-2}\times \left(\frac{f_A}{10^{10}\,\mathrm{GeV}}\right)^{1.165}.
\label{omega_a_tot_short}
\end{equation}
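Setting \eqref{omega_a_tot_short} equal to the observed cold dark matter abundance,
$\Omega_{\rm CDM} h^2 \simeq 0.12$, and inverting for the central value gives
\begin{equation*}
f_A \simeq 10^{10}\,{\rm GeV}\times\left(\frac{0.12}{1.6\times 10^{-2}}\right)^{1/1.165}
\simeq 5.6\times 10^{10}\,{\rm GeV};
\end{equation*}
propagating the quoted uncertainty in \eqref{omega_a_tot_short} then yields the range stated below.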
Therefore, in post-inflationary PQ symmetry breaking models with $N_{\rm DW}=1$, the axion may explain all of cold dark matter in the universe
if its decay constant and mass are in the range
\begin{equation}
\label{mass_range}
f_A \approx (3.8-9.9)\times 10^{10}\,{\rm GeV}\hspace{3ex} \Leftrightarrow\hspace{3ex}
m_A \approx (58 - 150)\ \mu{\rm eV}\,.
\end{equation}
This prediction, however, has recently been challenged by the results from a new field theory simulation
technique
designed to work directly at high string tension with $\ln (m_\rho t)\sim 50$ and
to treat vacuum realignment, string, and string-wall contributions in a unified way~\cite{Klaer:2017ond}.
The reported dark matter axion mass,
\begin{equation}
m_A=(26.2 \pm 3.4)\,\mu{\rm eV}\,,
\end{equation}
where the error now only includes the uncertainty from $\chi(T)$,
is significantly lower than (\ref{mass_range}). It indicates that axions from strings and walls are negligible,
despite the fact that the string networks appear to have a higher energy density ($\zeta \sim 4$) than those observed in conventional field theoretic simulations ($\zeta\sim 1$). This implies that the produced axions have a larger energy, $\epsilon \sim 40$,
and that dynamics at smaller scales -- outside the range of applicability of the new simulation method~\cite{Klaer:2017ond} -- can be relevant for the determination of the axion DM abundance.
Further studies of the dynamics of string-wall systems, including precise modelling of the physics at smaller distance scales, are therefore required.
Fortunately, there are new axion dark matter direct detection experiments aiming to probe
the mass region of interest for $N_{\rm DW}=1$ models with post-inflationary PQ symmetry breaking, notably CULTASK~\cite{Chung:2016ysi}, HAYSTAC~\cite{Zhong:2018rsr}, and
MADMAX~\cite{TheMADMAXWorkingGroup:2016hpc}.
For $N_{\rm DW} > 1$, the string-wall systems are stable, since the strings are pulled in
$N_{\rm DW}$ different directions. The existence of such stable domain walls is firmly excluded by standard cosmology~\cite{Zeldovich:1974uw}. Stability can be avoided if there exist further interactions which explicitly break the
PQ symmetry, e.g.
${\mathcal L} \supset
g M_P^4 \left(\frac{\sigma}{M_P} \right)^N
+h.c.$,
where $g$ is a complex dimensionless coupling, $M_{P}$
is the reduced Planck mass, and $N$ is an integer ($>4$). The appearance of such terms is motivated by the fact that global symmetries are not protected from effects of quantum gravity.
They give rise to an additional contribution in the low energy effective potential of the axion field, which lifts the degeneracy of the minima of the QCD induced potential by an amount~\cite{Ringwald:2015dsf}
$\Delta V \simeq -2 |g| M_P^4 \left(\frac{v_{\rm PQ}}{\sqrt{2}M_P} \right)^N \left[
\cos \left( \frac{2\pi N}{N_{\rm DW}} + \Delta_D \right) - \cos \Delta_D \right]
$,
where $\Delta_D = \arg(g) - N \overline{\theta}$,
and acts like a volume pressure on domain walls.
If $\Delta V$ is small, domain walls live for a long time and emit a lot of axions, potentially overclosing the universe.
On the other hand, if $\Delta V$ is large, it shifts the location of the minimum of the axion effective potential and leads to large $CP$ violation, spoiling the axionic solution of the strong $CP$ problem.
A detailed investigation of the parameter space exploiting the results of field theory simulations~\cite{Kawasaki:2014sqa}
showed~\cite{Ringwald:2015dsf} that there exists a valid region in parameter space if
$N = 9$ or $10$.\footnote{The absence of PQ symmetry breaking operators with $4<N<9$ can be naturally explained if the PQ symmetry arises accidentally as a low energy remnant
from a more fundamental
discrete symmetry~\cite{Ringwald:2015dsf,Ernst:2018bib}.} In the case of $N_{\rm DW}=6$ and $N=9\,(10)$, and allowing a mild tuning of $|g|$,
the axion can explain the observed dark matter abundance
for
\begin{equation}
4.4\times 10^7\,(1.3\times 10^9)\,{\rm GeV} < f_A < 1\times 10^{10}\,{\rm GeV}\ \Leftrightarrow \
0.56\,{\rm meV} < m_A < 130\,(4.5)\,{\rm meV}\, .
\end{equation}
Intriguingly, a DFSZ axion ($N_{\rm DW}=6$) in such a mass range can explain the accumulating hints of excessive energy losses of stars in various stages of their evolution~\cite{Giannotti:2017hny}.
In this range, axion dark matter direct detection may be difficult, but not impossible~\cite{Horns:2012jf,Baryakhtar:2018doz}.
Fortunately, it is aimed to be probed by the fifth force experiment ARIADNE~\cite{Arvanitaki:2014dfa} and the helioscope
IAXO~\cite{Armengaud:2014gea}.
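As a sanity check on the quoted correspondence between $f_A$ and $m_A$, one can use the standard zero-temperature relation $m_A \simeq 5.7\,\mu{\rm eV}\times(10^{12}\,{\rm GeV}/f_A)$. The following minimal Python sketch (our function name) reproduces the end points of the range above:
\begin{verbatim}
def axion_mass_ueV(f_A_GeV):
    # Zero-temperature QCD axion mass in micro-eV.
    return 5.7 * (1e12 / f_A_GeV)

for f_A in (4.4e7, 1.3e9, 1e10):
    print(f"f_A = {f_A:.1e} GeV -> m_A = {axion_mass_ueV(f_A)/1e3:.2f} meV")
# -> about 130 meV, 4.4 meV and 0.57 meV, matching the quoted range.
\end{verbatim}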
\section{Introduction \label{sec:intro}}
Subluminous B stars (sdBs) show similar colours and spectral characteristics to main sequence stars of
spectral type B, but are less luminous. Compared to main sequence B stars, the hydrogen Balmer lines in the spectra
of sdBs are stronger while the helium lines are much weaker. The strong line broadening and the early confluence of the
Balmer series is caused by the high surface gravities ($\log\,g\simeq5.0-6.0$) of these compact stars
($R_{\rm sdB}\simeq0.1-0.3\,R_{\rm \odot}$). Subluminous B stars are considered to be core helium-burning stars with
very thin hydrogen envelopes and masses of about half a solar mass (Heber \cite{heber86}) located at the extreme end of the horizontal branch (EHB).
\subsection{Hot subdwarf formation \label{sec:formation}}
The origin of EHB stars is still unknown (see Heber
\cite{heber09} for a review). The key question is how all but a tiny fraction of the red-giant progenitor's hydrogen envelope was removed just as the helium core attained the mass ($\simeq0.5\,M_{\rm \odot}$) required to ignite the helium flash. The reason for this high mass loss at the tip of the red giant branch (RGB) is unclear. Several single-star scenarios are under discussion (D'Cruz et al. \cite{dcruz96}; Sweigart \cite{sweigart97}; De Marchi \& Paresce \cite{demarchi96}; Marietta et al. \cite{marietta00}), which require either a fine-tuning of parameters or extreme environmental conditions that are unlikely to be met for the bulk of the observed subdwarfs in the field.
According to Mengel et al. (\cite{mengel76}), the required strong mass loss can occur in a close-binary system. The progenitor of the sdB star has to fill its Roche lobe near the tip of the RGB to lose a large part of its hydrogen envelope. The merger of close binary white dwarfs was investigated by Webbink (\cite{webbink84}) and Iben \& Tutukov (\cite{iben84}), who showed that an EHB star can form when two helium core white dwarfs (WDs) merge and the product is sufficiently massive to ignite helium. Politano et al. (\cite{politano08}) proposed that the merger of a red giant and a low-mass main-sequence star during a common envelope (CE) phase may lead to the formation of a rapidly rotating single hot subdwarf star.
Maxted et al. (\cite{maxted01}) determined a very high fraction of radial velocity variable sdB stars, indicating that about two thirds of the sdB stars in the field are in close binaries with periods of less than 30 days (see also Morales-Rueda et al. \cite{morales03}; Napiwotzki et al. \cite{napiwotzki04a}; Copperwheat et al. \cite{copperwheat11}). Han et al. (\cite{han02,han03}) used binary population synthesis models to study the stable Roche lobe overflow (RLOF) channel, the common envelope ejection channel, where the mass transfer to the companion is dynamically unstable, and the He-WD merger channel.
The companions are mostly main sequence stars or white dwarfs. If the white dwarf companion is sufficiently massive, the merger of the binary system might exceed the Chandrasekhar mass and explode as a type Ia supernova. Indeed, Maxted et al. (\cite{maxted00}) found the sdB+WD binary KPD\,1930$+$2752 to be a system that might qualify as a supernova Ia progenitor (see also Geier et al. \cite{geier07}). In Paper~I of this series (Geier et al. \cite{geier10b}) more candidate systems with massive compact companions, either massive white dwarfs or even neutron stars and black holes, have been found. Furthermore, Geier et al. (\cite{geier11c}) reported the discovery of an eclipsing sdB binary with a brown dwarf companion.
\begin{figure}[t!]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics{tefflogg_vsini.eps}}
\caption{$T_{\rm eff}-\log{g}$-diagram for the entire sample (not RV-variable) under study.
The helium main sequence (HeMS) and the EHB band (limited by the zero-age
EHB, ZAEHB, and the terminal-age EHB, TAEHB) are superimposed with EHB evolutionary tracks for solar metallicity taken from
Dorman et al. (\cite{dorman93}) labelled with their masses. Open circles mark objects where only upper limits could be derived for $v_{\rm rot}\sin{i}$, filled circles objects with significant $v_{\rm rot}\sin{i}$. The size of the symbols scales with the value of $v_{\rm rot}\sin{i}$.}
\label{fig:tefflogg}
\end{center}
\end{figure}
\subsection{Rotation on the horizontal branch \label{sec:rotation}}
The rotational properties of horizontal branch (HB) stars both in globular clusters and in the field all the way from the red to the blue end have been studied extensively in the last decades (Peterson \cite{peterson83b}, \cite{peterson85}; Peterson et al. \cite{peterson83a}, \cite{peterson95}; Behr et al. \cite{behr00a}, \cite{behr00b}; Kinman et al. \cite{kinman00}; Recio-Blanco et al. \cite{recio02}, \cite{recio04}; Behr \cite{behr03a}, \cite{behr03b}; Carney et al. \cite{carney03}, \cite{carney08}). Most of these investigations were motivated by the puzzling horizontal branch morphologies in some globular clusters and the search for second or third parameters responsible for this phenomenon. The most interesting result of these studies is the discovery of a significant change in the rotational velocities of blue horizontal branch (BHB) stars when their effective temperatures exceed $\simeq11\,500\,{\rm K}$. HB stars cooler than this threshold value show ${v_{\rm rot}\sin\,i}$ values up to $40\,{\rm km\,s^{-1}}$, while the hotter stars rotate with velocities lower than $\simeq10\,{\rm km\,s^{-1}}$.
The transition in rotational velocity is accompanied by a jump towards brighter magnitudes in the colour-magnitude diagram (Grundahl et al. \cite{grundahl99}) and a change in the atmospheric abundance pattern. Stars cooler than $\simeq11\,500\,{\rm K}$ show the typi\-cal abundances of their parent population (e.g. For \& Sneden \cite{for10}), while stars hotter than that are in general depleted in helium and strongly enriched in iron and other heavy elements such as titanium or chromium. Lighter elements such as magnesium and silicon on the other hand have normal abundances (Behr et al. \cite{behr03a,behr03b}; Fabbian et al. \cite{fabbian05}; Pace et al. \cite{pace06}). Diffusion processes in the stellar atmosphere are most likely responsible for this effect. Michaud et al. (\cite{michaud83}) predicted such abundance patterns before the anomalies were observed (see also Michaud et al. \cite{michaud08}). Caloi (\cite{caloi99}) explained the sharp transition between the two abundance patterns as the disappearance of subsurface convection layers at a critical temperature. Sweigart (\cite{sweigart02}) indeed found that thin convective layers below the surface driven by hydrogen ionization should exist and shift closer to the surface when the effective temperature increases. At about $12\,000\,{\rm K}$ the convection zone reaches the surface and the outer layer of the star becomes fully radiative. Since convection is very efficient in mixing the envelope, diffusion processes do not operate in HB star atmospheres of less than $12\,000\,{\rm K}$.
Slow rotation is considered as a prerequisite for diffusion. Michaud (\cite{michaud83}) was the first to show that meridional circulation stops the diffusion process as soon as the rotational velocity reaches a critical value and could explain the chemical peculiarity of HgMn stars in this way. Quievy et al. (\cite{quievy09}) performed similar calculations for BHB stars and showed that the critical rotational velocity is somewhere near $\simeq20\,{\rm km\,s^{-1}}$ at the transition temperature of $11\,500\,{\rm K}$. This means that the atmospheric abundances of stars with lower ${v_{\rm rot}\sin\,i}$ should be affected by diffusion processes.
What causes the slow rotation that allows diffusion to happen, is still unclear. Sills \& Pinsonneault (\cite{sills00}) used a standard stellar evolution code and modelled the distribution of rotational velocities on the BHB. In order to reproduce the two populations of fast and slow rotators they assumed two distinct main sequence progenitor populations with different rotational veloci\-ties. In their picture the slowly rotating BHBs originate from slowly rotating main sequence stars.
Another possible explanation is the spin-down of the surface layers by diffusion itself. Sweigart (\cite{sweigart02}) argued that the radiative levitation of iron triggers a weak stellar wind that carries away angular momentum. Vink \& Cassisi (\cite{vink02}) showed that such winds are radiatively driven.
Brown (\cite{brown07}) used a stellar evolution code including rotation and modelled the distribution of rotational velocities on the BHB. This code allows one to follow the evolution of the progenitor star through the He-flash. Brown (\cite{brown07}) argues that no signifi\-cant angular momentum is exchanged between the stellar core and stellar envelope during the flash. The surface rotation of their models highly depends on the rotation of the surface convection zone, which contains most of the outer envelope's angular momentum. Hot BHB stars without surface convection zone rotate slower than the cooler ones with convection zone. This approach allows one to reproduce the observed ${v_{\rm rot}\sin\,i}$-distribution of BHB stars without assuming bimodal stellar po\-pulations (Brown et al. \cite{brown08}).
While the rotational properties of horizontal branch stars both in
globular clusters and in the field are thoroughly examined, the investigation of EHB stars has mostly been restricted to close binary systems, where tidal interaction plays a major role (Geier et al. \cite{geier10b}). Very few apparently single EHB stars have been studied so far, all of which are slow rotators ($<10\,{\rm km\,s^{-1}}$, e.g. Heber et al. \cite{heber00}; Edelmann \cite{edelmann01}).
In this paper we determine the projected rotational velocities of more than a hundred sdB stars by measuring the broadening of metal lines. In Paper~I (Geier et al. \cite{geier10b}) the rotational properties of sdBs in close binary system were derived and used to clarify the nature of their unseen companions. Here we focus on the rotational properties of apparently single sdBs and wide binary systems, for which tidal interactions become negligible.
In Sect.~\ref{sec:obs} we give an overview of the observations of high-resolution spectra and the atmospheric parameters of our sample. The determination of the rotational properties of 105 sdB stars are described in Sect.~\ref{sec:rotlow}, the results are interpreted in Sect.~\ref{sec:distrib} and compared to the corresponding results for BHB stars in Sect.~\ref{sec:bhb}. The implications for the sdB formation scenarios and the further evolution to the white dwarf cooling tracks are discussed in Sect.~\ref{sec:implications} and Sect.~\ref{sec:wd}, respectively. Finally, a summary is given in Sect.~\ref{sec:summary}.
\section{Observations and atmospheric parameters \label{sec:obs}}
ESO-VLT/UVES spectra were obtained in the course of the ESO Supernovae Ia
Progenitor Survey (SPY, Napiwotzki et al. \cite{napiwotzki01, napiwotzki03})
at spectral resolution $R\simeq20\,000-40\,000$ covering
$3200-6650\,{\rm \AA}$ with two small gaps at $4580\,{\rm \AA}$ and
$5640\,{\rm \AA}$. Each of the 50 stars was observed at least twice (Lisker et al. \cite{lisker05}).
Another sample of 46 known bright subdwarfs was observed with the
FEROS spectrograph ($R=48\,000$, $3750-9200\,{\rm \AA}$) mounted at the ESO/MPG
2.2m telescope (Geier et al. \cite{geier12}).
Six stars were observed with the FOCES spectrograph
($R=30\,000$, $3800-7000\,{\rm \AA}$) mounted at the CAHA 2.2m telescope (Geier et al. \cite{geier12}).
Two stars were observed with the HIRES instrument ($R=45\,000$,
$3600-5120\,{\rm \AA}$) mounted at the Keck telescope (Heber et al. \cite{heber00}).
One star was observed with the HRS fiber spectrograph at the Hobby Eberly Telescope ($R=30\,000$, $4260-6290\,{\rm \AA}$, Geier et al. \cite{geier10b}).
Because a wide slit was used in the SPY survey and the seeing
disk did not always fill the slit, the instrumental profile of some of the UVES spectra was seeing-dependent.
This has to be accounted for to estimate the instrumental resolution (see Paper~I).
The resolution of the spectra taken with the fiber spectrographs FEROS and FOCES was assumed to be constant.
The single spectra of all programme stars were radial-velocity (RV) corrected and co-added in
order to achieve higher signal-to-noise.
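As an illustration of this step, a minimal Python sketch is given below; it assumes a common wavelength grid and non-relativistic Doppler shifts, and all names are ours:
\begin{verbatim}
import numpy as np

C_KMS = 299792.458  # speed of light [km/s]

def coadd_rv_corrected(wave, fluxes, rvs):
    # Shift each spectrum to the rest frame using its measured radial
    # velocity, resample onto the common grid, and average.
    shifted = []
    for flux, rv in zip(fluxes, rvs):
        rest_wave = wave / (1.0 + rv / C_KMS)  # observed -> rest frame
        shifted.append(np.interp(wave, rest_wave, flux))
    return np.mean(shifted, axis=0)  # S/N grows roughly as sqrt(N)
\end{verbatim}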
Atmospheric parameters of the stars observed with UVES have been determined by Lisker et al. (\cite{lisker05}). HD\,205805 and Feige\,49 have been analysed by Przybilla et al. (\cite{przybilla06}), the two sdB pulsators KPD\,2109+4401 and PG\,1219+534 by Heber et al. (\cite{heber00}), and the sdB binaries PG\,1725+252 and TON\,S\,135 by Maxted et al. (\cite{maxted01}) and Heber (\cite{heber86}), respectively. The rest of the sample was analysed in Geier et al. (\cite{geier12}) and a more detailed publication of these results is in preparation. We adopted the atmospheric parameters given in Saffer et al. (\cite{saffer94}) for $[$CW83$]$\,1758$+$36.
The whole sample under study is listed in Tables~\ref{tab:vrot} and \ref{tab:vrotrv} and the effective temperatures are plotted versus the surface gravities in Fig.~\ref{fig:tefflogg}. Comparing the positions of our sample stars to evolutionary tracks, we conclude that all stars are concentrated on or above the EHB, which is fully consistent with the theory. We point out that the inaccuracies in the atmospheric parameters do not significantly affect the derived projected rotational velocities.
\section{Projected rotational velocities from metal lines
\label{sec:rotlow}}
To derive $v_{\rm rot}\,\sin{i}$, we compared the observed spectra
with rotationally broadened, synthetic line profiles using a semi-automatic
analysis pipeline. The profiles were computed for the appropriate atmospheric parameters using the LINFOR program (developed by Holweger, Steffen and Steenbock at Kiel University, modified by Lemke \cite{lemke97}).
For a standard set of up to 187 unblended metal lines from 24 different ions and with
wavelengths ranging from $3700$ to $6000\,{\rm \AA}$, a model grid with
appropriate atmospheric parameters and different elemental abundances was
automatically generated with LINFOR. The actual number of lines used as input
for an individual star depends on the wavelength coverage. Owing to the
insufficient quality of the spectra and the contamination with telluric features,
we excluded the regions blueward of $3700\,{\rm \AA}$ and redward of
$6000\,{\rm \AA}$ from our analysis. A simultaneous fit of
elemental abundance, projected rotational velocity and radial velocity was
then performed separately for each identified line using the FITSB2
routine (Napiwotzki et al. \cite{napiwotzki04b}). A detailed investigation of statistical and systematic
uncertainties of the techniques applied is presented in Paper~I. Depending on the quality of the data and
the number of metal lines used, an accuracy of about $1.0\,{\rm km\,s^{-1}}$ can be achieved.
For the best spectra with highest resolution the detection limit is about $5.0\,{\rm km\,s^{-1}}$.
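The principle of this fit can be sketched in a few lines of Python. The kernel below is the classical rotational broadening profile with linear limb darkening; the limb-darkening coefficient and all names are our assumptions, and the actual analysis was performed with LINFOR and FITSB2 as described above.
\begin{verbatim}
import numpy as np

C_KMS = 299792.458

def rot_kernel(dl, lam0, vsini, eps=0.6):
    # Rotational broadening profile on wavelength offsets dl [AA] around
    # line centre lam0 [AA], with linear limb-darkening coefficient eps.
    # The dl grid must resolve the kernel half-width dl_max.
    dl_max = lam0 * vsini / C_KMS
    x = dl / dl_max
    k = np.zeros_like(x)
    m = np.abs(x) < 1.0
    k[m] = (2.0 * (1.0 - eps) * np.sqrt(1.0 - x[m]**2)
            + 0.5 * np.pi * eps * (1.0 - x[m]**2))
    return k / k.sum()  # normalised discrete kernel

def best_vsini(dl, observed, synthetic, lam0, grid):
    # Chi-square grid search over trial v sin i values.
    chi2 = [np.sum((observed - np.convolve(synthetic,
                    rot_kernel(dl, lam0, v), mode="same"))**2)
            for v in grid]
    return grid[int(np.argmin(chi2))]
\end{verbatim}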
Projected rotational velocities of 105 sdBs have been measured (see Tables~\ref{tab:vrot}, \ref{tab:vrotrv}). Ninety-eight sdBs do not show any RV variability. In addition, seven are radial velocity variable systems with orbital periods of about a few days (see Table~\ref{tab:vrotrv}).
\begin{figure}[t!]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics{distrib_all.eps}}
\caption{Distribution of ${v_{\rm rot}\sin\,i}$ for the full sample. Objects with limits below the detection limit have been stacked into the first dotted bin.}
\label{fig:distriball}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics{distrib_single.eps}}
\caption{Distribution of ${v_{\rm rot}\sin\,i}$ for 71 single stars from our sample using the same binning as in Fig.~\ref{fig:distriball}. The solid grey line marks the distribution of ${v_{\rm rot}\sin\,i}$ under the assumption of randomly oriented rotation axes and a constant ${v_{\rm rot}=7.65\,{\rm km\,s^{-1}}}$, which matches the observed distribution very well.}
\label{fig:distribsingle}
\end{center}
\end{figure}
\begin{figure}[t!]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics{distrib_comp.eps}}
\caption{Distribution of ${v_{\rm rot}\sin\,i}$ for 16 sdBs with companions visible in the spectra using the same binning as in Fig.~\ref{fig:distriball}.}
\label{fig:distribcomp}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics{distrib_RV.eps}}
\caption{Distribution of ${v_{\rm rot}\sin\,i}$ for 8 radial velocity variable sdBs with orbital periods exceeding $\simeq1.2\,{\rm d}$ using the same binning as in Fig.~\ref{fig:distriball}.}
\label{fig:distribrv}
\end{center}
\end{figure}
For eleven stars of our sample upper limits for the projected rotational velocities have already been published (Heber et al. \cite{heber00}; Edelmann et al. \cite{edelmann01}) based on the same spectra as used here (see Table~\ref{tab:vrotlit}). Only for PHL\,932 and PG\,0909$+$276 do our measured $v_{\rm rot}\sin{i}$ values deviate significantly from the results of Edelmann et al. (\cite{edelmann01}), most likely because they used fewer metal lines in their study.
Przybilla et al. (\cite{przybilla06}) performed an NLTE analysis of Feige\,49 and HD\,205805 using the same FEROS spectra as we do here and derived a ${v_{\rm rot}\sin\,i}$ below the detection limit. Again our measurements are consistent with their results, because they are very close to the detection limit we derived for FEROS spectra of sdBs ($\simeq5\,{\rm km\,s^{-1}}$, see Paper~I).
\section{Projected rotational velocity distributions \label{sec:distrib}}
The projected rotational velocities of our full sample of 98 stars without radial velocity variations are all low ($<10\,{\rm km\,s^{-1}}$, see Table~\ref{tab:vrot}). Taking into account the uncertainties, one can see that there is no obvious trend with the atmospheric parameters (see Fig.~\ref{fig:tefflogg}).
Fig.~\ref{fig:distriball} shows the distribution of ${v_{\rm rot}\sin\,i}$ binned to the average measurement error ($1.5\,{\rm km\,s^{-1}}$). Eleven stars that had only fairly weak upper limits of $10\,{\rm km\,s^{-1}}$ were sorted out.
The distribution is very uniform and shows a prominent peak at $6-8\,{\rm km\,s^{-1}}$. Because we can only determine the projected rotation, and because for randomly oriented rotation axes $\sin\,i$ is close to unity for most stars, the true rotational velocities of most stars in the sample should be about $7-8\,{\rm km\,s^{-1}}$.
\subsection{Single-lined sdBs}
Our sample contains 71 single-lined sdBs for which the ${v_{\rm rot}\sin\,i}$ could be constrained. Ten stars for which we were only able to derive upper limits of $10\,{\rm km\,s^{-1}}$ were sorted out. Fig.~\ref{fig:distribsingle} shows the ${v_{\rm rot}\sin\,i}$ distribution of this subsample. Most remarkably, the distribution is almost identical to that of the full sample. Adopting a random distribution of inclination angles and a constant ${v_{\rm rot}}$ of $\simeq8\,{\rm km\,s^{-1}}$, the observed ${v_{\rm rot}\sin\,i}$-distribution can indeed be reproduced very well (see Fig.~\ref{fig:distribsingle}). We therefore conclude that most single sdBs in our sample have very similar rotation velocities.
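This can be verified numerically. A minimal Monte Carlo sketch (our variable names) draws randomly oriented rotation axes, i.e. $\cos{i}$ uniformly distributed, for a constant true rotational velocity and recovers a distribution peaking just below $v_{\rm rot}$, as in Fig.~\ref{fig:distribsingle}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)
v_rot = 7.65                             # constant true velocity [km/s]
cos_i = rng.uniform(0.0, 1.0, 100_000)   # random axis orientations
vsini = v_rot * np.sqrt(1.0 - cos_i**2)

hist, edges = np.histogram(vsini, bins=np.arange(0.0, 12.0, 1.5))
print(hist / hist.sum())        # peaks in the 6.0-7.5 km/s bin
print(vsini.mean() / v_rot)     # mean projection factor pi/4 ~ 0.785
\end{verbatim}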
\subsection{Double-lined sdB binaries}
Our sample contains 18 sdBs with visible spectral signatures of cooler main sequence (MS) companions (e.g. Mg\,{\sc i}, Lisker et al. \cite{lisker05}). Again, two stars with upper limits of $10\,{\rm km\,s^{-1}}$ were excluded.
The orbital periods of these systems are long. Green et al. (\cite{green06}) have argued that such systems should have periods of many months or years. Recently, Deca et al. (\cite{deca12}) were able to determine the orbital period $P\simeq760\,{\rm d}$ of the sdB+K binary PG\,1018$-$047. Similar periods were reported by \O stensen \& van Winckel (\cite{oestensen12}) for eight such binaries. The separations of the components are so wide that tidal interaction is negligible. Main-sequence companions therefore do not affect the rotational properties of the sdB stars in this type of binary.
The distribution for sdBs with composite spectra is displayed in Fig.~\ref{fig:distribcomp}. Taking into account the much smaller sample size, the result is again similar. We therefore conclude that the rotational properties of sdBs in wide binaries with MS companions are the same as those of single sdBs, although they have probably formed in a very different way (see Sect.~\ref{sec:implications}).
\subsection{Pulsating sdBs}
Two types of sdB pulsators are known. The slow pulsations of the V\,1093\,Her stars (sdBV$_{\rm s}$, Green et al. \cite{green03}) are not expected to influence the line broadening significantly (see Geier et al. \cite{geier10b}). For the short-period pulsators (V\,361\,Hya type, sdBV$_{\rm r}$, Charpinet et al. \cite{charpinet97}; Kilkenny et al. \cite{kilkenny97}) unresolved pulsations can severely affect or even dominate the broadening of the metal lines and therefore mimic a high ${v_{\rm rot}\sin\,i}$. Telting et al. (\cite{telting08}) showed that this happens in the case of the hybrid pulsator Balloon\,090100001 using the same method as in this work. Unresolved pulsations are also most likely responsible for the high line broadening ($39\,{\rm km\,s^{-1}}$) measured for the strong pulsator PG\,1605+072 (Heber et al. \cite{heber99}, \cite{heber00}).
Our sample contains three known long-period pulsators (PHL\,44, Kilkenny et al. \cite{kilkenny07}; PHL\,457, Blanchette et al. \cite{blanchette08}; LB\,1516, Koen et al. \cite{koen10}) and two short-period ones (KPD\,2109$+$4401, Bill\`{e}res et al. \cite{billeres98}; PG\,1219$+$534, O'Donoghue et al. \cite{odonoghue99}). The ${v_{\rm rot}\sin\,i}$ of KPD\,2109$+$4401 is indeed among the highest of all sdBs in our sample ($10.5\pm1.6\,{\rm km\,s^{-1}}$), but it is unclear whether this may be partly due to unresolved pulsations. Jeffery \& Pollacco (\cite{jeffery00}) measured RV variations of $2\,{\rm km\,s^{-1}}$ for KPD\,2109$+$4401. Taking this into account, the sdB's rotational velocity may be slightly lower than measured. The ${v_{\rm rot}\sin\,i}$ values of the other pulsators are not peculiar.
For most stars in our sample it is not clear whether they are pulsators or not, because no light curves of sufficient quality are available. Because only about $5\%$ of all sdBs show pulsations detectable from the ground, one may conclude that the contamination by pulsators should be quite low. Thanks to the extensive photometric surveys for sdB pulsators conducted by Bill\`{e}res et al. (\cite{billeres02}), Randall et al. (\cite{randall06}) and \O stensen et al. (\cite{oestensen10}), we know that 27 stars from our sample do not show short-period pulsations.
Restricting ourselves to these objects and again excluding those with visible companions, we constructed a "pure" sample of 16 single sdBs, for which the line broadening is affected neither by the presence of a companion nor by pulsations. The associated ${v_{\rm rot}\sin\,i}$ distribution does not differ from the other distributions (see Figs.~\ref{fig:distriball}-\ref{fig:distribcomp}). We therefore conclude that unresolved pulsations do not significantly affect our results.
\subsection{Radial velocity variable sdBs}
In Paper~I we showed that the ${v_{\rm rot}\sin\,i}$ distribution of sdBs in close binary systems is strongly affected by the tidal interaction with their companions, but that this influence becomes negligible if the orbital periods of the binaries become longer than $\simeq1.2\,{\rm d}$. It is instructive to have a look at the ${v_{\rm rot}\sin\,i}$-distribution of these long-period radial velocity variable systems. From Paper~I we selected all seven binaries with periods longer than $1.2\,{\rm d}$, for which tidal synchronisation is not established. We added the system LB\,1516, a binary with yet unknown orbital parameters, but for which Edelmann et al. (\cite{edelmann05}) provided a lower limit for the period of the order of days\footnote{TON\,S\,135 was not included because the orbital period of $\simeq4\,{\rm d}$ given in Edelmann et al. (\cite{edelmann05}) is not very significant and shorter periods cannot be excluded yet.}.
Fig.~\ref{fig:distribrv} shows the associated distribution. Although two stars have somewhat higher ${v_{\rm rot}\sin\,i}=10-12\,{\rm km\,s^{-1}}$, the distribution is, given the small sample size, again very similar to the distributions shown before (see Figs.~\ref{fig:distriball}-\ref{fig:distribcomp}). Subdwarf B stars in close binaries obviously rotate in the same way as single stars or sdBs with visible companions if the orbital period is sufficiently long.
\section{Comparison with BHB stars \label{sec:bhb}}
Projected rotational velocities of BHB stars have been determined for many globular cluster and field stars (Peterson et al. \cite{peterson95}; Behr \cite{behr03a, behr03b}; Kinman et al. \cite{kinman00}; Recio-Blanco et al. \cite{recio04}). The results are plotted against the effective temperature in Fig.~\ref{fig:vsiniteff}. The characteristic jump in ${v_{\rm rot}\sin\,i}$ at a temperature of about $\simeq11\,500\,{\rm K}$ can be clearly seen. The sdB sequence basically extends the BHB trend to higher temperatures. The ${v_{\rm rot}\sin\,i}$ values remain at the same level as observed in hot BHB stars.
Comparing the ${v_{\rm rot}\sin\,i}$ of BHB and EHB stars, one has to take into account that the radii of both types of horizontal branch stars are quite different, which translates directly into very different angular momenta. While sdBs have surface gravities $\log{g}$ between $5.0$ and $6.0$, the surface gravities of BHB stars range from $\log{g}=3.0$ to $4.0$. The BHB stars with the same rotational velocities as EHB stars have higher angular momenta. Assuming rigid rotation, the same inclination angle of the rotation axis, and the same mass of $\simeq0.5\,M_{\rm \odot}$ for BHB and EHB stars, one can calculate the quantity ${v_{\rm rot}\sin\,i}\times g^{-1/2}$, which is directly proportional to the angular momentum. The surface gravities of the sdBs were taken from the literature (see Sect.~\ref{sec:obs}), those for the BHB stars from Behr (\cite{behr03a, behr03b}) and Kinman et al. (\cite{kinman00}). Since Peterson et al. (\cite{peterson95}) and Recio-Blanco et al. (\cite{recio04}) did not determine surface gravities for their BHB sample, we adopted a $\log{g}$ of $3.0$ for stars with temperatures below $\simeq10\,000\,{\rm K}$ and $3.5$ for the hotter ones as suggested by the results of Behr (\cite{behr03a, behr03b}) and Kinman et al. (\cite{kinman00}).
In Fig.~\ref{fig:lteff} ${v_{\rm rot}\sin\,i}\times g^{-1/2}$ is plotted against $T_{\rm eff}$. The transition between BHB and EHB stars is smooth. Since the progenitors of the EHB stars lost more envelope material on the RGB, the EHB stars are expected to have lower angular momenta than the BHB stars. This is consistent with what can be seen in Fig.~\ref{fig:lteff}.
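For clarity, the proportionality invoked above follows from rigid rotation and the definition of the surface gravity (a sketch under the stated assumptions of equal mass and a common structure constant):
\[
J \propto M R^{2}\,\omega = M R\,v_{\rm rot}\,,\qquad
g=\frac{GM}{R^{2}}\;\Rightarrow\;R\propto\left(\frac{M}{g}\right)^{1/2}
\;\Rightarrow\;J \propto M^{3/2}\,v_{\rm rot}\,g^{-1/2}\,,
\]
so that at fixed mass ($\simeq0.5\,M_{\rm \odot}$) and for a common inclination angle, ${v_{\rm rot}\sin\,i}\times g^{-1/2}$ traces the angular momentum.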
\begin{figure}[t!]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics{vsiniteff.eps}}
\caption{Projected rotational velocity plotted against effective temperature. The grey squares mark BHB and some sdB stars taken from Peterson et al. (\cite{peterson95}), Behr (\cite{behr03a, behr03b}), Kinman et al. (\cite{kinman00}), and Recio-Blanco et al. (\cite{recio04}). Upper limits are marked with grey triangles. The black diamonds mark the sdBs from our sample. The vertical line marks the jump temperature of $11\,500\,{\rm K}$.}
\label{fig:vsiniteff}
\end{center}
\end{figure}
\begin{figure}[t!]
\begin{center}
\resizebox{\hsize}{!}{\includegraphics{lteff.eps}}
\caption{${v_{\rm rot}\sin\,i}\times g^{-1/2}$ plotted against effective temperature. The grey squares mark BHB and some sdB stars taken from Peterson et al. (\cite{peterson95}), Behr (\cite{behr03a, behr03b}), Kinman et al. (\cite{kinman00}), and Recio-Blanco et al. (\cite{recio04}). Upper limits are marked with grey triangles. The black diamonds mark the sdBs from our sample. The vertical line marks the jump temperature of $11\,500\,{\rm K}$. Typical uncertainties for the sdBs are given in the upper right corner.}
\label{fig:lteff}
\end{center}
\end{figure}
\section{Implications for hot subdwarf formation \label{sec:implications}}
The uniform distribution of low projected rotational velocities in single and wide binary sdBs has consequences for the open question of hot subdwarf formation. As shown in this study, sdBs appear to rotate at low but spectroscopically detectable velocities of $8-10\,{\rm km\,s^{-1}}$. These results are remarkably similar to those derived for their cooler relatives, the BHB stars. Hot subdwarfs are likely formed through binary interaction or merging, which is also accompanied by a transfer of angular momentum. The rotational properties of sdB stars therefore allow one to constrain possible formation scenarios.
\subsection{Uniform rotation of EHB stars and mass loss on the RGB}
The rotational properties of sdBs residing on the EHB are very similar to those of hot BHB stars. The only exception is that the EHB stars obviously lost more envelope in the red-giant phase and therefore retained less angular momentum. How the envelope is lost does not affect the rotational velocities of sdB stars, since the ${v_{\rm rot}\sin\,i}$-distribution of RV variable systems with orbital periods sufficiently long to neglect the tidal influence of the companion (Fig.~\ref{fig:distribrv}) is similar to those of apparently single sdB stars (Fig.~\ref{fig:distribsingle}) and for sdB stars with visible main sequence companions (Fig.~\ref{fig:distribcomp}).
The abundance patterns of sdBs are dominated by diffusion processes very similar to those of the hot BHB stars (Geier et al. \cite{geier10a}). No surface convection zone should be present, and according to the model of Brown (\cite{brown07}) the angular momentum of the outer layers should be low. Stellar winds and magnetic fields may help to slow down the upper layers of the star. However, Unglaub (\cite{unglaub08}) showed that the weak winds predicted for sdB stars are most likely fractionated and are therefore not able to carry away the most abundant elements hydrogen and helium.
Angular momentum gained or retained from the formation process may also be stored in the stellar core, which may be rapidly rotating. Kawaler \& Hostler (\cite{kawaler05}) proposed such a scenario and suggested an asteroseismic approach to probe the rotation of the inner regions of sdBs. Van Grootel et al. (\cite{vangrootel08}) and Charpinet et al. (\cite{charpinet08}) performed such an analysis for the two short-period sdB pulsators Feige\,48 and PG\,1336$-$018, respectively, and found no deviation from rigid rotation at least in the outer layers of these stars down to about half the stellar radius. But these results may not be representative, because both stars are in close binary systems and are synchronised by the tidal influence of their companions (Geier et al. \cite{geier10b}). The rigid body rotation may have been caused by this effect and may not be a general feature of sdBs. Another limitation of these analyses is that p-mode pulsations are not suited to probe the innermost regions of sdBs. In contrast, g-mode pulsations reach the stellar core and it should be possible to measure the rotational properties of the whole stellar interior with asteroseismic methods. With the availability of high-precision light curves from the Kepler and CoRoT missions, the analysis of g-mode pulsators became possible and first results have been published by van Grootel et al. (\cite{vangrootel10}) and Charpinet et al. (\cite{charpinet11b}).
For the RV variable systems CE ejection is the only feasible formation channel. The systems with visible companions may have lost their envelopes via stable RLOF. Very recently, \O stensen et al. (\cite{oestensen12}) and Deca et al. (\cite{deca12}) reported the discovery of sdB+MS binaries with orbital periods up to $\simeq1200\,{\rm d}$, which may have been sufficiently close for mass transfer.
However, the visible companions to the sdBs may still have been too widely separated to have interacted with the subdwarf progenitors. More detailed binary evolution calculations are needed to solve this problem. Common envelope ejection and stable RLOF form similar sdB stars, because in both cases the hydrogen envelope is removed and the helium burning should start under similar conditions. It would therefore not be surprising if their ${v_{\rm rot}\sin\,i}$-distributions were to look similar.
\subsection{Where are the He-WD merger products?}
The ${v_{\rm rot}\sin\,i}$-distribution of the single sdB stars (Fig.~\ref{fig:distribsingle}) is particularly hard to understand in the context of the WD merger scenario. If a certain fraction or even all of the apparently single sdBs had been formed in this way, one would not expect a ${v_{\rm rot}\sin\,i}$-distribution that resembles that of the post-CE or post-RLOF sdBs. Gourgouliatos \& Jeffery (\cite{gourgouliatos06}) showed that the merger product of two WDs would rotate faster than the break-up velocity if angular momentum were conserved. These authors concluded that angular momentum must be lost during the merger process. One way to lose angular momentum is through stellar winds and magnetic fields. Another explanation may be the interaction with the accretion disc during the merger. If the less massive object is disrupted, it should form an accretion disc around the more massive component. The WD can only gain mass if angular momentum is transported outward in the disc. This process is expected to spin down the merger product (Gourgouliatos \& Jeffery \cite{gourgouliatos06}). According to a model proposed by Podsiadlowski (priv. comm.), the merger is accompanied by a series of outbursts caused by the ignition of helium. These flashes remove angular momentum from the merged remnant and should slow it down to rotational velocities of less than $20\,{\rm km\,s^{-1}}$.
However, even if it is possible to slow down the merged remnant of two He-WDs, it is very unlikely that the merger pro\-ducts would have a ${v_{\rm rot}\sin{i}}$-distribution almost identical to sdBs, of which we know that they were formed via CE-ejection or maybe stable RLOF. This would require an extreme fine-tuning of parameters, unless there is an as yet unknown mechanism at work, which leads to uniform rotation of the radiative, diffusion-dominated atmospheres. It is therefore questionable whether our sample contains stars that were formed by an He-WD merger or a CE-merger event. If this is not the case and because of the size of our sample, it would be safe to conclude that the merger channel does not contribute significantly to the observed population of single hydrogen-rich sdO/Bs in contrast to the models of Han et al. (\cite{han02}, \cite{han03}).
This conclusion is consistent with the most recent results by Fontaine et al. (\cite{fontaine12}), who studied the empirical mass distribution of sdB stars derived from eclipsing binary systems and asteroseismic analyses. The lack of sdB stars more massive than $\simeq0.5\,M_{\odot}$, which would be the outcome of the merger channel, led to the conclusion that mergers are less frequent in the formation process of isolated sdB stars than predicted by theory.
The only known single and fast rotating hot subdwarf star EC\,22081$-$1916 (Geier et al. \cite{geier11a}) may be the rare outcome of a CE merger event as suggested by Politano et al. (\cite{politano08}). It is unique among $\simeq100$ sdBs of our sample.
Possible candidates for WD-merger products are the helium-rich sdOs (He-sdOs, Str\"oer et al. \cite{stroeer07}), since Hirsch et al. (\cite{hirsch09}) measured ${v_{\rm rot}\sin\,i}$ values of $20-30\,{\rm km\,s^{-1}}$ for some of those stars. Although their velocities are not particularly high, they are significantly different from the typical ${v_{\rm rot}\sin\,i}$ of sdBs. However, while the He-sdOs were first considered as single stars (Napiwotzki et al. \cite{napiwotzki08}), evidence grows that a fraction of them resides in close binaries (Green et al. \cite{green08}; Geier et al. \cite{geier11b}). At least those He-sdOs could not have been formed by a He-WD merger.
\subsection{Alternative formation scenarios}
Because the canonical binary scenario for sdB formation, which rests on the three pillars CE ejection, stable RLOF and He-WD merger, turned out to be very successful not only in explaining the properties of sdBs in the field (Han et al. \cite{han02}, \cite{han03}), but also in globular clusters (Han \cite{han08}) and the UV-upturn phenomenon in old galaxies (Han et al. \cite{han07}), the possible lack of merger candidates poses a problem.
Alternative formation scenarios such as CE ejection triggered by substellar companions (Soker \cite{soker98}; Bear \& Soker \cite{bear12}) may be responsible for the formation of apparently single sdBs. Evidence grows that such objects are quite common around sdB stars (e.g. Silvotti et al. \cite{silvotti07}; Geier et al. \cite{geier11c}; Charpinet et al. \cite{charpinet11a}). In the light of the results presented here and other recent observational evidence, the conclusion has to be drawn that the question of sdB formation is still far from settled.
\section{Connection to white dwarfs \label{sec:wd}}
Owing to their thin hydrogen envelopes, hot subdwarf stars will not evolve to the asymptotic giant branch (AGB-manqu\'e, Dorman et al. \cite{dorman93}). After about $100\,{\rm Myr}$ of core He-burning on the EHB and a shorter episode of He-shell burning, these objects will join the WD cooling sequence.
The rotational properties of single WDs are difficult to determine. Owing to the high pressure in the dense WD atmospheres, the spectral lines of WDs are strongly broadened and hence do not appear to be suitable to measure ${v_{\rm rot}\sin{i}}$. However, the H$\alpha$ line often displays a sharp line core, which is caused by NLTE effects. In a small fraction of the WD population metal lines are visible, but excellent high-resolution spectra are necessary to constrain the projected rotational velocity (Berger et al. \cite{berger05}).
The derived upper limits ($\simeq10-50\,{\rm km\,s^{-1}}$) are consistent with the much lower rotational velocities of pulsating WDs derived with asteroseismic methods ($\simeq0.2-3.5\,{\rm km\,s^{-1}}$, Kawaler \cite{kawaler03}). Most single WDs are therefore obviously rather slow rotators. The reason for this is most likely a significant loss of mass and angular momentum due to stellar winds and thermal pulses in the AGB-phase, as has been shown by Charpinet et al. (\cite{charpinet09}).
The properties of WDs evolved from sdB progenitors on the other hand should be very different. Since the hot subdwarfs bypass the AGB-phase, both their masses and their angular momenta are expected to remain more or less constant when evolving to become WDs.
The average mass of these sdB remnants ($\simeq0.47\,M_{\rm \odot}$) is expected to be significantly lower than the average mass of normal WDs ($\simeq0.6\,M_{\rm \odot}$). But more importantly, the rotational velocities of these WDs must be very high. We have shown that single sdBs have small, but still detectable ${v_{\rm rot}\sin{i}}$. Assuming rigid rotation and conservation of mass and angular momentum, the rotational velocity at the surface scales with the stellar radius. Because the radius decreases by a factor of about $10$, the rotational velocity should increase by a factor of about $100$. Assuming an average ${v_{\rm rot}\simeq8\,{\rm km\,s^{-1}}}$ for single sdBs, WDs evolved through an EHB-phase should therefore have an average ${v_{\rm rot}\simeq800\,{\rm km\,s^{-1}}}$. Because about $1\%$ of all WDs are expected to have evolved through an EHB-phase, we expect a similar fraction of extremely fast rotating, low-mass WDs. These high ${v_{\rm rot}\sin{i}}$-values should be easily detectable even in medium-resolution spectra. The sample of WDs with observed spectra from the Sloan Digital Sky Survey (Eisenstein et al. \cite{eisenstein06}) for example should contain more than $100$ of these objects.
\section{Summary \label{sec:summary}}
We extended a project to derive the rotational properties of sdB stars and determined the projected rotational velocities of 105 sdB stars by measuring the broadening of metal lines using high-resolution spectra. All stars in our sample have low ${v_{\rm rot}\sin{i}}<10\,{\rm km\,s^{-1}}$. For $\simeq75\%$ of the sample we were able to determine significant rotation. The distribution of projected rotational velocities is consistent with an average rotation of $\simeq8\,{\rm km\,s^{-1}}$ for the sample. Furthermore, the $v_{\rm rot}\sin{i}$-distributions of single sdBs, hot subdwarfs with main sequence companions vi\-sible in the spectra and close binary systems with periods exceeding $1.2\,{\rm d}$ are similar. The BHB and EHB stars are related in terms of surface rotation and angular momentum. Hot BHBs with diffusion-dominated atmospheres are slow rotators like the EHB stars, which lost more envelope and therefore angular momentum on the RGB. The uniform rotation distributions of single and wide binary sdBs pose a challenge to our understanding of hot subdwarf formation. Especially the high fraction of He-WD mergers predicted by theory seems to be inconsistent with our results. We predict that the evolutionary channel of single sdB stars gives birth to a small population of rapidly rotating WDs with masses lower than average.
\begin{table*}[t!]
\caption{Projected rotational velocities of single sdBs and sdBs with visible companions.}
\label{tab:vrot}
\begin{center}
\begin{tabular}{llllllll}
\hline
\noalign{\smallskip}
System & $T_{\rm eff}$ & $m_{B/V}$ & S/N & seeing & $N_{\rm lines}$ & ${v_{\rm rot}\,\sin\,i}$ & Instrument \\
& [K] & [mag] & & [arcsec] & & [${\rm km\,s^{-1}}$] \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
HE\,0151$-$3919 & 20\,800 & 14.3$^{\rm B}$ & 66 & 1.06 & 27 & $<5.0$ & UVES \\
EC\,21494$-$7018 & 22\,400 & 11.2$^{\rm V}$ & 85 & & 16 & 8.6 $\pm$ 1.8 & FEROS \\
EC\,15103$-$1557 & 22\,600 & 12.9$^{\rm V}$ & 163 & & 8 & 6.5 $\pm$ 1.6 & FEROS \\
HD\,4539 & 23\,000 & 10.1$^{\rm B}$ & 112 & & 21 & 3.9 $\pm$ 1.0 & FEROS \\
EC\,11349$-$2753 & 23\,000 & 12.5$^{\rm B}$ & 185 & & 49 & 4.7 $\pm$ 1.0 & FEROS \\
EC\,14345$-$1729 & 23\,300 & 13.1$^{\rm V}$ & 117 & & 40 & 6.2 $\pm$ 1.0 & FEROS \\
HE\,0539$-$4246 & 23\,300 & 14.5$^{\rm B}$ & 40 & 0.87 & 19 & $<10.0$ & UVES \\
HE\,2307$-$0340$^{\rm no}$ & 23\,300 & 15.8$^{\rm B}$ & 61 & 0.89 & 17 & $<5.0$ & UVES \\
PG\,1432$+$004$^{\rm nr}$ & 23\,600 & 12.0$^{\rm B}$ & 170 & & 13 & 4.7 $\pm$ 1.0 & FEROS \\
EC\,19563$-$7205$^{\rm c}$ & 23\,900 & 12.8$^{\rm B}$ & 85 & & 34 & 9.8 $\pm$ 1.0 & FEROS \\
EC\,20106$-$5248 & 24\,500 & 12.6$^{\rm V}$ & 114 & & 47 & 7.8 $\pm$ 1.0 & FEROS \\
BD$+$48$^{\circ}$\,2721 & 24\,800 & 10.5$^{\rm B}$ & 326 & & 10 & 4.7 $\pm$ 1.4 & FOCES \\
HD\,205805 & 25\,000 & 9.9$^{\rm B}$ & 255 & & 20 & 4.5 $\pm$ 1.0 & FEROS \\
HE\,0321$-$0918$^{\rm no}$ & 25\,100 & 14.7$^{\rm B}$ & 37 & 1.22 & 7 & 5.6 $\pm$ 2.3 & UVES \\
PG\,1653$+$131 & 25\,400 & 14.1$^{\rm B}$ & 68 & & 32 & 8.3 $\pm$ 1.0 & FEROS \\
HE\,2237$+$0150 & 25\,600 & 15.8$^{\rm B}$ & 40 & 0.78 & 11 & 8.5 $\pm$ 1.8 & UVES \\
PG\,0342$+$026 & 26\,000 & 11.1$^{\rm B}$ & 190 & & 54 & 6.2 $\pm$ 1.0 & FEROS \\
PG\,2122$+$157$^{\rm c}$ & 26\,000 & 15.0$^{\rm B}$ & 67 & 0.78 & 13 & 7.9 $\pm$ 1.4 & UVES \\
GD\,108 & 26\,100 & 13.3$^{\rm B}$ & 97 & & 6 & 6.0 $\pm$ 1.8 & FEROS \\
Feige\,65 & 26\,200 & 11.8$^{\rm B}$ & 150 & & 18 & 7.2 $\pm$ 1.1 & FOCES \\
PHL\,44$^{\rm l}$ & 26\,600 & 13.0$^{\rm B}$ & 85 & & 31 & 8.4 $\pm$ 1.0 & FEROS \\
HE\,0513$-$2354 & 26\,800 & 15.8$^{\rm B}$ & 21 & 0.99 & 18 & $<10.0$ & UVES \\
HE\,0135$-$6150 & 27\,000 & 16.3$^{\rm B}$ & 37 & 0.71 & 13 & 5.5 $\pm$ 1.7 & UVES \\
SB\,815 & 27\,000 & 10.6$^{\rm B}$ & 85 & & 48 & 7.3 $\pm$ 1.0 & FEROS \\
HE\,2201$-$0001 & 27\,100 & 16.0$^{\rm B}$ & 35 & 1.10 & 28 & $<5.0$ & UVES \\
PG\,2205$+$023 & 27\,100 & 12.9$^{\rm B}$ & 36 & & 9 & $<10.0$ & FEROS \\
PG\,2314$+$076$^{\rm nb}$ & 27\,200 & 13.9$^{\rm B}$ & 71 & & 6 & 6.0 $\pm$ 2.2 & FEROS \\
SB\,485 & 27\,700 & 13.0$^{\rm B}$ & 112 & 0.71 & 24 & 7.2 $\pm$ 1.0 & UVES \\
KUV\,01542$-$0710$^{\rm c}$ & 27\,800 & 16.3$^{\rm B}$ & 58 & 0.92 & 8 & 7.2 $\pm$ 2.1 & UVES \\
HE\,2156$-$3927$^{\rm c}$ & 28\,000 & 14.1$^{\rm B}$ & 62 & 0.61 & 16 & 7.0 $\pm$ 1.2 & UVES \\
EC\,03591$-$3232 & 28\,000 & 11.2$^{\rm V}$ & 131 & & 34 & 4.8 $\pm$ 1.0 & FEROS \\
EC\,12234$-$2607 & 28\,000 & 13.8$^{\rm B}$ & 60 & & 19 & 6.8 $\pm$ 1.4 & FEROS \\
PG\,2349$+$002 & 28\,000 & 12.0$^{\rm B}$ & 68 & & 11 & 5.7 $\pm$ 1.5 & FEROS \\
HE\,2322$-$0617$^{\rm c,no}$ & 28\,100 & 15.7$^{\rm B}$ & 62 & 0.70 & 15 & 6.8 $\pm$ 1.3 & UVES \\
PG\,0258$+$184$^{\rm c,no}$ & 28\,100 & 15.2$^{\rm B}$ & 48 & 0.99 & 12 & 7.2 $\pm$ 1.7 & UVES \\
HE\,0136$-$2758$^{\rm no}$ & 28\,200 & 16.2$^{\rm B}$ & 29 & 1.20 & 27 & $<5.0$ & UVES \\
HE\,0016$+$0044$^{\rm no}$ & 28\,300 & 13.1$^{\rm B}$ & 58 & 0.67 & 14 & 6.5 $\pm$ 1.3 & UVES \\
PG\,1549$-$001$^{\rm no}$ & 28\,300 & 14.8$^{\rm B}$ & 45 & 1.16 & 20 & 5.6 $\pm$ 1.1 & UVES \\
HE\,2349$-$3135 & 28\,500 & 15.6$^{\rm B}$ & 53 & 1.13 & 13 & 10.0 $\pm$ 1.7 & UVES \\
EC\,01120$-$5259 & 28\,900 & 13.5$^{\rm V}$ & 73 & & 19 & 5.8 $\pm$ 1.2 & FEROS \\
HE\,0007$-$2212$^{\rm no}$ & 29\,000 & 14.8$^{\rm B}$ & 53 & 0.64 & 21 & 7.4 $\pm$ 1.0 & UVES \\
LB\,275$^{*}$ & 29\,300 & 14.9$^{\rm B}$ & 48 & 1.16 & 20 & 5.6 $\pm$ 1.1 & UVES \\
EC\,03263$-$6403 & 29\,300 & 13.2$^{\rm V}$ & 32 & & 40 & $<5.0$ & FEROS \\
HE\,1254$-$1540$^{\rm c,no}$ & 29\,700 & 15.2$^{\rm B}$ & 54 & 0.75 & 20 & 7.2 $\pm$ 1.3 & UVES \\
PG\,1303$+$097 & 29\,800 & 14.3$^{\rm B}$ & 51 & & 18 & 6.1 $\pm$ 1.5 & FEROS \\
HE\,2222$-$3738 & 30\,200 & 14.2$^{\rm B}$ & 61 & 0.83 & 28 & 8.7 $\pm$ 1.0 & UVES \\
HE\,2238$-$1455 & 30\,400 & 16.0$^{\rm B}$ & 48 & 0.80 & 14 & $<5.0$ & UVES \\
EC\,03470$-$5039 & 30\,500 & 13.6$^{\rm V}$ & 53 & & 9 & 7.3 $\pm$ 2.0 & FEROS \\
Feige\,38 & 30\,600 & 12.8$^{\rm B}$ & 148 & & 34 & 5.3 $\pm$ 1.0 & FEROS \\
HE\,1038$-$2326$^{\rm c}$ & 30\,600 & 15.8$^{\rm B}$ & 34 & 1.27 & 28 & $<5.0$ & UVES \\
PG\,1710$+$490 & 30\,600 & 12.1$^{\rm B}$ & 80 & & 11 & 7.1 $\pm$ 1.6 & FOCES \\
HE\,0447$-$3654 & 30\,700 & 14.6$^{\rm V}$ & 44 & & 11 & 7.3 $\pm$ 1.8 & FEROS \\
EC\,14248$-$2647 & 31\,400 & 12.0$^{\rm V}$ & 104 & & 14 & 7.0 $\pm$ 1.5 & FEROS \\
HE\,0207$+$0030$^{\rm no}$ & 31\,400 & 14.7$^{\rm B}$ & 27 & 1.30 & 7 & 5.1 $\pm$ 2.3 & UVES \\
KPD\,2109$+$4401$^{\rm s}$ & 31\,800 & 13.2$^{\rm B}$ & 136 & & 9 & 10.5 $\pm$ 1.6 & HIRES \\
EC\,02542$-$3019 & 31\,900 & 12.8$^{\rm B}$ & 65 & & 13 & 7.3 $\pm$ 1.5 & FEROS \\
$[$CW83$]$\,1758$+$36$^{\rm nb}$ & 32\,000 & 11.1$^{\rm B}$ & 110 & & 5 & 5.7 $\pm$ 1.4 & FOCES \\
TON\,S\,155$^{\rm c}$ & 32\,300 & 14.9$^{\rm B}$ & 35 & 0.85 & 14 & $<5.0$ & UVES \\
EC\,21043$-$4017 & 32\,400 & 13.1$^{\rm V}$ & 65 & & 8 & 5.6 $\pm$ 1.8 & FEROS \\
EC\,20229$-$3716 & 32\,500 & 11.4$^{\rm V}$ & 153 & & 29 & 4.5 $\pm$ 1.0 & FEROS \\
HS\,2125$+$1105$^{\rm c}$ & 32\,500 & 16.4$^{\rm B}$ & 29 & 0.80 & 8 & 6.0 $\pm$ 2.4 & UVES \\
HE\,1221$-$2618$^{\rm c}$ & 32\,600 & 14.9$^{\rm B}$ & 35 & 1.06 & 11 & 6.8 $\pm$ 1.6 & UVES \\
HS\,2033$+$0821$^{\rm no}$ & 32\,700 & 14.4$^{\rm B}$ & 43 & 1.14 & 37 & $<5.0$ & UVES \\
HE\,0415$-$2417$^{\rm no}$ & 32\,800 & 16.2$^{\rm B}$ & 34 & 0.83 & 10 & $<10.0$ & UVES \\
EC\,05479$-$5818 & 33\,000 & 13.1$^{\rm V}$ & 81 & & 20 & 5.8 $\pm$ 1.1 & FEROS \\
HE\,1200$-$0931$^{\rm c,no}$ & 33\,400 & 16.2$^{\rm B}$ & 30 & 0.86 & 12 & $<5.0$ & UVES \\
\hline
\\
\end{tabular}
\end{center}
\end{table*}
\begin{table*}[t!]
\begin{center}
\begin{tabular}{llllllll}
\hline
\noalign{\smallskip}
System & $T_{\rm eff}$ & $m_{B}$ & S/N & seeing & $N_{\rm lines}$ & ${v_{\rm rot}\,\sin\,i}$ & Instrument \\
& [K] & [mag] & & [arcsec] & & [${\rm km\,s^{-1}}$] \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
PHL\,932 & 33\,600 & 12.0$^{\rm B}$ & 102 & 1.10 & 12 & 9.0 $\pm$ 1.3 & UVES \\
HE\,1422$-$1851$^{\rm c,no}$ & 33\,900 & 16.3$^{\rm B}$ & 14 & 0.58 & 10 & $<10.0$ & UVES \\
PHL\,555 & 34\,100 & 13.8$^{\rm B}$ & 56 & 0.88 & 17 & 6.9 $\pm$ 1.2 & UVES \\
HE\,1419$-$1205$^{\rm c}$ & 34\,200 & 16.2$^{\rm B}$ & 28 & 0.69 & 16 & $<10.0$ & UVES \\
PG\,1219$+$534$^{\rm s}$ & 34\,300 & 12.4$^{\rm B}$ & 140 & & 11 & 5.7 $\pm$ 1.4 & HIRES \\
HS\,2216$+$1833$^{\rm c}$ & 34\,400 & 13.8$^{\rm B}$ & 54 & 0.90 & 11 & 5.3 $\pm$ 1.6 & UVES \\
HE\,1050$-$0630$^{\rm no}$ & 34\,500 & 14.0$^{\rm B}$ & 59 & 1.20 & 28 & 7.3 $\pm$ 1.4 & UVES \\
HE\,1519$-$0708$^{\rm no}$ & 34\,500 & 15.6$^{\rm B}$ & 20 & 0.84 & 8 & 9.0 $\pm$ 2.4 & UVES \\
HE\,1450$-$0957 & 34\,600 & 15.1$^{\rm B}$ & 32 & 0.71 & 6 & 9.0 $\pm$ 2.4 & UVES \\
EC\,13047$-$3049 & 34\,700 & 12.8$^{\rm V}$ & 68 & & 5 & 6.8 $\pm$ 3.6 & FEROS \\
HS\,1710$+$1614$^{\rm no}$ & 34\,800 & 15.7$^{\rm B}$ & 38 & 1.30 & 13 & $<5.0$ & UVES \\
PHL\,334 & 34\,800 & 12.5$^{\rm B}$ & 87 & & 13 & $<5.0$ & FEROS \\
Feige\,49 & 35\,000 & 13.2$^{\rm B}$ & 119 & & 40 & 6.2 $\pm$ 1.0 & FEROS \\
HE\,2151$-$1001$^{\rm s}$ & 35\,000 & 15.6$^{\rm B}$ & 42 & 0.66 & 6 & 6.7 $\pm$ 2.4 & UVES \\
PG\,0909$+$164$^{\rm s}$ & 35\,300 & 13.9$^{\rm B}$ & 52 & & 4 & $<10.0$ & FEROS \\
HE\,1021$-$0255$^{\rm no}$ & 35\,500 & 15.3$^{\rm B}$ & 40 & 1.61 & 11 & $<10.0$ & UVES \\
PG\,0909$+$276$^{\rm nb}$ & 35\,500 & 13.9$^{\rm B}$ & 82 & & 13 & 9.3 $\pm$ 1.4 & FOCES \\
HE\,0101$-$2707 & 35\,600 & 15.0$^{\rm B}$ & 67 & 0.85 & 12 & 8.1 $\pm$ 1.5 & UVES \\
EC\,03408$-$1315 & 35\,700 & 13.6$^{\rm V}$ & 66 & & 11 & 8.8 $\pm$ 1.8 & FEROS \\
HE\,1352$-$1827$^{\rm c}$ & 35\,700 & 16.2$^{\rm B}$ & 24 & 0.85 & 5 & 8.2 $\pm$ 2.7 & UVES \\
PG\,1207$-$032$^{\rm no}$ & 35\,700 & 13.1$^{\rm B}$ & 50 & 0.64 & 9 & 6.6 $\pm$ 1.6 & UVES \\
HE\,0019$-$5545 & 35\,700 & 15.8$^{\rm B}$ & 38 & 0.76 & 7 & 5.9 $\pm$ 2.3 & UVES \\
GD\,619 & 36\,100 & 13.9$^{\rm B}$ & 96 & 0.81 & 10 & 6.1 $\pm$ 1.5 & UVES \\
HE\,1441$-$0558$^{\rm c,no}$ & 36\,400 & 14.4$^{\rm B}$ & 30 & 0.70 & 8 & 6.9 $\pm$ 2.0 & UVES \\
HE\,0123$-$3330 & 36\,600 & 15.2$^{\rm B}$ & 48 & 0.66 & 8 & 6.9 $\pm$ 1.8 & UVES \\
PG\,1505$+$074 & 37\,100 & 12.2$^{\rm B}$ & 153 & & 4 & $<5.0$ & FEROS \\
HE\,1407$+$0033$^{\rm no}$ & 37\,300 & 15.5$^{\rm B}$ & 35 & 0.72 & 9 & $<10.0$ & UVES \\
PG\,1616$+$144$^{\rm nb}$ & 37\,300 & 13.5$^{\rm B}$ & 44 & & 4 & $<10.0$ & FEROS \\
EC\,00042$-$2737$^{\rm c}$ & 37\,500 & 13.9$^{\rm B}$ & 37 & & 9 & $<10.0$ & FEROS \\
PHL\,1548 & 37\,400 & 12.5$^{\rm B}$ & 90 & & 10 & 9.1 $\pm$ 1.6 & FEROS \\
PB\,5333$^{\rm nb}$ & 40\,600 & 12.5$^{\rm B}$ & 66 & & 2 & $<10.0$ & FEROS \\
$[$CW83$]$\,0512$-$08 & 38\,400 & 11.3$^{\rm B}$ & 124 & & 14 & 7.7 $\pm$ 1.1 & FEROS \\
\hline
\\
\end{tabular}
\tablefoot{The average seeing is only given if the spectra were obtained with a wide
slit in the course of the SPY survey. In all other cases the seeing should not
influence the measurements. $^{\rm c}$Main sequence companion visible in the spectrum (Lisker et al. \cite{lisker05}). $^{\rm s}$Pulsating subdwarf of V\,361\,Hya type. $^{\rm l}$Pulsating subdwarf of V\,1093\,Her type. No short-period pulsations have been detected either by $^{\rm nb}$Bill\`{e}res et al. (\cite{billeres02}), $^{\rm nr}$Randall et al. (\cite{randall06}) or $^{\rm no}$\O stensen et al. (\cite{oestensen10}). $^{*}$Misidentified as CBS\,275 in Lisker et al. (\cite{lisker05}).}
\end{center}
\end{table*}
\begin{table*}[t!]
\caption{Projected rotational velocities of radial velocity variable sdBs.}
\label{tab:vrotrv}
\begin{center}
\begin{tabular}{lllllll}
\hline
\noalign{\smallskip}
System & $T_{\rm eff}$ & $m_{B/V}$ & S/N & $N_{\rm lines}$ & ${v_{\rm rot}\,\sin\,i}$ & Instrument\\
& [K] & [mag] & & & [${\rm km\,s^{-1}}$] & \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
TON\,S\,135 & 25\,000 & 13.1$^{\rm B}$ & 45 & 35 & 6.4 $\pm$ 1.0 & FEROS \\
LB\,1516$^{\rm l}$ & 25\,200 & 12.7$^{\rm B}$ & 58 & 23 & 6.0 $\pm$ 1.3 & FEROS \\
PHL\,457$^{\rm l}$ & 26\,500 & 13.0$^{\rm B}$ & 59 & 47 & 6.1 $\pm$ 1.0 & FEROS \\
EC\,14338$-$1445 & 27\,700 & 13.5$^{\rm V}$ & 71 & 39 & 8.9 $\pm$ 1.0 & FEROS \\
PG\,1725$+$252 & 28\,900 & 11.5$^{\rm B}$ & 45 & 11 & 7.4 $\pm$ 1.1 & HRS \\
PG\,1519$+$640 & 30\,300 & 12.1$^{\rm B}$ & 104 & 11 & 9.4 $\pm$ 1.4 & FOCES \\
PG\,2151$+$100 & 32\,700 & 12.9$^{\rm B}$ & 69 & 9 & 9.0 $\pm$ 1.7 & FEROS \\
\hline
\\
\end{tabular}
\tablefoot{$^{\rm l}$Pulsating subdwarf of V\,1093\,Her type.}
\end{center}
\end{table*}
\begin{table*}[t!]
\caption{Comparison with literature.}
\label{tab:vrotlit}
\begin{center}
\begin{tabular}{lrrl}
\hline
\noalign{\smallskip}
System & This work & Literature & Reference \\
& ${v_{\rm rot}\,\sin\,i}$ & ${v_{\rm rot}\,\sin\,i}$ & \\
& [${\rm km\,s^{-1}}$] & [${\rm km\,s^{-1}}$] & \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
KPD\,2109$+$4401 & $10.5\pm1.6$ & $<10.0$ & Heber \\
PG\,1219$+$534 & $5.7\pm1.4$ & $<10.0$ & et al. (\cite{heber00}) \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
BD$+$48$^{\circ}$\,2721 & $4.7\pm1.4$ & $<5.0$ & Edelmann \\
Feige\,65 & $7.2\pm1.1$ & $<5.0$ & et al. (\cite{edelmann01}) \\
HD\,205805 & $4.5\pm1.0$ & $<5.0$ & \\
HD\,4539 & $3.9\pm1.0$ & $<5.0$ & \\
LB\,1516 & $6.0\pm1.3$ & $<5.0$ & \\
PG\,0342$+$026 & $6.2\pm1.0$ & $<5.0$ & \\
PG\,0909$+$276 & $9.3\pm1.4$ & $<5.0$ & \\
PHL\,932 & $9.0\pm1.3$ & $<5.0$ & \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
Feige\,49 & $6.2\pm1.0$ & $0.0^{*}$ & Przybilla \\
HD\,205805 & $4.5\pm1.0$ & $0.0^{*}$ & et al. (\cite{przybilla06}) \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\end{tabular}
\tablefoot{$^{*}$Adopted value for line fits is below the detection limit.}
\end{center}
\end{table*}
\begin{acknowledgements}
S. G. was supported by the Deutsche Forschungsgemeinschaft under grant
He~1354/49-1. The authors thank N. Reid, R. Napiwotzki, L. Morales-Rueda and H. Edelmann for providing their data.
Furthermore, we would like to thank the referee G. Fontaine for his comments and suggestions.
\end{acknowledgements}
\section{Introduction}\label{Introduction}
Wireless communication systems have now become ubiquitous and constitute a key component of the fabric of modern day life.
However, the inherent \textit{openness} of the wireless medium makes it susceptible to adversarial attacks.
The vulnerabilities of the wireless system can be largely classified based on the capability of an adversary--
a) \textit{Eavesdropping attack}, in which the eavesdropper (passive adversary) can listen to the wireless channel and try to infer information (which if leaked may severely compromise data integrity).
The study of information theoretic security (or communication in presence of eavesdropping attacks) was initiated by Wyner \cite{WynerWiretap}, Csisz\'{a}r and K\"{o}rner \cite{CsiszarKorner}. Recently, there has been a resurgent interest in extending these results to multi-user scenarios. We refer the reader to a comprehensive tutorial \cite{ITsecurity} on this topic and the references therein.
b) \textit{Jamming attack}, in which the jammer (active adversary) can transmit information in order to disrupt reliable data transmission or reception. While there has been some work in studying the impact of jamming on the capacity of point-to-point channels (such as \cite{Basar-1983, MedardJamming, Kashyap-IT-2004}), the literature on information theoretic analysis of jamming attacks (and associated countermeasures) for multi-user channels is relatively sparse in comparison to the case of eavesdropping attacks.
In this paper, we focus on a class of \textit{time-varying} jamming attacks over a fast fading multi-user multiple-input single-output (MISO) broadcast channel (BC), in which a transmitter equipped with $K$ transmit antennas intends to send independent messages to $K$ single antenna receivers.
While several jamming scenarios are plausible, we initiate the study of jamming attacks by focusing on a simple yet harmful jammer. In particular, we consider a jammer equipped with $K$ transmit antennas that, at any given time instant, has the capability of jamming a subset of the receivers.
We consider a scenario in which the jammer's strategy at any given time is random, i.e., the subset of receivers to be jammed is probabilistically selected. Furthermore, the jamming strategy varies in an independent and identically distributed (i.i.d.) manner across time\footnote{While we realize that perhaps more sophisticated jamming scenarios may arise in practice, as a first step, it is important to understand i.i.d.\ jamming scenarios before studying the impact of more complicated attacks (such as time/signal-correlated jamming, on-off jamming, etc.). Even in the i.i.d.\ jamming scenarios, interesting and non-trivial problems arise that we address in this paper in the context of broadcast channels.}. Such random, time-varying jamming attacks may be inflicted either intentionally by an adversary or unintentionally, in different scenarios. We next highlight some plausible scenarios in which such random time-varying jamming attacks could arise.
A resource-constrained jammer that intentionally jams the receivers may conserve power by selectively jamming a subset (or none) of the receivers based on its available resources. Such a jammer can also choose to jam the receivers when it has information about channel sounding procedures (i.e., when such a procedure occurs) and disrupt the communication only during those specific time instants.
Interference from neighboring cells in a cellular system can act as a bottleneck to improving spectral efficiency and can be particularly harmful for cell-edge users. The interference seen from adjacent cells in such scenarios can be time-varying, depending on whether the neighboring cells are transmitting on the same frequency (which can change with time) and on the spatial separation of the users from the interfering cells.
A frequency-selective jammer can disrupt communication on certain frequencies (carriers) in multi-carrier (for instance, OFDM-based) systems. A jammer that has knowledge of pilot-signal-based synchronization procedures can jam only those subcarriers that carry the pilot symbols in order to disrupt the synchronization procedure of the multi-carrier system \cite{Clancy}. Our analysis in this paper suggests that the transmitter and receivers, based on the knowledge of the jammer's strategy, can reduce the effects of these jamming attacks by coding/transmitting across various jamming states (a jamming state here can be interpreted as the subset of frequencies/subcarriers that are jammed at a given time instant).
Interestingly, the MISO BC with a time-varying jamming attack can also be interpreted as a network with a time-varying topology. The concept of topological interference alignment has recently been introduced in \cite{JafarTopological} (also see \cite{JafarTopological_ISIT}, \cite{AvestimehrTopological}) to understand the effects of a time-varying topology on interference mitigation techniques such as interference alignment. In \cite{JafarTopological_ISIT}, the authors characterize the $\mathsf{DoF}$ by studying the interference management problem in such networks using a 1-bit delay-less feedback (obtained from the receivers) indicating the presence or absence of an interference link. The connection between the jamming attacks considered in this paper and time-varying network topologies can be noted by observing the following: if at a given time a receiver is jammed, then its received signal is completely drowned in the jamming signal (assuming jamming power as high as the desired signal power), which is analogous to the channel (or link) to the jammed receiver being wiped out. For instance, in a $3$-user MISO BC with a time-varying jamming attack, a total of $2^{3}=8$ \textit{topologies} could arise over time (see Figure \ref{Fig:FigureModel}): none of the receivers are jammed (one topology), all receivers are jammed (one topology), only one out of the three receivers is jammed (three topologies), or only two out of the three receivers are jammed (three topologies). Interestingly, the \textit{retroactive anti-jamming} techniques presented in this paper are philosophically related to topological interference alignment with alternating connectivity \cite{JafarTopological_ISIT}. The common theme that emerges is that it is necessary to \textit{code across} multiple jamming states (equivalently, topologies as in \cite{JafarTopological_ISIT}) in order to achieve the optimal performance, which is measured in terms of degrees of freedom (capacity at high SNR).
The model considered in the paper also bears similarities to the broadcast erasure channels studied in \cite{ChihChunWang2012}, \cite{Erasure}, etc. The presence of a jamming signal ($J$) at a receiver implies that the information-bearing signal ($X$) is unrecoverable from the received signal ($Y= X+ J+ N$) in the context of degrees of freedom (since the pre-log of the mutual information between $X$ and $Y$ would be zero as both signal and jamming powers become large). Hence, the presence of a jammer can be interpreted as an ``erasure''. In the absence of a jammer (or no ``erasure''), the signal $X$ can be recovered from $Y=X+N$ within noise distortion.
We study the impact of such random time-varying jamming attacks on the degrees-of-freedom (henceforth referred to as $\mathsf{DoF}$) region of the MISO BC. The $\mathsf{DoF}$ of a network can be regarded as an approximation of its capacity at high SNR and is also referred to as the pre-log of capacity. Even in the absence of a jammer, it is well known that the $\mathsf{DoF}$ is crucially dependent on the availability of channel state information at the transmitter ($\mathsf{CSIT}$). The $\mathsf{DoF}$ region of the MISO BC has been studied under a variety of assumptions on the availability of $\mathsf{CSIT}$, including full (perfect and instantaneous) $\mathsf{CSIT}$ \cite{MIMOBC}, no $\mathsf{CSIT}$ \cite{CaireShamai, Huang}, delayed $\mathsf{CSIT}$ \cite{MAT2012, VV:DCSI-BC}, compound $\mathsf{CSIT}$ \cite{Weingarten_Shamai_Kramer}, quantized $\mathsf{CSIT}$ \cite{Jindal_BCFB}, mixed (perfect delayed and partial instantaneous) $\mathsf{CSIT}$ \cite{JafarTCBC} and asymmetric $\mathsf{CSIT}$ (perfect $\mathsf{CSIT}$ for one user, delayed $\mathsf{CSIT}$ for the other) \cite{Jafar_corr}. To note the dependence of $\mathsf{DoF}$ on $\mathsf{CSIT}$, we remark that a sum $\mathsf{DoF}$ of $2$ is achieved in the $2$-user MISO BC when perfect $\mathsf{CSIT}$ is available \cite{MIMOBC}, while it reduces to $1$ (with statistically equivalent receivers) when no $\mathsf{CSIT}$ is available \cite{Huang}. Interestingly, it is shown in \cite{MAT2012} that completely outdated $\mathsf{CSIT}$ in a fast-fading channel is still useful and helps increase the $\mathsf{DoF}$ from $1$ to $\frac{4}{3}$. Interesting extensions to the $K$-user case with delayed $\mathsf{CSIT}$ are also presented in \cite{MAT2012}.
In this paper, we denote the availability of $\mathsf{CSIT}$ (by $\mathsf{CSI}$, we refer to the channel between the transmitter and the receivers; we \emph{do not} assume knowledge of the jammer's channel at the transmitter or the receivers) through a variable $I_{\mathsf{CSIT}}$, which can take the values $\mathsf{P}$, $\mathsf{D}$ or $\mathsf{N}$: the state $I_{\mathsf{CSIT}}=\mathsf{P}$ indicates that the transmitter has perfect and instantaneous channel state information at time $t$; the state $I_{\mathsf{CSIT}}=\mathsf{D}$ indicates that the transmitter has perfect but delayed channel state information (i.e., it has knowledge of the channel realizations of time instants $\{1,2,\ldots, t-1\}$ at time $t$); and the state $I_{\mathsf{CSIT}}=\mathsf{N}$ indicates that the transmitter has no channel state information.
\begin{figure}[t]
\centering
\includegraphics[width=12.0cm]{JammingIllustration_3User.pdf}
\vspace{-0.2in}
\caption{Possible jamming scenarios in a $3$-user MISO broadcast channel.}
\label{Fig:FigureModel}
\end{figure}
As mentioned above, the impact of $\mathsf{CSIT}$ on the $\mathsf{DoF}$ of MISO broadcast channels has been explored for scenarios in which there is no adversarial time-varying interference. The novelty of this work is twofold: a) incorporating adversarial time-varying interference, and b) studying the \textit{joint} impact of $\mathsf{CSIT}$ and the knowledge about the absence/presence of interference at the transmitter (termed $\mathsf{JSIT}$).
As we show in this paper, in the presence of a time-varying jammer, not only the $\mathsf{CSIT}$ availability but also the knowledge of the jammer's strategy significantly impacts the $\mathsf{DoF}$.
Indeed, if the transmitter is non-causally aware of the jamming strategy at time $t$, i.e., if it knows \textit{which} receiver (or receivers) is going to be disrupted at time $t$, the transmitter can utilize this knowledge and adapt its transmission strategy by either transmitting to a subset of receivers simultaneously (if only a subset of them is jammed/not jammed) or conserving energy by not transmitting (if all the receivers are jammed).
However, such adaptation may not be feasible if there is a delay in learning the jammer's strategy. Feedback delays could arise in practice, as the detection of a jamming signal would be done at the receiver (for instance, via a binary hypothesis test \cite{KayDetection} in which the receiver could use energy detection to validate the presence/absence of a jammer in its vicinity). This binary decision could be subsequently fed back to the transmitter. In the presence of feedback delays, the standard approach would be to exploit the time correlation in the jammer's strategy to predict the current jammer's strategy from the delayed measurements.
The predicted jammer state could then be used in place of the true jammer state. However, if the jammer's strategy is completely uncorrelated across time (which is the case if the jammer's strategy is i.i.d.), delayed feedback reveals no information about the current state, and a predict-then-adapt scheme offers no advantage. A third, and perhaps worst-case, scenario could also arise in which the transmitter only has statistical knowledge of the jammer's strategy. This could be the case when the feedback links are unreliable or if the feedback links themselves are susceptible to jamming attacks, i.e., the outputs of the feedback links are untrustworthy.
To take all such plausible scenarios into account, we formally model the jamming strategy via an independent and identically distributed (i.i.d.) random variable $S(t)= (S_1(t), S_{2}(t),\ldots,S_K(t))$, which we call the jammer state information $(\mathsf{JSI})$ at time $t$. Note that in the context of this paper, the jammer state only indicates knowledge about the jammer's strategy (i.e., which receivers are jammed) and \emph{not} the channel between the jammer and the receivers. At time $t$, if the $k$th component of $S(t)$ satisfies $S_k(t)=1$, receiver $k$ is being jammed, while $S_{k}(t)=0$ indicates that receiver $k$ receives a jamming-free signal. We denote the availability of jammer state information at the transmitter ($\mathsf{JSIT}$) through a variable $I_{\mathsf{JSIT}}$, which (similar to $I_{\mathsf{CSIT}}$) can take the values $\mathsf{P}$, $\mathsf{D}$ or $\mathsf{N}$: the state $I_{\mathsf{JSIT}}=\mathsf{P}$ indicates that the transmitter has perfect and instantaneous jammer state information $(S_1(t), S_2(t),\ldots,S_K(t))$ at time $t$; the state $I_{\mathsf{JSIT}}=\mathsf{D}$ indicates that the transmitter has delayed jammer state information (i.e., it has access to $\{S_1(i), S_{2}(i),\ldots,S_K(i)\}_{i=1}^{t-1}$ at time $t$); and the state $I_{\mathsf{JSIT}}=\mathsf{N}$ indicates that the transmitter does not have the exact realization of $S(t)$ at its disposal. In all configurations above, it is assumed that the transmitter knows the statistics of $S(t)$.
\emph{Summary of Main Results:}
Depending on the \textit{joint} availability of channel state information ($\mathsf{CSIT}$) and jammer state information ($\mathsf{JSIT}$) at the transmitter, the variable $I_{\mathsf{CSIT}}I_{\mathsf{JSIT}}$ can take $9$ values and hence a total of $9$ distinct scenarios can arise: $\mathsf{PP}$, $\mathsf{PD}$, $\mathsf{PN}$, $\mathsf{DP}$, $\mathsf{DD}$, $\mathsf{DN}$, $\mathsf{NP}$, $\mathsf{ND}$, and $\mathsf{NN}$. The main contributions of this paper are the following.
\begin{enumerate}
\item For the $2$-user scenario, we characterize the exact $\mathsf{DoF}$ region for the $\mathsf{PP}$, $\mathsf{PD}$, $\mathsf{PN}$, $\mathsf{DP}$, $\mathsf{DD}$, $\mathsf{NP}$ and $\mathsf{NN}$ configurations.
\item For the $\mathsf{DN}$ and $\mathsf{ND}$ configurations in a $2$-user MISO BC, we present novel inner bounds to the $\mathsf{DoF}$ regions.
\item The interplay between $\mathsf{CSIT}$ and $\mathsf{JSIT}$ and the associated impact on the $\mathsf{DoF}$ region in the various configurations is discussed. Specifically, the gain in $\mathsf{DoF}$ obtained by transmitting across various jamming states and the loss in $\mathsf{DoF}$ due to the unavailability of $\mathsf{CSI}$ or $\mathsf{JSI}$ at the transmitter are quantified via the achievable sum $\mathsf{DoF}$.
\item We extend the analysis of the $2$-user MISO BC to a generic $K$-user MISO BC subject to such random time-varying jamming attacks.
The $\mathsf{DoF}$ region is completely characterized for the $\mathsf{PP}$, $\mathsf{PD}$, $\mathsf{PN}$, $\mathsf{NP}$ and $\mathsf{NN}$ configurations.
Further, novel inner bounds are presented for the sum $\mathsf{DoF}$ in $\mathsf{DP}$ and $\mathsf{DD}$ configurations.
These bounds provide insights on the scaling of sum $\mathsf{DoF}$ with the number of receivers $K$.
\end{enumerate}
The remaining parts of the paper are organized as follows. The system model is introduced in Section~\ref{system_model}.
The main contributions of the paper, i.e., the theorems describing the $\mathsf{DoF}$ regions in the various ($\mathsf{CSIT}$, $\mathsf{JSIT}$) configurations for the $2$-user and $K$-user MISO BC, are presented in Sections~\ref{Theorems} and \ref{TheoremsKuser}, respectively, and the corresponding converse proofs are given in the Appendix. The coding (transmission) schemes achieving the optimal $\mathsf{DoF}$ regions are described in Sections~\ref{Schemes} and \ref{TheoremsKuser}. Finally, conclusions are drawn in Section~\ref{Conclusions}.
\section{System Model}\label{system_model}
A $K$-user MISO broadcast channel with $K$ transmit antennas and $K$ single-antenna receivers is considered in the presence of a random, time-varying jammer. The system model for the $K=2$ user case is shown in Fig.~\ref{Fig:sys_model}.
\begin{figure}[t]
\centering
\includegraphics[width=12.0cm]{System_Model_2.pdf}
\vspace{-0.5in}
\caption{System Model for a $2$-user scenario.}
\label{Fig:sys_model}
\end{figure}
The channel output at receiver $k$, for $k=1,2,\ldots,K$, at time $t$ is given as:
\begin{align}\label{system_model_eq}
Y_{k}(t)&= \mathbf{H}_{k}(t)\mathbf{X}(t) + S_k(t)\mathbf{G}_{k}(t)\mathbf{J}(t) + N_{k}(t),
\end{align}
where $\mathbf{X}(t)$ is the $K \times 1$ channel input vector at time $t$ with
\begin{align}
E\left(\|\mathbf{X}(t)\|^2\right)\leq P_T,
\end{align}
and $P_T$ is the power constraint on $\mathbf{X}(t)$. In \eqref{system_model_eq}, $\mathbf{H}_{k}(t)=[h_{1k}(t),h_{2k}(t),\ldots,h_{Kk}(t)]$ is the $1\times K$ channel vector from the transmitter to the $k$th receiver at time $t$, $\mathbf{G}_{k}(t)$ is the $1\times K$ channel response from the jammer to receiver $k$ at time $t$, and $\mathbf{J}(t)$ is the $K\times 1$ jammer's channel input at time $t$ (a worst-case scenario in which the jammer has $K$ degrees-of-freedom to disrupt all $K$ parallel streams of data from the transmitter to the $K$ receivers). Without loss of generality, the channel vectors $\mathbf{H}_k(t)$ and $\mathbf{G}_k(t)$ are assumed to be sampled from any continuous distribution (for instance, Rayleigh) with an identity covariance matrix, and are i.i.d.\ across time. The additive noise $N_{k}(t)$ is distributed according to $\mathcal{CN}(0,1)$ for $k=1,\ldots,K$ and is assumed to be independent of all other random variables. The random variable $S(t)= \{S_1(t), S_2(t),\ldots,S_K(t)\}$ that denotes the jammer state information $\mathsf{JSI}$ at time $t$ is a $2^K$-valued i.i.d.\ random variable.
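For concreteness, the following minimal Python sketch (ours, with illustrative parameters rather than anything prescribed by the model) simulates one time slot of the channel output in \eqref{system_model_eq}; the jamming term enters receiver $k$'s output only when $S_k(t)=1$.

\begin{verbatim}
# A minimal simulation (ours) of one slot of the received signal model:
# Y_k = H_k X + S_k G_k J + N_k. All parameter choices are illustrative.
import numpy as np

rng = np.random.default_rng(0)
K = 2
X = rng.standard_normal(K) + 1j * rng.standard_normal(K)   # transmit vector
J = rng.standard_normal(K) + 1j * rng.standard_normal(K)   # jammer's input
S = np.array([1, 0])    # example state: receiver 1 jammed, receiver 2 clean

Y = []
for k in range(K):
    H_k = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
    G_k = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
    N_k = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
    Y.append(H_k @ X + S[k] * (G_k @ J) + N_k)  # jamming only if S[k] == 1
\end{verbatim}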
For example, in the $3$-user MISO BC, the $\mathsf{JSI}$ $S(t)$ is an $8$-ary-valued random variable taking values $\{000,001,010,011,100,101,110,111\}$ with probabilities $\{\lambda_{000},\lambda_{001},\lambda_{010},\lambda_{011},\lambda_{100},\lambda_{101},\lambda_{110},\lambda_{111}\}$ respectively, for arbitrary $\{\lambda_{ijk}\geq 0\}_{i,j,k=0,0,0}^{1,1,1}$ such that $\sum_{i,j,k}\lambda_{ijk}=1$.
The jammer state $S(t)$ at time $t$ can be interpreted as follows:
\begin{itemize}
\item $S(t)=\left(0,0,0\right)$ : none of the receivers are jammed. This occurs with probability $\lambda_{000}$.
\item $S(t)=\{\left(1,0,0\right)/\left(0,1,0\right)/\left(0,0,1\right)\}$ : only one receiver is jammed. This scenario occurs with probability $\lambda_{100}/\lambda_{010}/\lambda_{001}$ respectively. $S(t)=\left(1,0,0\right)$ indicates that the $1$st receiver is jammed while the receivers $2$ and $3$ are not jammed.
\item $S(t)=\{\left(1,1,0\right)/\left(1,0,1\right)/\left(0,1,1\right)\}$ : any two out of the three receivers are jammed. This happens with probability $\lambda_{110}/\lambda_{101}/\lambda_{011}$ respectively.
\item $S(t)=\left(1,1,1\right)$ : all the receivers are jammed with probability $\lambda_{111}$.
\end{itemize}
Using the probability vector $\{\lambda_{000}, \lambda_{001}, \lambda_{010}, \lambda_{100}, \lambda_{011}, \lambda_{110}, \lambda_{101}, \lambda_{111} \}$,
we define the marginal probabilities
\begin{align}\label{lambda_receiver_1}
\lambda_1&=\lambda_{000}+\lambda_{001}+\lambda_{010}+\lambda_{011}, \nonumber \\
\lambda_2&=\lambda_{000}+\lambda_{001}+\lambda_{100}+\lambda_{101}, \nonumber \\
\lambda_3&=\lambda_{000}+\lambda_{010}+\lambda_{100}+\lambda_{110},
\end{align}
where $\lambda_{k}\in [0,1]$ denotes the \textit{total} probability with which receiver $k$ is \textit{not} jammed. For example, in the $3$-user scenario, $\lambda_1$ indicates the total probability with which the $1$st receiver is not jammed, which happens when any one of the following events occurs: 1) none of the receivers is jammed, with probability $\lambda_{000}$; 2) only the $2$nd receiver is jammed, with probability $\lambda_{010}$; 3) only the $3$rd receiver is jammed, with probability $\lambda_{001}$; or 4) both the $2$nd and $3$rd receivers are jammed, with probability $\lambda_{011}$. Similar definitions hold for the $K$-user MISO BC. In general, $S(t)$ is a $K\times 1$ vector where a $1$ ($0$) in the $k$th position indicates that the $k$th receiver is jammed (not jammed).
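The marginals in \eqref{lambda_receiver_1} are mechanical to compute from the joint state distribution; the short Python sketch below (ours, using a hypothetical uniform joint distribution purely for illustration) carries out the computation for $K=3$.

\begin{verbatim}
# A short sketch (ours) of the marginal "not jammed" probabilities:
# lambda_k sums the joint probabilities of all jammer states S in which
# the k-th entry is 0 (receiver k not jammed).
from itertools import product

K = 3
# Hypothetical joint distribution over the 2^K jammer states; must sum to 1.
joint = {s: 1.0 / 2**K for s in product((0, 1), repeat=K)}  # uniform example

def marginal_not_jammed(joint, k):
    """Total probability that receiver k (1-indexed) is not jammed."""
    return sum(p for s, p in joint.items() if s[k - 1] == 0)

lam = [marginal_not_jammed(joint, k) for k in range(1, K + 1)]
print(lam)  # [0.5, 0.5, 0.5] for the uniform example
\end{verbatim}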
It is assumed that the jammer sends a signal with power equal to $P_T$ (the transmit signal power). This formulation attempts to capture the performance of the system in a time-varying interference-limited (here, jamming-limited) scenario where the received interference power is as high as the transmit signal power $P_T$ (a worst-case scenario in which the receiver cannot recover the symbol from the received signal). Furthermore, it is assumed that $\{\mathbf{J}(t)\}_{t=1}^{n}$ is independent of $\{S(t)\}_{t=1}^{n}$. We denote the global channel state information (between transmitter and receivers) at time $t$ by $\mathbf{H}(t)\triangleq \{\mathbf{H}_{1}(t), \mathbf{H}_{2}(t),\ldots,\mathbf{H}_{K}(t)\}$. In all analysis that follows, we assume that the receivers have complete knowledge of the global channel vectors $\{\mathbf{H}(t)\}_{t=1}^{n}$ and also of the jammer's strategy $\{S(t)\}_{t=1}^{n}$, i.e., full $\mathsf{CSIR}$ and full $\mathsf{JSIR}$ (similar assumptions were made in earlier works; see \cite{MAT2012}, \cite{ACSIT-ISIT}, \cite{ACSIT2012} and references therein).
\paragraph{Assumptions:} The following assumptions are made in this paper.
\begin{itemize}
\item If $\mathsf{CSIT}$ exists (i.e., when $I_{\mathsf{CSIT}}=\mathsf{P}$ or $\mathsf{D}$), the transmitter receives either instantaneous or delayed feedback from the receivers regarding the channel $\mathbf{H}(t)$. In either scenario, neither the transmitter nor the receivers require knowledge of $\mathbf{G}(t)=\{\mathbf{G}_{1}(t),\ldots,\mathbf{G}_{K}(t)\}$ i.e., the channel between the jammer and the receivers.
\item If $\mathsf{JSIT}$ exists (i.e., when $I_{\mathsf{JSIT}}=\mathsf{P}$ or $\mathsf{D}$), then the transmitter receives either instantaneous or delayed feedback about the jammers' strategy i.e., $S(t)$.
\item Irrespective of the availability/unavailability of $\mathsf{CSIT}$ and $\mathsf{JSIT}$, it is assumed that the transmitter has statistical knowledge of the jammer's strategy (i.e., the statistics of $S(t)$), which is assumed to be constant across time (these assumptions form the basis for future studies that deal with time-varying jammer statistics).
\item While the achievability schemes presented in Sections~\ref{Schemes}, \ref{TheoremsKuser} hold for arbitrary correlations between the random variables
$S(t)$, $\mathbf{J}(t)$, and $\mathbf{G}(t)$, the converse proofs provided in the Appendix hold under the assumption that these random variables are mutually independent and when the elements of $\mathbf{J}(t)$ are distributed i.i.d. as $\mathcal{CN}(0,P_T)$.
\item The theorems, achievability schemes and converse proofs presented in Sections~\ref{Theorems}--\ref{TheoremsKuser} and the Appendix hold true for any continuous distributions that $\mathbf{H}(t)$ and $\mathbf{G}(t)$ may assume. While the achievability schemes are valid for any distribution of the jammer's signal $\mathbf{J}(t)$, the converse proofs are presented for the case in which the jammer's signal is Gaussian distributed.
\end{itemize}
For the $K$-user MISO BC, a rate tuple $(R_{1},R_{2},\ldots,R_K)$, with $R_{k}= \log(|W_{k}|)/n$, where $n$ is the number of channel uses, $W_k$ denotes the message for the $k$th receiver and $|W_k|$ represents the cardinality of $W_k$, is achievable if there exists a sequence of encoding functions $f^{(n)}$ and decoding functions $g^{(n)}_{k}\left(Y_k^n,\mathbf{H}^n,\mathbf{S}^n\right)$ (one for each receiver) such that for all $k=1,2,\ldots,K$,
\begin{equation}\label{error_convergence}
P\left(W_k\neq g_k^n\left(Y_k^n,\mathbf{H}^n,\mathbf{S}^n\right)\right)\leq n\epsilon_{kn},
\end{equation}
where
\begin{equation}
\epsilon_{kn}\longrightarrow 0 \quad \mbox{as} \quad n\longrightarrow \infty
\end{equation}
i.e., the probability of incorrectly decoding the message $W_k$ from the signal received at user $k$ converges to zero asymptotically. In \eqref{error_convergence}, we have used the following shorthand notations: $Y_k^n=\left(Y_k(1),\ldots,Y_k(n)\right)$, $\mathbf{H}^{n}=\left(\mathbf{H}_1(1),..,\mathbf{H}_K(1),..,\mathbf{H}_1(n),..,\mathbf{H}_K(n)\right)$ and $\mathbf{S}^{n}=\left(S(1),S(2),\ldots,S(n)\right)$.
We are specifically interested in the degrees-of-freedom region $\mathcal{D}$, defined as the set of all achievable tuples $(d_{1},d_{2},\ldots,d_{K})$ with $d_{k}=\lim_{P_{T}\rightarrow \infty} \frac{R_{k}}{\log(P_{T})}$. The encoding functions $f^{(n)}$ that achieve the $\mathsf{DoF}$ described in Sections~\ref{Theorems} and \ref{TheoremsKuser} depend on the availability of $\mathsf{CSIT}$ and $\mathsf{JSIT}$, i.e., on the variable $I_{\mathsf{CSIT}}I_{\mathsf{JSIT}}$. For example, in the $\mathsf{DD}$ (delayed $\mathsf{CSIT}$, delayed $\mathsf{JSIT}$) configuration, the encoding function takes the following form:
\begin{equation}
\mathbf{X}(n)=f^{(n)}\left(W_{1},W_{2},\ldots,W_K,\mathbf{H}^{n-1}, \mathbf{S}^{n-1}\right),
\end{equation}
where the transmit signal $\mathbf{X}(n)$ at time $n$ depends on the past channel state $\left(\mathbf{H}^{n-1}\right)$ and jammer state $\left(\mathbf{S}^{n-1}\right)$ information available at the transmitter.
However, in the $\mathsf{NP}$ configuration, since the transmitter does not have knowledge of the channel (as no $\mathsf{CSIT}$ is available), it exploits the perfect and instantaneous knowledge of the jammer's strategy $\left(S(t)\right)$ by sending information exclusively to the unjammed receivers.
As a result, the encoding function for the $\mathsf{NP}$ configuration can be represented as
\begin{equation}
\mathbf{X}(n)=f^{(n)}\left(W_{1},W_{2},\ldots,W_K,\mathbf{S}^n\right).
\end{equation}
The encoding functions across various channel and jammer states depend on the transmission strategies used and are discussed in more detail in Sections~\ref{Schemes} and \ref{TheoremsKuser}.
\subsection{Review of Known Results}
As mentioned earlier, the $\mathsf{DoF}$ region for the $K$-user MISO BC has been studied extensively in the absence of external interference. We briefly present some of those important results that are relevant to the work presented in this paper.
\begin{enumerate}
\item In the absence of jamming, the $\mathsf{DoF}$ region with perfect $\mathsf{CSIT}$ is given by,
\begin{align}
d_k\leq 1,\quad k=1,2,\ldots,K,
\end{align}
and the achievable sum $\mathsf{DoF}$ is $K$ \cite{ACSIT2012}.
\item With delayed $\mathsf{CSIT}$, the $\mathsf{DoF}$ region in the absence of a jammer was characterized by Maddah-Ali and Tse in \cite{MAT2012}, and is given by
\begin{align}
\sum_{k=1}^K \frac{d_{\pi(k)}}{k}\leq 1,
\end{align}
where $\pi$ denotes any permutation of the set $\{1,2,3,\ldots,K\}$. In such a scenario, the sum $\mathsf{DoF}$ (henceforth referred to as
$\mathsf{DoF}_{\mathsf{MAT}}$) is given by
\begin{align}\label{DoF_MAT}
\mathsf{DoF}_{\mathsf{MAT}}(K)=\frac{K}{1+\frac{1}{2}+\ldots+\frac{1}{K}}.
\end{align}
\item The $\mathsf{DoF}$ region with no $\mathsf{CSIT}$ is given by
\begin{align}
\sum_{k=1}^K d_k\leq 1,
\end{align}
and the sum $\mathsf{DoF}$ in this case reduces to $1$ \cite{ACSIT2012}.
\end{enumerate}
It is easy to see that the sum $\mathsf{DoF}$ achieved in a delayed $\mathsf{CSIT}$ scenario lies in between the sum $\mathsf{DoF}$ achieved in the perfect $\mathsf{CSIT}$ and no $\mathsf{CSIT}$ scenarios.
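As a quick numeric sanity check (ours), the snippet below evaluates \eqref{DoF_MAT} and shows the delayed-$\mathsf{CSIT}$ sum $\mathsf{DoF}$ sandwiched between the no-$\mathsf{CSIT}$ value $1$ and the perfect-$\mathsf{CSIT}$ value $K$.

\begin{verbatim}
# A small evaluation (ours) of DoF_MAT(K) = K / (1 + 1/2 + ... + 1/K):
# the delayed-CSIT sum DoF lies between the no-CSIT sum DoF (1) and the
# perfect-CSIT sum DoF (K).
def dof_mat(K):
    return K / sum(1.0 / k for k in range(1, K + 1))

for K in (2, 3, 4):
    print(f"K={K}: no-CSIT=1, delayed={dof_mat(K):.4f}, perfect={K}")
# K=2 gives 4/3, recovering the 2-user result of the MAT scheme.
\end{verbatim}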
\section{Main Results and Discussion}\label{Theorems}
We first present $\mathsf{DoF}$ results for the $2$-user MISO BC under various assumptions on the availability of $\mathsf{CSIT}$ and $\mathsf{JSIT}$ and discuss various insights arising from these results. In the $2$-user case, the jammer state $S(t)$ at time $t$ can take one out of four values: $00, 01, 10$, or $11$, where
\begin{itemize}
\item $S(t)=00$ indicates that none of the receivers are jammed, which happens with probability $\lambda_{00}$,
\item $S(t)=01$ indicates that only receiver $1$ is not jammed, which happens with probability $\lambda_{01}$,
\item $S(t)=10$ indicates that only receiver $2$ is not jammed, which happens with probability $\lambda_{10}$, and finally
\item $S(t)=11$ indicates that both the receivers are jammed with probability $\lambda_{11}$.
\end{itemize}
In order to compactly present the results, we define the marginal probabilities
\begin{align}
\lambda_{1}&\triangleq \lambda_{00}+\lambda_{01},\nonumber\\
\lambda_{2}&\triangleq \lambda_{00}+\lambda_{10}\nonumber,
\end{align}
where $\lambda_{k}$, for $k=1,2$ is the total probability with which receiver $k$ is \emph{not jammed}. In the sequel, Theorems~\ref{TheoremPP}-\ref{TheoremNN} present the optimal $\mathsf{DoF}$ characterization for the $\left(\mathsf{CSIT},\mathsf{JSIT}\right)$ configurations $\mathsf{PP},\mathsf{PD},\mathsf{PN},\mathsf{DP},\mathsf{DD},\mathsf{NP}$ and $\mathsf{NN}$ while Theorems~\ref{TheoremDN} and \ref{TheoremND} present non-trivial achievable schemes (novel inner bounds) for the $\mathsf{DN}$ and $\mathsf{ND}$ configurations.
\begin{Theo}\label{TheoremPP}
The $\mathsf{DoF}$ region of the $2$-user MISO BC for each of the $\mathsf{CSIT}$-$\mathsf{JSIT}$ configurations $\mathsf{PP}$, $\mathsf{PD}$ and $\mathsf{PN}$ is the same and is given by the set of non-negative pairs $(d_{1},d_{2})$ that satisfy
\begin{align}
d_{1}&\leq \lambda_{1}\\
d_{2}&\leq \lambda_{2}.
\end{align}
\end{Theo}
\begin{Theo}\label{TheoremDP}
The $\mathsf{DoF}$ region of the $2$-user MISO BC for the $\mathsf{CSIT}$-$\mathsf{JSIT}$ configuration $\mathsf{DP}$ is given by the set of non-negative pairs $(d_{1},d_{2})$ that satisfy
\begin{align}
d_{1}&\leq \lambda_1\\
d_{2}&\leq \lambda_2\\
2d_{1}+d_{2}&\leq 2\lambda_1+\lambda_{10}\\
d_{1}+2d_{2}&\leq 2\lambda_2+\lambda_{01}.
\end{align}
\end{Theo}
\begin{Theo}\label{TheoremDD}
The $\mathsf{DoF}$ region of the $2$-user MISO BC for the $\mathsf{CSIT}$-$\mathsf{JSIT}$ configuration $\mathsf{DD}$, is given by the set of non-negative pairs $(d_{1},d_{2})$ that satisfy
\begin{align}
\frac{d_{1}}{\lambda_{1}}+\frac{d_{2}}{(\lambda_{1}+\lambda_{2})}&\leq 1 \label{DD1} \\
\frac{d_{1}}{(\lambda_{1}+\lambda_{2})}+\frac{d_{2}}{\lambda_{2}}&\leq 1. \label{DD2}
\end{align}
\end{Theo}
\begin{Theo}\label{TheoremNP}
The $\mathsf{DoF}$ region for the $2$-user MISO BC for the $\mathsf{CSIT}$-$\mathsf{JSIT}$ configuration $\mathsf{NP}$, is given by the set of non-negative pairs $(d_{1},d_{2})$ that satisfy
\begin{align}
d_{1}&\leq \lambda_1\\
d_{2}&\leq \lambda_2\\
d_{1}+d_{2} &\leq \lambda_{00}+\lambda_{01}+\lambda_{10}.
\end{align}
\end{Theo}
\begin{Theo}\label{TheoremNN}
The $\mathsf{DoF}$ region of the $2$-user MISO BC for the $\mathsf{CSIT}$-$\mathsf{JSIT}$ configuration $\mathsf{NN}$ is given by the set of non-negative pairs $(d_{1},d_{2})$ that satisfy
\begin{align}
\frac{d_{1}}{\lambda_{1}}+\frac{d_{2}}{\lambda_{2}}&\leq 1.
\end{align}
\end{Theo}
\begin{remark}{\em [Redundancy of $\mathsf{JSIT}$ with Perfect $\mathsf{CSIT}$]}
{\em We note from Theorem~\ref{TheoremPP} that when perfect $\mathsf{CSIT}$ is available, the $\mathsf{DoF}$ region remains the same regardless of the availability of jammer state information at the transmitter. This implies that with perfect $\mathsf{CSIT}$, statistical knowledge of the jammer's strategy alone suffices to achieve the optimal $\mathsf{DoF}$ region (recall that the transmitter is assumed to have statistical knowledge of the jammer's strategy). The availability of perfect $\mathsf{CSIT}$ helps to avoid cross-interference in such a broadcast-type communication system and thereby enables the receivers to decode their intended symbols whenever they are not jammed. }
\end{remark}
\begin{remark}{\em [Quantifying $\mathsf{DoF}$ Loss]}
{\em When the transmitter has perfect knowledge of the jammer's state, i.e., perfect $\mathsf{JSIT}$, the $\mathsf{Sum}\ \mathsf{DoF}$ for the various configurations is
\begin{align}
\mathsf{Sum}\ \mathsf{DoF} \textsf{ (with Perfect $\mathsf{JSIT}$)}=
\begin{cases}
\lambda_1+\lambda_2,& \mbox{ perfect $\mathsf{CSIT}$},\\
\lambda_1+\lambda_2-\frac{2}{3}\lambda_{00},& \mbox{ delayed $\mathsf{CSIT}$},\\
\lambda_1+\lambda_2-\lambda_{00},& \mbox{ no $\mathsf{CSIT}$}.
\end{cases}
\end{align}
It is seen that the sum $\mathsf{DoF}$s achieved in the $\mathsf{DP}$ and $\mathsf{NP}$ configurations are less than $\left(\lambda_1+\lambda_2\right)$, the sum $\mathsf{DoF}$ achieved in the $\mathsf{PP}$ configuration. The loss in $\mathsf{DoF}$ due to delayed channel knowledge is $\frac{2}{3}\lambda_{00}$, and that due to no channel knowledge is $\lambda_{00}$. As expected, the loss in the $\mathsf{NP}$ configuration is larger than the corresponding $\mathsf{DoF}$ loss in the $\mathsf{DP}$ configuration due to the unavailability of $\mathsf{CSIT}$. Interestingly, the loss in $\mathsf{DoF}$ due to delayed channel state information in the absence of a jammer is $2-\frac{4}{3}=\frac{2}{3}$ (where $2$ $\left(\frac{4}{3}\right)$ is the $\mathsf{DoF}$ achieved in a $2$-user MISO BC with perfect (delayed) $\mathsf{CSIT}$ \cite{MAT2012}), which, in the presence of a jammer, corresponds to the case $\lambda_{00}=1$, i.e., none of the receivers is ever jammed. Along similar lines, the $\mathsf{DoF}$ loss due to no $\mathsf{CSIT}$ is $2-1=1$, where $1$ is the $\mathsf{DoF}$ achieved in the $2$-user MISO BC when there is no $\mathsf{CSIT}$ \cite{ACSIT2012} (in the absence of jamming). The loss in $\mathsf{DoF}$ converges to $0$ as $\lambda_{00}\rightarrow 0$, i.e., the $\mathsf{PP}$, $\mathsf{DP}$ and $\mathsf{NP}$ configurations are equivalent when the jammer disrupts either one or both of the receivers at any given time. }
\end{remark}
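The following short snippet (ours, with jammer statistics assumed purely for illustration) evaluates the three sum-$\mathsf{DoF}$ expressions above and makes the growing loss visible numerically.

\begin{verbatim}
# A numeric illustration (ours, with assumed jammer statistics) of the
# sum-DoF with perfect JSIT under the three CSIT cases.
l00, l01, l10, l11 = 0.4, 0.2, 0.2, 0.2        # hypothetical probabilities
assert abs(l00 + l01 + l10 + l11 - 1.0) < 1e-12

lam1, lam2 = l00 + l01, l00 + l10
dof_perfect = lam1 + lam2                      # PP configuration
dof_delayed = lam1 + lam2 - (2.0 / 3.0) * l00  # DP configuration
dof_none    = lam1 + lam2 - l00                # NP configuration
print(dof_perfect, dof_delayed, dof_none)      # 1.2, 0.9333..., 0.8
\end{verbatim}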
\begin{remark}{\em [Separability with Perfect $\mathsf{JSIT}$]
When perfect $\mathsf{JSIT}$ is present, i.e., in the $\mathsf{PP}$, $\mathsf{DP}$ and $\mathsf{NP}$ configurations, the transmitter \textbf{does not} need to code (transmit) \emph{across} different jammer states; in other words, the jammer's states are \emph{separable}. For instance, consider the case of delayed $\mathsf{CSIT}$. In the absence of a jammer, the optimal $\mathsf{DoF}$ with delayed $\mathsf{CSIT}$ is $4/3$, as shown in \cite{MAT2012}. The optimal strategy in the presence of a jammer and with perfect $\mathsf{JSIT}$ is the following: use the $00$ state to achieve $\frac{4}{3}\lambda_{00}$ $\mathsf{DoF}$ by employing the $\mathsf{MAT}$ scheme \cite{MAT2012} (the transmission scheme achieving the sum $\mathsf{DoF}$ given in \eqref{DoF_MAT}, explained in Section~\ref{Schemes}), use the $01$ state to achieve $\lambda_{01}$ $\mathsf{DoF}$ by transmitting to receiver $1$, and use the $10$ state to achieve $\lambda_{10}$ $\mathsf{DoF}$ by transmitting to receiver $2$. The state $11$ yields $0$ $\mathsf{DoF}$ since both receivers are jammed. Thus, the net achievable $\mathsf{DoF}$ of this separation-based strategy is given as $\frac{4}{3}\lambda_{00}+ \lambda_{01}+\lambda_{10}= \lambda_{1}+\lambda_{2}-\frac{2}{3}\lambda_{00}$. Similar interpretations hold with perfect $\mathsf{CSIT}$ and no $\mathsf{CSIT}$. The transmission schemes that achieve these $\mathsf{DoF}$s and make the jammer's states separable are illustrated in more detail in Section~\ref{Schemes}. }
\end{remark}
\begin{remark} {\em [Marginal Equivalence] The $\mathsf{DoF}$ regions in Theorems \ref{TheoremPP}, \ref{TheoremDD} and \ref{TheoremNN} only depend on the marginal probabilities $(\lambda_{1}, \lambda_{2})$ with which each receiver is not jammed. This implies that two different jamming strategies with statistics, $\{\lambda_{00}, \lambda_{01}, \lambda_{10}, \lambda_{11}\}$ and $\{\lambda^{'}_{00}, \lambda^{'}_{01}, \lambda^{'}_{10}, \lambda^{'}_{11}\}$ result in the same $\mathsf{DoF}$ regions for $\mathsf{PP}$, $\mathsf{PD}$, $\mathsf{PN}$, $\mathsf{DD}$ and $\mathsf{NN}$ configurations as long as $\lambda_{00}+\lambda_{01}=\lambda^{'}_{00}+\lambda^{'}_{01}=\lambda_{1}$ and $\lambda_{00}+\lambda_{10}=\lambda^{'}_{00}+\lambda^{'}_{10}=\lambda_{2}$.}
\end{remark}
In the next two theorems, we present achievable $\mathsf{DoF}$ regions for the remaining configurations $\mathsf{DN}$ and $\mathsf{ND}$, respectively. Note that by ignoring the available delayed $\mathsf{CSIT}$ in the $\mathsf{DN}$ configuration and the available delayed $\mathsf{JSIT}$ in the $\mathsf{ND}$ configuration, the $\mathsf{DoF}$ region described by Theorem~\ref{TheoremNN} can always be achieved. However, the novel inner bounds presented in Theorems~\ref{TheoremDN} and \ref{TheoremND} show that the achievable $\mathsf{DoF}$ can be improved by synergistically using the delayed feedback regarding $\mathsf{CSIT}$ and $\mathsf{JSIT}$.
\begin{Theo}\label{TheoremDN}
An achievable $\mathsf{DoF}$ region for the $2$-user MISO BC for the $\mathsf{CSIT}$-$\mathsf{JSIT}$ configuration $\mathsf{DN}$, is given as follows.
\noindent For $\frac{|\lambda_{1}-\lambda_{2}|}{\lambda_{1}\lambda_{2}}\leq 1$, the following region is achievable:
\begin{align}
d_{1} +\frac{\left(2\max(1,\lambda_{1}/\lambda_{2})-1\right)}{(1+\lambda_2)}d_{2} &\leq \lambda_{1} \\
\frac{\left(2\max(1,\lambda_{2}/\lambda_{1})-1\right)}{(1+\lambda_1)}d_{1} + d_{2}&\leq \lambda_{2}.
\end{align}
For $\frac{|\lambda_{1}-\lambda_{2}|}{\lambda_{1}\lambda_{2}}> 1$, the following region is achievable:
\begin{align}
\frac{d_{1}}{\lambda_{1}}+\frac{d_{2}}{\lambda_{2}}&\leq 1.
\end{align}
\end{Theo}
Though the optimal $\mathsf{DoF}$ region for the $\mathsf{DN}$ configuration remains unknown, we propose a novel inner bound (achievable scheme) to the $\mathsf{DoF}$ region, as specified in Theorem~\ref{TheoremDN}. This scheme is based on a coding scheme (an alternative to the original transmission scheme proposed in \cite{MAT2012}) that achieves a $\mathsf{DoF}$ of $\frac{4}{3}$ for the $2$-user MISO BC in the absence of jamming attacks. This alternative scheme is discussed in Section~\ref{Schemes}.
\begin{Theo}\label{TheoremND}
An achievable $\mathsf{DoF}$ region for the $2$-user MISO BC in the $\mathsf{CSIT}$-$\mathsf{JSIT}$ configuration $\mathsf{ND}$, is given by the set of non-negative pairs $(d_{1},d_{2})$ that satisfy
\begin{align}
\frac{d_1}{\lambda_1} + \frac{d_2}{\lambda_{00}+\lambda_{01}+\lambda_{10}} &\leq 1 \\
\frac{d_1}{\lambda_{00}+\lambda_{01}+\lambda_{10}} + \frac{d_2}{\lambda_2} &\leq 1.
\end{align}
\end{Theo}
By noticing that $\lambda_{00}+\lambda_{01}+\lambda_{10} \geq \max\left(\lambda_1,\lambda_2\right)$, it can be seen that the $\mathsf{DoF}$ region described by Theorem~\ref{TheoremND} contains the region described by Theorem~\ref{TheoremNN}, i.e., the region achieved in the $\mathsf{NN}$ configuration can be enlarged by utilizing the delayed $\mathsf{JSIT}$. Also, the $\mathsf{DoF}$ region achievable in the $\mathsf{ND}$ configuration is a subset of the $\mathsf{DoF}$ region achieved in the $\mathsf{DD}$ configuration, because $\lambda_1+\lambda_2\geq\lambda_{00}+\lambda_{01}+\lambda_{10}$. However, in scenarios where $\lambda_{00}=0$, the $\mathsf{DoF}$ regions achieved by these two configurations coincide. Thus the converse proof in the Appendix that shows the optimality of the $\mathsf{DoF}$ region achieved in the $\mathsf{DD}$ configuration also holds for the $\mathsf{ND}$ scenario when $\lambda_{00}=0$. This equivalence will be explained further in Section~\ref{Schemes}.
Table~\ref{Theorem_table} summarizes the mapping between the $(\mathsf{CSIT},\mathsf{JSIT})$ configurations and the
theorems that specify their $\mathsf{DoF}$. The coding schemes that achieve the corresponding degrees of freedom regions are detailed in Section \ref{Schemes} and the corresponding converse proofs are presented in the Appendix.
\begin{table}[t]
\centering
\begin{tabular}{|c|c|c|l|}\hline
$\mathsf{CSIT}$&$\mathsf{JSIT}$&Configuration ($I_{\mathsf{CSIT}} I_{\mathsf{JSIT}}$) & Theorem \\ \hline
&Perfect&$\mathsf{PP}$& \\
Perfect&Delayed&$\mathsf{PD}$& Theorem~\ref{TheoremPP}\\
&None&$\mathsf{PN}$& \\ \hline
&Perfect&$\mathsf{DP}$& Theorem~\ref{TheoremDP}\\
Delayed&Delayed&$\mathsf{DD}$& Theorem~\ref{TheoremDD}\\
&None&$\mathsf{DN}$& Theorem~\ref{TheoremDN} [inner bound]\\ \hline
&Perfect&$\mathsf{NP}$& Theorem~\ref{TheoremNP}\\
None&Delayed&$\mathsf{ND}$& Theorem~\ref{TheoremND} [inner bound]\\
&None&$\mathsf{NN}$& Theorem~\ref{TheoremNN}\\ \hline
\end{tabular}
\caption{$\mathsf{CSIT},\mathsf{JSIT}$ configurations and corresponding theorems.}
\label{Theorem_table}
\end{table}
\section{Achievability Proofs}\label{Schemes}
Here, we present the transmission schemes achieving the bounds mentioned in Theorems~\ref{TheoremPP}-\ref{TheoremND}.
\subsection{Perfect $\mathsf{CSIT}$}
In this subsection, schemes achieving the $\mathsf{DoF}$ regions for the $\mathsf{PP}$, $\mathsf{PD}$ and $\mathsf{PN}$ configurations are discussed.
It is clear that the following ordering holds:
\begin{align}\label{DoF_PCSIT_Compare}
\mathsf{DoF}_{\mathsf{PN}} \subseteq \mathsf{DoF}_{\mathsf{PD}} \subseteq \mathsf{DoF}_{\mathsf{PP}},
\end{align}
i.e., the $\mathsf{DoF}$ is never reduced when $\mathsf{JSI}$ (i.e., $S(t)$) is available at the transmitter.
\subsubsection{Perfect $\mathsf{CSIT}$, Perfect $\mathsf{JSIT}$ ($\mathsf{PP}$):}
In this configuration, the transmitter has perfect and instantaneous knowledge of both $\mathsf{CSI}$ and $\mathsf{JSI}$. Further, since the jammer's states ($4$ in this case)
are i.i.d.\ across time, the transmitter's strategy in this configuration is also independent across time. This is further explained below.
\begin{itemize}
\item When $S(t)=11$, i.e., when both the receivers are jammed, the transmitter does not send any information symbols to the receivers
as they are completely disrupted by the jamming signals.
\item When $S(t)=01$, i.e., the case when only the $2$nd receiver is jammed and the $1$st receiver is un-jammed, the transmitter sends
\begin{align}
\mathbf{X}(t)=\left[\begin{matrix}a \\ 0\end{matrix} \right],
\end{align}
where $a$ is an information symbol intended for the $1$st receiver. In this case, receiver $1$ receives
\begin{align}
Y_1(t)=\mathbf{H}_1(t)\mathbf{X}(t)+N_1(t)\equiv h_{11}(t)a+N_1(t),
\end{align}
and the $2$nd receiver gets
\begin{align}
Y_2(t)=\mathbf{H}_2(t)\mathbf{X}(t)+\mathbf{G}_2(t)\mathbf{J}(t)+N_2(t).
\end{align}
The $2$nd receiver cannot recover its symbols because it is disrupted by the jamming signals. However, since the $1$st receiver is un-jammed, it can recover the intended symbols within noise distortion\footnote{Throughout the paper, it is assumed that the receivers are capable of recovering their symbols within noise distortion whenever they are not jammed (a valid assumption given that the $\mathsf{DoF}$ characterization is done for $P_T\rightarrow \infty$). }.
\item $S(t)=10$, i.e., the case when only the $1$st receiver is jammed and the $2$nd receiver is un-jammed. This is the complement of the jammer state $S(t)=01$. In this scenario, the transmitter sends
\begin{align}
\mathbf{X}(t)=\left[\begin{matrix}0 \\ b\end{matrix} \right],
\end{align}
where $b$ is an information symbol intended for the $2$nd receiver. The $2$nd receiver can recover the symbol $b$ within noise distortion.
\item Finally, for the jammer state $S(t)=00$, i.e., none of the receivers are jammed, the transmitter can increase the $\mathsf{DoF}$ by sending symbols to both the receivers. This is achieved by using the knowledge of the perfect and instantaneous channel state information. In such a scenario, the transmitter employs a pre-coding based zero-forcing transmission strategy as illustrated below. The transmitter sends
\begin{align}
\mathbf{X}(t)=\mathbf{B}_1(t)a+\mathbf{B}_2(t)b
\end{align}
where $\mathbf{B}_1(t)$ and $\mathbf{B}_2(t)$ are $2\times 1$ auxiliary pre-coding vectors such that $\mathbf{H}_{1}(t)\mathbf{B}_2(t)=0$ and
$\mathbf{H}_{2}(t)\mathbf{B}_1(t)=0$ (i.e., no interference is caused at a receiver by the unintended information symbols). Thus, the received signals at the receivers are given by
\begin{align}
Y_{1}(t)&= \mathbf{H}_{1}(t)\mathbf{B}_1(t)a + N_{1}(t)\\
Y_{2}(t)&= \mathbf{H}_{2}(t)\mathbf{B}_2(t)b + N_{2}(t)
\end{align}
which are decoded at the receivers using available $\mathsf{CSIR}$ (jamming signal $\mathbf{J}(t)$ is not present in the received signal since $S_1(t)=S_2(t)=0$).
\end{itemize}
Based on the above transmission scheme, it is seen that each receiver can decode its intended information symbols whenever it is not jammed. Since the $1$st receiver is not jammed in the states $S(t)=00$ and $S(t)=01$, which happen with probabilities $\lambda_{00}$ and $\lambda_{01}$ respectively (i.e., it can recover symbols for a $\lambda_{00}+\lambda_{01}$ fraction of the total transmission time), the $\mathsf{DoF}$ achieved is $\lambda_1=\lambda_{00}+\lambda_{01}$. Similarly, the $\mathsf{DoF}$ achieved by the $2$nd receiver is $\lambda_2=\lambda_{00}+\lambda_{10}$. Thus the $\mathsf{DoF}$ pair $(\lambda_1,\lambda_2)$ described by Theorem~\ref{TheoremPP} is achieved using this transmission scheme.
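The zero-forcing step used in the state $S(t)=00$ is easy to verify numerically; the following numpy sketch (ours, for the $2$-user case with real-valued symbols chosen for illustration) constructs $\mathbf{B}_1$ in the null space of $\mathbf{H}_2$ and $\mathbf{B}_2$ in the null space of $\mathbf{H}_1$ and checks that each receiver sees no cross-interference.

\begin{verbatim}
# A numpy sketch (ours) of the zero-forcing step in state S(t)=00:
# B_1 lies in the null space of H_2 and B_2 in the null space of H_1,
# so each receiver observes only its intended symbol.
import numpy as np

rng = np.random.default_rng(1)
H1 = rng.standard_normal(2) + 1j * rng.standard_normal(2)  # 1x2 row to Rx1
H2 = rng.standard_normal(2) + 1j * rng.standard_normal(2)  # 1x2 row to Rx2

def null_vec(h):
    """Return a 2x1 vector b with h @ b = 0 (exact for a 1x2 row)."""
    return np.array([h[1], -h[0]])

B1, B2 = null_vec(H2), null_vec(H1)     # H2 @ B1 = 0 and H1 @ B2 = 0
a, b = 1.0 + 0.0j, -2.0 + 0.0j          # symbols for Rx1 and Rx2
X = B1 * a + B2 * b                     # transmit vector X = B1 a + B2 b

# Each receiver sees only its own precoded symbol (interference-free).
print(np.isclose(H1 @ X, (H1 @ B1) * a))  # True
print(np.isclose(H2 @ X, (H2 @ B2) * b))  # True
\end{verbatim}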
\subsubsection{Perfect $\mathsf{CSIT}$, Delayed $\mathsf{JSIT}$ ($\mathsf{PD}$):}
Unlike in the $\mathsf{PP}$ configuration, the transmitter's strategy in the $\mathsf{PD}$ configuration is not independent (or not separable) across time instants, due to the unavailability of instantaneous $\mathsf{JSIT}$. However, we show that using the knowledge of perfect and instantaneous $\mathsf{CSIT}$ and the delayed knowledge of $\mathsf{JSIT}$, the $\mathsf{DoF}$ pair $(d_{1},d_{2})=(\lambda_{1}, \lambda_{2})$ can still be achieved. Since the transmitter has delayed knowledge about the jammer's strategy, it adapts its transmission scheme at time $t$ based on the feedback it receives about the jammer's strategy at time $t-1$, i.e., $S(t-1)$. This transmission scheme is briefly explained below.
Let $\{a_1,a_2\}$ denote the symbols to be sent to the $1$st receiver and $\{b_1,b_2\}$ to the $2$nd receiver. Since the transmitter has perfect knowledge about the channel or $\mathsf{CSIT}$, it creates pre-coding vectors $\mathbf{B}_1(t)$ and $\mathbf{B}_2(t)$ such that $\mathbf{H}_{1}(t)\mathbf{B}_2(t)=0$ and $\mathbf{H}_{2}(t)\mathbf{B}_1(t)=0$ (similar to the $\mathsf{PP}$ configuration). For example, at $t=1$, it sends
\begin{align}
\mathbf{X}(1)=\mathbf{B}_1(1)a_1+\mathbf{B}_2(1)b_1.
\end{align}
\begin{itemize}
\item If the delayed $\mathsf{JSIT}$ feedback about the jammer's state at $t=1$ indicates that none of the receivers was jammed, i.e., $S(1)=00$, then the transmitter sends new symbols $a_2$ and $b_2$ as
\begin{align}
\mathbf{X}(2)=\mathbf{B}_1(2)a_2+\mathbf{B}_2(2)b_2,
\end{align}
at time $t=2$ because both the receivers can decode their intended symbols $a_1$ and $b_1$ within noise distortion in the absence of jamming signals.
\item If the jammer's state at $t=1$ indicates that only the $1$st receiver was jammed, i.e., $S(1)=10$, then the transmitter sends
\begin{align}
\mathbf{X}(2)=\mathbf{B}_1(2)a_1+\mathbf{B}_2(2)b_2,
\end{align}
in order to deliver the undelivered symbol to the $1$st receiver and a new symbol for the $2$nd receiver (since it was not jammed at $t=1$).
\item When the feedback about the jammer's state at $t=1$ indicates that $S(1)=01$, the coding scheme used for $S(1)=10$ is reversed (the roles of the receivers are flipped), and the transmitter sends a new symbol to the $1$st receiver and the undelivered symbol to the $2$nd receiver as
\begin{align}
\mathbf{X}(2)=\mathbf{B}_1(2)a_2+\mathbf{B}_2(2)b_1.
\end{align}
\item If both the receivers were jammed, i.e., $S(1)=11$, then the transmitter retransmits the symbols for both receivers as
\begin{align}
\mathbf{X}(2)=\mathbf{B}_1(2)a_1+\mathbf{B}_2(2)b_1.
\end{align}
\end{itemize}
By extending this transmission scheme to multiple time instants, the $\mathsf{DoF}$ described by Theorem~\ref{TheoremPP} is also achieved in the $\mathsf{PD}$ configuration (since receivers $1$ and $2$ get jamming-free symbols whenever they are not jammed, which happens with probabilities $\lambda_1$ and $\lambda_2$ respectively).
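To make the bookkeeping concrete, the following schematic Python sketch (ours; the queue-based feedback model is an illustrative assumption) emulates the retransmission rule above: a head-of-line symbol is retired from a receiver's queue only once the delayed feedback confirms that the receiver was unjammed in the slot in which the symbol was sent.

\begin{verbatim}
# A schematic sketch (ours) of the delayed-JSIT retransmission rule in
# the PD configuration: zero-forced symbols are drawn from per-receiver
# queues, and a symbol is retired only after the delayed feedback
# confirms the receiver was unjammed when the symbol was sent.
from collections import deque

def slots_to_deliver(jsi_trace, a_syms, b_syms):
    """jsi_trace[t] = (S1, S2) at slot t; returns the slots used."""
    qa, qb = deque(a_syms), deque(b_syms)
    t = 0
    while (qa or qb) and t < len(jsi_trace):
        s1, s2 = jsi_trace[t]
        if qa and s1 == 0:   # head-of-line symbol delivered to Rx1
            qa.popleft()
        if qb and s2 == 0:   # head-of-line symbol delivered to Rx2
            qb.popleft()
        t += 1               # otherwise the same symbol is re-sent next slot
    return t

print(slots_to_deliver([(0, 0), (1, 0), (0, 1), (1, 1), (0, 0)],
                       ["a1", "a2"], ["b1", "b2"]))  # 3 slots here
\end{verbatim}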
\subsubsection{Perfect $\mathsf{CSIT}$, No $\mathsf{JSIT}$ ($\mathsf{PN}$):}
In this section, we sketch the achievability of the pair $(d_{1},d_{2})=(\lambda_{1}, \lambda_{2})$ for the $\mathsf{PN}$ configuration. We first note that for a scheme of block length $n$, for sufficiently large $n$, only $\lambda_k n$ symbols will be received cleanly (i.e., not jammed) at receiver $k$, since at each time instant the $k$th receiver gets a jamming-free signal with probability $\lambda_k$. As the transmitter is statistically aware of the jammer's strategy, it only sends $\lambda_k n$ symbols for receiver $k$ over the entire transmission period. It overcomes the problem of no feedback by sending pre-coded random linear combinations (LCs) of these $\{\lambda_k n\}_{k=1,2}$ symbols at each time instant. Notice here the difference between the schemes suggested for the $\mathsf{PD}$ and $\mathsf{PN}$ configurations. Due to the availability of $\mathsf{JSIT}$, albeit in a delayed manner, in the $\mathsf{PD}$ configuration the transmitter can deliver information symbols to the receivers in a timely fashion without combining the symbols. This is not the case in the $\mathsf{PN}$ configuration. The proposed scheme for the $\mathsf{PN}$ configuration is illustrated below.
Let $\{a_{j}\}_{j=1}^{\lambda_{1}n}$ and $\{b_{j}\}_{j=1}^{\lambda_{2}n}$ denote the information symbols intended to be sent to receiver $1$ and $2$ respectively.
Having the knowledge of $\{\mathbf{H}_{1}(t), \mathbf{H}_{2}(t)\}$, the transmitter sends the following input at time $t$:
\begin{align}
\mathbf{X}(t)=\mathbf{B}_1(t)f_{t}(a_{1},\ldots,a_{\lambda_{1}n})+ \mathbf{B}_2(t)g_{t}(b_{1},\ldots,b_{\lambda_{2}n}),
\end{align}
where $f_{t}(\cdot), g_{t}(\cdot)$ are \textit{random} linear combinations\footnote{The random coefficients are assumed to be known at the receivers. The characterization of the overhead involved in this process is beyond the scope of this paper. } of the respective $\lambda_{1}n$ and $\lambda_{2}n$ symbols; and the $\mathbf{B}_1(t)$, $\mathbf{B}_2(t)$ are $2\times 1$ precoding vectors (similar to the ones used in $\mathsf{PP}$ and $\mathsf{PD}$ configurations). Thus, the received signals at time $t$ are given as
\begin{align}
Y_{1}(t)\hspace{-0.1cm}&= \hspace{-0.1cm}\mathbf{H}_{1}(t)\mathbf{B}_1(t)f_{t}(a_{1},..,a_{\lambda_{1}n})\hspace{-0.05cm}+\hspace{-0.05cm} S_1(t)\mathbf{G}_{1}(t)\mathbf{J}(t) +N_{1}(t)\nonumber\\
Y_{2}(t)\hspace{-0.1cm}&= \hspace{-0.1cm}\mathbf{H}_{2}(t)\mathbf{B}_2(t)g_{t}(b_{1},..,b_{\lambda_{2}n})\hspace{-0.05cm} +\hspace{-0.05cm} S_2(t)\mathbf{G}_{2}(t)\mathbf{J}(t) +N_{2}(t).\nonumber
\end{align}
Each receiver can decode all of its symbols upon successfully receiving $\lambda_k n$ linearly independent combinations\footnote{Note that in order to decode all $\lambda_{1}n$ symbols, $\lambda_{1}n$ linearly independent combinations of these symbols are needed. For example, to decode $a_1,a_2,a_3$, three LCs, say $f_1(a_1,a_2,a_3)$, $f_2(a_1,a_2,a_3)$ and $f_3(a_1,a_2,a_3)$, are sufficient.} transmitted using the zero-forcing strategy.
Using this scheme, each receiver can decode $\lambda_k n$ symbols over $n$ time instants using the received $\lambda_k n$ LCs.
Hence $(d_{1}, d_{2})=(\lambda_1, \lambda_2)$ is achievable. The proposed scheme is similar in spirit to the random network coding used in broadcast packet erasure channels, where the receivers collect a sufficient number of packets before being able to decode their intended information (see \cite{ChihChunWang2012}, \cite{Erasure} and references therein).
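The decoding step of the random-LC scheme amounts to inverting a random coefficient matrix; the toy sketch below (ours, ignoring noise and assuming the LC coefficients are known at the receiver, as in the footnote above) illustrates this for three symbols.

\begin{verbatim}
# A toy sketch (ours) of the random-LC idea in the PN scheme: a receiver
# that collects m jamming-free, linearly independent combinations of its
# m symbols can invert the system and decode them all.
import numpy as np

rng = np.random.default_rng(2)
m = 3                                    # lambda_k * n symbols, illustratively
syms = rng.standard_normal(m)            # information symbols
C = rng.standard_normal((m, m))          # random LC coefficients (known at Rx)
clean_obs = C @ syms                     # m unjammed observations, noise ignored
decoded = np.linalg.solve(C, clean_obs)  # invertible with probability one
print(np.allclose(decoded, syms))        # True
\end{verbatim}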
\begin{remark}{\em For all possible ($\mathsf{CSIT}$, $\mathsf{JSIT}$) configurations, the $\mathsf{DoF}$ pairs: $(d_{1}, d_{2})=(\lambda_1,0)$ and $(d_{1}, d_{2})=(0, \lambda_2)$
are achievable. This is possible via a simple scheme in which the transmitter sends random LCs of $\lambda_{k}n$ symbols to only the $k$th receiver throughout the transmission interval. The $k$th receiver can decode $\lambda_{k}n$ symbols in $n$ time slots given the fact that it receives jamming-free LCs with probability $\lambda_k$. As Theorem \ref{TheoremNN} suggests, for the case in which the transmitter has neither $\mathsf{CSI}$ nor $\mathsf{JSI}$ (i.e., in the $\mathsf{NN}$ configuration), the optimal strategy is to alternate between transmitting symbols exclusively to one receiver at a time. }
\end{remark}
\begin{remark}
{\em Although the $\mathsf{PP}$, $\mathsf{PD}$ and $\mathsf{PN}$ configurations are equivalent in terms of the achievable $\mathsf{DoF}$ region, they may not be equivalent in terms of the achievable capacity region. For instance, in the $\mathsf{PN}$ configuration the intended symbols can be decoded only after sufficiently many linear combinations of them have been received. This is not the case in the other configurations: in the $\mathsf{PP}$ and $\mathsf{PD}$ configurations, the receivers can decode their intended symbols instantaneously whenever they are not jammed. Thus, from the receivers' perspective, the decoding delay is largest in the $\mathsf{PN}$ configuration and smallest in the $\mathsf{PP}$ and $\mathsf{PD}$ configurations. In addition, from the transmitter's perspective, retransmissions are not required in the $\mathsf{PP}$ configuration, while they are necessary in the $\mathsf{PD}$ and $\mathsf{PN}$ configurations to ensure that the receivers get their intended symbols. Thus the $\mathsf{PP}$, $\mathsf{PD}$ and $\mathsf{PN}$ configurations should not be mistaken for being equivalent in all respects. }
\end{remark}
\subsection{Delayed $\mathsf{CSIT}$}\label{DCSIT}
The $\mathsf{DoF}$ region of a $2$-user MISO BC with delayed $\mathsf{CSIT}$ has been studied in the absence of a jammer \cite{MAT2012}. A $3$-stage scheme was proposed by the authors of \cite{MAT2012} to increase the optimal sum $\mathsf{DoF}$ from $1$ (no $\mathsf{CSIT}$) to $\frac{4}{3}$. We briefly explain this scheme here.
\subsubsection{Scheme achieving $\mathsf{DoF} =\frac{4}{3}$ in the absence of jamming}
At $t=1$, the transmitter sends
\begin{equation}
\mathbf{X}(1)=\left[\begin{matrix}a_1 \\ a_2\end{matrix} \right],
\end{equation}
where $a_1,a_2$ are symbols intended for the $1$st receiver.
The outputs at the receivers (within noise distortion) at $t=1$ are given as
\begin{align}
Y_1(1)&=\mathbf{H}_1(1)\left[\begin{matrix}a_1 \\ a_2\end{matrix} \right] = h_{11}(1)a_1+h_{21}(1)a_2\triangleq \mathcal{F}_1(a_1,a_2) \\
Y_2(1)&=\mathbf{H}_2(1)\left[\begin{matrix}a_1 \\ a_2\end{matrix} \right] = h_{12}(1)a_1+h_{22}(1)a_2\triangleq \mathcal{F}_2(a_1,a_2),
\end{align}
where $\mathbf{H}_{k}(t)=[h_{1k}(t)\quad h_{2k}(t)]$ for $k=1,2$, and $h_{1k}(t)$, $h_{2k}(t)$ represent the channel between the $2$ transmit antennas and the $k$th receive antenna. The LC at the $2$nd receiver is not discarded; instead, it is used as side information in Stage $3$. In Stage $2$, the transmitter creates a symmetric situation at the $2$nd receiver by transmitting $b_1,b_2$, the symbols intended for the $2$nd receiver:
\begin{equation}
\mathbf{X}(2)=\left[\begin{matrix}b_1 \\ b_2\end{matrix} \right].
\end{equation}
The outputs at the receivers at $t=2$ are given as
\begin{align}
Y_1(2)&=\mathbf{H}_1(2)\left[\begin{matrix}b_1 \\ b_2\end{matrix} \right] = h_{11}(2)b_1+h_{21}(2)b_2\triangleq \mathcal{G}_1(b_1,b_2) \\
Y_2(2)&=\mathbf{H}_2(2)\left[\begin{matrix}b_1 \\ b_2\end{matrix} \right] = h_{12}(2)b_1+h_{22}(2)b_2\triangleq \mathcal{G}_2(b_1,b_2).
\end{align}
Similar to Stage $1$, the undesired LC at receiver $1$ is not discarded. The transmitter is aware of the LCs $\mathcal{F}_1,\mathcal{F}_2,\mathcal{G}_1,\mathcal{G}_2$ via delayed $\mathsf{CSIT}$. At this point, each receiver has one LC that is not intended for it but would be useful if delivered to the other receiver. Having access to $\mathcal{F}_2$ along with $\mathcal{F}_1$ will enable the $1$st receiver to decode its intended symbols. Similarly, the $2$nd receiver can decode its $b$-symbols using $\mathcal{G}_1$ and $\mathcal{G}_2$.
To achieve this, the transmitter multicasts
\begin{equation}
\mathbf{X}(3)=\left[\begin{matrix}\mathcal{F}_2(a_1,a_2)+\mathcal{G}_1(b_1,b_2) \\ 0 \end{matrix} \right]
\end{equation}
at $t=3$ to the receivers. Upon successfully receiving this symbol within noise distortion,
the receivers can recover $\mathcal{F}_2(a_1,a_2)$ and $\mathcal{G}_1(b_1,b_2)$ using the available side information (the side information can be cancelled from the new LC). Thus each receiver has $2$ LCs of $2$ intended symbols. Using this transmission scheme the receivers can decode $2$ symbols each in $3$ time slots. Thus the optimal $\mathsf{DoF}$ $(\frac{2}{3},\frac{2}{3})$ is achieved using this transmit strategy. Hereafter, this scheme is referred to as the ``$\mathsf{MAT}$ scheme''.
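The three-slot bookkeeping of the $\mathsf{MAT}$ scheme can be checked numerically; the sketch below (ours, with real-valued channels chosen for simplicity) confirms that after the multicast slot each receiver can invert two independent LCs of its own two symbols.

\begin{verbatim}
# A numeric sanity check (ours) of the 3-slot MAT scheme: after the
# multicast slot each receiver holds two independent LCs of its own two
# symbols, so both decode 2 symbols in 3 slots, i.e., DoF (2/3, 2/3).
import numpy as np

rng = np.random.default_rng(3)
a = rng.standard_normal(2)                    # symbols for Rx1
b = rng.standard_normal(2)                    # symbols for Rx2
H1 = {1: rng.standard_normal(2), 2: rng.standard_normal(2)}  # rows to Rx1
H2 = {1: rng.standard_normal(2), 2: rng.standard_normal(2)}  # rows to Rx2

F1, F2 = H1[1] @ a, H2[1] @ a                 # slot 1: send (a1, a2)
G1, G2 = H1[2] @ b, H2[2] @ b                 # slot 2: send (b1, b2)
m = F2 + G1                                   # slot 3: multicast F2 + G1
a_hat = np.linalg.solve(np.vstack([H1[1], H2[1]]), [F1, m - G1])  # Rx1 cancels G1
b_hat = np.linalg.solve(np.vstack([H1[2], H2[2]]), [m - F2, G2])  # Rx2 cancels F2
print(np.allclose(a_hat, a), np.allclose(b_hat, b))  # True True
\end{verbatim}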
Below we present transmission schemes that achieve the optimal $\mathsf{DoF}$ in the presence of jamming signals, specifically in scenarios where the jamming state information ($\mathsf{JSIT}$) is available instantaneously, available with a delay, or not available at all, i.e., for the $\mathsf{DP}$, $\mathsf{DD}$ and $\mathsf{DN}$ configurations. The following relationship holds:
\begin{align}\label{DoF_DCSIT_Compare}
\mathsf{DoF}_{\mathsf{DN}} \subseteq \mathsf{DoF}_{\mathsf{DD}} \subseteq \mathsf{DoF}_{\mathsf{DP}}.
\end{align}
\subsubsection{Delayed $\mathsf{CSIT}$, Perfect $\mathsf{JSIT}$ ($\mathsf{DP}$):}
As seen in Fig.~\ref{Fig:FigureDP}, the $\mathsf{DoF}$ pairs $(d_1,d_2)=(\lambda_1,0)$, $(\lambda_1,\lambda_{10})$, $(\frac{2}{3}\lambda_{00}+\lambda_{01},\frac{2}{3}\lambda_{00}+\lambda_{10})$, $(\lambda_{01},\lambda_2)$ and $(0,\lambda_2)$ are achievable in the $\mathsf{DP}$ configuration. The $\mathsf{DoF}$ pairs $(\lambda_1,0)$ and $(0,\lambda_2)$ are readily achievable by transmitting only to receiver $1$ (resp.\ receiver $2$). Here, we present transmission schemes to achieve the $\mathsf{DoF}$ pairs $(\frac{2}{3}\lambda_{00}+\lambda_{01},\frac{2}{3}\lambda_{00}+\lambda_{10})$, $(\lambda_1,\lambda_{10})$ and $(\lambda_{01},\lambda_2)$.
Due to the availability of perfect $\mathsf{JSIT}$, the transmitter's strategy is independent across time, i.e., the transmitter uses a different strategy depending on the jammer's state. Thus the transmission scheme can be divided into $4$ different strategies based on the jammer's state $S(t)$, as detailed below.
\begin{figure}[t]
\centering
\includegraphics[width=11.0cm]{TheoremDP.pdf}
\caption{$\mathsf{DoF}$ region with delayed $\mathsf{CSIT}$ and perfect $\mathsf{JSIT}$.}\label{Fig:FigureDP}
\end{figure}
\begin{itemize}
\item When the jammers' state $S(t)=00$, the transmitter uses the $\mathsf{MAT}$ scheme which was described earlier. Since this state is seen with
probability $\lambda_{00}$ and the $\mathsf{DoF}$ achieved by the $\mathsf{MAT}$ scheme in the presence of delayed $\mathsf{CSIT}$ is $\left(\frac{2}{3},\frac{2}{3}\right)$, the overall $\mathsf{DoF}$ achieved whenever this jammer state is seen is given by $\left(\frac{2}{3}\lambda_{00},\frac{2}{3}\lambda_{00}\right)$.
Instead of using the $\mathsf{MAT}$ scheme, if the transmitter chooses to send symbols exclusively to only one receiver, then the $\mathsf{DoF}$ pair $\left(\lambda_{00},0\right)$ or $\left(0,\lambda_{00}\right)$ is achieved depending on whether it chooses the $1$st or the $2$nd receiver (notice the $\mathsf{DoF}$ loss by using this strategy).
\item When $S(t)=01$, the transmitter sends symbols only to the $1$st receiver (since the $2$nd receiver cannot recover its symbols due to jamming), which can recover the intended symbol within noise distortion. Since this state is seen with probability $\lambda_{01}$, the $\mathsf{DoF}$ achievable in this state is given by $\left(\lambda_{01},0\right)$.
\item The state $S(t)=10$ is the mirror image of the previous state $S(t)=01$, with the roles of the two receivers flipped. Thus the $\mathsf{DoF}$ achieved in this state is
$\left(0,\lambda_{10}\right)$.
\item When the jammers' state is $S(t)=11$, none of the receivers can recover the symbols as their received signals are completely disrupted by the jamming signals. Thus the transmitter does not send symbols whenever this jamming state occurs.
\end{itemize}
Since the jammers' states are disjoint, the overall $\mathsf{DoF}$ achieved in the $\mathsf{DP}$ configuration is given by the pair $(d_1,d_2)=(\frac{2}{3}\lambda_{00}+\lambda_{01},\frac{2}{3}\lambda_{00}+\lambda_{10})$ if the transmitter chooses to use the $\mathsf{MAT}$ scheme. Otherwise, the $\mathsf{DoF}$ pairs
$(d_1,d_2)=(\lambda_{1},\lambda_{10})$ or $(d_1,d_2)=(\lambda_{01},\lambda_2)$ are achievable. This completes the achievability scheme for the $\mathsf{DP}$ configuration. Hence, the $\mathsf{DoF}$ region stated in Theorem~\ref{TheoremDP} is achieved.
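As a sanity check on the region of Fig.~\ref{Fig:FigureDP}, the short Python helper below (a sketch of ours, not part of the proof) evaluates the corner points listed above directly from the four jammer-state probabilities.
\begin{verbatim}
def dp_corner_points(lam00, lam01, lam10, lam11):
    """Corner points of the DP DoF region for given state probabilities."""
    assert abs(lam00 + lam01 + lam10 + lam11 - 1.0) < 1e-12
    lam1 = lam00 + lam01      # probability receiver 1 is un-jammed
    lam2 = lam00 + lam10      # probability receiver 2 is un-jammed
    return [(lam1, 0.0),
            (lam1, lam10),
            (2.0 / 3 * lam00 + lam01, 2.0 / 3 * lam00 + lam10),
            (lam01, lam2),
            (0.0, lam2)]

print(dp_corner_points(0.4, 0.2, 0.2, 0.2))
\end{verbatim}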
As mentioned earlier, if perfect $\mathsf{JSIT}$ is available, the transmitter \textbf{does not} have to transmit/code across different jammers' states in order to achieve $\mathsf{DoF}$ gains. In other words, the jammers' states are separable due to the availability of perfect $\mathsf{JSIT}$. As will be seen next, this separability no longer holds in the $\mathsf{DD}$ and $\mathsf{DN}$ configurations, which hence necessitate coding across the various jamming states. These transmission schemes thereby introduce decoding delays at the intended receivers.
\subsubsection{Delayed $\mathsf{CSIT}$, Delayed $\mathsf{JSIT}$ ($\mathsf{DD}$):}
In this subsection, we propose a transmission scheme that achieves the following $(d_{1}, d_{2})$ pair (which corresponds to the intersection of (\ref{DD1}) and (\ref{DD2}); see Fig.~\ref{Fig:FigureDD}):
\begin{align}\label{DoFDD_opt_point}
(d_{1}, d_{2})&=\left(\frac{\lambda_{1}}{\frac{\lambda_{1}+\lambda_{2}}{\lambda_{1}}-\frac{\lambda_{2}}{\lambda_{1}+\lambda_{2}}}, \frac{\lambda_{2}}{\frac{\lambda_{1}+\lambda_{2}}{\lambda_{2}}-\frac{\lambda_{1}}{\lambda_{1}+\lambda_{2}}}\right).
\end{align}
In this scheme, decoding starts only after all transmissions have finished, at which point each receiver holds all the linear combinations required to decode its symbols.
The decoding process using these linear combinations is made explicit in the transmission schemes below.
\begin{figure}[t]
\centering
\includegraphics[width=12.0cm]{TheoremDD.pdf}
\vspace{-10pt}
\caption{$\mathsf{DoF}$ region with delayed $\mathsf{CSIT}$ and delayed $\mathsf{JSIT}$ i.e. $\mathsf{DD}$ configuration.}\label{Fig:FigureDD}
\end{figure}
This algorithm operates in three stages. In Stage $1$, the transmitter sends symbols intended only for receiver $1$ and keeps re-transmitting them until they are received within noise distortion (i.e., uncorrupted by the jamming signal) at at least one receiver. In Stage $2$, the transmitter sends symbols intended only for receiver $2$ in the same manner. Stage $3$ consists of transmitting the undelivered symbols to the intended receivers. The specific LCs to be transmitted in Stage $3$ are determined by the feedback (i.e., d-$\mathsf{CSIT}$ and d-$\mathsf{JSIT}$) received during Stages $1$ and $2$. The eventual goal of the scheme is to deliver $n_{1}$ symbols
(denoted by $\{a_{j}\}_{j=1}^{n_{1}}$; or $a$-symbols) to receiver $1$ and $n_{2}$ symbols (denoted by $\{b_{j}\}_{j=1}^{n_{2}}$; or $b$-symbols) to receiver $2$.
Below we explain the 3-stages involved in the proposed transmission scheme.
\textbf{\textit{Stage 1}}--In this stage, the transmitter intends to deliver $n_1$ $a$-symbols, in a manner such that each $a$-symbol is received at \textit{at least} one of the receivers (either the $1$st or the $2$nd receiver). At every time instant the transmitter sends two symbols on the two transmit antennas. A pair of symbols (say $a_1$ and $a_2$) is re-transmitted until it is received at at least one receiver (this knowledge is available via d-$\mathsf{JSIT}$). Any one of the following four scenarios can arise:
\begin{enumerate}
\item \textit{Event $00$}: none of the receivers are jammed (which happens with probability $\lambda_{00}$). As an example, suppose that at time $t$ the transmitter sends $(a_{1}, a_{2})$: then receiver $1$ gets $\mathcal{F}_{1}(a_{1},a_{2})$ and receiver $2$ gets $\mathcal{F}_{2}(a_{1},a_{2})$. The fact that the event $00$ occurred at time $t$ is known at time $t+1$ via d-$\mathsf{JSIT}$; and the LCs $(\mathcal{F}_{1}(a_{1},a_{2}), \mathcal{F}_{2}(a_{1},a_{2}))$ can be obtained at the transmitter at time $t+1$ via d-$\mathsf{CSIT}$. The goal of Stage $3$ will be to deliver $\mathcal{F}_{2}(a_{1},a_{2})$ to receiver $1$ by exploiting the fact that it has already been received at receiver $2$. Thus, at time $t+1$, the transmitter sends two new symbols $(a_{3}, a_{4})$.
\item \textit{Event $01$}: receiver $1$ is not jammed, while receiver $2$ is jammed (which happens with probability $\lambda_{01}$). As an example, suppose that at time $t$ the transmitter sends $(a_{1}, a_{2})$: then receiver $1$ gets $\mathcal{F}_{1}(a_{1},a_{2})$ and receiver $2$'s signal is drowned in the jamming signal. The fact that the event $01$ occurred at time $t$ is known at time $t+1$ via d-$\mathsf{JSIT}$; and the LC $\mathcal{F}_{1}(a_{1},a_{2})$ can be obtained at the transmitter at time $t+1$ via d-$\mathsf{CSIT}$. Thus, at time $t+1$, the transmitter sends a fresh symbol $a_{3}$ on one antenna; and a LC of $(a_{1},a_{2})$, say $\tilde{\mathcal{F}}_{1}(a_{1},a_{2})$, such that $\mathcal{F}_{1}(a_{1},a_{2})$ and $\tilde{\mathcal{F}}_{1}(a_{1},a_{2})$ constitute two linearly independent combinations of $(a_{1}, a_{2})$. In summary, at time $t+1$, the transmitter sends $(a_{3}, \tilde{\mathcal{F}}_{1}(a_{1},a_{2}))$.
\item \textit{Event $10$}: receiver $2$ is not jammed, while receiver $1$ is jammed (which happens with probability $\lambda_{10}$). As an example, suppose that at time $t$ the transmitter sends $(a_{1}, a_{2})$: then receiver $1$'s signal is drowned in the jamming signal, whereas receiver $2$ gets $\mathcal{F}_{2}(a_{1},a_{2})$. The fact that the event $10$ occurred at time $t$ is known at time $t+1$ via d-$\mathsf{JSIT}$; and the LC $\mathcal{F}_{2}(a_{1},a_{2})$ can be obtained at the transmitter at time $t+1$ via d-$\mathsf{CSIT}$. The goal of Stage $3$ will be to deliver $\mathcal{F}_{2}(a_{1},a_{2})$ to receiver $1$ by exploiting the fact that it has already been received at receiver $2$. Thus, at time $t+1$, the transmitter sends a fresh symbol $a_{3}$ on one antenna; and a LC of $(a_{1},a_{2})$, say $\tilde{\mathcal{F}}_{2}(a_{1},a_{2})$, such that $\mathcal{F}_{2}(a_{1},a_{2})$ and $\tilde{\mathcal{F}}_{2}(a_{1},a_{2})$ constitute two linearly independent combinations of $(a_{1}, a_{2})$. In summary, at time $t+1$, the transmitter sends $(a_{3}, \tilde{\mathcal{F}}_{2}(a_{1},a_{2}))$.
\item \textit{Event $11$}: both receivers are jammed (which happens with probability $\lambda_{11}$). Using d-$\mathsf{JSIT}$, the transmitter knows at time $t+1$ that the event $11$ occurred; hence, at time $t+1$, it re-transmits $(a_{1}, a_{2})$ on the two transmit antennas.
\end{enumerate}
The above events are \emph{disjoint}, so in one time slot, the average number of useful LCs
delivered to \textbf{at least one receiver} is given by
\begin{equation}\label{expected_symbols_DD}
E[\textnormal{$\#$ of LC's delivered}] =2\lambda_{00}+\lambda_{01}+\lambda_{10}\triangleq \phi. \nonumber
\end{equation}
Hence, the expected time to deliver one LC is
\begin{equation}
\frac{1}{\phi}=\frac{1}{2\lambda_{00}+\lambda_{01}+\lambda_{10}}= \frac{1}{\lambda_1+\lambda_2}.
\end{equation}
The time spent in this stage to deliver $n_1$ LCs is
\begin{align}\label{N_1}
N_1=\frac{n_1}{\lambda_1+\lambda_2}.
\end{align}
Since receiver $1$ is not jammed in events $00$ and $01$, i.e., for a $\lambda_1$ fraction of the time, it receives only $\lambda_1 N_1$ LCs. The number of undelivered LCs is $n_1-\lambda_1N_1=\frac{\lambda_2n_1}{\lambda_1+\lambda_2}$. These LCs are available at receiver $2$ (corresponding to events $00$ and $10$) and are known to the transmitter via d-$\mathsf{CSIT}$. This side information created at receiver $2$ is not discarded; instead, it is used in Stage 3 of the transmission scheme.
{\textbf{\textit{Stage 2}}}--
In this stage, the transmitter intends to deliver $n_2$ $b$-symbols, in a manner such that each symbol is received at \textit{at least} one of the receivers. Stage 1 is repeated here with the roles of the receivers 1 and 2 interchanged. On similar lines to Stage 1, the time spent in this stage is
\begin{align}\label{N_2}
N_2=\frac{n_2}{\lambda_1+\lambda_2}.
\end{align}
The number of LCs received at receiver $2$ is $\lambda_2N_2$ and the number of LCs not delivered to receiver $2$ but are available as side information at receiver $1$ is $n_2-\lambda_2N_2=\frac{\lambda_1n_2}{\lambda_1+\lambda_2}$.
\begin{remark} {\em At the end of these $2$ stages, the following typical situation arises: $\mathcal{F}(a_1,a_2)$ (resp. $\mathcal{G}(b_1,b_2)$) is a LC intended for receiver $1$ (resp. $2$) but is available as side information at receiver $2$ (resp. $1$)\footnote{Such situations correspond to events $00$ and $01$ in Stage $1$; and events $00$, $10$ in Stage $2$.}. Notice that these LCs must be transmitted to the complementary receivers so that the desired symbols can be decoded. In Stage $3$, the transmitter sends a random LC of these symbols, say $\mathcal{L}=l_1\mathcal{F}(a_1,a_2)+l_2\mathcal{G}(b_1,b_2)$, where the coefficients $l_1, l_2$ forming the new LC are known to the transmitter and receivers \emph{a priori}. Now, assuming that only receiver $2$ (resp. $1$) is jammed, $\mathcal{L}$ is received at receiver $1$ (resp. $2$) within noise distortion. Using this LC, it can recover $\mathcal{F}(a_1,a_2)$ (resp. $\mathcal{G}(b_1,b_2)$) from $\mathcal{L}$ since it already has $\mathcal{G}(b_1,b_2)$ (resp. $\mathcal{F}(a_1,a_2)$) as side information. When no receiver is jammed, both receivers can recover $\mathcal{F}(a_1,a_2)$ and $\mathcal{G}(b_1,b_2)$ simultaneously.}
\end{remark}
{\textbf{\textit{Stage 3}}}--In this stage, the LCs undelivered to each receiver are transmitted using the technique mentioned above.
Let us assume that $\mathcal{F}_1(a_1,a_2)$ and $\mathcal{G}_1(b_1,b_2)$ are LCs available as side information at receivers 2 and 1 respectively. The transmitter sends $\mathcal{L}(\mathcal{F}_1,\mathcal{G}_1)$, a LC of these symbols on one transmit antenna, with the eventual goal of multicasting this LC (i.e., send it to \textit{both} receivers). The following events, as specified earlier in Stages $1$ and $2$, are also possible while in this stage.
\textit{Event $00$}: Suppose at time $t$ the transmitter sends $\mathcal{L}(\mathcal{F}_1,\mathcal{G}_1)$; then both receivers get this LC within noise distortion. Being able to recover $\mathcal{L}(\mathcal{F}_1,\mathcal{G}_1)$ within a scaling factor, receivers 1 and 2 decode their intended LCs $\mathcal{F}_1$ and $\mathcal{G}_1$ respectively, using the side information $\mathcal{G}_1$ and $\mathcal{F}_1$ available to them. Since the intended LCs are delivered to the intended receivers, the transmitter, at time $t+1$, sends a new LC of two new symbols $\tilde{\mathcal{L}}(\tilde{\mathcal{F}}_1,\tilde{\mathcal{G}}_1)$.
\textit{Event $01$}: Since receiver $2$ is jammed, its signal is drowned in the jamming signal while receiver $1$ gets $\mathcal{L}(\mathcal{F}_1,\mathcal{G}_1)$ and is capable of recovering $\mathcal{F}_1$ using $\mathcal{G}_1$ available as side information. The fact that event $01$ occurred is known to the transmitter at time $t+1$ via d-$\mathsf{JSIT}$. Thus, at time $t+1$, the transmitter sends a new LC $\tilde{\mathcal{L}}(\tilde{\mathcal{F}}_1,\mathcal{G}_1)$ since $\mathcal{G}_1$ has not yet been delivered to receiver $2$.
\textit{Event $10$}: This event is similar to event $01$, with the roles of the receivers 1 and 2 interchanged. Hence, receiver $2$ is capable of recovering $\mathcal{G}_1$ from
$\mathcal{L}(\mathcal{F}_1,\mathcal{G}_1)$ while receiver $1$'s signal is drowned in the jamming signal. Thus at time $t+1$, the transmitter sends a new LC $\tilde{\mathcal{L}}(\mathcal{F}_1,\tilde{\mathcal{G}}_1)$ since $\mathcal{F}_1$ has not yet been delivered to receiver $1$.
\textit{Event $11$}: Using d-$\mathsf{JSIT}$, the transmitter knows at time $t+1$ that the event $11$ occurred; hence, at time $t+1$, it re-transmits $\mathcal{L}(\mathcal{F}_1,\mathcal{G}_1)$ on one of its transmit antennas.
\begin{figure}[t]
\centering
\includegraphics[width=13.0cm]{DDScheme.pdf}
\caption{Coding with delayed $\mathsf{CSIT}$ and delayed $\mathsf{JSIT}$.}\label{CodingDD}
\end{figure}
Since all the events are disjoint, in one time slot the average number of LCs delivered to receiver $1$ is given by
\begin{equation}
E[\textnormal{$\#$ of LC's delivered to user 1}] =\lambda_{00}+\lambda_{01}= \lambda_1. \nonumber
\end{equation}
Hence, the expected time to deliver one LC to receiver $1$ in this stage is $\frac{1}{\lambda_1}$. Given that $\frac{\lambda_2 n_1}{\lambda_1+\lambda_2}$ LCs are to be delivered to receiver $1$ in this stage, the time taken to achieve this is
$\frac{\lambda_2 n_1}{\lambda_1(\lambda_1+\lambda_2)}$. Interchanging the roles of the users, the time taken to deliver $\frac{\lambda_1 n_2}{\lambda_1+\lambda_2}$ LCs to receiver $2$ is $\frac{\lambda_1 n_2}{\lambda_2(\lambda_1+\lambda_2)}$. Thus the total time required to satisfy the requirements of both the receivers in Stage 3 is given by
\begin{equation}\label{N_3}
N_3=\mathrm{max}\left(\frac{\lambda_2n_1}{\lambda_1(\lambda_1+\lambda_2)},\frac{\lambda_1n_2}{\lambda_2(\lambda_1+\lambda_2)}\right).
\end{equation}
The optimal $\mathsf{DoF}$ achieved in the $\mathsf{DD}$ configuration is readily evaluated as
\begin{eqnarray}
d_1=\frac{n_1}{N_1+N_2+N_3}, \ d_2=\frac{n_2}{N_1+N_2+N_3}.
\end{eqnarray}
Substituting for $\{N_i\}_{i=1,2,3}$ from \eqref{N_1}--\eqref{N_3}, we have,
\begin{align}
d_k&=\frac{n_k}{\frac{n_1}{\lambda_1+\lambda_2}+\frac{n_2}{\lambda_1+\lambda_2}+\mathrm{max}\left(\frac{\lambda_2 n_1}{\lambda_1(\lambda_1+\lambda_2)},\frac{\lambda_1n_2}{\lambda_2(\lambda_1+\lambda_2)}\right)},\ k=1,2.
\end{align}
Using $\eta=\frac{n_1}{n_1+n_2}$, we have
\begin{align}
d_1&=\frac{\eta}{\frac{1}{\lambda_1+\lambda_2}+\mathrm{max}\left(\frac{\lambda_2\eta}{\lambda_1(\lambda_1+\lambda_2)},\frac{\lambda_1(1-\eta)}{\lambda_2(\lambda_1+\lambda_2)}\right)} \nonumber \\
d_2&=\frac{1-\eta}{\frac{1}{\lambda_1+\lambda_2}+\mathrm{max}\left(\frac{\lambda_2\eta}{\lambda_1(\lambda_1+\lambda_2)},\frac{\lambda_1(1-\eta)}{\lambda_2(\lambda_1+\lambda_2)}\right)}.
\end{align}
Eliminating $\eta$ from the above two equations yields the $(d_{1}, d_{2})$ pair given in \eqref{DoFDD_opt_point}.
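The closed form in \eqref{DoFDD_opt_point} can be checked numerically. The sketch below is our own illustration; the corner expression $d_k=\lambda_k^2(\lambda_1+\lambda_2)/(\lambda_1^2+\lambda_1\lambda_2+\lambda_2^2)$ used in it is an algebraic rewriting of \eqref{DoFDD_opt_point}. It sweeps $\eta$ and confirms that the sum-$\mathsf{DoF}$-maximizing split reproduces that corner point.
\begin{verbatim}
import numpy as np

def dd_dof(lam1, lam2, eta):
    """Achievable (d1, d2) in the DD configuration for eta = n1/(n1+n2)."""
    s = lam1 + lam2
    n3 = max(lam2 * eta / (lam1 * s), lam1 * (1 - eta) / (lam2 * s))
    t = 1.0 / s + n3          # total time, normalized per source symbol
    return eta / t, (1 - eta) / t

lam1, lam2 = 0.7, 0.5
best = max((dd_dof(lam1, lam2, e) for e in np.linspace(0.001, 0.999, 9999)),
           key=lambda p: p[0] + p[1])
corner = tuple(l ** 2 * (lam1 + lam2) / (lam1 ** 2 + lam1 * lam2 + lam2 ** 2)
               for l in (lam1, lam2))
print(np.round(best, 4), np.round(corner, 4))   # the two should agree
\end{verbatim}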
\begin{remark}{\em It is seen that only the JSI at time $t$ is necessary for the transmitter to decide on the LCs to be transmitted at time $t+1$ in Stage 3. Also, it is worth noting that the outermost points on the $\mathsf{DoF}$ region described by Theorem~\ref{TheoremDD} (for a given $\lambda_1$, $\lambda_2$) are obtained for different values of $\eta \in [0,1]$. Another interesting point is that if $\lambda_1=\lambda_2=1$ (which is possible only if $\lambda_{00}=1$), i.e., none of the receivers is jammed, the $\mathsf{DoF}$ achieved is $\frac{4}{3}$, which is the optimal $\mathsf{DoF}$ of the delayed-$\mathsf{CSIT}$ scenario for the 2-user MISO broadcast channel, as shown by Maddah-Ali and Tse in \cite{MAT2012}. }
\end{remark}
\subsubsection{Delayed $\mathsf{CSIT}$, No $\mathsf{JSIT}$ ($\mathsf{DN}$):}
One of the novel contributions of this paper is developing a new coding/transmission scheme for the $\mathsf{DN}$ configuration.
Before we explain the proposed scheme, we first present a modified $\mathsf{MAT}$ scheme (original $\mathsf{MAT}$ scheme proposed in \cite{MAT2012}) that achieves a $\mathsf{DoF}$ of $\frac{4}{3}$ in a 2-user MISO BC (in the absence of jamming).
\paragraph{Modified $\mathsf{MAT}$ Scheme:}
Consider a 2-user MISO BC where the transmitter intends to deliver $a$-symbols ($a_1,a_2$) to the $1$st receiver and $b$-symbols ($b_1,b_2$) to the $2$nd receiver. The $\mathsf{MAT}$ scheme proposed in \cite{MAT2012} was illustrated earlier in Section~\ref{DCSIT}. Here we first describe the modified $\mathsf{MAT}$ scheme, which achieves the same result.
At $t=1$, the transmitter sends
\begin{equation}
\mathbf{X}(1)=\left[\begin{matrix}a_1+b_1 \\ a_2+b_2\end{matrix} \right],
\end{equation}
on its two transmit antennas.
The outputs at the $2$ receivers (ignoring noise) are given as
\begin{align}
Y_1(1)&=\mathbf{H}_1(1)\left[\begin{matrix}a_1+b_1 \\ a_2+b_2\end{matrix} \right] = h_{11}(1)(a_1+b_1)+h_{21}(1)(a_2+b_2) \nonumber \\
&=\underbrace{(h_{11}(1)a_1+h_{21}(1)a_2)}_{\mathcal{F}_1(a_1,a_2)}+\underbrace{(h_{11}(1)b_1+h_{21}(1)b_2)}_{\mathcal{G}_1(b_1,b_2)} \nonumber \\
&\triangleq \mathcal{F}_1(a_1,a_2)+\mathcal{G}_1(b_1,b_2) \\
Y_2(1)&=\mathbf{H}_2(1)\left[\begin{matrix}a_1+b_1 \\ a_2+b_2\end{matrix} \right] = h_{12}(1)(a_1+b_1)+h_{22}(1)(a_2+b_2) \nonumber \\
&=\underbrace{(h_{12}(1)a_1+h_{22}(1)a_2)}_{\mathcal{F}_2(a_1,a_2)}+\underbrace{(h_{12}(1)b_1+h_{22}(1)b_2)}_{\mathcal{G}_2(b_1,b_2)} \nonumber \\
&\triangleq \mathcal{F}_2(a_1,a_2)+\mathcal{G}_2(b_1,b_2),
\end{align}
where $\mathcal{F}_1,\mathcal{F}_2$ represent LCs of the symbols $a_1,a_2$ and similarly $\mathcal{G}_1,\mathcal{G}_2$ are LCs of the symbols $b_1,b_2$ (the received signals can be grouped in this manner since the receivers have $\mathsf{CSIR}$). These LCs are known to the transmitter at time $t=2$ via d-$\mathsf{CSIT}$. The $1$st receiver requires $\mathcal{F}_2$ (apart from $\mathcal{F}_1$) to decode its symbols, and the $2$nd receiver needs $\mathcal{G}_1$ (apart from $\mathcal{G}_2$) for its symbols. Thus at time $t=2$, the transmitter multicasts $\mathcal{G}_1$ to both receivers on one of its transmit antennas as
\begin{equation}
\mathbf{X}(2)=\left[\begin{matrix}\mathcal{G}_1(b_1,b_2) \\ 0\end{matrix} \right].
\end{equation}
This LC is received within noise distortion at both receivers. Using the recovered $\mathcal{G}_1$ (within noise distortion), the $1$st receiver can recover $\mathcal{F}_1$ by subtracting $\mathcal{G}_1$ from the signal $Y_1(1)$ it received at time $t=1$. At this point, receiver $1$ has one LC of its intended symbols, $\mathcal{F}_1$, and still needs $\mathcal{F}_2$ to recover its symbols. Thus the transmitter multicasts $\mathcal{F}_2$ to both receivers at time $t=3$ as
\begin{equation}
\mathbf{X}(3)=\left[\begin{matrix}\mathcal{F}_2(a_1,a_2)\\ 0\end{matrix} \right].
\end{equation}
Using the same technique as receiver $1$, the $2$nd receiver can recover $\mathcal{G}_2$ by subtracting $\mathcal{F}_2$ from the signal $Y_2(1)$ it received at time $t=1$. Thus, at the end of $3$ time instants, receivers $1$ and $2$ have $\mathcal{F}_1,\mathcal{F}_2$ and $\mathcal{G}_1,\mathcal{G}_2$ respectively, which allow them to decode their intended symbols. Thus, using this transmission scheme, $4$ symbols are decoded at the receivers in $3$ time slots, leading to a sum $\mathsf{DoF}$ of $\frac{4}{3}$, which is also the $\mathsf{DoF}$ achieved by the $\mathsf{MAT}$ scheme in the 2-user MISO BC with delayed $\mathsf{CSIT}$.
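Since $\mathcal{F}_k$ and $\mathcal{G}_k$ share the same channel row $[h_{1k}(1)\;\; h_{2k}(1)]$, decodability at both receivers reduces to the slot-$1$ channel matrix $\{h_{jk}(1)\}$ being full rank, an event of probability one for continuous fading distributions. The short sketch below (our own illustration) makes this explicit.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def modified_mat_decodable():
    # Slot-1 channel matrix: row k = [h_{1k}(1), h_{2k}(1)].
    H1 = rng.standard_normal((2, 2))
    # Receiver 1 ends with (F_1, F_2) = H1 @ (a1, a2); receiver 2 with
    # (G_1, G_2) = H1 @ (b1, b2): both decodable iff rank(H1) == 2.
    return np.linalg.matrix_rank(H1) == 2

print(all(modified_mat_decodable() for _ in range(1000)))
\end{verbatim}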
\begin{figure}[t]
\centering
\includegraphics[width=11.0cm]{TheoremDN.pdf}
\caption{Achievable $\mathsf{DoF}$ region with delayed $\mathsf{CSIT}$ and no $\mathsf{JSIT}$.}\label{Fig:FigureDN}
\end{figure}
\paragraph{Proposed Scheme for $\mathsf{DN}$:} It is clearly seen that the modified $\mathsf{MAT}$ scheme presented above cannot be directly extended to the case where the jammer disrupts the receivers. Below, we present a novel 3-stage transmission strategy that achieves the $\mathsf{DoF}$ described by Theorem~\ref{TheoremDN}. The transmitter uses the statistical knowledge of the jammer's strategy to deliver symbols to both receivers in this configuration (as feedback information about the undelivered symbols is not available at the transmitter). Similar to the $\mathsf{PN}$ configuration, the transmitter sends random LCs of the intended symbols to both users to overcome the unavailability of $\mathsf{JSIT}$.
Let $(1+\lambda_1)n$ and $(1+\lambda_2)n$ denote the total number of symbols the transmitter intends to deliver to receivers $1$ and $2$ respectively (the reason for choosing $(1+\lambda_k)n$, $k=1,2$, as the length of the symbol sequence will become clear as we proceed through the algorithm), where $\lambda_1,\lambda_2$ denote the probabilities with which the receivers are not disrupted by the jammer. In this scheme, decoding starts only after all transmissions have finished, at which point the receivers hold all the linear combinations required to decode their symbols.
Receivers $1$ and $2$ thus need $(1+\lambda_1)n$ and $(1+\lambda_2)n$ LCs, respectively, to completely decode their symbols.
\begin{itemize}
\item \textbf{Stage 1}: The transmitter forms random LCs of the $(1+\lambda_1)n$ $a$-symbols and $(1+\lambda_2)n$ $b$-symbols intended for the two receivers. Let us denote these LCs by $(a_1,a_2,\ldots,a_{\left(1+\lambda_1\right)n})$ and $(b_1,b_2,\ldots,b_{\left(1+\lambda_2\right)n})$ respectively (these are the actual transmitted symbols and play the same role as the $a$-symbols and $b$-symbols in the modified $\mathsf{MAT}$ scheme). In Stage 1, the transmitter combines these $a$-symbols and $b$-symbols and sends them over $n$ time instants (see the modified $\mathsf{MAT}$ scheme for how combinations of $a$-symbols and $b$-symbols are sent). Since receivers $1,2$ are not jammed with probabilities $\lambda_1,\lambda_2$ respectively, they receive
$\lambda_1n$ and $\lambda_2n$ combinations of $a$-symbols and $b$-symbols over $\tau_1=n$ time instants.
\begin{figure}[t]
\centering
\includegraphics[width=13.0cm]{DNScheme.pdf}
\caption{Coding with delayed $\mathsf{CSIT}$ and no $\mathsf{JSIT}$.}\label{CodingDN}
\end{figure}
As mentioned, the transmitter does not know which LCs were not delivered to the receivers. However, using d-$\mathsf{CSIT}$, it can reconstruct the LCs that would have been received at each receiver irrespective of whether they were jammed or not. For example, let us denote these LCs by $\mathcal{F}_1,\mathcal{F}_2$, $\mathcal{G}_1,\mathcal{G}_2$, corresponding to combinations of $a_1,a_2$ and $b_1,b_2$ (see the modified $\mathsf{MAT}$ scheme). Irrespective of whether $\mathcal{F}_1+\mathcal{G}_1$ is received at receiver $1$ or not, the LC $\mathcal{F}_2$ is useful to it, as it acts as an additional LC that helps decode its intended symbols. Similar reasoning holds for receiver $2$ with respect to the symbol $\mathcal{G}_1$. Moreover, because these LCs may have been received at the un-intended receiver, they act as side information that is exploited in Stages $2$ and $3$ of the algorithm.
\item \textbf{Stage 2}: In this stage, the transmitter multicasts the $\mathcal{F}$-type LCs that would have been received at receiver $2$ (irrespective of whether it was jammed or not, the transmitter can reconstruct them using d-$\mathsf{CSIT}$). Each such multicast LC is received at the $1$st receiver with probability $\lambda_1$ and at the $2$nd receiver with probability $\lambda_2$. It is useful to both receivers: it is a useful LC of intended symbols for the $1$st receiver, while the $2$nd receiver can use it to cancel side information and recover its intended LC, provided that LC was received in the past (note that it is not useful to the $2$nd receiver if a LC containing this $\mathcal{F}$-symbol was never received in the past). Thus the total time taken to deliver one such $\mathcal{F}$-symbol to both receivers is given by
\begin{equation}
\mathrm{max}\left(\frac{1}{\lambda_1},\frac{1}{\lambda_2}\right),
\end{equation}
as they are un-jammed with probabilities $\lambda_1,\lambda_2$ respectively. Since there are $n$ such $\mathcal{F}$-type LCs (created over $n$ time instants in stage $1$),
the total time necessary to deliver them is given by
\begin{equation}
\tau_2=\mathrm{max}\left(\frac{1}{\lambda_1},\frac{1}{\lambda_2}\right)n.
\end{equation}
\item \textbf{Stage 3}: This stage is the complement of Stage $2$: the transmitter sends the $\mathcal{G}$-type LCs that would have been received at the $1$st receiver, but which are useful to both. Thus the total time spent in Stage $3$ is given by
\begin{equation}
\tau_3=\mathrm{max}\left(\frac{1}{\lambda_1},\frac{1}{\lambda_2}\right)n.
\end{equation}
\end{itemize}
\subparagraph{$\mathsf{DoF}$ analysis:} At the end of the proposed $3$-stage algorithm, notice that receivers $1$ and $2$ have $(1+\lambda_1)n$ and $(1+\lambda_2)n$ intended LCs, respectively. Since $(1+\lambda_k)n$ random LCs are sufficient to decode $(1+\lambda_k)n$ symbols, at the end of Stage $3$ both receivers have successfully decoded all intended symbols. Thus the $\mathsf{DoF}$ is given by
\begin{eqnarray}
d_1&=&\frac{(1+\lambda_1)n}{\tau_1+\tau_2+\tau_3} \\
&=&\frac{(1+\lambda_1)n}{n+\mathrm{max}\left(\frac{n}{\lambda_1},\frac{n}{\lambda_2}\right)+\mathrm{max}\left(\frac{n}{\lambda_1},\frac{n}{\lambda_2}\right)} \\
&=&\frac{(1+\lambda_1)}{1+2\mathrm{max}\left(\frac{1}{\lambda_1},\frac{1}{\lambda_2}\right)}
\end{eqnarray}
On similar lines, we have
\begin{eqnarray}
d_2&=&\frac{(1+\lambda_2)}{1+2\mathrm{max}\left(\frac{1}{\lambda_1},\frac{1}{\lambda_2}\right)},
\end{eqnarray}
which is the $\mathsf{DoF}$ region given by Theorem~\ref{TheoremDN}.
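A one-line evaluation of this pair is given below (our own sketch); it is convenient when comparing against the TDMA baseline discussed later.
\begin{verbatim}
def dn_dof(lam1, lam2):
    """Achievable (d1, d2) for the DN configuration (Theorem DN)."""
    denom = 1.0 + 2.0 * max(1.0 / lam1, 1.0 / lam2)
    return (1 + lam1) / denom, (1 + lam2) / denom

print(dn_dof(0.8, 0.6))
\end{verbatim}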
\begin{figure}[t]
\centering
\includegraphics[width=12.0cm]{3DPlot.pdf}
\caption{Achievable sum $\mathsf{DoF}$ in the $\mathsf{DN}$ configuration}
\label{Fig:3DPlotDN}
\end{figure}
Theorems~\ref{TheoremNN} and \ref{TheoremDN} suggest that the $\mathsf{DoF}$ in the $\mathsf{DN}$ configuration can be increased only when the region described in Theorem~\ref{TheoremNN} is a subset of the region described in Theorem~\ref{TheoremDN}. This is possible only when
\begin{align}
\frac{(1+\lambda_2)\lambda_1}{2\mathrm{max}\left(1,\lambda_1/\lambda_2-1\right)} \geq \lambda_2 \nonumber \\
\frac{(1+\lambda_1)\lambda_2}{2\mathrm{max}\left(1,\lambda_2/\lambda_1-1\right)} \geq \lambda_1.
\end{align}
In other words, the proposed scheme for the $\mathsf{DN}$ configuration achieves $\mathsf{DoF}$ gains over the naive TDMA-based scheme if and only if $\lambda_1,\lambda_2$ satisfy (obtained by solving the above two inequalities)
\begin{align}
\frac{|\lambda_1-\lambda_2|}{\lambda_1\lambda_2}\leq 1.
\end{align}
Fig.~\ref{Fig:3DPlotDN} shows the sum $\mathsf{DoF}$ achieved using the naive TDMA scheme and the proposed scheme for the $\mathsf{DN}$ configuration.
Since the transmitter has statistical knowledge of the jammer's strategy, it can choose between the naive scheme and the novel scheme based on the values of $\lambda_1,\lambda_2$.
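The condition above is cheap to evaluate. The small sketch below (ours; the sample $(\lambda_1,\lambda_2)$ pairs are arbitrary) tests it for a few operating points; when it fails, the transmitter falls back to the naive scheme as just described.
\begin{verbatim}
def dn_scheme_useful(lam1, lam2):
    """Text's condition for the proposed DN scheme to beat naive TDMA:
    |lam1 - lam2| / (lam1 * lam2) <= 1."""
    return abs(lam1 - lam2) <= lam1 * lam2

for pair in [(0.9, 0.8), (0.9, 0.3), (0.5, 0.5)]:
    print(pair, dn_scheme_useful(*pair))
\end{verbatim}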
\subsection{No $\mathsf{CSIT}$}
The following relationship holds true,
\begin{align}\label{DoF_NCSIT_Compare}
\mathsf{DoF}_{\mathsf{NN}} \subseteq \mathsf{DoF}_{\mathsf{ND}} \subseteq \mathsf{DoF}_{\mathsf{NP}}.
\end{align}
i.e., the $\mathsf{DoF}$ is never reduced when JSI is available at the transmitter.
\subsubsection{No $\mathsf{CSIT}$, Perfect $\mathsf{JSIT}$ ($\mathsf{NP}$):}
As seen in Fig.~\ref{Fig:FigureNP}, the following $\mathsf{DoF}$ pairs $(d_1,d_2)=(\lambda_1,0)$,
$(\lambda_1,\lambda_{10})$, $(\lambda_{01},\lambda_2)$, and $(0,\lambda_2)$ are achievable in the $\mathsf{NP}$ configuration.
The $\mathsf{DoF}$ pairs $(\lambda_1,0)$ and $(0,\lambda_2)$ are readily achievable using the naive scheme mentioned before, where the transmitter sends symbols exclusively to the receiver that is not jammed (when the transmitter sends $n$ symbols to the $k$th receiver using the knowledge of perfect $\mathsf{JSIT}$, the receiver recovers $\lambda_kn$ symbols, since it is not jammed with probability $\lambda_k$). The remaining $\mathsf{DoF}$ pairs, $(\lambda_1,\lambda_{10})$ and $(\lambda_{01},\lambda_2)$, are achieved via the transmission schemes suggested in the $\mathsf{DP}$ configuration for the corresponding $\mathsf{DoF}$ pairs.
\begin{figure}[t]
\centering
\includegraphics[width=12.0cm]{TheoremNP.pdf}
\vspace{-10pt}
\caption{$\mathsf{DoF}$ region with no $\mathsf{CSIT}$ and perfect $\mathsf{JSIT}$.}\label{Fig:FigureNP}
\end{figure}
\subsubsection{No $\mathsf{CSIT}$, Delayed $\mathsf{JSIT}$ ($\mathsf{ND}$):}
Here, we present a 3-stage scheme that achieves the $\mathsf{DoF}$ region given by Theorem~\ref{TheoremND}. This scheme is similar to the algorithm proposed for the $\mathsf{DD}$ configuration. In Stage $1$, the transmitter sends symbols intended for receiver $1$ alone and keeps re-transmitting them until they are received jamming-free at at least one receiver. On similar lines, the transmitter sends symbols intended only for receiver 2 in Stage 2. Stage 3 consists of transmitting the undelivered symbols to the intended receiver. However, since no CSI is available at the transmitter, the algorithm proposed for the $\mathsf{DD}$ configuration cannot be applied directly. The modified 3-stage algorithm is presented below.
\begin{figure}[t]
\centering
\includegraphics[width=12.0cm]{TheoremND.pdf}
\vspace{-10pt}
\caption{An achievable $\mathsf{DoF}$ region with no $\mathsf{CSIT}$ and delayed $\mathsf{JSIT}$.}\label{Fig:FigureND}
\end{figure}
\textbf{\textit{Stage 1}}--In this stage, the transmitter intends to deliver $n_1$ $a$-symbols, in a manner such that each symbol is received at \textit{at least} one of the receivers. At every time instant the transmitter sends one symbol on one of its transmit antennas. This message is re-transmitted until it is received at at least one receiver. Any one of the following four scenarios can arise: \\
\textit{Event $00$}: none of the receivers are jammed (which happens with probability $\lambda_{00}$). As an example, suppose that at time $t$ the transmitter sends $a_{1}$: then receiver $1$ gets $\mathcal{F}_{1}(a_{1})$ and receiver $2$ gets $\mathcal{F}_{2}(a_{1})$ (note that these are scaled versions of the transmit signal corrupted by white Gaussian noise and are recovered by the receivers within noise distortion). The fact that the event $00$ occurred at time $t$ is known at time $t+1$ via d-$\mathsf{JSIT}$. The transmitter ignores the side information created at receiver $2$, since the intended symbol is delivered to receiver $1$. The transmitter sends a new symbol $a_2$ at time $t+1$.
\textit{Event $01$}: receiver $1$ is not jammed, while receiver $2$ is jammed (which happens with probability $\lambda_{01}$). As an example, suppose that at time $t$ the transmitter sends $a_{1}$: then receiver $1$ gets $\mathcal{F}_{1}(a_{1})$ and receiver $2$'s signal is drowned in the jamming signal. The fact that the event $01$ occurred at time $t$ is known at time $t+1$ via d-$\mathsf{JSIT}$. Since the intended $a$-symbol is delivered to receiver $1$, at time $t+1$, the transmitter sends a new symbol $a_2$ from the message queue of symbols intended for receiver $1$.
\textit{Event $10$}: receiver $2$ is not jammed, while receiver $1$ is jammed (which happens with probability $\lambda_{10}$). As an example, suppose that at time $t$ the transmitter sends $a_{1}$: then receiver $1$'s signal is drowned in the jamming signal, whereas receiver $2$ gets $\mathcal{F}_{2}(a_{1})$. The fact that the event $10$ occurred at time $t$ is known at time $t+1$ via d-$\mathsf{JSIT}$. Since the receivers have CSI and JSI, receiver $1$ is aware of the message received at receiver $2$ within noise distortion. This message is not discarded; instead, it is used as side information and is delivered to receiver $1$ in Stage $3$.
\textit{Event $11$}: both receivers are jammed (which happens with probability $\lambda_{11}$). Using d-$\mathsf{JSIT}$, the transmitter knows at time $t+1$ that the event $11$ occurred; hence, at time $t+1$, it re-transmits $a_{1}$ on one of its transmit antennas.
The above events are \emph{disjoint}, so in one time slot, the average number of useful messages
delivered to at least one receiver is given by
\begin{equation}
E[\textnormal{$\#$ of symbols delivered}] =\lambda_{00}+\lambda_{01}+\lambda_{10}\triangleq \phi. \nonumber
\end{equation}
Hence, the expected time to deliver one symbol is
\begin{equation}
\frac{1}{\phi}=\frac{1}{\lambda_{00}+\lambda_{01}+\lambda_{10}}.
\end{equation}
\emph{Summary of Stage $1$:}
\begin{itemize}
\item The time spent in this stage to deliver $n_1$ LCs is
\begin{align}\label{N_1_ND}
N_1=\frac{n_1}{\phi}.
\end{align}
\item Since receiver $1$ is not jammed in events $00$ and $01$, i.e., with probability $\lambda_1$, it receives only $\lambda_1 N_1$ symbols.
\item The number of undelivered symbols is $n_1-\lambda_1N_1=\frac{\lambda_{10} n_1}{\phi}$. These symbols are available at receiver $2$ (corresponding to the event $10$) and are known to the transmitter via d-$\mathsf{JSIT}$. This side information created at receiver $2$ is not discarded, instead it is used in Stage 3 of the transmission scheme.
\item The loss in $\mathsf{DoF}$ in this configuration due to the unavailability of $\mathsf{CSIT}$ is observed by noticing the expected number of symbols delivered in the $\mathsf{ND}$ configuration which is given by $\lambda_{00}+\lambda_{01}+\lambda_{10}$ while it is $2\lambda_{00}+\lambda_{01}+\lambda_{10}$ in the $\mathsf{DD}$ configuration as seen in \eqref{expected_symbols_DD}.
\end{itemize}
{\textbf{\textit{Stage 2}}}--
In this stage, the transmitter intends to deliver $n_2$ $b$-symbols, in a manner such that each symbol is received at \textit{at least} one of the receivers. Stage 1 is repeated here with the roles of the receivers 1 and 2 interchanged. On similar lines to Stage 1, the time spent in this stage is
\begin{align}\label{N_2_ND}
N_2=\frac{n_2}{\phi}.
\end{align} The number of symbols received at receiver $2$ is $\lambda_2N_2$, and the number of symbols not delivered to receiver $2$ but available as side information at receiver $1$ is $n_2-\lambda_2N_2=\frac{\lambda_{01}n_2}{\phi}$.
\begin{remark}{\em At the end of these $2$ stages, the following typical situation arises: $\mathcal{F}(a_1)$ (resp. $\mathcal{G}(b_1)$) is a symbol intended for receiver $1$ (resp. $2$) but is available as side information at receiver $2$ (resp. $1$)\footnote{Such situations correspond to the event $10$ in Stage $1$; and the event $01$ in Stage $2$.}. Notice that these symbols must be transmitted to the complementary receivers so that the desired symbols can be decoded. The transmitter, via delayed $\mathsf{JSIT}$, is aware of the symbols that are not delivered to the receivers (however, the transmitter is not required to know $\mathcal{F}$ (resp. $\mathcal{G}$), since the receivers have this knowledge and $\mathcal{F}$ (resp. $\mathcal{G}$) is the noise-corrupted version of the single symbol $a_1$ (resp. $b_1$)). In Stage $3$, the transmitter sends a random LC of these symbols, say $\mathcal{L}=l_1\mathcal{F}(a_1)+l_2\mathcal{G}(b_1)$, where the coefficients $l_1, l_2$ forming the new LC are known to the transmitter and receivers \emph{a priori}. Now, assuming that only receiver $2$ (resp. $1$) is jammed, $\mathcal{L}$ is received at receiver $1$ (resp. $2$) within noise distortion. Using this LC, it can recover $\mathcal{F}(a_1)$ (resp. $\mathcal{G}(b_1)$) from $\mathcal{L}$ since it already has $\mathcal{G}(b_1)$ (resp. $\mathcal{F}(a_1)$) as side information. When no receiver is jammed, both receivers can recover $\mathcal{F}(a_1)$ and $\mathcal{G}(b_1)$ simultaneously.}
\end{remark}
{\textbf{\textit{Stage 3}}}--In this stage, the undelivered symbols to each receiver are transmitted using the technique mentioned above. Let us assume that $\mathcal{F}_1(a_1)$ and $\mathcal{G}_1(b_1)$ are symbols available as side information at receivers $2$ and $1$ respectively.
The transmitter sends $\mathcal{L}(\mathcal{F}_1,\mathcal{G}_1)$, a LC of these symbols on one transmit antenna, with the eventual goal of multicasting this LC (i.e., send it to \textit{both} receivers). The following events, as specified earlier in Stages $1$ and $2$, are also possible while in this stage.
\textit{Event $00$}: Suppose at time $t$ the transmitter sends $\mathcal{L}(\mathcal{F}_1,\mathcal{G}_1)$; then both receivers get this LC within noise distortion. Being able to recover $\mathcal{L}(\mathcal{F}_1,\mathcal{G}_1)$ within a scaling factor, receivers $1$ and $2$ decode their intended messages $\mathcal{F}_1$ and $\mathcal{G}_1$ respectively, using the side information $\mathcal{G}_1$ and $\mathcal{F}_1$ available to them. Since the intended messages are delivered to the intended receivers, the transmitter, at time $t+1$, sends a new LC of two new symbols $\tilde{\mathcal{L}}(\tilde{\mathcal{F}}_1,\tilde{\mathcal{G}}_1)$.
\textit{Event $01$}: Since receiver $2$ is jammed, its signal is drowned in the jamming signal while receiver $1$ gets $\mathcal{L}(\mathcal{F}_1,\mathcal{G}_1)$ and is capable of recovering $\mathcal{F}_1$ using $\mathcal{G}_1$ available as side information. The fact that event $01$ occurred is known to the transmitter at time $t+1$ via d-$\mathsf{JSIT}$. Thus, at time $t+1$, the transmitter sends a new LC $\tilde{\mathcal{L}}(\tilde{\mathcal{F}}_1,\mathcal{G}_1)$ since $\mathcal{G}_1$ has not yet been delivered to receiver $2$.
\textit{Event $10$}: This event is similar to event $01$, with the roles of the receivers 1 and 2 interchanged. Hence, receiver $2$ is capable of recovering $\mathcal{G}_1$ from $\mathcal{L}(\mathcal{F}_1,\mathcal{G}_1)$ while receiver $1$'s signal is drowned in the jamming signal. Thus at time $t+1$, the transmitter sends a new LC $\tilde{\mathcal{L}}(\mathcal{F}_1,\tilde{\mathcal{G}}_1)$ since $\mathcal{F}_1$ has not yet been delivered to receiver $1$.
\textit{Event $11$}: Using d-$\mathsf{JSIT}$, the transmitter knows at time $t+1$ that the event $11$ occurred; hence, at time $t+1$, it re-transmits $\mathcal{L}(\mathcal{F}_1,\mathcal{G}_1)$ on one of its transmit antennas.
Since all the events are disjoint, in one time slot the average number of symbols delivered to receiver $1$ is given by
\begin{equation}
E[\textnormal{$\#$ of symbols delivered to user 1}] =\lambda_{00}+\lambda_{01}= \lambda_1. \nonumber
\end{equation}
Hence, the expected time to deliver one symbol to receiver $1$ in this stage is $\frac{1}{\lambda_1}$. Given that $\frac{\lambda_{10} n_1}{\phi}$ symbols are to be delivered to receiver $1$ in this stage, the time taken to achieve this is
$\frac{\lambda_{10} n_1}{\lambda_1\phi}$. Interchanging the roles of the users, the time taken to deliver $\frac{\lambda_{01} n_2}{\phi}$ symbols to receiver $2$ is $\frac{\lambda_{01} n_2}{\lambda_2\phi}$. Thus the total time required to satisfy the requirements of both the receivers in Stage $3$ is given by
\begin{equation}\label{N_3_ND}
N_3=\mathrm{max}\left(\frac{\lambda_{10} n_1}{\lambda_1\phi}, \frac{\lambda_{01} n_2}{\lambda_2\phi}\right).
\end{equation}
The $\mathsf{DoF}$ achieved in the $\mathsf{ND}$ configuration is readily evaluated as
\begin{eqnarray}
d_1=\frac{n_1}{N_1+N_2+N_3}, \ d_2=\frac{n_2}{N_1+N_2+N_3}.
\end{eqnarray}
Substituting $\{N_i\}_{i=1,2,3}$ from \eqref{N_1_ND}--\eqref{N_3_ND}, we have,
\begin{eqnarray}
d_1&=&\frac{\eta}{\frac{1}{\phi}+\mathrm{max}\left(\frac{\lambda_{10} \eta}{\lambda_1\phi},\frac{\lambda_{01} (1-\eta)}{\lambda_2\phi}\right)} \nonumber \\
d_2&=&\frac{1-\eta}{\frac{1}{\phi}+\mathrm{max}\left(\frac{\lambda_{10} \eta}{\lambda_1\phi},\frac{\lambda_{01} (1-\eta)}{\lambda_2\phi}\right)},
\end{eqnarray}
where $\eta=\frac{n_1}{n_1+n_2}$. Eliminating $\eta$ from the above two equations yields the $\mathsf{DoF}$ region given by Theorem~\ref{TheoremND}.
The $\mathsf{DoF}$ pairs $(\lambda_1,0)$ and $(0,\lambda_2)$ are achieved by using the transmission strategy proposed for the $\mathsf{NN}$ configuration below.
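For completeness, the parametric form above can be traced numerically over $\eta\in[0,1]$; the sketch below (our own; the state probabilities are an arbitrary example) does so for one operating point.
\begin{verbatim}
import numpy as np

def nd_dof(lam00, lam01, lam10, eta):
    """Achievable (d1, d2) in the ND configuration for eta = n1/(n1+n2)."""
    phi = lam00 + lam01 + lam10
    lam1, lam2 = lam00 + lam01, lam00 + lam10
    n3 = max(lam10 * eta / (lam1 * phi), lam01 * (1 - eta) / (lam2 * phi))
    t = 1.0 / phi + n3
    return eta / t, (1 - eta) / t

pts = [nd_dof(0.4, 0.2, 0.2, e) for e in np.linspace(0.05, 0.95, 7)]
print(np.round(pts, 3))
\end{verbatim}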
\subsubsection{No $\mathsf{CSIT}$, No $\mathsf{JSIT}$ ($\mathsf{NN}$) :}
The $\mathsf{DoF}$ region for the $\mathsf{NN}$ configuration is given by Theorem~\ref{TheoremNN}, and a simple time-sharing scheme achieves $\mathsf{DoF}_{\mathsf{NN}}$. For completeness, we briefly explain the transmission scheme used in this configuration.
We first explain the achievability of the $\mathsf{DoF}$ pair $(d_{1}, d_{2})=(\lambda_1,0)$. To this end, note that receiver $1$ is jammed in an i.i.d. manner with probability $(1-\lambda_{1})$. This implies that for a scheme of sufficiently large duration $n$, it will receive $\lambda_{1}n$ jamming-free information symbols (corresponding to those instants in which $S_{1}(t)=0$). However, in the $\mathsf{NN}$ configuration (no $\mathsf{CSIT}$ and no $\mathsf{JSIT}$), the transmitter does not know which symbols were received without being jammed. To compensate for the lack of this knowledge, it sends random linear combinations (LCs) (the random coefficients are assumed to be known at the receivers \cite{Erasure}) of $\lambda_{1}n$ symbols over $n$ time slots. For sufficiently large $n$, receiver $1$ obtains $\lambda_{1}n$ jamming-free LCs and hence can decode these symbols. Thus the $\mathsf{DoF}$ pair $(\lambda_{1}, 0)$ is achievable. Similarly, by switching the roles of the receivers, the pair $(0, \lambda_{2})$ is also achievable. Finally, the entire region in Theorem \ref{TheoremNN} is achievable by time sharing between these two strategies.
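The random-LC argument can be illustrated by simulation. In the sketch below (our own; the back-off factor of $0.9$ is an artefact of finite $n$, since the $\mathsf{DoF}$ claim is asymptotic), receiver $1$ keeps the rows of a random coefficient matrix that arrive jamming-free and checks that they have full column rank.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def nn_decodable(lam1=0.6, n=500):
    # Number of information symbols; backed off from lam1*n at finite n.
    k = int(0.9 * lam1 * n)
    coeffs = rng.standard_normal((n, k))  # random LC coefficients, known a priori
    jam_free = rng.random(n) < lam1       # i.i.d. jamming-free slot indicators
    return np.linalg.matrix_rank(coeffs[jam_free]) == k

print(np.mean([nn_decodable() for _ in range(20)]))   # close to 1.0
\end{verbatim}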
\section{Extensions to Multi-receiver MISO Broadcast Channel}\label{TheoremsKuser}
We present extensions of the $2$-user case to the multi-user broadcast channel. In particular, for the $K$-user scenario,
the total number of possible jammer states is $2^K$, which can be decomposed as:
\begin{align}
2^{K}&= \underbrace{{K \choose 0}}_{\mbox{None jammed}} + \underbrace{{K \choose 1}}_{\mbox{One receiver jammed}}+\ldots+ \underbrace{{K \choose K}}_{\mbox{All receivers jammed}}.
\end{align}
In such a scenario, the jammer state $S(t)$ at time $t$ is a length $K$ vector with each element taking values $0$ or $1$.
We present the optimal $\mathsf{DoF}$ regions for the $\mathsf{PP},\mathsf{PD}$ and $\mathsf{PN}$ configurations in Theorem~\ref{TheoremPP-KUser} and for the $\mathsf{NN}$ configuration in Theorem~\ref{TheoremNN-KUser}. For the $\mathsf{DP}$ and $\mathsf{DD}$ configurations, we present lower bounds on the sum $\mathsf{DoF}$ under a class of symmetric jamming strategies. Furthermore, we illustrate the impact of jamming and the availability of $\mathsf{JSIT}$ (either instantaneous or delayed) by comparing the $\mathsf{DoF}$ achievable in these configurations with the $\mathsf{DoF}$ achieved in the absence of jamming (with delayed $\mathsf{CSIT}$), i.e., $\mathsf{DoF}_{\mathsf{MAT}}(K)$ (defined in Section~\ref{system_model}) \cite{MAT2012}. For most configurations, the achievability schemes are straightforward extensions of the coding schemes presented for the $2$-user case. Hence, in the interest of space, we do not outline these schemes again.
\begin{Theo}\label{TheoremPP-KUser}
The $\mathsf{DoF}$ region of the $K$-user MISO BC for each of the ($\mathsf{CSIT}$, $\mathsf{JSIT}$) configurations $\mathsf{PP}$, $\mathsf{PD}$ and $\mathsf{PN}$ is the same and is given by the set of non-negative pairs $(d_{1},\ldots, d_{K})$ that satisfy
\begin{align}
d_{k}&\leq \lambda_{k}, \quad k=1,\ldots,K,
\end{align}
where $\lambda_k$ is the probability with which the $k$th receiver is not jammed.
\end{Theo}
The achievability of this $\mathsf{DoF}$ region is a straightforward extension of the scheme proposed in Section~\ref{Schemes} for the $2$-user MISO BC for the corresponding $I_{\mathsf{CSIT}}I_{\mathsf{JSIT}}$ configurations.
\begin{Theo}\label{TheoremNN-KUser}
The $\mathsf{DoF}$ region of the $K$-user MISO BC for the ($\mathsf{CSIT}$, $\mathsf{JSIT}$) configuration $\mathsf{NN}$ is given as
\begin{align}
\sum_{k=1}^{K}\frac{d_{k}}{\lambda_{k}}&\leq 1.
\end{align}
\end{Theo}
The achievability of this $\mathsf{DoF}$ region is also an extension of the transmission scheme proposed for the $\mathsf{NN}$ configuration in Section~\ref{Schemes} for the $2$-user MISO BC. This is a simple time sharing scheme (TDMA) where the transmitter sends information to only one receiver among the $K$ receivers at any given time instant.
For the $\mathsf{DP}$ and $\mathsf{DD}$ configurations,
we consider a symmetric scenario in which any subset of receivers is jammed symmetrically, i.e.,
\begin{align}\label{invariant_lambda}
\lambda_{\mathbf{s}}=\lambda_{\pi(\mathbf{s})},
\end{align}
where $\lambda_{\mathbf{s}}$ is the probability that $S(t)=\mathbf{s}$ at any given time $t$ and $\pi(\mathbf{s})$ denotes any permutation of the $K$ length jamming state vector $S(t)=\mathbf{s}$. In particular, for $K=3$, this assumption corresponds to
\begin{align}\label{equal_lambda}
\lambda_{001}=\lambda_{010}=\lambda_{100}, \mbox{ and } \lambda_{011}=\lambda_{101}=\lambda_{110}.
\end{align}
From \eqref{lambda_receiver_1} and \eqref{equal_lambda}, it is seen that
\begin{align}
\lambda_1=\lambda_2=\lambda_3= \lambda_{000}+ \lambda_{001}+\lambda_{010}+\lambda_{011},
\end{align}
i.e., the marginal probabilities of the receivers being jammed (un-jammed) are the same. For the $K$-user case, we have
\begin{align}
\lambda_1=\lambda_2=\cdots=\lambda_K.
\end{align}
Let $||\mathbf{s}||_1$ denote the $1$-norm of the $K$-length vector $\mathbf{s}$. In other words, $||\mathbf{s}||_1$ indicates the total number of $1$'s in the vector $\mathbf{s}$, and hence $0\leq ||\mathbf{s}||_1\leq K$. We denote by $\eta_{j}$ the total probability with which any $j$ receivers are jammed, i.e.,
\begin{align}
\eta_j=Pr\left(||\mathbf{s}||_1=j\right),\ j=0,1,2,\ldots,K,
\end{align}
where $Pr(\mathcal{E})$ indicates the probability of occurrence of event $\mathcal{E}$. By definition, we have $\sum_{j=0}^K \eta_j = 1$ and we collectively define these probabilities as the $\left(K+1\right)\times1$ vector $\eta=[\eta_0,\eta_1,\ldots,\eta_K]^T$. For instance, $\eta_0=1$ corresponds to the no jamming scenario i.e., none of the receivers are jammed. For $K=3$, we have
\begin{align}\label{eta_values_3_user}
\eta_{0}=\lambda_{000}, \quad \eta_{1}=\lambda_{001}+\lambda_{010}+\lambda_{100}\nonumber \\
\eta_{2}=\lambda_{011}+\lambda_{101}+\lambda_{110}, \quad \eta_{3}=\lambda_{111}.
\end{align}
It is easily verified that $\eta_{0}+\eta_{1}+\eta_{2}+\eta_{3}=1$. From \eqref{lambda_receiver_1}, \eqref{equal_lambda}-\eqref{eta_values_3_user}, it is seen that
$\lambda_i=\eta_0+\frac{2}{3}\eta_1+\frac{1}{3}\eta_2$, for $i=1,2,3.$
In general, it can be shown that
\begin{align}\label{final_lambda}
\lambda_1=\lambda_2=\cdots=\lambda_K= \left(\sum_{j=0}^{K}\left(\frac{K-j}{K}\right)\eta_{j}\right)\triangleq \lambda_{\eta}.
\end{align}
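The quantity $\lambda_{\eta}$ is straightforward to compute from the vector $\eta$. A minimal sketch (ours) follows, using uniform jammer states for $K=3$ as an example; it reproduces the value $\eta_0+\frac{2}{3}\eta_1+\frac{1}{3}\eta_2=\frac{1}{2}$ derived above.
\begin{verbatim}
from math import comb

def lambda_eta(eta):
    """Common un-jammed probability lambda_eta, eq. (final_lambda)."""
    K = len(eta) - 1
    assert abs(sum(eta) - 1.0) < 1e-9
    return sum((K - j) / K * eta[j] for j in range(K + 1))

K = 3
eta = [comb(K, j) / 2 ** K for j in range(K + 1)]   # uniform jammer states
print(lambda_eta(eta))   # 0.5
\end{verbatim}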
\begin{Theo}\label{TheoremDP-KUser}
An achievable sum $\mathsf{DoF}$ of the $K$-user MISO BC for the ($\mathsf{CSIT}$, $\mathsf{JSIT}$) configuration $\mathsf{DP}$ is given as\footnote{$\mathsf{DoF}_{\mathsf{DP}}^{\mathsf{Ach}}({\bf{\eta}}, K)$ and $\mathsf{DoF}_{\mathsf{DD}}^{\mathsf{Ach}}({\bf{\eta}}, K)$ denote the lower bound (achievable) on the $\mathsf{DoF}$ obtained in the $\mathsf{DP}$ and $\mathsf{DD}$ configurations in the $K$-user scenario.}
\begin{align}\label{TheoremDP-KUser-DoF}
\mathsf{DoF}_{\mathsf{DP}}^{\mathsf{Ach}}({\bf{\eta}}, K)&= \sum_{j=0}^{K} \eta_{j}\mathsf{DoF}_{\mathsf{MAT}}(K-j).
\end{align}
\end{Theo}
We note from Theorem~\ref{TheoremDP-KUser} that when perfect $\mathsf{JSIT}$ is available, the sum $\mathsf{DoF}$ in \eqref{TheoremDP-KUser-DoF} is achieved by transmitting only to the unjammed receivers. The transmission scheme that achieves this sum $\mathsf{DoF}$ is the $K$-user extension of the scheme presented for the $\mathsf{DP}$ configuration in Section~\ref{Schemes}.
\begin{Theo}\label{TheoremDD-KUser}
An achievable sum $\mathsf{DoF}$ of the $K$-user MISO BC for the ($\mathsf{CSIT}$, $\mathsf{JSIT}$) configuration $\mathsf{DD}$ is given as
\begin{align}\label{DoF-TheoremDD-KUser}
\mathsf{DoF}_{\mathsf{DD}}^{\mathsf{Ach}}({\bf{\eta}}, K)&= \left(\sum_{j=0}^{K}\left(\frac{K-j}{K}\right)\eta_{j}\right)\mathsf{DoF}_{\mathsf{MAT}}(K)\triangleq \lambda_{\eta}\mathsf{DoF}_{\mathsf{MAT}}(K).
\end{align}
\end{Theo}
\begin{remark} {\em The $\mathsf{DoF}$ result in \eqref{DoF-TheoremDD-KUser} has the following interesting interpretation: consider a simpler problem in which only two jamming states are present: $S(t)=00\cdots0$ (none of the receivers are jammed) with probability $\lambda_{\eta}$ and $S(t)=11\cdots1$ (all receivers are jammed) with probability $1-\lambda_{\eta}$. In addition, assume that the transmitter has perfect $\mathsf{JSIT}$. In such a scenario, the transmitter can use the $\mathsf{MAT}$ scheme (for the $K$-user case) for a $\lambda_{\eta}$ fraction of the time to achieve $\lambda_{\eta}\mathsf{DoF}_{\mathsf{MAT}}(K)$ degrees-of-freedom (this scenario is equivalent to jamming state $S(t)=00$ in the $\mathsf{DP}$ configuration for a $2$-user scenario, which is discussed in Section~\ref{Schemes}), which is precisely as shown in \eqref{DoF-TheoremDD-KUser}. Even though the equivalence of these distinct problems is not evident \emph{a priori}, the $\mathsf{DoF}$ result indicates the benefits of using $\mathsf{JSIT}$, even though it is completely delayed.}
\end{remark}
It is reasonable to expect that the $\mathsf{DoF}$ achievable in the $\mathsf{DP}$ configuration is higher than that achievable in the $\mathsf{DD}$ configuration. This can be readily shown as follows (the $i=K$ term vanishes since $\mathsf{DoF}_{\mathsf{MAT}}(0)=0$):
\begin{align}
\mathsf{DoF}_{\mathsf{DP}}^{\mathsf{Ach}}({\bf{\eta}}, K)&= \sum_{i=0}^{K} \eta_{i}\mathsf{DoF}_{\mathsf{MAT}}(K-i)\nonumber \\
&= \sum_{i=0}^{K} \Bigg[\left(\frac{K-i}{K}\right)\eta_{i}\left(\frac{K}{K-i}\right)\mathsf{DoF}_{\mathsf{MAT}}(K-i)\Bigg]\nonumber \\
&= \sum_{i=0}^{K} \Bigg[\left(\frac{K-i}{K}\right)\eta_{i}\left(\frac{K}{K-i}\right)\frac{K-i}{1+\frac{1}{2}+\cdots+\frac{1}{K-i}}\Bigg]\nonumber \\
&= \sum_{i=0}^{K} \Bigg[\left(\frac{K-i}{K}\right)\eta_{i}\frac{K}{1+\frac{1}{2}+\cdots+\frac{1}{K-i}}\Bigg]\nonumber \\
&\geq \sum_{i=0}^{K} \Bigg[\left(\frac{K-i}{K}\right)\eta_{i}\frac{K}{1+\frac{1}{2}+\cdots+\frac{1}{K}}\Bigg]\nonumber \\
&= \sum_{i=0}^{K} \Bigg[\left(\frac{K-i}{K}\right)\eta_{i}\mathsf{DoF}_{\mathsf{MAT}}(K)\Bigg]\nonumber \\
&=\mathsf{DoF}_{\mathsf{DD}}^{\mathsf{Ach}}({\bf{\eta}}, K).
\end{align}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\textwidth]{DoF_Compare_Paper.pdf}
\vspace{-18pt}
\caption{$\mathsf{DoF}$ comparison of $\mathsf{MAT}$ scheme, $\mathsf{DP}$ and $\mathsf{DD}$ configurations.}
\label{K_User_DP_DD}
\end{figure}
\noindent Fig.~\ref{K_User_DP_DD} shows the $\mathsf{DoF}$ comparison between the $\mathsf{DP}$ and $\mathsf{DD}$ configurations for a special case in which
any subset of receivers is jammed with probability $\lambda_{\mathbf{s}}=\frac{1}{2^K}$ for all $\mathbf{s}$, i.e.,
\begin{align}\label{eta_j}
\eta_j=\frac{{K \choose j}}{2^K}.
\end{align}
It is seen that the sum $\mathsf{DoF}$ achieved in these configurations increases with the number of users, $K$. The additional $\mathsf{DoF}$ achievable in the $\mathsf{DP}$ configuration compared to the $\mathsf{DD}$ configuration increases with $K$ and is lower bounded by\footnote{For large values of $K$, the expression
$1+\frac{1}{2}+\ldots+\frac{1}{K}\rightarrow \log(K)$. Hence the right-hand side of \eqref{DoF_DD_DP_LB} behaves as $\frac{K}{\left(\log(K)\right)^2}\underset{K\rightarrow \infty}{\longrightarrow}\infty$.}
\begin{align}\label{DoF_DD_DP_LB}
\mathsf{DoF}_{\mathsf{DP}}^{\mathsf{Ach}}(\eta,K)-\mathsf{DoF}_{\mathsf{DD}}^{\mathsf{Ach}}(\eta,K)&\geq\frac{K-1}{4\left(1+\frac{1}{2}+\ldots+\frac{1}{K}\right)^2} \underset{K\rightarrow \infty}{\longrightarrow}\infty.
\end{align}
Also, it can be shown that the $\mathsf{DoF}$ gap between $\mathsf{DoF}_{\mathsf{MAT}}(K)$ and $\mathsf{DoF}_{\mathsf{DP}}^{\mathsf{Ach}}(\eta,K)$ is lower bounded by\footnote{For large $K$, the expression on the right side of \eqref{DoF_MAT_DP_LB} behaves as $\frac{K}{\log(K)}-\frac{K}{\left(\log(K)\right)^2}$, which tends to $\infty$ as $K\rightarrow \infty$.}
\begin{align}\label{DoF_MAT_DP_LB}
\mathsf{DoF}_{\mathsf{MAT}}(K)-\mathsf{DoF}_{\mathsf{DP}}^{\mathsf{Ach}}(\eta,K)\geq\frac{K}{2\left(1+\frac{1}{2}+\ldots+\frac{1}{K}\right)}
-\frac{K\left(2^K-1\right)}{2^K\left(1+\frac{1}{2}+\ldots+\frac{1}{K}\right)^2} \underset{K\rightarrow \infty}{\longrightarrow}\infty.
\end{align}
These bounds illustrate the dependence of the sum $\mathsf{DoF}$ on the availability of perfect $\mathsf{JSIT}$ in a multi-user MISO BC in the presence of jamming attacks. For example, since the transmitter has instantaneous knowledge of the users that are jammed (at any given instant) in the $\mathsf{DP}$ configuration, it can conserve energy by only transmitting to the un-jammed receivers. However since no such information is available in the $\mathsf{DD}$ configuration, the transmitter has to transmit across different jamming scenarios (different subsets of receivers jammed) in such a configuration to realize $\mathsf{DoF}$ gains over naive transmission schemes.
The sum $\mathsf{DoF}$ achieved in these configurations is much larger than the $\mathsf{DoF}$ achieved using a naive transmission scheme ($\mathsf{DoF}=\lambda_{\eta}$) where the transmitter sends information to only one user at any given time instant without using $\mathsf{CSIT}$ or $\mathsf{JSIT}$. The coding schemes that achieve the sum $\mathsf{DoF}$ in \eqref{TheoremDP-KUser-DoF} and \eqref{DoF-TheoremDD-KUser} are detailed in Section~\ref{Schemes}.
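The trend of Fig.~\ref{K_User_DP_DD} can be reproduced with a few lines. The sketch below is our own, using $\mathsf{DoF}_{\mathsf{MAT}}(K)=K/(1+\frac{1}{2}+\cdots+\frac{1}{K})$ and the uniform states of \eqref{eta_j}; the set of $K$ values printed is arbitrary.
\begin{verbatim}
from math import comb

def dof_mat(K):
    """K-user MAT sum DoF; zero by convention for K = 0."""
    return 0.0 if K == 0 else K / sum(1.0 / i for i in range(1, K + 1))

def dp_dd_sum_dof(K):
    eta = [comb(K, j) / 2 ** K for j in range(K + 1)]  # uniform jammer states
    dp = sum(eta[j] * dof_mat(K - j) for j in range(K + 1))
    lam_eta = sum((K - j) / K * eta[j] for j in range(K + 1))
    return dp, lam_eta * dof_mat(K)

for K in (2, 4, 8, 16):
    dp, dd = dp_dd_sum_dof(K)
    print(K, round(dp, 3), round(dd, 3))   # the gap widens with K
\end{verbatim}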
\subsection{Achievability Scheme for $\mathsf{DD}$ configuration in $K$-user scenario}
Before we explain the $\mathsf{DoF}$ achievability scheme for the $K$-user $\mathsf{DD}$ configuration, we briefly explain the $\mathsf{DD}$ configuration for the $2$-user MISO BC in a special case in which the users are un-jammed with equal probability, i.e.,
\begin{align}
\lambda \triangleq \lambda_1=\lambda_2, \quad \mbox{which implies}\quad \lambda_{01}=\lambda_{10}.
\end{align}
In such a scenario, a simple $2$-phase scheme can be developed to achieve the optimal sum $\mathsf{DoF}$ of $\frac{4\lambda}{3}$ (seen by substituting $\lambda_1=\lambda_2=\lambda$ and $n_1=n_2$ in \eqref{DoFDD_opt_point}). We define order-$1$ symbols as symbols intended for only one receiver, while order-$2$ symbols are intended for both receivers. Phase $1$ of the algorithm uses only order-$1$ symbols, while the order-$2$ symbols are used in the $2$nd phase. We define $\mathsf{DoF}_1(2,\lambda)$ as the $\mathsf{DoF}$ of the $2$-user MISO BC for delivering order-$1$ symbols when the receivers are un-jammed with probability $\lambda$. On similar lines, $\mathsf{DoF}_2(2,\lambda)$ is the $\mathsf{DoF}$ of the system for delivering the order-$2$ symbols to both receivers.
\begin{itemize}
\item {Phase 1:} Phase $1$ consists of $2$ stages, one for each user. In each of these stages, symbols intended for a particular user are transmitted such that they are received at either receiver. Since each receiver is un-jammed with probability $\lambda$, it receives $\lambda d$ symbols intended for itself and $\lambda d$ symbols of the other user, which are used as side information in the $2$nd phase of the algorithm. Here $d$ is the time duration of each stage of this phase. Since a total of $n$ symbols are transmitted in each stage, we have
\begin{align}
2\lambda d=n \implies d=\frac{n}{2\lambda}.
\end{align}
The total time spent in this phase is $2d=\frac{n}{\lambda}$. At the end of this phase, each user has $\lambda d=\frac{n}{2}$ intended symbols and $\frac{n}{2}$ symbols intended for the other user. Using these $\frac{n}{2}$ side information symbols available across the two users, the transmitter can form $\frac{n}{2}$ linear combinations (LCs) of these symbols, which are transmitted in the $2$nd phase of the algorithm. These LCs are required by both users and help them decode their intended symbols. Thus we have
\begin{align}
\mathsf{DoF}_1(2,\lambda)=\frac{2n}{\frac{n}{\lambda}+\frac{\frac{n}{2}}{\mathsf{DoF}_2(2,\lambda)}}.
\end{align}
\item Phase 2: The $\frac{n}{2}$ LCs of the side information symbols created at the transmitter are multicast in this phase until both receivers have received all the LCs. These LCs help the receivers decode their intended symbols using the available $\mathsf{CSIR}$ and the side information created in the $1$st phase of the algorithm. Since each receiver is jammed with probability $(1-\lambda)$, the expected time taken to deliver an order $2$ symbol to any receiver is $\frac{1}{\lambda}$. Hence the total time spent in this stage is
\begin{align}
\frac{n}{2}\max\left(\frac{1}{\lambda},\frac{1}{\lambda}\right)=\frac{n}{2\lambda}.
\end{align}
Using the above result we can calculate $\mathsf{DoF}_2(2,\lambda)$ as
\begin{align}
\mathsf{DoF}_2(2,\lambda)=\frac{\frac{n}{2}}{\frac{n}{2\lambda}}=\lambda.
\end{align}
\end{itemize}
Hence the sum $\mathsf{DoF}$ of the $2$-user MISO BC is given by
\begin{align}
\mathsf{DoF}_1(2,\lambda)&=\frac{2n}{\frac{n}{\lambda}+\frac{\frac{n}{2}}{\lambda}}\\
&=\frac{4\lambda}{3},
\end{align}
which is also the sum $\mathsf{DoF}$ obtained from \eqref{DoFDD_opt_point} for the specified scenario. This algorithm also lays the groundwork for the transmission scheme for the $K$-receiver MISO BC, whose $\mathsf{DoF}$ is given by \eqref{DoF-TheoremDD-KUser}.
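The two-phase accounting above is easy to check mechanically. The following small Python sketch (a sanity check with exact arithmetic, not part of the scheme itself) evaluates the phase durations and confirms that the expression collapses to $\frac{4\lambda}{3}$ for representative values of $\lambda$:
\begin{verbatim}
# Sketch of the two-phase DoF accounting for the 2-user DD example.
from fractions import Fraction

def dof2(lam):
    # order-2 LCs are multicast; per-receiver delivery time is 1/lam
    return lam

def dof1(lam, n=Fraction(1)):
    phase1_time = n / lam              # two stages of duration n/(2*lam)
    phase2_time = (n / 2) / dof2(lam)  # deliver n/2 order-2 LCs
    return 2 * n / (phase1_time + phase2_time)

for lam in (Fraction(1, 4), Fraction(1, 3), Fraction(1, 2)):
    assert dof1(lam) == Fraction(4, 3) * lam   # matches the derived sum DoF
\end{verbatim}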
An interesting observation can be made from this result. If the jammer attacks either both receivers or neither at any given time (i.e., $\lambda_{01}=\lambda_{10}=0$), such that the receivers are jammed together with probability $(1-\lambda)$ (and hence are jointly un-jammed with probability $\lambda$), the achievable $\mathsf{DoF}$ is again $\frac{4}{3}\lambda$ (here, $\frac{4}{3}$ is the optimal $\mathsf{DoF}$ achieved in a $2$-receiver MISO BC with d-$\mathsf{CSIT}$ \cite{MAT2012}). This is shown in Fig.~\ref{state_equivalence}. Though such an equivalence is not apparent \emph{a priori}, the sum $\mathsf{DoF}$ achieved by this transmission scheme shows that a synergistic benefit is achievable over a long duration of time if all the possible jammer states are used jointly.
\begin{figure}
\hspace{-20pt}\includegraphics[width=1.1\textwidth]{StateEquivalence.pdf}
\vspace{-140pt}\caption{State Equivalence when $\lambda_{01}=\lambda_{10}$.}
\label{state_equivalence}
\end{figure}
\subsubsection{$K$-user case}
In this subsection, we present a $K$-phase transmission scheme that achieves the $\mathsf{DoF}$ described in Theorem~\ref{TheoremDD-KUser}.
The achievability of Theorem~\ref{TheoremDD-KUser} is based on the synergistic usage of delayed $\mathsf{CSIT}$ and delayed
$\mathsf{JSIT}$ by exploiting side-information created at the un-jammed receivers in the past and transmitting linear combinations of such side-information symbols in the future. Before we explain the scheme for this configuration, we first give a brief description of the transmission scheme that achieves $\mathsf{DoF}_{\mathsf{MAT}}(K)$ for the $K$-user MISO BC with delayed $\mathsf{CSIT}$ and in the absence of any jamming attacks \cite{MAT2012}. Hereafter this scheme is referred to as the $\mathsf{MAT}$ scheme.
A $K$-phase transmission scheme is presented in \cite{MAT2012} to achieve $\mathsf{DoF}_{\mathsf{MAT}}(K)$. The transmitter knows, via delayed $\mathsf{CSIT}$, which symbols (or linear combinations of the transmitted symbols) are available at the receivers. The first phase of the algorithm sends symbols intended for each receiver. The side information (symbols that are desired at one user but available at other users) created at the receivers is used in the subsequent phases of the algorithm to create higher order symbols (symbols required by ${>}1$ receivers) \cite{MAT2012}, thereby increasing the $\mathsf{DoF}$.
Specifically, $(K-j+1){K\choose j}$ order $j$ symbols (symbols intended for $j\leq K$ receivers) are chosen in the $j$th phase to create $j{K\choose j+1}$ order $(j+1)$ symbols, which are needed by $(j+1)\leq K$ receivers and are used in the $(j+1)$th phase of the algorithm. Using this, a recursive relationship between the
$\mathsf{DoF}$ of the $j$th and $(j+1)$th phases is obtained as \cite[eq.~(28)]{MAT2012}\vspace{-10pt}
\begin{align}
\mathsf{DoF}_j(K)=\frac{(K-j+1){K\choose j}}{{K\choose j}+\frac{j {K\choose j+1}}{\mathsf{DoF}_{j+1}(K)}},\label{OrderjMAT}
\end{align}
where $\mathsf{DoF}_j(K)$ is the $\mathsf{DoF}$ of the $K$-user MISO BC to deliver order $j$ symbols. This recursive relationship then leads to the $\mathsf{DoF}$ for a $K$-user MISO BC given by $\mathsf{DoF}_{\mathsf{MAT}}(K)$. See \cite{MAT2012} for a complete description of the coding scheme.
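As a sanity check (not part of \cite{MAT2012}), the recursion \eqref{OrderjMAT} can be unrolled numerically, taking as base case $\mathsf{DoF}_K(K)=1$ (an order $K$ symbol can be multicast to all $K$ receivers in a single slot), and compared against the closed form $\frac{K}{1+\frac{1}{2}+\ldots+\frac{1}{K}}$:
\begin{verbatim}
# Sketch: unroll the MAT order-j recursion and compare with K / H_K.
from fractions import Fraction
from math import comb

def dof_mat(K):
    dof = Fraction(1)                     # base case: order-K symbols
    for j in range(K - 1, 0, -1):         # j = K-1, ..., 1
        dof = (K - j + 1) * comb(K, j) \
              / (comb(K, j) + j * comb(K, j + 1) / dof)
    return dof

def closed_form(K):
    return K / sum(Fraction(1, k) for k in range(1, K + 1))

for K in range(1, 9):
    assert dof_mat(K) == closed_form(K)
\end{verbatim}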
It is assumed that the decoding process takes place once a receiver has collected sufficiently many LCs of its intended symbols; for example, $n$ \textit{jamming free} LCs are sufficient to decode $n$ symbols at a receiver. The synergistic benefits of transmitting over different jamming states in these configurations are achievable in the long run by exploiting the knowledge of the present and past jamming states.
Before we present the proposed scheme, we introduce the notation necessary for the multi-phase transmission scheme. Let $\mathsf{DoF}_{j}(\eta,K)$ denote the $\mathsf{DoF}$ of the $K$-user MISO BC for delivering order $j$ symbols to the users in a scenario where the receivers are jamming free with equal probability $\lambda_{\eta}$, given by \eqref{final_lambda}, which is a function of $\eta=[\eta_{0}, \eta_{1}, \ldots, \eta_{K}]$.
We show that in the presence of a jammer, the following relationship (analogous to (\ref{OrderjMAT})) holds:
\begin{align}\label{DoF_Iterative_K_user}
\mathsf{DoF}_{j}(\eta,K)=\frac{(K-j+1){K \choose j}}{\frac{{K \choose j}}{\lambda_{\eta}}+\frac{j{K\choose {j+1}}}{\mathsf{DoF}_{j+1}(\eta,K)}}.
\end{align}
Using \eqref{DoF_Iterative_K_user}, it can be shown that the $\mathsf{DoF}$ of a $K$-user MISO BC in the presence of such a jamming attack is given by
\begin{align}\label{DoF_Iterative_K_user_compare_MAT}
\mathsf{DoF}_{\mathsf{DD}}(\eta,K)\triangleq\mathsf{DoF}_1(\eta,K)=\lambda_{\eta}\mathsf{DoF}_{\mathsf{MAT}}(K),
\end{align}
where $\mathsf{DoF}_{\mathsf{MAT}}(K)$ is given by \eqref{DoF_MAT}.
We initially present the transmission scheme for the $1$st phase and later generalize it for the $j$th $(j\leq K)$ phase.
\paragraph{Phase $1$:}
Phase $1$ of the coding scheme consists of $K$ stages, one for each receiver; the symbols intended for a given user are transmitted in its respective stage. For instance, let $\left(a_1,a_2,\ldots,a_K\right)$ represent the symbols to be delivered to the $1$st receiver. The transmitter sends these symbols on its $K$ transmit antennas during the $1$st stage. The receivers obtain \textit{jamming free} LCs of these symbols whenever they are not jammed.
Each of these $K$ stages ends when the LCs intended for the corresponding receiver have been received \textit{jamming free} by at least one of the $K$ receivers. This information (i.e., which LCs were received, and whether or not they were received un-jammed) is available at the transmitter via d-$\mathsf{CSIT}$ and d-$\mathsf{JSIT}$.
Let $d$ denote the duration of one such stage. A particular receiver is not jammed with probability $\lambda_{\eta}$, and hence
$\lambda_{\eta} d$ jamming free LCs are available at each of the $K$ receivers. Since $K$ jamming free LCs suffice to decode $K$ symbols, we enforce
$K\times (\lambda_{\eta} d)=K \Rightarrow d=\frac{1}{\lambda_{\eta}}$.
Since there are $K$ such stages in the $1$st phase, the total time duration of this phase is
$\tau_1=\frac{K}{\lambda_{\eta}}$.
At the end of this phase, each receiver requires $(K-1)\lambda_{\eta} d=(K-1)$ additional jamming free LCs, available at the other receivers, in order to decode its symbols. Each receiver also holds order $1$ LCs (side information) that are required by the other receivers; these order $1$ LCs are used to create order $2$ LCs, which are subsequently used in the $2$nd phase of the algorithm.
Notice that the total of $(K-1)K$ order $1$ LCs available at the end of this phase can be used to create $\frac{(K-1)K}{2}$ order $2$ LCs for the $2$nd phase of the algorithm. Thus the $\mathsf{DoF}$ can be represented as
\begin{align}
\mathsf{DoF}_1(\eta,K)=\frac{K^{2}}{\tau_1+\tau_2},
\end{align}
where $\tau_2$ is the total time taken to deliver $\frac{(K-1)K}{2}$ order $2$ LCs to the receivers and is given by
\begin{align}
\tau_2=\frac{\frac{(K-1)K}{2}}{\mathsf{DoF}_2(\eta,K)}.
\end{align}
Thus the $\mathsf{DoF}_1(\eta,K)$ is given by
\begin{align}
\mathsf{DoF}_1(\eta,K)&=\frac{K^2}{\frac{K}{\lambda_{\eta}}+\frac{\frac{(K-1)K}{2}}{\mathsf{DoF}_2(\eta,K)}}=\frac{K}{\frac{1}{\lambda_{\eta}}+\frac{\frac{(K-1)}{2}}{\mathsf{DoF}_2(\eta,K)}}.
\end{align}
Notice that this conforms to the recursion given in \eqref{DoF_Iterative_K_user}.
\paragraph{Phase $j$:}
In the $j$th phase, the transmitter sends $(K-j+1)$ order $j$ symbols on $(K-j+1)$ of its transmit antennas. The $j$th phase has ${K \choose j}$ stages, one for each of the ${K \choose j}$ different subsets of $j\leq K$ receivers. It can be shown that $(j+1)$ jamming free order $j$ symbols (LCs) can be used to create $j$ symbols (LCs) of order $(j+1)$; equivalently, one order $j$ symbol helps to create $\frac{j}{j+1}$ order $(j+1)$ symbols. Hence, the $(K-j+1)$ jamming free order $j$ symbols transmitted in each stage of the $j$th phase help to create $(K-j)\frac{j}{j+1}$ order $(j+1)$ symbols, which are subsequently transmitted in the $(j+1)$th phase of the algorithm.
Since each receiver is not jammed with probability $\lambda_{\eta}$, the average time required to deliver an order $j$ symbol (LC) is $\frac{1}{\lambda_{\eta}}$. Since there are ${K \choose j}$ stages, the total time duration of this phase is $\frac{{K \choose j}}{\lambda_{\eta}}$. Thus the $j$th phase delivers $(K-j+1){K \choose j}$ \textit{jamming free} symbols of order $j$ in $\frac{{K \choose j}}{\lambda_{\eta}}$ time slots and generates $j{K \choose j+1}$ order $(j+1)$ symbols, which are delivered to the receivers in the subsequent phases. The $K$th phase transmits symbols of order $K$ and does not create any new symbols (LCs).
Thus we have\vspace{-10pt}
\begin{align}
\mathsf{DoF}_{j}(\eta,K)=\frac{(K-j+1){K \choose j}}{\frac{{K \choose j}}{\lambda_{\eta}}+\frac{j{K\choose {j+1}}}{\mathsf{DoF}_{j+1}(\eta,K)}}.
\end{align}
Using this recurrence relation, we can show that
\begin{align}\label{DoF_K_User_Result}
\mathsf{DoF}_{1}(\eta,K)=\lambda_{\eta} \frac{K}{1+\frac{1}{2}+\ldots+\frac{1}{K}}.
\end{align}
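Again as a quick check (a sketch, not part of the proof), unrolling \eqref{DoF_Iterative_K_user} with the base case $\mathsf{DoF}_K(\eta,K)=\lambda_{\eta}$ (the $K$th phase creates no new symbols, and each order $K$ LC takes $\frac{1}{\lambda_{\eta}}$ slots per receiver on average) reproduces \eqref{DoF_K_User_Result} exactly:
\begin{verbatim}
# Sketch: the jammed recursion yields lambda_eta * DoF_MAT(K).
from fractions import Fraction
from math import comb

def dof_dd(lam, K):
    dof = lam                             # base case: order-K symbols
    for j in range(K - 1, 0, -1):
        dof = ((K - j + 1) * comb(K, j)
               / (comb(K, j) / lam + j * comb(K, j + 1) / dof))
    return dof

H = lambda K: sum(Fraction(1, k) for k in range(1, K + 1))
for K in range(1, 8):
    for lam in (Fraction(1, 4), Fraction(1, 2), Fraction(3, 4)):
        assert dof_dd(lam, K) == lam * K / H(K)
\end{verbatim}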
\section{Conclusions}\label{Conclusions}
In this paper, the MISO broadcast channel has been studied in the presence of a time-varying jammer. We introduced the notion of jammer state information at the transmitter ($\mathsf{JSIT}$) to indicate the presence or absence of information regarding the jammer's activity.
Our results illuminate the interplay between $\mathsf{CSIT}$ and $\mathsf{JSIT}$ and their associated impact on the $\mathsf{DoF}$ region.
When perfect $\mathsf{CSIT}$ is available, a randomized zero-forcing precoding scheme achieves the same $\mathsf{DoF}$ region irrespective of the availability or unavailability of $\mathsf{JSIT}$.
On the other hand, for the case of delayed $\mathsf{CSIT}$ and $\mathsf{JSIT}$, our results show that the jammer and channel state information must be used synergistically in order
to provide $\mathsf{DoF}$ gains. Whenever perfect $\mathsf{JSIT}$ is available, the jammer's states are separable and the optimal strategy is to send information symbols independently across the different jamming states. The result for the $\mathsf{NN}$ configuration quantifies the $\mathsf{DoF}$ loss when both $\mathsf{JSIT}$ and $\mathsf{CSIT}$ are unavailable. The results for the $K$-user MISO BC show how the sum $\mathsf{DoF}$ scales with the number of users in the presence of jamming attacks. Finally, several interesting open questions and directions emerge from this work. We outline some of these below.
\begin{enumerate}
\item It remains unclear whether the inner bounds on the $\mathsf{DoF}$ region for the $\mathsf{DN}$ and $\mathsf{ND}$ configurations are optimal; the exact $\mathsf{DoF}$ region for these configurations remains an interesting open problem. The $\mathsf{DoF}$ region achieved in the $\mathsf{DD}$ configuration for the $2$-user MISO BC serves as an outer bound for both the $\mathsf{DN}$ and $\mathsf{ND}$ configurations. Improving both the inner and outer bounds for the $\mathsf{DN}$ and $\mathsf{ND}$ configurations is a challenging problem.
\item For the $\mathsf{DD}$ configuration, a $3$-stage scheme is proposed to achieve the optimal $\mathsf{DoF}$ region. In the $3$rd stage of this coding scheme, the transmitter does not require any $\mathsf{CSIT}$ or $\mathsf{JSIT}$. This raises an interesting question: what is the minimum fraction of time over which $\mathsf{CSIT}$ and $\mathsf{JSIT}$ must be acquired in order to achieve the optimal $\mathsf{DoF}$? A similar problem has been considered in the absence of a jammer \cite{ACSIT2012}, in which the minimum amount of $\mathsf{CSIT}$ required to achieve a particular $\mathsf{DoF}$ value is characterized.
\item Finally, the results presented in this paper can possibly be extended to scenarios where the jammer's statistics are not stationary. While the analysis presented in this paper assumes that the jammer's states are i.i.d.\ and that its statistics are constant across time, it would be interesting to understand the behavior of the $\mathsf{DoF}$ regions in a scenario where the jammer's states are correlated across time, and possibly also correlated with the transmitted signals.
\end{enumerate}
\section{Appendix}\label{Appendix}
\subsection{Converse Proof for Theorem \ref{TheoremPP}}
We first present the proof of the bounds $d_{1}\leq \lambda_{1}$ and $d_{2}\leq \lambda_{2}$ for the ($\mathsf{CSIT}$, $\mathsf{JSIT}$) configuration $\mathsf{PP}$. Clearly, these bounds also continue to serve as valid outer bounds for the worse configurations $\mathsf{PD}$ and $\mathsf{PN}$. Since these bounds are symmetric, it suffices to prove that $d_{1}\leq \lambda_{1}$. We have the following sequence of bounds for receiver $1$:
\begin{align}
nR_{1}&= H(W_{1})= H(W_{1}|\mathbf{H}^{n}, S_1^{n}, S_{2}^{n})\\
&= I(W_{1}; Y_{1}^{n}| \mathbf{H}^{n}, S_1^{n}, S_{2}^{n}) + H(W_{1}|Y_{1}^{n}, \mathbf{H}^{n}, S_1^{n}, S_{2}^{n})\\
&\leq I(W_{1}; Y_{1}^{n}| \mathbf{H}^{n}, S_1^{n}, S_{2}^{n}) + n\epsilon_{n}\label{Fano1}\\
&= h(Y_{1}^{n}| \mathbf{H}^{n}, S_1^{n}, S_{2}^{n}) - h(Y_{1}^{n}| W_{1}, \mathbf{H}^{n}, S_1^{n}, S_{2}^{n}) +n\epsilon_{n}\\
&\leq n\log(P_{T})- h(Y_{1}^{n}| W_{1}, \mathbf{H}^{n}, S_1^{n}, S_{2}^{n}) +n\epsilon_{n}\\
&\leq n\log(P_{T})- h(Y_{1}^{n}| \mathbf{X}^{n}, W_{1}, \mathbf{H}^{n}, S_1^{n}, S_{2}^{n}) +n\epsilon_{n}\\
&= n\log(P_{T})- h(S_{1}^{n}\mathbf{G}_{1}^{n}\mathbf{J}_{1}^{n}+ N_{1}^{n}| \mathbf{X}^{n}, W_{1}, \mathbf{H}^{n}, S_1^{n}, S_{2}^{n}) +n\epsilon_{n}\\
&= n\log(P_{T})- h(S_{1}^{n}\mathbf{G}_{1}^{n}\mathbf{J}_{1}^{n}+ N_{1}^{n}| S_1^{n},S_{2}^{n}) +n\epsilon_{n}\\
&\leq n\log(P_{T})- n(\lambda_{10}+\lambda_{11})\log(P_{T})+n\epsilon_{n}\label{eq2}\\
&= n(1-\lambda_{10}-\lambda_{11})\log(P_{T}) + n\epsilon_{n}\\
&= n(\lambda_{00}+\lambda_{01})\log(P_{T})+ n\epsilon_{n}\\
&=n\lambda_{1}\log(P_{T})+ n\epsilon_{n},\label{eq3}
\end{align}
where (\ref{Fano1}) follows from Fano's inequality, and \eqref{eq2} is obtained from the fact that $\mbox{Pr}(S_1(t)=1)=(\lambda_{11}+\lambda_{10})$ and the assumption that the jammer's signal is AWGN with power $P_{T}$. Normalizing \eqref{eq3} by $n\log(P_{T})$, and taking the limit $n\rightarrow \infty$ and then $P_{T}\rightarrow \infty$, we obtain
\begin{align}
d_{1}&\leq \lambda_{1}.
\end{align}
Along similar lines, since user $2$ is jammed with probability $(\lambda_{11}+\lambda_{01})$, it can readily be shown that
\begin{align}
d_{2}&\leq(\lambda_{00}+\lambda_{10})=\lambda_{2}.
\end{align}
\subsection{Converse Proof for Theorem \ref{TheoremDD}}
We next provide the proof for the ($\mathsf{CSIT}$, $\mathsf{JSIT}$) configuration $\mathsf{DD}$, in which the transmitter has delayed $\mathsf{CSIT}$ and delayed $\mathsf{JSIT}$. In this case, we prove the bound:
\begin{align}
\frac{d_{1}}{\lambda_{1}}+\frac{d_{2}}{(\lambda_{1}+\lambda_{2})}&\leq 1.
\end{align}
Let $\Omega=(\mathbf{H}^{n}, S_{1}^{n}, S_{2}^{n})$ denote the global $\mathsf{CSIT}$ and $\mathsf{JSIT}$ for the entire block length $n$.
We next enhance the original MISO broadcast channel and make it physically degraded by letting a genie provide
the output of receiver $1$ to receiver $2$. Formally, in the new MISO BC, receiver $1$ has $(Y_{1}^{n}, \Omega)$
and receiver $2$ has $(Y_{1}^{n}, Y_{2}^{n}, \Omega)$. We next note that for a physically degraded BC, it is known
from \cite{ElGamalFB} that feedback from the receivers \textit{does not} increase the capacity region.
We can therefore remove delayed $\mathsf{CSIT}$ and delayed $\mathsf{JSIT}$ from the transmitter without decreasing the capacity region of the enhanced MISO BC.
The capacity region for this model serves as an outer bound to the capacity region of the original MISO BC.
Henceforth, we will focus on the model in which receiver $1$ has $(Y_{1}^{n}, \Omega)$, receiver $2$ has $(Y_{1}^{n}, Y_{2}^{n}, \Omega)$ and most importantly, the transmitter has \textit{no} $\mathsf{CSIT}$ and \textit{no} $\mathsf{JSIT}$.
For such a model, we next state the following key property, which we call the statistical equivalence property (denoted in short by SEP):
\begin{align}
h(\mathbf{H}_{1}(t)\mathbf{X}(t)+ N_{1}(t)) &= h(\mathbf{H}_{2}(t)\mathbf{X}(t)+ N_{2}(t)).\label{SEP}
\end{align}
This property follows from the following facts:
\begin{enumerate}
\item $\mathbf{H}_{1}(t)$ and $\mathbf{H}_{2}(t)$ are drawn from the same distribution.
\item $N_{1}(t)$ and $N_{2}(t)$ are statistically equivalent, i.e., drawn from the same distribution.
\item $\mathbf{X}(t)$ is independent of $(\mathbf{H}_{1}^{n}, \mathbf{H}_{2}^{n}, N_{1}^{n}, N_{2}^{n})$.
\end{enumerate}
With these in place, we have the following sequence of bounds for receiver $1$:
\begin{align}
nR_{1}&= H(W_{1})= H(W_{1}|\Omega)\\
&\leq I(W_{1}; Y_{1}^{n} | \Omega) + n\epsilon_{n}\\
&= h(Y_{1}^{n}|\Omega) - h(Y_{1}^{n}|W_{1}, \Omega) + n\epsilon_{n}\\
&\leq n\log(P_{T})- h(Y_{1}^{n}|W_{1}, \Omega) + n\epsilon_{n}.\label{Term1}
\end{align}
We now focus on the second term appearing in (\ref{Term1}):
\begin{align}
h(Y_{1}^{n}|W_{1}, \Omega)&=\sum_{t=1}^{n}h(Y_{1t} | W_{1}, \Omega, Y_{1}^{t-1})\geq \sum_{t=1}^{n}h(Y_{1t} | W_{1}, \Omega, Y_{1}^{t-1}, Y_{2}^{t-1})\\
&= \sum_{t=1}^{n}h(Y_{1t} | S_{1}(t), S_{2}(t), \underbrace{W_{1}, \Omega\setminus\{S_{1}(t), S_{2}(t)\}, Y_{1}^{t-1}, Y_{2}^{t-1}}_{\triangleq U_{t}})\\
&= \sum_{t=1}^{n}h(Y_{1t} | S_{1}(t), S_{2}(t), U_{t})\\
&= \sum_{t=1}^{n}\Big[\lambda_{00}h(Y_{1t} | S_{1}(t)=0, S_{2}(t)=0, U_{t}) \nonumber\\
&\hspace{1.3cm} + \lambda_{01}h(Y_{1t} | S_{1}(t)=0, S_{2}(t)=1, U_{t}) \nonumber\\
&\hspace{1.3cm} + \lambda_{10}h(Y_{1t} | S_{1}(t)=1, S_{2}(t)=0, U_{t}) \nonumber\\
&\hspace{1.3cm} + \lambda_{11}h(Y_{1t} | S_{1}(t)=1, S_{2}(t)=1, U_{t})\Big]\\
&= \sum_{t=1}^{n}\Big[\lambda_{00}h(\mathbf{H}_{1}(t)\mathbf{X}(t)+ N_{1}(t) | U_{t}) \nonumber\\
&\hspace{1.3cm} + \lambda_{01}h(\mathbf{H}_{1}(t)\mathbf{X}(t)+ N_{1}(t) | U_{t}) \nonumber\\
&\hspace{1.3cm} + \lambda_{10}h(\mathbf{H}_{1}(t)\mathbf{X}(t)+ \mathbf{G}_{1}(t)\mathbf{J}(t)+ N_{1}(t) | U_{t}) \nonumber\\
&\hspace{1.3cm} + \lambda_{11}h(\mathbf{H}_{1}(t)\mathbf{X}(t)+ \mathbf{G}_{1}(t)\mathbf{J}(t)+ N_{1}(t) | U_{t})\Big]\label{E1}\\
&= \sum_{t=1}^{n}\Big[(\lambda_{00}+\lambda_{01})h(\mathbf{H}_{1}(t)\mathbf{X}(t)+ N_{1}(t) | U_{t})\nonumber\\
&\hspace{1.3cm} + (\lambda_{10}+\lambda_{11})h(\mathbf{H}_{1}(t)\mathbf{X}(t)+ \mathbf{G}_{1}(t)\mathbf{J}(t)+ N_{1}(t) | U_{t}) \Big]\\
&\geq \sum_{t=1}^{n}\Big[(\lambda_{00}+\lambda_{01})h(\mathbf{H}_{1}(t)\mathbf{X}(t)+ N_{1}(t) | U_{t})\nonumber\\
&\hspace{1.3cm} + (\lambda_{10}+\lambda_{11})h(\mathbf{H}_{1}(t)\mathbf{X}(t)+ \mathbf{G}_{1}(t)\mathbf{J}(t)+ N_{1}(t) | \mathbf{H}_{1}(t)\mathbf{X}(t), U_{t}) \Big]
\end{align}
\begin{align}
&= \sum_{t=1}^{n}\Big[(\lambda_{00}+\lambda_{01})h(\mathbf{H}_{1}(t)\mathbf{X}(t)+ N_{1}(t) | U_{t})\nonumber\\
&\hspace{1.3cm} + (\lambda_{10}+\lambda_{11})h( \mathbf{G}_{1}(t)\mathbf{J}(t)+ N_{1}(t)) \Big]\\
&\geq \sum_{t=1}^{n}\Big[(\lambda_{00}+\lambda_{01})\underbrace{h(\mathbf{H}_{1}(t)\mathbf{X}(t)+ N_{1}(t) | U_{t})}_{\triangleq \eta_{t}}+ (\lambda_{10}+\lambda_{11})\log(P_{T})\Big]\\
&= (\lambda_{00}+\lambda_{01})\sum_{t=1}^{n}\eta_{t} + n(\lambda_{10}+\lambda_{11})\log(P_{T})\label{E2}
\end{align}
where (\ref{E1}) follows from the fact that the random variables $S_{1}(t), S_{2}(t)$ are i.i.d. across time, and independent of all other random variables including $(U_{t}, N_{1}(t), \mathbf{X}(t), \mathbf{H}_{1}(t))$, i.e., we have used that
$h(\mathbf{H}_{1}(t)\mathbf{X}(t)+ N_{1}(t) | S_{1}(t)=0, S_{2}(t)=0, U_{t})= h(\mathbf{H}_{1}(t)\mathbf{X}(t)+ N_{1}(t) | U_{t})$ and similar simplifications for the remaining three terms. In (\ref{E2}), we have defined
\begin{align}
\eta_{t}\triangleq h(\mathbf{H}_{1}(t)\mathbf{X}(t)+ N_{1}(t) | U_{t}).
\end{align}
Substituting (\ref{E2}) back in (\ref{Term1}), we obtain
\begin{align}
nR_{1}&\leq n(\lambda_{00}+\lambda_{01})\log(P_{T}) - (\lambda_{00}+\lambda_{01})\sum_{t=1}^{n}\eta_{t} + n\epsilon_{n}\\
&= n\lambda_{1}\log(P_{T}) - \lambda_{1}\Bigg[\sum_{t=1}^{n}\eta_{t}\Bigg] + n\epsilon_{n}\label{Term1a}
\end{align}
We next focus on the receiver $2$ which has access to both $Y_{1}^{n}$ and $Y_{2}^{n}$:
\begin{align}
nR_{2}&= H(W_{2})= H(W_{2}|W_{1},\Omega)\\
&\leq I(W_{2}; Y_{1}^{n}, Y_{2}^{n}| W_{1}, \Omega) + n\epsilon_{n}\\
&= h(Y_{1}^{n}, Y_{2}^{n}| W_{1}, \Omega) - h(Y_{1}^{n}, Y_{2}^{n}| W_{1}, W_{2}, \Omega) + n\epsilon_{n}\\
&\leq h(Y_{1}^{n}, Y_{2}^{n}| W_{1}, \Omega) - h(Y_{1}^{n}, Y_{2}^{n}| \mathbf{X}^{n}, W_{1}, W_{2}, \Omega) + n\epsilon_{n}\\
&\leq h(Y_{1}^{n}, Y_{2}^{n}| W_{1}, \Omega) - n(\lambda_{01}+\lambda_{10}+2\lambda_{11})\log(P_{T}) + n\epsilon_{n},\label{E3}
\end{align}
where (\ref{E3}) follows from the fact that given $(\mathbf{X}^{n}, W_{1}, W_{2}, \Omega)$, the contribution of the information bearing signals (i.e., $\mathbf{H}_{k}^{n}\mathbf{X}^{n}$ for $k=1,2$) can be removed from $(Y_{1}^{n}, Y_{2}^{n})$, and we are left only with jamming signals (which are assumed to be Gaussian with power $P_{T}$, i.i.d. across time and independent of all other random variables) and unit variance Gaussian noise, the entropy of which can be lower bounded as in (\ref{E3}).
We next expand the first term in (\ref{E3}) as follows:
\begin{align}
h(Y_{1}^{n}, Y_{2}^{n}| W_{1}, \Omega)&=\sum_{t=1}^{n} h(Y_{1t}, Y_{2t}| W_{1}, \Omega, Y_{1}^{t-1}, Y_{2}^{t-1})\\
&= \sum_{t=1}^{n}h(Y_{1t}, Y_{2t} | S_{1}(t), S_{2}(t), \underbrace{W_{1}, \Omega\setminus\{S_{1}(t), S_{2}(t)\}, Y_{1}^{t-1}, Y_{2}^{t-1}}_{\triangleq U_{t}})\\
&= \sum_{t=1}^{n} h(Y_{1t}, Y_{2t} | S_{1}(t), S_{2}(t), U_{t})
\end{align}
\begin{align}
&= \sum_{t=1}^{n} \Big[ \lambda_{00}h(Y_{1t}, Y_{2t} | S_{1}(t)=0, S_{2}(t)=0, U_{t})\nonumber\\
&\hspace{1.3cm} + \lambda_{01}h(Y_{1t}, Y_{2t} | S_{1}(t)=0, S_{2}(t)=1, U_{t})\nonumber\\
&\hspace{1.3cm} + \lambda_{10}h(Y_{1t}, Y_{2t} | S_{1}(t)=1, S_{2}(t)=0, U_{t})\nonumber\\
&\hspace{1.3cm} + \lambda_{11}h(Y_{1t}, Y_{2t} | S_{1}(t)=1, S_{2}(t)=1, U_{t})\Big].\label{E4}
\end{align}
We next bound each one of the four terms in (\ref{E4}) as follows:
\begin{align}
&h(Y_{1t}, Y_{2t} | S_{1}(t)=0, S_{2}(t)=0, U_{t})\nonumber\\
&= h(\mathbf{H}_{1}(t)\mathbf{X}(t)+ N_{1}(t), \mathbf{H}_{2}(t)\mathbf{X}(t)+ N_{2}(t)| S_{1}(t)=0, S_{2}(t)=0, U_{t})\\
&\leq h(\mathbf{H}_{1}(t)\mathbf{X}(t)+ N_{1}(t)| S_{1}(t)=0, S_{2}(t)=0, U_{t}) \nonumber\\
&\qquad+ h(\mathbf{H}_{2}(t)\mathbf{X}(t)+ N_{2}(t)| S_{1}(t)=0, S_{2}(t)=0, U_{t})\\
&= h(\mathbf{H}_{1}(t)\mathbf{X}(t)+ N_{1}(t)| U_{t}) + h(\mathbf{H}_{2}(t)\mathbf{X}(t)+ N_{2}(t)| U_{t})\\
&= 2\eta_{t}\label{SEPuse12},
\end{align}
where in (\ref{SEPuse12}), we have made use of the (conditional version of) statistical equivalence property (SEP) for the two receivers as stated in (\ref{SEP}).
\begin{align}
&h(Y_{1t}, Y_{2t} | S_{1}(t)=0, S_{2}(t)=1, U_{t})\nonumber\\
&= h(\mathbf{H}_{1}(t)\mathbf{X}(t)+ N_{1}(t), \mathbf{H}_{2}(t)\mathbf{X}(t)+ \mathbf{G}_{2}(t)\mathbf{J}(t)+ N_{2}(t)| S_{1}(t)=0, S_{2}(t)=1, U_{t})\\
&\leq h(\mathbf{H}_{1}(t)\mathbf{X}(t)+ N_{1}(t)| S_{1}(t)=0, S_{2}(t)=1, U_{t}) \nonumber\\
&\qquad+ h(\mathbf{H}_{2}(t)\mathbf{X}(t)+ \mathbf{G}_{2}(t)\mathbf{J}(t)+N_{2}(t)| S_{1}(t)=0, S_{2}(t)=1, U_{t})\\
&\leq h(\mathbf{H}_{1}(t)\mathbf{X}(t)+ N_{1}(t)| U_{t}) + \log(P_{T})\\
&= \eta_{t}+ \log(P_{T})\label{SEPuse1}.
\end{align}
The two remaining cases are bounded analogously. In summary, we have
\begin{align}
h(Y_{1t}, Y_{2t} | S_{1}(t)=0, S_{2}(t)=0, U_{t})&\leq 2\eta_{t}\\
h(Y_{1t}, Y_{2t} | S_{1}(t)=0, S_{2}(t)=1, U_{t})&\leq \eta_{t} + \log(P_{T})\\
h(Y_{1t}, Y_{2t} | S_{1}(t)=1, S_{2}(t)=0, U_{t})&\leq \eta_{t} + \log(P_{T})\\
h(Y_{1t}, Y_{2t} | S_{1}(t)=1, S_{2}(t)=1, U_{t})&\leq 2\log(P_{T}).
\end{align}
Substituting these back in (\ref{E4}), we obtain
\begin{align}
h(Y_{1}^{n}, Y_{2}^{n}| W_{1}, \Omega)&\leq n(\lambda_{01}+\lambda_{10}+2\lambda_{11})\log(P_{T}) + (\lambda_{01}+\lambda_{10}+2\lambda_{00})\Bigg[\sum_{t=1}^{n}\eta_{t}\Bigg]\label{E5}
\end{align}
Upon substituting (\ref{E5}) back in (\ref{E3}), we have the following bound on $R_{2}$:
\begin{align}
nR_{2}&\leq (\lambda_{01}+\lambda_{10}+2\lambda_{00})\Bigg[\sum_{t=1}^{n}\eta_{t}\Bigg] + n\epsilon_{n}\\
&= (\lambda_{1}+\lambda_{2})\Bigg[\sum_{t=1}^{n}\eta_{t}\Bigg] + n\epsilon_{n}\label{Term2a}
\end{align}
In summary, from (\ref{Term1a}) and (\ref{Term2a}), we can write
\begin{align}
nR_{1}&\leq n\lambda_{1}\log(P_{T}) - \lambda_{1}\Bigg[\sum_{t=1}^{n}\eta_{t}\Bigg] + n\epsilon_{n}\label{Term1Final}\\
nR_{2}&\leq (\lambda_{1}+\lambda_{2})\Bigg[\sum_{t=1}^{n}\eta_{t}\Bigg] + n\epsilon_{n}\label{Term2Final}
\end{align}
Eliminating the term $\Big[\sum_{t=1}^{n}\eta_{t}\Big]$, we obtain
\begin{align}
n\frac{R_{1}}{\lambda_{1}} + n\frac{R_{2}}{(\lambda_{1}+\lambda_{2})}&\leq n\log(P_{T}) + n\epsilon^{'}_{n}
\end{align}
Normalizing by $n\log(P_{T})$, and taking the limits $n\rightarrow \infty$ and then $P_{T}\rightarrow \infty$, we obtain the bound:
\begin{align}
\frac{d_{1}}{\lambda_{1}}+ \frac{d_{2}}{(\lambda_{1}+\lambda_{2})}&\leq 1.
\end{align}
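The elimination of the common $\sum_{t}\eta_{t}$ term above involves some bookkeeping; the following symbolic sketch (a sanity check only, where $E$ stands for the normalized sum $\frac{1}{n}\sum_{t}\eta_{t}$ and the $\epsilon_{n}$ terms are dropped) verifies the step:
\begin{verbatim}
# Sketch: check the elimination step in the DD converse with sympy.
import sympy as sp

l1, l2, E, logP = sp.symbols('lambda1 lambda2 E logP', positive=True)
R1 = l1 * logP - l1 * E    # (Term1Final), normalized by n
R2 = (l1 + l2) * E         # (Term2Final), normalized by n
assert sp.simplify(R1 / l1 + R2 / (l1 + l2) - logP) == 0
\end{verbatim}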
Reversing the role of receivers $1$ and $2$, i.e., making receiver $2$ degraded with respect to receiver $1$, we can similarly obtain the other bound
\begin{align}
\frac{d_{1}}{(\lambda_{1}+\lambda_{2})}+ \frac{d_{2}}{\lambda_{2}}&\leq 1.
\end{align}
This completes the proof of the converse for Theorem \ref{TheoremDD}.
\subsection{Converse Proof for Theorem \ref{TheoremDP}}
We next provide the proof for the ($\mathsf{CSIT}$, $\mathsf{JSIT}$) configuration $\mathsf{DP}$, in which the transmitter has delayed $\mathsf{CSIT}$ and perfect (instantaneous) $\mathsf{JSIT}$. In this case, we prove the bound:
\begin{align}
2d_{1}+d_{2}&\leq 2\lambda_{00}+2\lambda_{01}+\lambda_{10}
\end{align}
Let $\Omega=(\mathbf{H}^{n}, S_{1}^{n}, S_{2}^{n})$ denote the global $\mathsf{CSIT}$ and $\mathsf{JSIT}$ for the entire block length $n$.
As in the proof for Theorem \ref{TheoremDD}, we enhance the original MISO broadcast channel and make it physically degraded by letting a genie provide
the output of receiver $1$ to receiver $2$. Formally, in the new MISO BC, receiver $1$ has $(Y_{1}^{n}, \Omega)$
and receiver $2$ has $(Y_{1}^{n}, Y_{2}^{n}, \Omega)$. We next note that for a physically degraded BC, it is known
from \cite{ElGamalFB} that feedback from the receivers \textit{does not} increase the capacity region.
We can therefore remove delayed $\mathsf{CSIT}$ from the transmitter without decreasing the capacity region of the enhanced MISO BC.
The capacity region for this model serves as an outer bound to the capacity region of the original MISO BC.
Henceforth, we will focus on the model in which receiver $1$ has $(Y_{1}^{n}, \Omega)$, receiver $2$ has $(Y_{1}^{n}, Y_{2}^{n}, \Omega)$ and
most importantly, the transmitter has \textit{no} $\mathsf{CSIT}$. Note that, unlike in the proof of Theorem \ref{TheoremDD}, in this case we \textit{cannot} remove the
assumption of perfect $\mathsf{JSIT}$. Recall that in the proof of Theorem \ref{TheoremDD}, we made use of the following relationships (which we called the statistical equivalence property):
\begin{align}
&h(\mathbf{H}_{1}(t)\mathbf{X}(t)+ N_{1}(t) | S_{1}(t)=i, S_{2}(t)=j, U_{t}) \nonumber\\
&\quad= h(\mathbf{H}_{2}(t)\mathbf{X}(t)+ N_{2}(t) | S_{1}(t)=i^{'}, S_{2}(t)=j^{'}, U_{t}).\label{SEPDD}
\end{align}
for $i, i^{'}, j, j^{'}\in \{0,1\}$.
In this case, we can only use a restricted version of the statistical equivalence property:
\begin{align}
&h(\mathbf{H}_{1}(t)\mathbf{X}(t)+ N_{1}(t) | S_{1}(t)=0, S_{2}(t)=0, U_{t}) \nonumber\\
&\quad= h(\mathbf{H}_{2}(t)\mathbf{X}(t)+ N_{2}(t) | S_{1}(t)=0, S_{2}(t)=0, U_{t}).\label{SEPDP}
\end{align}
The reason is that in the $\mathsf{DP}$ configuration, since the transmitter has perfect $\mathsf{JSIT}$, the conditional distributions $p(\mathbf{X}(t)| S_{1}(t)=i, S_{2}(t)=j, U_{t})$
can depend explicitly on $(i,j)$, the realization of the jammer's strategy at time $t$, which was not the case in Theorem \ref{TheoremDD}.
With these in place, we have the following sequence of bounds for receiver $1$:
\begin{align}
nR_{1}&\leq n\log(P_{T})- h(Y_{1}^{n}|W_{1}, \Omega) + n\epsilon_{n}.\label{TermDP1}
\end{align}
We next focus on the second term in (\ref{TermDP1}):
\begin{align}
h(Y_{1}^{n}|W_{1}, \Omega)&=\sum_{t=1}^{n}h(Y_{1t} | W_{1}, \Omega, Y_{1}^{t-1})\geq \sum_{t=1}^{n}h(Y_{1t} | W_{1}, \Omega, Y_{1}^{t-1}, Y_{2}^{t-1})\\
&= \sum_{t=1}^{n}h(Y_{1t} | S_{1}(t), S_{2}(t), \underbrace{W_{1}, \Omega\setminus\{S_{1}(t), S_{2}(t)\}, Y_{1}^{t-1}, Y_{2}^{t-1}}_{\triangleq U_{t}})\\
&= \sum_{t=1}^{n}h(Y_{1t} | S_{1}(t), S_{2}(t), U_{t})\nonumber \\
&= \sum_{t=1}^{n}\Big[\lambda_{00}h(Y_{1t} | S_{1}(t)=0, S_{2}(t)=0, U_{t}) \nonumber\\
&\hspace{1.3cm} + \lambda_{01}h(Y_{1t} | S_{1}(t)=0, S_{2}(t)=1, U_{t}) \nonumber\\
&\hspace{1.3cm} + \lambda_{10}h(Y_{1t} | S_{1}(t)=1, S_{2}(t)=0, U_{t}) \nonumber\\
&\hspace{1.3cm} + \lambda_{11}h(Y_{1t} | S_{1}(t)=1, S_{2}(t)=1, U_{t})\Big]\nonumber\\
&= \sum_{t=1}^{n}\Big[\lambda_{00}h(\mathbf{H}_{1}(t)\mathbf{X}(t)+ N_{1}(t) | S_{1}(t)=0, S_{2}(t)=0, U_{t}) \nonumber\\
&\hspace{1.3cm} + \lambda_{01}\underbrace{h(\mathbf{H}_{1}(t)\mathbf{X}(t)+ N_{1}(t) | S_{1}(t)=0, S_{2}(t)=1, U_{t})}_{\geq 0} \nonumber\\
&\hspace{1.3cm} + \lambda_{10}\underbrace{h(\mathbf{H}_{1}(t)\mathbf{X}(t)+ \mathbf{G}_{1}(t)\mathbf{J}(t)+ N_{1}(t) | S_{1}(t)=1, S_{2}(t)=0, U_{t})}_{\geq \log(P_{T})} \nonumber\\
&\hspace{1.3cm} + \lambda_{11}\underbrace{h(\mathbf{H}_{1}(t)\mathbf{X}(t)+ \mathbf{G}_{1}(t)\mathbf{J}(t)+ N_{1}(t) | S_{1}(t)=1, S_{2}(t)=1, U_{t})}_{\geq \log(P_{T})}\Big]\label{E1DP}\nonumber \\
&\geq \sum_{t=1}^{n}\Big[\lambda_{00}\underbrace{h(\mathbf{H}_{1}(t)\mathbf{X}(t)+ N_{1}(t) | S_{1}(t)=0, S_{2}(t)=0, U_{t})}_{\triangleq \eta^{(00)}_{t}} \nonumber\\
&\hspace{1.3cm} + \lambda_{01}\underbrace{h(\mathbf{H}_{1}(t)\mathbf{X}(t)+ N_{1}(t) | S_{1}(t)=0, S_{2}(t)=1, U_{t})}_{\triangleq \eta^{(01)}_{t}} \nonumber\\
&\hspace{1.3cm} + (\lambda_{10}+\lambda_{11})\log(P_{T})\Big]
\end{align}
\begin{align}
&= \lambda_{00}\sum_{t=1}^{n}\eta^{(00)}_{t} + \lambda_{01}\sum_{t=1}^{n}\eta^{(01)}_{t} + n(\lambda_{10}+\lambda_{11})\log(P_{T}),\label{E2DP}
\end{align}
where in (\ref{E1DP}), we used the fact that the elements of $\mathbf{J}(t)$ are i.i.d.\ with variance $P_{T}$, and in (\ref{E2DP}), we have defined
\begin{align}
\eta^{(00)}_{t}&\triangleq h(\mathbf{H}_{1}(t)\mathbf{X}(t)+ N_{1}(t) | S_{1}(t)=0, S_{2}(t)=0,U_{t})\\
\eta^{(01)}_{t}&\triangleq h(\mathbf{H}_{1}(t)\mathbf{X}(t)+ N_{1}(t) | S_{1}(t)=0, S_{2}(t)=1,U_{t}).
\end{align}
Substituting (\ref{E2DP}) in (\ref{TermDP1}), we obtain
\begin{align}
nR_{1}&\leq n(\lambda_{00}+\lambda_{01})\log(P_{T}) - \lambda_{00}\sum_{t=1}^{n}\eta^{(00)}_{t} - \lambda_{01}\sum_{t=1}^{n}\eta^{(01)}_{t} + n\epsilon_{n}\label{F1DP}
\end{align}
We next focus on the receiver $2$ which has access to both $Y_{1}^{n}$ and $Y_{2}^{n}$. We can obtain the following bound similar to the one obtained in the proof for Theorem \ref{TheoremDD}:
\begin{align}
nR_{2}&\leq h(Y_{1}^{n}, Y_{2}^{n}| W_{1}, \Omega) - n(\lambda_{01}+\lambda_{10}+2\lambda_{11})\log(P_{T}) + n\epsilon_{n},\label{E3DP}
\end{align}
We next expand the first term in (\ref{E3DP}) as follows:
\begin{align}
h(Y_{1}^{n}, Y_{2}^{n}| W_{1}, \Omega)&=\sum_{t=1}^{n} h(Y_{1t}, Y_{2t}| W_{1}, \Omega, Y_{1}^{t-1}, Y_{2}^{t-1})\\
&= \sum_{t=1}^{n}h(Y_{1t}, Y_{2t} | S_{1}(t), S_{2}(t), \underbrace{W_{1}, \Omega\setminus\{S_{1}(t), S_{2}(t)\}, Y_{1}^{t-1}, Y_{2}^{t-1}}_{\triangleq U_{t}})\\
&= \sum_{t=1}^{n} h(Y_{1t}, Y_{2t} | S_{1}(t), S_{2}(t), U_{t})\\
&= \sum_{t=1}^{n} \Big[ \lambda_{00}h(Y_{1t}, Y_{2t} | S_{1}(t)=0, S_{2}(t)=0, U_{t})\nonumber\\
&\hspace{1.3cm} + \lambda_{01}\underbrace{h(Y_{1t}, Y_{2t} | S_{1}(t)=0, S_{2}(t)=1, U_{t})}_{\leq h(Y_{1t} | S_{1}(t)=0, S_{2}(t)=1, U_{t})+ \log(P_{T})}\nonumber\\
&\hspace{1.3cm} + \lambda_{10}\underbrace{h(Y_{1t}, Y_{2t} | S_{1}(t)=1, S_{2}(t)=0, U_{t})}_{\leq 2\log(P_{T})}\nonumber\\
&\hspace{1.3cm} + \lambda_{11}\underbrace{h(Y_{1t}, Y_{2t} | S_{1}(t)=1, S_{2}(t)=1, U_{t})}_{\leq 2\log(P_{T})}\Big]\nonumber \\
&\leq \sum_{t=1}^{n} \Big[ \lambda_{00}h(Y_{1t}, Y_{2t} | S_{1}(t)=0, S_{2}(t)=0, U_{t})\nonumber\\
&\hspace{1.3cm} + \lambda_{01}h(Y_{1t}| S_{1}(t)=0, S_{2}(t)=1, U_{t})\nonumber\\
&\hspace{1.3cm} + (\lambda_{01}+2\lambda_{10}+2\lambda_{11})\log(P_{T})\Big]\nonumber
\end{align}
\begin{align}
&\leq \sum_{t=1}^{n} \Big[ \lambda_{00}h(Y_{1t}| S_{1}(t)=0, S_{2}(t)=0, U_{t})\nonumber\\
&\hspace{1.3cm} + \lambda_{00}h(Y_{2t}| S_{1}(t)=0, S_{2}(t)=0, U_{t})\nonumber\\
&\hspace{1.3cm} + \lambda_{01}h(Y_{1t}| S_{1}(t)=0, S_{2}(t)=1, U_{t})\nonumber\\
&\hspace{1.3cm} + (\lambda_{01}+2\lambda_{10}+2\lambda_{11})\log(P_{T})\Big]\\
&= \sum_{t=1}^{n} \Big[ \lambda_{00}\underbrace{h(\mathbf{H}_{1}(t)\mathbf{X}(t)+ N_{1}(t) | S_{1}(t)=0, S_{2}(t)=0, U_{t})}_{= \ \eta^{(00)}_{t}}\nonumber\\
&\hspace{1.3cm} + \lambda_{00}\underbrace{h(\mathbf{H}_{2}(t)\mathbf{X}(t)+ N_{2}(t) | S_{1}(t)=0, S_{2}(t)=0, U_{t})}_{= \ \eta^{(00)}_{t}}\nonumber\\
&\hspace{1.3cm} + \lambda_{01}\underbrace{h(\mathbf{H}_{1}(t)\mathbf{X}(t)+ N_{1}(t) | S_{1}(t)=0, S_{2}(t)=1, U_{t})}_{= \ \eta^{(01)}_{t}}\nonumber\\
&\hspace{1.3cm} + (\lambda_{01}+2\lambda_{10}+2\lambda_{11})\log(P_{T})\Big]\\
&= 2\lambda_{00}\sum_{t=1}^{n}\eta^{(00)}_{t} + \lambda_{01}\sum_{t=1}^{n}\eta^{(01)}_{t} + n(\lambda_{01}+2\lambda_{10}+2\lambda_{11})\log(P_{T}).\label{E4DP}
\end{align}
Substituting (\ref{E4DP}) in (\ref{E3DP}), we get
\begin{align}
nR_{2}&\leq 2\lambda_{00}\sum_{t=1}^{n}\eta^{(00)}_{t} + \lambda_{01}\sum_{t=1}^{n}\eta^{(01)}_{t} + n\lambda_{10}\log(P_{T}) + n\epsilon_{n}.\label{E5DP}
\end{align}
Collectively, from (\ref{F1DP}) and (\ref{E5DP}), we can then write:
\begin{align}
nR_{1}&\leq n(\lambda_{00}+\lambda_{01})\log(P_{T}) - \lambda_{00}\sum_{t=1}^{n}\eta^{(00)}_{t} - \lambda_{01}\sum_{t=1}^{n}\eta^{(01)}_{t} + n\epsilon_{n}\label{M1DP}\\
nR_{2}&\leq 2\lambda_{00}\sum_{t=1}^{n}\eta^{(00)}_{t} + \lambda_{01}\sum_{t=1}^{n}\eta^{(01)}_{t} + n\lambda_{10}\log(P_{T}) + n\epsilon_{n}\label{M2DP}
\end{align}
Taking $2\times$ (\ref{M1DP}) $+$ (\ref{M2DP}), we obtain:
\begin{align}
n(2R_{1}+R_{2})&\leq n(2\lambda_{00}+ 2\lambda_{01} + \lambda_{10})\log(P_{T}) - \lambda_{01}\sum_{t=1}^{n}\eta^{(01)}_{t}+ n\epsilon_{n}\\
&\leq n(2\lambda_{00}+ 2\lambda_{01} + \lambda_{10})\log(P_{T})+ n\epsilon_{n},
\end{align}
where we used the fact that $\eta^{(01)}_{t}\geq 0$ for all $t$.
Normalizing by $n\log(P_{T})$, and taking the limits $n\rightarrow \infty$, and then $P_{T}\rightarrow \infty$, we obtain
\begin{align}
2d_{1}+d_{2}&\leq 2\lambda_{00}+2\lambda_{01}+\lambda_{10}.
\end{align}
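The weighted sum $2\times$(\ref{M1DP})$+$(\ref{M2DP}) is the step where miscounting is most likely; a short symbolic check (a sketch, with $E_{00}$ and $E_{01}$ standing for the normalized $\eta$ sums and $\epsilon_{n}$ dropped) confirms that only the $-\lambda_{01}E_{01}$ term survives:
\begin{verbatim}
# Sketch: verify 2*(M1DP) + (M2DP) with sympy.
import sympy as sp

l00, l01, l10, E00, E01, logP = sp.symbols(
    'lambda00 lambda01 lambda10 E00 E01 logP', positive=True)
R1 = (l00 + l01) * logP - l00 * E00 - l01 * E01  # (M1DP), normalized by n
R2 = 2 * l00 * E00 + l01 * E01 + l10 * logP      # (M2DP), normalized by n
target = (2 * l00 + 2 * l01 + l10) * logP - l01 * E01
assert sp.simplify(2 * R1 + R2 - target) == 0
\end{verbatim}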
Reversing the roles of receivers $1$ and $2$, we can obtain the other bound:
\begin{align}
d_{1}+2d_{2}&\leq 2\lambda_{00}+2\lambda_{10}+\lambda_{01}.
\end{align}
\subsection{Converse Proof for Theorem \ref{TheoremNN}}
Here, we consider the configuration in which there is no $\mathsf{CSIT}$ and no $\mathsf{JSIT}$, i.e., the $\mathsf{NN}$ configuration, and prove the bound:
\begin{align}
\frac{d_{1}}{\lambda_{1}}+\frac{d_{2}}{\lambda_{2}}&\leq 1.
\end{align}
To this end, we recall a classical result \cite{Bergmans1973}, which states
that for memoryless broadcast channels without feedback, the capacity region only depends on marginal distributions $p(y_{k}|x)$, for $k=1,2$.
This implies that for the problem at hand, in which the jammer's strategy is memoryless and there is no $\mathsf{CSIT}$ and no $\mathsf{JSIT}$,
the capacity region depends only on the \textit{marginal} probabilities $\lambda_{1}$ and $\lambda_{2}$, i.e., the probabilities with which
each receiver is not jammed. Without loss of generality, assume that $\lambda_{1}\geq \lambda_{2}$, i.e., receiver $2$ is jammed with higher probability than receiver $1$.
We will now show that this MISO BC falls in the class of \textit{stochastically degraded} broadcast channels. We first recall that a broadcast channel (defined by $p(y_{1},y_{2}|x)$) is stochastically degraded \cite{NITBook} if there exists a random variable $Y_{1^{'}}$ such that
\begin{enumerate}
\item $Y_{1^{'}}|\{X=x\}\sim p_{Y_{1}|X}(y_{1^{'}}|x)$, i.e., $Y_{1^{'}}$ has the same conditional distribution as $Y_{1}$ (given $X$), and
\item $X\rightarrow Y_{1^{'}}\rightarrow Y_{2}$ form a Markov chain.
\end{enumerate}
Hence, in order to show that the MISO BC with no $\mathsf{CSIT}$ and no $\mathsf{JSIT}$ is stochastically degraded, we will show the existence of a random variable $Y_{1^{'}}$ such that $Y_{1^{'}}$ has the same conditional pdf as $Y_{1}$ and $X\rightarrow Y_{1^{'}}\rightarrow Y_{2}$ form a Markov chain.
We first note that the channel outputs for the original BC at time $t$ are:
\begin{align}
Y_{1}(t)&= \mathbf{H}_{1}(t)\mathbf{X}(t)+ S_{1}(t)\mathbf{G}_{1}(t)\mathbf{J}(t)+ N_{1}(t)\\
Y_{2}(t)&= \mathbf{H}_{2}(t)\mathbf{X}(t)+ S_{2}(t)\mathbf{G}_{2}(t)\mathbf{J}(t)+ N_{2}(t).
\end{align}
Next, we create an artificial output $Y_{1^{'}}$, defined at time $t$ as:
\begin{align}
Y_{1^{'}}(t)&= \mathbf{H}_{2}(t)\mathbf{X}(t)+ \tilde{S}(t)S_{2}(t)\mathbf{G}_{2}(t)\mathbf{J}(t)+ N_{2}(t),
\end{align}
where the random variable $\tilde{S}(t)$ is distributed i.i.d. as follows:
\begin{align}
\tilde{S}(t)=
\begin{cases}
0, & \mbox{ w.p. } \frac{\lambda_{1}-\lambda_{2}}{1-\lambda_{2}},\\
1, & \mbox{ w.p. } \frac{1-\lambda_{1}}{1-\lambda_{2}}.
\end{cases}
\end{align}
Furthermore, $\tilde{S}(t)$ is independent of all other random variables.
It is straightforward to verify that $Y_{1^{'}}$ and $Y_{1}$ have the same marginal distribution: $\mathbf{H}_{1}(t)$ and $\mathbf{H}_{2}(t)$ are identically distributed, $\mathbf{G}_{1}(t)$ and $\mathbf{G}_{2}(t)$ are identically distributed, $N_{1}(t)$ and $N_{2}(t)$ are identically distributed, and, most importantly, the random variables $\tilde{S}(t)S_{2}(t)$ and $S_{1}(t)$ are identically distributed, since $\mbox{Pr}(\tilde{S}(t)S_{2}(t)=1)=\frac{1-\lambda_{1}}{1-\lambda_{2}}\cdot(1-\lambda_{2})=1-\lambda_{1}=\mbox{Pr}(S_{1}(t)=1)$. Furthermore, note that when $\tilde{S}(t)=0$, we have $Y_{2}(t)= Y_{1^{'}}(t)+ S_{2}(t)\mathbf{G}_{2}(t)\mathbf{J}(t)$, and when $\tilde{S}(t)=1$, we have $Y_{2}(t)= Y_{1^{'}}(t)$; since $(\tilde{S}(t), S_{2}(t), \mathbf{G}_{2}(t), \mathbf{J}(t))$ is independent of $\mathbf{X}(t)$, it follows that $X(t)\rightarrow Y_{1^{'}}(t)\rightarrow Y_{2}(t)$ forms a Markov chain.
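A quick numeric check (a sketch, for illustration only) of the key distributional fact used above:
\begin{verbatim}
# Sketch: S~(t)*S2(t) and S1(t) are identically distributed Bernoullis.
from fractions import Fraction

def check(lam1, lam2):                     # requires lam1 >= lam2
    p_s2_one = 1 - lam2                    # Pr[S2(t) = 1] (user 2 jammed)
    p_tilde_one = (1 - lam1) / (1 - lam2)  # Pr[S~(t) = 1]
    # S~ and S2 are independent, so Pr[S~*S2 = 1] = Pr[S~=1]*Pr[S2=1]
    assert p_tilde_one * p_s2_one == 1 - lam1   # = Pr[S1(t) = 1]

check(Fraction(3, 4), Fraction(1, 2))
check(Fraction(2, 3), Fraction(1, 3))
\end{verbatim}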
This argument proves that the original MISO broadcast channel with no $\mathsf{CSIT}$ falls in the class of stochastically degraded broadcast channels, for which the capacity region is given by the set of rate pairs $(R_{1}, R_{2})$ satisfying:
\begin{align}
R_{2}&\leq I(U; Y_{2}| \mathbf{H}, S_{1}, S_{2})\\
R_{1}&\leq I(\mathbf{X}; Y_{1}| U, \mathbf{H}, S_{1}, S_{2}),
\end{align}
where $U\rightarrow X\rightarrow (Y_{1}, Y_{2}, S_{1}, S_{2})$ forms a Markov chain.
Using this, we can write
\begin{align}
R_{2}&\leq h(Y_{2}| \mathbf{H}, S_{1}, S_{2}) - h(Y_{2}| U, \mathbf{H}, S_{1}, S_{2})\\
&\leq \log(P_{T}) - (1-\lambda_{2})\log(P_{T})-\lambda_{2}h(\mathbf{H}_{2}X+N_{2}|U, \mathbf{H}) + o(\log(P_{T}))\\
&= \lambda_{2}\log(P_{T}) -\lambda_{2}h(\mathbf{H}_{2}X+N_{2}|U, \mathbf{H}) + o(\log(P_{T})).\label{T3a}
\end{align}
Similarly, the other bound can be written as:
\begin{align}
R_{1}&\leq h(Y_{1}| U, \mathbf{H}, S_{1}, S_{2}) - h(Y_{1}| \mathbf{X}, U, \mathbf{H}, S_{1}, S_{2})\\
&= (1-\lambda_{1})\log(P_{T}) + \lambda_{1}h(\mathbf{H}_{1}X+N_{1}|U, \mathbf{H}) - (1-\lambda_{1})\log(P_{T}) + o(\log(P_{T}))\\
&= \lambda_{1}h(\mathbf{H}_{1}X+N_{1}|U, \mathbf{H}) + o(\log(P_{T}))\\
&= \lambda_{1}h(\mathbf{H}_{2}X+N_{2}|U, \mathbf{H}) + o(\log(P_{T})),\label{T3b}
\end{align}
where (\ref{T3b}) follows from the statistical equivalence property (as stated in the previous section).
Combining (\ref{T3a}) and (\ref{T3b}), we obtain:
\begin{align}
\frac{R_{1}}{\lambda_{1}}+ \frac{R_{2}}{\lambda_{2}}&\leq \log(P_{T}) + o(\log(P_{T}))
\end{align}
Normalizing by $\log(P_{T})$ and taking the limit $P_{T}\rightarrow \infty$, we have the proof for
\begin{align}
\frac{d_{1}}{\lambda_{1}}+\frac{d_{2}}{\lambda_{2}}&\leq 1.
\end{align}
\subsection{Converse Proof for Theorem \ref{TheoremNP}}
Here, we consider the configuration in which there is no $\mathsf{CSIT}$ and perfect $\mathsf{JSIT}$, i.e., the $\mathsf{NP}$ configuration, and prove the bound:
\begin{align}
d_{1}+d_{2} &\leq \lambda_{00}+\lambda_{01}+\lambda_{10}.
\end{align}
Let $\Omega=(S_{1}^{n}, S_{2}^{n})$ denote the global $\mathsf{JSIT}$ for the entire block length $n$.
We have the following sequence of bounds
\begin{align}
n(R_1+R_2)&=H(W_1)+H(W_2) \\
&=H(W_1,W_2) \\
&= H(W_1,W_2|\Omega) \\
&=I(W_1,W_2;Y_1^n,Y_2^n|\Omega)+H(W_1,W_2|Y_1^n,Y_2^n,\Omega) \\
&\leq I(W_1,W_2;Y_1^n,Y_2^n|\Omega)+n\epsilon_n \\
&=h(Y_1^n,Y_2^n|\Omega)-h(Y_1^n,Y_2^n|\Omega,W_1,W_2)+n\epsilon_n
\end{align}
Note here that the two receivers are statistically equivalent, and that both are un-jammed simultaneously with probability $\lambda_{00}$. In such a scenario, since no $\mathsf{CSIT}$ is available, the transmitter can send information to only one receiver. Using this, we have the following:
\begin{align}
n(R_1+R_2)&\leq h(Y_1^n,Y_2^n|\Omega)-h(Y_1^n,Y_2^n|\Omega,W_1,W_2)+n\epsilon_n \\
&\leq n\left(\lambda_{00}\log(P_T)+2\lambda_{01}\log(P_T)+2\lambda_{10}\log(P_T)+2\lambda_{11}\log(P_T)\right)\\
&\hspace{12pt}-n\left(\lambda_{01}\log(P_T)+\lambda_{10}\log(P_T)+2\lambda_{11}\log(P_T)\right)+n\epsilon_n \\
&=n\left(\lambda_{00}+\lambda_{01}+\lambda_{10}\right)\log(P_T)+n\epsilon_n.
\end{align}
Normalizing by $n\log(P_T)$, and taking the limits $n\rightarrow \infty$ and then $P_T\rightarrow \infty$, we obtain the bound
\begin{align}
d_1+d_2&\leq (\lambda_{00}+\lambda_{01}+\lambda_{10}).
\end{align}
This completes the converse proof for Theorem~\ref{TheoremNP}.
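As with the other converses, the per-state coefficient counting above can be checked symbolically (a sketch, for verification only):
\begin{verbatim}
# Sketch: per-state entropy counting in the NP converse.
import sympy as sp

l00, l01, l10, l11, logP = sp.symbols('l00 l01 l10 l11 logP',
                                      positive=True)
upper = (l00 + 2*l01 + 2*l10 + 2*l11) * logP  # bound on h(Y1,Y2|Omega)
lower = (l01 + l10 + 2*l11) * logP            # residual jamming entropy
assert sp.simplify(upper - lower - (l00 + l01 + l10) * logP) == 0
\end{verbatim}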
\section{Future work}
\label{sec:future}
\paragraph{Formal Cartesian cubical type theory}
With Guillaume Brunerie, Thierry Coquand, and Dan Licata, we have developed a
formal Cartesian cubical type theory with univalent universes, accompanied by a
constructive cubical set model, most of which has been formalized in Agda in the
style of \citet{ortonpitts16topos}. This forthcoming work explores the Kan
operations described in this paper---in particular, with the addition of $x=z$
diagonal constraints---in a proof-theoretic and model-theoretic setting, rather
than the computational setting emphasized in this paper.
\paragraph{Cubical (higher) inductive types}
Evan Cavallo is currently extending this work to account for a general class of
inductive types with higher-dimensional recursive constructors. In the
cubical setting, such types are generated by dimension-parametrized constructors
with prescribed boundaries. (For example, $\ensuremath{\mathbb{S}^1}$ is generated by $\ensuremath{\mathsf{base}}$ and
$\lp{x}$, whose $x$-faces are $\ensuremath{\mathsf{base}}$.)
\paragraph{Discrete, $\Hcom$, and $\Coe$ types}
In this paper we divide types into pretypes and Kan types, but finer
distinctions are possible. Some types support $\Hcom$ but not necessarily
$\Coe$, or vice versa. Exact equality types always have $\Hcom$ structure,
because $\ensuremath{\star}$ is a suitable composite for every box, but they lack $\Coe$ structure in general.
Types with $\Hcom$ or
$\Coe$ structure are not themselves closed under all type formers,
but depend on each other; for example,
\begin{enumerate}[itemsep=.2ex,parsep=.2ex]
\item $\cwftype{hcom}{\picl{a}{A}{B}}$ when $\cwftype{pre}{A}$ and
$\wftype{hcom}{\oft{a}{A}}{B}$,
\item $\cwftype{hcom}{\sigmacl{a}{A}{B}}$ when $\cwftype{hcom}{A}$ and
$\wftype{Kan}{\oft{a}{A}}{B}$,
\item $\cwftype{coe}{\picl{a}{A}{B}}$ when $\cwftype{coe}{A}$ and
$\wftype{coe}{\oft{a}{A}}{B}$, and
\item $\cwftype{coe}{\Path{x.A}{M}{N}}$ when $\cwftype{Kan}[\Psi,x]{A}$,
$\coftype{M}{\dsubst{A}{0}{x}}$, and $\coftype{N}{\dsubst{A}{1}{x}}$.
\end{enumerate}
\emph{Discrete Kan} types, such as $\ensuremath{\mathsf{nat}}$ and
$\ensuremath{\mathsf{bool}}$, are not only Kan but also strict sets, in the sense that all paths are
exactly equal to reflexivity. To be precise, we say $\ceqtype{disc}{A}{B}$ if
for any $\tds{\Psi_1}{\psi_1}{\Psi}$, $\tds{\Psi_2}{\psi_2,\psi_2'}{\Psi_1}$,
we have $\ceqtype{Kan}[\Psi_2]{\td{A}{\psi_1\psi_2}}{\td{B}{\psi_1\psi_2'}}$,
and for any $\coftype[\Psi_1]{M}{\td{A}{\psi_1}}$, we have
$\ceqtm[\Psi_2]{\td{M}{\psi_2}}{\td{M}{\psi_2'}}{\td{A}{\psi_1\psi_2}}$.
Discrete Kan types are closed under most type formers, including exact equality.
Exact equality types do not in general admit coercion, because
$\Coe{x.\Eq{A}{\dsubst{P}{0}{x}}{P}}{0}{1}{\ensuremath{\star}}$ turns any line $P$ into an
exact equality $\Eq{A}{\dsubst{P}{0}{x}}{\dsubst{P}{1}{x}}$ between its end
points. However, if $\cwftype{disc}{A}$ then
$\wftype{disc}{\oft{a}{A},\oft{a'}{A}}{\Eq{A}{a}{a'}}$, because paths in $A$
\emph{are} exact equalities.
\paragraph{Further improvements in \textsc{{\color{red}Red}PRL}{}}
Implementing and using this type theory in \textsc{{\color{red}Red}PRL}{} has already led to several
minor improvements not described in this paper:
\begin{enumerate}[itemsep=.2ex,parsep=.2ex]
\item We have added \emph{line types} to \textsc{{\color{red}Red}PRL}{}, $(x{:}\mathsf{dim})\to A$,
path types whose end points are not fixed. Elements of line types are simply
terms with an abstracted dimension, which has proven cleaner in practice than
the iterated sigma type $\sigmacl{a}{A}{\sigmacl{a'}{A}{\Path{\_.A}{a}{a'}}}$.
\item We are experimenting with alternative implementations of the Kan
operations for $\Fcom$ and $\ua$ types in \textsc{{\color{red}Red}PRL}{}, some inspired by the work
in the forthcoming formal Cartesian cubical type theory mentioned above.
\item The \textsc{{\color{red}Red}PRL}{} proof theory includes discrete Kan, $\Hcom$, and $\Coe$
types as described above, in addition to the Kan types and pretypes described in
this paper.
\item The definitions of the $M \ensuremath{\steps_\stable} M'$ and $\sisval{M}$ judgments have been
extended to account for computations that are stable by virtue of taking place
under dimension binders.
\end{enumerate}
\section{Introduction}
\label{sec:intro}
In Parts I and II of this series \citep{ahw2016cubical,ah2016cubicaldep} we
developed \emph{mathematical meaning explanations} for higher-dimensional type
theories with Cartesian cubical structure \citep{ahw2017cubical}. In Part III,
we extend these meaning explanations to support an infinite hierarchy of Kan,
univalent universes \citep{voevodskycmu}.
\paragraph{Mathematical meaning explanations}
We define the judgments of computational higher type theory as dimension-indexed
relations between programs equipped with a deterministic operational semantics.
These relations are cubical analogues of Martin-L\"{o}f's \emph{meaning
explanations} \citep{cmcp} and of the original Nuprl type theory
\citep{constableetalnuprl}, in which types are merely specifications of the
computational behavior of programs. Because types are defined behaviorally, we
trivially obtain the \emph{canonicity} property at every type. (Difficulties
instead lie in checking formation, introduction, and elimination rules. In
contrast, the type theory of \citet{cohen2016cubical} is defined by such rules,
and a separate argument by \citet{hubercanonicity} establishes canonicity.)
\begin{theorem}[Canonicity]
If $M$ is a closed term of type $\ensuremath{\mathsf{bool}}$, then $M \ensuremath{\Downarrow} \ensuremath{\mathsf{true}}$ or $M \ensuremath{\Downarrow}
\ensuremath{\mathsf{false}}$.
\end{theorem}
In a sense, our meaning explanations serve as \emph{cubical logical relations},
or a \emph{cubical realizability model}, justifying the rules presented in
\cref{sec:rules}. However, those rules are intended only for reference; the
rules included in the \textsc{{\color{red}Red}PRL}{} proof assistant \citep{redprl} differ
substantially (as described in \cref{sec:rules}). Moreover, as
$\coftype[x_1,\dots,x_n]{M}{A}$ means that $M$ is an ($n$-dimensional) program
with behavior $A$, programs do not have unique types, nor are typing judgments
decidable.
\paragraph{Cartesian cubes}
Our programs are parametrized by \emph{dimension names} $x, y, \dots$ ranging
over an abstract interval with end points $0$ and $1$. Programs with at most $n$
free dimension names represent $n$-dimensional cubes: points ($n=0$), lines
($n=1$), squares ($n=2$), and so forth. Substituting $\dsubst{}{0}{x}$ or
$\dsubst{}{1}{x}$ yields the left or right face of a cube in dimension $x$;
substituting $\dsubst{}{y}{x}$ yields the $x,y$ diagonal; and weakening by $y$
yields a cube degenerate in the $y$ direction.
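To make these substitution operations concrete, here is a toy Python model (included only as an illustration; it is \emph{not} how \textsc{{\color{red}Red}PRL}{} represents terms) in which an $n$-cube is a function from dimension-name assignments to data:
\begin{verbatim}
# Toy sketch of Cartesian dimension substitutions acting on cubes.
def face(cube, x, end):
    """The end-in-{0,1} face of cube in direction x."""
    return lambda env: cube({**env, x: end})

def diagonal(cube, x, y):
    """Substitute y for x, yielding the x,y diagonal."""
    return lambda env: cube({**env, x: env[y]})

def degenerate(cube, y):
    """Weakening by y: the cube simply ignores the new direction."""
    return lambda env: cube(env)

square = lambda env: (env['x'], env['y'])  # a square varying in x and y
left = face(square, 'x', 0)                # the line y |-> (0, y)
diag = diagonal(square, 'x', 'y')          # the line y |-> (y, y)
cube = degenerate(square, 'z')             # ignores env['z'] entirely
print(left({'y': 1}), diag({'y': 0}))      # prints (0, 1) (0, 0)
\end{verbatim}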
The resulting notion of cubes is Cartesian
\citep{licata2014cubical,awodey16cartesian,buchholtz2017}. In contrast, the
\citet{bch} model of type theory has only faces and degeneracies, while the
\citet{cohen2016cubical} type theory uses a de Morgan algebra of cubes with
connections ($x\land y$, $x\lor y$) and reversals ($1-x$) in addition to faces,
diagonals, and degeneracies. The Cartesian notion of cube is appealing because
it results in a \emph{structural} dimension context (with exchange, weakening,
and contraction) and requires no equational reasoning at the dimension level.
\paragraph{Kan operations}
\emph{Kan types} are types equipped with coercion ($\Coe$) and homogeneous
composition ($\Hcom$) operations. If $A$ is a Kan type varying in $x$, the
\emph{coercion} $\Coe*{x.A}$ sends an element $M$ of $\dsubst{A}{r}{x}$ to an
element of $\dsubst{A}{r'}{x}$, such that the coercion is equal to $M$ when
$r=r'$. For example, given a point $M$ in the $\dsubst{}{0}{x}$ side of the type
$A$, written $\coftype[\cdot]{M}{\dsubst{A}{0}{x}}$, we can coerce it to a point
$\Coe{x.A}{0}{1}{M}$ in $\dsubst{A}{1}{x}$, or coerce it to an $x$-line
$\Coe{x.A}{0}{x}{M}$ between $M$ and $\Coe{x.A}{0}{1}{M}$.
\[
\begin{tikzpicture}
\node (lhs) at (0 , 1) {$M$} ;
\node (rhs) at (4 , 1) {$\Coe{x.A}{0}{1}{M}$} ;
\draw (lhs) [->] to node [auto] {$\Coe{x.A}{0}{x}{M}$} (rhs) ;
\tikzset{shift={(8,0)}}
\draw (0 , 2) [->] to node [above] {\small $x$} (0.5 , 2) ;
\draw (0 , 2) [->] to node [left] {\small $y$} (0 , 1.5) ;
\node (tl) at (1.5 , 2) {$\cdot$} ;
\node (tr) at (5.5 , 2) {$\cdot$} ;
\node (bl) at (1.5 , 0) {$\cdot$} ;
\node (br) at (5.5 , 0) {$\cdot$} ;
\draw (tl) [->] to node [above] {$M$} (tr) ;
\draw (tl) [->] to node [left] {$N_0$} (bl) ;
\draw (tr) [->] to node [right] {$N_1$} (br) ;
\draw (bl) [->,dashed] to node [below] {$\Hcom{A}{0}{1}{M}{\cdots}$} (br) ;
\node at (3.5 , 1) {$\Hcom{A}{0}{y}{M}{\cdots}$} ;
\end{tikzpicture}
\]
If $A$ is a Kan type, then \emph{homogeneous composition} in $A$ states that any
open box in $A$ has a composite; for example,
$\Hcom{A}{0}{1}{M}{\tube{x=0}{y.N_0},\tube{x=1}{y.N_1}}$ is the bottom line of
the above square. The cap $M$ is a line on the $\dsubst{}{0}{y}$ side of the
box; $y.N_0$ (resp., $y.N_1$) is a line on the $x=0$ (resp., $x=1$) side of the
box; and the composite is on the $\dsubst{}{1}{y}$ side of the box. Furthermore,
the cap and tubes must be equal where they coincide (the $x=0$ side of $M$ with
the $\dsubst{}{0}{y}$ side of $N_0$), every pair of tubes must be equal where
they coincide (vacuous here, as $x=0$ and $x=1$ are disjoint), and the composite
is equal to the tubes where they coincide (the $x=0$ side of the composite with
the $\dsubst{}{1}{y}$ side of $N_0$). Fillers are the special case in which we
compose to a free dimension name $y$; here,
$\Hcom{A}{0}{y}{M}{\tube{x=0}{y.N_0},\tube{x=1}{y.N_1}}$ is the entire square.
These Kan operations are variants of the uniform Kan conditions first proposed
by \citet{bch}. Notably, \citet{bch} and \citet{cohen2016cubical} combine
coercion and composition into a single heterogeneous composition operation and
do not allow compositions from or to dimension names. Unlike both
\citet{cohen2016cubical} and related work by \citet{licata2014cubical}, we allow
tubes along diagonals ($x=z$), and require every non-trivial box to contain at
least one opposing pair of tubes $x=0$ and $x=1$. The latter restriction
(detailed in \cref{def:valid}) allows us to achieve canonicity for
zero-dimensional elements of the circle and weak booleans.
\paragraph{Pretypes and exact equality}
As in the ``two-level type theories'' of \citet{voevodsky13hts},
\citet{altenkirch16strict}, and \citet{boulier17twolevel}, we allow for
\emph{pretypes} that are not necessarily Kan. In particular, we have types
$\Eq{A}{M}{N}$ of \emph{exact equalities} that internalize (and reflect into)
judgmental equalities $\ceqtm{M}{N}{A}$. Exact equality types are not, in
general, Kan, as one cannot compose exact equalities with non-degenerate lines.
However, unlike in prior two-level type theories, certain exact equality types
\emph{are} Kan (for example, when $A=\ensuremath{\mathsf{nat}}$; see \cref{sec:future} for a precise
characterization). We write $\cwftype{pre}{A}$ when $A$ is a pretype, and
$\cwftype{Kan}{A}$ when $A$ is a Kan type. Pretypes and Kan types are both closed
under most type formers; for example, if $\cwftype{\kappa}{A}$ and $\cwftype{\kappa}{B}$ then
$\cwftype{\kappa}{\arr{A}{B}}$.
\paragraph{Universes and univalence}
We have two cumulative hierarchies of universes $\Upre$ and $\UKan$
internalizing pretypes and Kan types respectively. The Kan universes $\UKan$ are
both Kan and univalent. (See \url{https://git.io/vFjUQ} for a \textsc{{\color{red}Red}PRL}{}-checked
proof of the univalence theorem.) Homogeneous compositions of Kan types are
types whose elements are formal $\Kbox$es of elements of the constituent types.
Every equivalence $E$ between $A$ and $B$ gives rise to the $\ua{x}{A,B,E}$ type
whose $x$-faces are $A$ and $B$; such types are a special case of ``Glue types''
\citep{cohen2016cubical}.
\paragraph{\textsc{{\color{red}Red}PRL}{}}
\textsc{{\color{red}Red}PRL}{} is an interactive proof assistant for computational higher type theory
in the tradition of LCF and Nuprl; the \textsc{{\color{red}Red}PRL}{} logic is principally organized
around dependent refinement rules \citep{spiwack2011,sterling2017}, which are
composed using a simple language of proof tactics. Unlike the inference rules
presented in \cref{sec:rules}, \textsc{{\color{red}Red}PRL}{}'s rules are given in the form of a
goal-oriented sequent calculus which is better-suited for both programming and
automation.
\section{Mathematical meaning explanations}
\label{sec:meanings}
In this section, we finally define the judgments of higher type theory as
relations parametrized by a choice of cubical type system $\tau$. In these
definitions we suppress dependency on $\tau$, but we will write
$\relcts*{\tau}{\judg{\ensuremath{\mathcal{J}}}}$ to make the choice of $\tau$ explicit.
The presuppositions of a judgment are facts that must be true before one can
even sensibly state that judgment. For example, in \cref{def:ceqtm} below, we
presuppose that $A$ is a pretype when defining what it means to be equal
elements of $A$; if we do not know $A$ to be a pretype, $\vper{A}$ has no
meaning. In every judgment $\judg{\ensuremath{\mathcal{J}}}$ we will presuppose that the free
dimensions of all terms are contained in $\Psi$.
\subsection{Judgments}
\begin{definition}\label{def:ceqtypep}
The judgment $\ceqtype{pre}{A}{B}$ holds when $\ensuremath{\mathsf{PTy}}(\tau)(\Psi,A,B,\alpha)$ and
$\ensuremath{\mathsf{Coh}}(\alpha)$. Whenever $\ensuremath{\mathsf{PTy}}(\tau)(\Psi,A,B,\alpha)$ the choice of $\alpha$ is
unique and independent of $B$, so we notate it $\vper{A}$.
\end{definition}
\begin{definition}\label{def:ceqtm}
The judgment $\ceqtm{M}{N}{A}$ holds, presupposing $\ceqtype{pre}{A}{A}$, when
$\ensuremath{\mathsf{Tm}}(\vper{A})(M,N)$.
\end{definition}
If $A$ and $B$ have no free dimensions and $\ceqtype{pre}{A}{B}$, then for any
$\Psi'$, $\lift\tau(\Psi',A,B,\vper{A})$ and $\vper{A}$ is context-indexed; if
$M$, $N$, and $A$ have no free dimensions and $\ceqtm{M}{N}{A}$, then
$\lift{(\vper{A}(\Psi'))}(M,N)$ for all $\Psi'$. Therefore one can regard the
ordinary meaning explanations as an instance of these meaning explanations, in
which all dependency on dimensions trivializes.
We are primarily interested in \emph{Kan types}, pretypes equipped with Kan
operations that implement composition, inversion, etc., of cubes. These Kan
operations are best specified using judgments augmented by \emph{dimension
context restrictions}. We extend the prior judgments to restricted ones:
\begin{definition}\label{def:satisfies}
For any $\Psi$ and set of unoriented equations $\Xi = (r_1=r_1',\dots,r_n=r_n')$
in $\Psi$ (that is, $\fd{\etc{r_i},\etc{r_i'}}\subseteq\Psi$), we say that
$\tds{\Psi'}{\psi}{\Psi}$ \emph{satisfies} $\Xi$ if $\td{r_i}{\psi} = \td{r_i'}{\psi}$ for each
$i\in [1,n]$.
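For example, if $\Psi=(x,y)$ and $\Xi=(x=y)$, then $\tds{\Psi'}{\psi}{\Psi}$
satisfies $\Xi$ exactly when $\psi$ sends $x$ and $y$ to the same dimension:
the substitution sending both to $0$ satisfies $\Xi$, while the one sending $x$
to $0$ and $y$ to $1$ does not.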
\end{definition}
\begin{definition}
\label{def:crestricted}
~\begin{enumerate}
\item
The judgment $\ceqtype{pre}<\Xi>{A}{B}$ holds, presupposing
$\fd{\Xi}\subseteq\Psi$, when $\ceqtype{pre}[\Psi']{\td{A}{\psi}}{\td{B}{\psi}}$
for every $\tds{\Psi'}{\psi}{\Psi}$ satisfying $\Xi$.
\item
The judgment $\ceqtm<\Xi>{M}{N}{A}$ holds, presupposing
$\cwftype{pre}<\Xi>{A}$, when
$\ceqtm[\Psi']{\td{M}{\psi}}{\td{N}{\psi}}{\td{A}{\psi}}$ for every $\tds{\Psi'}{\psi}{\Psi}$
satisfying $\Xi$.
\end{enumerate}
\end{definition}
\begin{definition}\label{def:valid}
A list of equations $\etc{r_i=r_i'}$ is valid if either $r_i=r_i'$ for some $i$,
or $r_i=r_j$, $r_i'=0$, and $r_j'=1$ for some $i,j$.
\end{definition}
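For example, the list $(x=0,x=1)$ is valid by the second clause (taking
$r_i=r_j=x$), as is any list containing an equation of the form $r=r$; the
singleton list $(x=y)$ with $x\neq y$ is not valid.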
\begin{definition}\label{def:kan}
The judgment $\ceqtype{Kan}{A}{B}$ holds, presupposing $\ceqtype{pre}{A}{B}$, when the
following Kan conditions hold for any $\tds{\Psi'}{\psi}{\Psi}$:
\begin{enumerate}
\item
If
\begin{enumerate}
\item $\etc{r_i=r_i'}$ is valid,
\item $\ceqtm[\Psi']{M}{M'}{\td{A}{\psi}}$,
\item $\ceqtm[\Psi',y]<r_i=r_i',r_j=r_j'>{N_i}{N_j'}{\td{A}{\psi}}$
for any $i,j$, and
\item $\ceqtm[\Psi']<r_i=r_i'>{\dsubst{N_i}{r}{y}}{M}{\td{A}{\psi}}$
for any $i$,
\end{enumerate}
then
\begin{enumerate}
\item $\ceqtm[\Psi']{\Hcom*{\td{A}{\psi}}{r_i=r_i'}}%
{\Hcom{\td{B}{\psi}}{r}{r'}{M'}{\sys{r_i=r_i'}{y.N_i'}}}{\td{A}{\psi}}$;
\item if $r=r'$ then
$\ceqtm[\Psi']{\Hcom{\td{A}{\psi}}{r}{r}{M}{\sys{r_i=r_i'}{y.N_i}}}{M}{\td{A}{\psi}}$;
and
\item if $r_i = r_i'$ then
$\ceqtm[\Psi']{\Hcom*{\td{A}{\psi}}{r_i=r_i'}}{\dsubst{N_i}{r'}{y}}{\td{A}{\psi}}$.
\end{enumerate}
\item
If $\Psi' = (\Psi'',x)$ and $\ceqtm[\Psi'']{M}{M'}{\dsubst{\td{A}{\psi}}{r}{x}}$, then
\begin{enumerate}
\item $\ceqtm[\Psi'']{\Coe*{x.\td{A}{\psi}}}{\Coe{x.\td{B}{\psi}}{r}{r'}{M'}}%
{\dsubst{\td{A}{\psi}}{r'}{x}}$; and
\item if $r=r'$ then
$\ceqtm[\Psi'']{\Coe{x.\td{A}{\psi}}{r}{r}{M}}{M}{\dsubst{\td{A}{\psi}}{r}{x}}$.
\end{enumerate}
\end{enumerate}
\end{definition}
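For instance, taking $r=0$, $r'=1$, and the valid equation list $(x=0,x=1)$,
the first condition states that the composite square
$\Hcom{A}{0}{1}{M}{\tube{x=0}{y.N_0},\tube{x=1}{y.N_1}}$ is an element of $A$
respecting equality of its constituents, and that under any dimension
substitution setting $x=\ensuremath{\varepsilon}$ it equals the corresponding tube face
$\dsubst{N_\ensuremath{\varepsilon}}{1}{y}$.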
We extend the closed judgments to open terms by functionality, that is, an open
pretype (resp., element of a pretype) is an open term that sends equal elements
of the pretypes in the context to equal closed pretypes (resp., elements). The
open judgments are defined simultaneously, stratified by the length of the
context. (We assume the variables $a_1,\dots,a_n$ in a context are distinct.)
\begin{definition}\label{def:wfctx}
We say $\wfctx{(\oft{a_1}{A_1},\dots,\oft{a_n}{A_n})}$ when
\begin{gather*}
\cwftype{pre}{A_1}, \\
\wftype{pre}{\oft{a_1}{A_1}}{A_2}, \dots \\ \text{and}~
\wftype{pre}{\oft{a_1}{A_1},\dots,\oft{a_{n-1}}{A_{n-1}}}{A_n}.
\end{gather*}
\end{definition}
\begin{definition}\label{def:eqtypep}
We say $\eqtype{pre}{\oft{a_1}{A_1},\dots,\oft{a_n}{A_n}}{B}{B'}$,
presupposing \\
$\wfctx{(\oft{a_1}{A_1},\dots,\oft{a_n}{A_n})}$, when for any $\tds{\Psi'}{\psi}{\Psi}$ and any
\begin{gather*}
\ceqtm[\Psi']{N_1}{N_1'}{\td{A_1}{\psi}}, \\
\ceqtm[\Psi']{N_2}{N_2'}{\subst{\td{A_2}{\psi}}{N_1}{a_1}}, \dots\\\text{and}~
\ceqtm[\Psi']{N_n}{N_n'}
{\subst{\td{A_n}{\psi}}{N_1,\dots,N_{n-1}}{a_1,\dots,a_n}},
\end{gather*}
we have
$\ceqtype{pre}[\Psi']
{\subst{\td{B}{\psi}}{N_1,\dots,N_n}{a_1,\dots,a_n}}
{\subst{\td{B'}{\psi}}{N_1',\dots,N_n'}{a_1,\dots,a_n}}$.
\end{definition}
\begin{definition}\label{def:eqtm}
We say $\eqtm{\oft{a_1}{A_1},\dots,\oft{a_n}{A_n}}{M}{M'}{B}$,
presupposing \\
$\wftype{pre}{\oft{a_1}{A_1},\dots,\oft{a_n}{A_n}}{B}$,
when for any $\tds{\Psi'}{\psi}{\Psi}$ and any
\begin{gather*}
\ceqtm[\Psi']{N_1}{N_1'}{\td{A_1}{\psi}}, \\
\ceqtm[\Psi']{N_2}{N_2'}{\subst{\td{A_2}{\psi}}{N_1}{a_1}}, \dots\\\text{and}~
\ceqtm[\Psi']{N_n}{N_n'}
{\subst{\td{A_n}{\psi}}{N_1,\dots,N_{n-1}}{a_1,\dots,a_n}},
\end{gather*}
we have
$\ceqtm[\Psi']
{\subst{\td{M}{\psi}}{N_1,\dots,N_n}{a_1,\dots,a_n}}
{\subst{\td{M'}{\psi}}{N_1',\dots,N_n'}{a_1,\dots,a_n}}
{\subst{\td{B}{\psi}}{N_1,\dots,N_n}{a_1,\dots,a_n}}$.
\end{definition}
One should read $[\Psi]$ as extending across the entire judgment, as it
specifies the starting dimension at which to consider not only $B$ and $M$ but
$\ensuremath{\Gamma}$ as well.
The open judgments, like the closed judgments, are symmetric and transitive.
In particular, if $\eqtype{pre}{\ensuremath{\Gamma}}{B}{B'}$ then $\wftype{pre}{\ensuremath{\Gamma}}{B}$.
As a result, the earlier hypotheses of each definition ensure that later
hypotheses are sensible; for example,
$\wfctx{(\oft{a_1}{A_1},\dots,\oft{a_n}{A_n})}$ and
$\coftype[\Psi']{N_1}{\td{A_1}{\psi}}$ ensure that
$\cwftype{pre}[\Psi']{\subst{\td{A_2}{\psi}}{N_1}{a_1}}$.
\begin{definition}\label{def:eqtypek}
We say $\eqtype{Kan}{\oft{a_1}{A_1},\dots,\oft{a_n}{A_n}}{B}{B'}$,
presupposing \\
$\eqtype{pre}{\oft{a_1}{A_1},\dots,\oft{a_n}{A_n}}{B}{B'}$,
when for any $\tds{\Psi'}{\psi}{\Psi}$ and any
\begin{gather*}
\ceqtm[\Psi']{N_1}{N_1'}{\td{A_1}{\psi}}, \\
\ceqtm[\Psi']{N_2}{N_2'}{\subst{\td{A_2}{\psi}}{N_1}{a_1}}, \dots\\\text{and}~
\ceqtm[\Psi']{N_n}{N_n'}
{\subst{\td{A_n}{\psi}}{N_1,\dots,N_{n-1}}{a_1,\dots,a_n}},
\end{gather*}
we have
$\ceqtype{Kan}[\Psi']
{\subst{\td{B}{\psi}}{N_1,\dots,N_n}{a_1,\dots,a_n}}
{\subst{\td{B'}{\psi}}{N_1',\dots,N_n'}{a_1,\dots,a_n}}$.
\end{definition}
Finally, the open judgments can also be augmented by context restrictions. For
\cref{def:restricted} to make sense, its presuppositions require the open
judgments to be closed under dimension substitution, which we will prove in
\cref{lem:td-judgments}.
\begin{definition}
\label{def:restricted}
~\begin{enumerate}
\item
The judgment $\wfctx<\Xi>{\ensuremath{\Gamma}}$ holds, presupposing $\fd{\Xi}\subseteq\Psi$,
when $\wfctx[\Psi']{\td{\ensuremath{\Gamma}}{\psi}}$ for every $\tds{\Psi'}{\psi}{\Psi}$ satisfying $\Xi$.
\item
The judgment $\eqtype{pre}<\Xi>{\ensuremath{\Gamma}}{B}{B'}$ holds, presupposing
$\wfctx<\Xi>{\ensuremath{\Gamma}}$, when
$\eqtype{pre}[\Psi']{\td{\ensuremath{\Gamma}}{\psi}}{\td{B}{\psi}}{\td{B'}{\psi}}$ for every
$\tds{\Psi'}{\psi}{\Psi}$ satisfying $\Xi$.
\item
The judgment $\eqtm<\Xi>{\ensuremath{\Gamma}}{M}{M'}{B}$ holds, presupposing
$\wfctx<\Xi>{\ensuremath{\Gamma}}$ and $\wftype{pre}<\Xi>{\ensuremath{\Gamma}}{B}$, when
$\eqtm[\Psi']{\td{\ensuremath{\Gamma}}{\psi}}{\td{M}{\psi}}{\td{M'}{\psi}}{\td{B}{\psi}}$ for
every $\tds{\Psi'}{\psi}{\Psi}$ satisfying $\Xi$.
\item
The judgment $\eqtype{Kan}<\Xi>{\ensuremath{\Gamma}}{B}{B'}$ holds, presupposing
$\wfctx<\Xi>{\ensuremath{\Gamma}}$, when
$\eqtype{Kan}[\Psi']{\td{\ensuremath{\Gamma}}{\psi}}{\td{B}{\psi}}{\td{B'}{\psi}}$ for every $\tds{\Psi'}{\psi}{\Psi}$
satisfying $\Xi$.
\end{enumerate}
\end{definition}
\subsection{Structural properties}
Every judgment is closed under dimension substitution.
\begin{lemma}\label{lem:td-judgments}
For any $\tds{\Psi'}{\psi}{\Psi}$,
\begin{enumerate}
\item if $\ceqtype{pre}{A}{B}$ then
$\ceqtype{pre}[\Psi']{\td{A}{\psi}}{\td{B}{\psi}}$;
\item if $\ceqtm{M}{N}{A}$ then
$\ceqtm[\Psi']{\td{M}{\psi}}{\td{N}{\psi}}{\td{A}{\psi}}$;
\item if $\ceqtype{Kan}{A}{B}$ then $\ceqtype{Kan}[\Psi']{\td{A}{\psi}}{\td{B}{\psi}}$;
\item if $\wfctx{\ensuremath{\Gamma}}$ then $\wfctx[\Psi']{\td{\ensuremath{\Gamma}}{\psi}}$;
\item if $\eqtype{pre}{\ensuremath{\Gamma}}{A}{B}$ then
$\eqtype{pre}[\Psi']{\td{\ensuremath{\Gamma}}{\psi}}{\td{A}{\psi}}{\td{B}{\psi}}$;
\item if $\eqtm{\ensuremath{\Gamma}}{M}{N}{A}$ then
$\eqtm[\Psi']{\td{\ensuremath{\Gamma}}{\psi}}{\td{M}{\psi}}{\td{N}{\psi}}{\td{A}{\psi}}$; and
\item if $\eqtype{Kan}{\ensuremath{\Gamma}}{A}{B}$ then
$\eqtype{Kan}[\Psi']{\td{\ensuremath{\Gamma}}{\psi}}{\td{A}{\psi}}{\td{B}{\psi}}$.
\end{enumerate}
\end{lemma}
\begin{proof}
For proposition (1), by $\ensuremath{\mathsf{PTy}}(\tau)(\Psi,A,B,\alpha)$ we have
$\ensuremath{\mathsf{PTy}}(\tau)(\Psi',\td{A}{\psi},\td{B}{\psi},\td{\alpha}{\psi})$. We must show
for all $\tds{\Psi''}{\psi'}{\Psi'}$ that $(\td{\alpha}{\psi})_{\psi'}(M_0,N_0)$
implies $\ensuremath{\mathsf{Tm}}(\td{\alpha}{\psi\psi'})(M_0,N_0)$; this follows from
value-coherence of $\alpha$ at $\psi\psi'$. Propositions (2) and (3) follow from
$\vper{\td{A}{\psi}} = \td{\vper{A}}{\psi}$ and closure of $\ensuremath{\mathsf{Tm}}$ and the Kan
conditions under dimension substitution.
Propositions (4), (5), and (6) are proven simultaneously by induction on the
length of $\ensuremath{\Gamma}$. If $\ensuremath{\Gamma}=\cdot$, then (4) is trivial, and (5) and (6) follow
because the closed judgments are closed under dimension substitution. The
induction steps for all three use all three induction hypotheses. Proposition
(7) follows similarly.
\end{proof}
\begin{lemma}\label{lem:td-judgres}
For any $\tds{\Psi'}{\psi}{\Psi}$, if $\judg<\Xi>{\ensuremath{\mathcal{J}}}$ then
$\judg[\Psi']<\td{\Xi}{\psi}>{\td{\ensuremath{\mathcal{J}}}{\psi}}$.
\end{lemma}
\begin{proof}
We know that $\judg[\Psi']{\td{\ensuremath{\mathcal{J}}}{\psi}}$ for any $\tds{\Psi'}{\psi}{\Psi}$ satisfying $\Xi$,
and want to show that $\judg[\Psi'']{\td{\ensuremath{\mathcal{J}}}{\psi\psi'}}$ for any $\tds{\Psi'}{\psi}{\Psi}$ and
$\tds{\Psi''}{\psi'}{\Psi'}$ satisfying $\td{\Xi}{\psi}$. It suffices to
show that if $\psi'$ satisfies $\td{\Xi}{\psi}$, then $\psi\psi'$ satisfies
$\Xi$. But these both hold if and only if for each $(r_i=r_i')\in\Xi$,
$\td{r_i}{\psi\psi'} = \td{r_i'}{\psi\psi'}$.
\end{proof}
\begin{remark}
The context-restricted judgments can be thought of as merely a notational
device, because it is possible to systematically translate $\judg<\Xi>{\ensuremath{\mathcal{J}}}$
into ordinary judgments by case analysis:
\begin{enumerate}
\item All $\psi$ satisfy an empty set of equations, so $\judg<\cdot>{\ensuremath{\mathcal{J}}}$ if
and only if $\judg[\Psi']{\td{\ensuremath{\mathcal{J}}}{\psi}}$ for all $\psi$, which by
\cref{lem:td-judgments} holds if and only if $\judg{\ensuremath{\mathcal{J}}}$.
\item A $\psi$ satisfies $(\Xi,r=r)$ if and only if it satisfies $\Xi$, so
$\judg<\Xi,r=r>{\ensuremath{\mathcal{J}}}$ if and only if $\judg<\Xi>{\ensuremath{\mathcal{J}}}$.
\item No $\psi$ satisfies $(\Xi,0=1)$, so $\judg<\Xi,0=1>{\ensuremath{\mathcal{J}}}$ always.
\item By \cref{lem:td-judgres}, $\judg[\Psi,x]<\Xi,x=r>{\ensuremath{\mathcal{J}}}$ if and only if
$\judg<\dsubst{\Xi}{r}{x},r=r>{\dsubst{\ensuremath{\mathcal{J}}}{r}{x}}$, which holds if and only
if $\judg<\dsubst{\Xi}{r}{x}>{\dsubst{\ensuremath{\mathcal{J}}}{r}{x}}$.
\end{enumerate}
\end{remark}
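For example, $\judg[\Psi,x]<x=0>{\ensuremath{\mathcal{J}}}$ translates by (4) to
$\judg<0=0>{\dsubst{\ensuremath{\mathcal{J}}}{0}{x}}$, by (2) to
$\judg<\cdot>{\dsubst{\ensuremath{\mathcal{J}}}{0}{x}}$, and by (1) to the ordinary judgment
$\judg{\dsubst{\ensuremath{\mathcal{J}}}{0}{x}}$.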
The open judgments satisfy the \emph{structural rules} of type theory, like
hypothesis and weakening.
\begin{lemma}[Hypothesis]
If $\wfctx{(\ensuremath{\Gamma},\oft{a_i}{A_i},\ensuremath{\Gamma}')}$ then
$\oftype{\ensuremath{\Gamma},\oft{a_i}{A_i},\ensuremath{\Gamma}'}{a_i}{A_i}$.
\end{lemma}
\begin{proof}
We must show for any $\tds{\Psi'}{\psi}{\Psi}$ and equal elements $N_1,N_1',\dots,N_n,N_n'$ of
the pretypes in $(\td{\ensuremath{\Gamma}}{\psi},\oft{a_i}{\td{A_i}{\psi}},\td{\ensuremath{\Gamma}'}{\psi})$, that
$\ceqtm[\Psi']{N_i}{N_i'}{\td{A_i}{\psi}}$. But this is exactly our assumption
about $N_i,N_i'$.
\end{proof}
\begin{lemma}[Weakening]
~\begin{enumerate}
\item If $\eqtype{pre}{\ensuremath{\Gamma},\ensuremath{\Gamma}'}{B}{B'}$ and $\wftype{pre}{\ensuremath{\Gamma}}{A}$, then
$\eqtype{pre}{\ensuremath{\Gamma},\oft{a}{A},\ensuremath{\Gamma}'}{B}{B'}$.
\item If $\eqtm{\ensuremath{\Gamma},\ensuremath{\Gamma}'}{M}{M'}{B}$ and $\wftype{pre}{\ensuremath{\Gamma}}{A}$, then
$\eqtm{\ensuremath{\Gamma},\oft{a}{A},\ensuremath{\Gamma}'}{M}{M'}{B}$.
\end{enumerate}
\end{lemma}
\begin{proof}
For the first part, we must show for any $\tds{\Psi'}{\psi}{\Psi}$ and equal elements
\begin{gather*}
\ceqtm[\Psi']{N_1}{N_1'}{\td{A_1}{\psi}}, \\
\ceqtm[\Psi']{N_2}{N_2'}{\subst{\td{A_2}{\psi}}{N_1}{a_1}}, \dots \\
\ceqtm[\Psi']{N}{N'}{\subst{\td{A}{\psi}}{N_1,\dots}{a_1,\dots}},
\dots\\\text{and}~
\ceqtm[\Psi']{N_n}{N_n'}
{\subst{\td{A_n}{\psi}}{N_1,\dots,N,\dots,N_{n-1}}{a_1,\dots,a,\dots,a_n}},
\end{gather*}
that the corresponding instances of $B,B'$ are equal closed pretypes.
By $\eqtype{pre}{\ensuremath{\Gamma},\ensuremath{\Gamma}'}{B}{B'}$ we know that $a\ensuremath{\mathbin{\#}} \ensuremath{\Gamma}',B,B'$, since the
pretypes contained in them become closed when substituting for $a_1,\dots,a_n$.
It also gives us
$\ceqtype{pre}[\Psi']
{\subst{\td{B}{\psi}}{N_1,\dots}{a_1,\dots}}
{\subst{\td{B'}{\psi}}{N_1',\dots}{a_1,\dots}}$
which are the desired instances of $B,B'$ because $a\ensuremath{\mathbin{\#}} B,B'$.
The second part follows similarly.
\end{proof}
The definition of equal pretypes was chosen to ensure that equal pretypes have
equal elements.
\begin{lemma}\label{lem:ceqtypep-ceqtm}
If $\ceqtype{pre}{A}{B}$ and $\ceqtm{M}{N}{A}$ then $\ceqtm{M}{N}{B}$.
\end{lemma}
\begin{proof}
If $\ensuremath{\mathsf{PTy}}(\tau)(\Psi,A,B,\alpha)$ then $\ensuremath{\mathsf{PTy}}(\tau)(\Psi,B,A,\alpha)$; the result
follows by $\vper{A}=\vper{B}$.
\end{proof}
\begin{lemma}
If $\eqtype{pre}{\ensuremath{\Gamma}}{A}{B}$ and $\eqtm{\ensuremath{\Gamma}}{M}{N}{A}$ then $\eqtm{\ensuremath{\Gamma}}{M}{N}{B}$.
\end{lemma}
\begin{proof}
If $\ensuremath{\Gamma} = (\oft{a_1}{A_1},\dots,\oft{a_n}{A_n})$ then $\eqtm{\ensuremath{\Gamma}}{M}{N}{A}$ means
that for any $\tds{\Psi'}{\psi}{\Psi}$ and equal elements $N_1,N_1',\dots,N_n,N_n'$ of the
pretypes in $\td{\ensuremath{\Gamma}}{\psi}$, the corresponding instances of $M$ and $N$ are
equal in $\subst{\td{A}{\psi}}{N_1,\dots,N_n}{a_1,\dots,a_n}$. But
$\eqtype{pre}{\ensuremath{\Gamma}}{A}{B}$ implies this pretype is equal to
$\subst{\td{B}{\psi}}{N_1,\dots,N_n}{a_1,\dots,a_n}$, so the result follows by
\cref{lem:ceqtypep-ceqtm}.
\end{proof}
\subsection{Basic lemmas}
The definition of $\ensuremath{\mathsf{PTy}}(\tau)(\Psi,A,B,\alpha)$ can be simplified when $\tau$ is
a cubical type system: it suffices to check for all
$\tds{\Psi_1}{\psi_1}{\Psi}$ and $\tds{\Psi_2}{\psi_2}{\Psi_1}$ that
$\td{A}{\psi_1} \ensuremath{\Downarrow} A_1$, $\td{B}{\psi_1} \ensuremath{\Downarrow} B_1$,
$\lift{\tau}(\Psi_2,\td{A_1}{\psi_2},\td{A}{\psi_1\psi_2},\phi)$,
$\lift{\tau}(\Psi_2,\td{B_1}{\psi_2},\td{B}{\psi_1\psi_2},\phi')$, and
$\lift{\tau}(\Psi_2,\td{A_1}{\psi_2},\td{B_1}{\psi_2},\phi'')$. Then
$\phi=\phi'=\phi''$ and $\alpha$ exists uniquely. The proof uses the observation
that the following permissive form of transitivity holds for any functional PER
$R$: if $R(\Psi,A,B,\alpha)$ and $R(\Psi,B,C,\beta)$ then $R(\Psi,A,C,\alpha)$
and $\alpha=\beta$.
\begin{lemma}\label{lem:pty-evals}
If $\ensuremath{\mathsf{PTy}}(\tau)(\Psi,A,A,\alpha)$, then $A\ensuremath{\Downarrow} A_0$ and
$\ensuremath{\mathsf{PTy}}(\tau)(\Psi,A,A_0,\alpha)$.
\end{lemma}
\begin{proof}
Check for all $\tds{\Psi_1}{\psi_1}{\Psi}$ and $\tds{\Psi_2}{\psi_2}{\Psi_1}$
that $\lift{\tau}(\Psi_2,\td{A}{\psi_1\psi_2},\td{A_0}{\psi_1\psi_2},\phi)$ and
$\lift{\tau}(\Psi_2,\td{A_1}{\psi_2},\td{A_1'}{\psi_2},\phi')$ where
$\td{A}{\psi_1}\ensuremath{\Downarrow} A_1$ and $\td{A_0}{\psi_1}\ensuremath{\Downarrow} A_1'$.
The former holds by $\ensuremath{\mathsf{PTy}}(\tau)(\Psi,A,A,\alpha)$ at the substitutions
$\id$ and $\psi_1\psi_2$. For the latter, $\ensuremath{\mathsf{PTy}}(\tau)(\Psi,A,A,\alpha)$ at
$\psi_1,\id[\Psi_1]$ proves that $\lift{\tau}(\Psi_1,A_1,\td{A}{\psi_1},\_)$ and
at $\id,\psi_1$ proves $\lift{\tau}(\Psi_1,\td{A_0}{\psi_1},\td{A}{\psi_1},\_)$.
By transitivity, $\tau(\Psi_1,A_1,A_1',\_)$ so
$\ensuremath{\mathsf{PTy}}(\tau)(\Psi_1,A_1,A_1',\_)$ and thus
$\lift{\tau}(\Psi_2,\td{A_1}{\psi_2},\td{A_1'}{\psi_2},\phi')$ as required.
\end{proof}
\begin{lemma}\label{lem:cwftypep-evals-ceqtypep}
If $\cwftype{pre}{A}$, then $A\ensuremath{\Downarrow} A_0$ and $\ceqtype{pre}{A}{A_0}$.
\end{lemma}
\begin{proof}
By \cref{lem:pty-evals} we have $\ensuremath{\mathsf{PTy}}(\tau)(\Psi,A,A_0,\alpha)$; value-coherence
follows by $\cwftype{pre}{A}$.
\end{proof}
\begin{lemma}\label{lem:coftype-ceqtm}
If $\coftype{M}{A}$, $\coftype{N}{A}$, and $\lift{\vper{A}}(M,N)$, then
$\ceqtm{M}{N}{A}$.
\end{lemma}
\begin{proof}
We check for all $\tds{\Psi_1}{\psi_1}{\Psi}$ and $\tds{\Psi_2}{\psi_2}{\Psi_1}$
that
$\lift{\vper{A}}_{\psi_1\psi_2}(\td{M}{\psi_1\psi_2},\td{N}{\psi_1\psi_2})$; the
other needed relations follow from $\coftype{M}{A}$ and $\coftype{N}{A}$. By
$\cwftype{pre}{A}$, $\lift{\vper{A}}(M,N)$ implies $\ensuremath{\mathsf{Tm}}(\vper{A})(M_0,N_0)$ where
$M\ensuremath{\Downarrow} M_0$ and $N\ensuremath{\Downarrow} N_0$, hence
$\lift{\vper{A}}_{\psi_1\psi_2}(\td{M_0}{\psi_1\psi_2},\td{N_0}{\psi_1\psi_2})$.
By $\coftype{M}{A}$ we have
$\lift{\vper{A}}_{\psi_1\psi_2}(\td{M_0}{\psi_1\psi_2},\td{M}{\psi_1\psi_2})$
and similarly for $N$, so the result follows by transitivity.
\end{proof}
\begin{lemma}\label{lem:coftype-evals-ceqtm}
If $\coftype{M}{A}$, then $M\ensuremath{\Downarrow} M_0$ and $\ceqtm{M}{M_0}{A}$.
\end{lemma}
\begin{proof}
By $\coftype{M}{A}$, $M\ensuremath{\Downarrow} M_0$ and $\vper{A}(M_0,M_0)$. By $\cwftype{pre}{A}$,
$\coftype{M_0}{A}$, so the result follows by \cref{lem:coftype-ceqtm}.
\end{proof}
\begin{lemma}\label{lem:cwftypek-evals-ceqtypek}
If $\cwftype{Kan}{A}$, $\cwftype{Kan}{B}$, and for all $\tds{\Psi'}{\psi}{\Psi}$,
$\ceqtype{Kan}[\Psi']{A_\psi}{B_\psi}$ where
$\td{A}{\psi}\ensuremath{\Downarrow} A_\psi$ and
$\td{B}{\psi}\ensuremath{\Downarrow} B_\psi$, then
$\ceqtype{Kan}{A}{B}$.
\end{lemma}
\begin{proof}
By \cref{lem:cwftypep-evals-ceqtypep} we have
$\ceqtype{pre}[\Psi']{\td{A}{\psi}}{A_\psi}$ and
$\ceqtype{pre}[\Psi']{\td{B}{\psi}}{B_\psi}$ for all $\tds{\Psi'}{\psi}{\Psi}$;
thus $\ceqtype{pre}[\Psi']{\td{A}{\psi}}{\td{B}{\psi}}$ for all $\tds{\Psi'}{\psi}{\Psi}$, and it
suffices to establish that if
\begin{enumerate}
\item $\etc{r_i=r_i'}$ is valid,
\item $\ceqtm[\Psi']{M}{M'}{\td{A}{\psi}}$,
\item $\ceqtm[\Psi',y]<r_i=r_i',r_j=r_j'>{N_i}{N_j'}{\td{A}{\psi}}$
for any $i,j$, and
\item $\ceqtm[\Psi']<r_i=r_i'>{\dsubst{N_i}{r}{y}}{M}{\td{A}{\psi}}$
for any $i$,
\end{enumerate}
then
$\ceqtm[\Psi']{\Hcom*{\td{A}{\psi}}{r_i=r_i'}}%
{\Hcom{\td{B}{\psi}}{r}{r'}{M'}{\sys{r_i=r_i'}{y.N_i'}}}{\td{A}{\psi}}$.
We already know both terms are elements of this type (by \cref{def:kan} and
$\ceqtype{pre}[\Psi']{\td{A}{\psi}}{\td{B}{\psi}}$), so by
\cref{lem:coftype-ceqtm} it suffices to check that these terms are related by
$\lift{\vper{\td{A}{\psi}}}$ or equivalently $\lift{\vper{A_\psi}}$. This is
true because $\Hcom{\td{A}{\psi}}\ensuremath{\longmapsto}^* \Hcom{A_\psi}$,
$\Hcom{\td{B}{\psi}}\ensuremath{\longmapsto}^* \Hcom{B_\psi}$, and
by $\ceqtype{Kan}[\Psi']{A_\psi}{B_\psi}$ and
$\ceqtype{pre}[\Psi']{\td{A}{\psi}}{A_\psi}$,
$\ceqtm[\Psi']{\Hcom{A_\psi}}{\Hcom{B_\psi}}{A_\psi}$. The remaining $\Hcom$
equations of \cref{def:kan} follow by transitivity and
$\cwftype{Kan}[\Psi']{A_\psi}$; the $\Coe$ equations follow by a similar argument.
\end{proof}
In order to establish that a term is a pretype or element, one must frequently
reason about the evaluation behavior of its aspects. When all aspects compute in
lockstep, a \emph{head expansion} lemma applies; otherwise one must appeal to
its generalization, \emph{coherent expansion}:
\begin{lemma}\label{lem:cohexp-ceqtypep}
Assume we have $\wftm{A}$ and a family of terms $\{A^{\Psi'}_\psi\}_{\tds{\Psi'}{\psi}{\Psi}}$
such that for all $\tds{\Psi'}{\psi}{\Psi}$,
$\ceqtype{pre}[\Psi']{A^{\Psi'}_{\psi}}{\td{(A^{\Psi}_{\id})}{\psi}}$ and
$\td{A}{\psi} \ensuremath{\longmapsto}^* A^{\Psi'}_\psi$. Then $\ceqtype{pre}{A}{A^\Psi_{\id}}$.
\end{lemma}
\begin{proof}
We must show that for any
$\tds{\Psi_1}{\psi_1}{\Psi}$ and
$\tds{\Psi_2}{\psi_2}{\Psi_1}$,
$\td{A}{\psi_1}\ensuremath{\Downarrow} A_1$,
$\td{(A^\Psi_{\id})}{\psi_1}\ensuremath{\Downarrow} A_1'$, and
$\lift{\tau}(\Psi_2,-,-,\_)$ relates
$\td{A_1}{\psi_2}$,
$\td{A}{\psi_1\psi_2}$,
$\td{(A^\Psi_{\id})}{\psi_1\psi_2}$,
and $\td{A'_1}{\psi_2}$.
\begin{enumerate}
\item $\td{A}{\psi_1}\ensuremath{\Downarrow} A_1$ and
$\lift{\tau}(\Psi_2,\td{A_1}{\psi_2},\td{A}{\psi_1\psi_2},\phi)$.
We know $\td{A}{\psi_1} \ensuremath{\longmapsto}^* A^{\Psi_1}_{\psi_1}$ and
$\cwftype{pre}[\Psi_1]{A^{\Psi_1}_{\psi_1}}$, so
$\lift{\tau}(\Psi_2,\td{A_1}{\psi_2},\td{(A^{\Psi_1}_{\psi_1})}{\psi_2},\phi)$
where $A^{\Psi_1}_{\psi_1}\ensuremath{\Downarrow} A_1$.
By $\ceqtype{pre}[\Psi_1]{A^{\Psi_1}_{\psi_1}}{\td{(A^{\Psi}_{\id})}{\psi_1}}$
under $\psi_2$ and
$\ceqtype{pre}[\Psi_2]{\td{(A^{\Psi}_{\id})}{\psi_1\psi_2}}{A^{\Psi_2}_{\psi_1\psi_2}}$,
we have
$\ceqtype{pre}[\Psi_2]{\td{(A^{\Psi_1}_{\psi_1})}{\psi_2}}{A^{\Psi_2}_{\psi_1\psi_2}}$
and thus
$\lift{\tau}(\Psi_2,\td{(A^{\Psi_1}_{\psi_1})}{\psi_2},A^{\Psi_2}_{\psi_1\psi_2},\phi)$.
The result follows by transitivity and
$\td{A}{\psi_1\psi_2} \ensuremath{\longmapsto}^* A^{\Psi_2}_{\psi_1\psi_2}$.
\item $\lift{\tau}(\Psi_2,\td{A}{\psi_1\psi_2},\td{(A^\Psi_{\id})}{\psi_1\psi_2},\phi')$.
By $\ceqtype{pre}[\Psi_2]{A^{\Psi_2}_{\psi_1\psi_2}}{\td{(A^{\Psi}_{\id})}{\psi_1\psi_2}}$
we have
$\lift{\tau}(\Psi_2,A^{\Psi_2}_{\psi_1\psi_2},\td{(A^\Psi_{\id})}{\psi_1\psi_2},\phi')$;
the result follows by
$\td{A}{\psi_1\psi_2} \ensuremath{\longmapsto}^* A^{\Psi_2}_{\psi_1\psi_2}$.
\item $\td{(A^{\Psi}_{\id})}{\psi_1}\ensuremath{\Downarrow} A_1'$ and
$\lift{\tau}(\Psi_2,\td{(A^\Psi_{\id})}{\psi_1\psi_2},\td{A'_1}{\psi_2},\phi'')$.
Follows from $\cwftype{pre}{A^{\Psi}_{\id}}$.
\qedhere
\end{enumerate}
\end{proof}
\begin{lemma}\label{lem:cohexp-ceqtm}
Assume we have $\wftm{M}$, $\cwftype{pre}{A}$, and a family of terms
$\{M^{\Psi'}_\psi\}_{\tds{\Psi'}{\psi}{\Psi}}$ such that for all $\tds{\Psi'}{\psi}{\Psi}$,
$\ceqtm[\Psi']
{M^{\Psi'}_{\psi}}
{\td{(M^{\Psi}_{\id})}{\psi}}
{\td{A}{\psi}}$ and
$\td{M}{\psi} \ensuremath{\longmapsto}^* M^{\Psi'}_\psi$. Then
$\ceqtm{M}{M^\Psi_{\id}}{A}$.
\end{lemma}
\begin{proof}
We must show that for any
$\tds{\Psi_1}{\psi_1}{\Psi}$ and
$\tds{\Psi_2}{\psi_2}{\Psi_1}$,
$\td{M}{\psi_1}\ensuremath{\Downarrow} M_1$,
$\td{(M^\Psi_{\id})}{\psi_1}\ensuremath{\Downarrow} M'_1$, and
$\lift{\vper{A}}_{\psi_1\psi_2}$ relates
$\td{M_1}{\psi_2}$,
$\td{M}{\psi_1\psi_2}$,
$\td{(M^\Psi_{\id})}{\psi_1\psi_2}$,
and $\td{M'_1}{\psi_2}$.
\begin{enumerate}
\item $\td{M}{\psi_1}\ensuremath{\Downarrow} M_1$ and
$\lift{\vper{A}}_{\psi_1\psi_2}(\td{M_1}{\psi_2},\td{M}{\psi_1\psi_2})$.
We know $\td{M}{\psi_1} \ensuremath{\longmapsto}^* M^{\Psi_1}_{\psi_1}$ and
$\coftype[\Psi_1]{M^{\Psi_1}_{\psi_1}}{\td{A}{\psi_1}}$, so
$\lift{\vper{A}}_{\psi_1\psi_2}(\td{M_1}{\psi_2},\td{(M^{\Psi_1}_{\psi_1})}{\psi_2})$
where $M^{\Psi_1}_{\psi_1}\ensuremath{\Downarrow} M_1$.
By $\ceqtm[\Psi_1]
{M^{\Psi_1}_{\psi_1}}
{\td{(M^{\Psi}_{\id})}{\psi_1}}
{\td{A}{\psi_1}}$ under $\psi_2$ and
$\ceqtm[\Psi_2]
{\td{(M^{\Psi}_{\id})}{\psi_1\psi_2}}
{M^{\Psi_2}_{\psi_1\psi_2}}
{\td{A}{\psi_1\psi_2}}$, we have
$\ceqtm[\Psi_2]
{\td{(M^{\Psi_1}_{\psi_1})}{\psi_2}}
{M^{\Psi_2}_{\psi_1\psi_2}}
{\td{A}{\psi_1\psi_2}}$ and thus
$\lift{\vper{A}}_{\psi_1\psi_2}(\td{(M^{\Psi_1}_{\psi_1})}{\psi_2},M^{\Psi_2}_{\psi_1\psi_2})$.
The result follows by transitivity and
$\td{M}{\psi_1\psi_2} \ensuremath{\longmapsto}^* M^{\Psi_2}_{\psi_1\psi_2}$.
\item $\lift{\vper{A}}_{\psi_1\psi_2}(\td{M}{\psi_1\psi_2},\td{(M^\Psi_{\id})}{\psi_1\psi_2})$.
By $\ceqtm[\Psi_2]
{M^{\Psi_2}_{\psi_1\psi_2}}
{\td{(M^{\Psi}_{\id})}{\psi_1\psi_2}}
{\td{A}{\psi_1\psi_2}}$ we have
$\lift{\vper{A}}_{\psi_1\psi_2}(M^{\Psi_2}_{\psi_1\psi_2},\td{(M^\Psi_{\id})}{\psi_1\psi_2})$;
the result follows by
$\td{M}{\psi_1\psi_2} \ensuremath{\longmapsto}^* M^{\Psi_2}_{\psi_1\psi_2}$.
\item $\td{(M^{\Psi}_{\id})}{\psi_1}\ensuremath{\Downarrow} M_1'$ and
$\lift{\vper{A}}_{\psi_1\psi_2}(\td{(M^\Psi_{\id})}{\psi_1\psi_2},\td{M'_1}{\psi_2})$.
Follows from $\coftype{M^{\Psi}_{\id}}{A}$.
\qedhere
\end{enumerate}
\end{proof}
\begin{lemma}\label{lem:cohexp-ceqtypek}
Assume we have $\wftm{A}$ and a family of terms $\{A^{\Psi'}_\psi\}_{\tds{\Psi'}{\psi}{\Psi}}$
such that for all $\tds{\Psi'}{\psi}{\Psi}$,
$\ceqtype{Kan}[\Psi']{A^{\Psi'}_{\psi}}{\td{(A^{\Psi}_{\id})}{\psi}}$ and
$\td{A}{\psi} \ensuremath{\longmapsto}^* A^{\Psi'}_\psi$. Then $\ceqtype{Kan}{A}{A^\Psi_{\id}}$.
\end{lemma}
\begin{proof}
By \cref{lem:cohexp-ceqtypep}, $\ceqtype{pre}{A}{A^\Psi_{\id}}$; it suffices to
establish the conditions in \cref{def:kan}. First, assume $\tds{\Psi'}{\psi}{\Psi}$,
\begin{enumerate}
\item $\etc{r_i=r_i'}$ is valid,
\item $\ceqtm[\Psi']{M}{M'}{\td{A}{\psi}}$,
\item $\ceqtm[\Psi',y]<r_i=r_i',r_j=r_j'>{N_i}{N_j'}{\td{A}{\psi}}$
for any $i,j$, and
\item $\ceqtm[\Psi']<r_i=r_i'>{\dsubst{N_i}{r}{y}}{M}{\td{A}{\psi}}$
for any $i$,
\end{enumerate}
and show that
$\ceqtm[\Psi']{\Hcom*{\td{A}{\psi}}{r_i=r_i'}}%
{\Hcom{\td{(A^\Psi_{\id})}{\psi}}{r}{r'}{M'}{\sys{r_i=r_i'}{y.N_i'}}}{\td{A}{\psi}}$.
We apply \cref{lem:cohexp-ceqtm} to $\Hcom*{\td{A}{\psi}}{r_i=r_i'}$ and the
family
\[
\{ \Hcom{A^{\Psi''}_{\psi\psi'}}{\td{r}{\psi'}}{\td{r'}{\psi'}}%
{\td{M}{\psi'}}{\sys{\td{r_i}{\psi'}=\td{r_i'}{\psi'}}{y.\td{N_i}{\psi'}}}
\}^{\Psi''}_{\psi'}
\]
at $\cwftype{pre}[\Psi']{\td{A}{\psi}}$. We know $\Hcom{\td{A}{\psi\psi'}}
\ensuremath{\longmapsto}^* \Hcom{A^{\Psi''}_{\psi\psi'}}$ by $\td{A}{\psi\psi'} \ensuremath{\longmapsto}^*
A^{\Psi''}_{\psi\psi'}$, and
$\ceqtm[\Psi'']
{\Hcom{A^{\Psi''}_{\psi\psi'}}}
{\Hcom{\td{(A^{\Psi'}_{\psi})}{\psi'}}}
{\td{A}{\psi\psi'}}$ by
$\ceqtype{Kan}[\Psi'']{A^{\Psi''}_{\psi\psi'}}{\td{(A^{\Psi'}_{\psi})}{\psi'}}$ and
$\ceqtype{pre}[\Psi'']{A^{\Psi''}_{\psi\psi'}}{\td{A}{\psi\psi'}}$
(both by transitivity through $\td{(A^{\Psi}_{\id})}{\psi\psi'}$).
We conclude that
$\ceqtm[\Psi']{\Hcom{\td{A}{\psi}}}{\Hcom{A^{\Psi'}_{\psi}}}{\td{A}{\psi}}$, and
the desired result follows by
$\ceqtype{Kan}[\Psi']{A^{\Psi'}_{\psi}}{\td{(A^{\Psi}_{\id})}{\psi}}$.
The remaining $\Hcom$ equations of \cref{def:kan} follow by transitivity and
$\cwftype{Kan}{A^{\Psi}_{\id}}$.
Next, assuming $\tds{(\Psi',x)}{\psi}{\Psi}$ and
$\ceqtm[\Psi']{M}{M'}{\dsubst{\td{A}{\psi}}{r}{x}}$, show that
$\ceqtm[\Psi']{\Coe*{x.\td{A}{\psi}}}
{\Coe{x.\td{(A^\Psi_{\id})}{\psi}}{r}{r'}{M'}}%
{\dsubst{\td{A}{\psi}}{r'}{x}}$.
We apply \cref{lem:cohexp-ceqtm} to $\Coe*{x.\td{A}{\psi}}$ and
$\{ \Coe{x.A^{\Psi''}_{\psi\psi'}}{\td{r}{\psi'}}{\td{r'}{\psi'}}%
{\td{M}{\psi'}} \}^{\Psi''}_{\psi'}$
at $\cwftype{pre}[\Psi']{\dsubst{\td{A}{\psi}}{r'}{x}}$, using the same argument as
before; we conclude that
$\ceqtm[\Psi']{\Coe{x.\td{A}{\psi}}}{\Coe{x.A^{\Psi'}_{\psi}}}{\dsubst{\td{A}{\psi}}{r'}{x}}$,
and the desired result follows by
$\ceqtype{Kan}[\Psi',x]{A^{\Psi'}_{\psi}}{\td{(A^{\Psi}_{\id})}{\psi}}$.
The remaining $\Coe$ equation of \cref{def:kan} follows by transitivity and
$\cwftype{Kan}{A^{\Psi}_{\id}}$.
\end{proof}
\begin{lemma}[Head expansion]\label{lem:expansion}
~\begin{enumerate}
\item If $\cwftype{pre}{A'}$ and $A\ensuremath{\steps_\stable}^* A'$, then $\ceqtype{pre}{A}{A'}$.
\item If $\coftype{M'}{A}$ and $M\ensuremath{\steps_\stable}^* M'$, then $\ceqtm{M}{M'}{A}$.
\item If $\cwftype{Kan}{A'}$ and $A\ensuremath{\steps_\stable}^* A'$, then $\ceqtype{Kan}{A}{A'}$.
\end{enumerate}
\end{lemma}
\begin{proof}
~\begin{enumerate}
\item
By \cref{lem:cohexp-ceqtypep} with $A^{\Psi'}_\psi = \td{A'}{\psi}$, because
$\td{A}{\psi}\ensuremath{\longmapsto}^* \td{A'}{\psi}$ and
$\cwftype{pre}[\Psi']{\td{A'}{\psi}}$ for all $\psi$.
\item
By \cref{lem:cohexp-ceqtm} with $M^{\Psi'}_\psi = \td{M'}{\psi}$, because
$\td{M}{\psi}\ensuremath{\longmapsto}^* \td{M'}{\psi}$ and
$\coftype[\Psi']{\td{M'}{\psi}}{\td{A}{\psi}}$ for all $\psi$.
\item
By \cref{lem:cohexp-ceqtypek} with $A^{\Psi'}_\psi = \td{A'}{\psi}$, because
$\td{A}{\psi}\ensuremath{\longmapsto}^* \td{A'}{\psi}$ and
$\cwftype{Kan}[\Psi']{\td{A'}{\psi}}$ for all $\psi$.
\qedhere
\end{enumerate}
\end{proof}
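For example, $\beta$-steps such as
$\app{\lam{a}{M}}{N}\ensuremath{\steps_\stable}\subst{M}{N}{a}$ are cubically stable, so
\cref{lem:expansion} applies directly. In contrast, $\Fcom*{r_i=r_i'}$ is a
value whenever $r\neq r'$ and $r_i\neq r_i'$ for all $i$, yet a dimension
substitution can trigger either of its reduction rules (see \cref{sec:opsem});
such terms do not compute in lockstep with their aspects, which is why the
coherent expansion lemmas are needed.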
The $\Hcom$ operation implements \emph{homogeneous} composition, in the sense
that $A$ must be degenerate in the bound direction of the tubes. We can obtain
\emph{heterogeneous} composition, $\Com$, by combining $\Hcom$ and $\Coe$.
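Concretely, $\Com$ unfolds by coercing the cap and each tube into
$\dsubst{A}{r'}{y}$ and then composing homogeneously; this is its operational
rule, given in \cref{sec:opsem}:
\[
\Com*{y.A}{\xi_i} \ensuremath{\longmapsto}
\Hcom{\dsubst{A}{r'}{y}}{r}{r'}{\Coe{y.A}{r}{r'}{M}}{\sys{\xi_i}{y.\Coe{y.A}{y}{r'}{N_i}}}
\]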
\begin{theorem}\label{thm:com}
If $\ceqtype{Kan}[\Psi,y]{A}{B}$,
\begin{enumerate}
\item $\etc{r_i=r_i'}$ is valid,
\item $\ceqtm{M}{M'}{\dsubst{A}{r}{y}}$,
\item $\ceqtm[\Psi,y]<r_i=r_i',r_j=r_j'>{N_i}{N_j'}{A}$ for any $i,j$, and
\item $\ceqtm<r_i=r_i'>{\dsubst{N_i}{r}{y}}{M}{\dsubst{A}{r}{y}}$ for any $i$,
\end{enumerate}
then
\begin{enumerate}
\item
$\ceqtm{\Com*{y.A}{r_i=r_i'}}{\Com{y.B}{r}{r'}{M'}{\sys{r_i=r_i'}{y.N_i'}}}
{\dsubst{A}{r'}{y}}$;
\item if $r=r'$ then
$\ceqtm{\Com{y.A}{r}{r}{M}{\sys{r_i=r_i'}{y.N_i}}}{M}{\dsubst{A}{r}{y}}$; and
\item if $r_i = r_i'$ then
$\ceqtm{\Com*{y.A}{r_i=r_i'}}{\dsubst{N_i}{r'}{y}}{\dsubst{A}{r'}{y}}$.
\end{enumerate}
\end{theorem}
\begin{proof}
For all $\tds{\Psi'}{\psi}{(\Psi,y)}$ satisfying $r_i=r_i'$ and $r_j=r_j'$, we
know $\ceqtm[\Psi']{\td{N_i}{\psi}}{\td{N_j'}{\psi}}{\td{A}{\psi}}$. By
\cref{def:kan},
$\ceqtm[\Psi']
{\td{(\Coe{y.A}{y}{r'}{N_i})}{\psi}}
{\td{(\Coe{y.B}{y}{r'}{N_j'})}{\psi}}
{\td{\dsubst{A}{r'}{y}}{\psi}}$, and therefore
$\ceqtm[\Psi,y]<r_i=r_i',r_j=r_j'>{\Coe{y.A}{y}{r'}{N_i}}{\Coe{y.B}{y}{r'}{N_j'}}{A}$.
By a similar argument we conclude
$\ceqtm<r_i=r_i'>{\dsubst{(\Coe{y.A}{y}{r'}{N_i})}{r}{y}}%
{\Coe{y.A}{r}{r'}{M}}{\dsubst{A}{r'}{y}}$,
and by \cref{def:kan} directly,
$\ceqtm{\Coe{y.A}{r}{r'}{M}}{\Coe{y.B}{r}{r'}{M'}}{\dsubst{A}{r'}{y}}$.
By \cref{def:kan} we conclude
\begin{gather*}
{\Hcom{\dsubst{A}{r'}{y}}{r}{r'}%
{\Coe{y.A}{r}{r'}{M}}%
{\sys{r_i=r_i'}{y.\Coe{y.A}{y}{r'}{N_i}}}} \\
\ceqtm{{}}
{\Hcom{\dsubst{B}{r'}{y}}{r}{r'}%
{\Coe{y.B}{r}{r'}{M'}}%
{\sys{r_i=r_i'}{y.\Coe{y.B}{y}{r'}{N_i'}}}}
{\dsubst{A}{r'}{y}}.
\end{gather*}
Result (1) follows by \cref{lem:expansion} on each side.
Result (2) follows by \cref{lem:expansion} and, by \cref{def:kan} twice,
\[
\ceqtm
{\Hcom{\dsubst{A}{r'}{y}}{r'}{r'}%
{\Coe{y.A}{r'}{r'}{M}}%
{\sys{r_i=r_i'}{y.\Coe{y.A}{y}{r'}{N_i}}}}
{M}
{\dsubst{A}{r'}{y}}.
\]
Result (3) follows by \cref{lem:expansion} and, by \cref{def:kan} twice,
\[
\ceqtm
{\Hcom{\dsubst{A}{r'}{y}}{r}{r'}%
{\Coe{y.A}{r}{r'}{M}}%
{\sys{r_i=r_i'}{y.\Coe{y.A}{y}{r'}{N_i}}}}
{\dsubst{N_i}{r'}{y}}
{\dsubst{A}{r'}{y}}.
\qedhere
\]
\end{proof}
\section{Programming language}
\label{sec:opsem}
The programming language itself has two sorts---dimensions and terms---and
binders for both sorts. Terms are an ordinary untyped lambda calculus with
constructors; dimensions are either dimension constants ($0$ or $1$) or
dimension names ($x,y,\dots$), the latter behaving like nominal constants
\citep{pittsnominal}. Dimensions may appear in terms: for example, $\lp{r}$ is a
term when $r$ is a dimension. The operational semantics is defined on terms that
are closed with respect to term variables but may contain free dimension names.
Dimension names represent generic elements of an abstract interval whose end
points are notated $0$ and $1$. While one may sensibly substitute any dimension
for a dimension name, terms are \emph{not} to be understood solely in terms
of their dimensionally-closed instances (namely, their end points). Rather, a
term's dependence on dimension names is to be understood generically;
geometrically, one might imagine additional unnamed points in the interior of
the abstract interval.
\subsection{Terms}
\begin{align*}
M &:=
\picl{a}{A}{B} \mid
\sigmacl{a}{A}{B} \mid
\Path{x.A}{M}{N} \mid
\Eq{A}{M}{N} \mid
\ensuremath{\mathsf{void}} \mid
\ensuremath{\mathsf{nat}} \mid
\ensuremath{\mathsf{bool}} \\&\mid
\ensuremath{\mathsf{wbool}} \mid
\ensuremath{\mathbb{S}^1} \mid
\Upre \mid
\UKan \mid
\ua{r}{A,B,E} \mid
\uain{r}{M,N} \mid
\uaproj{r}{M,F} \\&\mid
\lam{a}{M} \mid
\app{M}{N} \mid
\pair{M}{N} \mid
\fst{M} \mid
\snd{M} \mid
\dlam{x}{M} \mid
\dapp{M}{r} \mid
\ensuremath{\star} \\&\mid
\ensuremath{\mathsf{z}} \mid
\suc{M} \mid
\natrec{M}{N_1}{n.a.N_2} \mid
\ensuremath{\mathsf{true}} \mid
\ensuremath{\mathsf{false}} \mid
\ifb{b.A}{M}{N_1}{N_2} \\&\mid
\ensuremath{\mathsf{base}} \mid
\lp{r} \mid
\Celim{c.A}{M}{N_1}{x.N_2} \\&\mid
\Coe*{x.A} \mid
\Hcom*{A}{r_i=r_i'} \\&\mid
\Com*{y.A}{r_i=r_i'} \mid
\Fcom*{r_i=r_i'} \\&\mid
\Ghcom*{A}{r_i=r_i'} \mid
\Gcom*{y.A}{r_i=r_i'} \\&\mid
\Kbox*{r_i=r_i'} \mid
\Kcap*{r_i=r_i'}
\end{align*}
We use capital letters like $M$, $N$, and $A$ to denote terms, $r$,
$r'$, $r_i$ to denote dimensions, $x$ to denote dimension names, $\ensuremath{\varepsilon}$
to denote dimension constants ($0$ or $1$), and $\overline{\e}$ to denote the
opposite dimension constant of $\ensuremath{\varepsilon}$. We write $x.-$ for dimension
binders, $a.-$ for term binders, and $\fd{M}$ for the set of dimension
names free in $M$. Additionally, in $\picl{a}{A}{B}$ and $\sigmacl{a}{A}{B}$,
$a$ is bound in $B$. Dimension substitution $\dsubst{M}{r}{x}$ and term
substitution $\subst{M}{N}{a}$ are defined in the usual way.
The final argument of most composition operators is a (possibly empty) list of
triples $(r_i,r_i',y.N_i)$ whose first two components are dimensions, and whose
third is a term (in some cases, with a bound dimension). We write
$\sys{r_i=r_i'}{y.N_i}$ to abbreviate such lists or transformations on such
lists, and $\xi_i$ to abbreviate $r_i=r_i'$ when their identity is irrelevant.
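For example, $\Hcom{A}{r}{r'}{M}{\tube{x=0}{y.N_0},\tube{x=1}{y.N_1}}$ is a
composition with exactly two tubes, and we write $\cdot$ for the empty list, as
in $\Ghcom{A}{r}{r'}{M}{\cdot}$.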
\begin{definition}\label{def:wftm}
We write $\wftm{M}$ when $M$ is a term with no free term variables, and
$\fd{M}\subseteq\Psi$. (Similarly, we write $\wfval{M}$ when $\wftm{M}$ and
$\isval{M}$.)
\end{definition}
\begin{definition}
A total dimension substitution $\tds{\Psi'}{\psi}{\Psi}$ assigns to each dimension name in
$\Psi$ either $0$, $1$, or a dimension name in $\Psi'$. It follows that if
$\wftm{M}$ then $\wftm[\Psi']{\td{M}{\psi}}$.
\end{definition}
\subsection{Operational semantics}
The following describes a deterministic weak head reduction evaluation strategy
for (term-)closed terms in the form of a transition system with two judgments:
\begin{enumerate}
\item $\isval{M}$, stating that $M$ is a \emph{value}, or
\emph{canonical form}.
\item $M\ensuremath{\longmapsto} M'$, stating that $M$ takes \emph{one step of
evaluation} to $M'$.
\end{enumerate}
These judgments are defined so that if $\isval{M}$, then $M\not\ensuremath{\longmapsto}$, but the
converse need not be the case. As usual, we write $M\ensuremath{\longmapsto}^* M'$ to mean that
$M$ transitions to $M'$ in zero or more steps. We say $M$ evaluates to $V$,
written $M \ensuremath{\Downarrow} V$, when $M\ensuremath{\longmapsto}^* V$ and $\isval{V}$.
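For example, by the rules below,
$\app{\lam{a}{\suc{a}}}{\ensuremath{\mathsf{z}}} \ensuremath{\longmapsto} \suc{\ensuremath{\mathsf{z}}}$ and
$\isval{\suc{\ensuremath{\mathsf{z}}}}$, so
$\app{\lam{a}{\suc{a}}}{\ensuremath{\mathsf{z}}} \ensuremath{\Downarrow} \suc{\ensuremath{\mathsf{z}}}$.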
The $\ensuremath{\longmapsto}$ judgment satisfies two additional conditions. Determinacy implies
that a term has at most one value; dimension preservation states that evaluation
does not introduce new (free) dimension names.
\begin{lemma}[Determinacy]
If $M\ensuremath{\longmapsto} M_1$ and $M\ensuremath{\longmapsto} M_2$, then $M_1 = M_2$.
\end{lemma}
\begin{lemma}[Dimension preservation]
If $M\ensuremath{\longmapsto} M'$, then $\fd{M'}\subseteq\fd{M}$.
\end{lemma}
Many rules below are annotated with $\cube$. Those rules define an additional
pair of judgments $\sisval{M}$ and $M\ensuremath{\steps_\stable} M'$ by replacing every occurrence
of $\ensuremath{\mathsf{val}}$ (resp., $\ensuremath{\longmapsto}$) in those rules with $\ensuremath{\mathsf{val}}_\cube$ (resp.,
$\ensuremath{\steps_\stable}$). These rules define the \emph{cubically-stable values} (resp.,
\emph{cubically-stable steps}), characterized by the following property:
\begin{lemma}[Cubical stability]
If $\wftm{M}$, then for any $\tds{\Psi'}{\psi}{\Psi}$,
\begin{enumerate}
\item if $\sisval{M}$ then $\isval{\td{M}{\psi}}$, and
\item if $M\ensuremath{\steps_\stable} M'$ then $\td{M}{\psi}\ensuremath{\longmapsto}\td{M'}{\psi}$.
\end{enumerate}
\end{lemma}
Cubically-stable values and steps are significant because they are unaffected by
the cubical apparatus. All standard operational semantics rules are
cubically-stable.
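For example, the circle rules below state $\isval{\lp{x}}$ but not
$\sisval{\lp{x}}$: substituting $0$ for $x$ yields $\lp{0}$, which steps to
$\ensuremath{\mathsf{base}}$.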
\paragraph{Types}
\begin{mathpar}
\Infer[\cube]
{ }
{\isval{\picl{a}{A}{B}}}
\and
\Infer[\cube]
{ }
{\isval{\sigmacl{a}{A}{B}}}
\and
\Infer[\cube]
{ }
{\isval{\Path{x.A}{M}{N}}}
\and
\Infer[\cube]
{ }
{\isval{\Eq{A}{M}{N}}}
\and
\Infer[\cube]
{ }
{\isval{\ensuremath{\mathsf{void}}}}
\and
\Infer[\cube]
{ }
{\isval{\ensuremath{\mathsf{nat}}}}
\and
\Infer[\cube]
{ }
{\isval{\ensuremath{\mathsf{bool}}}}
\and
\Infer[\cube]
{ }
{\isval{\ensuremath{\mathsf{wbool}}}}
\and
\Infer[\cube]
{ }
{\isval{\ensuremath{\mathbb{S}^1}}}
\and
\Infer[\cube]
{ }
{\isval{\Upre}}
\and
\Infer[\cube]
{ }
{\isval{\UKan}}
\and
\Infer
{ }
{\isval{\ua{x}{A,B,E}}}
\and
\Infer[\cube]
{ }
{\ua{0}{A,B,E} \ensuremath{\longmapsto} A}
\and
\Infer[\cube]
{ }
{\ua{1}{A,B,E} \ensuremath{\longmapsto} B}
\end{mathpar}
\paragraph{Kan operations}
\begin{mathpar}
\Infer[\cube]
{A\ensuremath{\longmapsto} A'}
{\Hcom*{A}{\xi_i} \ensuremath{\longmapsto} \Hcom*{A'}{\xi_i}}
\and
\Infer[\cube]
{A\ensuremath{\longmapsto} A'}
{\Coe*{x.A} \ensuremath{\longmapsto} \Coe*{x.A'}}
\and
\Infer[\cube]
{ }
{\Com*{y.A}{\xi_i} \ensuremath{\longmapsto}
\Hcom{\dsubst{A}{r'}{y}}{r}{r'}{\Coe{y.A}{r}{r'}{M}}{\sys{\xi_i}{y.\Coe{y.A}{y}{r'}{N_i}}}}
\and
\Infer[\cube]
{r = r'}
{\Fcom*{\xi_i} \ensuremath{\longmapsto} M}
\and
\Infer
{r\neq r' \\ r_i\neq r_i'\ (\forall i<j) \\ r_j = r_j'}
{\Fcom*{r_i=r_i'} \ensuremath{\longmapsto} \dsubst{N_j}{r'}{y}}
\and
\Infer
{r\neq r' \\ r_i\neq r_i'\ (\forall i)}
{\isval{\Fcom*{r_i=r_i'}}}
\and
\Infer[\cube]
{ }
{\Ghcom{A}{r}{r'}{M}{\cdot} \ensuremath{\longmapsto} M}
\and
\Infer[\cube]
{T_\ensuremath{\varepsilon} = \Hcom{A}{r}{z}{M}{
\tube{s'=\ensuremath{\varepsilon}}{y.N},
\tube{s'=\overline{\e}}{y.\Ghcom{A}{r}{y}{M}{\sys{\xi_i}{y.N_i}}},
\sys{\xi_i}{y.N_i}}}
{\Ghcom{A}{r}{r'}{M}{\tube{s=s'}{y.N},\sys{\xi_i}{y.N_i}} \ensuremath{\longmapsto} \\
\Hcom{A}{r}{r'}{M}{\sys{s=\ensuremath{\varepsilon}}{z.T_\ensuremath{\varepsilon}},\tube{s=s'}{y.N},\sys{\xi_i}{y.N_i}}}
\and
\Infer[\cube]
{ }
{\Gcom*{y.A}{\xi_i} \ensuremath{\longmapsto}
\Ghcom{\dsubst{A}{r'}{y}}{r}{r'}{\Coe{y.A}{r}{r'}{M}}{\sys{\xi_i}{y.\Coe{y.A}{y}{r'}{N_i}}}}
\end{mathpar}
\paragraph{Dependent function types}
\begin{mathpar}
\Infer[\cube]
{M \ensuremath{\longmapsto} M'}
{\app{M}{N} \ensuremath{\longmapsto} \app{M'}{N}}
\and
\Infer[\cube]
{ }
{\app{\lam{a}{M}}{N} \ensuremath{\longmapsto} \subst{M}{N}{a}}
\and
\Infer[\cube]
{ }
{\isval{\lam{a}{M}}}
\and
\Infer[\cube]
{ }
{\Hcom*{\picl{a}{A}{B}}{\xi_i} \ensuremath{\longmapsto}
\lam{a}{\Hcom{B}{r}{r'}{\app{M}{a}}{\sys{\xi_i}{y.\app{N_i}{a}}}}}
\and
\Infer[\cube]
{ }
{\Coe{x.\picl{a}{A}{B}}{r}{r'}{M} \ensuremath{\longmapsto}
\lam{a}{\Coe{x.\subst{B}{\Coe{x.A}{r'}{x}{a}}{a}}%
{r}{r'}{\app{M}{\Coe{x.A}{r'}{r}{a}}}}}
\end{mathpar}
\paragraph{Dependent pair types}
\begin{mathpar}
\Infer[\cube]
{M \ensuremath{\longmapsto} M'}
{\fst{M} \ensuremath{\longmapsto} \fst{M'}}
\and
\Infer[\cube]
{M \ensuremath{\longmapsto} M'}
{\snd{M} \ensuremath{\longmapsto} \snd{M'}}
\and
\Infer[\cube]
{ }
{\isval{\pair{M}{N}}}
\and
\Infer[\cube]
{ }
{\fst{\pair{M}{N}} \ensuremath{\longmapsto} M}
\and
\Infer[\cube]
{ }
{\snd{\pair{M}{N}} \ensuremath{\longmapsto} N}
\and
\Infer[\cube]
{F = \Hcom{A}{r}{z}{\fst{M}}{\sys{\xi_i}{y.\fst{N_i}}}}
{\Hcom*{\sigmacl{a}{A}{B}}{\xi_i}
\ensuremath{\longmapsto} \\
\pair{\Hcom{A}{r}{r'}{\fst{M}}{\sys{\xi_i}{y.\fst{N_i}}}}
{\Com{z.\subst{B}{F}{a}}{r}{r'}{\snd{M}}{\sys{\xi_i}{y.\snd{N_i}}}}}
\and
\Infer[\cube]
{ }
{\Coe{x.\sigmacl{a}{A}{B}}{r}{r'}{M} \ensuremath{\longmapsto}
\pair{\Coe{x.A}{r}{r'}{\fst{M}}}
{\Coe{x.\subst{B}{\Coe{x.A}{r}{x}{\fst{M}}}{a}}{r}{r'}{\snd{M}}}}
\end{mathpar}
\paragraph{Path types}
\begin{mathpar}
\Infer[\cube]
{M \ensuremath{\longmapsto} M'}
{\dapp{M}{r} \ensuremath{\longmapsto} \dapp{M'}{r}}
\and
\Infer[\cube]
{ }
{\dapp{(\dlam{x}{M})}{r} \ensuremath{\longmapsto} \dsubst{M}{r}{x}}
\and
\Infer[\cube]
{ }
{\isval{\dlam{x}{M}}}
\and
\Infer[\cube]
{ }
{\Hcom*{\Path{x.A}{P_0}{P_1}}{\xi_i} \ensuremath{\longmapsto}
\dlam{x}{\Hcom{A}{r}{r'}{\dapp{M}{x}}%
{\sys{x=\ensuremath{\varepsilon}}{\_.P_\ensuremath{\varepsilon}},\sys{\xi_i}{y.\dapp{N_i}{x}}}}}
\and
\Infer[\cube]
{ }
{\Coe{y.\Path{x.A}{P_0}{P_1}}{r}{r'}{M} \ensuremath{\longmapsto}
\dlam{x}{\Com{y.A}{r}{r'}{\dapp{M}{x}}{\sys{x=\ensuremath{\varepsilon}}{y.P_\ensuremath{\varepsilon}}}}}
\end{mathpar}
\paragraph{Equality types}
\begin{mathpar}
\Infer[\cube]
{ }
{\isval{\ensuremath{\star}}}
\and
\Infer[\cube]
{ }
{\Hcom*{\Eq{A}{E_0}{E_1}}{\xi_i} \ensuremath{\longmapsto} \star}
\end{mathpar}
\paragraph{Natural numbers}
\begin{mathpar}
\Infer[\cube]
{ }
{\isval{\ensuremath{\mathsf{z}}}}
\and
\Infer[\cube]
{ }
{\isval{\suc{M}}}
\and
\Infer[\cube]
{M \ensuremath{\longmapsto} M'}
{\natrec{M}{Z}{n.a.S} \ensuremath{\longmapsto} \natrec{M'}{Z}{n.a.S}}
\and
\Infer[\cube]
{ }
{\natrec{\ensuremath{\mathsf{z}}}{Z}{n.a.S} \ensuremath{\longmapsto} Z}
\and
\Infer[\cube]
{ }
{\natrec{\suc{M}}{Z}{n.a.S} \ensuremath{\longmapsto} \subst{\subst{S}{M}{n}}{\natrec{M}{Z}{n.a.S}}{a}}
\and
\Infer[\cube]
{ }
{\Hcom*{\ensuremath{\mathsf{nat}}}{\xi_i} \ensuremath{\longmapsto} M}
\and
\Infer[\cube]
{ }
{\Coe*{x.\ensuremath{\mathsf{nat}}} \ensuremath{\longmapsto} M}
\end{mathpar}
\paragraph{Booleans}
\begin{mathpar}
\Infer[\cube]
{ }
{\isval{\ensuremath{\mathsf{true}}}}
\and
\Infer[\cube]
{ }
{\isval{\ensuremath{\mathsf{false}}}}
\and
\Infer[\cube]
{M \ensuremath{\longmapsto} M'}
{\ifb{b.A}{M}{T}{F} \ensuremath{\longmapsto} \ifb{b.A}{M'}{T}{F}}
\and
\Infer[\cube]
{ }
{\ifb{b.A}{\ensuremath{\mathsf{true}}}{T}{F} \ensuremath{\longmapsto} T}
\and
\Infer[\cube]
{ }
{\ifb{b.A}{\ensuremath{\mathsf{false}}}{T}{F} \ensuremath{\longmapsto} F}
\and
\Infer[\cube]
{ }
{\Hcom*{\ensuremath{\mathsf{bool}}}{\xi_i} \ensuremath{\longmapsto} M}
\and
\Infer[\cube]
{ }
{\Coe*{x.\ensuremath{\mathsf{bool}}} \ensuremath{\longmapsto} M}
\end{mathpar}
\paragraph{Weak booleans}
\begin{mathpar}
\Infer[\cube]
{ }
{\Hcom*{\ensuremath{\mathsf{wbool}}}{\xi_i} \ensuremath{\longmapsto} \Fcom*{\xi_i}}
\and
\Infer
{r\neq r' \\ r_i\neq r_i'\ (\forall i) \\
H = \Fcom{r}{z}{M}{\sys{r_i=r_i'}{y.N_i}}}
{\ifb{b.A}{\Fcom*{r_i=r_i'}}{T}{F}
\ensuremath{\longmapsto} \\
\Com{z.\subst{A}{H}{b}}{r}{r'}{\ifb{b.A}{M}{T}{F}}{\sys{r_i=r_i'}{y.\ifb{b.A}{N_i}{T}{F}}}}
\and
\Infer[\cube]
{ }
{\Coe*{x.\ensuremath{\mathsf{wbool}}} \ensuremath{\longmapsto} M}
\end{mathpar}
\paragraph{Circle}
\begin{mathpar}
\Infer[\cube]
{ }
{\Hcom*{\ensuremath{\mathbb{S}^1}}{\xi_i} \ensuremath{\longmapsto} \Fcom*{\xi_i}}
\and
\Infer[\cube]
{ }
{\lp{\ensuremath{\varepsilon}} \ensuremath{\longmapsto} \ensuremath{\mathsf{base}}}
\and
\Infer[\cube]
{ }
{\isval{\ensuremath{\mathsf{base}}}}
\and
\Infer
{ }
{\isval{\lp{x}}}
\and
\Infer[\cube]
{M \ensuremath{\longmapsto} M'}
{\Celim{c.A}{M}{P}{x.L} \ensuremath{\longmapsto} \Celim{c.A}{M'}{P}{x.L}}
\and
\Infer[\cube]
{ }
{\Celim{c.A}{\ensuremath{\mathsf{base}}}{P}{x.L} \ensuremath{\longmapsto} P}
\and
\Infer
{ }
{\Celim{c.A}{\lp{w}}{P}{x.L} \ensuremath{\longmapsto} \dsubst{L}{w}{x}}
\and
\Infer
{r \neq r' \\ r_i\neq r_i'\ (\forall i) \\
F = \Fcom{r}{z}{M}{\sys{r_i=r_i'}{y.N_i}}}
{\Celim{c.A}{\Fcom*{r_i=r_i'}}{P}{x.L}
\ensuremath{\longmapsto} \\
\Com{z.\subst{A}{F}{c}}{r}{r'}{\Celim{c.A}{M}{P}{x.L}}{\sys{r_i=r_i'}{y.\Celim{c.A}{N_i}{P}{x.L}}}}
\and
\Infer[\cube]
{ }
{\Coe*{x.\ensuremath{\mathbb{S}^1}} \ensuremath{\longmapsto} M}
\end{mathpar}
\paragraph{Univalence}\
\begin{mathparpagebreakable}
\Infer
{ }
{\isval{\uain{x}{M,N}}}
\and
\Infer[\cube]
{ }
{\uain{0}{M,N} \ensuremath{\longmapsto} M}
\and
\Infer[\cube]
{ }
{\uain{1}{M,N} \ensuremath{\longmapsto} N}
\and
\Infer[\cube]
{ }
{\uaproj{0}{M,F} \ensuremath{\longmapsto} \app{F}{M}}
\and
\Infer[\cube]
{ }
{\uaproj{1}{M,F} \ensuremath{\longmapsto} M}
\and
\Infer
{M \ensuremath{\longmapsto} M'}
{\uaproj{x}{M,F} \ensuremath{\longmapsto} \uaproj{x}{M',F}}
\and
\Infer
{ }
{\uaproj{x}{\uain{x}{M,N},F} \ensuremath{\longmapsto} N}
\and
\Infer
{O = \Hcom{A}{r}{y}{M}{\sys{\xi_i}{y.N_i}} \\
\etc{T} =
\tube{x=0}{y.\app{\fst{E}}{O}},
\tube{x=1}{y.\Hcom{B}{r}{y}{M}{\sys{\xi_i}{y.N_i}}}}
{\Hcom*{\ua{x}{A,B,E}}{\xi_i}
\ensuremath{\longmapsto} \\
\uain{x}{\dsubst{O}{r'}{y},
\Hcom{B}{r}{r'}{\uaproj{x}{M,\fst{E}}}{\sys{\xi_i}{y.\uaproj{x}{N_i,\fst{E}}},\etc{T}}}}
\and
\Infer[\cube]
{ }
{\Coe{x.\ua{x}{A,B,E}}{0}{r'}{M} \ensuremath{\longmapsto}
\uain{r'}{M,\Coe{x.B}{0}{r'}{\app{\fst{\dsubst{E}{0}{x}}}{M}}}}
\and
\Infer[\cube]
{O = \fst{\app{\snd{\dsubst{E}{r'}{x}}}{\Coe{x.B}{1}{r'}{N}}} \\
P = \Hcom{\dsubst{B}{r'}{x}}{1}{0}{\Coe{x.B}{1}{r'}{N}}{
\tube{r'=0}{y.\dapp{\snd{O}}{y}},
\tube{r'=1}{\_.\Coe{x.B}{1}{r'}{N}}}}
{\Coe{x.\ua{x}{A,B,E}}{1}{r'}{N} \ensuremath{\longmapsto}
\uain{r'}{\fst{O},P}}
\and
\Infer
{O_\ensuremath{\varepsilon} = \uaproj{w}{\Coe{x.\ua{x}{A,B,E}}{\ensuremath{\varepsilon}}{w}{M},\fst{\dsubst{E}{w}{x}}} \\
P = \Com{x.B}{y}{x}{\uaproj{y}{M,\fst{\dsubst{E}{y}{x}}}}{\etc{\tube{y=\ensuremath{\varepsilon}}{w.O_\ensuremath{\varepsilon}}}} \\
Q_\ensuremath{\varepsilon}[a] = \pair%
{\Coe{y.\dsubst{A}{0}{x}}{\ensuremath{\varepsilon}}{y}{a}}%
{\dlam{z}{\Com{y.\dsubst{B}{0}{x}}{\ensuremath{\varepsilon}}{y}{\dsubst{\dsubst{P}{0}{x}}{\ensuremath{\varepsilon}}{y}}%
{\etc{U}}}} \\
\etc{U} =
\tube{z=0}{y.\app{\fst{\dsubst{E}{0}{x}}}{\Coe{y.\dsubst{A}{0}{x}}{\ensuremath{\varepsilon}}{y}{a}}},
\tube{z=1}{y.\dsubst{P}{0}{x}} \\
R = \dapp{\app{\app{\snd{\app{\snd{\dsubst{E}{0}{x}}}{\dsubst{P}{0}{x}}}}{Q_0[\dsubst{M}{0}{y}]}}%
{Q_1[\dsubst{(\Coe{x.\ua{x}{A,B,E}}{1}{0}{M})}{1}{y}]}}{y} \\
\etc{T} =
\etc{\tube{y=\ensuremath{\varepsilon}}{\_.\dsubst{O_\ensuremath{\varepsilon}}{r'}{w}}},
\tube{y=r'}{\_.\uaproj{r'}{M,\fst{\dsubst{E}{r'}{x}}}},
\tube{r'=0}{z.\dapp{\snd{R}}{z}}}
{\Coe{x.\ua{x}{A,B,E}}{y}{r'}{M} \ensuremath{\longmapsto}
\uain{r'}{\fst{R},\Hcom{\dsubst{B}{r'}{x}}{1}{0}{\dsubst{P}{r'}{x}}{\etc{T}}}}
\and
\Infer
{x\neq y \\
\etc{T} =
\tube{x=0}{y.\app{\fst{E}}{\Coe{y.A}{r}{y}{M}}},
\tube{x=1}{y.\Coe{y.B}{r}{y}{M}}}
{\Coe{y.\ua{x}{A,B,E}}{r}{r'}{M} \ensuremath{\longmapsto}
\uain{x}{\Coe{y.A}{r}{r'}{M},\Com{y.B}{r}{r'}{\uaproj{x}{M,\fst{\dsubst{E}{r}{y}}}}{\etc{T}}}}
\end{mathparpagebreakable}
\paragraph{Universes}\
\begin{mathparpagebreakable}
\Infer[\cube]
{ }
{\Hcom*{\UKan}{\xi_i} \ensuremath{\longmapsto} \Fcom*{\xi_i}}
\and
\Infer[\cube]
{ }
{\Coe*{x.\Ux} \ensuremath{\longmapsto} M}
\and
\Infer[\cube]
{r = r'}
{\Kbox*{\xi_i} \ensuremath{\longmapsto} M}
\and
\Infer
{r\neq r' \\ r_i\neq r_i'\ (\forall i<j) \\ r_j = r_j'}
{\Kbox*{r_i=r_i'} \ensuremath{\longmapsto} N_j}
\and
\Infer
{r\neq r' \\ r_i\neq r_i'\ (\forall i)}
{\isval{\Kbox*{r_i=r_i'}}}
\and
\Infer[\cube]
{r = r'}
{\Kcap*{\xi_i} \ensuremath{\longmapsto} M}
\and
\Infer
{r\neq r' \\ r_i\neq r_i'\ (\forall i<j) \\ r_j = r_j'}
{\Kcap*{r_i=r_i'} \ensuremath{\longmapsto} \Coe{y.B_j}{r'}{r}{M}}
\and
\Infer
{r\neq r' \\ r_i\neq r_i'\ (\forall i) \\ M \ensuremath{\longmapsto} M'}
{\Kcap{r}{r'}{M}{\sys{r_i=r_i'}{y.B_i}} \ensuremath{\longmapsto}
\Kcap{r}{r'}{M'}{\sys{r_i=r_i'}{y.B_i}}}
\and
\Infer
{r\neq r' \\ r_i\neq r_i'\ (\forall i)}
{\Kcap{r}{r'}{\Kbox*{\xi_i}}{\sys{r_i=r_i'}{y.B_i}} \ensuremath{\longmapsto} M}
\and
\Infer
{s\neq s' \\
s_j\neq s_j'\ (\forall j) \\
P_j = \Hcom{B_j}{r}{r'}{\Coe{z.B_j}{s'}{z}{M}}{
\sys{r_i=r_i'}{y.\Coe{z.B_j}{s'}{z}{N_i}}} \\
F[c] = \Hcom{A}{s'}{z}{\Kcap{s}{s'}{c}{\sys{s_j=s_j'}{z.B_j}}}{
\sys{s_j=s_j'}{z'.\Coe{z.B_j}{z'}{s}{\Coe{z.B_j}{s'}{z'}{c}}}} \\
O = \Hcom{A}{r}{r'}{\dsubst{(F[M])}{s}{z}}{\sys{r_i=r_i'}{y.\dsubst{(F[N_i])}{s}{z}}} \\
Q = \Hcom{A}{s}{s'}{O}{
\sys{r_i=r_i'}{z.F[\dsubst{N_i}{r'}{y}]},
\sys{s_j=s_j'}{z.\Coe{z.B_j}{z}{s}{P_j}},
\tube{r=r'}{z.F[M]}}}
{\Hcom{\Fcom{s}{s'}{A}{\sys{s_j=s_j'}{z.B_j}}}{r}{r'}{M}{\sys{r_i=r_i'}{y.N_i}} \ensuremath{\longmapsto}
\Kbox{s}{s'}{Q}{\sys{s_j=s_j'}{\dsubst{P_j}{s'}{z}}}}
\and
\Infer
{s\neq s' \\
s_i\neq s_i'\ (\forall i) \\
N_i = \Coe{z.B_i}{s'}{z}{\Coe{x.\dsubst{B_i}{s'}{z}}{r}{x}{M}} \\
O = \dsubst{(\Hcom{A}{s'}{z}{\Kcap{s}{s'}{M}{\sys{s_i=s_i'}{z.B_i}}}{
\sys{s_i=s_i'}{z.\Coe{z.B_i}{z}{s}{N_i}}})}{r}{x} \\
P = \Gcom{x.A}{r}{r'}{\dsubst{O}{\dsubst{s}{r}{x}}{z}}{
\st{\sys{s_i=s_i'}{x.\dsubst{N_i}{s}{z}}}{x\ensuremath{\mathbin{\#}} s_i,s_i'},
\st{\tube{s=s'}{x.\Coe{x.A}{r}{x}{M}}}{x\ensuremath{\mathbin{\#}} s,s'}} \\
Q_k = \Gcom{z.\dsubst{B_k}{r'}{x}}{\dsubst{s}{r'}{x}}{z}{P}{
\st{\sys{s_i=s_i'}{z.\dsubst{N_i}{r'}{x}}}{x\ensuremath{\mathbin{\#}} s_i,s_i'},
\tube{r=r'}{z.\dsubst{N_k}{r'}{x}}}}
{\Coe{x.\Fcom{s}{s'}{A}{\sys{s_i=s_i'}{z.B_i}}}{r}{r'}{M} \ensuremath{\longmapsto} \\
\dsubst{(\Kbox{s}{s'}{\Hcom{A}{s}{s'}{P}{
\sys{s_i=s_i'}{z.\Coe{z.B_i}{z}{s}{Q_i}},
\tube{r=r'}{z.O}}}{\sys{s_i=s_i'}{\dsubst{Q_i}{s'}{z}}})}{r'}{x}}
\end{mathparpagebreakable}
\section{Rules}
\label{sec:rules}
In this section we collect the rules proven in \cref{sec:meanings,sec:types}
(relative to $\pre\tau_\omega$) for easy reference. Note, however, that these
rules do not constitute our higher type theory, which was defined in
\cref{sec:typesys,sec:meanings} and whose properties were verified in
\cref{sec:types}.
One can settle on a different collection of rules depending on one's needs.
For example, the \textsc{{\color{red}Red}PRL}{} proof assistant \citep{redprl} based on
this paper uses a sequent calculus rather than natural deduction, judgments
without any presuppositions, and a unified context for dimensions and terms.
For the sake of concision and clarity, we state the following rules in
\emph{local form}, extending them to \emph{global form} by \emph{uniformity},
also called \emph{naturality}. (This format was suggested by
\citet{martin1984intuitionistic}, itself inspired by Gentzen's original concept
of natural deduction.)
While the rules in \cref{sec:types} are stated only for closed terms, the
corresponding generalizations to open-term sequents follow by the definition of
the open judgments, the fact that the introduction and elimination rules respect
equality (proven in \cref{sec:types}), and the fact that all substitutions
commute with term formers.
In the rules below, $\Psi$ and $\Xi$ are unordered sets, and the equations in
$\Xi$ are also unordered. $\ensuremath{\mathcal{J}}$ stands for any type equality or element equality
judgment, and $\kappa$ for either $\mathsf{pre}$ or $\mathsf{Kan}$. The
$\ensuremath{\steps_\stable}$ judgment is the \emph{cubically-stable stepping} relation defined in
\cref{sec:opsem}.
\paragraph{Structural rules}
\begin{mathpar}
\Infer
{\cwftype{\kappa}{A}}
{\oftype{\oft{a}{A}}{a}{A}}
\and
\Infer
{\judg{\ensuremath{\mathcal{J}}} \\
\cwftype{\kappa}{A}}
{\ctx{\oft aA} \judg{\ensuremath{\mathcal{J}}}}
\and
\Infer
{\judg{\ensuremath{\mathcal{J}}} \\
\tds{\Psi'}{\psi}{\Psi}}
{\judg[\Psi']{\td{\ensuremath{\mathcal{J}}}{\psi}}}
\and
\Infer
{\ceqtype{Kan}{A}{A'}}
{\ceqtype{pre}{A}{A'}}
\and
\Infer
{\ceqtype{\kappa}{A}{A'}}
{\ceqtype{\kappa}{A'}{A}}
\and
\Infer
{\ceqtype{\kappa}{A}{A'} \\
\ceqtype{\kappa}{A'}{A''}}
{\ceqtype{\kappa}{A}{A''}}
\and
\Infer
{\ceqtm{M'}{M}{A}}
{\ceqtm{M}{M'}{A}}
\and
\Infer
{\ceqtm{M}{M'}{A} \\
\ceqtm{M'}{M''}{A}}
{\ceqtm{M}{M''}{A}}
\and
\Infer
{\ceqtm{M}{M'}{A} \\
\ceqtype{\kappa}{A}{A'}}
{\ceqtm{M}{M'}{A'}}
\and
\Infer
{\eqtype{\kappa}{\oft{a}{A}}{B}{B'} \\
\ceqtm{N}{N'}{A}}
{\ceqtype{\kappa}{\subst{B}{N}{a}}{\subst{B'}{N'}{a}}}
\and
\Infer
{\eqtm{\oft{a}{A}}{M}{M'}{B} \\
\ceqtm{N}{N'}{A}}
{\ceqtm{\subst{M}{N}{a}}{\subst{M'}{N'}{a}}{\subst{B}{N}{a}}}
\end{mathpar}
\paragraph{Restriction rules}
\begin{mathpar}
\Infer
{\judg{\ensuremath{\mathcal{J}}}}
{\judg<\cdot>{\ensuremath{\mathcal{J}}}}
\and
\Infer
{\judg<\Xi>{\ensuremath{\mathcal{J}}}}
{\judg<\Xi,\ensuremath{\varepsilon}=\ensuremath{\varepsilon}>{\ensuremath{\mathcal{J}}}}
\and
\Infer
{ }
{\judg<\Xi,\ensuremath{\varepsilon}=\overline{\e}>{\ensuremath{\mathcal{J}}}}
\and
\Infer
{\judg<\dsubst{\Xi}{r}{x}>{\dsubst{\ensuremath{\mathcal{J}}}{r}{x}}}
{\judg[\Psi,x]<\Xi,x=r>{\ensuremath{\mathcal{J}}}}
\end{mathpar}
\paragraph{Computation rules}
\begin{mathpar}
\Infer
{\ceqtype{\kappa}{A'}{B} \\
A \ensuremath{\steps_\stable} A'}
{\ceqtype{\kappa}{A}{B}}
\and
\Infer
{\ceqtm{M'}{N}{A} \\
M \ensuremath{\steps_\stable} M'}
{\ceqtm{M}{N}{A}}
\end{mathpar}
\paragraph{Kan conditions}\
\begin{mathparpagebreakable}
\Infer
{r_i = r_j \\
r_i' = 0 \\
r_j' = 1}
{\wfshape{\etc{r_i = r_i'}}}
\and
\Infer
{r_i = r_i'}
{\wfshape{\etc{r_i = r_i'}}}
\and
\Infer
{ {\begin{array}{ll}
&\wfshape{\etc{r_i = r_i'}} \\
&\ceqtype{Kan}{A}{A'} \\
&\ceqtm{M}{M'}{A} \\
(\forall i,j) &\ceqtm[\Psi,y]<r_i = r_i',r_j = r_j'>{N_i}{N_j'}{A} \\
(\forall i) &\ceqtm<r_i = r_i'>{\dsubst{N_i}{r}{y}}{M}{A}
\end{array}}}
{\ceqtm{\Hcom*{A}{r_i=r_i'}}{\Hcom{A'}{r}{r'}{M'}{\sys{r_i=r_i'}{y.N_i'}}}{A}}
\and
\Infer
{ {\begin{array}{ll}
&\wfshape{\etc{r_i = r_i'}} \\
&\cwftype{Kan}{A} \\
&\coftype{M}{A} \\
(\forall i,j) &\ceqtm[\Psi,y]<r_i = r_i',r_j = r_j'>{N_i}{N_j}{A} \\
(\forall i) &\ceqtm<r_i = r_i'>{\dsubst{N_i}{r}{y}}{M}{A}
\end{array}}}
{\ceqtm{\Hcom{A}{r}{r}{M}{\sys{r_i=r_i'}{y.N_i}}}{M}{A}}
\and
\Infer
{ {\begin{array}{ll}
& r_i = r_i' \\
&\cwftype{Kan}{A} \\
&\coftype{M}{A} \\
(\forall i,j) &\ceqtm[\Psi,y]<r_i = r_i',r_j = r_j'>{N_i}{N_j}{A} \\
(\forall i) &\ceqtm<r_i = r_i'>{\dsubst{N_i}{r}{y}}{M}{A}
\end{array}}}
{\ceqtm{\Hcom*{A}{r_i=r_i'}}{\dsubst{N_i}{r'}{y}}{A}}
\and
\Infer
{\ceqtype{Kan}[\Psi,x]{A}{A'} \\
\ceqtm{M}{M'}{\dsubst{A}{r}{x}}}
{\ceqtm{\Coe*{x.A}}{\Coe{x.A'}{r}{r'}{M'}}{\dsubst{A}{r'}{x}}}
\and
\Infer
{\cwftype{Kan}[\Psi,x]{A} \\
\coftype{M}{\dsubst{A}{r}{x}}}
{\ceqtm{\Coe{x.A}{r}{r}{M}}{M}{\dsubst{A}{r}{x}}}
\and
\Infer
{ {\begin{array}{ll}
&\wfshape{\etc{r_i = r_i'}} \\
&\ceqtype{Kan}[\Psi,y]{A}{A'} \\
&\ceqtm{M}{M'}{\dsubst{A}{r}{y}} \\
(\forall i,j) &\ceqtm[\Psi,y]<r_i = r_i',r_j = r_j'>{N_i}{N_j'}{A} \\
(\forall i) &\ceqtm<r_i = r_i'>{\dsubst{N_i}{r}{y}}{M}{\dsubst{A}{r}{y}}
\end{array}}}
{\ceqtm{\Com*{y.A}{r_i=r_i'}}{\Com{y.A'}{r}{r'}{M'}{\sys{r_i=r_i'}{y.N_i'}}}{\dsubst{A}{r'}{y}}}
\and
\Infer
{ {\begin{array}{ll}
&\wfshape{\etc{r_i = r_i'}} \\
&\cwftype{Kan}[\Psi,y]{A} \\
&\coftype{M}{\dsubst{A}{r}{y}} \\
(\forall i,j) &\ceqtm[\Psi,y]<r_i = r_i',r_j = r_j'>{N_i}{N_j}{A} \\
(\forall i) &\ceqtm<r_i = r_i'>{\dsubst{N_i}{r}{y}}{M}{\dsubst{A}{r}{y}}
\end{array}}}
{\ceqtm{\Com{y.A}{r}{r}{M}{\sys{r_i=r_i'}{y.N_i}}}{M}{\dsubst{A}{r}{y}}}
\and
\Infer
{ {\begin{array}{ll}
& r_i = r_i' \\
&\cwftype{Kan}[\Psi,y]{A} \\
&\coftype{M}{\dsubst{A}{r}{y}} \\
(\forall i,j) &\ceqtm[\Psi,y]<r_i = r_i',r_j = r_j'>{N_i}{N_j}{A} \\
(\forall i) &\ceqtm<r_i = r_i'>{\dsubst{N_i}{r}{y}}{M}{\dsubst{A}{r}{y}}
\end{array}}}
{\ceqtm{\Com*{y.A}{r_i=r_i'}}{\dsubst{N_i}{r'}{y}}{\dsubst{A}{r'}{y}}}
\end{mathparpagebreakable}
\paragraph{Dependent function types}
\begin{mathpar}
\Infer
{\ceqtype{\kappa}{A}{A'} \\
\eqtype{\kappa}{\oft aA}{B}{B'}}
{\ceqtype{\kappa}{\picl{a}{A}{B}}{\picl{a}{A'}{B'}}}
\and
\Infer
{\eqtm{\oft aA}{M}{M'}{B}}
{\ceqtm{\lam{a}{M}}{\lam{a}{M'}}{\picl{a}{A}{B}}}
\and
\Infer
{\ceqtm{M}{M'}{\picl{a}{A}{B}} \\
\ceqtm{N}{N'}{A}}
{\ceqtm{\app{M}{N}}{\app{M'}{N'}}{\subst{B}{N}{a}}}
\and
\Infer
{\oftype{\oft aA}{M}{B} \\
\coftype{N}{A}}
{\ceqtm{\app{\lam{a}{M}}{N}}{\subst{M}{N}{a}}{\subst{B}{N}{a}}}
\and
\Infer
{\coftype{M}{\picl{a}{A}{B}}}
{\ceqtm{M}{\lam{a}{\app{M}{a}}}{\picl{a}{A}{B}}}
\end{mathpar}
\paragraph{Dependent pair types}
\begin{mathpar}
\Infer
{\ceqtype{\kappa}{A}{A'} \\
\eqtype{\kappa}{\oft aA}{B}{B'}}
{\ceqtype{\kappa}{\sigmacl{a}{A}{B}}{\sigmacl{a}{A'}{B'}}}
\and
\Infer
{\ceqtm{M}{M'}{A} \\
\ceqtm{N}{N'}{\subst{B}{M}{a}}}
{\ceqtm{\pair{M}{N}}{\pair{M'}{N'}}{\sigmacl{a}{A}{B}}}
\and
\Infer
{\ceqtm{P}{P'}{\sigmacl{a}{A}{B}}}
{\ceqtm{\fst{P}}{\fst{P'}}{A}}
\and
\Infer
{\ceqtm{P}{P'}{\sigmacl{a}{A}{B}}}
{\ceqtm{\snd{P}}{\snd{P'}}{\subst{B}{\fst{P}}{a}}}
\and
\Infer
{\coftype{M}{A}}
{\ceqtm{\fst{\pair{M}{N}}}{M}{A}}
\and
\Infer
{\coftype{N}{B}}
{\ceqtm{\snd{\pair{M}{N}}}{N}{B}}
\and
\Infer
{\coftype{P}{\sigmacl{a}{A}{B}}}
{\ceqtm{P}{\pair{\fst{P}}{\snd{P}}}{\sigmacl{a}{A}{B}}}
\end{mathpar}
\paragraph{Path types}
\begin{mathpar}
\Infer
{\ceqtype{\kappa}[\Psi,x]{A}{A'} \\
(\forall\ensuremath{\varepsilon})\ \ceqtm{P_\ensuremath{\varepsilon}}{P_\ensuremath{\varepsilon}'}{\dsubst{A}{\ensuremath{\varepsilon}}{x}}}
{\ceqtype{\kappa}{\Path{x.A}{P_0}{P_1}}{\Path{x.A'}{P_0'}{P_1'}}}
\and
\Infer
{\ceqtm[\Psi,x]{M}{M'}{A} \\
(\forall\ensuremath{\varepsilon})\ \ceqtm{\dsubst{M}{\ensuremath{\varepsilon}}{x}}{P_\ensuremath{\varepsilon}}{\dsubst{A}{\ensuremath{\varepsilon}}{x}}}
{\ceqtm{\dlam{x}{M}}{\dlam{x}{M'}}{\Path{x.A}{P_0}{P_1}}}
\and
\Infer
{\ceqtm{M}{M'}{\Path{x.A}{P_0}{P_1}}}
{\ceqtm{\dapp{M}{r}}{\dapp{M'}{r}}{\dsubst{A}{r}{x}}}
\and
\Infer
{\coftype{M}{\Path{x.A}{P_0}{P_1}}}
{\ceqtm{\dapp{M}{\ensuremath{\varepsilon}}}{P_\ensuremath{\varepsilon}}{\dsubst{A}{\ensuremath{\varepsilon}}{x}}}
\and
\Infer
{\coftype[\Psi,x]{M}{A}}
{\ceqtm{\dapp{(\dlam{x}{M})}{r}}{\dsubst{M}{r}{x}}{\dsubst{A}{r}{x}}}
\and
\Infer
{\coftype{M}{\Path{x.A}{P_0}{P_1}}}
{\ceqtm{M}{\dlam{x}{(\dapp{M}{x})}}{\Path{x.A}{P_0}{P_1}}}
\end{mathpar}
\paragraph{Equality pretypes}
\begin{mathpar}
\Infer
{\ceqtype{pre}{A}{A'} \\
\ceqtm{M}{M'}{A} \\
\ceqtm{N}{N'}{A}}
{\ceqtype{pre}{\Eq{A}{M}{N}}{\Eq{A'}{M'}{N'}}}
\and
\Infer
{\ceqtm{M}{N}{A}}
{\coftype{\ensuremath{\star}}{\Eq{A}{M}{N}}}
\and
\Infer
{\coftype{E}{\Eq{A}{M}{N}}}
{\ceqtm{M}{N}{A}}
\and
\Infer
{\coftype{E}{\Eq{A}{M}{N}}}
{\ceqtm{E}{\ensuremath{\star}}{\Eq{A}{M}{N}}}
\end{mathpar}
\paragraph{Void}
\begin{mathpar}
\Infer
{ }
{\cwftype{Kan}{\ensuremath{\mathsf{void}}}}
\and
\Infer
{\coftype{M}{\ensuremath{\mathsf{void}}}}
{\judg{\ensuremath{\mathcal{J}}}}
\end{mathpar}
\paragraph{Natural numbers}
\begin{mathpar}
\Infer
{ }
{\cwftype{Kan}{\ensuremath{\mathsf{nat}}}}
\and
\Infer
{ }
{\coftype{\ensuremath{\mathsf{z}}}{\ensuremath{\mathsf{nat}}}}
\and
\Infer
{\ceqtm{M}{M'}{\ensuremath{\mathsf{nat}}}}
{\ceqtm{\suc{M}}{\suc{M'}}{\ensuremath{\mathsf{nat}}}}
\and
\Infer
{\wftype{\kappa}{\oft{n}{\ensuremath{\mathsf{nat}}}}{A} \\
\ceqtm{M}{M'}{\ensuremath{\mathsf{nat}}} \\
\ceqtm{Z}{Z'}{\subst{A}{\ensuremath{\mathsf{z}}}{n}} \\
\eqtm{\oft{n}{\ensuremath{\mathsf{nat}}},\oft{a}{A}}{S}{S'}{\subst{A}{\suc{n}}{n}}}
{\ceqtm{\natrec{M}{Z}{n.a.S}}{\natrec{M'}{Z'}{n.a.S'}}{\subst{A}{M}{n}}}
\and
\Infer
{\coftype{Z}{A}}
{\ceqtm{\natrec{\ensuremath{\mathsf{z}}}{Z}{n.a.S}}{Z}{A}}
\and
\Infer
{\wftype{\kappa}{\oft{n}{\ensuremath{\mathsf{nat}}}}{A} \\
\coftype{M}{\ensuremath{\mathsf{nat}}} \\
\coftype{Z}{\subst{A}{\ensuremath{\mathsf{z}}}{n}} \\
\oftype{\oft{n}{\ensuremath{\mathsf{nat}}},\oft{a}{A}}{S}{\subst{A}{\suc{n}}{n}}}
{\ceqtm{\natrec{\suc{M}}{Z}{n.a.S}}%
{\subst{\subst{S}{M}{n}}{\natrec{M}{Z}{n.a.S}}{a}}%
{\subst{A}{\suc{M}}{n}}}
\end{mathpar}
\paragraph{Booleans}
\begin{mathpar}
\Infer
{ }
{\cwftype{Kan}{\ensuremath{\mathsf{bool}}}}
\and
\Infer
{ }
{\coftype{\ensuremath{\mathsf{true}}}{\ensuremath{\mathsf{bool}}}}
\and
\Infer
{ }
{\coftype{\ensuremath{\mathsf{false}}}{\ensuremath{\mathsf{bool}}}}
\and
\Infer
{\wftype{pre}{\oft{b}{\ensuremath{\mathsf{bool}}}}{C} \\
\ceqtm{M}{M'}{\ensuremath{\mathsf{bool}}} \\
\ceqtm{T}{T'}{\subst{C}{\ensuremath{\mathsf{true}}}{b}} \\
\ceqtm{F}{F'}{\subst{C}{\ensuremath{\mathsf{false}}}{b}}}
{\ceqtm{\ifb{b.A}{M}{T}{F}}{\ifb{b.A'}{M'}{T'}{F'}}{\subst{C}{M}{b}}}
\and
\Infer
{\coftype{T}{B}}
{\ceqtm{\ifb{b.A}{\ensuremath{\mathsf{true}}}{T}{F}}{T}{B}}
\and
\Infer
{\coftype{F}{B}}
{\ceqtm{\ifb{b.A}{\ensuremath{\mathsf{false}}}{T}{F}}{F}{B}}
\end{mathpar}
\paragraph{Weak Booleans}
\begin{mathpar}
\Infer
{ }
{\cwftype{Kan}{\ensuremath{\mathsf{wbool}}}}
\and
\Infer
{\ceqtm{M}{M'}{\ensuremath{\mathsf{bool}}}}
{\ceqtm{M}{M'}{\ensuremath{\mathsf{wbool}}}}
\and
\Infer
{\eqtype{Kan}{\oft{b}{\ensuremath{\mathsf{wbool}}}}{A}{A'} \\
\ceqtm{M}{M'}{\ensuremath{\mathsf{wbool}}} \\
\ceqtm{T}{T'}{\subst{A}{\ensuremath{\mathsf{true}}}{b}} \\
\ceqtm{F}{F'}{\subst{A}{\ensuremath{\mathsf{false}}}{b}}}
{\ceqtm{\ifb{b.A}{M}{T}{F}}{\ifb{b.A'}{M'}{T'}{F'}}{\subst{A}{M}{b}}}
\end{mathpar}
\paragraph{Circle}
\begin{mathpar}
\Infer
{ }
{\cwftype{Kan}{\ensuremath{\mathbb{S}^1}}}
\and
\Infer
{ }
{\coftype{\ensuremath{\mathsf{base}}}{\ensuremath{\mathbb{S}^1}}}
\and
\Infer
{ }
{\coftype{\lp{r}}{\ensuremath{\mathbb{S}^1}}}
\and
\Infer
{ }
{\ceqtm{\lp{\ensuremath{\varepsilon}}}{\ensuremath{\mathsf{base}}}{\ensuremath{\mathbb{S}^1}}}
\and
\Infer
{\eqtype{Kan}{\oft{c}{\ensuremath{\mathbb{S}^1}}}{A}{A'} \\
\ceqtm{M}{M'}{\ensuremath{\mathbb{S}^1}} \\
\ceqtm{P}{P'}{\subst{A}{\ensuremath{\mathsf{base}}}{c}} \\
\ceqtm[\Psi,x]{L}{L'}{\subst{A}{\lp{x}}{c}} \\
(\forall\ensuremath{\varepsilon})\ \ceqtm{\dsubst{L}{\ensuremath{\varepsilon}}{x}}{P}{\subst{A}{\ensuremath{\mathsf{base}}}{c}}}
{\ceqtm{\Celim{c.A}{M}{P}{x.L}}{\Celim{c.A'}{M'}{P'}{x.L'}}{\subst{A}{M}{c}}}
\and
\Infer
{\coftype{P}{B}}
{\ceqtm{\Celim{c.A}{\ensuremath{\mathsf{base}}}{P}{x.L}}{P}{B}}
\and
\Infer
{\coftype[\Psi,x]{L}{B} \\
(\forall\ensuremath{\varepsilon})\ \ceqtm{\dsubst{L}{\ensuremath{\varepsilon}}{x}}{P}{\dsubst{B}{\ensuremath{\varepsilon}}{x}}}
{\ceqtm{\Celim{c.A}{\lp{r}}{P}{x.L}}{\dsubst{L}{r}{x}}{\dsubst{B}{r}{x}}}
\end{mathpar}
\paragraph{Univalence}\
\begin{mathparpagebreakable}
\isContr{C} := \prd{C}{(\picl{c}{C}{\picl{c'}{C}{\Path{\_.C}{c}{c'}}})}
\and
\Equiv{A}{B} :=
\sigmacl{f}{\arr{A}{B}}{(\picl{b}{B}{\isContr{\sigmacl{a}{A}{\Path{\_.B}{\app{f}{a}}{b}}}})}
\and
\Infer
{\ceqtype{\kappa}<r=0>{A}{A'} \\
\ceqtype{\kappa}{B}{B'} \\
\ceqtm<r=0>{E}{E'}{\Equiv{A}{B}}}
{\ceqtype{\kappa}{\ua{r}{A,B,E}}{\ua{r}{A',B',E'}}}
\and
\Infer
{\cwftype{\kappa}{A}}
{\ceqtype{\kappa}{\ua{0}{A,B,E}}{A}}
\and
\Infer
{\cwftype{\kappa}{B}}
{\ceqtype{\kappa}{\ua{1}{A,B,E}}{B}}
\and
\Infer
{\ceqtm<r=0>{M}{M'}{A} \\
\ceqtm{N}{N'}{B} \\
\coftype<r=0>{E}{\Equiv{A}{B}} \\
\ceqtm<r=0>{\app{\fst{E}}{M}}{N}{B}}
{\ceqtm{\uain{r}{M,N}}{\uain{r}{M',N'}}{\ua{r}{A,B,E}}}
\and
\Infer
{\coftype{M}{A}}
{\ceqtm{\uain{0}{M,N}}{M}{A}}
\and
\Infer
{\coftype{N}{B}}
{\ceqtm{\uain{1}{M,N}}{N}{B}}
\and
\Infer
{\ceqtm{M}{M'}{\ua{r}{A,B,E}} \\
\ceqtm<r=0>{F}{\fst{E}}{\arr{A}{B}}}
{\ceqtm{\uaproj{r}{M,F}}{\uaproj{r}{M',\fst{E}}}{B}}
\and
\Infer
{\coftype{M}{A} \\
\coftype{F}{\arr{A}{B}}}
{\ceqtm{\uaproj{0}{M,F}}{\app{F}{M}}{B}}
\and
\Infer
{\coftype{M}{B}}
{\ceqtm{\uaproj{1}{M,F}}{M}{B}}
\and
\Infer
{\coftype<r=0>{M}{A} \\
\coftype{N}{B} \\
\coftype<r=0>{F}{\arr{A}{B}} \\
\ceqtm<r=0>{\app{F}{M}}{N}{B}}
{\ceqtm{\uaproj{r}{\uain{r}{M,N},F}}{N}{B}}
\and
\Infer
{\coftype{N}{\ua{r}{A,B,E}} \\
\ceqtm<r=0>{M}{N}{A}}
{\ceqtm{\uain{r}{M,\uaproj{r}{N,\fst{E}}}}{N}{\ua{r}{A,B,E}}}
\end{mathparpagebreakable}
\paragraph{Universes}\
\begin{mathparpagebreakable}
\Infer
{ }
{\cwftype{pre}{\Upre}}
\and
\Infer
{ }
{\cwftype{Kan}{\UKan}}
\and
\Infer
{\ceqtm{A}{A'}{\Ux}}
{\ceqtype{\kappa}{A}{A'}}
\and
\Infer
{\ceqtm{A}{A'}{\Ux[i]} \\
i\leq j}
{\ceqtm{A}{A'}{\Ux[j]}}
\and
\Infer
{\ceqtm{A}{A'}{\UKan}}
{\ceqtm{A}{A'}{\Upre}}
\and
\Infer
{\ceqtm{A}{A'}{\Ux} \\
\eqtm{\oft aA}{B}{B'}{\Ux}}
{\ceqtm{\picl{a}{A}{B}}{\picl{a}{A'}{B'}}{\Ux}}
\and
\Infer
{\ceqtm{A}{A'}{\Ux} \\
\eqtm{\oft aA}{B}{B'}{\Ux}}
{\ceqtm{\sigmacl{a}{A}{B}}{\sigmacl{a}{A'}{B'}}{\Ux}}
\and
\Infer
{\ceqtm[\Psi,x]{A}{A'}{\Ux} \\
(\forall\ensuremath{\varepsilon})\ \ceqtm{P_\ensuremath{\varepsilon}}{P_\ensuremath{\varepsilon}'}{\dsubst{A}{\ensuremath{\varepsilon}}{x}}}
{\ceqtm{\Path{x.A}{P_0}{P_1}}{\Path{x.A'}{P_0'}{P_1'}}{\Ux}}
\and
\Infer
{\ceqtm{A}{A'}{\Upre} \\
\ceqtm{M}{M'}{A} \\
\ceqtm{N}{N'}{A}}
{\ceqtm{\Eq{A}{M}{N}}{\Eq{A'}{M'}{N'}}{\Upre}}
\and
\Infer
{ }
{\coftype{\ensuremath{\mathsf{void}}}{\Ux}}
\and
\Infer
{ }
{\coftype{\ensuremath{\mathsf{nat}}}{\Ux}}
\and
\Infer
{ }
{\coftype{\ensuremath{\mathsf{bool}}}{\Ux}}
\and
\Infer
{ }
{\coftype{\ensuremath{\mathsf{wbool}}}{\Ux}}
\and
\Infer
{ }
{\coftype{\ensuremath{\mathbb{S}^1}}{\Ux}}
\and
\Infer
{\ceqtm<r=0>{A}{A'}{\Ux} \\
\ceqtm{B}{B'}{\Ux} \\
\ceqtm<r=0>{E}{E'}{\Equiv{A}{B}}}
{\ceqtm{\ua{r}{A,B,E}}{\ua{r}{A',B',E'}}{\Ux}}
\and
\Infer
{i<j}
{\coftype{\Ux[i]}{\Upre[j]}}
\and
\Infer
{i<j}
{\coftype{\UKan[i]}{\UKan[j]}}
\and
\Infer
{ {\begin{array}{ll}
&\wfshape{\etc{r_i = r_i'}} \\
&\cwftype{Kan}{A} \\
&\ceqtm{M}{M'}{A} \\
(\forall i,j) &\ceqtype{Kan}[\Psi,y]<r_i = r_i',r_j = r_j'>{B_i}{B_j} \\
(\forall i,j) &\ceqtm<r_i = r_i',r_j = r_j'>{N_i}{N_j'}{\dsubst{B_i}{r'}{y}} \\
(\forall i) &\ceqtype{Kan}<r_i = r_i'>{\dsubst{B_i}{r}{y}}{A} \\
(\forall i) &\ceqtm<r_i = r_i'>{\Coe{y.B_i}{r'}{r}{N_i}}{M}{A}
\end{array}}}
{\ceqtm{\Kbox{r}{r'}{M}{\sys{r_i=r_i'}{N_i}}}%
{\Kbox{r}{r'}{M'}{\sys{r_i=r_i'}{N_i'}}}%
{\Hcom{\UKan[j]}{r}{r'}{A}{\sys{r_i=r_i'}{y.B_i}}}}
\and
\Infer
{\coftype{M}{A}}
{\ceqtm{\Kbox{r}{r}{M}{\sys{r_i=r_i'}{N_i}}}{M}{A}}
\and
\Infer
{ {\begin{array}{ll}
& r_i = r_i' \\
&\cwftype{Kan}{A} \\
&\coftype{M}{A} \\
(\forall i,j) &\ceqtype{Kan}[\Psi,y]<r_i = r_i',r_j = r_j'>{B_i}{B_j} \\
(\forall i,j) &\ceqtm<r_i = r_i',r_j = r_j'>{N_i}{N_j}{\dsubst{B_i}{r'}{y}} \\
(\forall i) &\ceqtype{Kan}<r_i = r_i'>{\dsubst{B_i}{r}{y}}{A} \\
(\forall i) &\ceqtm<r_i = r_i'>{\Coe{y.B_i}{r'}{r}{N_i}}{M}{A}
\end{array}}}
{\ceqtm{\Kbox*{r_i=r_i'}}{N_i}{\dsubst{B_i}{r'}{y}}}
\and
\Infer
{ {\begin{array}{ll}
&\wfshape{\etc{r_i = r_i'}} \\
&\cwftype{Kan}{A} \\
(\forall i,j) &\ceqtype{Kan}[\Psi,y]<r_i = r_i',r_j = r_j'>{B_i}{B_j'} \\
(\forall i) &\ceqtype{Kan}<r_i = r_i'>{\dsubst{B_i}{r}{y}}{A} \\
&\ceqtm{M}{M'}{\Hcom{\UKan[j]}{r}{r'}{A}{\sys{r_i=r_i'}{y.B_i}}} \\
\end{array}}}
{\ceqtm{\Kcap{r}{r'}{M}{\sys{r_i=r_i'}{y.B_i}}}%
{\Kcap{r}{r'}{M'}{\sys{r_i=r_i'}{y.B_i'}}}{A}}
\and
\Infer
{\coftype{M}{A}}
{\ceqtm{\Kcap{r}{r}{M}{\sys{r_i=r_i'}{y.B_i}}}{M}{A}}
\and
\Infer
{ {\begin{array}{ll}
&r_i = r_i' \\
&\cwftype{Kan}{A} \\
(\forall i,j) &\ceqtype{Kan}[\Psi,y]<r_i = r_i',r_j = r_j'>{B_i}{B_j'} \\
(\forall i) &\ceqtype{Kan}<r_i = r_i'>{\dsubst{B_i}{r}{y}}{A} \\
&\ceqtm{M}{M'}{\Hcom{\UKan[j]}{r}{r'}{A}{\sys{r_i=r_i'}{y.B_i}}} \\
\end{array}}}
{\ceqtm{\Kcap{r}{r'}{M}{\sys{r_i=r_i'}{y.B_i}}}%
{\Coe{y.B_i}{r'}{r}{M}}{A}}
\and
\Infer
{ {\begin{array}{ll}
&\wfshape{\etc{r_i = r_i'}} \\
&\cwftype{Kan}{A} \\
&\ceqtm{M}{M'}{A} \\
(\forall i,j) &\ceqtype{Kan}[\Psi,y]<r_i = r_i',r_j = r_j'>{B_i}{B_j} \\
(\forall i,j) &\ceqtm<r_i = r_i',r_j = r_j'>{N_i}{N_j'}{\dsubst{B_i}{r'}{y}} \\
(\forall i) &\ceqtype{Kan}<r_i = r_i'>{\dsubst{B_i}{r}{y}}{A} \\
(\forall i) &\ceqtm<r_i = r_i'>{\Coe{y.B_i}{r'}{r}{N_i}}{M}{A}
\end{array}}}
{\ceqtm{\Kcap{r}{r'}{\Kbox{r}{r'}{M}{\sys{r_i=r_i'}{N_i}}}{\sys{r_i=r_i'}{y.B_i}}}%
{M}{A}}
\and
\Infer
{ {\begin{array}{ll}
&\wfshape{\etc{r_i = r_i'}} \\
&\cwftype{Kan}{A} \\
(\forall i,j) &\ceqtype{Kan}[\Psi,y]<r_i = r_i',r_j = r_j'>{B_i}{B_j} \\
(\forall i) &\ceqtype{Kan}<r_i = r_i'>{\dsubst{B_i}{r}{y}}{A} \\
&\coftype{M}{\Hcom{\UKan[j]}{r}{r'}{A}{\sys{r_i=r_i'}{y.B_i}}}
\end{array}}}
{\ceqtm{\Kbox{r}{r'}{\Kcap{r}{r'}{M}{\sys{r_i=r_i'}{y.B_i}}}{\sys{r_i=r_i'}{M}}}%
{M}{\Hcom{\UKan[j]}{r}{r'}{A}{\sys{r_i=r_i'}{y.B_i}}}}
\end{mathparpagebreakable}
\subsection{Composite types}
Unlike the other type formers, $\Fcom$s are only pretypes when their
constituents are Kan types. (For this reason, in \cref{sec:typesys} we only
close $\pre\tau_i$ under $\Fcom$s of types from $\Kan\tau_i$.) The results of
this section hold in $\tau=\Kan\mu(\nu)$ for any cubical type system $\nu$, and
therefore in each $\pre\tau_i$ as well. In this section, we will say that
$A,\sys{r_i=r_i'}{y.B_i}$ and $A',\sys{r_i=r_i'}{y.B_i'}$ are \emph{(equal) type
compositions $r\rightsquigarrow r'$} whenever:
\begin{enumerate}
\item $\etc{r_i=r_i'}$ is valid,
\item $\ceqtype{Kan}{A}{A'}$,
\item $\ceqtype{Kan}[\Psi,y]<r_i=r_i',r_j=r_j'>{B_i}{B_j'}$ for any $i,j$, and
\item $\ceqtype{Kan}<r_i=r_i'>{\dsubst{B_i}{r}{y}}{A}$ for any $i$.
\end{enumerate}
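To fix intuitions, here is a minimal instance of the definition; it is our own
illustration and is not used in the proofs below. Fix a dimension $x$ and
consider the two-tube list $\tube{x=0}{y.B_0},\tube{x=1}{y.B_1}$. Then
$A,\tube{x=0}{y.B_0},\tube{x=1}{y.B_1}$ is a type composition
$r\rightsquigarrow r'$ exactly when:
\begin{enumerate}
\item $\etc{x=0,x=1}$ is valid, by the first shape-validity rule under
\emph{Kan conditions} above (take $r_i=r_j=x$, $r_i'=0$, and $r_j'=1$),
\item $\cwftype{Kan}{A}$,
\item $\ceqtype{Kan}[\Psi,y]<x=\ensuremath{\varepsilon}>{B_\ensuremath{\varepsilon}}{B_\ensuremath{\varepsilon}}$ for each $\ensuremath{\varepsilon}$, the cross
conditions ($i\neq j$) holding vacuously because no dimension substitution
satisfies $x=0$ and $x=1$ simultaneously, and
\item $\ceqtype{Kan}<x=\ensuremath{\varepsilon}>{\dsubst{B_\ensuremath{\varepsilon}}{r}{y}}{A}$ for each $\ensuremath{\varepsilon}$.
\end{enumerate}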
\begin{lemma}\label{lem:fcom-preform}
If $A,\sys{r_i=r_i'}{y.B_i}$ and $A',\sys{r_i=r_i'}{y.B_i'}$ are equal type
compositions $r\rightsquigarrow r'$, then
\begin{enumerate}
\item $\ensuremath{\mathsf{PTy}}(\tau)(\Psi,\Fcom{r}{r'}{A}{\sys{r_i=r_i'}{y.B_i}},
\Fcom{r}{r'}{A'}{\sys{r_i=r_i'}{y.B_i'}},\_)$,
\item if $r=r'$ then $\ceqtype{Kan}{\Fcom{r}{r}{A}{\sys{r_i=r_i'}{y.B_i}}}{A}$, and
\item if $r_i = r_i'$ then
$\ceqtype{Kan}{\Fcom{r}{r'}{A}{\sys{r_i=r_i'}{y.B_i}}}{\dsubst{B_i}{r'}{y}}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Part (1) is precisely the statement of \cref{lem:C-prekan}, applied to the
context-indexed PER $\{(\Psi,A_0,B_0)\mid\tau(\Psi,A_0,B_0,\_)\}$ instead of
$\vper{\ensuremath{\mathbb{S}^1}}(\Psi)$; as the $\Fcom$ structure of these PERs is defined
identically, the same proof applies. Part (2) is immediate by
\cref{lem:expansion}. For part (3), if $r=r'$, the result follows by
\cref{lem:expansion} and $\ceqtype{Kan}<r_i=r_i'>{\dsubst{B_i}{r}{y}}{A}$.
Otherwise, there is some least $j$ such that $r_j=r_j'$. Apply coherent
expansion to the left side with family
\[\begin{cases}
\td{A}{\psi} & \text{$\td{r}{\psi}=\td{r'}{\psi}$} \\
\td{\dsubst{B_j}{r'}{y}}{\psi} &
\text{$\td{r}{\psi}\neq\td{r'}{\psi}$, $\td{r_j}{\psi}=\td{r_j'}{\psi}$, and
$\forall k<j,\td{r_k}{\psi}\neq\td{r_k'}{\psi}$}.
\end{cases}\]
If $\td{r}{\psi}=\td{r'}{\psi}$ then
$\ceqtype{Kan}[\Psi']{\td{\dsubst{B_j}{r}{y}}{\psi}}{\td{A}{\psi}}$. If
$\td{r}{\psi}\neq\td{r'}{\psi}$, there is some least $k$ such that
$\td{r_k}{\psi}=\td{r_k'}{\psi}$; then
$\ceqtype{Kan}[\Psi']{\td{\dsubst{B_j}{r'}{y}}{\psi}}{\td{\dsubst{B_k}{r'}{y}}{\psi}}$.
By \cref{lem:cohexp-ceqtypek},
$\ceqtype{Kan}{\Fcom}{\dsubst{B_j}{r'}{y}}$, and part (3) follows by
$\ceqtype{Kan}{\dsubst{B_i}{r'}{y}}{\dsubst{B_j}{r'}{y}}$.
\end{proof}
\begin{lemma}\label{lem:fcom-preintro}
If
\begin{enumerate}
\item $A,\sys{r_i=r_i'}{y.B_i}$ is a type composition $r\rightsquigarrow r'$,
\item $\ceqtm{M}{M'}{A}$,
\item $\ceqtm<r_i=r_i',r_j=r_j'>{N_i}{N_j'}{\dsubst{B_i}{r'}{y}}$ for any $i,j$, and
\item $\ceqtm<r_i=r_i'>{\Coe{y.B_i}{r'}{r}{N_i}}{M}{A}$ for any $i$,
\end{enumerate}
then $\ensuremath{\mathsf{Tm}}(\vper{\Fcom{r}{r'}{A}{\sys{r_i=r_i'}{y.B_i}}})(
\Kbox{r}{r'}{M}{\sys{r_i=r_i'}{N_i}},
\Kbox{r}{r'}{M'}{\sys{r_i=r_i'}{N_i'}})$.
\end{lemma}
\begin{proof}
We focus on the unary case; the binary case follows similarly. For any
$\tds{\Psi_1}{\psi_1}{\Psi}$ and $\tds{\Psi_2}{\psi_2}{\Psi_1}$ we must show
$\td{\Kbox}{\psi_1}\ensuremath{\Downarrow} X_1$ and
$\lift{\vper{\Fcom{r}{r'}{A}{\sys{r_i=r_i'}{y.B_i}}}}_{\psi_1\psi_2}
(\td{X_1}{\psi_2},\td{\Kbox}{\psi_1\psi_2})$. We proceed by cases on the first
step taken by $\td{\Kbox}{\psi_1}$ and $\td{\Kbox}{\psi_1\psi_2}$.
\begin{enumerate}
\item $\td{r}{\psi_1}=\td{r'}{\psi_1}$.
Then $\td{\Kbox}{\psi_1}\ensuremath{\steps_\stable} \td{M}{\psi_1}$,
$\vper{\Fcom}_{\psi_1\psi_2} = \vper{A}_{\psi_1\psi_2}$ by
\cref{lem:fcom-preform}, and
$\lift{\vper{A}}_{\psi_1\psi_2}(\td{X_1}{\psi_2},\td{M}{\psi_1\psi_2})$ by
$\coftype{M}{A}$.
\item $\td{r}{\psi_1}\neq\td{r'}{\psi_1}$,
$\td{r_j}{\psi_1}=\td{r_j'}{\psi_1}$ (where this is the least such $j$), and
$\td{r}{\psi_1\psi_2}=\td{r'}{\psi_1\psi_2}$.
Then $\td{\Kbox}{\psi_1}\ensuremath{\longmapsto} \td{N_j}{\psi_1}$,
$\td{\Kbox}{\psi_1\psi_2}\ensuremath{\longmapsto} \td{M}{\psi_1\psi_2}$, and
$\vper{\Fcom}_{\psi_1\psi_2} = \vper{A}_{\psi_1\psi_2}$ by
\cref{lem:fcom-preform}. By
$\ceqtype{Kan}[\Psi_2]{\td{\dsubst{B_j}{r'}{y}}{\psi_1\psi_2}}{\td{A}{\psi_1\psi_2}}$
and $\coftype[\Psi_1]{\td{N_j}{\psi_1}}{\td{\dsubst{B_j}{r'}{y}}{\psi_1}}$ at
$\id[\Psi_1],\psi_2$ we have
$\lift{\vper{A}}_{\psi_1\psi_2}(\td{X_1}{\psi_2},\td{N_j}{\psi_1\psi_2})$.
We also have
$\lift{\vper{A}}_{\psi_1\psi_2}(\td{N_j}{\psi_1\psi_2},\td{M}{\psi_1\psi_2})$ by
$\ceqtm[\Psi_2]{\td{(\Coe{y.B_j}{r'}{r}{N_j})}{\psi_1\psi_2}}%
{\td{M}{\psi_1\psi_2}}{\td{A}{\psi_1\psi_2}}$ and
$\ceqtm[\Psi_2]{\td{(\Coe{y.B_j}{r'}{r}{N_j})}{\psi_1\psi_2}}%
{\td{N_j}{\psi_1\psi_2}}{\td{A}{\psi_1\psi_2}}$; the result follows by
transitivity.
\item $\td{r}{\psi_1}\neq\td{r'}{\psi_1}$,
$\td{r_i}{\psi_1}=\td{r_i'}{\psi_1}$ (least such),
$\td{r}{\psi_1\psi_2}\neq\td{r'}{\psi_1\psi_2}$, and
$\td{r_j}{\psi_1\psi_2}=\td{r_j'}{\psi_1\psi_2}$ (least such).
Then $\td{\Kbox}{\psi_1}\ensuremath{\longmapsto} \td{N_i}{\psi_1}$,
$\td{\Kbox}{\psi_1\psi_2}\ensuremath{\longmapsto} \td{N_j}{\psi_1\psi_2}$, and
$\vper{\Fcom}_{\psi_1\psi_2} = \vper{\dsubst{B_i}{r'}{y}}_{\psi_1\psi_2}$ by
\cref{lem:fcom-preform}. The result follows by
$\coftype[\Psi_1]{\td{N_i}{\psi_1}}{\td{\dsubst{B_i}{r'}{y}}{\psi_1}}$ and
$\ceqtm[\Psi_2]{\td{N_i}{\psi_1\psi_2}}{\td{N_j}{\psi_1\psi_2}}%
{\td{\dsubst{B_i}{r'}{y}}{\psi_1\psi_2}}$.
\item $\td{r}{\psi_1}\neq\td{r'}{\psi_1}$,
$\td{r_i}{\psi_1}\neq\td{r_i'}{\psi_1}$ for all $i$, and
$\td{r}{\psi_1\psi_2} = \td{r'}{\psi_1\psi_2}$.
Then $\isval{\td{\Kbox}{\psi_1}}$,
$\td{\Kbox}{\psi_1\psi_2}\ensuremath{\longmapsto} \td{M}{\psi_1\psi_2}$,
$\vper{\Fcom}_{\psi_1\psi_2} = \vper{A}_{\psi_1\psi_2}$ by
\cref{lem:fcom-preform}, and the result follows by $\coftype{M}{A}$.
\item $\td{r}{\psi_1}\neq\td{r'}{\psi_1}$,
$\td{r_i}{\psi_1}\neq\td{r_i'}{\psi_1}$ for all $i$,
$\td{r}{\psi_1\psi_2}\neq\td{r'}{\psi_1\psi_2}$, and
$\td{r_j}{\psi_1\psi_2}=\td{r_j'}{\psi_1\psi_2}$ (the least such $j$).
Then $\isval{\td{\Kbox}{\psi_1}}$,
$\td{\Kbox}{\psi_1\psi_2}\ensuremath{\longmapsto} \td{N_j}{\psi_1\psi_2}$,
$\vper{\Fcom}_{\psi_1\psi_2} = \vper{\dsubst{B_j}{r'}{y}}_{\psi_1\psi_2}$ by
\cref{lem:fcom-preform}, and the result follows by
$\coftype[\Psi_2]{\td{N_j}{\psi_1\psi_2}}{\td{\dsubst{B_j}{r'}{y}}{\psi_1\psi_2}}$.
\item $\td{r}{\psi_1}\neq\td{r'}{\psi_1}$,
$\td{r_i}{\psi_1}\neq\td{r_i'}{\psi_1}$ for all $i$, and
$\td{r}{\psi_1\psi_2}\neq\td{r'}{\psi_1\psi_2}$, and
$\td{r_j}{\psi_1\psi_2}\neq\td{r_j'}{\psi_1\psi_2}$ for all $j$.
Then $\isval{\td{\Kbox}{\psi_1}}$ and $\isval{\td{\Kbox}{\psi_1\psi_2}}$, and
the result follows by the definition of $\vper{\Fcom}$.
\qedhere
\end{enumerate}
\end{proof}
\begin{rul}[Pretype formation]\label{rul:fcom-form-pre}
If $A,\sys{r_i=r_i'}{y.B_i}$ and $A',\sys{r_i=r_i'}{y.B_i'}$ are equal type
compositions $r\rightsquigarrow r'$, then
\begin{enumerate}
\item $\ceqtype{pre}{\Fcom{r}{r'}{A}{\sys{r_i=r_i'}{y.B_i}}}%
{\Fcom{r}{r'}{A'}{\sys{r_i=r_i'}{y.B_i'}}}$,
\item if $r=r'$ then $\ceqtype{pre}{\Fcom{r}{r}{A}{\sys{r_i=r_i'}{y.B_i}}}{A}$, and
\item if $r_i = r_i'$ then
$\ceqtype{pre}{\Fcom{r}{r'}{A}{\sys{r_i=r_i'}{y.B_i}}}{\dsubst{B_i}{r'}{y}}$.
\end{enumerate}
\end{rul}
\begin{proof}
For part (1), by \cref{lem:fcom-preform} it suffices to show
$\ensuremath{\mathsf{Coh}}(\vper{\Fcom})$. Let $\vper{\Fcom}_\psi(M_0,N_0)$ for any $\tds{\Psi'}{\psi}{\Psi}$. If
$\td{r}{\psi}=\td{r'}{\psi}$ then $\ensuremath{\mathsf{Tm}}(\td{\vper{\Fcom}}{\psi})(M_0,N_0)$ by
$\td{\vper{\Fcom}}{\psi}=\td{\vper{A}}{\psi}$ and $\ensuremath{\mathsf{Coh}}(\vper{A})$. Similarly,
if $\td{r_i}{\psi}=\td{r_i'}{\psi}$ for some $i$, then
$\ensuremath{\mathsf{Tm}}(\td{\vper{\Fcom}}{\psi})(M_0,N_0)$ by
$\td{\vper{\Fcom}}{\psi}=\vper{\td{\dsubst{B_i}{r'}{y}}{\psi}}$ and
$\ensuremath{\mathsf{Coh}}(\vper{\td{\dsubst{B_i}{r'}{y}}{\psi}})$.
If $\td{r}{\psi}\neq\td{r'}{\psi}$ and $\td{r_i}{\psi}\neq\td{r_i'}{\psi}$ for all $i$, then
$M_0$ and $N_0$ are $\Kbox$es and the result follows by
\cref{lem:fcom-preintro}.
Parts (2--3) are immediate by \cref{lem:fcom-preform}.
\end{proof}
\begin{rul}[Introduction]\label{rul:fcom-intro}
If
\begin{enumerate}
\item $A,\sys{r_i=r_i'}{y.B_i}$ is a type composition $r\rightsquigarrow r'$,
\item $\ceqtm{M}{M'}{A}$,
\item $\ceqtm<r_i=r_i',r_j=r_j'>{N_i}{N_j'}{\dsubst{B_i}{r'}{y}}$ for any $i,j$, and
\item $\ceqtm<r_i=r_i'>{\Coe{y.B_i}{r'}{r}{N_i}}{M}{A}$ for any $i$,
\end{enumerate}
then
\begin{enumerate}
\item $\ceqtm{\Kbox*{r_i=r_i'}}{\Kbox{r}{r'}{M'}{\sys{r_i=r_i'}{N_i'}}}%
{\Fcom{r}{r'}{A}{\sys{r_i=r_i'}{y.B_i}}}$;
\item if $r=r'$ then $\ceqtm{\Kbox*{r_i=r_i'}}{M}{A}$; and
\item if $r_i = r_i'$ then $\ceqtm{\Kbox*{r_i=r_i'}}{N_i}{\dsubst{B_i}{r'}{y}}$.
\end{enumerate}
\end{rul}
\begin{proof}
Part (1) is immediate by \cref{lem:fcom-preintro,rul:fcom-form-pre}; part (2) is
immediate by \cref{lem:expansion}. For part (3), if $r=r'$, the result follows
by \cref{lem:expansion}. Otherwise, there is a least $j$ such that $r_j=r_j'$,
and we apply coherent expansion to the left side with family
\[\begin{cases}
\td{M}{\psi} & \text{$\td{r}{\psi}=\td{r'}{\psi}$} \\
\td{N_k}{\psi} &
\text{$\td{r}{\psi}\neq\td{r'}{\psi}$, $\td{r_k}{\psi}=\td{r_k'}{\psi}$, and
$\forall k'<k,\td{r_{k'}}{\psi}\neq\td{r_{k'}'}{\psi}$}.
\end{cases}\]
If $\td{r}{\psi}=\td{r'}{\psi}$ then
$\ceqtm[\Psi']{\td{M}{\psi}}{\td{N_j}{\psi}}{\td{\dsubst{B_i}{r'}{y}}{\psi}}$ by
$\ceqtm[\Psi']{\td{M}{\psi}}{\td{(\Coe{y.B_j}{r'}{r}{N_j})}{\psi}}{\td{A}{\psi}}$,
$\ceqtm[\Psi']{\td{(\Coe{y.B_j}{r'}{r}{N_j})}{\psi}}{\td{N_j}{\psi}}%
{\td{\dsubst{B_i}{r'}{y}}{\psi}}$, and
$\ceqtype{Kan}[\Psi']{\td{\dsubst{B_i}{r'}{y}}{\psi}}{\td{A}{\psi}}$. If
$\td{r}{\psi}\neq\td{r'}{\psi}$ then
$\ceqtm[\Psi']{\td{N_k}{\psi}}{\td{N_j}{\psi}}{\td{\dsubst{B_i}{r'}{y}}{\psi}}$ by
$\ceqtm[\Psi']{\td{N_k}{\psi}}{\td{N_j}{\psi}}{\td{\dsubst{B_j}{r'}{y}}{\psi}}$
and $\ceqtype{Kan}[\Psi',y]{\td{B_i}{\psi}}{\td{B_j}{\psi}}$.
Thus by \cref{lem:cohexp-ceqtypek} we have
$\ceqtm{\Fcom}{N_j}{\dsubst{B_i}{r'}{y}}$, and part (3) follows by
$\ceqtm{N_j}{N_i}{\dsubst{B_i}{r'}{y}}$.
\end{proof}
\begin{rul}[Elimination]\label{rul:fcom-elim}
If $A,\sys{r_i=r_i'}{y.B_i}$ and $A',\sys{r_i=r_i'}{y.B_i'}$ are equal type
compositions $r\rightsquigarrow r'$ and
$\ceqtm{M}{M'}{\Fcom{r}{r'}{A}{\sys{r_i=r_i'}{y.B_i}}}$, then
\begin{enumerate}
\item $\ceqtm{\Kcap{r}{r'}{M}{\sys{r_i=r_i'}{y.B_i}}}%
{\Kcap{r}{r'}{M'}{\sys{r_i=r_i'}{y.B_i'}}}{A}$;
\item if $r=r'$ then $\ceqtm{\Kcap{r}{r'}{M}{\sys{r_i=r_i'}{y.B_i}}}{M}{A}$; and
\item if $r_i=r_i'$ then
$\ceqtm{\Kcap{r}{r'}{M}{\sys{r_i=r_i'}{y.B_i}}}{\Coe{y.B_i}{r'}{r}{M}}{A}$.
\end{enumerate}
\end{rul}
\begin{proof}
Part (2) is immediate by \cref{lem:expansion,rul:fcom-form-pre}. For part (3),
if $r=r'$ then the result follows by part (2), $\cwftype{Kan}[\Psi,y]{B_i}$, and
$\ceqtype{Kan}{\dsubst{B_i}{r}{y}}{A}$. Otherwise, $r\neq r'$ and there is a least
$j$ such that $r_j=r_j'$. Apply coherent expansion to the left side with family
\[\begin{cases}
\td{M}{\psi} & \text{$\td{r}{\psi}=\td{r'}{\psi}$} \\
\Coe{y.\td{B_k}{\psi}}{\td{r'}{\psi}}{\td{r}{\psi}}{\td{M}{\psi}} &
\text{$\td{r}{\psi}\neq\td{r'}{\psi}$, $\td{r_k}{\psi}=\td{r_k'}{\psi}$, and
$\forall i<k,\td{r_i}{\psi}\neq\td{r_i'}{\psi}$}
\end{cases}\]
When $\td{r}{\psi}=\td{r'}{\psi}$,
$\ceqtm[\Psi']{\td{(\Coe{y.B_j}{r'}{r}{M})}{\psi}}{\td{M}{\psi}}{\td{A}{\psi}}$
by $\coftype{M}{\dsubst{B_j}{r'}{y}}$ (by \cref{rul:fcom-form-pre}),
$\cwftype{Kan}[\Psi,y]{B_j}$, and
$\ceqtype{Kan}[\Psi']{\td{\dsubst{B_j}{r}{y}}{\psi}}{\td{A}{\psi}}$.
When $\td{r}{\psi}\neq\td{r'}{\psi}$ and
$\td{r_k}{\psi}=\td{r_k'}{\psi}$ where $k$ is the least such, we have
$\ceqtm[\Psi']{\td{(\Coe{y.B_j}{r'}{r}{M})}{\psi}}%
{\Coe{y.\td{B_k}{\psi}}{\td{r'}{\psi}}{\td{r}{\psi}}{\td{M}{\psi}}}%
{\td{A}{\psi}}$ by $\ceqtype{Kan}[\Psi',y]{\td{B_j}{\psi}}{\td{B_k}{\psi}}$
and $\ceqtype{Kan}[\Psi']{\td{\dsubst{B_j}{r}{y}}{\psi}}{\td{A}{\psi}}$.
We conclude that $\ceqtm{\Kcap}{\Coe{y.B_j}{r'}{r}{M}}{A}$ by
\cref{lem:cohexp-ceqtm}, and part (3) follows by
$\ceqtm{\Coe{y.B_j}{r'}{r}{M}}{\Coe{y.B_i}{r'}{r}{M}}{A}$.
For part (1), if $r=r'$ or $r_i=r_i'$ then the result follows by the previous
parts. If $r\neq r'$ and $r_i\neq r_i'$ for all $i$, then for any $\tds{\Psi'}{\psi}{\Psi}$,
$\ceqtm[\Psi']{\td{M}{\psi}}{\Kbox{r}{r'}{O_\psi}{\sys{\td{\xi_i}{\psi}}{N_{i,\psi}}}}%
{\td{\Fcom}{\psi}}$ by \cref{lem:coftype-evals-ceqtm}. Apply coherent expansion
to the left side with family
\[\begin{cases}
\td{M}{\psi} & \text{$\td{r}{\psi}=\td{r'}{\psi}$} \\
\Coe{y.\td{B_j}{\psi}}{\td{r'}{\psi}}{\td{r}{\psi}}{\td{M}{\psi}} &
\text{$\td{r}{\psi}\neq\td{r'}{\psi}$, $\td{r_j}{\psi}=\td{r_j'}{\psi}$, and
$\forall i<j,\td{r_i}{\psi}\neq\td{r_i'}{\psi}$} \\
O_\psi & \text{$\td{r}{\psi}\neq\td{r'}{\psi}$ and
$\forall i,\td{r_i}{\psi}\neq\td{r_i'}{\psi}$} \\
&\quad\text{where $\td{M}{\psi}\ensuremath{\Downarrow}
\Kbox{\td{r}{\psi}}{\td{r'}{\psi}}{O_\psi}{\sys{\td{\xi_i}{\psi}}{N_{i,\psi}}}$}.
\end{cases}\]
When $\td{r}{\psi}=\td{r'}{\psi}$,
$\ceqtm[\Psi']{\td{M}{\psi}}{\td{(O_{\id})}{\psi}}{\td{A}{\psi}}$ because
$\ceqtm[\Psi']{\td{M}{\psi}}{\td{\Kbox}{\psi}}{\td{\Fcom}{\psi}}$,
$\ceqtype{Kan}[\Psi']{\td{\Fcom}{\psi}}{\td{A}{\psi}}$ (by
\cref{rul:fcom-form-pre}), and
$\ceqtm[\Psi']{\td{\Kbox}{\psi}}{\td{(O_{\id})}{\psi}}{\td{\Fcom}{\psi}}$ (by
\cref{rul:fcom-intro}).
When $\td{r}{\psi}\neq\td{r'}{\psi}$ and $\td{r_j}{\psi}=\td{r_j'}{\psi}$ where
$j$ is the least such, $\ceqtm[\Psi']{\td{(O_{\id})}{\psi}}%
{\Coe{y.\td{B_j}{\psi}}{\td{r'}{\psi}}{\td{r}{\psi}}{\td{M}{\psi}}}{\td{A}{\psi}}$
because
$\ceqtm[\Psi']{\td{M}{\psi}}{\td{(N_{j,\id})}{\psi}}{\td{\dsubst{B_j}{r'}{y}}{\psi}}$
(by \cref{rul:fcom-form-pre,rul:fcom-intro}) and
$\ceqtm<r_j=r_j'>{O_{\id}}{\Coe{y.B_j}{r'}{r}{N_{j,\id}}}{A}$.
When $\td{r}{\psi}\neq\td{r'}{\psi}$ and $\td{r_i}{\psi}\neq\td{r_i'}{\psi}$ for
all $i$, $\ceqtm[\Psi']{\td{(O_{\id})}{\psi}}{O_\psi}{\td{A}{\psi}}$ by
$\vper{\Fcom}(\td{(\Kbox{r}{r'}{O_{\id}}{\sys{\xi_i}{N_{i,\id}}})}{\psi},
\Kbox{\td{r}{\psi}}{\td{r'}{\psi}}{O_\psi}{\sys{\td{\xi_i}{\psi}}{N_{i,\psi}}})$
(by $\coftype{M}{\Fcom}$ at $\id,\psi$). Therefore $\ceqtm{\Kcap}{O_{\id}}{A}$
by \cref{lem:cohexp-ceqtm}, and part (1) follows by a symmetric argument on the
right side.
\end{proof}
\begin{rul}[Computation]\label{rul:fcom-comp}
If
\begin{enumerate}
\item $A,\sys{r_i=r_i'}{y.B_i}$ is a type composition $r\rightsquigarrow r'$,
\item $\ceqtm{M}{M'}{A}$,
\item $\ceqtm<r_i=r_i',r_j=r_j'>{N_i}{N_j'}{\dsubst{B_i}{r'}{y}}$ for any $i,j$, and
\item $\ceqtm<r_i=r_i'>{\Coe{y.B_i}{r'}{r}{N_i}}{M}{A}$ for any $i$,
\end{enumerate}
then
$\ceqtm{\Kcap{r}{r'}{\Kbox{r}{r'}{M}{\sys{r_i=r_i'}{N_i}}}{\sys{r_i=r_i'}{y.B_i}}}{M}{A}$.
\end{rul}
\begin{proof}
By \cref{rul:fcom-intro,rul:fcom-elim}, we know both sides have this type, so it
suffices to show $\lift{\vper{A}}(\Kcap,M)$.
If $r=r'$ then $\Kcap\ensuremath{\longmapsto}\Kbox\ensuremath{\longmapsto} M$ and $\lift{\vper{A}}(M,M)$.
If $r\neq r'$ and $r_i=r_i'$ where $i$ is the least such, then
$\Kcap\ensuremath{\longmapsto}\Coe{y.B_i}{r'}{r}{\Kbox}$, and
$\lift{\vper{A}}(\Coe{y.B_i}{r'}{r}{\Kbox},M)$ by
$\ceqtm{\Kbox}{N_i}{\dsubst{B_i}{r'}{y}}$ and
$\ceqtm{\Coe{y.B_i}{r'}{r}{N_i}}{M}{A}$.
If $r\neq r'$ and $r_i\neq r_i'$ for all $i$, then
$\Kcap\ensuremath{\longmapsto} M$ and $\lift{\vper{A}}(M,M)$.
\end{proof}
\begin{rul}[Eta]\label{rul:fcom-eta}
If $A,\sys{\xi_i}{y.B_i}$ is a type composition $r\rightsquigarrow r'$ and
$\coftype{M}{\Fcom{r}{r'}{A}{\sys{\xi_i}{y.B_i}}}$, then
$\ceqtm{\Kbox{r}{r'}{\Kcap{r}{r'}{M}{\sys{\xi_i}{y.B_i}}}{\sys{\xi_i}{M}}}%
{M}{\Fcom{r}{r'}{A}{\sys{\xi_i}{y.B_i}}}$.
\end{rul}
\begin{proof}
By $\coftype{\Kcap{r}{r'}{M}{\sys{\xi_i}{y.B_i}}}{A}$ (by \cref{rul:fcom-elim}),
$\coftype<r_i=r_i'>{M}{\dsubst{B_i}{r'}{y}}$ (by \cref{rul:fcom-form-pre}),
$\ceqtm<r_i=r_i'>{\Coe{y.B_i}{r'}{r}{M}}{\Kcap}{A}$ (by \cref{rul:fcom-elim}),
and \cref{rul:fcom-intro}, we have $\coftype{\Kbox}{\Fcom}$. Thus, by
\cref{lem:coftype-ceqtm}, it suffices to show $\lift{\vper{\Fcom}}(\Kbox,M)$.
If $r=r'$ then $\Kbox\ensuremath{\longmapsto}\Kcap\ensuremath{\longmapsto} M$ and $\lift{\vper{\Fcom}}(M,M)$.
If $r\neq r'$ and $r_i=r_i'$ for the least such $i$, then $\Kbox\ensuremath{\longmapsto} M$ and
$\lift{\vper{\Fcom}}(M,M)$. If $r\neq r'$ and $r_i\neq r_i'$ for all $i$, then
$M\ensuremath{\Downarrow}\Kbox{r}{r'}{O}{\sys{\xi_i}{N_i}}$ and
$\ceqtm{M}{\Kbox{r}{r'}{O}{\sys{\xi_i}{N_i}}}{\Fcom}$. The result follows by
transitivity and \cref{rul:fcom-intro}:
\begin{enumerate}
\item $\ceqtm{\Kcap{r}{r'}{M}{\sys{\xi_i}{y.B_i}}}{O}{A}$ by
\cref{lem:coftype-ceqtm} and $\Kcap\ensuremath{\longmapsto}^* O$,
\item $\ceqtm<r_i=r_i'>{M}{N_i}{\dsubst{B_i}{r'}{y}}$ by
$\ceqtm{M}{\Kbox{r}{r'}{O}{\sys{\xi_i}{N_i}}}{\Fcom}$ and \cref{rul:fcom-intro},
and
\item $\ceqtm<r_i=r_i'>{\Coe{y.B_i}{r'}{r}{M}}{\Kcap{r}{r'}{M}{\sys{\xi_i}{y.B_i}}}{A}$
by \cref{rul:fcom-elim} as before.
\qedhere
\end{enumerate}
\end{proof}
Our implementation of $\Coe$rcion for $\Fcom$ requires Kan compositions whose
lists of equations might be invalid (in the sense of \cref{def:valid}), although
Kan types are only guaranteed to have compositions for valid lists of equations.
However, we can implement such \emph{generalized} homogeneous compositions
$\Ghcom$ using only ordinary homogeneous compositions $\Hcom$.
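To indicate where such invalid lists arise (this observation is our reading of
the constructions below, not an additional hypothesis): the generalized
compositions $P$ and $Q_k$ in the proof of \cref{lem:fcom-coe} compose along
the filtered tube list $\st{\sys{\xi_i}{\cdots}}{x\ensuremath{\mathbin{\#}}\xi_i}$, extended in the
case of $Q_k$ by the single equation $r=r'$. Such a list need not be valid in
the sense of \cref{def:valid}: the filter may discard the tubes witnessing
validity, and $r=r'$ alone is not a witness when $r$ and $r'$ are distinct
dimension variables.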
\begin{theorem}\label{thm:ghcom}
If $\ceqtype{Kan}{A}{B}$,
\begin{enumerate}
\item $\ceqtm{M}{M'}{A}$,
\item $\ceqtm[\Psi,y]<r_i=r_i',r_j=r_j'>{N_i}{N_j'}{A}$ for any $i,j$, and
\item $\ceqtm[\Psi]<r_i=r_i'>{\dsubst{N_i}{r}{y}}{M}{A}$ for any $i$,
\end{enumerate}
then
\begin{enumerate}
\item $\ceqtm{\Ghcom*{A}{r_i=r_i'}}%
{\Ghcom{B}{r}{r'}{M'}{\sys{r_i=r_i'}{y.N_i'}}}{A}$;
\item if $r=r'$ then
$\ceqtm{\Ghcom{A}{r}{r}{M}{\sys{r_i=r_i'}{y.N_i}}}{M}{A}$; and
\item if $r_i = r_i'$ then
$\ceqtm{\Ghcom*{A}{r_i=r_i'}}{\dsubst{N_i}{r'}{y}}{A}$.
\end{enumerate}
\end{theorem}
\begin{proof}
Use induction on the length of $\etc{r_i=r_i'}$. If there are zero tubes, for
part (1) we must show
$\ceqtm{\Ghcom{A}{r}{r'}{M}{\cdot}}{\Ghcom{B}{r}{r'}{M'}{\cdot}}{A}$, which is
immediate by \cref{lem:expansion} on each side. Part (2) is immediate by
\cref{lem:expansion} on the left, and part (3) is impossible without tubes.
Now consider the case
$\Ghcom{A}{r}{r'}{M}{\tube{s=s'}{y.N},\sys{\xi_i}{y.N_i}}$, where we know
$\Ghcom$s with one fewer tube have the desired properties. By
\cref{lem:expansion} we must show (the binary version of)
\begin{gather*}
\coftype{\Hcom{A}{r}{r'}{M}{\sys{s=\ensuremath{\varepsilon}}{z.T_\ensuremath{\varepsilon}},\tube{s=s'}{y.N},\sys{\xi_i}{y.N_i}}}{A}
\\
\text{where}\
T_\ensuremath{\varepsilon} = \Hcom{A}{r}{z}{M}{
\tube{s'=\ensuremath{\varepsilon}}{y.N},
\tube{s'=\overline{\e}}{y.\Ghcom{A}{r}{y}{M}{\sys{\xi_i}{y.N_i}}},
\sys{\xi_i}{y.N_i}}.
\end{gather*}
First, show $\coftype[\Psi,z]<s=\ensuremath{\varepsilon}>{T_\ensuremath{\varepsilon}}{A}$ by \cref{def:kan}, noting the
composition is valid by $s'=\ensuremath{\varepsilon},s'=\overline{\e}$,
\begin{enumerate}
\item $\coftype<s=\ensuremath{\varepsilon}>{M}{A}$ by $\coftype{M}{A}$,
\item $\coftype[\Psi,y]<s=\ensuremath{\varepsilon},s'=\ensuremath{\varepsilon}>{N}{A}$ (by
$\coftype[\Psi,y]<s=s'>{N}{A}$, because $s=s'$ whenever $s=\ensuremath{\varepsilon},s'=\ensuremath{\varepsilon}$),
$\ceqtm[\Psi,y]<s=\ensuremath{\varepsilon},s'=\ensuremath{\varepsilon},r_i=r_i'>{N}{N_i}{A}$ (by
$\ceqtm[\Psi,y]<s=s',r_i=r_i'>{N}{N_i}{A}$), and
$\ceqtm<s=\ensuremath{\varepsilon},s'=\ensuremath{\varepsilon}>{\dsubst{N}{r}{y}}{M}{A}$ (by
$\ceqtm<s=s'>{\dsubst{N}{r}{y}}{M}{A}$), and
\item
$\coftype[\Psi,y]<s=\ensuremath{\varepsilon},s'=\overline{\e}>{\Ghcom{A}{r}{y}{M}{\sys{\xi_i}{y.N_i}}}{A}$
(by part (1) of the induction hypothesis),
$\ceqtm[\Psi,y]<s=\ensuremath{\varepsilon},s'=\overline{\e},r_i=r_i'>{\Ghcom{A}}{N_i}{A}$ (by part (3) of the
induction hypothesis), and
$\ceqtm[\Psi,y]<s=\ensuremath{\varepsilon},s'=\overline{\e}>{\dsubst{(\Ghcom{A})}{r}{y}}{M}{A}$ (by part (2)
of the induction hypothesis).
\end{enumerate}
The remaining adjacency conditions are immediate. To check
$\coftype{\Hcom{A}}{A}$ it suffices to observe that
$\coftype[\Psi,z]<s=\ensuremath{\varepsilon}>{T_\ensuremath{\varepsilon}}{A}$ (by the above);
$\ceqtm[\Psi,z]<s=\ensuremath{\varepsilon},s=s'>{T_\ensuremath{\varepsilon}}{\dsubst{N}{z}{y}}{A}$ (by the $s'=\ensuremath{\varepsilon}$ tube
in $T_\ensuremath{\varepsilon}$);
$\ceqtm[\Psi,z]<s=\ensuremath{\varepsilon},r_i=r_i'>{T_\ensuremath{\varepsilon}}{\dsubst{N_i}{z}{y}}{A}$ (by the
$r_i=r_i'$ tube in $T_\ensuremath{\varepsilon}$);
$\ceqtm<s=\ensuremath{\varepsilon}>{\dsubst{T_\ensuremath{\varepsilon}}{r}{z}}{M}{A}$ (by $r=\dsubst{z}{r}{z}$ in $T_\ensuremath{\varepsilon}$);
and the $\etc{s=\ensuremath{\varepsilon}}$ tubes ensure the composition is valid. Part (1) follows by
repeating this argument on the right side, and parts (2--3) follow from
\cref{def:kan}.
\end{proof}
\begin{theorem}\label{thm:gcom}
If $\ceqtype{Kan}[\Psi,y]{A}{B}$,
\begin{enumerate}
\item $\ceqtm{M}{M'}{\dsubst{A}{r}{y}}$,
\item $\ceqtm[\Psi,y]<r_i=r_i',r_j=r_j'>{N_i}{N_j'}{A}$ for any $i,j$, and
\item $\ceqtm<r_i=r_i'>{\dsubst{N_i}{r}{y}}{M}{\dsubst{A}{r}{y}}$ for any $i$,
\end{enumerate}
then
\begin{enumerate}
\item
$\ceqtm{\Gcom*{y.A}{r_i=r_i'}}{\Gcom{y.B}{r}{r'}{M'}{\sys{r_i=r_i'}{y.N_i'}}}
{\dsubst{A}{r'}{y}}$;
\item if $r=r'$ then
$\ceqtm{\Gcom{y.A}{r}{r}{M}{\sys{r_i=r_i'}{y.N_i}}}{M}{\dsubst{A}{r}{y}}$; and
\item if $r_i = r_i'$ then
$\ceqtm{\Gcom*{y.A}{r_i=r_i'}}{\dsubst{N_i}{r'}{y}}{\dsubst{A}{r'}{y}}$.
\end{enumerate}
\end{theorem}
\begin{proof}
The implementation of $\Gcom$ by $\Ghcom$ and $\Coe$ mirrors exactly the
implementation of $\Com$ by $\Hcom$ and $\Coe$; the proof is thus identical to
that of \cref{thm:com}, appealing to \cref{thm:ghcom} instead of \cref{def:kan}.
\end{proof}
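For the reader's convenience we record the unfolding in question; the display
is our gloss, assuming $\Gcom$ is implemented from $\Ghcom$ and $\Coe$ in exact
analogy with the implementation of $\Com$ from $\Hcom$ and $\Coe$ (coerce the
cap and each tube to the $r'$ endpoint, then compose homogeneously):
\[
\Gcom{y.A}{r}{r'}{M}{\sys{\xi_i}{y.N_i}} \ensuremath{\steps_\stable}
\Ghcom{\dsubst{A}{r'}{y}}{r}{r'}{\Coe{y.A}{r}{r'}{M}}%
{\sys{\xi_i}{y.\Coe{y.A}{y}{r'}{N_i}}}
\]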
\begin{lemma}\label{lem:fcom-hcom}
If $A,\sys{s_j=s_j'}{z.B_j}$ and $A',\sys{s_j=s_j'}{z.B_j'}$ are equal type
compositions $s\rightsquigarrow s'$ and, letting $\Fcom :=
\Fcom{s}{s'}{A}{\sys{s_j=s_j'}{z.B_j}}$,
\begin{enumerate}
\item $\etc{r_i=r_i'}$ is valid,
\item $\ceqtm{M}{M'}{\Fcom}$,
\item $\ceqtm[\Psi,y]<r_i=r_i',r_{i'}=r_{i'}'>{N_i}{N_{i'}'}{\Fcom}$
for any $i,i'$, and
\item $\ceqtm<r_i=r_i'>{\dsubst{N_i}{r}{y}}{M}{\Fcom}$ for any $i$,
\end{enumerate}
then
\begin{enumerate}
\item $\ceqtm{\Hcom*{\Fcom}{r_i=r_i'}}%
{\Hcom{\Fcom{s}{s'}{A'}{\sys{s_j=s_j'}{z.B_j'}}}{r}{r'}{M'}{\sys{r_i=r_i'}{y.N_i'}}}{\Fcom}$;
\item if $r=r'$ then
$\ceqtm{\Hcom{\Fcom}{r}{r}{M}{\sys{r_i=r_i'}{y.N_i}}}{M}{\Fcom}$; and
\item if $r_i = r_i'$ then
$\ceqtm{\Hcom*{\Fcom}{r_i=r_i'}}{\dsubst{N_i}{r'}{y}}{\Fcom}$.
\end{enumerate}
\end{lemma}
\begin{proof}
If $s=s'$ or $s_j=s_j'$ for some $j$, the results are immediate by parts (2--3)
of \cref{lem:fcom-preform}. Otherwise, $s\neq s'$ and $s_j\neq s_j'$ for all
$j$; apply coherent expansion to $\Hcom{\Fcom}$ with family
\[\begin{cases}
\Hcom{\td{A}{\psi}}{\td{r}{\psi}}{\td{r'}{\psi}}%
{\td{M}{\psi}}{\sys{\td{r_i}{\psi}=\td{r_i'}{\psi}}{y.\td{N_i}{\psi}}}
& \text{$\td{s}{\psi}=\td{s'}{\psi}$} \\
\Hcom{\td{\dsubst{B_j}{s'}{z}}{\psi}}{\td{r}{\psi}}{\td{r'}{\psi}}%
{\td{M}{\psi}}{\sys{\td{r_i}{\psi}=\td{r_i'}{\psi}}{y.\td{N_i}{\psi}}}
& \text{$\td{s}{\psi}\neq\td{s'}{\psi}$,
least $\td{s_j}{\psi}=\td{s_j'}{\psi}$} \\
\td{(\Kbox{s}{s'}{Q}{\sys{s_j=s_j'}{\dsubst{P_j}{s'}{z}}})}{\psi}
& \text{$\td{s}{\psi}\neq\td{s'}{\psi}$,
$\forall j.\td{s_j}{\psi}\neq\td{s_j'}{\psi}$} \\
\quad P_j =
\Hcom{B_j}{r}{r'}{\Coe{z.B_j}{s'}{z}{M}}%
{\sys{r_i=r_i'}{y.\Coe{z.B_j}{s'}{z}{N_i}}} &\\
\quad F[c] =
\Hcom{A}{s'}{z}{\Kcap{s}{s'}{c}{\sys{s_j=s_j'}{z.B_j}}}{\etc{T}} &\\
\quad \etc{T} =
{\sys{s_j=s_j'}{z'.\Coe{z.B_j}{z'}{s}{\Coe{z.B_j}{s'}{z'}{c}}}} &\\
\quad O =
\Hcom{A}{r}{r'}{\dsubst{(F[M])}{s}{z}}{\sys{r_i=r_i'}{y.\dsubst{(F[N_i])}{s}{z}}} &\\
\quad Q =
\Hcom{A}{s}{s'}{O}{
\sys{r_i=r_i'}{z.F[\dsubst{N_i}{r'}{y}]},\etc{U}} &\\
\quad \etc{U} =
\sys{s_j=s_j'}{z.\Coe{z.B_j}{z}{s}{P_j}},
\tube{r=r'}{z.F[M]}
\end{cases}\]
Consider $\psi=\id$.
\begin{enumerate}
\item $\ceqtm[\Psi,z]<s_j=s_j',s_{j'}=s_{j'}'>{P_j}{P_{j'}}{B_j}$ for all
$j,j'$, by
\begin{enumerate}
\item $\ceqtype{Kan}[\Psi,z]<s_j=s_j',s_{j'}=s_{j'}'>{B_j}{B_{j'}}$,
\item $\ceqtm[\Psi,z]<s_j=s_j',s_{j'}=s_{j'}'>%
{\Coe{z.B_j}{s'}{z}{M}}{\Coe{z.B_{j'}}{s'}{z}{M}}{B_j}$
by $\ceqtype{Kan}<s_j=s_j'>{\Fcom}{\dsubst{B_j}{s'}{z}}$,
\item $\ceqtm[\Psi,z,y]<s_j=s_j',s_{j'}=s_{j'}',r_i=r_i',r_{i'}=r_{i'}'>%
{\Coe{z.B_j}{s'}{z}{N_i}}{\Coe{z.B_{j'}}{s'}{z}{N_{i'}}}{B_j}$ for all
$i,i'$, and
\item
$\ceqtm[\Psi,z]<s_j=s_j',s_{j'}=s_{j'}',r_i=r_i'>%
{\Coe{z.B_j}{s'}{z}{M}}{\Coe{z.B_j}{s'}{z}{\dsubst{N_i}{r}{y}}}{B_j}$ for all
$i$ by
$\ceqtm[\Psi,z]<s_j=s_j',r_i=r_i'>{M}{\dsubst{N_i}{r}{y}}{\dsubst{B_j}{s'}{z}}$.
\end{enumerate}
\item $\ceqtm[\Psi,z]{F[c]}{F[c']}{A}$ for any $\ceqtm{c}{c'}{\Fcom}$, by
\begin{enumerate}
\item $\ceqtm{\Kcap{s}{s'}{c}{\sys{s_j=s_j'}{z.B_j}}}%
{\Kcap{s}{s'}{c'}{\sys{s_j=s_j'}{z.B_j}}}{A}$,
\item $\ceqtm[\Psi,z']<s_j=s_j',s_{j'}=s_{j'}'>%
{\Coe{z.B_j}{z'}{s}{\Coe{z.B_j}{s'}{z'}{c}}}%
{\Coe{z.B_{j'}}{z'}{s}{\Coe{z.B_{j'}}{s'}{z'}{c'}}}{A}$ for all $j,j'$ by
$\ceqtype{Kan}<s_j=s_j'>{\Fcom}{\dsubst{B_j}{s'}{z}}$ and
$\ceqtype{Kan}<s_j=s_j'>{\dsubst{B_j}{s}{z}}{A}$, and
\item $\ceqtm<s_j=s_j'>%
{\dsubst{(\Coe{z.B_j}{z'}{s}{\Coe{z.B_j}{s'}{z'}{c}})}{s'}{z'}}%
{\Kcap{s}{s'}{c}{\sys{s_j=s_j'}{z.B_j}}}{A}$ for all $j$ because both sides
$\ensuremath{\mathbin{\doteq}}$ $\Coe{z.B_j}{s'}{s}{c}$.
\end{enumerate}
\item $\coftype{O}{A}$ by
\begin{enumerate}
\item $\coftype{\dsubst{(F[M])}{s}{z}}{A}$ by $\coftype{M}{\Fcom}$,
\item $\ceqtm[\Psi,y]<r_i=r_i',r_{i'}=r_{i'}'>%
{\dsubst{(F[N_i])}{s}{z}}{\dsubst{(F[N_{i'}])}{s}{z}}{A}$ for all $i,i'$ by
$\ceqtm[\Psi,y]<r_i=r_i',r_{i'}=r_{i'}'>{N_i}{N_{i'}}{\Fcom}$, and
\item $\ceqtm<r_i=r_i'>%
{\dsubst{(F[\dsubst{N_i}{r}{y}])}{s}{z}}{\dsubst{(F[M])}{s}{z}}{A}$ for all $i$
by $\ceqtm<r_i=r_i'>{\dsubst{N_i}{r}{y}}{M}{\Fcom}$.
\end{enumerate}
\item $\coftype{Q}{A}$ by
\begin{enumerate}
\item $\ceqtm[\Psi,z]<r_i=r_i',r_{i'}=r_{i'}'>%
{F[\dsubst{N_i}{r'}{y}]}{F[\dsubst{N_{i'}}{r'}{y}]}{A}$ for all $i,i'$ by
$\ceqtm<r_i=r_i',r_{i'}=r_{i'}'>%
{\dsubst{N_i}{r'}{y}}{\dsubst{N_{i'}}{r'}{y}}{\Fcom}$,
\item $\ceqtm[\Psi,z]<s_j=s_j',s_{j'}=s_{j'}'>%
{\Coe{z.B_j}{z}{s}{P_j}}{\Coe{z.B_{j'}}{z}{s}{P_{j'}}}{A}$ by
$\ceqtm[\Psi,z]<s_j=s_j',s_{j'}=s_{j'}'>{P_j}{P_{j'}}{B_j}$,
\item $\coftype[\Psi,z]<r=r'>{F[M]}{A}$ by $\coftype{M}{\Fcom}$,
\item $\coftype{O}{A}$,
\item $\ceqtm<r_i=r_i'>{\dsubst{(F[\dsubst{N_i}{r'}{y}])}{s}{z}}{O}{A}$
for all $i$,
\item $\ceqtm<s_j=s_j'>{\dsubst{(\Coe{z.B_j}{z}{s}{P_j})}{s}{z}}{O}{A}$ for all
$j$, because the left side
$\ceqtm<s_j=s_j'>{\dsubst{(\Coe{z.B_j}{z}{s}{P_j})}{s}{z}}%
{\Hcom{\dsubst{B_j}{s}{z}}{r}{r'}{\Coe{z.B_j}{s'}{s}{M}}%
{\sys{r_i=r_i'}{y.\Coe{z.B_j}{s'}{s}{N_i}}}}{A}$, and this $\ensuremath{\mathbin{\doteq}}$ $O$ by
$\ceqtype{Kan}<s_j=s_j'>{\dsubst{B_j}{s}{z}}{A}$,
$\ceqtm<s_j=s_j'>{\Coe{z.B_j}{s'}{s}{M}}{\dsubst{(F[M])}{s}{z}}{A}$ (because the
right side $\ensuremath{\mathbin{\doteq}}$ $\Coe{z.B_j}{s}{s}{\Coe{z.B_j}{s'}{s}{M}}$), and
$\ceqtm<s_j=s_j',r_i=r_i'>{\Coe{z.B_j}{s'}{s}{N_i}}{\dsubst{(F[N_i])}{s}{z}}{A}$
for all $i$ (because the right side $\ensuremath{\mathbin{\doteq}}$
$\Coe{z.B_j}{s}{s}{\Coe{z.B_j}{s'}{s}{N_i}}$),
\item $\ceqtm<r=r'>{\dsubst{(F[M])}{s}{z}}{O}{A}$,
\item $\ceqtm[\Psi,z]<r_i=r_i',s_j=s_j'>%
{F[\dsubst{N_i}{r'}{y}]}{\Coe{z.B_j}{z}{s}{P_j}}{A}$ for all $i,j$ because both
sides $\ensuremath{\mathbin{\doteq}}$ $\Coe{z.B_j}{z}{s}{\Coe{z.B_j}{s'}{z}{\dsubst{N_i}{r'}{y}}}$,
\item $\ceqtm[\Psi,z]<r_i=r_i',r=r'>{F[\dsubst{N_i}{r'}{y}]}{F[M]}{A}$
for all $i$ by $\ceqtm<r_i=r_i'>{\dsubst{N_i}{r'}{y}}{M}{\Fcom}$, and
\item $\ceqtm[\Psi,z]<s_j=s_j',r=r'>{\Coe{z.B_j}{z}{s}{P_j}}{F[M]}{A}$
for all $j$ because both sides are $\ensuremath{\mathbin{\doteq}}$
$\Coe{z.B_j}{z}{s}{\Coe{z.B_j}{s'}{z}{M}}$.
\end{enumerate}
\item $\coftype{\Kbox{s}{s'}{Q}{\sys{s_j=s_j'}{\dsubst{P_j}{s'}{z}}}}{\Fcom}$ by
$\coftype{Q}{A}$, $\ceqtm<s_j=s_j',s_{j'}=s_{j'}'>%
{\dsubst{P_j}{s'}{z}}{\dsubst{P_{j'}}{s'}{z}}{\dsubst{B_j}{s'}{z}}$ for all
$j,j'$, and $\ceqtm<s_j=s_j'>{\Coe{z.B_j}{s'}{s}{\dsubst{P_j}{s'}{z}}}{Q}{A}$
for all $j$.
\end{enumerate}
When $\td{s}{\psi}\neq\td{s'}{\psi}$ and $\td{s_j}{\psi}\neq\td{s_j'}{\psi}$ for
all $j$, coherence is immediate. When $\td{s}{\psi}=\td{s'}{\psi}$,
$\td{\Kbox}{\psi} \ensuremath{\mathbin{\doteq}} \td{Q}{\psi} \ensuremath{\mathbin{\doteq}}{}$
$\ceqtm[\Psi']{\td{O}{\psi}}{\Hcom{\td{A}{\psi}}{\td{r}{\psi}}{\td{r'}{\psi}}%
{\td{M}{\psi}}{\sys{\td{\xi_i}{\psi}}{y.\td{N_i}{\psi}}}}{\td{A}{\psi}}$ by
$\ceqtm[\Psi']{\td{\dsubst{(F[M])}{s}{z}}{\psi}}{\td{M}{\psi}}{\td{A}{\psi}}$
and similarly for each tube. When $\td{s}{\psi}\neq\td{s'}{\psi}$ and
$\td{s_j}{\psi}=\td{s_j'}{\psi}$ for the least such $j$,
$\td{\Kbox}{\psi} \ensuremath{\mathbin{\doteq}} \td{\dsubst{P_j}{s'}{z}}{\psi} \ensuremath{\mathbin{\doteq}}
\td{(\Hcom{\dsubst{B_j}{s'}{z}}{r}{r'}{\Coe{z.B_j}{s'}{s'}{M}}%
{\sys{r_i=r_i'}{y.\Coe{z.B_j}{s'}{s'}{N_i}}})}{\psi}
\ensuremath{\mathbin{\doteq}} \td{(\Hcom{\dsubst{B_j}{s'}{z}}{r}{r'}{M}{\sys{r_i=r_i'}{y.N_i}})}{\psi}$.
By \cref{lem:cohexp-ceqtm}, $\ceqtm{\Hcom{\Fcom}}{\Kbox}{\Fcom}$; part (1)
follows by a symmetric argument on the right side.
For part (2), if $r=r'$ then
$Q\ensuremath{\mathbin{\doteq}} \ceqtm{\dsubst{(F[M])}{s'}{z}}{\Kcap{s}{s'}{M}{\sys{s_j=s_j'}{z.B_j}}}{A}$
and $\dsubst{P_j}{s'}{z}\ensuremath{\mathbin{\doteq}}
\ceqtm<s_j=s_j'>{\Coe{z.B_j}{s'}{s'}{M}}{M}{\dsubst{B_j}{s'}{z}}$ for all $j$,
so $\ceqtm{\Kbox{s}{s'}{Q}{\sys{s_j=s_j'}{\dsubst{P_j}{s'}{z}}}}{M}{\Fcom}$ by
\cref{rul:fcom-eta}, and part (2) follows by transitivity.
For part (3), if $r_i=r_i'$ then
$Q\ensuremath{\mathbin{\doteq}} \ceqtm{\dsubst{(F[\dsubst{N_i}{r'}{y}])}{s'}{z}}%
{\Kcap{s}{s'}{\dsubst{N_i}{r'}{y}}{\sys{s_j=s_j'}{z.B_j}}}{A}$
and $\dsubst{P_j}{s'}{z}\ensuremath{\mathbin{\doteq}}
\ceqtm<s_j=s_j'>{\Coe{z.B_j}{s'}{s'}{\dsubst{N_i}{r'}{y}}}%
{\dsubst{N_i}{r'}{y}}{\dsubst{B_j}{s'}{z}}$ for all $j$, so
$\ceqtm{\Kbox}{\dsubst{N_i}{r'}{y}}{\Fcom}$ by \cref{rul:fcom-eta}, and part (3)
follows by transitivity.
\end{proof}
\begin{lemma}\label{lem:fcom-coe}
Let $\Fcom := \Fcom{s}{s'}{A}{\sys{s_i=s_i'}{z.B_i}}$. If
\begin{enumerate}
\item $\etc{s_i=s_i'}$ is valid in $(\Psi,x)$,
\item $\ceqtype{Kan}[\Psi,x]{A}{A'}$,
\item $\ceqtype{Kan}[\Psi,x,z]<s_i=s_i',s_j=s_j'>{B_i}{B_j'}$ for any $i,j$,
\item $\ceqtype{Kan}[\Psi,x]<s_i=s_i'>{\dsubst{B_i}{s}{z}}{A}$ for any $i$, and
\item $\ceqtm{M}{M'}{\dsubst{\Fcom}{r}{x}}$,
\end{enumerate}
then
\begin{enumerate}
\item $\ceqtm{\Coe*{x.\Fcom}}%
{\Coe{x.\Fcom{s}{s'}{A'}{\sys{s_i=s_i'}{z.B_i'}}}{r}{r'}{M'}}{\dsubst{\Fcom}{r'}{x}}$; and
\item if $r=r'$ then
$\ceqtm{\Coe*{x.\Fcom}}{M}{\dsubst{\Fcom}{r'}{x}}$.
\end{enumerate}
\end{lemma}
\begin{proof}
If $s=s'$ or $s_i=s_i'$ for some $i$, the results are immediate by parts (2--3)
of \cref{lem:fcom-preform}. Otherwise, $s\neq s'$ and $s_i\neq s_i'$ for all
$i$; apply coherent expansion to $\Coe*{x.\Fcom}$ with family
\[\begin{cases}
\Coe{x.\td{A}{\psi}}{\td{r}{\psi}}{\td{r'}{\psi}}{\td{M}{\psi}}
& \text{$\td{s}{\psi}=\td{s'}{\psi}$} \\
\Coe{x.\td{\dsubst{B_i}{s'}{z}}{\psi}}{\td{r}{\psi}}{\td{r'}{\psi}}{\td{M}{\psi}}
& \text{$\td{s}{\psi}\neq\td{s'}{\psi}$,
least $\td{s_i}{\psi}=\td{s_i'}{\psi}$} \\
\td{\dsubst{(\Kbox{s}{s'}{R}{\sys{\xi_i}{\dsubst{Q_i}{s'}{z}}})}{r'}{x}}{\psi}
& \text{$\td{s}{\psi}\neq\td{s'}{\psi}$,
$\forall i.\td{s_i}{\psi}\neq\td{s_i'}{\psi}$} \\
\quad N_i =
\Coe{z.B_i}{s'}{z}{\Coe{x.\dsubst{B_i}{s'}{z}}{r}{x}{M}} &\\
\quad O = \dsubst{(\Hcom{A}{s'}{z}{\Kcap{s}{s'}{M}{\sys{\xi_i}{z.B_i}}}{
\sys{\xi_i}{z.\Coe{z.B_i}{z}{s}{N_i}}})}{r}{x} \\
\quad P = \Gcom{x.A}{r}{r'}{\dsubst{O}{\dsubst{s}{r}{x}}{z}}{
\st{\sys{\xi_i}{x.\dsubst{N_i}{s}{z}}}{x\ensuremath{\mathbin{\#}}\xi_i},T} &\\
\quad T =
\st{\tube{s=s'}{x.\Coe{x.A}{r}{x}{M}}}{x\ensuremath{\mathbin{\#}} s,s'} &\\
\quad Q_k = \Gcom{z.\dsubst{B_k}{r'}{x}}{\dsubst{s}{r'}{x}}{z}{P}{
\st{\sys{\xi_i}{z.\dsubst{N_i}{r'}{x}}}{x\ensuremath{\mathbin{\#}}\xi_i},
\tube{r=r'}{z.\dsubst{N_k}{r'}{x}}}\hspace{-0.6em} &\\
\quad R =
\Hcom{A}{s}{s'}{P}{\sys{\xi_i}{z.\Coe{z.B_i}{z}{s}{Q_i}},\tube{r=r'}{z.O}}
\end{cases}\]
Consider $\psi=\id$.
\begin{enumerate}
\item $\ceqtm[\Psi,x,z]<\dsubst{\xi_i}{r}{x},\dsubst{\xi_j}{r}{x}>{N_i}{N_j}{B_i}$
for all $i,j$ by
$\coftype<\dsubst{\xi_i}{r}{x}>{M}{\dsubst{\dsubst{B_i}{s'}{z}}{r}{x}}$
(by $\coftype{M}{\dsubst{\Fcom}{r}{x}}$ and
$\ceqtype{Kan}[\Psi,x]<\xi_i>{\Fcom}{\dsubst{B_i}{s'}{z}}$) and
$\ceqtype{Kan}[\Psi,x,z]<\xi_i,\xi_j>{B_i}{B_j}$.
\item $\coftype[\Psi,z]{O}{\dsubst{A}{r}{x}}$ by
\begin{enumerate}
\item $\coftype{\dsubst{(\Kcap{s}{s'}{M}{\sys{\xi_i}{z.B_i}})}{r}{x}}{\dsubst{A}{r}{x}}$
by $\coftype{M}{\dsubst{\Fcom}{r}{x}}$,
\item $\ceqtm[\Psi,z]<\dsubst{\xi_i}{r}{x},\dsubst{\xi_j}{r}{x}>%
{\Coe{z.\dsubst{B_i}{r}{x}}{z}{\dsubst{s}{r}{x}}{\dsubst{N_i}{r}{x}}}%
{\Coe{z.\dsubst{B_j}{r}{x}}{z}{\dsubst{s}{r}{x}}{\dsubst{N_j}{r}{x}}}%
{\dsubst{A}{r}{x}}$ for all $i,j$
by $\ceqtype{Kan}<\dsubst{\xi_i}{r}{x}>{\dsubst{\dsubst{B_i}{s}{z}}{r}{x}}{\dsubst{A}{r}{x}}$, and
\item $\ceqtm<\dsubst{\xi_i}{r}{x}>%
{\dsubst{(\Kcap{s}{s'}{M}{\sys{\xi_i}{z.B_i}})}{r}{x}}%
{\dsubst{\dsubst{(\Coe{z.B_i}{z}{s}{N_i})}{s'}{z}}{r}{x}}%
{\dsubst{A}{r}{x}}$ for all $i$ by
$\ceqtm<\dsubst{\xi_i}{r}{x}>{\dsubst{\Kcap}{r}{x}}%
{\dsubst{(\Coe{z.B_i}{s'}{s}{M})}{r}{x}}{\dsubst{A}{r}{x}}$ and
$\ceqtm<\dsubst{\xi_i}{r}{x}>{\dsubst{\dsubst{N_i}{s'}{z}}{r}{x}}{M}%
{\dsubst{\dsubst{B_i}{s'}{z}}{r}{x}}$.
\end{enumerate}
\item $\coftype{P}{\dsubst{A}{r'}{x}}$ by
\begin{enumerate}
\item $\coftype{\dsubst{O}{\dsubst{s}{r}{x}}{z}}{\dsubst{A}{r}{x}}$,
\item $\ceqtm[\Psi,x]<\xi_i,\xi_j>{\dsubst{N_i}{s}{z}}{\dsubst{N_j}{s}{z}}{A}$
for all $i,j$ such that $x\ensuremath{\mathbin{\#}}\xi_i,\xi_j$ by
$\ceqtm[\Psi,x]<\dsubst{\xi_i}{r}{x},\dsubst{\xi_j}{r}{x}>%
{\dsubst{N_i}{s}{z}}{\dsubst{N_j}{s}{z}}{\dsubst{B_i}{s}{z}}$ and
$\ceqtype{Kan}[\Psi,x]<\xi_i>{\dsubst{B_i}{s}{z}}{A}$,
\item $\coftype[\Psi,x]<s=s'>{\Coe{x.A}{r}{x}{M}}{A}$ if $x\ensuremath{\mathbin{\#}} s,s'$ by
$\ceqtype{Kan}<\dsubst{s}{r}{x}=\dsubst{s'}{r}{x}>{\dsubst{\Fcom}{r}{x}}{\dsubst{A}{r}{x}}$,
\item $\ceqtm<\xi_i>{\dsubst{O}{\dsubst{s}{r}{x}}{z}}%
{\dsubst{\dsubst{N_i}{s}{z}}{r}{x}}{\dsubst{A}{r}{x}}$ for all $i$ such that
$x\ensuremath{\mathbin{\#}}\xi_i$ by ${\dsubst{O}{\dsubst{s}{r}{x}}{z}}\ensuremath{\mathbin{\doteq}}{}$
$\ceqtm<\dsubst{\xi_i}{r}{x}>{\dsubst{\dsubst{(\Coe{z.B_i}{z}{s}{N_i})}{s}{z}}{r}{x}}%
{\dsubst{\dsubst{N_i}{s}{z}}{r}{x}}{\dsubst{A}{r}{x}}$,
\item $\ceqtm<s=s'>{\dsubst{O}{\dsubst{s}{r}{x}}{z}}%
{\dsubst{(\Coe{x.A}{r}{x}{M})}{r}{x}}{\dsubst{A}{r}{x}}$ if $x\ensuremath{\mathbin{\#}} s,s'$ by
$\dsubst{O}{\dsubst{s}{r}{x}}{z} = \dsubst{O}{s}{z} \ensuremath{\mathbin{\doteq}}{}
\ceqtm<s=s'>{\dsubst{\Kcap}{r}{x}}{M}{\dsubst{A}{r}{x}}$, and
\item $\ceqtm[\Psi,x]<\xi_i,s=s'>{\dsubst{N_i}{s}{z}}{\Coe{x.A}{r}{x}{M}}{A}$
for all $i$ such that $x\ensuremath{\mathbin{\#}} \xi_i,s,s'$ by
$\ceqtm[\Psi,x]<\xi_i,s=s'>{\dsubst{N_i}{s}{z}}%
{\Coe{x.\dsubst{B_i}{s'}{z}}{r}{x}{M}}{\dsubst{B_i}{s'}{z}}$
and $\ceqtype{Kan}[\Psi,x]<\xi_i,s=s'>{\dsubst{B_i}{s'}{z}}{A}$.
\end{enumerate}
\item $\ceqtm[\Psi,z]<\dsubst{\xi_k}{r'}{x},\dsubst{\xi_{k'}}{r'}{x}>%
{Q_k}{Q_{k'}}{\dsubst{B_k}{r'}{x}}$ for all $k,k'$ by
\begin{enumerate}
\item $\coftype<\dsubst{\xi_k}{r'}{x}>{P}{\dsubst{\dsubst{B_k}{s}{z}}{r'}{x}}$
by $\ceqtype{Kan}[\Psi,x]<\xi_k>{A}{\dsubst{B_k}{s}{z}}$,
\item $\ceqtm[\Psi,z]<\dsubst{\xi_k}{r'}{x},\xi_i,\xi_j>%
{\dsubst{N_i}{r'}{x}}{\dsubst{N_j}{r'}{x}}{\dsubst{B_k}{r'}{x}}$ for all $i,j$
such that $x\ensuremath{\mathbin{\#}}\xi_i,\xi_j$ by
$\ceqtm[\Psi,x,z]<\xi_i,\xi_j>{N_i}{N_j}{B_i}$ and
$\ceqtype{Kan}[\Psi,x,z]<\xi_i,\xi_k>{B_i}{B_k}$,
\item $\ceqtm[\Psi,z]<\dsubst{\xi_k}{r'}{x},\dsubst{\xi_{k'}}{r'}{x}>%
{\dsubst{N_k}{r'}{x}}{\dsubst{N_{k'}}{r'}{x}}{\dsubst{B_k}{r'}{x}}$,
\item $\ceqtm<\dsubst{\xi_k}{r'}{x},\xi_i>%
{P}{\dsubst{\dsubst{N_i}{s}{z}}{r'}{x}}{\dsubst{\dsubst{B_k}{s}{z}}{r'}{x}}$ for
all $i$ such that $x\ensuremath{\mathbin{\#}}\xi_i$ by
$\ceqtm<\xi_i>{P}{\dsubst{\dsubst{N_i}{s}{z}}{r'}{x}}{\dsubst{A}{r'}{x}}$ and
$\ceqtype{Kan}<\dsubst{\xi_k}{r'}{x}>{\dsubst{A}{r'}{x}}{\dsubst{\dsubst{B_k}{s}{z}}{r'}{x}}$,
\item $\ceqtm<\dsubst{\xi_k}{r'}{x},r=r'>%
{P}{\dsubst{\dsubst{N_k}{s}{z}}{r'}{x}}{\dsubst{\dsubst{B_k}{s}{z}}{r'}{x}}$
because $P\ensuremath{\mathbin{\doteq}}
\ceqtm<\dsubst{\xi_k}{r'}{x},r=r'>{\dsubst{O}{\dsubst{s}{r}{x}}{z}}%
{\dsubst{\dsubst{(\Coe{z.B_k}{z}{s}{N_k})}{\dsubst{s}{r}{x}}{z}}{r}{x}}
{\dsubst{A}{r'}{x}}$, and
\item $\ceqtm[\Psi,z]<\dsubst{\xi_k}{r'}{x},\xi_i,r=r'>%
{\dsubst{N_i}{r'}{x}}{\dsubst{N_k}{r'}{x}}{\dsubst{B_k}{r'}{x}}$ for all $i$
such that $x\ensuremath{\mathbin{\#}}\xi_i$.
\end{enumerate}
\item $\coftype{\dsubst{R}{r'}{x}}{\dsubst{A}{r'}{x}}$ by
\begin{enumerate}
\item $\coftype{P}{\dsubst{A}{r'}{x}}$,
\item $\ceqtm[\Psi,z]<\dsubst{\xi_i}{r'}{x},\dsubst{\xi_j}{r'}{x}>%
{\Coe{z.\dsubst{B_i}{r'}{x}}{z}{\dsubst{s}{r'}{x}}{Q_i}}%
{\Coe{z.\dsubst{B_j}{r'}{x}}{z}{\dsubst{s}{r'}{x}}{Q_j}}%
{\dsubst{A}{r'}{x}}$ for all $i,j$ by $\ceqtype{Kan}[\Psi,z,x]<\xi_i,\xi_j>{B_i}{B_j}$ and
$\ceqtype{Kan}<\dsubst{\xi_i}{r'}{x}>{\dsubst{\dsubst{B_i}{s}{z}}{r'}{x}}{\dsubst{A}{r'}{x}}$,
\item $\coftype[\Psi,z]<r=r'>{O}{\dsubst{A}{r'}{x}}$,
\item $\ceqtm<\dsubst{\xi_i}{r'}{x}>%
{P}{\dsubst{\dsubst{(\Coe{z.B_i}{z}{s}{Q_i})}{s}{z}}{r'}{x}}{\dsubst{A}{r'}{x}}$
for all $i$ by
$\ceqtm<\dsubst{\xi_i}{r'}{x}>{\dsubst{\dsubst{Q_i}{s}{z}}{r'}{x}}{P}%
{\dsubst{\dsubst{B_i}{s}{z}}{r'}{x}}$ and
$\ceqtype{Kan}<\dsubst{\xi_i}{r'}{x}>{\dsubst{\dsubst{B_i}{s}{z}}{r'}{x}}{\dsubst{A}{r'}{x}}$,
\item $\ceqtm<r=r'>{P}{\dsubst{\dsubst{O}{s}{z}}{r'}{x}}{\dsubst{A}{r'}{x}}$ by
$\dsubst{\dsubst{O}{s}{z}}{r'}{x} = \dsubst{O}{\dsubst{s}{r'}{x}}{z}$, and
\item $\ceqtm[\Psi,z]<\dsubst{\xi_i}{r'}{x},r=r'>%
{\dsubst{(\Coe{z.B_i}{z}{s}{Q_i})}{r'}{x}}{\dsubst{O}{r'}{x}}{\dsubst{A}{r'}{x}}$
for all $i$ by $\dsubst{O}{r'}{x} =
\ceqtm[\Psi,z]<\dsubst{\xi_i}{r'}{x}>{O}{\dsubst{(\Coe{z.B_i}{z}{s}{N_i})}{r}{x}}{\dsubst{A}{r'}{x}}$
and
$\ceqtm[\Psi,z]<\dsubst{\xi_i}{r'}{x},r=r'>{\dsubst{Q_i}{r'}{x}}{\dsubst{N_i}{r'}{x}}{\dsubst{A}{r'}{x}}$.
\end{enumerate}
\item $\coftype{\Kbox{\dsubst{s}{r'}{x}}{\dsubst{s'}{r'}{x}}{\dsubst{R}{r'}{x}}%
{\sys{\dsubst{\xi_i}{r'}{x}}{\dsubst{Q_i}{\dsubst{s'}{r'}{x}}{z}}}}{\dsubst{\Fcom}{r'}{x}}$ by
\begin{enumerate}
\item $\coftype{\dsubst{R}{r'}{x}}{\dsubst{A}{r'}{x}}$,
\item $\ceqtm<\dsubst{\xi_i}{r'}{x},\dsubst{\xi_j}{r'}{x}>%
{\dsubst{Q_i}{\dsubst{s'}{r'}{x}}{z}}{\dsubst{Q_j}{\dsubst{s'}{r'}{x}}{z}}{\dsubst{\dsubst{B_i}{s'}{z}}{r'}{x}}$
for all $i,j$, and
\item $\ceqtm<\dsubst{\xi_i}{r'}{x}>%
{\dsubst{(\Coe{z.B_i}{s'}{s}{\dsubst{Q_i}{s'}{z}})}{r'}{x}}{\dsubst{R}{r'}{x}}{\dsubst{A}{r'}{x}}$
for all $i$ by $\ceqtm<\dsubst{\xi_i}{r'}{x}>%
{\dsubst{R}{r'}{x}}{\dsubst{\dsubst{(\Coe{z.B_i}{z}{s}{Q_i})}{s'}{z}}{r'}{x}}{\dsubst{A}{r'}{x}}$.
\end{enumerate}
\end{enumerate}
Consider $\tds{\Psi'}{\psi}{\Psi}$. When $\td{s}{\psi}\neq\td{s'}{\psi}$ and
$\td{s_i}{\psi}\neq\td{s_i'}{\psi}$ for all $i$, coherence is immediate. When
$\td{s}{\psi}=\td{s'}{\psi}$, then by $s\neq s'$, we must have $x\ensuremath{\mathbin{\#}} s,s'$
and thus $\td{\dsubst{s}{r'}{x}}{\psi} = \td{\dsubst{s'}{r'}{x}}{\psi}$ also.
Thus $\td{\dsubst{\Kbox}{r'}{x}}{\psi}\ensuremath{\mathbin{\doteq}} \td{\dsubst{R}{r'}{x}}{\psi}\ensuremath{\mathbin{\doteq}}
\ceqtm[\Psi']{\td{\dsubst{P}{r'}{x}}{\psi}}%
{\td{\dsubst{(\Coe{x.A}{r}{x}{M})}{r'}{x}}{\psi}}{\dsubst{A}{r'}{x}}$ as
required. When $\td{s}{\psi}\neq\td{s'}{\psi}$ and
$\td{s_i}{\psi}=\td{s_i'}{\psi}$ for the least such $i$, again $x\ensuremath{\mathbin{\#}}
s_i,s_i'$ and
$\td{\dsubst{\Kbox}{r'}{x}}{\psi}\ensuremath{\mathbin{\doteq}} \td{\dsubst{\dsubst{Q_i}{s'}{z}}{r'}{x}}{\psi}\ensuremath{\mathbin{\doteq}}
\ceqtm[\Psi']{\td{\dsubst{\dsubst{N_i}{s'}{z}}{r'}{x}}{\psi}}%
{\td{(\Coe{x.\dsubst{B_i}{s'}{z}}{r}{r'}{M})}{\psi}}{\dsubst{A}{r'}{x}}$.
By \cref{lem:cohexp-ceqtm},
$\ceqtm{\Coe*{x.\Fcom}}{\dsubst{\Kbox}{r'}{x}}{\dsubst{\Fcom}{r'}{x}}$; part (1)
follows by a symmetric argument on the right side.
For part (2), if $r=r'$ then
$\dsubst{R}{r'}{x}\ensuremath{\mathbin{\doteq}} \ceqtm{\dsubst{\dsubst{O}{s'}{z}}{r'}{x}}%
{\dsubst{(\Kcap{s}{s'}{M}{\sys{\xi_i}{z.B_i}})}{r'}{x}}{\dsubst{A}{r'}{x}}$
and $\dsubst{\dsubst{Q_i}{s'}{z}}{r'}{x}\ensuremath{\mathbin{\doteq}}
\ceqtm<\dsubst{\xi_i}{r'}{x}>{\dsubst{\dsubst{N_i}{s'}{z}}{r'}{x}}{M}{\dsubst{\dsubst{B_i}{s'}{z}}{r'}{x}}$
for all $i$, so $\ceqtm{\dsubst{\Kbox}{r'}{x}}{M}{\dsubst{\Fcom}{r'}{x}}$ by
\cref{rul:fcom-eta}, and part (2) follows by transitivity.
\end{proof}
\begin{rul}[Kan type formation]\label{rul:fcom-form-kan}
If $A,\sys{r_i=r_i'}{y.B_i}$ and $A',\sys{r_i=r_i'}{y.B_i'}$ are equal type
compositions $r\rightsquigarrow r'$, then
\begin{enumerate}
\item $\ceqtype{Kan}{\Fcom{r}{r'}{A}{\sys{r_i=r_i'}{y.B_i}}}%
{\Fcom{r}{r'}{A'}{\sys{r_i=r_i'}{y.B_i'}}}$,
\item if $r=r'$ then $\ceqtype{Kan}{\Fcom{r}{r}{A}{\sys{r_i=r_i'}{y.B_i}}}{A}$, and
\item if $r_i = r_i'$ then
$\ceqtype{Kan}{\Fcom{r}{r'}{A}{\sys{r_i=r_i'}{y.B_i}}}{\dsubst{B_i}{r'}{y}}$.
\end{enumerate}
\end{rul}
\begin{proof}
We already showed parts (2--3) in \cref{lem:fcom-preform}. For part (1), the
$\Hcom$ conditions follow from \cref{lem:fcom-hcom} at $\td{\Fcom}{\psi}$ for
any $\tds{\Psi'}{\psi}{\Psi}$; the $\Coe$ conditions follow from \cref{lem:fcom-coe} at
$x.\td{\Fcom}{\psi}$ for any $\tds{(\Psi',x)}{\psi}{\Psi}$.
\end{proof}
\subsection{Universes}
Our type theory has two hierarchies of universes, $\Upre$ and $\UKan$,
constructed by two sequences $\pre\tau_j$ and $\Kan\tau_j$ of cubical type
systems. To prove theorems about universe types in the cubical type system
$\pre\tau_\omega$, we must analyze these sequences as constructed in
\cref{sec:typesys}.
\begin{lemma}\label{lem:monotone-judg}
If $\tau,\tau'$ are cubical type systems, $\tau\subseteq\tau'$, and
$\relcts*{\tau}{\ensuremath{\mathcal{J}}}$ for any judgment $\ensuremath{\mathcal{J}}$, then $\relcts*{\tau'}{\ensuremath{\mathcal{J}}}$.
\end{lemma}
\begin{proof}
The result follows by $\ensuremath{\mathsf{PTy}}(\tau)\subseteq\ensuremath{\mathsf{PTy}}(\tau')$ and the functionality of
$\tau,\tau'$; the latter ensures that any (pre)type in $\tau$ has no other
meanings in $\tau'$.
\end{proof}
\begin{lemma}\label{lem:pty-ceqtypex}
If $\tau$ is a cubical type system, $\wftm{A}$, $\wftm{B}$, and for all
$\tds{\Psi_1}{\psi_1}{\Psi}$ and
$\tds{\Psi_2}{\psi_2}{\Psi_1}$, we have
$\td{A}{\psi_1}\ensuremath{\Downarrow} A_1$,
$\td{A_1}{\psi_2}\ensuremath{\Downarrow} A_2$,
$\td{A}{\psi_1\psi_2}\ensuremath{\Downarrow} A_{12}$,
$\td{B}{\psi_1}\ensuremath{\Downarrow} B_1$,
$\td{B_1}{\psi_2}\ensuremath{\Downarrow} B_2$,
$\td{B}{\psi_1\psi_2}\ensuremath{\Downarrow} B_{12}$,
$\relcts{\tau}{\ceqtype{\kappa}[\Psi_2]{A_2}{A_{12}}}$,
$\relcts{\tau}{\ceqtype{\kappa}[\Psi_2]{B_2}{B_{12}}}$, and
$\relcts{\tau}{\ceqtype{\kappa}[\Psi_2]{A_2}{B_2}}$, then
$\relcts{\tau}{\ceqtype{\kappa}{A}{B}}$.
\end{lemma}
\begin{proof}
We apply coherent expansion to $A$ and the family of terms
$\{A^{\Psi'}_\psi \mid \td{A}{\psi}\ensuremath{\Downarrow} A^{\Psi'}_\psi\}^{\Psi'}_\psi$.
By our hypotheses at $\psi,\id[\Psi']$ and $\id,\id$ we know
$\relcts{\tau}{\cwftype{\kappa}[\Psi']{A^{\Psi'}_\psi}}$ and
$\relcts{\tau}{\cwftype{\kappa}[\Psi']{\td{(A^{\Psi}_{\id})}{\psi}}}$;
for any $\tds{\Psi''}{\psi'}{\Psi'}$, our hypotheses at $\psi,\psi'$ and
$\id,\psi\psi'$ show
$\relcts{\tau}{\ceqtype{\kappa}[\Psi'']{A'}{A^{\Psi''}_{\psi\psi'}}}$ where
$\td{(A^{\Psi'}_\psi)}{\psi'}\ensuremath{\Downarrow} A'$, and
$\relcts{\tau}{\ceqtype{\kappa}[\Psi'']{A''}{A^{\Psi''}_{\psi\psi'}}}$ where
$\td{(A^{\Psi}_{\id})}{\psi\psi'}\ensuremath{\Downarrow} A''$, hence
$\relcts{\tau}{\ceqtype{\kappa}[\Psi'']{A'}{A''}}$.
If $\kappa=\mathsf{pre}$ then by \cref{lem:cwftypep-evals-ceqtypep},
$\relcts{\tau}{\ceqtype{pre}[\Psi']{\td{(A^\Psi_{\id})}{\psi}}{A_0'}}$ where
$\td{(A^\Psi_{\id})}{\psi}\ensuremath{\Downarrow} A_0'$; thus we have
$\relcts{\tau}{\ceqtype{pre}[\Psi']{A^{\Psi'}_{\psi}}{\td{(A^\Psi_{\id})}{\psi}}}$
by transitivity, and by \cref{lem:cohexp-ceqtypep},
$\relcts{\tau}{\ceqtype{pre}{A}{A_0}}$ where $A\ensuremath{\Downarrow} A_0$.
If $\kappa=\mathsf{Kan}$ then by
\cref{lem:cwftypek-evals-ceqtypek},
$\relcts{\tau}{\ceqtype{Kan}[\Psi']{A^{\Psi'}_\psi}{\td{(A^{\Psi}_{\id})}{\psi}}}$,
and by \cref{lem:cohexp-ceqtypek}, $\relcts{\tau}{\ceqtype{Kan}{A}{A_0}}$ where
$A\ensuremath{\Downarrow} A_0$. In either case, we repeat the argument for $B$ to obtain
$\relcts{\tau}{\ceqtype{\kappa}{B}{B_0}}$ where $B\ensuremath{\Downarrow} B_0$, and the result follows
by symmetry and transitivity.
\end{proof}
\begin{rul}[Pretype formation]\label{rul:U-form-pre}
If $i<j$ or $j=\omega$ then $\relcts{\pre\tau_j}{\cwftype{pre}{\Ux[i]}}$ and
$\relcts{\Kan\tau_j}{\cwftype{pre}{\UKan[i]}}$.
\end{rul}
\begin{proof}
In each case we have $\ensuremath{\mathsf{PTy}}(\tau^{\kappa'}_j)(\Psi,\Ux[i],\Ux[i],\_)$ by
$\sisval{\Ux[i]}$ and the definition of $\tau^{\kappa'}_j$. For
$\ensuremath{\mathsf{Coh}}(\vper{\Ux[i]})$, show that if $\vper{\Ux[i]}_{\Psi'}(A_0,B_0)$ then
$\ensuremath{\mathsf{Tm}}(\vper{\Ux[i]}(\Psi'))(A_0,B_0)$. But $\ensuremath{\mathsf{Tm}}(\vper{\Ux[i]}(\Psi'))(A,B)$ if
and only if $\ensuremath{\mathsf{PTy}}(\tau^\kappa_i)(\Psi',A,B,\_)$, so this is immediate by
value-coherence of $\tau^\kappa_i$.
\end{proof}
\begin{rul}[Cumulativity]
If $\relcts{\pre\tau_\omega}{\ceqtm{A}{B}{\Ux[i]}}$ and $i\leq j$ then
$\relcts{\pre\tau_\omega}{\ceqtm{A}{B}{\Ux[j]}}$.
\end{rul}
\begin{proof}
In \cref{sec:typesys} we observed that $\tau^\kappa_i\subseteq\tau^\kappa_j$
whenever $i\leq j$; thus $\vper{\Ux[i]}\subseteq\vper{\Ux[j]}$, and the result
follows because $\ensuremath{\mathsf{Tm}}$ is order-preserving.
\end{proof}
\begin{lemma}\label{lem:U-preelim}
~\begin{enumerate}
\item If $\relcts{\pre\tau_\omega}{\ceqtm{A}{B}{\UKan[i]}}$ then
$\relcts{\Kan\tau_i}{\ceqtype{Kan}{A}{B}}$.
\item If $\relcts{\pre\tau_\omega}{\ceqtm{A}{B}{\Upre[i]}}$ then
$\relcts{\pre\tau_i}{\ceqtype{pre}{A}{B}}$.
\end{enumerate}
\end{lemma}
\begin{proof}
We prove part (1) by strong induction on $i$. For each $i$, define $\Phi =
\{(\Psi,A_0,B_0,\phi) \mid \relcts{\Kan\tau_i}{\ceqtype{Kan}{A_0}{B_0}} \}$,
and show $K(\nu_i,\Phi)\subseteq\Phi$. We will conclude
$\Kan\tau_i\subseteq\Phi$ and so $\relcts{\Kan\tau_i}{\ceqtype{Kan}{A_0}{B_0}}$
whenever $\vper{\UKan[i]}(A_0,B_0)$; part (1) will follow by
\cref{lem:pty-ceqtypex}.
To establish $K(\nu_i,\Phi)\subseteq\Phi$, we check each type former
independently. Consider the case
$\textsc{Fun}(\Phi)(\Psi,\picl{a}{A}{B},\picl{a}{A'}{B'},\phi)$. Then
$\ensuremath{\mathsf{PTy}}(\Phi)(\Psi,A,A',\alpha)$, which by \cref{lem:pty-ceqtypex} implies
$\relcts{\Kan\tau_i}{\ceqtype{Kan}{A}{A'}}$; similarly,
$\relcts{\Kan\tau_i}{\eqtype{Kan}{\oft{a}{A}}{B}{B'}}$. By
\cref{rul:fun-form-kan}, we conclude
$\relcts{\Kan\tau_i}{\ceqtype{Kan}{\picl{a}{A}{B}}{\picl{a}{A'}{B'}}}$. The
same argument applies for every type former except for $\textsc{UKan}$, where we
must show $\relcts{\Kan\tau_i}{\cwftype{Kan}{\UKan[j]}}$ for every $j<i$. The
$\Coe$ conditions are trivial by $\Coe*{x.\UKan[j]} \ensuremath{\steps_\stable} M$; the $\Hcom$
conditions hold by $\Hcom{\UKan[j]} \ensuremath{\steps_\stable} \Fcom$,
$\relcts{\Kan\tau_i}{\ceqtm{A}{B}{\UKan[j]}}$ implies
$\relcts{\Kan\tau_i}{\ceqtype{Kan}{A}{B}}$ (by induction), and
\cref{rul:fcom-form-kan}.
We prove part (2) directly for all $i$, by establishing
$P(\nu_i,\Kan\tau_i,\Phi)\subseteq\Phi$ for $\Phi = \{(\Psi,A_0,B_0,\phi) \mid
\relcts{\pre\tau_i}{\ceqtype{pre}{A_0}{B_0}} \}$ and appealing to
\cref{lem:pty-ceqtypex}. Most type formers follow the same pattern as above; we
only discuss $\textsc{Fcom}$, $\textsc{UPre}$, and $\textsc{UKan}$. For
$\textsc{Fcom}$, we appeal to part (1) and \cref{rul:fcom-form-pre}, observing
that $\ensuremath{\mathsf{PTy}}(\Kan\tau_i)(\Psi,A,B,\_)$ if and only if
$\ensuremath{\mathsf{Tm}}(\vper{\UKan[i]}(\Psi))(A,B)$. For $\textsc{UPre}$ and $\textsc{UKan}$,
$\relcts{\pre\tau_i}{\cwftype{pre}{\Ux[j]}}$ for all $j<i$ is immediate by
\cref{rul:U-form-pre}.
\end{proof}
\begin{rul}[Elimination]\label{rul:U-elim}
If $\relcts{\pre\tau_\omega}{\ceqtm{A}{B}{\Ux[i]}}$ then
$\relcts{\pre\tau_\omega}{\ceqtype{\kappa}{A}{B}}$.
\end{rul}
\begin{proof}
Immediate by $\tau^\kappa_i\subseteq\pre\tau_\omega$ and
\cref{lem:U-preelim,lem:monotone-judg}.
\end{proof}
\begin{rul}[Introduction]\label{rul:U-intro}
In $\pre\tau_\omega$,
\begin{enumerate}
\item If $\ceqtm{A}{A'}{\Ux}$ and $\eqtm{\oft aA}{B}{B'}{\Ux}$ then
$\ceqtm{\picl{a}{A}{B}}{\picl{a}{A'}{B'}}{\Ux}$.
\item If $\ceqtm{A}{A'}{\Ux}$ and $\eqtm{\oft aA}{B}{B'}{\Ux}$ then
$\ceqtm{\sigmacl{a}{A}{B}}{\sigmacl{a}{A'}{B'}}{\Ux}$.
\item If $\ceqtm[\Psi,x]{A}{A'}{\Ux}$ and
$\ceqtm{P_\ensuremath{\varepsilon}}{P_\ensuremath{\varepsilon}'}{\dsubst{A}{\ensuremath{\varepsilon}}{x}}$ for $\ensuremath{\varepsilon}\in\{0,1\}$ then
$\ceqtm{\Path{x.A}{P_0}{P_1}}{\Path{x.A'}{P_0'}{P_1'}}{\Ux}$.
\item If $\ceqtm{A}{A'}{\Upre}$, $\ceqtm{M}{M'}{A}$, and $\ceqtm{N}{N'}{A}$ then
$\ceqtm{\Eq{A}{M}{N}}{\Eq{A'}{M'}{N'}}{\Upre}$.
\item $\coftype{\ensuremath{\mathsf{void}}}{\Ux}$.
\item $\coftype{\ensuremath{\mathsf{nat}}}{\Ux}$.
\item $\coftype{\ensuremath{\mathsf{bool}}}{\Ux}$.
\item $\coftype{\ensuremath{\mathsf{wbool}}}{\Ux}$.
\item $\coftype{\ensuremath{\mathbb{S}^1}}{\Ux}$.
\item If $\ceqtm<r=0>{A}{A'}{\Ux}$, $\ceqtm{B}{B'}{\Ux}$, and
$\ceqtm<r=0>{E}{E'}{\Equiv{A}{B}}$, then
$\ceqtm{\ua{r}{A,B,E}}{\ua{r}{A',B',E'}}{\Ux}$.
\item If $i<j$ then $\coftype{\Ux[i]}{\Upre[j]}$.
\item If $i<j$ then $\coftype{\UKan[i]}{\UKan[j]}$.
\end{enumerate}
\end{rul}
\begin{proof}
Note that \cref{rul:U-elim} is needed to make sense of these rules; for example,
in part (1), by \cref{rul:U-elim} and
$\relcts{\pre\tau_\omega}{\coftype{A}{\Ux}}$ we conclude
$\relcts{\pre\tau_\omega}{\cwftype{\kappa}{A}}$, which is a presupposition of
$\relcts{\pre\tau_\omega}{\eqtm{\oft aA}{B}{B'}{\Ux}}$.
For part (1), by $\relcts{\pre\tau_\omega}{\ceqtm{A}{A'}{\Ux}}$ and
\cref{lem:U-preelim}, $\relcts{\tau^\kappa_j}{\ceqtype{\kappa}{A}{A'}}$; similarly, by
$\relcts{\pre\tau_\omega}{\eqtm{\oft aA}{B}{B'}{\Ux}}$ and
\cref{lem:U-preelim,lem:monotone-judg},
$\relcts{\tau^\kappa_j}{\eqtype{\kappa}{\oft aA}{B}{B'}}$. By
\cref{rul:fun-form-pre,rul:fun-form-kan}, we conclude that
$\relcts{\tau^\kappa_j}{\ceqtype{\kappa}{\picl{a}{A}{B}}{\picl{a}{A'}{B'}}}$, and in
particular, $\ensuremath{\mathsf{PTy}}(\tau^\kappa_j)(\Psi,\picl{a}{A}{B},\picl{a}{A'}{B'},\_)$.
Therefore $\ensuremath{\mathsf{Tm}}(\vper{\Ux})(\picl{a}{A}{B},\picl{a}{A'}{B'})$ as needed.
Parts (2--12) follow the same pattern.
\end{proof}
\begin{rul}[Kan type formation]
$\relcts{\pre\tau_\omega}{\cwftype{Kan}{\UKan[i]}}$.
\end{rul}
\begin{proof}
By \cref{rul:U-intro},
$\relcts{\pre\tau_\omega}{\coftype{\UKan[i]}{\UKan[i+1]}}$; the result follows
by \cref{rul:U-elim}.
\end{proof}
\begin{rul}[Subsumption]
If $\relcts{\pre\tau_\omega}{\ceqtm{A}{A'}{\UKan[i]}}$ then
$\relcts{\pre\tau_\omega}{\ceqtm{A}{A'}{\Upre[i]}}$.
\end{rul}
\begin{proof}
By $\Kan\tau_i\subseteq\pre\tau_i$ we have
$\vper{\UKan[i]}\subseteq\vper{\Upre[i]}$ and thus
$\ensuremath{\mathsf{Tm}}(\vper{\UKan[i]})\subseteq\ensuremath{\mathsf{Tm}}(\vper{\Upre[i]})$.
\end{proof}
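For instance (reading $\Ux$ in \cref{rul:U-intro} at either kind, as the
notation suggests), part (7) gives $\coftype{\ensuremath{\mathsf{bool}}}{\UKan[i]}$; subsumption then
yields $\coftype{\ensuremath{\mathsf{bool}}}{\Upre[i]}$, and cumulativity lifts this to
$\coftype{\ensuremath{\mathsf{bool}}}{\Upre[j]}$ for any $j\geq i$, all in $\pre\tau_\omega$.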
\subsection{Dependent function types}
Let $\tau=\Kan\mu(\nu)$ or $\pre\mu(\nu,\sigma)$ for any cubical type systems
$\nu,\sigma$; in $\tau$,
whenever $\ceqtype{pre}{A}{A'}$,
$\eqtype{pre}{\oft{a}{A}}{B}{B'}$, and
$\phi = \{(\lam{a}{N},\lam{a}{N'}) \mid \eqtm{\oft{a}{A}}{N}{N'}{B}\}$, we have
$\tau(\Psi,\picl{a}{A}{B},\picl{a}{A'}{B'},\phi)$.
Notice that whenever $\ceqtype{pre}{A}{A'}$ and $\eqtype{pre}{\oft{a}{A}}{B}{B'}$,
we have $\ensuremath{\mathsf{PTy}}(\tau)(\Psi,\picl{a}{A}{B},\picl{a}{A'}{B'},\_)$ because
$\sisval{\picl{a}{A}{B}}$ and judgments are preserved by dimension
substitution.
\begin{lemma}\label{lem:fun-preintro}
If $\eqtm{\oft aA}{M}{M'}{B}$ then
$\ensuremath{\mathsf{Tm}}(\vper{\picl{a}{A}{B}})(\lam{a}{M},\lam{a}{M'})$.
\end{lemma}
\begin{proof}
By $\sisval{\lam{a}{M}}$, it suffices to check that
$\vper{\picl{a}{A}{B}}_{\psi}(\lam{a}{\td{M}{\psi}},\lam{a}{\td{M'}{\psi}})$
for any $\tds{\Psi'}{\psi}{\Psi}$; this holds because
$\eqtm[\Psi']{\oft{a}{\td{A}{\psi}}}{\td{M}{\psi}}{\td{M'}{\psi}}{\td{B}{\psi}}$
and $\vper{\picl{a}{A}{B}}_{\psi} =
\vper{\picl{a}{\td{A}{\psi}}{\td{B}{\psi}}}$.
\end{proof}
\begin{rul}[Pretype formation]\label{rul:fun-form-pre}
If $\ceqtype{pre}{A}{A'}$ and
$\eqtype{pre}{\oft aA}{B}{B'}$ then
$\ceqtype{pre}{\picl{a}{A}{B}}{\picl{a}{A'}{B'}}$.
\end{rul}
\begin{proof}
We have $\ensuremath{\mathsf{PTy}}(\tau)(\Psi,\picl{a}{A}{B},\picl{a}{A'}{B'},\alpha)$, and by
\cref{lem:fun-preintro}, $\ensuremath{\mathsf{Coh}}(\alpha)$.
\end{proof}
\begin{rul}[Introduction]\label{rul:fun-intro}
If $\eqtm{\oft aA}{M}{M'}{B}$ then
$\ceqtm{\lam{a}{M}}{\lam{a}{M'}}{\picl{a}{A}{B}}$.
\end{rul}
\begin{proof}
Immediate by \cref{lem:fun-preintro,rul:fun-form-pre}.
\end{proof}
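As a simple illustration, the identity function inhabits every function
pretype: granting the generic judgment $\oftype{\oft{a}{A}}{a}{A}$ for
variables (which we do not re-derive here), \cref{rul:fun-intro} gives
$\coftype{\lam{a}{a}}{\picl{a}{A}{A}}$ whenever $\cwftype{pre}{A}$.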
\begin{lemma}\label{lem:fun-preelim}
If $\coftype{M}{\picl{a}{A}{B}}$ and $\coftype{N}{A}$ then $M\ensuremath{\Downarrow}\lam{a}{O}$
and $\ceqtm{\app{M}{N}}{\subst{O}{N}{a}}{\subst{B}{N}{a}}$.
\end{lemma}
\begin{proof}
For any $\tds{\Psi'}{\psi}{\Psi}$, we know that $\td{M}{\psi}\ensuremath{\Downarrow}\lam{a}{O_\psi}$ and
$\vper{\picl{a}{A}{B}}_\psi(\lam{a}{\td{O_{\id}}{\psi}},\lam{a}{O_\psi})$, and
therefore
$\eqtm[\Psi']{\oft{a}{\td{A}{\psi}}}{\td{O_{\id}}{\psi}}{O_\psi}{\td{B}{\psi}}$.
We apply coherent expansion to $\app{M}{N}$,
$\cwftype{pre}{\subst{B}{N}{a}}$, and
$\{\subst{O_\psi}{\td{N}{\psi}}{a}\}^{\Psi'}_\psi$, by
$\app{\td{M}{\psi}}{\td{N}{\psi}} \ensuremath{\longmapsto}^*
\app{\lam{a}{O_\psi}}{\td{N}{\psi}} \ensuremath{\longmapsto}
\subst{O_\psi}{\td{N}{\psi}}{a}$ and
$\ceqtm[\Psi']
{\subst{O_\psi}{\td{N}{\psi}}{a}}
{\td{(\subst{O_{\id}}{N}{a})}{\psi}}
{\subst{\td{B}{\psi}}{\td{N}{\psi}}{a}}$.
We conclude by \cref{lem:cohexp-ceqtm} that
$\ceqtm{\app{M}{N}}{\subst{O_{\id}}{N}{a}}{\subst{B}{N}{a}}$, as desired.
\end{proof}
\begin{rul}[Elimination]\label{rul:fun-elim}
If $\ceqtm{M}{M'}{\picl{a}{A}{B}}$ and
$\ceqtm{N}{N'}{A}$ then
$\ceqtm{\app{M}{N}}{\app{M'}{N'}}{\subst{B}{N}{a}}$.
\end{rul}
\begin{proof}
By \cref{lem:fun-preelim} we know
$M\ensuremath{\Downarrow}\lam{a}{O}$,
$M'\ensuremath{\Downarrow}\lam{a}{O'}$,
$\ceqtm{\app{M}{N}}{\subst{O}{N}{a}}{\subst{B}{N}{a}}$, and
$\ceqtm{\app{M'}{N'}}{\subst{O'}{N'}{a}}{\subst{B}{N'}{a}}$.
By \cref{lem:coftype-evals-ceqtm},
$\ceqtm{M}{\lam{a}{O}}{\picl{a}{A}{B}}$ and
$\ceqtm{M'}{\lam{a}{O'}}{\picl{a}{A}{B}}$, and so by
$\vper{\picl{a}{A}{B}}(\lam{a}{O},\lam{a}{O'})$,
$\eqtm{\oft{a}{A}}{O}{O'}{B}$.
We conclude
$\ceqtm{\subst{O}{N}{a}}{\subst{O'}{N'}{a}}{\subst{B}{N}{a}}$ and
$\ceqtype{pre}{\subst{B}{N}{a}}{\subst{B}{N'}{a}}$, and the result follows by
symmetry, transitivity, and \cref{lem:ceqtypep-ceqtm}.
\end{proof}
\begin{rul}[Eta]\label{rul:fun-eta}
If $\coftype{M}{\picl{a}{A}{B}}$ then
$\ceqtm{M}{\lam{a}{\app{M}{a}}}{\picl{a}{A}{B}}$.
\end{rul}
\begin{proof}
By \cref{lem:coftype-evals-ceqtm},
$M\ensuremath{\Downarrow}\lam{a}{O}$ and
$\ceqtm{M}{\lam{a}{O}}{\picl{a}{A}{B}}$;
by transitivity and \cref{rul:fun-intro} it suffices to show
$\eqtm{\oft{a}{A}}{O}{\app{M}{a}}{B}$, that is,
for any $\tds{\Psi'}{\psi}{\Psi}$ and $\ceqtm[\Psi']{N}{N'}{\td{A}{\psi}}$,
$\ceqtm[\Psi']{\subst{\td{O}{\psi}}{N}{a}}{\app{\td{M}{\psi}}{N'}}{\subst{\td{B}{\psi}}{N}{a}}$.
By \cref{lem:fun-preelim},
$\ceqtm[\Psi']{\subst{O_\psi}{N'}{a}}{\app{\td{M}{\psi}}{N'}}{\subst{\td{B}{\psi}}{N'}{a}}$,
where $\td{M}{\psi}\ensuremath{\Downarrow}\lam{a}{O_\psi}$. The result then follows by
$\ceqtype{pre}[\Psi']{\subst{\td{B}{\psi}}{N}{a}}{\subst{\td{B}{\psi}}{N'}{a}}$
and
$\eqtm[\Psi']{\oft{a}{\td{A}{\psi}}}{\td{O_{\id}}{\psi}}{O_\psi}{\td{B}{\psi}}$,
the latter by
$\vper{\picl{a}{A}{B}}_\psi(\lam{a}{\td{O}{\psi}},\lam{a}{O_\psi})$.
\end{proof}
\begin{rul}[Computation]
If $\oftype{\oft aA}{M}{B}$ and
$\coftype{N}{A}$ then
$\ceqtm{\app{\lam{a}{M}}{N}}{\subst{M}{N}{a}}{\subst{B}{N}{a}}$.
\end{rul}
\begin{proof}
Immediate by $\coftype{\subst{M}{N}{a}}{\subst{B}{N}{a}}$,
$\app{\lam{a}{M}}{N}\ensuremath{\steps_\stable} \subst{M}{N}{a}$, and
\cref{lem:expansion}.
\end{proof}
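As a minimal worked instance (again granting the variable judgment
$\oftype{\oft{a}{A}}{a}{A}$, and taking $A$ not to mention $a$), the
computation rule applied to the identity function gives
$\ceqtm{\app{\lam{a}{a}}{N}}{N}{A}$ for any $\coftype{N}{A}$, since
$\subst{a}{N}{a} = N$.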
\begin{rul}[Kan type formation]\label{rul:fun-form-kan}
If $\ceqtype{Kan}{A}{A'}$ and
$\eqtype{Kan}{\oft aA}{B}{B'}$ then
$\ceqtype{Kan}{\picl{a}{A}{B}}{\picl{a}{A'}{B'}}$.
\end{rul}
\begin{proof}
By \cref{rul:fun-form-pre}, it suffices to check the five Kan conditions.
($\Hcom$) First, suppose that $\tds{\Psi'}{\psi}{\Psi}$,
\begin{enumerate}
\item $\etc{\xi_i}=\etc{r_i=r_i'}$ is valid,
\item $\ceqtm[\Psi']{M}{M'}{\picl{a}{\td{A}{\psi}}{\td{B}{\psi}}}$,
\item $\ceqtm[\Psi',y]<r_i=r_i',r_j=r_j'>{N_i}{N_j'}%
{\picl{a}{\td{A}{\psi}}{\td{B}{\psi}}}$ for any $i,j$, and
\item $\ceqtm[\Psi']<r_i=r_i'>{\dsubst{N_i}{r}{y}}{M}%
{\picl{a}{\td{A}{\psi}}{\td{B}{\psi}}}$ for any $i$,
\end{enumerate}
and show
$\ceqtm[\Psi']%
{\Hcom*{\picl{a}{\td{A}{\psi}}{\td{B}{\psi}}}{\xi_i}}%
{\Hcom{\picl{a}{\td{A'}{\psi}}{\td{B'}{\psi}}}{r}{r'}{M'}{\sys{\xi_i}{N_i'}}}%
{\picl{a}{\td{A}{\psi}}{\td{B}{\psi}}}$.
By \cref{lem:expansion} on both sides and \cref{rul:fun-intro}, it suffices to
show
\begin{align*}
\ctx{\oft{a}{\td{A}{\psi}}}{} &
{\Hcom{\td{B}{\psi}}{r}{r'}{\app{M}{a}}{\sys{\xi_i}{y.\app{N_i}{a}}}} \\
\ceqtmtab[\Psi']{}
{\Hcom{\td{B'}{\psi}}{r}{r'}{\app{M'}{a}}{\sys{\xi_i}{y.\app{N_i'}{a}}}}
{\td{B}{\psi}}
\end{align*}
or that for any $\tds{\Psi''}{\psi'}{\Psi'}$ and
$\ceqtm[\Psi'']{N}{N'}{\td{A}{\psi\psi'}}$,
\begin{align*}
&{\Hcom{\subst{\td{B}{\psi\psi'}}{N}{a}}{\td{r}{\psi}}{\td{r'}{\psi}}%
{\app{\td{M}{\psi'}}{N}}{\sys{\td{\xi_i}{\psi'}}{y.\app{\td{N_i}{\psi'}}{N}}}} \\
\ceqtmtab[\Psi'']{}
{\Hcom{\subst{\td{B'}{\psi\psi'}}{N'}{a}}{\td{r}{\psi}}{\td{r'}{\psi}}%
{\app{\td{M'}{\psi'}}{N'}}{\sys{\td{\xi_i}{\psi'}}{y.\app{\td{N_i'}{\psi'}}{N'}}}}
{\subst{\td{B}{\psi\psi'}}{N}{a}}.
\end{align*}
By $\eqtype{Kan}{\oft aA}{B}{B'}$ we know
$\ceqtype{Kan}[\Psi'']{\subst{\td{B}{\psi\psi'}}{N}{a}}{\subst{\td{B'}{\psi\psi'}}{N'}{a}}$,
so the result follows by \cref{def:kan} once we establish
\begin{enumerate}
\item $\etc{\td{r_i}{\psi'} = \td{r_i'}{\psi'}}$ is valid,
\item $\ceqtm[\Psi'']{\app{\td{M}{\psi'}}{N}}{\app{\td{M'}{\psi'}}{N'}}%
{\subst{\td{B}{\psi\psi'}}{N}{a}}$,
\item $\ceqtm[\Psi'',y]%
<\td{r_i}{\psi'}=\td{r_i'}{\psi'},\td{r_j}{\psi'}=\td{r_j'}{\psi'}>%
{\app{\td{N_i}{\psi'}}{N}}{\app{\td{N_j'}{\psi'}}{N'}}%
{\subst{\td{B}{\psi\psi'}}{N}{a}}$ for any $i,j$, and
\item $\ceqtm[\Psi'']<\td{r_i}{\psi'}=\td{r_i'}{\psi'}>%
{\app{\td{\dsubst{N_i}{r}{y}}{\psi'}}{N}}{\app{\td{M}{\psi'}}{N'}}%
{\subst{\td{B}{\psi\psi'}}{N}{a}}$ for any $i$.
\end{enumerate}
These follow from our hypotheses and a context-restricted variant of
\cref{rul:fun-elim}, namely that
if $\ceqtm<\Xi>{M}{M'}{\picl{a}{A}{B}}$ and
$\ceqtm<\Xi>{N}{N'}{A}$ then
$\ceqtm<\Xi>{\app{M}{N}}{\app{M'}{N'}}{\subst{B}{N}{a}}$. (This statement is
easily proven by expanding the definition of context-restricted judgments.)
Next, we must show that if $r=r'$ then
$\ceqtm[\Psi']
{\Hcom*{\picl{a}{\td{A}{\psi}}{\td{B}{\psi}}}{\xi_i}}{M}
{\picl{a}{\td{A}{\psi}}{\td{B}{\psi}}}$.
By \cref{lem:expansion} on the left and \cref{rul:fun-eta} on the right, it
suffices to show that
\[
\ceqtm[\Psi']
{\lam{a}{\Hcom{\td{B}{\psi}}{r}{r'}{\app{M}{a}}{\sys{\xi_i}{y.\app{N_i}{a}}}}}
{\lam{a}{\app{M}{a}}}
{\picl{a}{\td{A}{\psi}}{\td{B}{\psi}}}.
\]
By \cref{rul:fun-intro}, we show that for any $\tds{\Psi''}{\psi'}{\Psi'}$ and
$\ceqtm[\Psi'']{N}{N'}{\td{A}{\psi\psi'}}$,
\[
\ceqtm[\Psi'']
{\Hcom{\subst{\td{B}{\psi\psi'}}{N}{a}}{\td{r}{\psi}}{\td{r'}{\psi}}%
{\app{\td{M}{\psi'}}{N}}{\sys{\td{\xi_i}{\psi'}}{y.\app{\td{N_i}{\psi'}}{N}}}}
{\app{\td{M}{\psi'}}{N'}}
{\subst{\td{B}{\psi\psi'}}{N}{a}}.
\]
By $\cwftype{Kan}[\Psi'']{\subst{\td{B}{\psi\psi'}}{N}{a}}$ and $r=r'$ on the left,
it suffices to show
$\ceqtm[\Psi'']{\app{\td{M}{\psi'}}{N}}{\app{\td{M}{\psi'}}{N'}}%
{\subst{\td{B}{\psi\psi'}}{N}{a}}$, which holds by \cref{rul:fun-elim}.
For the final $\Hcom$ property, show that if $r_i=r_i'$ then
$\ceqtm[\Psi']
{\Hcom*{\picl{a}{\td{A}{\psi}}{\td{B}{\psi}}}{\xi_i}}{\dsubst{N_i}{r'}{y}}
{\picl{a}{\td{A}{\psi}}{\td{B}{\psi}}}$. As before, by \cref{lem:expansion} on
the left, \cref{rul:fun-eta} on the right, and \cref{rul:fun-intro}, show that
for any $\tds{\Psi''}{\psi'}{\Psi'}$ and
$\ceqtm[\Psi'']{N}{N'}{\td{A}{\psi\psi'}}$,
\[
\ceqtm[\Psi'']
{\Hcom{\subst{\td{B}{\psi\psi'}}{N}{a}}{\td{r}{\psi}}{\td{r'}{\psi}}%
{\app{\td{M}{\psi'}}{N}}{\sys{\td{\xi_i}{\psi'}}{y.\app{\td{N_i}{\psi'}}{N}}}}
{\app{\td{\dsubst{N_i}{r'}{y}}{\psi'}}{N'}}
{\subst{\td{B}{\psi\psi'}}{N}{a}}.
\]
This follows by $\cwftype{Kan}[\Psi'']{\subst{\td{B}{\psi\psi'}}{N}{a}}$ and
$\td{r_i}{\psi'}=\td{r_i'}{\psi'}$ on the left, and \cref{rul:fun-elim}.
($\Coe$) Now, suppose that $\tds{(\Psi',x)}{\psi}{\Psi}$ and
$\ceqtm[\Psi']{M}{M'}%
{\picl{a}{\dsubst{\td{A}{\psi}}{r}{x}}{\dsubst{\td{B}{\psi}}{r}{x}}}$, and show
that
$\ceqtm[\Psi']{\Coe*{x.\picl{a}{\td{A}{\psi}}{\td{B}{\psi}}}}%
{\Coe{x.\picl{a}{\td{A'}{\psi}}{\td{B'}{\psi}}}{r}{r'}{M'}}%
{\picl{a}{\dsubst{\td{A}{\psi}}{r'}{x}}{\dsubst{\td{B}{\psi}}{r'}{x}}}$.
By \cref{lem:expansion} on both sides and \cref{rul:fun-intro}, we must show
that for any $\tds{\Psi''}{\psi'}{\Psi'}$ and
$\ceqtm[\Psi'']{N}{N'}{\dsubst{\td{A}{\psi\psi'}}{\td{r'}{\psi'}}{x}}$,
\begin{align*}
&{\Coe{x.\subst{\td{B}{\psi\psi'}}{\Coe{x.\td{A}{\psi\psi'}}{\td{r'}{\psi'}}{x}{N}}{a}}%
{\td{r}{\psi'}}{\td{r'}{\psi'}}%
{\app{\td{M}{\psi'}}{\Coe{x.\td{A}{\psi\psi'}}{\td{r'}{\psi'}}{\td{r}{\psi'}}{N}}}} \\
\ceqtmtab[\Psi'']{}%
{\Coe{x.\subst{\td{B'}{\psi\psi'}}{\Coe{x.\td{A'}{\psi\psi'}}{\td{r'}{\psi'}}{x}{N'}}{a}}%
{\td{r}{\psi'}}{\td{r'}{\psi'}}%
{\app{\td{M'}{\psi'}}{\Coe{x.\td{A'}{\psi\psi'}}{\td{r'}{\psi'}}{\td{r}{\psi'}}{N'}}}}%
{\subst{\dsubst{\td{B}{\psi\psi'}}{\td{r'}{\psi'}}{x}}{N}{a}}.
\end{align*}
By $\ceqtype{Kan}[\Psi'',x]{\td{A}{\psi\psi'}}{\td{A'}{\psi\psi'}}$, we have
$\ceqtm[\Psi'']%
{\Coe{x.\td{A}{\psi\psi'}}{\td{r'}{\psi'}}{x}{N}}%
{\Coe{x.\td{A'}{\psi\psi'}}{\td{r'}{\psi'}}{x}{N'}}%
{\dsubst{\td{A}{\psi\psi'}}{\td{r'}{\psi'}}{x}}$, and the corresponding instances
of $\td{B}{\psi\psi'}$ and $\td{B'}{\psi\psi'}$ are equal as Kan types.
By \cref{rul:fun-elim} we have
\[
\ceqtm[\Psi'']
{\app{\td{M}{\psi'}}{\Coe{x.\td{A}{\psi\psi'}}{\td{r'}{\psi'}}{\td{r}{\psi'}}{N}}}
{\app{\td{M'}{\psi'}}{\Coe{x.\td{A'}{\psi\psi'}}{\td{r'}{\psi'}}{\td{r}{\psi'}}{N'}}}
{\subst{\dsubst{\td{B}{\psi\psi'}}{\td{r}{\psi'}}{x}}{\Coe{x.\td{A}{\psi\psi'}}{\td{r'}{\psi'}}{\td{r}{\psi'}}{N}}{a}}
\]
so the above $\Coe$ are equal in
$\subst{\dsubst{\td{B}{\psi\psi'}}{\td{r'}{\psi'}}{x}}%
{\Coe{x.\td{A}{\psi\psi'}}{\td{r'}{\psi'}}{\td{r'}{\psi'}}{N}}{a}$. The result
follows by \cref{lem:ceqtypep-ceqtm} and
$\ceqtm[\Psi'']{\Coe{x.\td{A}{\psi\psi'}}{\td{r'}{\psi'}}{\td{r'}{\psi'}}{N}}%
{N}{\dsubst{\td{A}{\psi\psi'}}{\td{r'}{\psi'}}{x}}$.
Finally, show that if $r=r'$ then
$\ceqtm[\Psi']{\Coe*{x.\picl{a}{\td{A}{\psi}}{\td{B}{\psi}}}}{M}%
{\picl{a}{\dsubst{\td{A}{\psi}}{r'}{x}}{\dsubst{\td{B}{\psi}}{r'}{x}}}$.
By \cref{lem:expansion} on the left, \cref{rul:fun-eta} on the right, and
\cref{rul:fun-intro}, it suffices to show that for any
$\tds{\Psi''}{\psi'}{\Psi'}$ and
$\ceqtm[\Psi'']{N}{N'}{\dsubst{\td{A}{\psi\psi'}}{\td{r'}{\psi'}}{x}}$,
\[
\ceqtm[\Psi'']
{\Coe{x.\subst{\td{B}{\psi\psi'}}{\Coe{x.\td{A}{\psi\psi'}}{\td{r'}{\psi'}}{x}{N}}{a}}%
{\td{r}{\psi'}}{\td{r'}{\psi'}}%
{\app{\td{M}{\psi'}}{\Coe{x.\td{A}{\psi\psi'}}{\td{r'}{\psi'}}{\td{r}{\psi'}}{N}}}}
{\app{\td{M}{\psi'}}{N'}}
{\subst{\dsubst{\td{B}{\psi\psi'}}{\td{r'}{\psi'}}{x}}{N}{a}}.
\]
By $\td{r}{\psi'}=\td{r'}{\psi'}$, $\cwftype{Kan}[\Psi'',x]{\td{A}{\psi\psi'}}$,
\cref{rul:fun-elim}, and
$\cwftype{Kan}[\Psi'',x]{\subst{\td{B}{\psi\psi'}}{\Coe{x.\td{A}{\psi\psi'}}{\td{r'}{\psi'}}{x}{N}}{a}}$,
it suffices to show
$\ceqtm[\Psi'']{\app{\td{M}{\psi'}}{N}}{\app{\td{M}{\psi'}}{N'}}%
{\subst{\dsubst{\td{B}{\psi\psi'}}{\td{r'}{\psi'}}{x}}{N}{a}}$, which again
follows by \cref{rul:fun-elim}.
\end{proof}
\subsection{Dependent pair types}
Let $\tau=\Kan\mu(\nu)$ or $\pre\mu(\nu,\sigma)$ for any cubical type systems
$\nu,\sigma$; in $\tau$,
whenever $\ceqtype{pre}{A}{A'}$,
$\eqtype{pre}{\oft{a}{A}}{B}{B'}$, and
$\phi = \{(\pair{M}{N},\pair{M'}{N'}) \mid
\ceqtm{M}{M'}{A} \land \ceqtm{N}{N'}{\subst{B}{M}{a}}\}$, we have
$\tau(\Psi,\sigmacl{a}{A}{B},\sigmacl{a}{A'}{B'},\phi)$.
\begin{rul}[Pretype formation]\label{rul:pair-form-pre}
If $\ceqtype{pre}{A}{A'}$ and $\eqtype{pre}{\oft aA}{B}{B'}$ then
$\ceqtype{pre}{\sigmacl{a}{A}{B}}{\sigmacl{a}{A'}{B'}}$.
\end{rul}
\begin{proof}
We have $\ensuremath{\mathsf{PTy}}(\tau)(\Psi,\sigmacl{a}{A}{B},\sigmacl{a}{A'}{B'},\_)$ because
$\sisval{\sigmacl{a}{A}{B}}$ and judgments are preserved by dimension
substitution. For $\ensuremath{\mathsf{Coh}}(\vper{\sigmacl{a}{A}{B}})$, assume
$\vper{\sigmacl{a}{A}{B}}_\psi(\pair{M}{N},\pair{M'}{N'})$.
Then $\ceqtm[\Psi']{M}{M'}{\td{A}{\psi}}$ and
$\ceqtm[\Psi']{N}{N'}{\subst{\td{B}{\psi}}{M}{a}}$; again,
$\sisval{\pair{M}{N}}$ and these judgments are preserved by dimension
substitution, so
$\ensuremath{\mathsf{Tm}}(\td{\vper{\sigmacl{a}{A}{B}}}{\psi})(\pair{M}{N},\pair{M'}{N'})$.
\end{proof}
\begin{rul}[Introduction]\label{rul:pair-intro}
If $\ceqtm{M}{M'}{A}$ and $\ceqtm{N}{N'}{\subst{B}{M}{a}}$ then
$\ceqtm{\pair{M}{N}}{\pair{M'}{N'}}{\sigmacl{a}{A}{B}}$.
\end{rul}
\begin{proof}
Immediate by \cref{rul:pair-form-pre}.
\end{proof}
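For example, when $B$ does not mention $a$ (so that $\subst{B}{M}{a} = B$, and
$\eqtype{pre}{\oft{a}{A}}{B}{B}$ holds by weakening, which we take for granted
here), \cref{rul:pair-intro} specializes to the non-dependent product: if
$\coftype{M}{A}$ and $\coftype{N}{B}$ then
$\coftype{\pair{M}{N}}{\sigmacl{a}{A}{B}}$.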
\begin{rul}[Elimination]\label{rul:pair-elim}
If $\ceqtm{P}{P'}{\sigmacl{a}{A}{B}}$ then
$\ceqtm{\fst{P}}{\fst{P'}}{A}$ and
$\ceqtm{\snd{P}}{\snd{P'}}{\subst{B}{\fst{P}}{a}}$.
\end{rul}
\begin{proof}
For any $\tds{\Psi'}{\psi}{\Psi}$, $\td{P}{\psi}\ensuremath{\Downarrow}\pair{M_\psi}{N_\psi}$,
$\coftype[\Psi']{M_\psi}{\td{A}{\psi}}$, and
$\coftype[\Psi']{N_\psi}{\subst{\td{B}{\psi}}{M_\psi}{a}}$. For part (1), apply
coherent expansion to $\fst{P}$ with family $\{M_\psi\}^{\Psi'}_\psi$; then
$\ceqtm[\Psi']{\td{(M_{\id})}{\psi}}{M_\psi}{\td{A}{\psi}}$ by
$\coftype{P}{\sigmacl{a}{A}{B}}$ at $\id,\psi$. By \cref{lem:cohexp-ceqtm},
$\ceqtm{\fst{P}}{M_{\id}}{A}$, and part (1) follows by
$\ceqtm{M_{\id}}{M'_{\id}}{A}$ and a symmetric argument on the right side.
For part (2), apply coherent expansion to $\snd{P}$ with family
$\{N_\psi\}^{\Psi'}_\psi$. We have $\ceqtm[\Psi']{\td{(N_{\id})}{\psi}}{N_\psi}%
{\subst{\td{B}{\psi}}{\td{(M_{\id})}{\psi}}{a}}$ by
$\coftype{P}{\sigmacl{a}{A}{B}}$ at $\id,\psi$, so by \cref{lem:cohexp-ceqtm},
$\ceqtm{\snd{P}}{N_{\id}}{\subst{B}{M_{\id}}{a}}$. Part (2) follows by
$\ceqtype{pre}{\subst{B}{M_{\id}}{a}}{\subst{B}{\fst{P}}{a}}$ (by
$\eqtype{pre}{\oft{a}{A}}{B}{B'}$ and $\ceqtm{M_{\id}}{\fst{P}}{A}$),
$\ceqtm{N_{\id}}{N'_{\id}}{\subst{B}{M_{\id}}{a}}$, and
a symmetric argument on the right side.
\end{proof}
\begin{rul}[Computation]
If $\coftype{M}{A}$ then $\ceqtm{\fst{\pair{M}{N}}}{M}{A}$.
If $\coftype{N}{B}$ then $\ceqtm{\snd{\pair{M}{N}}}{N}{B}$.
\end{rul}
\begin{proof}
Immediate by \cref{lem:expansion}.
\end{proof}
\begin{rul}[Eta]\label{rul:pair-eta}
If $\coftype{P}{\sigmacl{a}{A}{B}}$ then
$\ceqtm{P}{\pair{\fst{P}}{\snd{P}}}{\sigmacl{a}{A}{B}}$.
\end{rul}
\begin{proof}
By \cref{lem:coftype-evals-ceqtm}, $P\ensuremath{\Downarrow}\pair{M}{N}$,
$\ceqtm{P}{\pair{M}{N}}{\sigmacl{a}{A}{B}}$, $\coftype{M}{A}$, and
$\coftype{N}{\subst{B}{M}{a}}$. By \cref{rul:pair-intro,lem:coftype-ceqtm} and
transitivity, it suffices to show $\lift{\vper{A}}(M,\fst{P})$ and
$\lift{\vper{\subst{B}{M}{a}}}(N,\snd{P})$. This is immediate by
$\fst{P}\ensuremath{\longmapsto}^*\fst{\pair{M}{N}}\ensuremath{\longmapsto} M$ and
$\snd{P}\ensuremath{\longmapsto}^*\snd{\pair{M}{N}}\ensuremath{\longmapsto} N$.
\end{proof}
\begin{rul}[Kan type formation]
If $\ceqtype{Kan}{A}{A'}$ and $\eqtype{Kan}{\oft aA}{B}{B'}$ then
$\ceqtype{Kan}{\sigmacl{a}{A}{B}}{\sigmacl{a}{A'}{B'}}$.
\end{rul}
\begin{proof}
It suffices to check the five Kan conditions.
($\Hcom$) First, suppose that $\tds{\Psi'}{\psi}{\Psi}$,
\begin{enumerate}
\item $\etc{r_i=r_i'}$ is valid,
\item $\ceqtm[\Psi']{M}{M'}{\sigmacl{a}{\td{A}{\psi}}{\td{B}{\psi}}}$,
\item $\ceqtm[\Psi',y]<r_i=r_i',r_j=r_j'>{N_i}{N_j'}{\sigmacl{a}{\td{A}{\psi}}{\td{B}{\psi}}}$
for any $i,j$, and
\item $\ceqtm[\Psi']<r_i=r_i'>{\dsubst{N_i}{r}{y}}{M}{\sigmacl{a}{\td{A}{\psi}}{\td{B}{\psi}}}$
for any $i$,
\end{enumerate}
and show
$\ceqtm[\Psi']{\Hcom*{\sigmacl{a}{\td{A}{\psi}}{\td{B}{\psi}}}{\xi_i}}%
{\Hcom{\sigmacl{a}{\td{A'}{\psi}}{\td{B'}{\psi}}}{r}{r'}{M'}{\sys{\xi_i}{y.N_i'}}}%
{\sigmacl{a}{\td{A}{\psi}}{\td{B}{\psi}}}$. By \cref{lem:expansion} on both
sides and \cref{rul:pair-intro}, it suffices to show (the binary version of)
\begin{gather*}
\coftype[\Psi']
{\Hcom{\td{A}{\psi}}{r}{r'}{\fst{M}}{\sys{\xi_i}{y.\fst{N_i}}}}
{\td{A}{\psi}} \\
\coftype[\Psi']
{\Com{z.\subst{\td{B}{\psi}}{F}{a}}{r}{r'}{\snd{M}}{\sys{\xi_i}{y.\snd{N_i}}}}
{\subst{\td{B}{\psi}}{\Hcom{\td{A}{\psi}}}{a}} \\
\text{where}\ F={\Hcom{\td{A}{\psi}}{r}{z}{\fst{M}}{\sys{\xi_i}{y.\fst{N_i}}}}.
\end{gather*}
We have $\coftype[\Psi']{\Hcom{\td{A}{\psi}}}{\td{A}{\psi}}$ and
$\coftype[\Psi',z]{F}{\td{A}{\psi}}$ by $\cwftype{Kan}{A}$ and \cref{rul:pair-elim}.
We show $\coftype[\Psi']{\Com{z.\subst{\td{B}{\psi}}{F}{a}}}%
{\subst{\td{B}{\psi}}{\Hcom{\td{A}{\psi}}}{a}}$ by \cref{thm:com},
observing that $\cwftype{Kan}[\Psi',z]{\subst{\td{B}{\psi}}{F}{a}}$,
$\dsubst{F}{r'}{z} = \Hcom{\td{A}{\psi}}$,
\begin{enumerate}
\item $\coftype[\Psi']{\snd{M}}{\subst{\td{B}{\psi}}{\dsubst{F}{r}{z}}{a}}$
by $\ceqtm[\Psi']{\dsubst{F}{r}{z}}{\fst{M}}{\td{A}{\psi}}$ and
\cref{rul:pair-elim},
\item $\ceqtm[\Psi',y]<r_i=r_i',r_j=r_j'>{\snd{N_i}}{\snd{N_j}}%
{\subst{\td{B}{\psi}}{\dsubst{F}{y}{z}}{a}}$ by
$\ceqtm[\Psi',y]<r_i=r_i'>{\dsubst{F}{y}{z}}{\fst{N_i}}{\td{A}{\psi}}$ and
\cref{rul:pair-elim}, and
\item $\ceqtm[\Psi']<r_i=r_i'>{\snd{\dsubst{N_i}{r}{y}}}{\snd{M}}%
{\subst{\td{B}{\psi}}{\dsubst{F}{r}{z}}{a}}$
by $\ceqtm[\Psi']{\dsubst{F}{r}{z}}{\fst{M}}{\td{A}{\psi}}$ and
\cref{rul:pair-elim}.
\end{enumerate}
Next, we must show that if $r=r'$ then
$\ceqtm[\Psi']{\Hcom{\sigmacl{a}{\td{A}{\psi}}{\td{B}{\psi}}}}{M}%
{\sigmacl{a}{\td{A}{\psi}}{\td{B}{\psi}}}$. By \cref{lem:expansion},
$\ceqtm[\Psi']{\Hcom{\sigmacl{a}{\td{A}{\psi}}{\td{B}{\psi}}}}%
{\pair{\Hcom{\td{A}{\psi}}}{\Com{z.\subst{\td{B}{\psi}}{F}{a}}}}%
{\sigmacl{a}{\td{A}{\psi}}{\td{B}{\psi}}}$. By \cref{def:kan,thm:com},
$\ceqtm[\Psi']{\Hcom{\td{A}{\psi}}}{\fst{M}}{\td{A}{\psi}}$,
$\ceqtm[\Psi']{\Com{z.\subst{\td{B}{\psi}}{F}{a}}}{\snd{M}}%
{\subst{\td{B}{\psi}}{\dsubst{F}{r}{z}}{a}}$, and
$\ceqtype{Kan}[\Psi']{\subst{\td{B}{\psi}}{\dsubst{F}{r}{z}}{a}}
{\subst{\td{B}{\psi}}{\fst{M}}{a}}$. The result follows by \cref{rul:pair-eta}.
For the final $\Hcom$ property, show that if $r_i=r_i'$ then
$\ceqtm[\Psi']{\Hcom{\sigmacl{a}{\td{A}{\psi}}{\td{B}{\psi}}}}{\dsubst{N_i}{r'}{y}}%
{\sigmacl{a}{\td{A}{\psi}}{\td{B}{\psi}}}$. The result follows by
$\ceqtm[\Psi']{\Hcom{\td{A}{\psi}}}{\fst{\dsubst{N_i}{r'}{y}}}{\td{A}{\psi}}$,
$\ceqtm[\Psi']{\Com{z.\subst{\td{B}{\psi}}{F}{a}}}{\snd{\dsubst{N_i}{r'}{y}}}%
{\subst{\td{B}{\psi}}{\dsubst{F}{r'}{z}}{a}}$, and
$\ceqtype{Kan}[\Psi']{\subst{\td{B}{\psi}}{\dsubst{F}{r'}{z}}{a}}
{\subst{\td{B}{\psi}}{\fst{\dsubst{N_i}{r'}{y}}}{a}}$.
($\Coe$) Now, suppose that $\tds{(\Psi',x)}{\psi}{\Psi}$ and
$\ceqtm[\Psi']{M}{M'}{\dsubst{(\sigmacl{a}{\td{A}{\psi}}{\td{B}{\psi}})}{r}{x}}$,
and show $\ceqtm[\Psi']{\Coe*{x.\sigmacl{a}{\td{A}{\psi}}{\td{B}{\psi}}}}%
{\Coe{x.\sigmacl{a}{\td{A'}{\psi}}{\td{B'}{\psi}}}{r}{r'}{M'}}%
{\dsubst{(\sigmacl{a}{\td{A}{\psi}}{\td{B}{\psi}})}{r'}{x}}$. By
\cref{lem:expansion} and \cref{rul:pair-intro}, it suffices to show (the binary
version of)
\begin{gather*}
\coftype[\Psi']{\Coe{x.\td{A}{\psi}}{r}{r'}{\fst{M}}}{\dsubst{\td{A}{\psi}}{r'}{x}} \\
\coftype[\Psi']{\Coe{x.\subst{\td{B}{\psi}}{\Coe{x.\td{A}{\psi}}{r}{x}{\fst{M}}}{a}}{r}{r'}{\snd{M}}}%
{\subst{\dsubst{\td{B}{\psi}}{r'}{x}}{\Coe{x.\td{A}{\psi}}{r}{r'}{\fst{M}}}{a}}
\end{gather*}
We know that $\coftype[\Psi']{\Coe{x.\td{A}{\psi}}{r}{r'}{\fst{M}}}%
{\dsubst{\td{A}{\psi}}{r'}{x}}$ and
$\cwftype{Kan}[\Psi',x]{\subst{\td{B}{\psi}}{\Coe{x.\td{A}{\psi}}{r}{x}{\fst{M}}}{a}}$
by $\cwftype{Kan}[\Psi',x]{\td{A}{\psi}}$,
$\wftype{Kan}[\Psi',x]{\oft{a}{\td{A}{\psi}}}{\td{B}{\psi}}$, and
\cref{rul:pair-elim}. We also know that
$\coftype[\Psi']{\snd{M}}{\subst{\dsubst{\td{B}{\psi}}{r}{x}}{\fst{M}}{a}}$ and
$\ceqtm[\Psi']{\dsubst{(\Coe{x.\td{A}{\psi}}{r}{x}{\fst{M}})}{r}{x}}%
{\fst{M}}{\dsubst{\td{A}{\psi}}{r}{x}}$, so
$\coftype[\Psi']{\Coe{x.\subst{\td{B}{\psi}}{\dots}{a}}}%
{\subst{\dsubst{\td{B}{\psi}}{r'}{x}}{\Coe{x.\td{A}{\psi}}{r}{r'}{\fst{M}}}{a}}$
and the result follows.
Finally, show that if $r=r'$ then
$\ceqtm[\Psi']{\Coe{x.\sigmacl{a}{\td{A}{\psi}}{\td{B}{\psi}}}{r}{r}{M}}{M}%
{\dsubst{(\sigmacl{a}{\td{A}{\psi}}{\td{B}{\psi}})}{r}{x}}$. By
\cref{lem:expansion,rul:pair-intro,rul:pair-eta}, this follows from
$\ceqtm[\Psi']{\Coe{x.\td{A}{\psi}}}{\fst{M}}{\dsubst{\td{A}{\psi}}{r}{x}}$
and $\ceqtm[\Psi']{\Coe{x.\subst{\td{B}{\psi}}{\dots}{a}}}{\snd{M}}%
{\subst{\dsubst{\td{B}{\psi}}{r}{x}}{\fst{M}}{a}}$.
\end{proof}
\subsection{Path types}
Let $\tau=\Kan\mu(\nu)$ or $\pre\mu(\nu,\sigma)$ for any cubical type systems
$\nu,\sigma$; in $\tau$,
whenever $\ceqtype{pre}[\Psi,x]{A}{A'}$,
$\ceqtm{P_\ensuremath{\varepsilon}}{P_\ensuremath{\varepsilon}'}{\dsubst{A}{\ensuremath{\varepsilon}}{x}}$ for $\ensuremath{\varepsilon}\in\{0,1\}$, and
$\phi = \{(\dlam{x}{M},\dlam{x}{M'}) \mid
\ceqtm[\Psi,x]{M}{M'}{A} \land
\forall\ensuremath{\varepsilon}. (\ceqtm{\dsubst{M}{\ensuremath{\varepsilon}}{x}}{P_\ensuremath{\varepsilon}}{\dsubst{A}{\ensuremath{\varepsilon}}{x}})\}$, we have
$\tau(\Psi,\Path{x.A}{P_0}{P_1},\Path{x.A'}{P_0'}{P_1'},\phi)$.
\begin{rul}[Pretype formation]\label{rul:path-form-pre}
If $\ceqtype{pre}[\Psi,x]{A}{A'}$ and
$\ceqtm{P_\ensuremath{\varepsilon}}{P_\ensuremath{\varepsilon}'}{\dsubst{A}{\ensuremath{\varepsilon}}{x}}$ for $\ensuremath{\varepsilon}\in\{0,1\}$, then
$\ceqtype{pre}{\Path{x.A}{P_0}{P_1}}{\Path{x.A'}{P_0'}{P_1'}}$.
\end{rul}
\begin{proof}
We have $\ensuremath{\mathsf{PTy}}(\tau)(\Psi,\Path{x.A}{P_0}{P_1},\Path{x.A'}{P_0'}{P_1'},\_)$
because $\sisval{\Path{x.A}{P_0}{P_1}}$ and judgments are preserved by dimension
substitution. To show $\ensuremath{\mathsf{Coh}}(\vper{\Path{x.A}{P_0}{P_1}})$, suppose that
$\vper{\Path{x.A}{P_0}{P_1}}(\dlam{x}{M},\dlam{x}{M'})$. Then
$\ceqtm[\Psi,x]{M}{M'}{A}$ and
$\ceqtm{\dsubst{M}{\ensuremath{\varepsilon}}{x}}{P_\ensuremath{\varepsilon}}{\dsubst{A}{\ensuremath{\varepsilon}}{x}}$, so
$\ceqtm[\Psi',x]{\td{M}{\psi}}{\td{M'}{\psi}}{\td{A}{\psi}}$ and
$\ceqtm[\Psi']{\dsubst{\td{M}{\psi}}{\ensuremath{\varepsilon}}{x}}{\td{P_\ensuremath{\varepsilon}}{\psi}}{\dsubst{\td{A}{\psi}}{\ensuremath{\varepsilon}}{x}}$
for any $\tds{\Psi'}{\psi}{\Psi}$, so by $\sisval{\dlam{x}{M}}$,
$\ensuremath{\mathsf{Tm}}(\td{\vper{\Path{x.A}{P_0}{P_1}}}{\psi})(\dlam{x}{M},\dlam{x}{M'})$.
\end{proof}
\begin{rul}[Introduction]\label{rul:path-intro}
If $\ceqtm[\Psi,x]{M}{M'}{A}$ and
$\ceqtm{\dsubst{M}{\ensuremath{\varepsilon}}{x}}{P_\ensuremath{\varepsilon}}{\dsubst{A}{\ensuremath{\varepsilon}}{x}}$ for $\ensuremath{\varepsilon}\in\{0,1\}$, then
$\ceqtm{\dlam{x}{M}}{\dlam{x}{M'}}{\Path{x.A}{P_0}{P_1}}$.
\end{rul}
\begin{proof}
Then $\vper{\Path{x.A}{P_0}{P_1}}(\dlam{x}{M},\dlam{x}{M'})$, so the result
follows by $\ensuremath{\mathsf{Coh}}(\vper{\Path{x.A}{P_0}{P_1}})$.
\end{proof}
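For example, when $x$ occurs in neither $A$ nor $M$, we have
$\dsubst{M}{\ensuremath{\varepsilon}}{x} = M$, and (granting that $\coftype{M}{A}$ weakens to
$\coftype[\Psi,x]{M}{A}$ for fresh $x$) \cref{rul:path-intro} yields the
constant path $\coftype{\dlam{x}{M}}{\Path{x.A}{M}{M}}$, the cubical analogue
of reflexivity.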
\begin{rul}[Elimination]\label{rul:path-elim}
~\begin{enumerate}
\item If $\ceqtm{M}{M'}{\Path{x.A}{P_0}{P_1}}$ then
$\ceqtm{\dapp{M}{r}}{\dapp{M'}{r}}{\dsubst{A}{r}{x}}$.
\item If $\coftype{M}{\Path{x.A}{P_0}{P_1}}$ then
$\ceqtm{\dapp{M}{\ensuremath{\varepsilon}}}{P_\ensuremath{\varepsilon}}{\dsubst{A}{\ensuremath{\varepsilon}}{x}}$.
\end{enumerate}
\end{rul}
\begin{proof}
Apply coherent expansion to $\dapp{M}{r}$ with family
$\{ \dsubst{M_\psi}{\td{r}{\psi}}{x} \mid \td{M}{\psi}\ensuremath{\Downarrow} \dlam{x}{M_\psi}
\}^{\Psi'}_\psi$. By $\coftype{M}{\Path{x.A}{P_0}{P_1}}$ at $\id,\psi$ we know
$\ceqtm[\Psi',x]{\td{(M_{\id})}{\psi}}{M_\psi}{\td{A}{\psi}}$, so
$\ceqtm[\Psi']{\dsubst{\td{(M_{\id})}{\psi}}{\td{r}{\psi}}{x}}%
{\dsubst{M_\psi}{\td{r}{\psi}}{x}}{\td{\dsubst{A}{r}{x}}{\psi}}$.
Thus by \cref{lem:cohexp-ceqtm},
$\ceqtm{\dapp{M}{r}}{\dsubst{M_{\id}}{r}{x}}{\dsubst{A}{r}{x}}$; part (1)
follows by the same argument on the right side and
$\ceqtm[\Psi,x]{M_{\id}}{M'_{\id}}{A}$.
Part (2) follows from
$\ceqtm{\dapp{M}{\ensuremath{\varepsilon}}}{\dsubst{M_{\id}}{\ensuremath{\varepsilon}}{x}}{\dsubst{A}{\ensuremath{\varepsilon}}{x}}$ and
$\ceqtm{\dsubst{M_{\id}}{\ensuremath{\varepsilon}}{x}}{P_\ensuremath{\varepsilon}}{\dsubst{A}{\ensuremath{\varepsilon}}{x}}$.
\end{proof}
\begin{rul}[Computation]\label{rul:path-comp}
If $\coftype[\Psi,x]{M}{A}$ then
$\ceqtm{\dapp{(\dlam{x}{M})}{r}}{\dsubst{M}{r}{x}}{\dsubst{A}{r}{x}}$.
\end{rul}
\begin{proof}
Immediate by $\dapp{(\dlam{x}{M})}{r}\ensuremath{\steps_\stable}\dsubst{M}{r}{x}$,
$\coftype{\dsubst{M}{r}{x}}{\dsubst{A}{r}{x}}$, and \cref{lem:expansion}.
\end{proof}
\begin{rul}[Eta]\label{rul:path-eta}
If $\coftype{M}{\Path{x.A}{P_0}{P_1}}$ then
$\ceqtm{M}{\dlam{x}{(\dapp{M}{x})}}{\Path{x.A}{P_0}{P_1}}$.
\end{rul}
\begin{proof}
By \cref{lem:coftype-evals-ceqtm}, $M\ensuremath{\Downarrow}\dlam{x}{N}$ and
$\ceqtm{M}{\dlam{x}{N}}{\Path{x.A}{P_0}{P_1}}$. By \cref{rul:path-elim},
$\ceqtm[\Psi,x]{\dapp{M}{x}}{\dapp{(\dlam{x}{N})}{x}}{A}$, so by
\cref{lem:expansion} on the right, $\ceqtm[\Psi,x]{\dapp{M}{x}}{N}{A}$.
By \cref{rul:path-intro},
$\ceqtm{\dlam{x}{(\dapp{M}{x})}}{\dlam{x}{N}}{\Path{x.A}{P_0}{P_1}}$, and the
result follows by transitivity.
\end{proof}
\begin{rul}[Kan type formation]
If $\ceqtype{Kan}[\Psi,x]{A}{A'}$ and
$\ceqtm{P_\ensuremath{\varepsilon}}{P_\ensuremath{\varepsilon}'}{\dsubst{A}{\ensuremath{\varepsilon}}{x}}$ for $\ensuremath{\varepsilon}\in\{0,1\}$, then
$\ceqtype{Kan}{\Path{x.A}{P_0}{P_1}}{\Path{x.A'}{P_0'}{P_1'}}$.
\end{rul}
\begin{proof}
It suffices to check the five Kan conditions.
($\Hcom$) First, suppose that $\tds{\Psi'}{\psi}{\Psi}$,
\begin{enumerate}
\item $\etc{\xi_i}=\etc{r_i=r_i'}$ is valid,
\item $\ceqtm[\Psi']{M}{M'}{\Path{x.\td{A}{\psi}}{\td{P_0}{\psi}}{\td{P_1}{\psi}}}$,
\item $\ceqtm[\Psi',y]<r_i=r_i',r_j=r_j'>{N_i}{N_j'}%
{\Path{x.\td{A}{\psi}}{\td{P_0}{\psi}}{\td{P_1}{\psi}}}$ for any $i,j$, and
\item $\ceqtm[\Psi']<r_i=r_i'>{\dsubst{N_i}{r}{y}}{M}%
{\Path{x.\td{A}{\psi}}{\td{P_0}{\psi}}{\td{P_1}{\psi}}}$ for any $i$,
\end{enumerate}
and show the equality
$\ceqtm[\Psi']%
{\Hcom*{\td{(\Path{x.A}{P_0}{P_1})}{\psi}}{\xi_i}}%
{\Hcom{\td{(\Path{x.A'}{P_0'}{P_1'})}{\psi}}{r}{r'}{M'}{\sys{\xi_i}{N_i'}}}%
{\td{(\Path{x.A}{P_0}{P_1})}{\psi}}$. By \cref{lem:expansion,rul:path-intro} on
both sides it suffices to show
\begin{align*}
&{\Hcom{\td{A}{\psi}}{r}{r'}{\dapp{M}{x}}{\sys{x=\ensuremath{\varepsilon}}{\_.\td{P_\ensuremath{\varepsilon}}{\psi}},\sys{\xi_i}{y.\dapp{N_i}{x}}}} \\
\ceqtmtab[\Psi',x]{}%
{\Hcom{\td{A'}{\psi}}{r}{r'}{\dapp{M'}{x}}{\sys{x=\ensuremath{\varepsilon}}{\_.\td{P_\ensuremath{\varepsilon}'}{\psi}},\sys{\xi_i}{y.\dapp{N_i'}{x}}}}%
{\td{A}{\psi}}
\end{align*}
and
$\ceqtm[\Psi']{\dsubst{(\Hcom{\td{A}{\psi}})}{\ensuremath{\varepsilon}}{x}}{\td{P_\ensuremath{\varepsilon}}{\psi}}{\dsubst{\td{A}{\psi}}{\ensuremath{\varepsilon}}{x}}$.
By our hypotheses and \cref{rul:path-elim},
\begin{enumerate}
\item $\ceqtm[\Psi',x]{\dapp{M}{x}}{\dapp{M'}{x}}{\td{A}{\psi}}$,
\item $\ceqtm[\Psi',x]<x=\ensuremath{\varepsilon}>{\td{P_\ensuremath{\varepsilon}}{\psi}}{\td{P_\ensuremath{\varepsilon}'}{\psi}}{\td{A}{\psi}}$ and
$\ceqtm[\Psi',x]<x=\ensuremath{\varepsilon}>{\td{P_\ensuremath{\varepsilon}}{\psi}}{\dapp{M}{x}}{\td{A}{\psi}}$,
\item
$\ceqtm[\Psi',x,y]<r_i=r_i',r_j=r_j'>{\dapp{N_i}{x}}{\dapp{N_j'}{x}}{\td{A}{\psi}}$,
$\ceqtm[\Psi',x,y]<r_i=r_i',x=\ensuremath{\varepsilon}>{\dapp{N_i}{x}}{\td{P_\ensuremath{\varepsilon}'}{\psi}}{\td{A}{\psi}}$, and
$\ceqtm[\Psi',x]<r_i=r_i'>{\dapp{\dsubst{N_i}{r}{y}}{x}}{\dapp{M}{x}}{\td{A}{\psi}}$,
\end{enumerate}
and so by \cref{def:kan},
$\ceqtm[\Psi',x]{\Hcom{\td{A}{\psi}}}{\Hcom{\td{A'}{\psi}}}{\td{A}{\psi}}$ and
$\ceqtm[\Psi']{\dsubst{(\Hcom{\td{A}{\psi}})}{\ensuremath{\varepsilon}}{x}}{\td{P_\ensuremath{\varepsilon}}{\psi}}{\dsubst{\td{A}{\psi}}{\ensuremath{\varepsilon}}{x}}$.
Next, we must show that if $r=r'$ then
$\ceqtm[\Psi']{\Hcom*{\td{(\Path{x.A}{P_0}{P_1})}{\psi}}{\xi_i}}{M}%
{\td{(\Path{x.A}{P_0}{P_1})}{\psi}}$. By \cref{rul:path-intro,def:kan} the left
side equals $\dlam{x}{(\dapp{M}{x})}$, and \cref{rul:path-eta} completes this
part.
Finally, if $r_i=r_i'$ then
$\ceqtm[\Psi']{\Hcom*{\td{(\Path{x.A}{P_0}{P_1})}{\psi}}{\xi_i}}{\dsubst{N_i}{r'}{y}}%
{\td{(\Path{x.A}{P_0}{P_1})}{\psi}}$. By \cref{rul:path-intro,def:kan} the left
side equals $\dlam{x}{(\dapp{\dsubst{N_i}{r'}{y}}{x})}$, and \cref{rul:path-eta}
completes this part.
($\Coe$) Now, suppose that $\tds{(\Psi',y)}{\psi}{\Psi}$ and
$\ceqtm[\Psi']{M}{M'}%
{\dsubst{\td{({\Path{x.A}{P_0}{P_1}})}{\psi}}{r}{y}}$, and show
that
$\ceqtm[\Psi']{\Coe*{y.\td{({\Path{x.A}{P_0}{P_1}})}{\psi}}}%
{\Coe{y.\td{({\Path{x.A'}{P_0'}{P_1'}})}{\psi}}{r}{r'}{M'}}%
{\dsubst{\td{({\Path{x.A}{P_0}{P_1}})}{\psi}}{r'}{y}}$.
By \cref{lem:expansion} on both sides and \cref{rul:path-intro}, we show
\[
\ceqtm[\Psi',x]
{\Com{y.\td{A}{\psi}}{r}{r'}{\dapp{M}{x}}{\sys{x=\ensuremath{\varepsilon}}{y.\td{P_\ensuremath{\varepsilon}}{\psi}}}}
{\Com{y.\td{A'}{\psi}}{r}{r'}{\dapp{M'}{x}}{\sys{x=\ensuremath{\varepsilon}}{y.\td{P_\ensuremath{\varepsilon}'}{\psi}}}}
{\dsubst{\td{A}{\psi}}{r'}{y}}
\]
and
$\ceqtm[\Psi']
{\dsubst{(\Com{y.\td{A}{\psi}})}{\ensuremath{\varepsilon}}{x}}
{\dsubst{\td{P_\ensuremath{\varepsilon}}{\psi}}{r'}{y}}
{\dsubst{\dsubst{\td{A}{\psi}}{r'}{y}}{\ensuremath{\varepsilon}}{x}}$.
By our hypotheses and \cref{rul:path-elim},
$\ceqtm[\Psi',x]{\dapp{M}{x}}{\dapp{M'}{x}}{\dsubst{\td{A}{\psi}}{r}{y}}$,
$\ceqtm[\Psi',x,y]<x=\ensuremath{\varepsilon}>{\td{P_\ensuremath{\varepsilon}}{\psi}}{\td{P_\ensuremath{\varepsilon}'}{\psi}}{\td{A}{\psi}}$,
and
$\ceqtm[\Psi',x]<x=\ensuremath{\varepsilon}>{\dsubst{\td{P_\ensuremath{\varepsilon}}{\psi}}{r}{y}}{\dapp{M}{x}}{\dsubst{\td{A}{\psi}}{r}{y}}$,
so by \cref{thm:com},
$\ceqtm[\Psi',x]{\Com{y.\td{A}{\psi}}}{\Com{y.\td{A'}{\psi}}}{\dsubst{\td{A}{\psi}}{r'}{y}}$
and
$\ceqtm[\Psi']{\dsubst{(\Com{y.\td{A}{\psi}})}{\ensuremath{\varepsilon}}{x}}{\dsubst{\td{P_\ensuremath{\varepsilon}}{\psi}}{r'}{y}}%
{\dsubst{\dsubst{\td{A}{\psi}}{r'}{y}}{\ensuremath{\varepsilon}}{x}}$.
Finally, show that if $r=r'$ then
$\ceqtm[\Psi']{\Coe*{y.\td{({\Path{x.A}{P_0}{P_1}})}{\psi}}}{M}%
{\dsubst{\td{({\Path{x.A}{P_0}{P_1}})}{\psi}}{r'}{y}}$.
By \cref{rul:path-intro,thm:com} the left side equals $\dlam{x}{(\dapp{M}{x})}$,
and \cref{rul:path-eta} completes the proof.
\end{proof}
\subsection{Equality pretypes}
Let $\tau=\Kan\mu(\nu)$ or $\pre\mu(\nu,\sigma)$ for any cubical type systems
$\nu,\sigma$; in $\tau$, whenever
$\ceqtype{pre}{A}{A'}$,
$\ceqtm{M}{M'}{A}$,
$\ceqtm{N}{N'}{A}$, and
$\phi = \{(\ensuremath{\star},\ensuremath{\star}) \mid \ceqtm{M}{N}{A} \}$, we have
$\tau(\Psi,\Eq{A}{M}{N},\Eq{A'}{M'}{N'},\phi)$.
\begin{rul}[Pretype formation]
If $\ceqtype{pre}{A}{A'}$,
$\ceqtm{M}{M'}{A}$, and
$\ceqtm{N}{N'}{A}$, then
$\ceqtype{pre}{\Eq{A}{M}{N}}{\Eq{A'}{M'}{N'}}$.
\end{rul}
\begin{proof}
We have $\ensuremath{\mathsf{PTy}}(\tau)(\Psi,\Eq{A}{M}{N},\Eq{A'}{M'}{N'},\vper{\Eq{A}{M}{N}})$
because $\sisval{\Eq{A}{M}{N}}$ and judgments are preserved by dimension
substitution. To show $\ensuremath{\mathsf{Coh}}(\vper{\Eq{A}{M}{N}})$, suppose that
$\vper{\Eq{A}{M}{N}}(\ensuremath{\star},\ensuremath{\star})$. Then $\ceqtm{M}{N}{A}$, so
$\ceqtm[\Psi']{\td{M}{\psi}}{\td{N}{\psi}}{\td{A}{\psi}}$ for all $\tds{\Psi'}{\psi}{\Psi}$, so
$\ensuremath{\mathsf{Tm}}(\td{\vper{\Eq{A}{M}{N}}}{\psi})(\ensuremath{\star},\ensuremath{\star})$ holds by this and $\sisval{\ensuremath{\star}}$.
\end{proof}
\begin{rul}[Introduction]
If $\ceqtm{M}{N}{A}$ then $\coftype{\ensuremath{\star}}{\Eq{A}{M}{N}}$.
\end{rul}
\begin{proof}
Then $\vper{\Eq{A}{M}{N}}(\ensuremath{\star},\ensuremath{\star})$, so the result follows by
$\ensuremath{\mathsf{Coh}}(\vper{\Eq{A}{M}{N}})$.
\end{proof}
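In particular, taking $N=M$, every element is reflexively equal to itself: if
$\coftype{M}{A}$ then $\coftype{\ensuremath{\star}}{\Eq{A}{M}{M}}$.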
\begin{rul}[Elimination]
If $\coftype{E}{\Eq{A}{M}{N}}$ then $\ceqtm{M}{N}{A}$.
\end{rul}
\begin{proof}
Then $\lift{\vper{\Eq{A}{M}{N}}}(E,E)$ so $E\ensuremath{\Downarrow}\ensuremath{\star}$ and $\ceqtm{M}{N}{A}$.
\end{proof}
\begin{rul}[Eta]
If $\coftype{E}{\Eq{A}{M}{N}}$ then $\ceqtm{E}{\ensuremath{\star}}{\Eq{A}{M}{N}}$.
\end{rul}
\begin{proof}
Immediate by \cref{lem:coftype-evals-ceqtm}.
\end{proof}
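It follows that equality proofs are unique up to exact equality: if
$\coftype{E}{\Eq{A}{M}{N}}$ and $\coftype{E'}{\Eq{A}{M}{N}}$, then both are
equal to $\ensuremath{\star}$ by the eta rule, so $\ceqtm{E}{E'}{\Eq{A}{M}{N}}$ by symmetry
and transitivity.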
\subsection{Circle}
\label{ssec:C}
Let $\tau=\Kan\mu(\nu)$ or $\pre\mu(\nu,\sigma)$ for any cubical type systems
$\nu,\sigma$; we have
$\tau(\Psi,\ensuremath{\mathbb{S}^1},\ensuremath{\mathbb{S}^1},\mathbb{C}_\Psi)$, where $\mathbb{C}$ is the least
context-indexed relation such that:
\begin{enumerate}
\item $\mathbb{C}_\Psi(\ensuremath{\mathsf{base}},\ensuremath{\mathsf{base}})$,
\item $\mathbb{C}_{(\Psi,x)}(\lp{x},\lp{x})$, and
\item $\mathbb{C}_\Psi(\Fcom*{r_i=r_i'},\Fcom{r}{r'}{M'}{\sys{r_i=r_i'}{y.N_i'}})$
whenever
\begin{enumerate}
\item $r\neq r'$;
$r_i \neq r_i'$ for all $i$;
$r_i = r_j$, $r_i' = 0$, and $r_j' = 1$ for some $i,j$;
\item $\ensuremath{\mathsf{Tm}}(\mathbb{C}(\Psi))(M,M')$;
\item $\ensuremath{\mathsf{Tm}}(\mathbb{C}(\Psi'))(\td{N_i}{\psi},\td{N_j'}{\psi})$ for all $i,j$ and
$\tds{\Psi'}{\psi}{(\Psi,y)}$ satisfying $r_i=r_i',r_j=r_j'$; and
\item $\ensuremath{\mathsf{Tm}}(\mathbb{C}(\Psi'))(\td{\dsubst{N_i}{r}{y}}{\psi},\td{M}{\psi})$ for
all $i$ and $\tds{\Psi'}{\psi}{\Psi}$ satisfying $r_i=r_i'$.
\end{enumerate}
\end{enumerate}
By $\sisval{\ensuremath{\mathbb{S}^1}}$ it is immediate that $\ensuremath{\mathsf{PTy}}(\tau)(\Psi,\ensuremath{\mathbb{S}^1},\ensuremath{\mathbb{S}^1},\mathbb{C}(\Psi))$.
\begin{lemma}\label{lem:C-prekan}
If
\begin{enumerate}
\item $\etc{r_i=r_i'}$ is valid,
\item $\ensuremath{\mathsf{Tm}}(\vper{\ensuremath{\mathbb{S}^1}}(\Psi))(M,M')$,
\item $\ensuremath{\mathsf{Tm}}(\vper{\ensuremath{\mathbb{S}^1}}(\Psi'))(\td{N_i}{\psi},\td{N_j'}{\psi})$ for all $i,j$ and
$\tds{\Psi'}{\psi}{(\Psi,y)}$ satisfying $r_i=r_i',r_j=r_j'$, and
\item $\ensuremath{\mathsf{Tm}}(\vper{\ensuremath{\mathbb{S}^1}}(\Psi'))(\td{\dsubst{N_i}{r}{y}}{\psi},\td{M}{\psi})$ for
all $i$ and $\tds{\Psi'}{\psi}{\Psi}$ satisfying $r_i=r_i'$,
\end{enumerate}
then $\ensuremath{\mathsf{Tm}}(\vper{\ensuremath{\mathbb{S}^1}}(\Psi))(\Fcom*{r_i=r_i'},\Fcom{r}{r'}{M'}{\sys{r_i=r_i'}{y.N_i'}})$.
\end{lemma}
\begin{proof}
Let us abbreviate the above $\Fcom$ terms $L$ and $R$ respectively. Expanding
the definition of $\ensuremath{\mathsf{Tm}}$, for any $\tds{\Psi_1}{\psi_1}{\Psi}$ and
$\tds{\Psi_2}{\psi_2}{\Psi_1}$ we must show $\td{L}{\psi_1}\ensuremath{\Downarrow} L_1$,
$\td{R}{\psi_1}\ensuremath{\Downarrow} R_1$, and
$\lift{\vper{\ensuremath{\mathbb{S}^1}}}_{\Psi_2}$ relates
$\td{L_1}{\psi_2}$, $\td{L}{\psi_1\psi_2}$,
$\td{R_1}{\psi_2}$, and $\td{R}{\psi_1\psi_2}$.
We proceed by cases on the first step taken by $\td{L}{\psi_1}$ and
$\td{L}{\psi_1\psi_2}$.
\begin{enumerate}
\item $\td{r}{\psi_1}=\td{r'}{\psi_1}$.
Then $\td{L}{\psi_1}\ensuremath{\steps_\stable} \td{M}{\psi_1}$, $\td{R}{\psi_1}\ensuremath{\steps_\stable}
\td{M'}{\psi_1}$, and the result follows by $\ensuremath{\mathsf{Tm}}(\vper{\ensuremath{\mathbb{S}^1}}(\Psi))(M,M')$.
\item $\td{r}{\psi_1}\neq\td{r'}{\psi_1}$,
$\td{r_j}{\psi_1}=\td{r_j'}{\psi_1}$
(where $\td{r_i}{\psi_1}\neq\td{r_i'}{\psi_1}$ for all $i<j$), and
$\td{r}{\psi_1\psi_2}=\td{r'}{\psi_1\psi_2}$.
Then $\td{L}{\psi_1}\ensuremath{\longmapsto} \td{\dsubst{N_j}{r'}{y}}{\psi_1}$,
$\td{L}{\psi_1\psi_2}\ensuremath{\longmapsto} \td{M}{\psi_1\psi_2}$,
$\td{R}{\psi_1}\ensuremath{\longmapsto} \td{\dsubst{N_j'}{r'}{y}}{\psi_1}$, and
$\td{R}{\psi_1\psi_2}\ensuremath{\longmapsto} \td{M'}{\psi_1\psi_2}$.
Because $\psi_1$ satisfies $r_j=r_j'$, by (3) and (4)
$\ensuremath{\mathsf{Tm}}(\vper{\ensuremath{\mathbb{S}^1}}(\Psi_1,y))(\td{N_j}{\psi_1},\td{N_j'}{\psi_1})$ and
$\ensuremath{\mathsf{Tm}}(\vper{\ensuremath{\mathbb{S}^1}}(\Psi_1))(\td{\dsubst{N_j}{r}{y}}{\psi_1},\td{M}{\psi_1})$.
By the former at $\dsubst{}{\td{r'}{\psi_1}}{y},\psi_2$,
$\lift{\vper{\ensuremath{\mathbb{S}^1}}}_{\Psi_2}(\td{\dsubst{N_j}{r'}{y}}{\psi_1\psi_2},\td{L_1}{\psi_2})$
and $\lift{\vper{\ensuremath{\mathbb{S}^1}}}_{\Psi_2}(\td{L_1}{\psi_2},\td{R_1}{\psi_2})$.
The latter at $\psi_2,\id[\Psi_2]$ yields
$\lift{\vper{\ensuremath{\mathbb{S}^1}}}_{\Psi_2}(\td{\dsubst{N_j}{r}{y}}{\psi_1\psi_2},\td{M}{\psi_1\psi_2})$;
by transitivity and $\td{r}{\psi_1\psi_2}=\td{r'}{\psi_1\psi_2}$ we have
$\lift{\vper{\ensuremath{\mathbb{S}^1}}}_{\Psi_2}(\td{L_1}{\psi_2},\td{L}{\psi_1\psi_2})$.
Finally, by $\ensuremath{\mathsf{Tm}}(\vper{\ensuremath{\mathbb{S}^1}}(\Psi))(M,M')$ we have
$\lift{\vper{\ensuremath{\mathbb{S}^1}}}_{\Psi_2}(\td{L}{\psi_1\psi_2},\td{R}{\psi_1\psi_2})$.
\item $\td{r}{\psi_1}\neq\td{r'}{\psi_1}$,
$\td{r_i}{\psi_1}=\td{r_i'}{\psi_1}$ (and this is the least such $i$),
$\td{r}{\psi_1\psi_2}\neq\td{r'}{\psi_1\psi_2}$, and
$\td{r_j}{\psi_1\psi_2}=\td{r_j'}{\psi_1\psi_2}$ (and this is the least such $j\leq i$).
Then $\td{L}{\psi_1}\ensuremath{\longmapsto} \td{\dsubst{N_i}{r'}{y}}{\psi_1}$,
$\td{L}{\psi_1\psi_2}\ensuremath{\longmapsto} \td{\dsubst{N_j}{r'}{y}}{\psi_1\psi_2}$,
$\td{R}{\psi_1}\ensuremath{\longmapsto} \td{\dsubst{N_i'}{r'}{y}}{\psi_1}$, and
$\td{R}{\psi_1\psi_2}\ensuremath{\longmapsto} \td{\dsubst{N_j'}{r'}{y}}{\psi_1\psi_2}$.
In this case, $\td{\dsubst{}{r'}{y}}{\psi_1\psi_2}$ satisfies
$r_i=r_i',r_j=r_j'$, and the result follows because $\ensuremath{\mathsf{Tm}}(\vper{\ensuremath{\mathbb{S}^1}}(\Psi_2))$
relates
$\td{\dsubst{N_i}{r'}{y}}{\psi_1\psi_2}$,
$\td{\dsubst{N_j}{r'}{y}}{\psi_1\psi_2}$,
$\td{\dsubst{N_i'}{r'}{y}}{\psi_1\psi_2}$, and
$\td{\dsubst{N_j'}{r'}{y}}{\psi_1\psi_2}$.
\item $\td{r}{\psi_1}\neq\td{r'}{\psi_1}$,
$\td{r_i}{\psi_1}\neq\td{r_i'}{\psi_1}$ for all $i$, and
$\td{r}{\psi_1\psi_2} = \td{r'}{\psi_1\psi_2}$.
Then $\isval{\td{L}{\psi_1}}$,
$\td{L}{\psi_1\psi_2}\ensuremath{\longmapsto} \td{M}{\psi_1\psi_2}$,
$\isval{\td{R}{\psi_1}}$, and
$\td{R}{\psi_1\psi_2}\ensuremath{\longmapsto} \td{M'}{\psi_1\psi_2}$. In this case,
$\td{L_1}{\psi_2} = \td{L}{\psi_1\psi_2}$ and
$\td{R_1}{\psi_2} = \td{R}{\psi_1\psi_2}$, so the result follows by
$\ensuremath{\mathsf{Tm}}(\vper{\ensuremath{\mathbb{S}^1}}(\Psi))(M,M')$.
\item $\td{r}{\psi_1}\neq\td{r'}{\psi_1}$,
$\td{r_i}{\psi_1}\neq\td{r_i'}{\psi_1}$ for all $i$,
$\td{r}{\psi_1\psi_2}\neq\td{r'}{\psi_1\psi_2}$, and
$\td{r_j}{\psi_1\psi_2}=\td{r_j'}{\psi_1\psi_2}$ (the least such $j$).
Then $\isval{\td{L}{\psi_1}}$,
$\td{L}{\psi_1\psi_2}\ensuremath{\longmapsto} \td{\dsubst{N_j}{r'}{y}}{\psi_1\psi_2}$,
$\isval{\td{R}{\psi_1}}$, and
$\td{R}{\psi_1\psi_2}\ensuremath{\longmapsto} \td{\dsubst{N_j'}{r'}{y}}{\psi_1\psi_2}$. The result
follows because
$\td{L_1}{\psi_2} = \td{L}{\psi_1\psi_2}$,
$\td{R_1}{\psi_2} = \td{R}{\psi_1\psi_2}$, and
because $\td{\dsubst{}{r'}{y}}{\psi_1\psi_2}$ satisfies $r_j=r_j'$,
$\ensuremath{\mathsf{Tm}}(\vper{\ensuremath{\mathbb{S}^1}}(\Psi_2))%
(\td{\dsubst{N_j}{r'}{y}}{\psi_1\psi_2},\td{\dsubst{N_j'}{r'}{y}}{\psi_1\psi_2})$.
\item $\td{r}{\psi_1}\neq\td{r'}{\psi_1}$,
$\td{r_i}{\psi_1}\neq\td{r_i'}{\psi_1}$ for all $i$, and
$\td{r}{\psi_1\psi_2}\neq\td{r'}{\psi_1\psi_2}$, and
$\td{r_j}{\psi_1\psi_2}\neq\td{r_j'}{\psi_1\psi_2}$ for all $j$.
Then $\isval{\td{L}{\psi_1}}$,
$\isval{\td{L}{\psi_1\psi_2}}$,
$\isval{\td{R}{\psi_1}}$, and
$\isval{\td{R}{\psi_1\psi_2}}$, so it suffices to show
$\vper{\ensuremath{\mathbb{S}^1}}_{\Psi_2}(\td{L}{\psi_1\psi_2},\td{R}{\psi_1\psi_2})$.
We know $\etc{\td{r_i}{\psi_1\psi_2}=\td{r_i'}{\psi_1\psi_2}}$ is valid and
$\td{r_i}{\psi_1\psi_2}\neq\td{r_i'}{\psi_1\psi_2}$ for all $i$, so there must
be some $i,j$ for which $\td{r_i}{\psi_1\psi_2} = \td{r_j}{\psi_1\psi_2}$,
$\td{r_i'}{\psi_1\psi_2} = 0$, and $\td{r_j'}{\psi_1\psi_2} = 1$. The result
follows immediately by the third clause of the definition of $\vper{\ensuremath{\mathbb{S}^1}}$.
\qedhere
\end{enumerate}
\end{proof}
\begin{rul}[Pretype formation]
$\cwftype{pre}{\ensuremath{\mathbb{S}^1}}$.
\end{rul}
\begin{proof}
It remains to show $\ensuremath{\mathsf{Coh}}(\vper{\ensuremath{\mathbb{S}^1}})$. There are three cases:
\begin{enumerate}
\item $\ensuremath{\mathsf{Tm}}(\vper{\ensuremath{\mathbb{S}^1}}(\Psi))(\ensuremath{\mathsf{base}},\ensuremath{\mathsf{base}})$.
Immediate because $\sisval{\ensuremath{\mathsf{base}}}$.
\item $\ensuremath{\mathsf{Tm}}(\vper{\ensuremath{\mathbb{S}^1}}(\Psi,x))(\lp{x},\lp{x})$.
Show that if $\tds{\Psi_1}{\psi_1}{(\Psi,x)}$ and
$\tds{\Psi_2}{\psi_2}{\Psi_1}$, then $\lp{\td{x}{\psi_1}}\ensuremath{\Downarrow} M_1$ and
$\lift{\vper{\ensuremath{\mathbb{S}^1}}}_{\Psi_2}(\td{M_1}{\psi_2},\lp{\td{x}{\psi_1\psi_2}})$.
If $\td{x}{\psi_1} = \ensuremath{\varepsilon}$ then $M_1=\ensuremath{\mathsf{base}}$, $\lp{\td{x}{\psi_1\psi_2}}\ensuremath{\longmapsto}
\ensuremath{\mathsf{base}}$, and $\vper{\ensuremath{\mathbb{S}^1}}_{\Psi_2}(\ensuremath{\mathsf{base}},\ensuremath{\mathsf{base}})$.
If $\td{x}{\psi_1} = x'$ and $\td{x'}{\psi_2} = \ensuremath{\varepsilon}$, then $M_1=\lp{x'}$,
$\lp{\td{x'}{\psi_2}}\ensuremath{\longmapsto} \ensuremath{\mathsf{base}}$, $\lp{\td{x}{\psi_1\psi_2}}\ensuremath{\longmapsto} \ensuremath{\mathsf{base}}$, and
$\vper{\ensuremath{\mathbb{S}^1}}_{\Psi_2}(\ensuremath{\mathsf{base}},\ensuremath{\mathsf{base}})$.
Otherwise, $\td{x}{\psi_1} = x'$ and $\td{x'}{\psi_2} = x''$, so $M_1=\lp{x'}$
and $\vper{\ensuremath{\mathbb{S}^1}}_{\Psi_2}(\lp{x''},\lp{x''})$.
\item
$\ensuremath{\mathsf{Tm}}(\vper{\ensuremath{\mathbb{S}^1}}(\Psi))(\Fcom*{r_i=r_i'},\Fcom{r}{r'}{M'}{\sys{r_i=r_i'}{y.N_i'}})$
where\dots
This is a special case of \cref{lem:C-prekan}. (Note that $\etc{r_i=r_i'}$ is
valid.)
\qedhere
\end{enumerate}
\end{proof}
\begin{rul}[Introduction]
$\coftype{\ensuremath{\mathsf{base}}}{\ensuremath{\mathbb{S}^1}}$,
$\ceqtm{\lp{\ensuremath{\varepsilon}}}{\ensuremath{\mathsf{base}}}{\ensuremath{\mathbb{S}^1}}$, and
$\coftype{\lp{r}}{\ensuremath{\mathbb{S}^1}}$.
\end{rul}
\begin{proof}
The first is a consequence of $\ensuremath{\mathsf{Coh}}(\vper{\ensuremath{\mathbb{S}^1}})$; the second follows by
$\lp{\ensuremath{\varepsilon}}\ensuremath{\steps_\stable} \ensuremath{\mathsf{base}}$ and \cref{lem:expansion}; the third is a consequence of
$\ensuremath{\mathsf{Coh}}(\vper{\ensuremath{\mathbb{S}^1}})$ when $r=x$, and of \cref{lem:expansion} when $r=\ensuremath{\varepsilon}$.
\end{proof}
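Combining these rules with \cref{rul:path-intro}, the loop constructor gives a
path from $\ensuremath{\mathsf{base}}$ to itself: since $\coftype[\Psi,x]{\lp{x}}{\ensuremath{\mathbb{S}^1}}$ and
$\ceqtm{\dsubst{\lp{x}}{\ensuremath{\varepsilon}}{x}}{\ensuremath{\mathsf{base}}}{\ensuremath{\mathbb{S}^1}}$ for $\ensuremath{\varepsilon}\in\{0,1\}$, we have
$\coftype{\dlam{x}{\lp{x}}}{\Path{x.\ensuremath{\mathbb{S}^1}}{\ensuremath{\mathsf{base}}}{\ensuremath{\mathsf{base}}}}$.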
\begin{rul}[Kan type formation]\label{rul:C-form-kan}
$\cwftype{Kan}{\ensuremath{\mathbb{S}^1}}$.
\end{rul}
\begin{proof}
It suffices to check the five Kan conditions.
($\Hcom$) First, suppose that
\begin{enumerate}
\item $\etc{r_i=r_i'}$ is valid,
\item $\ceqtm[\Psi']{M}{M'}{\ensuremath{\mathbb{S}^1}}$,
\item $\ceqtm[\Psi',y]<r_i=r_i',r_j=r_j'>{N_i}{N_j'}{\ensuremath{\mathbb{S}^1}}$ for any $i,j$, and
\item $\ceqtm[\Psi']<r_i=r_i'>{\dsubst{N_i}{r}{y}}{M}{\ensuremath{\mathbb{S}^1}}$ for any $i$,
\end{enumerate}
and show $\ceqtm[\Psi']{\Hcom*{\ensuremath{\mathbb{S}^1}}{r_i=r_i'}}%
{\Hcom{\ensuremath{\mathbb{S}^1}}{r}{r'}{M'}{\sys{r_i=r_i'}{y.N_i'}}}{\ensuremath{\mathbb{S}^1}}$.
This is immediate by \cref{lem:expansion} on both sides (because $\Hcom{\ensuremath{\mathbb{S}^1}}
\ensuremath{\steps_\stable} \Fcom$) and \cref{lem:C-prekan}.
Next, show that if $r=r'$ then $\ceqtm[\Psi']{\Hcom*{\ensuremath{\mathbb{S}^1}}{r_i=r_i'}}{M}{\ensuremath{\mathbb{S}^1}}$.
This is immediate by $\Hcom*{\ensuremath{\mathbb{S}^1}}{r_i=r_i'}\ensuremath{\steps_\stable} \Fcom*{r_i=r_i'}\ensuremath{\steps_\stable} M$ and
\cref{lem:expansion}.
For the final $\Hcom$ property, show that if $r_i = r_i'$ then
$\ceqtm[\Psi']{\Hcom*{\ensuremath{\mathbb{S}^1}}{r_i=r_i'}}{\dsubst{N_i}{r'}{y}}{\ensuremath{\mathbb{S}^1}}$. We already know
each side is an element of $\ensuremath{\mathbb{S}^1}$, so by \cref{lem:coftype-evals-ceqtm} it
suffices to show
$\lift{\vper{\ensuremath{\mathbb{S}^1}}}_{\Psi'}(\Hcom*{\ensuremath{\mathbb{S}^1}}{r_i=r_i'},\dsubst{N_i}{r'}{y})$.
If $r=r'$ then $\Hcom \ensuremath{\longmapsto}^2 M$ and the result follows by
$\ceqtm[\Psi']<r_i=r_i'>{\dsubst{N_i}{r}{y}}{M}{\ensuremath{\mathbb{S}^1}}$, because
$\id[\Psi']$ satisfies $r_i=r_i'$. Otherwise, let $r_j=r_j'$ be the first true
equation. Then $\Hcom \ensuremath{\longmapsto}^2 \dsubst{N_j}{r'}{y}$ and this follows by
$\ceqtm[\Psi',y]<r_i=r_i',r_j=r_j'>{N_i}{N_j}{\ensuremath{\mathbb{S}^1}}$.
($\Coe$) Now, suppose that $\ceqtm[\Psi']{M}{M'}{\ensuremath{\mathbb{S}^1}}$ and show
$\ceqtm[\Psi']{\Coe*{x.\ensuremath{\mathbb{S}^1}}}{\Coe{x.\ensuremath{\mathbb{S}^1}}{r}{r'}{M'}}{\ensuremath{\mathbb{S}^1}}$. This is immediate by
$\Coe*{x.\ensuremath{\mathbb{S}^1}}\ensuremath{\steps_\stable} M$ and \cref{lem:expansion} on both sides. Similarly, if
$r=r'$ then $\ceqtm[\Psi']{\Coe*{x.\ensuremath{\mathbb{S}^1}}}{M}{\ensuremath{\mathbb{S}^1}}$ by \cref{lem:expansion} on the
left.
\end{proof}
\begin{rul}[Computation]\label{rul:C-comp-base}
If $\coftype{P}{B}$ then $\ceqtm{\Celim{c.A}{\ensuremath{\mathsf{base}}}{P}{x.L}}{P}{B}$.
\end{rul}
\begin{proof}
Immediate by $\Celim{c.A}{\ensuremath{\mathsf{base}}}{P}{x.L}\ensuremath{\steps_\stable} P$ and \cref{lem:expansion}.
\end{proof}
\begin{rul}[Computation]\label{rul:C-comp-loop}
If $\coftype[\Psi,x]{L}{B}$ and
$\ceqtm{\dsubst{L}{\ensuremath{\varepsilon}}{x}}{P}{\dsubst{B}{\ensuremath{\varepsilon}}{x}}$ for $\ensuremath{\varepsilon}\in\{0,1\}$, then
$\ceqtm{\Celim{c.A}{\lp{r}}{P}{x.L}}{\dsubst{L}{r}{x}}{\dsubst{B}{r}{x}}$.
\end{rul}
\begin{proof}
If $r=\ensuremath{\varepsilon}$ then this is immediate by \cref{lem:expansion} and
$\ceqtm{\dsubst{L}{\ensuremath{\varepsilon}}{x}}{P}{\dsubst{B}{\ensuremath{\varepsilon}}{x}}$. If $r=y$ then we apply
coherent expansion to the left side with family
$\{\td{P}{\psi} \mid \td{y}{\psi}=\ensuremath{\varepsilon}\}^{\Psi'}_\psi \cup
\{\dsubst{\td{L}{\psi}}{z}{x} \mid \td{y}{\psi}=z\}^{\Psi'}_\psi$. The
$\id[\Psi]$ element of this family is $\dsubst{L}{y}{x}$;
when $\td{y}{\psi}=\ensuremath{\varepsilon}$ we have
$\ceqtm[\Psi']{\td{\dsubst{L}{y}{x}}{\psi}}{\td{P}{\psi}}%
{\td{\dsubst{B}{y}{x}}{\psi}}$
(by $\td{\dsubst{}{y}{x}}{\psi}=\td{\dsubst{}{\ensuremath{\varepsilon}}{x}}{\psi}$),
and when $\td{y}{\psi}=z$ we have
$\ceqtm[\Psi']{\td{\dsubst{L}{y}{x}}{\psi}}{\dsubst{\td{L}{\psi}}{z}{x}}%
{\td{\dsubst{B}{y}{x}}{\psi}}$
(by $\dsubst{\psi}{z}{x}=\td{\dsubst{}{y}{x}}{\psi}$).
Thus by \cref{lem:cohexp-ceqtm},
$\ceqtm{\Celim{c.A}{\lp{y}}{P}{x.L}}{\dsubst{L}{y}{x}}{\dsubst{B}{y}{x}}$.
\end{proof}
To establish the elimination rule we must induct over the definition of
$\vper{\ensuremath{\mathbb{S}^1}}$. As $\vper{\ensuremath{\mathbb{S}^1}}$ was defined in \cref{sec:typesys} as the least
pre-fixed point of an order-preserving operator $C$ on context-indexed
relations, we define our induction hypothesis as an auxiliary context-indexed
PER on values $\Phi_\Psi(M_0,M_0')$ that holds when
\begin{enumerate}
\item $\vper{\ensuremath{\mathbb{S}^1}}_\Psi(M_0,M_0')$ and
\item whenever $\eqtype{Kan}{\oft{c}{\ensuremath{\mathbb{S}^1}}}{A}{A'}$,
$\ceqtm{P}{P'}{\subst{A}{\ensuremath{\mathsf{base}}}{c}}$,
$\ceqtm[\Psi,x]{L}{L'}{\subst{A}{\lp{x}}{c}}$, and
$\ceqtm{\dsubst{L}{\ensuremath{\varepsilon}}{x}}{P}{\subst{A}{\ensuremath{\mathsf{base}}}{c}}$ for $\ensuremath{\varepsilon}\in\{0,1\}$,
we have
$\ceqtm{\Celim{c.A}{M_0}{P}{x.L}}{\Celim{c.A'}{M_0'}{P'}{x.L'}}{\subst{A}{M_0}{c}}$.
(In other words, the elimination rule holds for $M_0$ and $M_0'$.)
\end{enumerate}
\begin{lemma}\label{lem:C-elim-lift}
If $\ensuremath{\mathsf{Tm}}(\Phi(\Psi))(M,M')$ then
whenever $\eqtype{Kan}{\oft{c}{\ensuremath{\mathbb{S}^1}}}{A}{A'}$,
$\ceqtm{P}{P'}{\subst{A}{\ensuremath{\mathsf{base}}}{c}}$,
$\ceqtm[\Psi,x]{L}{L'}{\subst{A}{\lp{x}}{c}}$, and
$\ceqtm{\dsubst{L}{\ensuremath{\varepsilon}}{x}}{P}{\subst{A}{\ensuremath{\mathsf{base}}}{c}}$ for $\ensuremath{\varepsilon}\in\{0,1\}$,
we have
$\ceqtm{\Celim{c.A}{M}{P}{x.L}}{\Celim{c.A'}{M'}{P'}{x.L'}}{\subst{A}{M}{c}}$.
\end{lemma}
\begin{proof}
First we apply coherent expansion to the left side with family
$\{ \Celim{c.\td{A}{\psi}}{M_\psi}{\td{P}{\psi}}{x.\td{L}{\psi}} \mid
\td{M}{\psi}\ensuremath{\Downarrow} M_\psi \}^{\Psi'}_\psi$, by showing that
\[
\ceqtm[\Psi']%
{\Celim{c.\td{A}{\psi}}{M_\psi}{\td{P}{\psi}}{x.\td{L}{\psi}}}%
{\Celim{c.\td{A}{\psi}}{\td{(M_{\id})}{\psi}}{\td{P}{\psi}}{x.\td{L}{\psi}}}%
{\td{(\subst{A}{M}{c})}{\psi}}.
\]
The left side is an element of this type by $\Phi_{\Psi'}(M_\psi,M_\psi)$
and
$\ceqtype{Kan}[\Psi']{\subst{\td{A}{\psi}}{M_\psi}{c}}{\subst{\td{A}{\psi}}{\td{M}{\psi}}{c}}$
(by $\ceqtm[\Psi']{M_\psi}{\td{M}{\psi}}{\ensuremath{\mathbb{S}^1}}$). The right side is an
element by $\Phi_\Psi(M_{\id},M_{\id})$ and
$\ceqtype{Kan}{\subst{A}{M_{\id}}{c}}{\subst{A}{M}{c}}$. The equality follows from
$\td{(M_{\id})}{\psi}\ensuremath{\Downarrow} M_2$,
$\Phi_{\Psi'}(M_\psi,M_2)$, and
\cref{lem:coftype-evals-ceqtm}. Thus by \cref{lem:cohexp-ceqtm},
$\ceqtm{\Celim{c.A}{M}{P}{x.L}}{\Celim{c.A}{M_{\id}}{P}{x.L}}{\subst{A}{M}{c}}$.
By the same argument on the right side,
$\ceqtype{Kan}{\subst{A}{M}{c}}{\subst{A'}{M'}{c}}$ (by $\ceqtm{M}{M'}{\ensuremath{\mathbb{S}^1}}$), and
transitivity, it suffices to show
$\ceqtm{\Celim{c.A}{M_{\id}}{P}{x.L}}{\Celim{c.A'}{M'_{\id}}{P'}{x.L'}}{\subst{A}{M}{c}}$;
this is immediate by $\Phi_\Psi(M_{\id},M'_{\id})$ and
$\ceqtype{Kan}{\subst{A}{M_{\id}}{c}}{\subst{A}{M}{c}}$.
\end{proof}
\begin{lemma}\label{lem:C-elim-ind}
If ${C(\Phi)}_\Psi(M_0,M_0')$ then $\Phi_\Psi(M_0,M_0')$.
\end{lemma}
\begin{proof}
We must show that $\vper{\ensuremath{\mathbb{S}^1}}_\Psi(M_0,M_0')$, and that if
$\eqtype{Kan}{\oft{c}{\ensuremath{\mathbb{S}^1}}}{A}{A'}$,
$\ceqtm{P}{P'}{\subst{A}{\ensuremath{\mathsf{base}}}{c}}$,
$\ceqtm[\Psi,x]{L}{L'}{\subst{A}{\lp{x}}{c}}$, and
$\ceqtm{\dsubst{L}{\ensuremath{\varepsilon}}{x}}{P}{\subst{A}{\ensuremath{\mathsf{base}}}{c}}$ for $\ensuremath{\varepsilon}\in\{0,1\}$, then
$\ceqtm{\Celim{c.A}{M_0}{P}{x.L}}{\Celim{c.A'}{M_0'}{P'}{x.L'}}{\subst{A}{M_0}{c}}$.
There are three cases to consider.
\begin{enumerate}
\item ${C(\Phi)}_\Psi(\ensuremath{\mathsf{base}},\ensuremath{\mathsf{base}})$.
Then $\vper{\ensuremath{\mathbb{S}^1}}_\Psi(\ensuremath{\mathsf{base}},\ensuremath{\mathsf{base}})$ by definition, and the elimination rule holds
by \cref{rul:C-comp-base} on both sides (with $B=\subst{A}{\ensuremath{\mathsf{base}}}{c}$) and
$\ceqtm{P}{P'}{\subst{A}{\ensuremath{\mathsf{base}}}{c}}$.
\item ${C(\Phi)}_{(\Psi,y)}(\lp{y},\lp{y})$.
Then $\vper{\ensuremath{\mathbb{S}^1}}_{(\Psi,y)}(\lp{y},\lp{y})$ by definition, and the elimination
rule holds by \cref{rul:C-comp-loop} on both sides (with
$B=\subst{A}{\lp{x}}{c}$ and
$\ceqtype{Kan}{\dsubst{\subst{A}{\lp{x}}{c}}{\ensuremath{\varepsilon}}{x}}{\subst{A}{\ensuremath{\mathsf{base}}}{c}}$) and
$\ceqtm{\dsubst{L}{y}{x}}{\dsubst{L'}{y}{x}}{\subst{A}{\lp{y}}{c}}$.
\item ${C(\Phi)}_\Psi(\Fcom*{r_i=r_i'},\Fcom{r}{r'}{M'}{\sys{r_i=r_i'}{y.N_i'}})$ where
\begin{enumerate}
\item $r\neq r'$; $r_i \neq r_i'$ for all $i$;
$r_i = r_j$, $r_i' = 0$, and $r_j' = 1$ for some $i,j$;
\item $\ensuremath{\mathsf{Tm}}(\Phi(\Psi))(M,M')$;
\item $\ensuremath{\mathsf{Tm}}(\Phi(\Psi'))(\td{N_i}{\psi},\td{N_j'}{\psi})$ for all $i,j$ and
$\tds{\Psi'}{\psi}{(\Psi,y)}$ satisfying $r_i=r_i',r_j=r_j'$; and
\item $\ensuremath{\mathsf{Tm}}(\Phi(\Psi'))(\td{\dsubst{N_i}{r}{y}}{\psi},\td{M}{\psi})$ for
all $i,j$ and $\tds{\Psi'}{\psi}{\Psi}$ satisfying $r_i=r_i'$.
\end{enumerate}
By construction, $\Phi\subseteq\vper{\ensuremath{\mathbb{S}^1}}$, so
$\ensuremath{\mathsf{Tm}}(\Phi)\subseteq\ensuremath{\mathsf{Tm}}(\vper{\ensuremath{\mathbb{S}^1}})$ and $\vper{\ensuremath{\mathbb{S}^1}}_\Psi(\Fcom,\Fcom)$.
By \cref{lem:C-elim-lift} and $\ensuremath{\mathsf{Tm}}(\Phi(\Psi))(M,M')$,
$\ceqtm{\Celim{c.A}{M}}{\Celim{c.A'}{M'}}{\subst{A}{M}{c}}$.
For all $\psi$ satisfying $r_i=r_i',r_j=r_j'$ we have
$\ensuremath{\mathsf{Tm}}(\Phi(\Psi'))(\td{N_i}{\psi},\td{N_j'}{\psi})$, so by
\cref{lem:C-elim-lift},
$\ceqtm[\Psi,y]<r_i=r_i',r_j=r_j'>%
{\Celim{c.A}{N_i}}{\Celim{c.A'}{N_j'}}{\subst{A}{N_i}{c}}$.
Similarly, $\ceqtm<r_i=r_i'>{\Celim{c.A}{M}}%
{\Celim{c.A}{\dsubst{N_i}{r}{y}}}{\subst{A}{M}{c}}$.
Apply coherent expansion to the term $\Celim{c.A}{\Fcom*{\xi_i}}{P}{x.L}$ at the
type $\cwftype{Kan}{\subst{A}{\Fcom*{\xi_i}}{c}}$ with family:
\[\begin{cases}
\Celim{c.\td{A}{\psi}}{\td{M}{\psi}}{\td{P}{\psi}}{x.\td{L}{\psi}}
& \text{$\td{r}{\psi} = \td{r'}{\psi}$} \\
\Celim{c.\td{A}{\psi}}{\td{\dsubst{N_j}{r'}{y}}{\psi}}{\td{P}{\psi}}{x.\td{L}{\psi}}
& \text{$\td{r}{\psi} \neq \td{r'}{\psi}$,
least $j$ s.t. $\td{r_j}{\psi} = \td{r_j'}{\psi}$} \\
\Com{z.\subst{\td{A}{\psi}}{F}{c}}{\td{r}{\psi}}{\td{r'}{\psi}}%
{\Celim{c.\td{A}{\psi}}{\td{M}{\psi}}{\td{P}{\psi}}{x.\td{L}{\psi}}}%
{\sys{\td{\xi_i}{\psi}}{y.T_i}}
& \text{otherwise} \\
\qquad
F = \Fcom{\td{r}{\psi}}{z}{\td{M}{\psi}}{\sys{\td{\xi_i}{\psi}}{y.\td{N_i}{\psi}}} &\\
\qquad
T_i = \Celim{c.\td{A}{\psi}}{\td{N_i}{\psi}}{\td{P}{\psi}}{x.\td{L}{\psi}} &\\
\end{cases}\]
We must check three equations, noting that $\id$ falls in the third category
above. First:
\[
\ceqtm[\Psi']
{\Com{z.\subst{\td{A}{\psi}}{F}{c}}{\td{r}{\psi}}{\td{r'}{\psi}}%
{\Celim{c.\td{A}{\psi}}{\td{M}{\psi}}}%
{\sys{\td{\xi_i}{\psi}}{y.T_i}}}
{\Celim{c.\td{A}{\psi}}{\td{M}{\psi}}}
{\subst{\td{A}{\psi}}{\td{\Fcom}{\psi}}{c}}
\]
when $\td{r}{\psi} = \td{r'}{\psi}$. This follows from \cref{thm:com},
$\subst{\td{A}{\psi}}{\td{\Fcom}{\psi}}{c} =
\dsubst{\subst{\td{A}{\psi}}{F}{c}}{\td{r'}{\psi}}{z}$,
and by \cref{def:kan},
$\ceqtype{Kan}[\Psi']{\dsubst{\subst{\td{A}{\psi}}{F}{c}}{\td{r'}{\psi}}{z}}%
{\td{\subst{A}{M}{c}}{\psi}}$ and
$\ceqtype{Kan}[\Psi',z]<\td{r_i}{\psi}=\td{r_i'}{\psi}>%
{\subst{\td{A}{\psi}}{F}{c}}{\td{\subst{A}{\dsubst{N_i}{z}{y}}{c}}{\psi}}$.
Next, we must check
\[
\ceqtm[\Psi']
{\Com{z.\subst{\td{A}{\psi}}{F}{c}}{\td{r}{\psi}}{\td{r'}{\psi}}%
{\Celim{c.\td{A}{\psi}}{\td{M}{\psi}}}%
{\sys{\td{\xi_i}{\psi}}{y.T_i}}}
{\Celim{c.\td{A}{\psi}}{\td{\dsubst{N_j}{r'}{y}}{\psi}}}
{\subst{\td{A}{\psi}}{\td{\Fcom}{\psi}}{c}}
\]
when $\td{r}{\psi}\neq\td{r'}{\psi}$, $\td{r_j}{\psi}=\td{r_j'}{\psi}$, and
$\td{r_i}{\psi}\neq\td{r_i'}{\psi}$ for $i<j$; again this holds by
\cref{thm:com}. Finally, we must check
\[
\coftype[\Psi']
{\Com{z.\subst{\td{A}{\psi}}{F}{c}}{\td{r}{\psi}}{\td{r'}{\psi}}%
{\Celim{c.\td{A}{\psi}}{\td{M}{\psi}}}%
{\sys{\td{\xi_i}{\psi}}{y.T_i}}}
{\subst{\td{A}{\psi}}{\td{\Fcom}{\psi}}{c}}
\]
when $\td{r}{\psi}\neq\td{r'}{\psi}$ and $\td{r_i}{\psi}\neq\td{r_i'}{\psi}$ for
all $i$; again this holds by \cref{thm:com}. Therefore by
\cref{lem:cohexp-ceqtm},
\begin{align*}
& {\Celim{c.A}{\Fcom*{\xi_i}}{P}{x.L}} \\
\ceqtmtab{}
{\Com{z.\subst{\td{A}{\psi}}{\Fcom^{r\rightsquigarrow z}}{c}}{r}{r'}%
{\Celim{c.A}{M}{P}{x.L}}%
{\sys{\xi_i}{y.\Celim{c.A}{N_i}{P}{x.L}}}}
{\subst{A}{\Fcom}{c}}.
\end{align*}
By transitivity and a symmetric argument on the right side, it suffices to show
that two $\Com$s are equal, which follows by \cref{thm:com}.
\qedhere
\end{enumerate}
\end{proof}
\begin{rul}[Elimination]\label{rul:C-elim}
If $\ceqtm{M}{M'}{\ensuremath{\mathbb{S}^1}}$,
$\eqtype{Kan}{\oft{c}{\ensuremath{\mathbb{S}^1}}}{A}{A'}$,
$\ceqtm{P}{P'}{\subst{A}{\ensuremath{\mathsf{base}}}{c}}$,
$\ceqtm[\Psi,x]{L}{L'}{\subst{A}{\lp{x}}{c}}$, and
$\ceqtm{\dsubst{L}{\ensuremath{\varepsilon}}{x}}{P}{\subst{A}{\ensuremath{\mathsf{base}}}{c}}$ for $\ensuremath{\varepsilon}\in\{0,1\}$, then
$\ceqtm{\Celim{c.A}{M}{P}{x.L}}{\Celim{c.A'}{M'}{P'}{x.L'}}{\subst{A}{M}{c}}$.
\end{rul}
\begin{proof}
\Cref{lem:C-elim-ind} states that $\Phi$ is a pre-fixed point of $C$; because
$\vper{\ensuremath{\mathbb{S}^1}}$ is the least pre-fixed point of $C$, $\vper{\ensuremath{\mathbb{S}^1}}\subseteq\Phi$, and
therefore $\ensuremath{\mathsf{Tm}}(\vper{\ensuremath{\mathbb{S}^1}})\subseteq\ensuremath{\mathsf{Tm}}(\Phi)$. We conclude that
$\ensuremath{\mathsf{Tm}}(\Phi(\Psi))(M,M')$, and the result follows by \cref{lem:C-elim-lift}.
\end{proof}
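For instance, instantiating \cref{rul:C-elim} with a motive that does not
mention $c$ yields a non-dependent recursion principle for the circle. (This is
a direct specialization; we assume only the evident weakening of a closed Kan
type to the context $\oft{c}{\ensuremath{\mathbb{S}^1}}$.) If $\cwftype{Kan}{B}$, $\coftype{M}{\ensuremath{\mathbb{S}^1}}$,
$\coftype{P}{B}$, $\coftype[\Psi,x]{L}{B}$, and
$\ceqtm{\dsubst{L}{\ensuremath{\varepsilon}}{x}}{P}{B}$ for $\ensuremath{\varepsilon}\in\{0,1\}$, then
\[
\coftype{\Celim{c.B}{M}{P}{x.L}}{B}.
\]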
\subsection{Weak booleans}
Let $\tau=\Kan\mu(\nu)$ or $\pre\mu(\nu,\sigma)$ for any cubical type systems
$\nu,\sigma$; we have
$\tau(\Psi,\ensuremath{\mathsf{wbool}},\ensuremath{\mathsf{wbool}},\mathbb{B}_\Psi)$, where $\mathbb{B}$ is the least
context-indexed relation such that:
\begin{enumerate}
\item $\mathbb{B}_\Psi(\ensuremath{\mathsf{true}},\ensuremath{\mathsf{true}})$,
\item $\mathbb{B}_\Psi(\ensuremath{\mathsf{false}},\ensuremath{\mathsf{false}})$, and
\item $\mathbb{B}_\Psi(\Fcom*{r_i=r_i'},\Fcom{r}{r'}{M'}{\sys{r_i=r_i'}{y.N_i'}})$
whenever
\begin{enumerate}
\item $r\neq r'$;
$r_i \neq r_i'$ for all $i$;
$r_i = r_j$, $r_i' = 0$, and $r_j' = 1$ for some $i,j$;
\item $\ensuremath{\mathsf{Tm}}(\mathbb{B}(\Psi))(M,M')$;
\item $\ensuremath{\mathsf{Tm}}(\mathbb{B}(\Psi'))(\td{N_i}{\psi},\td{N_j'}{\psi})$ for all $i,j$ and
$\tds{\Psi'}{\psi}{(\Psi,y)}$ satisfying $r_i=r_i',r_j=r_j'$; and
\item $\ensuremath{\mathsf{Tm}}(\mathbb{B}(\Psi'))(\td{\dsubst{N_i}{r}{y}}{\psi},\td{M}{\psi})$ for
all $i,j$ and $\tds{\Psi'}{\psi}{\Psi}$ satisfying $r_i=r_i'$.
\end{enumerate}
\end{enumerate}
By $\sisval{\ensuremath{\mathsf{wbool}}}$ it is immediate that $\ensuremath{\mathsf{PTy}}(\tau)(\Psi,\ensuremath{\mathsf{wbool}},\ensuremath{\mathsf{wbool}},\mathbb{B}(\Psi))$.
We have included $\ensuremath{\mathsf{wbool}}$ to demonstrate two Kan structures with which one may
equip ordinary inductive types: trivial structure (as in $\ensuremath{\mathsf{bool}}$) and free
structure (as in $\ensuremath{\mathsf{wbool}}$, mirroring $\ensuremath{\mathbb{S}^1}$). As the $\Fcom$ structure of $\ensuremath{\mathsf{wbool}}$
is identical to that of $\ensuremath{\mathbb{S}^1}$, the proofs in this section are mostly identical to
those in \cref{ssec:C}.
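For concreteness, one canonical value of $\ensuremath{\mathsf{wbool}}$ beyond $\ensuremath{\mathsf{true}}$ and $\ensuremath{\mathsf{false}}$
arises by instantiating the third clause above with $r=0$, $r'=1$, and tube
equations $x=0$ and $x=1$:
\[
\Fcom{0}{1}{\ensuremath{\mathsf{true}}}{\tube{x=0}{y.\ensuremath{\mathsf{true}}},\tube{x=1}{y.\ensuremath{\mathsf{true}}}}
\]
is related to itself by $\mathbb{B}_{(\Psi,x)}$: condition (a) holds because
$0\neq 1$, $x\neq 0$, $x\neq 1$, and the two tubes jointly satisfy the validity
requirement, while conditions (b)--(d) are trivial because every cap and tube is
$\ensuremath{\mathsf{true}}$. By contrast, the corresponding $\Hcom$ in $\ensuremath{\mathsf{bool}}$ simply steps to its
cap (\cref{rul:bool-form-kan}).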
\begin{lemma}\label{lem:wbool-prekan}
If
\begin{enumerate}
\item $\etc{r_i=r_i'}$ is valid,
\item $\ensuremath{\mathsf{Tm}}(\vper{\ensuremath{\mathsf{wbool}}}(\Psi))(M,M')$,
\item $\ensuremath{\mathsf{Tm}}(\vper{\ensuremath{\mathsf{wbool}}}(\Psi'))(\td{N_i}{\psi},\td{N_j'}{\psi})$ for all $i,j$ and
$\tds{\Psi'}{\psi}{(\Psi,y)}$ satisfying $r_i=r_i',r_j=r_j'$, and
\item $\ensuremath{\mathsf{Tm}}(\vper{\ensuremath{\mathsf{wbool}}}(\Psi'))(\td{\dsubst{N_i}{r}{y}}{\psi},\td{M}{\psi})$ for
all $i,j$ and $\tds{\Psi'}{\psi}{\Psi}$ satisfying $r_i=r_i'$,
\end{enumerate}
then $\ensuremath{\mathsf{Tm}}(\vper{\ensuremath{\mathsf{wbool}}}(\Psi))(\Fcom*{r_i=r_i'},\Fcom{r}{r'}{M'}{\sys{r_i=r_i'}{y.N_i'}})$.
\end{lemma}
\begin{proof}
Identical to \cref{lem:C-prekan}.
\end{proof}
\begin{rul}[Pretype formation]
$\cwftype{pre}{\ensuremath{\mathsf{wbool}}}$.
\end{rul}
\begin{proof}
Show $\ensuremath{\mathsf{Coh}}(\vper{\ensuremath{\mathsf{wbool}}})$:
$\ensuremath{\mathsf{Tm}}(\vper{\ensuremath{\mathsf{wbool}}}(\Psi))(\ensuremath{\mathsf{true}},\ensuremath{\mathsf{true}})$ and
$\ensuremath{\mathsf{Tm}}(\vper{\ensuremath{\mathsf{wbool}}}(\Psi))(\ensuremath{\mathsf{false}},\ensuremath{\mathsf{false}})$ because $\sisval{\ensuremath{\mathsf{true}}}$ and
$\sisval{\ensuremath{\mathsf{false}}}$, and $\ensuremath{\mathsf{Tm}}(\vper{\ensuremath{\mathsf{wbool}}}(\Psi))(\Fcom,\Fcom)$ by
\cref{lem:wbool-prekan}.
\end{proof}
\begin{rul}[Introduction]
If $\ceqtm{M}{M'}{\ensuremath{\mathsf{bool}}}$ then $\ceqtm{M}{M'}{\ensuremath{\mathsf{wbool}}}$.
\end{rul}
\begin{proof}
Follows from $\vper{\ensuremath{\mathsf{bool}}}\subseteq\vper{\ensuremath{\mathsf{wbool}}}$ and the fact that $\ensuremath{\mathsf{Tm}}$ is
order-preserving.
\end{proof}
\begin{rul}[Kan type formation]
$\cwftype{Kan}{\ensuremath{\mathsf{wbool}}}$.
\end{rul}
\begin{proof}
Identical to \cref{rul:C-form-kan}.
\end{proof}
We already proved the computation rules in \cref{rul:bool-comp}. The
elimination rule differs from that of $\ensuremath{\mathsf{bool}}$, however: the motive $b.A$ must be
Kan, because both the eliminator and the proof must account for
canonical $\Fcom$ elements of $\ensuremath{\mathsf{wbool}}$.
\begin{rul}[Elimination]
If $\ceqtm{M}{M'}{\ensuremath{\mathsf{wbool}}}$,
$\eqtype{Kan}{\oft{b}{\ensuremath{\mathsf{wbool}}}}{A}{A'}$,
$\ceqtm{T}{T'}{\subst{A}{\ensuremath{\mathsf{true}}}{b}}$, and
$\ceqtm{F}{F'}{\subst{A}{\ensuremath{\mathsf{false}}}{b}}$, then
$\ceqtm{\ifb{b.A}{M}{T}{F}}{\ifb{b.A'}{M'}{T'}{F'}}{\subst{A}{M}{b}}$.
\end{rul}
\begin{proof}
This proof is analogous to the proof of \cref{rul:C-elim}. First, we define a
context-indexed PER $\Phi_\Psi(M_0,M_0')$ that holds when
$\vper{\ensuremath{\mathsf{wbool}}}_\Psi(M_0,M_0')$ and the elimination rule is true for $M_0,M_0'$.
Next, we prove that if $\ensuremath{\mathsf{Tm}}(\Phi(\Psi))(M,M')$ then the elimination rule is true
for $M,M'$. Finally, we prove that $\Phi$ is a pre-fixed point of the operator
defining $\mathbb{B}$. (Here we must check that the elimination rule holds for
$\ensuremath{\mathsf{true}}$ and $\ensuremath{\mathsf{false}}$; both checks are immediate by \cref{rul:bool-comp}.) Therefore
$\ensuremath{\mathsf{Tm}}(\vper{\ensuremath{\mathsf{wbool}}})\subseteq\ensuremath{\mathsf{Tm}}(\Phi)$, so the elimination rule applies to
$\ceqtm{M}{M'}{\ensuremath{\mathsf{wbool}}}$.
\end{proof}
\subsection{Univalence}
Recall the abbreviations:
\begin{align*}
\isContr{C} &:= \prd{C}{(\picl{c}{C}{\picl{c'}{C}{\Path{\_.C}{c}{c'}}})} \\
\Equiv{A}{B} &:=
\sigmacl{f}{\arr{A}{B}}{(\picl{b}{B}{\isContr{\sigmacl{a}{A}{\Path{\_.B}{\app{f}{a}}{b}}}})}
\end{align*}
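To unpack these abbreviations in a simple case (a sketch only; we make no use
of it below), consider the identity function on a Kan type $A$. Its fiber over
$\coftype{b}{A}$ is $\sigmacl{a}{A}{\Path{\_.A}{a}{b}}$, and a natural center of
contraction is
\[
\pair{b}{\dlam{\_}{b}},
\]
the path connecting an arbitrary $\pair{a}{p}$ to this center being
constructible from the $\Hcom$ structure of $A$.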
Let $\tau=\Kan\mu(\nu)$ or $\pre\mu(\nu,\sigma)$ for any cubical type systems
$\nu,\sigma$; in $\tau$, when
$\ceqtype{pre}[\Psi,x]<x=0>{A}{A'}$,
$\ceqtype{pre}[\Psi,x]{B}{B'}$,
$\ceqtm[\Psi,x]<x=0>{E}{E'}{\Equiv{A}{B}}$, and
$\phi(\uain{x}{M,N},\uain{x}{M',N'})$ for
\begin{enumerate}
\item $\ceqtm[\Psi,x]{N}{N'}{B}$,
\item $\ceqtm[\Psi,x]<x=0>{M}{M'}{A}$, and
\item $\ceqtm[\Psi,x]<x=0>{\app{\fst{E}}{M}}{N}{B}$,
\end{enumerate}
we have $\tau((\Psi,x),\ua{x}{A,B,E},\ua{x}{A',B',E'},\phi)$.
\begin{rul}[Pretype formation]\label{rul:ua-form-pre}
~\begin{enumerate}
\item If $\cwftype{pre}{A}$ then $\ceqtype{pre}{\ua{0}{A,B,E}}{A}$.
\item If $\cwftype{pre}{B}$ then $\ceqtype{pre}{\ua{1}{A,B,E}}{B}$.
\item If $\ceqtype{pre}<r=0>{A}{A'}$,
$\ceqtype{pre}{B}{B'}$, and
$\ceqtm<r=0>{E}{E'}{\Equiv{A}{B}}$, then
$\ceqtype{pre}{\ua{r}{A,B,E}}{\ua{r}{A',B',E'}}$.
\end{enumerate}
\end{rul}
\begin{proof}
Parts (1--2) are immediate by \cref{lem:expansion}. To show part (3), we must
first establish that $\ensuremath{\mathsf{PTy}}(\tau)(\Psi,\ua{r}{A,B,E},\ua{r}{A',B',E'},\gamma)$,
that is, abbreviating these terms $L$ and $R$, for all
$\tds{\Psi_1}{\psi_1}{\Psi}$ and $\tds{\Psi_2}{\psi_2}{\Psi_1}$,
$\td{L}{\psi_1}\ensuremath{\Downarrow} L_1$, $\td{R}{\psi_1}\ensuremath{\Downarrow} R_1$,
$\lift{\tau}(\Psi_2,\td{L_1}{\psi_2},\td{L}{\psi_1\psi_2},\_)$,
$\lift{\tau}(\Psi_2,\td{R_1}{\psi_2},\td{R}{\psi_1\psi_2},\_)$, and
$\lift{\tau}(\Psi_2,\td{L_1}{\psi_2},\td{R_1}{\psi_2},\_)$.
We proceed by cases on the first step taken by $\td{L}{\psi_1}$ and
$\td{L}{\psi_1\psi_2}$.
\begin{enumerate}
\item $\td{r}{\psi_1} = 0$.
Then $\td{L}{\psi_1}\ensuremath{\steps_\stable}\td{A}{\psi_1}$,
$\td{R}{\psi_1}\ensuremath{\steps_\stable}\td{A'}{\psi_1}$, and the result follows by
$\ceqtype{pre}[\Psi_1]{\td{A}{\psi_1}}{\td{A'}{\psi_1}}$.
\item $\td{r}{\psi_1} = 1$.
Then $\td{L}{\psi_1}\ensuremath{\steps_\stable}\td{B}{\psi_1}$,
$\td{R}{\psi_1}\ensuremath{\steps_\stable}\td{B'}{\psi_1}$, and the result follows by
$\ceqtype{pre}{B}{B'}$.
\item $\td{r}{\psi_1} = x$ and $\td{r}{\psi_1\psi_2} = 0$.
Then $\isval{\td{L}{\psi_1}}$, $\td{L}{\psi_1\psi_2}\ensuremath{\longmapsto}\td{A}{\psi_1\psi_2}$,
$\isval{\td{R}{\psi_1}}$, $\td{R}{\psi_1\psi_2}\ensuremath{\longmapsto}\td{A'}{\psi_1\psi_2}$, and
the result follows by
$\ceqtype{pre}[\Psi_2]{\td{A}{\psi_1\psi_2}}{\td{A'}{\psi_1\psi_2}}$.
\item $\td{r}{\psi_1} = x$ and $\td{r}{\psi_1\psi_2} = 1$.
Then $\isval{\td{L}{\psi_1}}$, $\td{L}{\psi_1\psi_2}\ensuremath{\longmapsto}\td{B}{\psi_1\psi_2}$,
$\isval{\td{R}{\psi_1}}$, $\td{R}{\psi_1\psi_2}\ensuremath{\longmapsto}\td{B'}{\psi_1\psi_2}$, and
the result follows by $\ceqtype{pre}{B}{B'}$.
\item $\td{r}{\psi_1} = x$ and $\td{r}{\psi_1\psi_2} = x'$.
Then $\isval{\td{L}{\psi_1}}$, $\isval{\td{L}{\psi_1\psi_2}}$,
$\isval{\td{R}{\psi_1}}$, $\isval{\td{R}{\psi_1\psi_2}}$, and by
$\ceqtype{pre}[\Psi_2]<x'=0>{\td{A}{\psi_1\psi_2}}{\td{A'}{\psi_1\psi_2}}$,
$\ceqtype{pre}[\Psi_2]{\td{B}{\psi_1\psi_2}}{\td{B'}{\psi_1\psi_2}}$, and
$\ceqtm[\Psi_2]<x'=0>{\td{E}{\psi_1\psi_2}}{\td{E'}{\psi_1\psi_2}}%
{\Equiv{\td{A}{\psi_1\psi_2}}{\td{B}{\psi_1\psi_2}}}$, we have
$\tau(\Psi_2,\ua{x'}{\td{A}{\psi_1\psi_2},\td{B}{\psi_1\psi_2},\td{E}{\psi_1\psi_2}},
\ua{x'}{\td{A'}{\psi_1\psi_2},\td{B'}{\psi_1\psi_2},\td{E'}{\psi_1\psi_2}},\_)$.
\end{enumerate}
To complete part (3), we must show $\ensuremath{\mathsf{Coh}}(\vper{\ua{r}{A,B,E}})$, that is, for
any $\tds{\Psi'}{\psi}{\Psi}$, if
$\vper{\ua{\td{r}{\psi}}{\td{A}{\psi},\td{B}{\psi},\td{E}{\psi}}}(M_0,N_0)$ then
$\ensuremath{\mathsf{Tm}}(\vper{\ua{\td{r}{\psi}}{\td{A}{\psi},\td{B}{\psi},\td{E}{\psi}}})(M_0,N_0)$.
If $\td{r}{\psi}=0$ this
follows by
$\vper{\ua{0}{\td{A}{\psi},\td{B}{\psi},\td{E}{\psi}}} = \vper{\td{A}{\psi}}$
and $\ensuremath{\mathsf{Coh}}(\vper{\td{A}{\psi}})$; if $\td{r}{\psi}=1$ then
this follows by $\vper{\ua{1}{\td{A}{\psi},\td{B}{\psi},\td{E}{\psi}}} =
\vper{\td{B}{\psi}}$ and $\ensuremath{\mathsf{Coh}}(\vper{\td{B}{\psi}})$. The remaining case is
$\vper{\ua{x}{\td{A}{\psi},\td{B}{\psi},\td{E}{\psi}}}(\uain{x}{M,N},\uain{x}{M',N'})$,
in which
$\ceqtm[\Psi']{N}{N'}{\td{B}{\psi}}$,
$\ceqtm[\Psi']<x=0>{M}{M'}{\td{A}{\psi}}$, and
$\ceqtm[\Psi']<x=0>{\app{\fst{\td{E}{\psi}}}{M}}{N}{\td{B}{\psi}}$.
Again we proceed by cases on the first step taken by the $\psi_1$ and
$\psi_1\psi_2$ instances of the left side.
\begin{enumerate}
\item $\td{x}{\psi_1} = 0$.
Then $\td{L}{\psi_1}\ensuremath{\steps_\stable}\td{M}{\psi_1}$,
$\td{R}{\psi_1}\ensuremath{\steps_\stable}\td{M'}{\psi_1}$, and the result follows by
$\vper{\ua{0}{\td{A}{\psi\psi_1},\dots}} = \vper{\td{A}{\psi\psi_1}}$ and
$\ceqtm[\Psi_1]{\td{M}{\psi_1}}{\td{M'}{\psi_1}}{\td{A}{\psi\psi_1}}$.
\item $\td{x}{\psi_1} = 1$.
Then $\td{L}{\psi_1}\ensuremath{\steps_\stable}\td{N}{\psi_1}$,
$\td{R}{\psi_1}\ensuremath{\steps_\stable}\td{N'}{\psi_1}$, and the result follows by
$\vper{\ua{1}{\td{A}{\psi\psi_1},\dots}} = \vper{\td{B}{\psi\psi_1}}$ and
$\ceqtm[\Psi']{N}{N'}{\td{B}{\psi}}$.
\item $\td{x}{\psi_1} = x'$ and $\td{x}{\psi_1\psi_2} = 0$.
Then $\isval{\td{L}{\psi_1}}$, $\td{L}{\psi_1\psi_2}\ensuremath{\longmapsto}\td{M}{\psi_1\psi_2}$,
$\isval{\td{R}{\psi_1}}$, $\td{R}{\psi_1\psi_2}\ensuremath{\longmapsto}\td{M'}{\psi_1\psi_2}$, and
the result follows by $\vper{\ua{0}{\td{A}{\psi\psi_1\psi_2},\dots}} =
\vper{\td{A}{\psi\psi_1\psi_2}}$ and
$\ceqtm[\Psi_2]{\td{M}{\psi_1\psi_2}}{\td{M'}{\psi_1\psi_2}}{\td{A}{\psi\psi_1\psi_2}}$.
\item $\td{x}{\psi_1} = x'$ and $\td{x}{\psi_1\psi_2} = 1$.
Then $\isval{\td{L}{\psi_1}}$, $\td{L}{\psi_1\psi_2}\ensuremath{\longmapsto}\td{N}{\psi_1\psi_2}$,
$\isval{\td{R}{\psi_1}}$, $\td{R}{\psi_1\psi_2}\ensuremath{\longmapsto}\td{N'}{\psi_1\psi_2}$, and
the result follows by $\vper{\ua{1}{\td{A}{\psi\psi_1\psi_2},\dots}} =
\vper{\td{B}{\psi\psi_1\psi_2}}$ and $\ceqtm[\Psi']{N}{N'}{\td{B}{\psi}}$.
\item $\td{x}{\psi_1} = x'$ and $\td{x}{\psi_1\psi_2} = x''$.
Then $\isval{\td{L}{\psi_1}}$, $\isval{\td{L}{\psi_1\psi_2}}$,
$\isval{\td{R}{\psi_1}}$, $\isval{\td{R}{\psi_1\psi_2}}$, and by
$\ceqtm[\Psi_2]{\td{N}{\psi_1\psi_2}}{\td{N'}{\psi_1\psi_2}}{\td{B}{\psi\psi_1\psi_2}}$,
$\ceqtm[\Psi_2]<x''=0>{\td{M}{\psi_1\psi_2}}{\td{M'}{\psi_1\psi_2}}{\td{A}{\psi\psi_1\psi_2}}$,
and
$\ceqtm[\Psi_2]<x''=0>{\app{\fst{\td{E}{\psi\psi_1\psi_2}}}{\td{M}{\psi_1\psi_2}}}%
{\td{N}{\psi_1\psi_2}}{\td{B}{\psi}}$,
$\vper{\ua{x''}{\td{A}{\psi\psi_1\psi_2},\dots}}
(\uain{x''}{\td{M}{\psi_1\psi_2},\td{N}{\psi_1\psi_2}},
\uain{x''}{\td{M'}{\psi_1\psi_2},\td{N'}{\psi_1\psi_2}})$.
\qedhere
\end{enumerate}
\end{proof}
\begin{rul}[Introduction]\label{rul:ua-intro}
~\begin{enumerate}
\item If $\coftype{M}{A}$ then $\ceqtm{\uain{0}{M,N}}{M}{A}$.
\item If $\coftype{N}{B}$ then $\ceqtm{\uain{1}{M,N}}{N}{B}$.
\item If $\ceqtm<r=0>{M}{M'}{A}$,
$\ceqtm{N}{N'}{B}$,
$\coftype<r=0>{E}{\Equiv{A}{B}}$, and
$\ceqtm<r=0>{\app{\fst{E}}{M}}{N}{B}$, then
$\ceqtm{\uain{r}{M,N}}{\uain{r}{M',N'}}{\ua{r}{A,B,E}}$.
\end{enumerate}
\end{rul}
\begin{proof}
Parts (1--2) are immediate by $\uain{0}{M,N}\ensuremath{\steps_\stable} M$,
$\uain{1}{M,N}\ensuremath{\steps_\stable} N$, and \cref{lem:expansion}. For part (3), if $r=0$
(resp., $r=1$) the result follows by part (1) (resp., part (2)) and
\cref{rul:ua-form-pre}. If $r=x$ then it follows by $\ensuremath{\mathsf{Coh}}(\vper{\ua{x}{A,B,E}})$
and the definition of $\vper{\ua{x}{A,B,E}}$.
\end{proof}
\begin{rul}[Elimination]\label{rul:ua-elim}
~\begin{enumerate}
\item If $\coftype{M}{A}$ and $\coftype{F}{\arr{A}{B}}$, then
$\ceqtm{\uaproj{0}{M,F}}{\app{F}{M}}{B}$.
\item If $\coftype{M}{B}$ then $\ceqtm{\uaproj{1}{M,F}}{M}{B}$.
\item If $\ceqtm{M}{M'}{\ua{r}{A,B,E}}$ and
$\ceqtm<r=0>{F}{\fst{E}}{\arr{A}{B}}$, then
$\ceqtm{\uaproj{r}{M,F}}{\uaproj{r}{M',\fst{E}}}{B}$.
\end{enumerate}
\end{rul}
\begin{proof}
Parts (1--2) are immediate by $\uaproj{0}{M,F}\ensuremath{\steps_\stable} \app{F}{M}$,
$\uaproj{1}{M,F}\ensuremath{\steps_\stable} M$, and \cref{lem:expansion}. For part (3), if $r=0$
(resp., $r=1$) the result follows by part (1) (resp., part (2)),
\cref{rul:fun-elim}, and \cref{rul:ua-form-pre}. If $r=x$ then we apply coherent
expansion to the left side with family
\[\begin{cases}
\app{\td{F}{\psi}}{\td{M}{\psi}} & \text{$\td{x}{\psi}=0$} \\
\td{M}{\psi} & \text{$\td{x}{\psi}=1$} \\
N_\psi & \text{$\td{x}{\psi}=x'$, $\td{M}{\psi}\ensuremath{\Downarrow}\uain{x'}{O_\psi,N_\psi}$} \\
\end{cases}\]
where $\coftype[\Psi']<x'=0>{O_\psi}{\td{A}{\psi}}$,
$\coftype[\Psi']{N_\psi}{\td{B}{\psi}}$, and
$\ceqtm[\Psi']<x'=0>{\app{\fst{\td{E}{\psi}}}{O_\psi}}{N_\psi}{\td{B}{\psi}}$.
First, show that if $\td{x}{\psi}=0$,
$\ceqtm[\Psi']{\app{\td{F}{\psi}}{\td{M}{\psi}}}{\td{(N_{\id})}{\psi}}{\td{B}{\psi}}$.
By \cref{lem:coftype-evals-ceqtm},
$\ceqtm{M}{\uain{x}{O_{\id},N_{\id}}}{\ua{x}{A,B,E}}$, so by
\cref{rul:ua-intro},
$\ceqtm[\Psi']{\td{M}{\psi}}{\td{(O_{\id})}{\psi}}{\td{A}{\psi}}$.
By assumption,
$\ceqtm[\Psi']{\td{F}{\psi}}{\fst{\td{E}{\psi}}}{\arr{\td{A}{\psi}}{\td{B}{\psi}}}$.
This case is completed by \cref{rul:fun-elim} and
$\ceqtm[\Psi']{\app{\fst{\td{E}{\psi}}}{\td{(O_{\id})}{\psi}}}{\td{(N_{\id})}{\psi}}{\td{B}{\psi}}$.
Next, show that if $\td{x}{\psi}=1$,
$\ceqtm[\Psi']{\td{M}{\psi}}{\td{(N_{\id})}{\psi}}{\td{B}{\psi}}$.
This case is immediate by \cref{rul:ua-intro} and
$\ceqtm{M}{\uain{x}{O_{\id},N_{\id}}}{\ua{x}{A,B,E}}$ under $\psi$.
Finally, show that if $\td{x}{\psi}=x'$,
$\ceqtm[\Psi']{N_{\psi}}{\td{(N_{\id})}{\psi}}{\td{B}{\psi}}$.
By $\coftype{M}{\ua{x}{A,B,E}}$ under $\id,\psi$ we have
$\vper{\ua{x}{A,B,E}}_\psi(\uain{x'}{O_\psi,N_\psi},\uain{x'}{\td{(O_{\id})}{\psi},\td{(N_{\id})}{\psi}})$,
completing this case.
By \cref{lem:cohexp-ceqtm} we conclude $\ceqtm{\uaproj{x}{M,F}}{N_{\id}}{B}$,
and by a symmetric argument, $\ceqtm{\uaproj{x}{M',\fst{E}}}{N'_{\id}}{B}$. We
complete the proof with transitivity and $\ceqtm{N_{\id}}{N'_{\id}}{B}$ by
$\vper{\ua{x}{A,B,E}}(\uain{x}{O_{\id},N_{\id}},\uain{x}{O'_{\id},N'_{\id}})$.
\end{proof}
\begin{rul}[Computation]\label{rul:ua-comp}
If $\coftype<r=0>{M}{A}$,
$\coftype{N}{B}$,
$\coftype<r=0>{F}{\arr{A}{B}}$, and
$\ceqtm<r=0>{\app{F}{M}}{N}{B}$, then
$\ceqtm{\uaproj{r}{\uain{r}{M,N},F}}{N}{B}$.
\end{rul}
\begin{proof}
If $r=0$ then by \cref{lem:expansion} it suffices to show
$\ceqtm{\app{F}{\uain{0}{M,N}}}{N}{B}$; by \cref{rul:fun-elim,rul:ua-intro}
this holds by our hypothesis $\ceqtm{\app{F}{M}}{N}{B}$.
If $r=1$ the result is immediate by \cref{lem:expansion}.
If $r=x$ we apply coherent expansion to the left side with family
\[\begin{cases}
\app{\td{F}{\psi}}{\uain{0}{\td{M}{\psi},\td{N}{\psi}}}
& \text{$\td{x}{\psi}=0$} \\
\td{N}{\psi} & \text{$\td{x}{\psi}=1$ or $\td{x}{\psi}=x'$}
\end{cases}\]
If $\td{x}{\psi}=0$ then
$\ceqtm[\Psi']{\app{\td{F}{\psi}}{\uain{0}{\td{M}{\psi},\td{N}{\psi}}}}%
{\td{N}{\psi}}{\td{B}{\psi}}$ by \cref{rul:fun-elim,rul:ua-intro} and
$\ceqtm<x=0>{\app{F}{M}}{N}{B}$. If $\td{x}{\psi}\neq 0$ then
$\coftype[\Psi']{\td{N}{\psi}}{\td{B}{\psi}}$ and the result follows by
\cref{lem:cohexp-ceqtm}.
\end{proof}
\begin{rul}[Eta]\label{rul:ua-eta}
If $\coftype{N}{\ua{r}{A,B,E}}$ and
$\ceqtm<r=0>{M}{N}{A}$, then
$\ceqtm{\uain{r}{M,\uaproj{r}{N,\fst{E}}}}{N}{\ua{r}{A,B,E}}$.
\end{rul}
\begin{proof}
If $r=0$ or $r=1$ the result is immediate by
\cref{lem:expansion,rul:ua-form-pre}. If $r=x$ then by
\cref{lem:coftype-evals-ceqtm}, $\ceqtm{N}{\uain{x}{M',P'}}{\ua{x}{A,B,E}}$
where $\coftype<x=0>{M'}{A}$, $\coftype{P'}{B}$, and
$\ceqtm<x=0>{\app{\fst{E}}{M'}}{P'}{B}$. By \cref{rul:ua-intro} it suffices
to show that $\ceqtm<x=0>{M}{M'}{A}$,
$\ceqtm{\uaproj{x}{N,\fst{E}}}{P'}{B}$, and
$\ceqtm<x=0>{\app{\fst{E}}{M'}}{P'}{B}$ (which is immediate).
To show $\ceqtm<x=0>{M}{M'}{A}$ it suffices to prove
$\ceqtm<x=0>{N}{M'}{A}$, which follows from
$\ceqtm{N}{\uain{x}{M',P'}}{\ua{x}{A,B,E}}$ and
\cref{rul:ua-form-pre,rul:ua-intro}. To show
$\ceqtm{\uaproj{x}{N,\fst{E}}}{P'}{B}$, by \cref{rul:ua-elim} it suffices to
check $\ceqtm{\uaproj{x}{\uain{x}{M',P'},\fst{E}}}{P'}{B}$, which holds by
\cref{rul:ua-comp}.
\end{proof}
\begin{lemma}\label{lem:ua-hcom}
If $\ceqtype{Kan}<x=0>{A}{A'}$,
$\ceqtype{Kan}{B}{B'}$,
$\ceqtm<x=0>{E}{E'}{\Equiv{A}{B}}$,
\begin{enumerate}
\item $\etc{\xi_i}=\etc{r_i=r_i'}$ is valid,
\item $\ceqtm{M}{M'}{\ua{x}{A,B,E}}$,
\item $\ceqtm[\Psi,y]<r_i=r_i',r_j=r_j'>{N_i}{N_j'}{\ua{x}{A,B,E}}$
for any $i,j$, and
\item $\ceqtm<r_i=r_i'>{\dsubst{N_i}{r}{y}}{M}{\ua{x}{A,B,E}}$
for any $i$,
\end{enumerate}
then
\begin{enumerate}
\item $\ceqtm{\Hcom*{\ua{x}{A,B,E}}{\xi_i}}%
{\Hcom{\ua{x}{A',B',E'}}{r}{r'}{M'}{\sys{\xi_i}{y.N_i'}}}{\ua{x}{A,B,E}}$;
\item if $r=r'$ then
$\ceqtm{\Hcom{\ua{x}{A,B,E}}{r}{r}{M}{\sys{\xi_i}{y.N_i}}}{M}{\ua{x}{A,B,E}}$;
and
\item if $r_i = r_i'$ then
$\ceqtm{\Hcom*{\ua{x}{A,B,E}}{\xi_i}}{\dsubst{N_i}{r'}{y}}{\ua{x}{A,B,E}}$.
\end{enumerate}
\end{lemma}
\begin{proof}
For part (1), apply coherent expansion to $\Hcom*{\ua{x}{A,B,E}}{\xi_i}$ with family
\[\begin{cases}
\Hcom{\td{A}{\psi}}{\td{r}{\psi}}{\td{r'}{\psi}}%
{\td{M}{\psi}}{\sys{\td{\xi_i}{\psi}}{y.\td{N_i}{\psi}}}
& \text{$\td{x}{\psi}=0$} \\
\Hcom{\td{B}{\psi}}{\td{r}{\psi}}{\td{r'}{\psi}}%
{\td{M}{\psi}}{\sys{\td{\xi_i}{\psi}}{y.\td{N_i}{\psi}}}
& \text{$\td{x}{\psi}=1$} \\
\td{(\uain{x}{\dsubst{O}{r'}{y},
\Hcom{B}{r}{r'}{\uaproj{x}{M,\fst{E}}}{\etc{T}}})}{\psi}
& \text{$\td{x}{\psi}=x'$} \\
\quad O =
\Hcom{A}{r}{y}{M}{\sys{\xi_i}{y.N_i}} &\\
\mkspacer{\quad \etc{T} ={}}
\sys{\xi_i}{y.\uaproj{x}{N_i,\fst{E}}}, &\\
\hphantom{{}={}}
\tube{x=0}{y.\app{\fst{E}}{O}}, &\\
\hphantom{{}={}}
\tube{x=1}{y.\Hcom{B}{r}{y}{M}{\sys{\xi_i}{y.N_i}}}
\end{cases}\]
Consider $\psi=\id$. Using rules for dependent functions, dependent types, and
univalence:
\begin{enumerate}
\item $\coftype[\Psi,y]<x=0>{O}{A}$ and
$\ceqtm<x=0>{\dsubst{O}{r}{y}}{M}{A}$
(by $\ceqtype{pre}<x=0>{\ua{x}{A,B,E}}{A}$).
\item
$\coftype{\uaproj{x}{M,\fst{E}}}{B}$ where
$\ceqtm<x=0>{\uaproj{x}{M,\fst{E}}}{\app{\fst{E}}{M}}{B}$
and $\ceqtm<x=1>{\uaproj{x}{M,\fst{E}}}{M}{B}$.
\item
$\ceqtm[\Psi,y]<r_i=r_i',r_j=r_j'>%
{\uaproj{x}{N_i,\fst{E}}}{\uaproj{x}{N_j,\fst{E}}}{B}$ and
$\ceqtm<r_i=r_i'>{\uaproj{x}{M,\fst{E}}}{\uaproj{x}{\dsubst{N_i}{r}{y},\fst{E}}}{B}$.
\item
$\coftype[\Psi,y]<x=0>{\app{\fst{E}}{O}}{B}$,
$\ceqtm[\Psi,y]<x=0,r_i=r_i'>{\app{\fst{E}}{O}}{\uaproj{x}{N_i,\fst{E}}}{B}$
(both equal $\app{\fst{E}}{N_i}$), and
$\ceqtm<x=0>{\app{\fst{E}}{\dsubst{O}{r}{y}}}{\uaproj{x}{M,\fst{E}}}{B}$
(both equal $\app{\fst{E}}{M}$).
\item
$\coftype[\Psi,y]<x=1>{\Hcom{B}{r}{y}{M}{\sys{\xi_i}{y.N_i}}}{B}$
(by $\ceqtype{pre}<x=1>{\ua{x}{A,B,E}}{B}$),
$\ceqtm[\Psi,y]<x=1,r_i=r_i'>%
{\Hcom{B}{r}{y}{M}{\sys{\xi_i}{y.N_i}}}{\uaproj{x}{N_i,\fst{E}}}{B}$
(both equal $N_i$), and
$\ceqtm[\Psi]<x=1>%
{\Hcom{B}{r}{r}{M}{\sys{\xi_i}{y.N_i}}}{\uaproj{x}{M,\fst{E}}}{B}$
(both equal $M$).
\item By the above,
$\coftype{\Hcom{B}{r}{r'}{\uaproj{x}{M,\fst{E}}}{\etc{T}}}{B}$ and
$\ceqtm<x=0>{\Hcom{B}}{\app{\fst{E}}{\dsubst{O}{r'}{y}}}{B}$, so
$\coftype{\uain{x}{\dsubst{O}{r'}{y},\Hcom{B}{r}{r'}{\uaproj{x}{M,\fst{E}}}{\etc{T}}}}{\ua{x}{A,B,E}}$.
\end{enumerate}
When $\td{x}{\psi}=x'$, coherence is immediate. When $\td{x}{\psi}=0$,
$\ceqtm[\Psi']{\uain{0}{\dsubst{O}{\td{r'}{\psi}}{y},\dots}}{\Hcom{\td{A}{\psi}}}{\td{A}{\psi}}$
as required. When $\td{x}{\psi}=1$,
$\ceqtm[\Psi']{\uain{1}{\dots,\Hcom{\td{B}{\psi}}{\td{r}{\psi}}{\td{r'}{\psi}}{\dots}{\etc{T}}}}%
{\Hcom{\td{B}{\psi}}}{\td{B}{\psi}}$ as required. Therefore
\cref{lem:cohexp-ceqtm} applies, and part (1) follows by repeating this argument
on the right side.
For part (2), show that
$\ceqtm{\uain{x}{\dsubst{O}{r'}{y},\Hcom{B}{r}{r'}{\uaproj{x}{M,\fst{E}}}{\etc{T}}}}%
{M}{\ua{x}{A,B,E}}$ when $r=r'$. By the above,
$\ceqtm{\uain{x}{\dots}}{\uain{x}{M,\uaproj{x}{M,\fst{E}}}}{\ua{x}{A,B,E}}$, so
the result follows by \cref{rul:ua-eta}.
For part (3), show
$\ceqtm{\uain{x}{\dsubst{O}{r'}{y},\Hcom{B}{r}{r'}{\uaproj{x}{M,\fst{E}}}{\etc{T}}}}%
{\dsubst{N_i}{r'}{y}}{\ua{x}{A,B,E}}$ when $r_i=r_i'$. By the above,
$\ceqtm{\uain{x}{\dots}}{\uain{x}{\dsubst{N_i}{r'}{y},\uaproj{x}{\dsubst{N_i}{r'}{y},\fst{E}}}}{\ua{x}{A,B,E}}$,
so the result again follows by \cref{rul:ua-eta}.
\end{proof}
\begin{lemma}\label{lem:ua-coe-xy}
If $\ceqtype{Kan}[\Psi,y]<x=0>{A}{A'}$,
$\ceqtype{Kan}[\Psi,y]{B}{B'}$,
$\ceqtm[\Psi,y]<x=0>{E}{E'}{\Equiv{A}{B}}$, and
$\ceqtm{M}{M'}{\dsubst{(\ua{x}{A,B,E})}{r}{y}}$ for $x\neq y$, then
$\ceqtm{\Coe{y.\ua{x}{A,B,E}}{r}{r'}{M}}%
{\Coe{y.\ua{x}{A',B',E'}}{r}{r'}{M'}}%
{\dsubst{(\ua{x}{A,B,E})}{r'}{y}}$ and
$\ceqtm{\Coe{y.\ua{x}{A,B,E}}{r}{r}{M}}{M}%
{\dsubst{(\ua{x}{A,B,E})}{r}{y}}$.
\end{lemma}
\begin{proof}
We apply coherent expansion to $\Coe{y.\ua{x}{A,B,E}}{r}{r'}{M}$ with family
\[\begin{cases}
\Coe{y.\td{A}{\psi}}{\td{r}{\psi}}{\td{r'}{\psi}}{\td{M}{\psi}}
&\text{$\td{x}{\psi} = 0$} \\
\Coe{y.\td{B}{\psi}}{\td{r}{\psi}}{\td{r'}{\psi}}{\td{M}{\psi}}
&\text{$\td{x}{\psi} = 1$} \\
\td{(\uain{x}{\Coe{y.A}{r}{r'}{M},\Com{y.B}{r}{r'}{\uaproj{x}{M,\fst{\dsubst{E}{r}{y}}}}{\etc{T}}})}{\psi}
&\text{$\td{x}{\psi} = x'$} \\
\mkspacer{\quad\etc{T}={}}
\tube{x=0}{y.\app{\fst{E}}{\Coe{y.A}{r}{y}{M}}}, &\\
\hphantom{{}={}}
\tube{x=1}{y.\Coe{y.B}{r}{y}{M}} &
\end{cases}\]
Consider $\psi=\id$.
\begin{enumerate}
\item $\coftype{\uaproj{x}{M,\fst{\dsubst{E}{r}{y}}}}{\dsubst{B}{r}{y}}$
(by $\coftype{M}{\ua{x}{\dsubst{A}{r}{y},\dots}}$),
$\ceqtm<x=0>{\uaproj{x}{M,\fst{\dsubst{E}{r}{y}}}}{\app{\fst{\dsubst{E}{r}{y}}}{M}}{\dsubst{B}{r}{y}}$,
and $\ceqtm<x=1>{\uaproj{x}{M,\fst{\dsubst{E}{r}{y}}}}{M}{\dsubst{B}{r}{y}}$.
\item $\coftype[\Psi,y]<x=0>{\app{\fst{E}}{\Coe{y.A}{r}{y}{M}}}{B}$ because
$\coftype[\Psi,y]<x=0>{\fst{E}}{\arr{A}{B}}$ and
$\coftype[\Psi,y]<x=0>{\Coe{y.A}{r}{y}{M}}{A}$
(by $\coftype[\Psi]<x=0>{M}{\dsubst{A}{r}{y}}$). Under $\dsubst{}{r}{y}$, this term
equals $\app{\fst{\dsubst{E}{r}{y}}}{M}$ in $\dsubst{B}{r}{y}$ on the restriction $x=0$.
\item $\coftype[\Psi,y]<x=1>{\Coe{y.B}{r}{y}{M}}{B}$
(by $\coftype[\Psi]<x=1>{M}{\dsubst{B}{r}{y}}$)
and $\ceqtm<x=1>{\Coe{y.B}{r}{r}{M}}{M}{\dsubst{B}{r}{y}}$.
\item Therefore $\coftype{\Com{y.B}}{\dsubst{B}{r'}{y}}$,
$\ceqtm<x=0>{\Com{y.B}}{\app{\fst{\dsubst{E}{r'}{y}}}{\Coe{y.A}{r}{r'}{M}}}{\dsubst{B}{r'}{y}}$,
and $\ceqtm<x=1>{\Com{y.B}}{\Coe{y.B}{r}{r'}{M}}{\dsubst{B}{r'}{y}}$. It
follows that
$\coftype{\uain{x}{\dots}}{\ua{x}{\dsubst{A}{r'}{y},\dsubst{B}{r'}{y},\dsubst{E}{r'}{y}}}$.
\end{enumerate}
When $\td{x}{\psi}=x'$, coherence is immediate. When $\td{x}{\psi}=0$, we have
$\ceqtm[\Psi']%
{\uain{0}{\Coe{y.\td{A}{\psi}}{\td{r}{\psi}}{\td{r'}{\psi}}{\td{M}{\psi}},\dots}}%
{\Coe{y.\td{A}{\psi}}{\td{r}{\psi}}{\td{r'}{\psi}}{\td{M}{\psi}}}%
{\dsubst{\td{A}{\psi}}{\td{r'}{\psi}}{y}}$. When $\td{x}{\psi}=1$,
$\ceqtm[\Psi']{\uain{1}{\dots}}
{\Coe{y.\td{B}{\psi}}{\td{r}{\psi}}{\td{r'}{\psi}}{\td{M}{\psi}}}
{\dsubst{\td{B}{\psi}}{\td{r'}{\psi}}{y}}$.
Therefore \cref{lem:cohexp-ceqtm} applies, and the first part follows by
the same argument on the right side.
For the second part,
$\ceqtm{\Coe{y.\ua{x}{A,B,E}}{r}{r}{M}}%
{\uain{x}{\Coe{y.A}{r}{r}{M},\Com{y.B}{r}{r}{\uaproj{x}{M,\fst{\dsubst{E}{r}{y}}}}{\etc{T}}}}%
{\dsubst{(\ua{x}{A,B,E})}{r}{y}}$, which equals
$\uain{x}{M,\uaproj{x}{M,\fst{\dsubst{E}{r}{y}}}}$ and $M$ by \cref{rul:ua-eta}.
\end{proof}
\begin{lemma}\label{lem:ua-coe-from-0}
If $\ceqtype{Kan}[\Psi,x]<x=0>{A}{A'}$,
$\ceqtype{Kan}[\Psi,x]{B}{B'}$,
$\ceqtm[\Psi,x]<x=0>{E}{E'}{\Equiv{A}{B}}$, and
$\ceqtm{M}{M'}{\dsubst{(\ua{x}{A,B,E})}{0}{x}}$, then
$\ceqtm{\Coe{x.\ua{x}{A,B,E}}{0}{r'}{M}}%
{\Coe{x.\ua{x}{A',B',E'}}{0}{r'}{M'}}%
{\dsubst{(\ua{x}{A,B,E})}{r'}{x}}$ and
$\ceqtm{\Coe{x.\ua{x}{A,B,E}}{0}{0}{M}}{M}%
{\dsubst{(\ua{x}{A,B,E})}{0}{x}}$.
\end{lemma}
\begin{proof}
By \cref{lem:expansion} on both sides, it suffices to show (the binary version of)
\[
\coftype{\uain{r'}{M,\Coe{x.B}{0}{r'}{\app{\fst{\dsubst{E}{0}{x}}}{M}}}}%
{\dsubst{(\ua{x}{A,B,E})}{r'}{x}}.
\]
By \cref{rul:ua-form-pre}, $\coftype{M}{\dsubst{A}{0}{x}}$, so
$\coftype{\app{\fst{\dsubst{E}{0}{x}}}{M}}{\dsubst{B}{0}{x}}$ and
$\coftype{\Coe{x.B}{0}{r'}{\dots}}{\dsubst{B}{r'}{x}}$.
Then $\coftype<r'=0>{M}{\dsubst{A}{r'}{x}}$ and
$\ceqtm<r'=0>{\Coe{x.B}{0}{r'}{\dots}}%
{\app{\fst{\dsubst{E}{0}{x}}}{M}}%
{\dsubst{B}{r'}{x}}$ so the first part follows by \cref{rul:ua-intro}. When
$r'=0$, $\ceqtm{\uain{0}{M,\dots}}{M}{\dsubst{(\ua{x}{A,B,E})}{0}{x}}$,
completing the second part.
\end{proof}
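In particular, setting $r'=1$ in the display above and appealing to
\cref{rul:ua-intro,rul:ua-form-pre}, coercion from $0$ to $1$ in
$\ua{x}{A,B,E}$ applies the equivalence and then coerces in $B$:
\[
\ceqtm{\Coe{x.\ua{x}{A,B,E}}{0}{1}{M}}%
{\Coe{x.B}{0}{1}{\app{\fst{\dsubst{E}{0}{x}}}{M}}}%
{\dsubst{B}{1}{x}}.
\]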
\begin{lemma}\label{lem:ua-coe-from-1}
If $\ceqtype{Kan}[\Psi,x]<x=0>{A}{A'}$,
$\ceqtype{Kan}[\Psi,x]{B}{B'}$,
$\ceqtm[\Psi,x]<x=0>{E}{E'}{\Equiv{A}{B}}$, and
$\ceqtm{N}{N'}{\dsubst{(\ua{x}{A,B,E})}{1}{x}}$, then
$\ceqtm{\Coe{x.\ua{x}{A,B,E}}{1}{r'}{N}}%
{\Coe{x.\ua{x}{A',B',E'}}{1}{r'}{N'}}%
{\dsubst{(\ua{x}{A,B,E})}{r'}{x}}$ and
$\ceqtm{\Coe{x.\ua{x}{A,B,E}}{1}{1}{N}}{N}{\dsubst{(\ua{x}{A,B,E})}{1}{x}}$.
\end{lemma}
\begin{proof}
By \cref{lem:expansion} on both sides, it suffices to show (the binary version of)
$\coftype{\uain{r'}{\fst{O},P}}{\dsubst{(\ua{x}{A,B,E})}{r'}{x}}$ where
\begin{gather*}
O = \fst{\app{\snd{\dsubst{E}{r'}{x}}}{\Coe{x.B}{1}{r'}{N}}} \\
P = \Hcom{\dsubst{B}{r'}{x}}{1}{0}{\Coe{x.B}{1}{r'}{N}}{
\tube{r'=0}{y.\dapp{\snd{O}}{y}},
\tube{r'=1}{\_.\Coe{x.B}{1}{r'}{N}}}.
\end{gather*}
By \cref{rul:ua-form-pre}, $\coftype{N}{\dsubst{B}{1}{x}}$, so
$\coftype{\Coe{x.B}{1}{r'}{N}}{\dsubst{B}{r'}{x}}$ and
\[
\coftype{O}{\sigmacl{a}{\dsubst{A}{r'}{x}}%
{\Path{\_.\dsubst{B}{r'}{x}}{\app{\fst{\dsubst{E}{r'}{x}}}{a}}{\Coe{x.B}{1}{r'}{N}}}}.
\]
Therefore $\coftype[\Psi,y]<r'=0>{\dapp{\snd{O}}{y}}{\dsubst{B}{r'}{x}}$ and
$\ceqtm<r'=0>{\dapp{\snd{O}}{1}}{\Coe{x.B}{1}{r'}{N}}{\dsubst{B}{r'}{x}}$, so
by $\cwftype{Kan}{\dsubst{B}{r'}{x}}$, $\coftype{P}{\dsubst{B}{r'}{x}}$. We also have
$\coftype<r'=0>{\fst{O}}{\dsubst{A}{r'}{x}}$ and
$\ceqtm<r'=0>{\app{\fst{\dsubst{E}{r'}{x}}}{\fst{O}}}{P}{\dsubst{B}{r'}{x}}$
(by
$\ceqtm<r'=0>{\dapp{\snd{O}}{0}}{\app{\fst{\dsubst{E}{r'}{x}}}{\fst{O}}}{\dsubst{B}{r'}{x}}$)
so the first part follows by \cref{rul:ua-intro}. When $r'=1$,
$\ceqtm{\uain{1}{\fst{O},P}}{P}{\dsubst{(\ua{x}{A,B,E})}{1}{x}}$, but $P \ensuremath{\mathbin{\doteq}}
\Coe{x.B}{1}{r'}{N} \ensuremath{\mathbin{\doteq}} N$, completing the second part.
\end{proof}
\begin{lemma}\label{lem:ua-coe-from-y}
If $\ceqtype{Kan}[\Psi,x]<x=0>{A}{A'}$,
$\ceqtype{Kan}[\Psi,x]{B}{B'}$,
$\ceqtm[\Psi,x]<x=0>{E}{E'}{\Equiv{A}{B}}$, and
$\ceqtm{M}{M'}{\dsubst{(\ua{x}{A,B,E})}{y}{x}}$, then
$\ceqtm{\Coe{x.\ua{x}{A,B,E}}{y}{r'}{M}}%
{\Coe{x.\ua{x}{A',B',E'}}{y}{r'}{M'}}%
{\dsubst{(\ua{x}{A,B,E})}{r'}{x}}$ and
$\ceqtm{\Coe{x.\ua{x}{A,B,E}}{y}{y}{M}}{M}%
{\dsubst{(\ua{x}{A,B,E})}{y}{x}}$.
\end{lemma}
\begin{proof}
We apply coherent expansion to $\Coe{x.\ua{x}{A,B,E}}{y}{r'}{M}$ with the family
$\Coe{x.\ua{x}{\td{A}{\psi},\td{B}{\psi},\td{E}{\psi}}}{\ensuremath{\varepsilon}}{\td{r'}{\psi}}{\td{M}{\psi}}$
when $\td{y}{\psi} = \ensuremath{\varepsilon}$ and
$\td{(\uain{r'}{\fst{R},\Hcom{\dsubst{B}{r'}{x}}{1}{0}{\dsubst{P}{r'}{x}}{\etc{T}}})}{\psi}$
otherwise, where
\begin{gather*}
O_\ensuremath{\varepsilon} = \uaproj{w}{\Coe{x.\ua{x}{A,B,E}}{\ensuremath{\varepsilon}}{w}{M},\fst{\dsubst{E}{w}{x}}} \\
P = \Com{x.B}{y}{x}{\uaproj{y}{M,\fst{\dsubst{E}{y}{x}}}}{\etc{\tube{y=\ensuremath{\varepsilon}}{w.O_\ensuremath{\varepsilon}}}} \\
Q_\ensuremath{\varepsilon}[a] = \pair%
{\Coe{y.\dsubst{A}{0}{x}}{\ensuremath{\varepsilon}}{y}{a}}%
{\dlam{z}{\Com{y.\dsubst{B}{0}{x}}{\ensuremath{\varepsilon}}{y}{\dsubst{\dsubst{P}{0}{x}}{\ensuremath{\varepsilon}}{y}}%
{\etc{U}}}} \\
\etc{U} =
\tube{z=0}{y.\app{\fst{\dsubst{E}{0}{x}}}{\Coe{y.\dsubst{A}{0}{x}}{\ensuremath{\varepsilon}}{y}{a}}},
\tube{z=1}{y.\dsubst{P}{0}{x}} \\
R = \dapp{\app{\app{\snd{\app{\snd{\dsubst{E}{0}{x}}}{\dsubst{P}{0}{x}}}}{Q_0[\dsubst{M}{0}{y}]}}%
{Q_1[\dsubst{(\Coe{x.\ua{x}{A,B,E}}{1}{0}{M})}{1}{y}]}}{y} \\
\etc{T} =
\etc{\tube{y=\ensuremath{\varepsilon}}{\_.\dsubst{O_\ensuremath{\varepsilon}}{r'}{w}}},
\tube{y=r'}{\_.\uaproj{r'}{M,\fst{\dsubst{E}{r'}{x}}}},
\tube{r'=0}{z.\dapp{\snd{R}}{z}}.
\end{gather*}
Consider $\psi=\id$.
\begin{enumerate}
\item $\coftype[\Psi,w]<y=\ensuremath{\varepsilon}>{O_\ensuremath{\varepsilon}}{\dsubst{B}{w}{x}}$
by $\coftype[\Psi,w]<y=\ensuremath{\varepsilon}>{\Coe{x.\ua{x}{A,B,E}}{\ensuremath{\varepsilon}}{w}{M}}%
{\ua{w}{\dsubst{A}{w}{x},\dots}}$ (by
$\coftype{M}{\ua{y}{\dsubst{A}{y}{x},\dots}}$) and
$\coftype[\Psi,w]<w=0>{\fst{\dsubst{E}{w}{x}}}{\arr{\dsubst{A}{w}{x}}{\dsubst{B}{w}{x}}}$.
\item $\coftype[\Psi,x]{P}{B}$ by
$\coftype{\uaproj{y}{M,\fst{\dsubst{E}{y}{x}}}}{\dsubst{B}{y}{x}}$ and
$\ceqtm<y=\ensuremath{\varepsilon}>{\dsubst{O_\ensuremath{\varepsilon}}{y}{w}}{\uaproj{y}{M,\fst{\dsubst{E}{y}{x}}}}%
{\dsubst{B}{y}{x}}$.
\item Let $C =
\sigmacl{a'}{\dsubst{A}{0}{x}}{\Path{\_.\dsubst{B}{0}{x}}{\app{\fst{\dsubst{E}{0}{x}}}{a'}}{\dsubst{P}{0}{x}}}$.
Then $\coftype{Q_\ensuremath{\varepsilon}[a]}{C}$ for any
$\coftype{a}{\dsubst{\dsubst{A}{0}{x}}{\ensuremath{\varepsilon}}{y}}$ with $y\ensuremath{\mathbin{\#}} a$ and
$\ceqtm{\dsubst{\dsubst{P}{0}{x}}{\ensuremath{\varepsilon}}{y}}%
{\app{\fst{\dsubst{\dsubst{E}{0}{x}}{\ensuremath{\varepsilon}}{y}}}{a}}%
{\dsubst{\dsubst{B}{0}{x}}{\ensuremath{\varepsilon}}{y}}$, because
$\coftype{\Coe{y.\dsubst{A}{0}{x}}{\ensuremath{\varepsilon}}{y}{a}}{\dsubst{A}{0}{x}}$ and by
\begin{enumerate}
\item
$\coftype{\dsubst{\dsubst{P}{0}{x}}{\ensuremath{\varepsilon}}{y}}{\dsubst{\dsubst{B}{0}{x}}{\ensuremath{\varepsilon}}{y}}$,
\item
$\coftype{\app{\fst{\dsubst{E}{0}{x}}}{\Coe{y.\dsubst{A}{0}{x}}{\ensuremath{\varepsilon}}{y}{a}}}{\dsubst{B}{0}{x}}$,
\item
$\ceqtm{\dsubst{\dsubst{P}{0}{x}}{\ensuremath{\varepsilon}}{y}}%
{\app{\fst{\dsubst{\dsubst{E}{0}{x}}{\ensuremath{\varepsilon}}{y}}}{\Coe{y.\dsubst{A}{0}{x}}{\ensuremath{\varepsilon}}{\ensuremath{\varepsilon}}{a}}}%
{\dsubst{\dsubst{B}{0}{x}}{\ensuremath{\varepsilon}}{y}}$, and
\item $\coftype{\dsubst{P}{0}{x}}{\dsubst{B}{0}{x}}$,
\end{enumerate}
we have
$\coftype{\dlam{z}{\Com}}%
{\Path{\_.\dsubst{B}{0}{x}}%
{\app{\fst{\dsubst{E}{0}{x}}}{\Coe{y.\dsubst{A}{0}{x}}{\ensuremath{\varepsilon}}{y}{a}}}%
{\dsubst{P}{0}{x}}}$.
\item
$\coftype{Q_0[\dsubst{M}{0}{y}]}{C}$ because
$\coftype{\dsubst{M}{0}{y}}{\dsubst{\dsubst{A}{0}{x}}{0}{y}}$ and
$\dsubst{\dsubst{P}{0}{x}}{0}{y}$ $\ensuremath{\mathbin{\doteq}}$
$\dsubst{\dsubst{O_0}{0}{w}}{0}{y}$ $\ensuremath{\mathbin{\doteq}}$
$\app{\fst{\dsubst{\dsubst{E}{0}{x}}{0}{y}}}{\dsubst{M}{0}{y}}$.
\item
$\coftype{Q_1[\dsubst{(\Coe{x.\ua{x}{A,B,E}}{1}{0}{M})}{1}{y}]}{C}$ because
$\coftype{\dsubst{(\Coe{x.\ua{x}{A,B,E}}{1}{0}{M})}{1}{y}}%
{\dsubst{\dsubst{A}{0}{x}}{1}{y}}$ (by
$\coftype{\dsubst{M}{1}{y}}{\dsubst{\dsubst{B}{1}{x}}{1}{y}}$) and
$\dsubst{\dsubst{P}{0}{x}}{1}{y}$ $\ensuremath{\mathbin{\doteq}}$
$\dsubst{\dsubst{O_1}{0}{w}}{1}{y}$ which in turn equals
$\app{\fst{\dsubst{\dsubst{E}{0}{x}}{1}{y}}}{\dsubst{(\Coe{x.\ua{x}{A,B,E}}{1}{0}{M})}{1}{y}}$.
\item $\coftype{R}{C}$ because
$\coftype{\snd{\app{\snd{\dsubst{E}{0}{x}}}{\dsubst{P}{0}{x}}}}%
{(\picl{c}{C}{\picl{c'}{C}{\Path{\_.C}{c}{c'}}})}$ and we further apply this to
$Q_0[\dsubst{M}{0}{y}]$, $Q_1[\dsubst{(\Coe{x.\ua{x}{A,B,E}}{1}{0}{M})}{1}{y}]$,
and $y$.
\item
$\coftype{\Hcom{\dsubst{B}{r'}{x}}{1}{0}{\dsubst{P}{r'}{x}}{\etc{T}}}{\dsubst{B}{r'}{x}}$
because
\begin{enumerate}
\item $\coftype<y=\ensuremath{\varepsilon}>{\dsubst{O_\ensuremath{\varepsilon}}{r'}{w}}{\dsubst{B}{r'}{x}}$,
\item
$\coftype<y=r'>{\uaproj{r'}{M,\fst{\dsubst{E}{r'}{x}}}}{\dsubst{B}{r'}{x}}$,
\item $\coftype[\Psi,z]<r'=0>{\dapp{\snd{R}}{z}}{\dsubst{B}{r'}{x}}$ by
$\coftype[\Psi,z]{\dapp{\snd{R}}{z}}{\dsubst{B}{0}{x}}$,
\item $\coftype{\dsubst{P}{r'}{x}}{\dsubst{B}{r'}{x}}$,
\item $\ceqtm<y=\ensuremath{\varepsilon}>{\dsubst{P}{r'}{x}}{\dsubst{O_\ensuremath{\varepsilon}}{r'}{w}}{\dsubst{B}{r'}{x}}$,
\item $\ceqtm<y=r'>{\dsubst{P}{r'}{x}}%
{\uaproj{r'}{M,\fst{\dsubst{E}{r'}{x}}}}{\dsubst{B}{r'}{x}}$,
\item $\ceqtm<r'=0>{\dsubst{P}{r'}{x}}{\dapp{\snd{R}}{1}}{\dsubst{B}{r'}{x}}$
by $\ceqtm{\dapp{\snd{R}}{1}}{\dsubst{P}{0}{x}}{\dsubst{B}{0}{x}}$,
\item $\ceqtm<y=\ensuremath{\varepsilon},y=r'>{\dsubst{O_\ensuremath{\varepsilon}}{r'}{w}}%
{\uaproj{r'}{M,\fst{\dsubst{E}{r'}{x}}}}{\dsubst{B}{r'}{x}}$,
\item $\ceqtm[\Psi,z]<y=0,r'=0>{\dsubst{O_0}{r'}{w}}%
{\dapp{\snd{R}}{z}}{\dsubst{B}{r'}{x}}$ by
$\dapp{\snd{R}}{z}$ $\ensuremath{\mathbin{\doteq}}$
$\dapp{\snd{Q_0[\dsubst{M}{0}{y}]}}{z}$ $\ensuremath{\mathbin{\doteq}}$
$\dapp{(\dlam{z}{\dsubst{\dsubst{P}{0}{x}}{0}{y}})}{z}$ $\ensuremath{\mathbin{\doteq}}$
$\dsubst{O_0}{0}{w}$,
\item $\ceqtm[\Psi,z]<y=1,r'=0>{\dsubst{O_1}{r'}{w}}%
{\dapp{\snd{R}}{z}}{\dsubst{B}{r'}{x}}$ because we have
$\dapp{\snd{R}}{z}$ $\ensuremath{\mathbin{\doteq}}$
$\dapp{\snd{Q_1[\dsubst{(\Coe{x.\ua{x}{A,B,E}}{1}{0}{M})}{1}{y}]}}{z}$ $\ensuremath{\mathbin{\doteq}}$
$\dapp{(\dlam{z}{\dsubst{\dsubst{P}{0}{x}}{1}{y}})}{z}$ $\ensuremath{\mathbin{\doteq}}$
$\dsubst{O_1}{0}{w}$, and
\item
$\ceqtm[\Psi,z]<y=r',r'=0>{\uaproj{r'}{M,\fst{\dsubst{E}{r'}{x}}}}%
{\dapp{\snd{R}}{z}}{\dsubst{B}{r'}{x}}$ because
$\dapp{\snd{R}}{z}$ $\ensuremath{\mathbin{\doteq}}$
$\dapp{\snd{Q_0[\dsubst{M}{0}{y}]}}{z}$ $\ensuremath{\mathbin{\doteq}}$
$\dsubst{\dsubst{P}{0}{x}}{0}{y}$ $\ensuremath{\mathbin{\doteq}}$
$\uaproj{y}{M,\fst{\dsubst{E}{y}{x}}}$.
\end{enumerate}
\item
$\coftype{\uain{r'}{\fst{R},\Hcom{\dsubst{B}{r'}{x}}}}{\ua{r'}{\dsubst{A}{r'}{x},\dots}}$
because $\coftype<r'=0>{\fst{R}}{\dsubst{A}{0}{x}}$,
$\coftype{\Hcom{\dsubst{B}{r'}{x}}}{\dsubst{B}{r'}{x}}$, and
$\ceqtm<r'=0>{\app{\fst{\dsubst{E}{r'}{x}}}{\fst{R}}}%
{\Hcom{\dsubst{B}{r'}{x}}}{\dsubst{B}{r'}{x}}$ (by
$\Hcom{\dsubst{B}{r'}{x}}$ $\ensuremath{\mathbin{\doteq}}$ $\dapp{\snd{R}}{0}$).
\end{enumerate}
When $\td{y}{\psi}=y'$, coherence is immediate.
When $\td{y}{\psi}=\ensuremath{\varepsilon}$, we prove coherence by \cref{rul:ua-eta}, using
$\ceqtm<y=\ensuremath{\varepsilon}>{\Hcom{\dsubst{B}{r'}{x}}}%
{\uaproj{r'}{\Coe{x.\ua{x}{A,B,E}}{\ensuremath{\varepsilon}}{r'}{M},\fst{\dsubst{E}{r'}{x}}}}%
{\dsubst{B}{r'}{x}}$ (by $\ensuremath{\mathbin{\doteq}}$ $\dsubst{O_\ensuremath{\varepsilon}}{r'}{w}$),
$\ceqtm<y=0,r'=0>{\fst{R}}{M}{\dsubst{A}{r'}{x}}$ (by $\ensuremath{\mathbin{\doteq}}$
$\fst{Q_0[\dsubst{M}{0}{y}]}$), and
$\ceqtm<y=1,r'=0>{\fst{R}}{\Coe{x.\ua{x}{A,B,E}}{1}{0}{M}}{\dsubst{A}{r'}{x}}$
(by $\ensuremath{\mathbin{\doteq}}$ $\fst{Q_1[\dsubst{(\Coe{x.\ua{x}{A,B,E}}{1}{0}{M})}{1}{y}]}$).
Therefore \cref{lem:cohexp-ceqtm} applies, and the first part follows by the
same argument on the right side.
The second part follows by \cref{rul:ua-eta},
$\ceqtm<y=r'>{\Hcom{\dsubst{B}{r'}{x}}}%
{\uaproj{r'}{M,\fst{\dsubst{E}{r'}{x}}}}%
{\dsubst{B}{r'}{x}}$, and
$\ceqtm<y=r',r'=0>{\fst{R}}{M}{\dsubst{A}{r'}{x}}$ (as calculated previously).
\end{proof}
\begin{rul}[Kan type formation]
~\begin{enumerate}
\item If $\cwftype{Kan}{A}$ then $\ceqtype{Kan}{\ua{0}{A,B,E}}{A}$.
\item If $\cwftype{Kan}{B}$ then $\ceqtype{Kan}{\ua{1}{A,B,E}}{B}$.
\item If $\ceqtype{Kan}<r=0>{A}{A'}$,
$\ceqtype{Kan}{B}{B'}$, and
$\ceqtm<r=0>{E}{E'}{\Equiv{A}{B}}$, then
$\ceqtype{Kan}{\ua{r}{A,B,E}}{\ua{r}{A',B',E'}}$.
\end{enumerate}
\end{rul}
\begin{proof}
Parts (1--2) follow from \cref{lem:expansion}. For part (3), we check the
Kan conditions.
($\Hcom$) For any $\tds{\Psi'}{\psi}{\Psi}$, consider a valid composition scenario in
$\ua{\td{r}{\psi}}{\td{A}{\psi},\td{B}{\psi},\td{E}{\psi}}$. If $\td{r}{\psi}=0$
(resp., $1$) then the composition is in $\td{A}{\psi}$ (resp., $\td{B}{\psi}$)
and the $\Hcom$ Kan conditions follow from
$\ceqtype{Kan}[\Psi']{\td{A}{\psi}}{\td{A'}{\psi}}$ (resp.,
$\ceqtype{Kan}[\Psi']{\td{B}{\psi}}{\td{B'}{\psi}}$). Otherwise, $\td{r}{\psi}=x$ and
the $\Hcom$ Kan conditions follow from \cref{lem:ua-hcom} at
$\ceqtype{Kan}[\Psi']<x=0>{\td{A}{\psi}}{\td{A'}{\psi}}$,
$\ceqtype{Kan}[\Psi']{\td{B}{\psi}}{\td{B'}{\psi}}$, and
$\ceqtm[\Psi']<x=0>{\td{E}{\psi}}{\td{E'}{\psi}}{\Equiv{\td{A}{\psi}}{\td{B}{\psi}}}$.
($\Coe$) Consider any $\tds{(\Psi',x)}{\psi}{\Psi}$ and
$\ceqtm[\Psi']{M}{M'}{\dsubst{\td{(\ua{r}{A,B,E})}{\psi}}{s}{x}}$. If
$\td{r}{\psi}=0$ (resp., $1$) then the $\Coe$ Kan conditions follow from
$\ceqtype{Kan}[\Psi',x]{\td{A}{\psi}}{\td{A'}{\psi}}$
(resp., $\ceqtype{Kan}[\Psi',x]{\td{B}{\psi}}{\td{B'}{\psi}}$).
If $\td{r}{\psi}=y\neq x$, then the $\Coe$ Kan conditions follow from
\cref{lem:ua-coe-xy}. Otherwise, $\td{r}{\psi}=x$; the result follows
from \cref{lem:ua-coe-from-0} if $s=0$,
from \cref{lem:ua-coe-from-1} if $s=1$, and
from \cref{lem:ua-coe-from-y} if $s=y$.
\end{proof}
\subsection{Void}
Let $\tau=\Kan\mu(\nu)$ or $\pre\mu(\nu,\sigma)$ for any cubical type systems
$\nu,\sigma$; we have
$\tau(\Psi,\ensuremath{\mathsf{void}},\ensuremath{\mathsf{void}},\phi)$ for $\phi$ the empty relation. By
$\sisval{\ensuremath{\mathsf{void}}}$, $\ensuremath{\mathsf{PTy}}(\tau)(\Psi,\ensuremath{\mathsf{void}},\ensuremath{\mathsf{void}},\alpha)$ where each
$\alpha_{\Psi'}$ is empty.
\begin{rul}[Pretype formation]
$\cwftype{pre}{\ensuremath{\mathsf{void}}}$.
\end{rul}
\begin{proof}
We have already observed $\ensuremath{\mathsf{PTy}}(\tau)(\Psi,\ensuremath{\mathsf{void}},\ensuremath{\mathsf{void}},\vper{\ensuremath{\mathsf{void}}})$;
$\ensuremath{\mathsf{Coh}}(\vper{\ensuremath{\mathsf{void}}})$ trivially because each $\vper{\ensuremath{\mathsf{void}}}_{\Psi'}$ is empty.
\end{proof}
\begin{rul}[Elimination]\label{rul:void-elim}
It is never the case that $\coftype{M}{\ensuremath{\mathsf{void}}}$.
\end{rul}
\begin{proof}
If $\ensuremath{\mathsf{Tm}}(\vper{\ensuremath{\mathsf{void}}})(M,M)$ then $\lift{\vper{\ensuremath{\mathsf{void}}}}_\Psi(M,M)$, but
$\lift{\vper{\ensuremath{\mathsf{void}}}}_\Psi$ is empty.
\end{proof}
If $\oftype{\ensuremath{\Gamma}}{M}{\ensuremath{\mathsf{void}}}$ then it must be impossible to produce elements of
each pretype in $\ensuremath{\Gamma}$, in which case every (non-context-restricted) judgment
holds under $\ensuremath{\Gamma}$. In \cref{sec:rules}, we say that if $\coftype{M}{\ensuremath{\mathsf{void}}}$ then
$\judg{\ensuremath{\mathcal{J}}}$.
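For example, from the hypothesis $\coftype{M}{\ensuremath{\mathsf{void}}}$ one may conclude
$\ceqtm{\ensuremath{\mathsf{true}}}{\ensuremath{\mathsf{false}}}{\ensuremath{\mathsf{bool}}}$; the implication holds vacuously, as
\cref{rul:void-elim} shows its hypothesis is never satisfied.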
\begin{rul}[Kan type formation]
$\cwftype{Kan}{\ensuremath{\mathsf{void}}}$.
\end{rul}
\begin{proof}
It suffices to check the five Kan conditions. Each condition supposes that
$\ceqtm[\Psi']{M}{M'}{\ensuremath{\mathsf{void}}}$, which is impossible by \cref{rul:void-elim}, so each holds vacuously.
\end{proof}
\subsection{Booleans}
Let $\tau=\Kan\mu(\nu)$ or $\pre\mu(\nu,\sigma)$ for any cubical type systems
$\nu,\sigma$; we have
$\tau(\Psi,\ensuremath{\mathsf{bool}},\ensuremath{\mathsf{bool}},\phi)$ for $\phi=\{(\ensuremath{\mathsf{true}},\ensuremath{\mathsf{true}}),(\ensuremath{\mathsf{false}},\ensuremath{\mathsf{false}})\}$.
By $\sisval{\ensuremath{\mathsf{bool}}}$, $\ensuremath{\mathsf{PTy}}(\tau)(\Psi,\ensuremath{\mathsf{bool}},\ensuremath{\mathsf{bool}},\alpha)$ where each
$\alpha_{\Psi'} = \phi$.
\begin{rul}[Pretype formation]
$\cwftype{pre}{\ensuremath{\mathsf{bool}}}$.
\end{rul}
\begin{proof}
We have already observed $\ensuremath{\mathsf{PTy}}(\tau)(\Psi,\ensuremath{\mathsf{bool}},\ensuremath{\mathsf{bool}},\vper{\ensuremath{\mathsf{bool}}})$; for
$\ensuremath{\mathsf{Coh}}(\vper{\ensuremath{\mathsf{bool}}})$ we must show that $\ensuremath{\mathsf{Tm}}(\vper{\ensuremath{\mathsf{bool}}})(\ensuremath{\mathsf{true}},\ensuremath{\mathsf{true}})$ and
$\ensuremath{\mathsf{Tm}}(\vper{\ensuremath{\mathsf{bool}}})(\ensuremath{\mathsf{false}},\ensuremath{\mathsf{false}})$. These hold by
$\sisval{\ensuremath{\mathsf{true}}}$, $\vper{\ensuremath{\mathsf{bool}}}_{\Psi'}(\ensuremath{\mathsf{true}},\ensuremath{\mathsf{true}})$,
$\sisval{\ensuremath{\mathsf{false}}}$, and $\vper{\ensuremath{\mathsf{bool}}}_{\Psi'}(\ensuremath{\mathsf{false}},\ensuremath{\mathsf{false}})$.
\end{proof}
\begin{rul}[Introduction]
$\coftype{\ensuremath{\mathsf{true}}}{\ensuremath{\mathsf{bool}}}$ and $\coftype{\ensuremath{\mathsf{false}}}{\ensuremath{\mathsf{bool}}}$.
\end{rul}
\begin{proof}
Immediate by $\ensuremath{\mathsf{Coh}}(\vper{\ensuremath{\mathsf{bool}}})$.
\end{proof}
\begin{rul}[Computation]\label{rul:bool-comp}
If $\coftype{T}{B}$ then $\ceqtm{\ifb{b.A}{\ensuremath{\mathsf{true}}}{T}{F}}{T}{B}$.
If $\coftype{F}{B}$ then $\ceqtm{\ifb{b.A}{\ensuremath{\mathsf{false}}}{T}{F}}{F}{B}$.
\end{rul}
\begin{proof}
Immediate by $\ifb{b.A}{\ensuremath{\mathsf{true}}}{T}{F}\ensuremath{\steps_\stable} T$, $\ifb{b.A}{\ensuremath{\mathsf{false}}}{T}{F}\ensuremath{\steps_\stable}
F$, and \cref{lem:expansion}.
\end{proof}
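For example, instantiating these rules with $T=\ensuremath{\mathsf{false}}$, $F=\ensuremath{\mathsf{true}}$, and
$B=\ensuremath{\mathsf{bool}}$ computes boolean negation on the constructors, for any motive $b.A$:
\[
\ceqtm{\ifb{b.A}{\ensuremath{\mathsf{true}}}{\ensuremath{\mathsf{false}}}{\ensuremath{\mathsf{true}}}}{\ensuremath{\mathsf{false}}}{\ensuremath{\mathsf{bool}}}
\qquad\text{and}\qquad
\ceqtm{\ifb{b.A}{\ensuremath{\mathsf{false}}}{\ensuremath{\mathsf{false}}}{\ensuremath{\mathsf{true}}}}{\ensuremath{\mathsf{true}}}{\ensuremath{\mathsf{bool}}}.
\]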
\begin{rul}[Elimination]\label{rul:bool-elim}
If $\ceqtm{M}{M'}{\ensuremath{\mathsf{bool}}}$,
$\wftype{pre}{\oft{b}{\ensuremath{\mathsf{bool}}}}{C}$,
$\ceqtm{T}{T'}{\subst{C}{\ensuremath{\mathsf{true}}}{b}}$, and
$\ceqtm{F}{F'}{\subst{C}{\ensuremath{\mathsf{false}}}{b}}$, then
$\ceqtm{\ifb{b.A}{M}{T}{F}}{\ifb{b.A'}{M'}{T'}{F'}}{\subst{C}{M}{b}}$.
\end{rul}
\begin{proof}
Apply coherent expansion to the left side with
$\{\ifb{b.\td{A}{\psi}}{M_\psi}{\td{T}{\psi}}{\td{F}{\psi}} \mid
\td{M}{\psi} \ensuremath{\Downarrow} M_\psi\}^{\Psi'}_\psi$. We must show
$\ceqtm[\Psi']%
{\ifb{b.\td{A}{\psi}}{M_\psi}{\td{T}{\psi}}{\td{F}{\psi}}}%
{\ifb{b.\td{A}{\psi}}{\td{(M_{\id})}{\psi}}{\td{T}{\psi}}{\td{F}{\psi}}}%
{\subst{\td{C}{\psi}}{\td{M}{\psi}}{b}}$.
Either $M_\psi=\ensuremath{\mathsf{true}}$ or $M_\psi=\ensuremath{\mathsf{false}}$. In either case $M_{\id}=M_\psi$
because $\lift{\vper{\ensuremath{\mathsf{bool}}}}_{\Psi'}(\td{(M_{\id})}{\psi},M_\psi)$ and
$M_{\id}=\ensuremath{\mathsf{true}}$ or $M_{\id}=\ensuremath{\mathsf{false}}$. Consider the case $M_\psi=\ensuremath{\mathsf{true}}$:
we must show
$\coftype[\Psi']{\ifb{b.\td{A}{\psi}}{\ensuremath{\mathsf{true}}}{\td{T}{\psi}}{\td{F}{\psi}}}%
{\subst{\td{C}{\psi}}{\td{M}{\psi}}{b}}$.
By \cref{lem:coftype-evals-ceqtm} we have
$\ceqtm[\Psi']{\td{M}{\psi}}{\ensuremath{\mathsf{true}}}{\ensuremath{\mathsf{bool}}}$ so
$\ceqtype{pre}[\Psi']{\subst{\td{C}{\psi}}{\td{M}{\psi}}{b}}{\subst{\td{C}{\psi}}{\ensuremath{\mathsf{true}}}{b}}$.
The result follows by \cref{rul:bool-comp} (with
$B=\subst{\td{C}{\psi}}{\ensuremath{\mathsf{true}}}{b}$). The $M_\psi=\ensuremath{\mathsf{false}}$ case is symmetric.
We conclude by \cref{lem:cohexp-ceqtm} that
$\ceqtm{\ifb{b.A}{M}{T}{F}}{\ifb{b.A}{M_{\id}}{T}{F}}{\subst{C}{M}{b}}$. By
transitivity, \cref{lem:coftype-evals-ceqtm}, and the same argument on the
right, it suffices to show
$\ceqtm{\ifb{b.A}{M_{\id}}{T}{F}}{\ifb{b.A'}{M'_{\id}}{T'}{F'}}{\subst{C}{M_{\id}}{b}}$.
By $\ceqtm{M}{M'}{\ensuremath{\mathsf{bool}}}$, either $M_{\id}=M'_{\id}=\ensuremath{\mathsf{true}}$ or
$M_{\id}=M'_{\id}=\ensuremath{\mathsf{false}}$, and in either case the result follows by
\cref{rul:bool-comp} on both sides.
\end{proof}
Notice that \cref{rul:bool-elim} places no restrictions on the motives $b.A$ and
$b.A'$; these motives are only relevant in the elimination rule for $\ensuremath{\mathsf{wbool}}$.
\begin{lemma}\label{lem:bool-discrete}
If $\coftype[\Psi,y]{M}{\ensuremath{\mathsf{bool}}}$ then
$\ceqtm{\dsubst{M}{r}{y}}{\dsubst{M}{r'}{y}}{\ensuremath{\mathsf{bool}}}$.
\end{lemma}
\begin{proof}
By $\lift{\vper{\ensuremath{\mathsf{bool}}}}_{(\Psi,y)}(M,M)$ we know $M\ensuremath{\Downarrow}\ensuremath{\mathsf{true}}$ or
$M\ensuremath{\Downarrow}\ensuremath{\mathsf{false}}$, so by \cref{lem:coftype-evals-ceqtm} either
$\ceqtm[\Psi,y]{M}{\ensuremath{\mathsf{true}}}{\ensuremath{\mathsf{bool}}}$ or $\ceqtm[\Psi,y]{M}{\ensuremath{\mathsf{false}}}{\ensuremath{\mathsf{bool}}}$. In the
former case, both $\ceqtm{\dsubst{M}{r}{y}}{\ensuremath{\mathsf{true}}}{\ensuremath{\mathsf{bool}}}$ and
$\ceqtm{\dsubst{M}{r'}{y}}{\ensuremath{\mathsf{true}}}{\ensuremath{\mathsf{bool}}}$, and similarly in the latter case.
\end{proof}
\begin{rul}[Kan type formation]\label{rul:bool-form-kan}
$\cwftype{Kan}{\ensuremath{\mathsf{bool}}}$.
\end{rul}
\begin{proof}
It suffices to check the five Kan conditions.
($\Hcom$) Suppose that
\begin{enumerate}
\item $\etc{r_i=r_i'}$ is valid,
\item $\ceqtm[\Psi']{M}{M'}{\ensuremath{\mathsf{bool}}}$,
\item $\ceqtm[\Psi',y]<r_i=r_i',r_j=r_j'>{N_i}{N_j'}{\ensuremath{\mathsf{bool}}}$ for any $i,j$, and
\item $\ceqtm[\Psi']<r_i=r_i'>{\dsubst{N_i}{r}{y}}{M}{\ensuremath{\mathsf{bool}}}$ for any $i$,
\end{enumerate}
and show $\ceqtm[\Psi']{\Hcom*{\ensuremath{\mathsf{bool}}}{r_i=r_i'}}%
{\Hcom{\ensuremath{\mathsf{bool}}}{r}{r'}{M'}{\sys{r_i=r_i'}{y.N_i'}}}{\ensuremath{\mathsf{bool}}}$.
This is immediate by \cref{lem:expansion} on both sides, because
$\Hcom*{\ensuremath{\mathsf{bool}}}{r_i=r_i'}\ensuremath{\steps_\stable} M$ and $\ceqtm[\Psi']{M}{M'}{\ensuremath{\mathsf{bool}}}$.
Similarly, if $r=r'$ it is immediate that
$\ceqtm[\Psi']{\Hcom*{\ensuremath{\mathsf{bool}}}{r_i=r_i'}}{M}{\ensuremath{\mathsf{bool}}}$.
Now suppose that $r_i = r_i'$, and show
$\ceqtm[\Psi']{\Hcom*{\ensuremath{\mathsf{bool}}}{r_i=r_i'}}{\dsubst{N_i}{r'}{y}}{\ensuremath{\mathsf{bool}}}$.
By \cref{lem:expansion} it suffices to show
$\ceqtm[\Psi']{M}{\dsubst{N_i}{r'}{y}}{\ensuremath{\mathsf{bool}}}$, which holds by
$\ceqtm[\Psi']{M}{\dsubst{N_i}{r}{y}}{\ensuremath{\mathsf{bool}}}$ and \cref{lem:bool-discrete}.
($\Coe$) Suppose that $\ceqtm[\Psi']{M}{M'}{\ensuremath{\mathsf{bool}}}$, and show that
$\ceqtm[\Psi']{\Coe*{x.\ensuremath{\mathsf{bool}}}}{\Coe{x.\ensuremath{\mathsf{bool}}}{r}{r'}{M'}}{\ensuremath{\mathsf{bool}}}$. This is
immediate by \cref{lem:expansion} on both sides, because $\Coe*{x.\ensuremath{\mathsf{bool}}}\ensuremath{\steps_\stable}
M$ and $\ceqtm[\Psi']{M}{M'}{\ensuremath{\mathsf{bool}}}$. Similarly, if $r=r'$ it is immediate that
$\ceqtm[\Psi']{\Coe*{x.\ensuremath{\mathsf{bool}}}}{M}{\ensuremath{\mathsf{bool}}}$.
\end{proof}
\subsection{Natural numbers}
Let $\tau=\Kan\mu(\nu)$ or $\pre\mu(\nu,\sigma)$ for any cubical type systems
$\nu,\sigma$; we have
$\tau(\Psi,\ensuremath{\mathsf{nat}},\ensuremath{\mathsf{nat}},\mathbb{N}_\Psi)$, where $\mathbb{N}$ is the least
context-indexed relation such that $\mathbb{N}_\Psi(\ensuremath{\mathsf{z}},\ensuremath{\mathsf{z}})$ and
$\mathbb{N}_\Psi(\suc{M},\suc{M'})$ when $\ensuremath{\mathsf{Tm}}(\mathbb{N}(\Psi))(M,M')$. By
$\sisval{\ensuremath{\mathsf{nat}}}$, $\ensuremath{\mathsf{PTy}}(\tau)(\Psi,\ensuremath{\mathsf{nat}},\ensuremath{\mathsf{nat}},\mathbb{N}(\Psi))$.
\begin{rul}[Pretype formation]
$\cwftype{pre}{\ensuremath{\mathsf{nat}}}$.
\end{rul}
\begin{proof}
It suffices to show $\ensuremath{\mathsf{Coh}}(\vper{\ensuremath{\mathsf{nat}}})$. We have $\ensuremath{\mathsf{Tm}}(\vper{\ensuremath{\mathsf{nat}}})(\ensuremath{\mathsf{z}},\ensuremath{\mathsf{z}})$ and
$\ensuremath{\mathsf{Tm}}(\vper{\ensuremath{\mathsf{nat}}})(\suc{M},\suc{M'})$ when $\ensuremath{\mathsf{Tm}}(\vper{\ensuremath{\mathsf{nat}}})(M,M')$ by
$\sisval{\ensuremath{\mathsf{z}}}$, $\sisval{\suc{M}}$, and
$\ensuremath{\mathsf{Tm}}(\td{\vper{\ensuremath{\mathsf{nat}}}}{\psi})(\td{M}{\psi},\td{M'}{\psi})$ for all $\tds{\Psi'}{\psi}{\Psi}$.
\end{proof}
\begin{rul}[Introduction]
$\coftype{\ensuremath{\mathsf{z}}}{\ensuremath{\mathsf{nat}}}$ and if $\ceqtm{M}{M'}{\ensuremath{\mathsf{nat}}}$ then
$\ceqtm{\suc{M}}{\suc{M'}}{\ensuremath{\mathsf{nat}}}$.
\end{rul}
\begin{proof}
Immediate by $\ensuremath{\mathsf{Coh}}(\vper{\ensuremath{\mathsf{nat}}})$.
\end{proof}
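For example, iterating these rules yields the numerals:
$\coftype{\ensuremath{\mathsf{z}}}{\ensuremath{\mathsf{nat}}}$, $\coftype{\suc{\ensuremath{\mathsf{z}}}}{\ensuremath{\mathsf{nat}}}$,
$\coftype{\suc{\suc{\ensuremath{\mathsf{z}}}}}{\ensuremath{\mathsf{nat}}}$, and so on.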
\begin{rul}[Elimination]\label{rul:nat-elim}
If $\wftype{pre}{\oft{n}{\ensuremath{\mathsf{nat}}}}{A}$,
$\ceqtm{M}{M'}{\ensuremath{\mathsf{nat}}}$,
$\ceqtm{Z}{Z'}{\subst{A}{\ensuremath{\mathsf{z}}}{n}}$, and
$\eqtm{\oft{n}{\ensuremath{\mathsf{nat}}},\oft{a}{A}}{S}{S'}{\subst{A}{\suc{n}}{n}}$, then
$\ceqtm{\natrec{M}{Z}{n.a.S}}{\natrec{M'}{Z'}{n.a.S'}}{\subst{A}{M}{n}}$.
\end{rul}
\begin{proof}
We induct over the definition of $\vper{\ensuremath{\mathsf{nat}}}$. The equality relation of $\ensuremath{\mathsf{nat}}$,
$\ensuremath{\mathsf{Tm}}(\vper{\ensuremath{\mathsf{nat}}})$, is the lifting of the least pre-fixed point of an
order-preserving operator $N$ on context-indexed relations over values.
Therefore, we prove (1) the elimination rule lifts from values to elements; (2)
the elimination rule holds for values; and thus (3) the elimination rule holds
for elements.
Define $\Phi_\Psi(M_0,M_0')$ to hold
when $\vper{\ensuremath{\mathsf{nat}}}_\Psi(M_0,M_0')$ and for all
$\wftype{pre}{\oft{n}{\ensuremath{\mathsf{nat}}}}{A}$,
$\ceqtm{Z}{Z'}{\subst{A}{\ensuremath{\mathsf{z}}}{n}}$, and
$\eqtm{\oft{n}{\ensuremath{\mathsf{nat}}},\oft{a}{A}}{S}{S'}{\subst{A}{\suc{n}}{n}}$, we have
$\ceqtm{\natrec{M_0}{Z}{n.a.S}}{\natrec{M_0'}{Z'}{n.a.S'}}{\subst{A}{M_0}{n}}$.
\begin{enumerate}
\item If $\ensuremath{\mathsf{Tm}}(\Phi(\Psi))(M,M')$ then the elimination rule holds for $M,M'$.
By definition, $\Phi\subseteq\vper{\ensuremath{\mathsf{nat}}}$, so because $\ensuremath{\mathsf{Tm}}$ is order-preserving,
$\ensuremath{\mathsf{Tm}}(\vper{\ensuremath{\mathsf{nat}}}(\Psi))(M,M')$. Apply coherent expansion to
$\natrec{M}{Z}{n.a.S}$ at $\cwftype{pre}{\subst{A}{M}{n}}$ with
$\{\natrec{M_\psi}{\td{Z}{\psi}}{n.a.\td{S}{\psi}}\mid
\td{M}{\psi}\ensuremath{\Downarrow} M_\psi\}^{\Psi'}_\psi$.
Then $\coftype[\Psi']{\natrec{M_\psi}{\td{Z}{\psi}}{n.a.\td{S}{\psi}}}%
{\subst{\td{A}{\psi}}{M_\psi}{n}}$ for all $\tds{\Psi'}{\psi}{\Psi}$ because
$\lift{\Phi}_\Psi(M,M')$ by $\ensuremath{\mathsf{Tm}}(\Phi(\Psi))(M,M')$. We must show
\[
\ceqtm[\Psi']{\natrec{M_\psi}{\td{Z}{\psi}}{n.a.\td{S}{\psi}}}%
{\natrec{\td{(M_{\id})}{\psi}}{\td{Z}{\psi}}{n.a.\td{S}{\psi}}}%
{\subst{\td{A}{\psi}}{M_\psi}{n}}
\]
but by \cref{lem:coftype-ceqtm} and
$\ceqtm[\Psi']{\td{(M_{\id})}{\psi}}{M_\psi}{\ensuremath{\mathsf{nat}}}$ it suffices to show these
$\natrec$ are related by $\lift{\vper{\subst{\td{A}{\psi}}{M_\psi}{n}}}$, which
follows from $\lift{\Phi}_{\Psi'}(\td{(M_{\id})}{\psi},M_\psi)$.
\item If $\vper{\ensuremath{\mathsf{nat}}}_\Psi(M_0,M_0')$ then $\Phi_\Psi(M_0,M_0')$.
We prove that $N(\Phi)\subseteq\Phi$; then $\Phi$ is a pre-fixed point of $N$,
and $\vper{\ensuremath{\mathsf{nat}}}\subseteq\Phi$ because $\vper{\ensuremath{\mathsf{nat}}}$ is the least pre-fixed
point of $N$. Suppose $N(\Phi)_\Psi(M_0,M_0')$. There are two cases:
\begin{enumerate}
\item $M_0=M_0'=\ensuremath{\mathsf{z}}$.
Show $\ceqtm{\natrec{\ensuremath{\mathsf{z}}}{Z}{n.a.S}}{\natrec{\ensuremath{\mathsf{z}}}{Z'}{n.a.S'}}{\subst{A}{\ensuremath{\mathsf{z}}}{n}}$,
which is immediate by $\ceqtm{Z}{Z'}{\subst{A}{\ensuremath{\mathsf{z}}}{n}}$ and \cref{lem:expansion}
on both sides.
\item $M_0=\suc{M}$, $M_0'=\suc{M'}$, and $\ensuremath{\mathsf{Tm}}(\Phi(\Psi))(M,M')$.
Show $\ceqtm{\natrec{\suc{M}}{Z}{n.a.S}}{\natrec{\suc{M'}}{Z'}{n.a.S'}}%
{\subst{A}{\suc{M}}{n}}$. By \cref{lem:expansion} on both sides, it suffices to
show
\[
\ceqtm{\subst{\subst{S}{M}{n}}{\natrec{M}{Z}{n.a.S}}{a}}%
{\subst{\subst{S'}{M'}{n}}{\natrec{M'}{Z'}{n.a.S'}}{a}}%
{\subst{A}{\suc{M}}{n}}.
\]
We have $\ceqtm{M}{M'}{\ensuremath{\mathsf{nat}}}$ and
$\ceqtm{\natrec{M}{Z}{n.a.S}}{\natrec{M'}{Z'}{n.a.S'}}{\subst{A}{M}{n}}$ by
$\ensuremath{\mathsf{Tm}}(\Phi(\Psi))(M,M')$, so the result follows by
$\eqtm{\oft{n}{\ensuremath{\mathsf{nat}}},\oft{a}{A}}{S}{S'}{\subst{A}{\suc{n}}{n}}$.
\end{enumerate}
\item Assume $\ensuremath{\mathsf{Tm}}(\vper{\ensuremath{\mathsf{nat}}}(\Psi))(M,M')$; $\ensuremath{\mathsf{Tm}}$ is order-preserving and
$\vper{\ensuremath{\mathsf{nat}}}\subseteq\Phi$, so $\ensuremath{\mathsf{Tm}}(\Phi(\Psi))(M,M')$. Thus the elimination
rule holds for $M,M'$, completing the proof.
\qedhere
\end{enumerate}
\end{proof}
\begin{rul}[Computation]
~\begin{enumerate}
\item If $\coftype{Z}{A}$ then $\ceqtm{\natrec{\ensuremath{\mathsf{z}}}{Z}{n.a.S}}{Z}{A}$.
\item If $\wftype{pre}{\oft{n}{\ensuremath{\mathsf{nat}}}}{A}$,
$\coftype{M}{\ensuremath{\mathsf{nat}}}$,
$\coftype{Z}{\subst{A}{\ensuremath{\mathsf{z}}}{n}}$, and
$\oftype{\oft{n}{\ensuremath{\mathsf{nat}}},\oft{a}{A}}{S}{\subst{A}{\suc{n}}{n}}$, then
$\ceqtm{\natrec{\suc{M}}{Z}{n.a.S}}%
{\subst{\subst{S}{M}{n}}{\natrec{M}{Z}{n.a.S}}{a}}%
{\subst{A}{\suc{M}}{n}}$.
\end{enumerate}
\end{rul}
\begin{proof}
Part (1) is immediate by \cref{lem:expansion}. For part (2), we have
$\coftype{\natrec{M}{Z}{n.a.S}}{\subst{A}{M}{n}}$ and thus
$\coftype{\subst{\subst{S}{M}{n}}{\natrec{M}{Z}{n.a.S}}{a}}{\subst{A}{\suc{M}}{n}}$
by \cref{rul:nat-elim}, so the result again follows by \cref{lem:expansion}.
\end{proof}
\begin{rul}[Kan type formation]
$\cwftype{Kan}{\ensuremath{\mathsf{nat}}}$.
\end{rul}
\begin{proof}
Identical to \cref{rul:bool-form-kan}.
\end{proof}
\section{Types}
\label{sec:types}
In \cref{sec:typesys} we defined two sequences of cubical type systems, and in
\cref{sec:meanings} we defined the judgments of higher type theory relative to
any cubical type system. In this section we will prove that $\pre\tau_\omega$
validates certain rules, summarized in part in \cref{sec:rules}. For
non-universe connectives, we in fact prove that the rules hold in every
$\tau^\kappa_n$ and $\tau^\kappa_\omega$.
\input{types/funpair}
\input{types/patheq}
\input{types/inductive}
\input{types/hits}
\input{types/ua}
\input{types/fcom}
\input{types/universes}
\section{Cubical type systems}
\label{sec:typesys}
In this paper, we define the judgments of higher type theory relative to a
\emph{cubical type system}, a family of relations over values in the
previously-described programming language. In this section we describe how to
construct a particular cubical type system that will validate the rules given in
\cref{sec:rules}; this construction is based on similar constructions outlined
by \citet{allen1987types} and \citet{harper1992typesys}.
\begin{definition}
A \emph{candidate cubical type system} is a relation $\tau(\Psi,A_0,B_0,\phi)$
over $\wfval{A_0}$, $\wfval{B_0}$, and binary relations $\phi(M_0,N_0)$ over
$\wfval{M_0}$ and $\wfval{N_0}$.
\end{definition}
For any relation $R$ with value arguments, we define $\lift{R}$ as its
evaluation lifting to terms. For example, $\lift{\tau}(\Psi,A,B,\phi)$ when
there exist $A_0$ and $B_0$ such that $A\ensuremath{\Downarrow} A_0$, $B\ensuremath{\Downarrow} B_0$, and
$\tau(\Psi,A_0,B_0,\phi)$.
\begin{definition}
A \emph{$\Psi$-relation} is a family of binary relations $\alpha_\psi(M,N)$
indexed by substitutions $\tds{\Psi'}{\psi}{\Psi}$, relating $\wftm[\Psi']{M}$ and
$\wftm[\Psi']{N}$. (We will write $\alpha(M,N)$ in place of
$\alpha_{\id}(M,N)$.) We are often interested in \emph{$\Psi$-relations over
values}, which relate only values. If a $\Psi$-relation depends only on the
choice of $\Psi'$ and not $\psi$, we instead call it \emph{context-indexed} and
write $\alpha_{\Psi'}(M,N)$.
\end{definition}
We can precompose any $\Psi$-relation $\alpha$ by a dimension substitution
$\tds{\Psi'}{\psi}{\Psi}$ to yield a $\Psi'$-relation $(\td{\alpha}{\psi})_{\psi'}(M,N) =
\alpha_{\psi\psi'}(M,N)$. Context-indexed relations are indeed families of
binary relations indexed by contexts $\Psi'$, because the choice of $\Psi$ and
$\psi$ is irrelevant---any two contexts $\Psi,\Psi'$ have at least one dimension
substitution between them. We write $R(\Psi')$ for the context-indexed relation
$R$ regarded as a $\Psi'$-relation.
\begin{definition}\label{def:ptyrel}
For any candidate cubical type system $\tau$, the relation
$\ensuremath{\mathsf{PTy}}(\tau)(\Psi,A,B,\alpha)$ over $\wftm{A}$, $\wftm{B}$, and a $\Psi$-relation
over values $\alpha$ holds if for all $\tds{\Psi'}{\psi}{\Psi}$ we have
$\lift{\tau}(\Psi',\td{A}{\psi},\td{B}{\psi},\alpha_\psi)$, and for all
$\tds{\Psi_1}{\psi_1}{\Psi}$ and $\tds{\Psi_2}{\psi_2}{\Psi_1}$, we have
$\td{A}{\psi_1} \ensuremath{\Downarrow} A_1$, $\td{B}{\psi_1} \ensuremath{\Downarrow} B_1$,
$\lift{\tau}(\Psi_2,\td{A_1}{\psi_2},\td{A}{\psi_1\psi_2},\phi)$,
$\lift{\tau}(\Psi_2,\td{A}{\psi_1\psi_2},\td{A_1}{\psi_2},\phi)$,
$\lift{\tau}(\Psi_2,\td{B_1}{\psi_2},\td{B}{\psi_1\psi_2},\phi)$,
$\lift{\tau}(\Psi_2,\td{B}{\psi_1\psi_2},\td{B_1}{\psi_2},\phi)$, and
$\lift{\tau}(\Psi_2,\td{A_1}{\psi_2},\td{B_1}{\psi_2},\phi)$.
\end{definition}
\begin{definition}\label{def:tmrel}
For any $\Psi$-relation on values $\alpha$, the relation $\ensuremath{\mathsf{Tm}}(\alpha)(M,N)$ over
$\wftm{M}$ and $\wftm{N}$ holds if for all $\tds{\Psi_1}{\psi_1}{\Psi}$ and
$\tds{\Psi_2}{\psi_2}{\Psi_1}$, we have $\td{M}{\psi_1} \ensuremath{\Downarrow} M_1$,
$\td{N}{\psi_1} \ensuremath{\Downarrow} N_1$,
$\lift{\alpha_{\psi_1\psi_2}}(\td{M_1}{\psi_2},\td{M}{\psi_1\psi_2})$,
$\lift{\alpha_{\psi_1\psi_2}}(\td{M}{\psi_1\psi_2},\td{M_1}{\psi_2})$,
$\lift{\alpha_{\psi_1\psi_2}}(\td{N_1}{\psi_2},\td{N}{\psi_1\psi_2})$,
$\lift{\alpha_{\psi_1\psi_2}}(\td{N}{\psi_1\psi_2},\td{N_1}{\psi_2})$, and
$\lift{\alpha_{\psi_1\psi_2}}(\td{M_1}{\psi_2},\td{N_1}{\psi_2})$.
\end{definition}
\begin{definition}
A $\Psi$-relation on values $\alpha$ is \emph{value-coherent}, or
$\ensuremath{\mathsf{Coh}}(\alpha)$, when for all $\tds{\Psi'}{\psi}{\Psi}$, if $\alpha_\psi(M_0,N_0)$ then
$\ensuremath{\mathsf{Tm}}(\td{\alpha}{\psi})(M_0,N_0)$.
\end{definition}
These relations are closed under dimension substitution by construction---for
any $\tds{\Psi'}{\psi}{\Psi}$, if $\ensuremath{\mathsf{PTy}}(\tau)(\Psi,A,B,\alpha)$ then
$\ensuremath{\mathsf{PTy}}(\tau)(\Psi',\td{A}{\psi},\td{B}{\psi},\td{\alpha}{\psi})$, if
$\ensuremath{\mathsf{Tm}}(\alpha)(M,N)$ then $\ensuremath{\mathsf{Tm}}(\td{\alpha}{\psi})(\td{M}{\psi},\td{N}{\psi})$, and
if $\ensuremath{\mathsf{Coh}}(\alpha)$ then $\ensuremath{\mathsf{Coh}}(\td{\alpha}{\psi})$.
\subsection{Fixed points}
$\Psi$-relations (and context-indexed relations) over values form a complete
lattice when ordered by inclusion. By the Knaster-Tarski fixed point theorem,
any order-preserving operator $F(x)$ on a complete lattice has a least fixed
point $\mu x.F(x)$ that is also its least pre-fixed point
\citep[2.35]{daveypriestleylattices}.
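As a toy illustration (ours, not part of the formal development), on a finite
powerset lattice the least fixed point of an order-preserving operator can be
computed in Python by iterating from the bottom element; a truncated analogue of
the operator defining $\mathbb{N}$ below serves as an example.
\begin{verbatim}
# Least fixed point of a monotone operator F on a finite powerset
# lattice, computed by iteration from the bottom element (a sketch).
def lfp(F, bottom=frozenset()):
    x = bottom
    while True:
        y = F(x)
        if y == x:       # a fixed point; monotonicity and induction
            return x     # show it lies below every pre-fixed point
        x = y

# A finite truncation of mu R. ({z} u {suc(n) | n in R}):
nats = lfp(lambda R: frozenset({0} | {n + 1 for n in R if n < 9}))
assert nats == frozenset(range(10))
\end{verbatim}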
We define the canonical element equality relations of inductive
types---$\mathbb{N}$ for natural numbers, $\mathbb{B}$ for weak booleans, and
$\mathbb{C}$ for the circle---as context-indexed relations (written here as
three-place relations) that are least fixed points of order-preserving
operators:
\begin{align*}
\mathbb{N} &= \mu R.
(\{(\Psi,\ensuremath{\mathsf{z}},\ensuremath{\mathsf{z}})\}\cup \{(\Psi,\suc{M},\suc{M'}) \mid \ensuremath{\mathsf{Tm}}(R(\Psi))(M,M')\}) \\
\mathbb{B} &= \mu R.
(\{(\Psi,\ensuremath{\mathsf{true}},\ensuremath{\mathsf{true}}),(\Psi,\ensuremath{\mathsf{false}},\ensuremath{\mathsf{false}})\} \cup \textsc{FKan}(R)) \\
\mathbb{C} &= \mu R.
(\{(\Psi,\ensuremath{\mathsf{base}},\ensuremath{\mathsf{base}}),((\Psi,x),\lp{x},\lp{x})\} \cup \textsc{FKan}(R))
\end{align*}
where
{\def\hphantom{{}={}}{\hphantom{{}={} \{}}
\begin{align*}
\textsc{FKan}(R) &=
\{(\Psi,\Fcom*{r_i=r_i'},\Fcom{r}{r'}{M'}{\sys{r_i=r_i'}{y.N_i'}}) \mid \\&\hphantom{{}={}}
(r\neq r') \land
(\forall i. r_i \neq r_i') \land
(\exists i,j. (r_i = r_j) \land (r_i' = 0) \land (r_j' = 1)) \land
\ensuremath{\mathsf{Tm}}(R(\Psi))(M,M') \\&\hphantom{{}={}}
{}\land (\forall i,j,\tds{\Psi'}{\psi}{(\Psi,y)}.
((\td{r_i}{\psi} = \td{r_i'}{\psi}) \land
(\td{r_j}{\psi} = \td{r_j'}{\psi})) \implies
\ensuremath{\mathsf{Tm}}(R(\Psi'))(\td{N_i}{\psi},\td{N_j'}{\psi})) \\&\hphantom{{}={}}
{}\land (\forall i,\tds{\Psi'}{\psi}{\Psi}.
(\td{r_i}{\psi} = \td{r_i'}{\psi}) \implies
\ensuremath{\mathsf{Tm}}(R(\Psi'))(\td{\dsubst{N_i}{r}{y}}{\psi},\td{M}{\psi})) \}
\end{align*}}%
The operators $\ensuremath{\mathsf{Tm}}$ and $\textsc{FKan}$ are order-preserving because they
only use their argument relations in positive positions.
Similarly, candidate cubical type systems form a complete lattice, and we define
a sequence of candidate cubical type systems as least fixed points of
order-preserving operators, using the following auxiliary definitions for each
type former:
{\def\hphantom{{}={}}{\hphantom{{}={}\{}}
\allowdisplaybreaks
\begin{align*}
\textsc{Fun}(\tau) &= \{
(\Psi,\picl{a}{A}{B},\picl{a}{A'}{B'},\phi) \mid \\&\hphantom{{}={}}
\exists \alpha,\beta^{(-,-,-)}. \ensuremath{\mathsf{PTy}}(\tau)(\Psi,A,A',\alpha)
\land \ensuremath{\mathsf{Coh}}(\alpha) \\&\hphantom{{}={}}
{}\land (\forall\psi,M,M'. \ensuremath{\mathsf{Tm}}(\td{\alpha}{\psi})(M,M') \implies\\&\hphantom{{}={}}\qquad
\ensuremath{\mathsf{PTy}}(\tau)(\Psi',\subst{\td{B}{\psi}}{M}{a},
\subst{\td{B'}{\psi}}{M'}{a},\beta^{\psi,M,M'})
\land \ensuremath{\mathsf{Coh}}(\beta^{\psi,M,M'})) \\&\hphantom{{}={}}
{}\land (\phi = \{(\lam{a}{N},\lam{a}{N'}) \mid
\forall\psi,M,M'. \ensuremath{\mathsf{Tm}}(\td{\alpha}{\psi})(M,M') \implies \\&\hphantom{{}={}}\qquad
\ensuremath{\mathsf{Tm}}(\beta^{\psi,M,M'})(\subst{\td{N}{\psi}}{M}{a},\subst{\td{N'}{\psi}}{M'}{a})
\}) \} \\
\textsc{Pair}(\tau) &= \{
(\Psi,\sigmacl{a}{A}{B},\sigmacl{a}{A'}{B'},\phi) \mid \\&\hphantom{{}={}}
\exists \alpha,\beta^{(-,-,-)}. \ensuremath{\mathsf{PTy}}(\tau)(\Psi,A,A',\alpha)
\land \ensuremath{\mathsf{Coh}}(\alpha) \\&\hphantom{{}={}}
{}\land (\forall\psi,M,M'. \ensuremath{\mathsf{Tm}}(\td{\alpha}{\psi})(M,M') \implies\\&\hphantom{{}={}}\qquad
\ensuremath{\mathsf{PTy}}(\tau)(\Psi',\subst{\td{B}{\psi}}{M}{a},
\subst{\td{B'}{\psi}}{M'}{a},\beta^{\psi,M,M'})
\land \ensuremath{\mathsf{Coh}}(\beta^{\psi,M,M'})) \\&\hphantom{{}={}}
{}\land (\phi = \{(\pair{M}{N},\pair{M'}{N'}) \mid
\ensuremath{\mathsf{Tm}}(\alpha)(M,M') \land \ensuremath{\mathsf{Tm}}(\beta^{\id[\Psi],M,M'})(N,N')
\}) \} \\
\textsc{Path}(\tau) &= \{
(\Psi,\Path{x.A}{P_0}{P_1},\Path{x.A'}{P_0'}{P_1'},\phi) \mid \\&\hphantom{{}={}}
\exists \alpha. \ensuremath{\mathsf{PTy}}(\tau)((\Psi,x),A,A',\alpha) \land \ensuremath{\mathsf{Coh}}(\alpha)
\land (\forall\ensuremath{\varepsilon}. \ensuremath{\mathsf{Tm}}(\dsubst{\alpha}{\ensuremath{\varepsilon}}{x})(P_\ensuremath{\varepsilon},P_\ensuremath{\varepsilon}')) \\&\hphantom{{}={}}
{}\land (\phi = \{(\dlam{x}{M},\dlam{x}{M'}) \mid
\ensuremath{\mathsf{Tm}}(\alpha)(M,M') \land
(\forall\ensuremath{\varepsilon}. \ensuremath{\mathsf{Tm}}(\dsubst{\alpha}{\ensuremath{\varepsilon}}{x})(\dsubst{M}{\ensuremath{\varepsilon}}{x},P_\ensuremath{\varepsilon}))
\}) \} \\
\textsc{Eq}(\tau) &= \{
(\Psi,\Eq{A}{M}{N},\Eq{A'}{M'}{N'},\phi) \mid \\&\hphantom{{}={}}
\exists \alpha. \ensuremath{\mathsf{PTy}}(\tau)(\Psi,A,A',\alpha) \land \ensuremath{\mathsf{Coh}}(\alpha)
\land \ensuremath{\mathsf{Tm}}(\alpha)(M,M') \land \ensuremath{\mathsf{Tm}}(\alpha)(N,N') \\&\hphantom{{}={}}
{}\land (\phi = \{(\ensuremath{\star},\ensuremath{\star}) \mid \ensuremath{\mathsf{Tm}}(\alpha)(M,N) \}) \} \\
\textsc{V}(\tau) &= \{
((\Psi,x),\ua{x}{A,B,E},\ua{x}{A',B',E'},\phi) \mid \\&\hphantom{{}={}}
\exists \beta,\alpha^{(-)},\eta^{(-)}.
\ensuremath{\mathsf{PTy}}(\tau)((\Psi,x),B,B',\beta) \land \ensuremath{\mathsf{Coh}}(\beta) \\&\hphantom{{}={}}
{}\land (\forall\psi.(\td{x}{\psi} = 0) \implies
\ensuremath{\mathsf{PTy}}(\tau)(\Psi',\td{A}{\psi},\td{A'}{\psi},\alpha^\psi)
\land \ensuremath{\mathsf{Coh}}(\alpha^\psi) \\&\hphantom{{}={}}\qquad
{}\land \ensuremath{\mathsf{PTy}}(\tau)(\Psi',\Equiv{\td{A}{\psi}}{\td{B}{\psi}},
\Equiv{\td{A'}{\psi}}{\td{B'}{\psi}},\eta^\psi)
\land \ensuremath{\mathsf{Tm}}(\eta^\psi)(\td{E}{\psi},\td{E'}{\psi})) \\&\hphantom{{}={}}
{}\land (\phi = \{(\uain{x}{M,N},\uain{x}{M',N'}) \mid
\ensuremath{\mathsf{Tm}}(\beta)(N,N') \land (\forall\psi. (\td{x}{\psi} = 0) \implies \\&\hphantom{{}={}}\qquad
\ensuremath{\mathsf{Tm}}(\alpha^\psi)(\td{M}{\psi},\td{M'}{\psi}) \land
\ensuremath{\mathsf{Tm}}(\td{\beta}{\psi})(\app{\fst{\td{E}{\psi}}}{\td{M}{\psi}},\td{N}{\psi}))\}) \} \\
\textsc{Fcom}(\tau) &= \{
(\Psi,\Fcom{r}{r'}{A}{\sys{r_i=r_i'}{y.B_i}},
\Fcom{r}{r'}{A'}{\sys{r_i=r_i'}{y.B_i'}},\phi) \mid \\&\hphantom{{}={}}
\exists \alpha,\beta^{(-,-,-)}.
(r\neq r') \land
(\forall i. r_i \neq r_i') \land
(\exists i,j. (r_i = r_j) \land (r_i' = 0) \land (r_j' = 1)) \\&\hphantom{{}={}}
{}\land \ensuremath{\mathsf{PTy}}(\tau)(\Psi,A,A',\alpha) \land \ensuremath{\mathsf{Coh}}(\alpha) \\&\hphantom{{}={}}
{}\land (\forall i,j,\tds{\Psi'}{\psi}{(\Psi,y)}.
((\td{r_i}{\psi} = \td{r_i'}{\psi}) \land
(\td{r_j}{\psi} = \td{r_j'}{\psi})) \implies \\&\hphantom{{}={}}\qquad
\ensuremath{\mathsf{PTy}}(\tau)(\Psi',\td{B_i}{\psi},\td{B_j'}{\psi},\beta^{\psi,i,j})
\land \ensuremath{\mathsf{Coh}}(\beta^{\psi,i,j})) \\&\hphantom{{}={}}
{}\land (\forall i,\psi.
(\td{r_i}{\psi} = \td{r_i'}{\psi}) \implies
\ensuremath{\mathsf{PTy}}(\tau)(\Psi',\td{\dsubst{B_i}{r}{y}}{\psi},\td{A}{\psi},\_)) \\&\hphantom{{}={}}
{}\land (\phi = \{(\Kbox{r}{r'}{M}{\sys{r_i=r_i'}{N_i}},
\Kbox{r}{r'}{M'}{\sys{r_i=r_i'}{N_i'}}) \mid \ensuremath{\mathsf{Tm}}(\alpha)(M,M') \\&\hphantom{{}={}}\qquad
{}\land (\forall i,j,\psi.
((\td{r_i}{\psi} = \td{r_i'}{\psi}) \land
(\td{r_j}{\psi} = \td{r_j'}{\psi})) \implies
\ensuremath{\mathsf{Tm}}(\dsubst{\beta^{\psi,i,j}}{\td{r'}{\psi}}{y})
(\td{N_i}{\psi},\td{N_j'}{\psi})) \\&\hphantom{{}={}}\qquad
{}\land (\forall i,\psi.
(\td{r_i}{\psi} = \td{r_i'}{\psi}) \implies
\ensuremath{\mathsf{Tm}}(\td{\alpha}{\psi})(\td{M}{\psi},
\Coe{y.\td{B_i}{\psi}}{\td{r'}{\psi}}{\td{r}{\psi}}{\td{N_i}{\psi}})) \}) \} \\
\textsc{Void} &= \{
(\Psi,\ensuremath{\mathsf{void}},\ensuremath{\mathsf{void}}, \{\}) \} \\
\textsc{Nat} &= \{
(\Psi,\ensuremath{\mathsf{nat}},\ensuremath{\mathsf{nat}},\mathbb{N}_\Psi) \} \\
\textsc{Bool} &= \{
(\Psi,\ensuremath{\mathsf{bool}},\ensuremath{\mathsf{bool}},\{(\ensuremath{\mathsf{true}},\ensuremath{\mathsf{true}}),(\ensuremath{\mathsf{false}},\ensuremath{\mathsf{false}})\}) \} \\
\textsc{WB} &= \{
(\Psi,\ensuremath{\mathsf{wbool}},\ensuremath{\mathsf{wbool}},\mathbb{B}_\Psi) \} \\
\textsc{Circ} &= \{
(\Psi,\ensuremath{\mathbb{S}^1},\ensuremath{\mathbb{S}^1},\mathbb{C}_\Psi) \} \\
\textsc{UPre}(\nu) &= \{
(\Psi,\Upre[j],\Upre[j],\phi) \mid \nu(\Psi,\Upre[j],\Upre[j],\phi) \} \\
\textsc{UKan}(\nu) &= \{
(\Psi,\UKan[j],\UKan[j],\phi) \mid \nu(\Psi,\UKan[j],\UKan[j],\phi) \}
\end{align*}}
In the $\textsc{V}$ case, and for the remainder of this paper, we use the
abbreviations
\begin{align*}
\isContr{C} &:= \prd{C}{(\picl{c}{C}{\picl{c'}{C}{\Path{\_.C}{c}{c'}}})} \\
\Equiv{A}{B} &:=
\sigmacl{f}{\arr{A}{B}}{(\picl{b}{B}{\isContr{\sigmacl{a}{A}{\Path{\_.B}{\app{f}{a}}{b}}}})}.
\end{align*}
For candidate cubical type systems $\nu,\sigma,\tau$, define
\begin{align*}
P(\nu,\sigma,\tau) ={}&
\textsc{Fun}(\tau) \cup
\textsc{Pair}(\tau) \cup
\textsc{Path}(\tau) \cup
\textsc{Eq}(\tau) \cup
\textsc{V}(\tau) \cup
\textsc{Fcom}(\sigma) \\& {}\cup
\textsc{Void} \cup
\textsc{Nat} \cup
\textsc{Bool} \cup
\textsc{WB} \cup
\textsc{Circ} \cup
\textsc{UPre}(\nu) \cup
\textsc{UKan}(\nu) \\
K(\nu,\sigma) ={}&
\textsc{Fun}(\sigma) \cup
\textsc{Pair}(\sigma) \cup
\textsc{Path}(\sigma) \cup
\textsc{V}(\sigma) \cup
\textsc{Fcom}(\sigma) \\& {}\cup
\textsc{Void} \cup
\textsc{Nat} \cup
\textsc{Bool} \cup
\textsc{WB} \cup
\textsc{Circ} \cup
\textsc{UKan}(\nu)
\end{align*}
The operator $P$ includes $\textsc{Eq}$ and $\textsc{UPre}$ while $K$ does not;
furthermore, in $P$ only $\textsc{Fcom}$ varies in $\sigma$. The operators $P$
and $K$ are order-preserving in all arguments because $\ensuremath{\mathsf{PTy}}$ and each type
operator only use their argument in strictly positive positions.
\begin{lemma}\label{lem:lfp-misc}
In any complete lattice,
\begin{enumerate}
\item If $F(x)$ and $G(x)$ are order-preserving and $F(x)\subseteq G(x)$ for all
$x$, then $\mu x.F(x) \subseteq \mu x.G(x)$.
\item If $F(x,y)$ and $G(x,y)$ are order-preserving and $F(x,y)\subseteq G(x,y)$
whenever $x\subseteq y$, then $\mu_f\subseteq\mu_g$ where $(\mu_f,\mu_g) = \mu
(x,y). (F(x,y),G(x,y))$.
\end{enumerate}
\end{lemma}
\begin{proof}
For part (1), $\mu x.G(x)$ is a pre-fixed point of $F$ because $F(\mu
x.G(x))\subseteq G(\mu x.G(x)) = \mu x.G(x)$. But $\mu x.F(x)$ is the least
such, so $\mu x.F(x) \subseteq \mu x.G(x)$.
For part (2), let $\mu_{\cap} = \mu_f \cap \mu_g$; note that $\mu_{\cap}\subseteq\mu_g$.
We claim $(\mu_{\cap}, \mu_g)$ is a pre-fixed point of $(x,y)\mapsto (F(x,y), G(x,y))$:
by the assumption and $(\mu_f,\mu_g)$ being a fixed point,
$F(\mu_{\cap}, \mu_g) \subseteq F(\mu_f, \mu_g) = \mu_f$ and
$F(\mu_{\cap}, \mu_g) \subseteq G(\mu_{\cap}, \mu_g) \subseteq G(\mu_f,\mu_g) = \mu_g$,
so $F(\mu_{\cap}, \mu_g) \subseteq \mu_f \cap \mu_g = \mu_{\cap}$; the middle chain
also gives $G(\mu_{\cap}, \mu_g) \subseteq \mu_g$. Since $(\mu_f,\mu_g)$ is the least
pre-fixed point, $(\mu_f, \mu_g) \subseteq (\mu_{\cap}, \mu_g)$, and thus
$\mu_f \subseteq \mu_{\cap} \subseteq \mu_g$.
\end{proof}
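Continuing the toy sketch above (again ours), part (2) can be observed
concretely: a simultaneous least fixed point on pairs of finite sets can be
computed by iterating both components from the bottom element, and when
$F(x,y)\subseteq G(x,y)$ the first component stays below the second.
\begin{verbatim}
# Simultaneous least fixed point mu (x,y).(F(x,y), G(x,y)) on a
# finite powerset lattice, iterated from the bottom (a sketch).
def lfp2(F, G):
    x, y = frozenset(), frozenset()
    while (F(x, y), G(x, y)) != (x, y):
        x, y = F(x, y), G(x, y)
    return x, y

F = lambda x, y: frozenset({0} | {n + 2 for n in y if n < 8})
G = lambda x, y: frozenset({0, 1} | {n + 2 for n in y if n < 8})
mf, mg = lfp2(F, G)
assert mf <= mg      # subset order: mu_f is contained in mu_g
\end{verbatim}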
\begin{lemma}
Let $\pre\mu(\nu,\sigma) = \mu\tau. P(\nu,\sigma,\tau)$ and let $\Kan\mu(\nu) =
\mu\sigma. K(\nu,\sigma)$. Then $\pre\mu(\nu,\sigma)$ and $\Kan\mu(\nu)$ are
order-preserving and $\Kan\mu(\nu)\subseteq\pre\mu(\nu,\Kan\mu(\nu))$ for all
$\nu$.
\end{lemma}
\begin{proof}
The first claim is immediate by part (1) of \cref{lem:lfp-misc}, because
whenever $\nu\subseteq\nu'$ and $\sigma\subseteq\sigma'$,
$P(\nu,\sigma,-)\subseteq P(\nu',\sigma',-)$ and $K(\nu,-)\subseteq K(\nu',-)$.
For the second claim, a theorem of \citet{Bekic1984} on simultaneous fixed points
implies $(\Kan\mu(\nu),\pre\mu(\nu,\Kan\mu(\nu))) = \mu (\sigma,\tau).
(K(\nu,\sigma),P(\nu,\sigma,\tau))$. Because each type operator is
order-preserving, $K(\nu,\sigma)\subseteq P(\nu,\sigma,\tau)$ whenever
$\sigma\subseteq\tau$. The result follows by part (2) of \cref{lem:lfp-misc}.
\end{proof}
We mutually define three sequences of candidate cubical type systems:
$\nu_{i+1}$ containing $i$ universes,
$\pre{\tau_{i+1}}$ containing the pretypes in a system with $i$ universes, and
$\Kan{\tau_{i+1}}$ containing the Kan types in a system with $i$ universes:
{\def\hphantom{{}={}}{\hphantom{{}={}}}
\begin{align*}
\nu_0 &= \emptyset \\
\nu_n &= \{ (\Psi,\Ux[j],\Ux[j],\phi) \mid (j<n) \land
(\phi = \{(A_0,B_0) \mid \tau^\kappa_j(\Psi,A_0,B_0,\_)\}) \} \\
\pre{\tau_n} &= \pre\mu(\nu_n,\Kan\mu(\nu_n)) \\
\Kan{\tau_n} &= \Kan\mu(\nu_n) \\
\nu_\omega &= \{ (\Psi,\Ux[j],\Ux[j],\phi) \mid
\phi = \{(A_0,B_0) \mid \tau^\kappa_j(\Psi,A_0,B_0,\_)\} \} \\
\pre{\tau_\omega} &= \pre\mu(\nu_\omega,\Kan\mu(\nu_\omega)) \\
\Kan{\tau_\omega} &= \Kan\mu(\nu_\omega) \\
\end{align*}}
Observe that $\nu_n\subseteq\nu_{n+i}$, $\nu_n\subseteq\nu_\omega$,
$\tau_n^\kappa\subseteq\tau_{n+i}^\kappa$, $\tau_n^\kappa\subseteq\tau_\omega^\kappa$,
$\Kan{\tau_n}\subseteq\pre{\tau_n}$, and $\Kan{\tau_\omega}\subseteq\pre{\tau_\omega}$.
\subsection{Cubical type systems}
In the remainder of this paper, we consider only candidate cubical type systems
satisfying a number of additional conditions:
\begin{definition}
A \emph{cubical type system} is a candidate cubical type system $\tau$
satisfying:
\begin{description}[align=right,labelwidth=3.5cm]
\item[Functionality.] If $\tau(\Psi,A_0,B_0,\phi)$ and $\tau(\Psi,A_0,B_0,\phi')$
then $\phi=\phi'$.
\item[PER-valuation.] If $\tau(\Psi,A_0,B_0,\phi)$ then $\phi$ is symmetric and
transitive.
\item[Symmetry.] If $\tau(\Psi,A_0,B_0,\phi)$ then $\tau(\Psi,B_0,A_0,\phi)$.
\item[Transitivity.] If $\tau(\Psi,A_0,B_0,\phi)$ and $\tau(\Psi,B_0,C_0,\phi)$
then $\tau(\Psi,A_0,C_0,\phi)$.
\item[Value-coherence.] If $\tau(\Psi,A_0,B_0,\phi)$ then
$\ensuremath{\mathsf{PTy}}(\tau)(\Psi,A_0,B_0,\alpha)$ for some $\alpha$.
\end{description}
\end{definition}
If $\tau$ is a cubical type system, then $\ensuremath{\mathsf{PTy}}(\tau)$ is functional, symmetric,
transitive, and $\Psi$-PER-valued in the above senses. If $\alpha$ is a
$\Psi$-PER, then every $\td{\alpha}{\psi}$ is a $\Psi'$-PER, and $\ensuremath{\mathsf{Tm}}(\alpha)$
is a PER.
\begin{lemma}\label{lem:cts-cts}
If $\nu,\sigma$ are cubical type systems, then $\Kan\mu(\nu)$ and
$\pre\mu(\nu,\sigma)$ are cubical type systems.
\end{lemma}
\begin{proof}
Because the images of the operators $\textsc{Fun}$, $\textsc{Pair}$, \dots{} are
pairwise disjoint (types formed by different operators have different head
constructors), we can check each operator individually in each case. We describe
the proof for $\pre\mu(\nu,\sigma)$; the proof for $\Kan\mu(\nu)$ is analogous.
\begin{enumerate}
\item \emph{Functionality.}
Define a candidate cubical type system $\Phi = \{(\Psi,A_0,B_0,\phi) \mid
\forall\phi'. \pre\mu(\nu,\sigma)(\Psi,A_0,B_0,\phi') \implies (\phi=\phi')\}$.
Let us show that $\Phi$ is a pre-fixed point of $P(\nu,\sigma,-)$ (that is,
$P(\nu,\sigma,\Phi)\subseteq\Phi$). Because $\pre\mu(\nu,\sigma)$ is the least
pre-fixed point, it will follow that $\pre\mu(\nu,\sigma)\subseteq\Phi$, and
that $\pre\mu(\nu,\sigma)$ is functional.
Assume that $\textsc{Fun}(\Phi)(\Psi,\picl{a}{A}{B},\picl{a}{A'}{B'},\phi)$.
Thus $\ensuremath{\mathsf{PTy}}(\Phi)(\Psi,A,A',\alpha)$, and in particular, for all $\tds{\Psi'}{\psi}{\Psi}$,
$\lift{\pre\mu(\nu,\sigma)}(\Psi',\td{A}{\psi},\td{A'}{\psi},\phi')$ implies
$\alpha_\psi = \phi'$, so $\alpha$ is unique in $\pre\mu(\nu,\sigma)$ when it
exists. Similarly, each $\beta^{(-,-,-)}$ is unique in $\pre\mu(\nu,\sigma)$
when it exists. The relation $\phi$ is determined uniquely by $\alpha$ and
$\beta^{(-,-,-)}$. Now let us show
$\Phi(\Psi,\picl{a}{A}{B},\picl{a}{A'}{B'},\phi)$, that is, assume
$\pre\mu(\nu,\sigma)(\Psi,\picl{a}{A}{B},\picl{a}{A'}{B'},\phi')$ and show
$\phi=\phi'$. It follows that $\ensuremath{\mathsf{PTy}}(\pre\mu(\nu,\sigma))(\Psi,A,A',\alpha')$ for
some $\alpha'$, and similarly for some family $\beta'$, but $\alpha=\alpha'$ and
each $\beta=\beta'$. Because $\phi'$ is defined using the same $\alpha$ and
$\beta^{(-,-,-)}$ as $\phi$, we conclude $\phi=\phi'$. Other cases are similar;
for $\textsc{Fcom},\textsc{UPre},\textsc{UKan}$ we use that $\nu,\sigma$ are
functional.
\item \emph{PER-valuation.}
Define $\Phi = \{(\Psi,A_0,B_0,\phi) \mid \text{$\phi$ is a PER}\}$, and show
that $\Phi$ is a pre-fixed point of $P(\nu,\sigma,-)$. It follows that
$\pre\mu(\nu,\sigma)$ is PER-valued, by $\pre\mu(\nu,\sigma)\subseteq\Phi$.
Assume that $\textsc{Fun}(\Phi)(\Psi,\picl{a}{A}{B},\picl{a}{A'}{B'},\phi)$.
Then $\ensuremath{\mathsf{PTy}}(\Phi)(\Psi,A,A',\alpha)$, and in particular, for all $\tds{\Psi'}{\psi}{\Psi}$,
$\lift{\Phi}(\Psi',\td{A}{\psi},\td{A'}{\psi},\alpha_\psi)$, so each
$\alpha_\psi$ is a PER. Similarly, each $\beta^{\psi,M,M'}_{\psi'}$ is a PER.
Now we must show $\Phi(\Psi,\picl{a}{A}{B},\picl{a}{A'}{B'},\phi)$. The relation
$\phi$ is a PER because $\ensuremath{\mathsf{Tm}}(\td{\alpha}{\psi})$ and $\ensuremath{\mathsf{Tm}}(\beta^{\psi,M,M'})$ are
PERs, which in turn holds because $\alpha_\psi$ and $\beta^{\psi,M,M'}_{\psi'}$
are PERs. Most cases
proceed in this fashion. For $\textsc{Nat}$, $\textsc{WB}$, and
$\textsc{Circ}$ we show that $\mathbb{N}$, $\mathbb{B}$, and $\mathbb{C}$ are
symmetric and transitive at each dimension (employing the same strategy as in
parts (3--4)); for $\textsc{Fcom},\textsc{UPre},\textsc{UKan}$ we use that
$\sigma,\nu$ are PER-valued.
\item \emph{Symmetry.}
Define $\Phi = \{(\Psi,A_0,B_0,\phi) \mid
\pre\mu(\nu,\sigma)(\Psi,B_0,A_0,\phi)\}$. Let us show that $\Phi$ is a pre-fixed point
of $P(\nu,\sigma,-)$. It will follow that $\pre\mu(\nu,\sigma)$ is symmetric, by
$\pre\mu(\nu,\sigma)\subseteq\Phi$.
Assume that $\textsc{Fun}(\Phi)(\Psi,\picl{a}{A}{B},\picl{a}{A'}{B'},\phi)$.
Then $\ensuremath{\mathsf{PTy}}(\Phi)(\Psi,A,A',\alpha)$ and $\ensuremath{\mathsf{Coh}}(\alpha)$, and thus
$\lift{\pre\mu(\nu,\sigma)}(\Psi',\td{A'}{\psi},\td{A}{\psi},\alpha_\psi)$,
$\td{A}{\psi_1} \ensuremath{\Downarrow} A_1$, $\td{A'}{\psi_1} \ensuremath{\Downarrow} A_1'$, and
$\lift{\pre\mu(\nu,\sigma)}(\Psi_2,-,-,\phi)$ relates
$(\td{A}{\psi_1\psi_2},\td{A_1}{\psi_2})$,
$(\td{A_1}{\psi_2},\td{A}{\psi_1\psi_2})$,
$(\td{A'}{\psi_1\psi_2},\td{A_1'}{\psi_2})$,
$(\td{A_1'}{\psi_2},\td{A'}{\psi_1\psi_2})$, and
$(\td{A_1'}{\psi_2},\td{A_1}{\psi_2})$.
Similar facts hold by virtue of $\ensuremath{\mathsf{PTy}}(\Phi)(\Psi',\subst{\td{B}{\psi}}{M}{a},
\subst{\td{B'}{\psi}}{M'}{a},\beta^{\psi,M,M'})$ and $\ensuremath{\mathsf{Coh}}(\beta^{\psi,M,M'})$.
We must show $\Phi(\Psi,\picl{a}{A}{B},\picl{a}{A'}{B'},\phi)$, that is,
$\pre\mu(\nu,\sigma)(\Psi,\picl{a}{A'}{B'},\picl{a}{A}{B},\phi)$.
This requires $\ensuremath{\mathsf{PTy}}(\pre\mu(\nu,\sigma))(\Psi,A',A,\alpha)$ and $\ensuremath{\mathsf{Coh}}(\alpha)$,
which follows from the above facts; and also
$\ensuremath{\mathsf{PTy}}(\pre\mu(\nu,\sigma))(\Psi,\subst{\td{B'}{\psi}}{M}{a},
\subst{\td{B}{\psi}}{M'}{a},\beta^{\psi,M,M'})$ and $\ensuremath{\mathsf{Coh}}(\beta^{\psi,M,M'})$
whenever $\ensuremath{\mathsf{Tm}}(\td{\alpha}{\psi})(M,M')$, which follows from the symmetry of
$\ensuremath{\mathsf{Tm}}(\td{\alpha}{\psi})$ (since each $\alpha_\psi$ is a PER, by (2)), and the
above facts. Other cases are similar; for $\textsc{Fcom}$ we use that $\sigma$
is symmetric.
\item \emph{Transitivity.}
Define $\Phi = \{(\Psi,A_0,B_0,\phi) \mid \forall C_0.
\pre\mu(\nu,\sigma)(\Psi,B_0,C_0,\phi)\implies
\pre\mu(\nu,\sigma)(\Psi,A_0,C_0,\phi)\}$. Let us show that $\Phi$ is a
pre-fixed point of $P(\nu,\sigma,-)$. It will follow that $\pre\mu(\nu,\sigma)$
is transitive, by $\pre\mu(\nu,\sigma)\subseteq\Phi$.
Assume that $\textsc{Fun}(\Phi)(\Psi,\picl{a}{A}{B},\picl{a}{A'}{B'},\phi)$.
Then $\ensuremath{\mathsf{PTy}}(\Phi)(\Psi,A,A',\alpha)$, and thus if
$\lift{\pre\mu(\nu,\sigma)}(\Psi',\td{A'}{\psi},C_0,\alpha_\psi)$ then
$\lift{\pre\mu(\nu,\sigma)}(\Psi',\td{A}{\psi},C_0,\alpha_\psi)$. Furthermore,
$\td{A}{\psi_1} \ensuremath{\Downarrow} A_1$, $\td{A'}{\psi_1} \ensuremath{\Downarrow} A_1'$, and for any $C_0$,
$\lift{\pre\mu(\nu,\sigma)}(\Psi_2,-,-,\phi)$ relates
$(\td{A}{\psi_1\psi_2},C_0)$ if and only if $(\td{A_1}{\psi_2},C_0)$;
$(\td{A'}{\psi_1\psi_2},C_0)$ if and only if $(\td{A_1'}{\psi_2},C_0)$;
and if $(\td{A_1'}{\psi_2},C_0)$ then $(\td{A_1}{\psi_2},C_0)$. Similar facts
hold by virtue of $\ensuremath{\mathsf{PTy}}(\Phi)(\Psi',\subst{\td{B}{\psi}}{M}{a},
\subst{\td{B'}{\psi}}{M'}{a},\beta^{\psi,M,M'})$.
Now we must show $\Phi(\Psi,\picl{a}{A}{B},\picl{a}{A'}{B'},\phi)$, that is, if
$\pre\mu(\nu,\sigma)(\Psi,\picl{a}{A'}{B'},C_0,\phi)$ then
$\pre\mu(\nu,\sigma)(\Psi,\picl{a}{A}{B},C_0,\phi)$. By inspecting $P$, we see this is
only possible if $C_0 = \picl{a}{A''}{B''}$, in which case
$\pre\mu(\nu,\sigma)(\Psi,\picl{a}{A'}{B'},\picl{a}{A''}{B''},\phi)$. Thus we have
$\ensuremath{\mathsf{PTy}}(\pre\mu(\nu,\sigma))(\Psi,A',A'',\alpha')$ and $\ensuremath{\mathsf{Coh}}(\alpha')$, so
$\lift{\pre\mu(\nu,\sigma)}(\Psi',\td{A'}{\psi},\td{A''}{\psi},\alpha'_\psi)$, and by
hypothesis, $\lift{\pre\mu(\nu,\sigma)}(\Psi',\td{A}{\psi},\td{A''}{\psi},\alpha_\psi)$
and $\ensuremath{\mathsf{Coh}}(\alpha)$. We already know $\td{A}{\psi_1} \ensuremath{\Downarrow} A_1$,
$\td{A''}{\psi_1} \ensuremath{\Downarrow} A_1''$,
and that $\lift{\pre\mu(\nu,\sigma)}(\Psi_2,-,-,\phi)$ relates
$(\td{A''}{\psi_1\psi_2},\td{A''_1}{\psi_2})$ and vice versa. By
$(\td{A'_1}{\psi_2},\td{A''_1}{\psi_2})$ and the above, we have
$(\td{A_1}{\psi_2},\td{A''_1}{\psi_2})$. Finally, by
$(\td{A'}{\psi_1\psi_2},\td{A'_1}{\psi_2})$ and transitivity we have
$(\td{A'_1}{\psi_2},\td{A'_1}{\psi_2})$, hence by transitivity and symmetry
$(\td{A'_1}{\psi_2},\td{A_1}{\psi_2})$, and again by transitivity
$(\td{A_1}{\psi_2},\td{A_1}{\psi_2})$; as needed,
$(\td{A_1}{\psi_2},\td{A}{\psi_1\psi_2})$ and vice versa follow by transitivity.
As before, $\ensuremath{\mathsf{PTy}}(\pre\mu(\nu,\sigma))(\Psi,\subst{\td{B}{\psi}}{M}{a},
\subst{\td{B''}{\psi}}{M'}{a},\beta^{\psi,M,M'})$ and $\ensuremath{\mathsf{Coh}}(\beta^{\psi,M,M'})$
when $\ensuremath{\mathsf{Tm}}(\td{\alpha}{\psi})(M,M')$ follows by transitivity of
$\ensuremath{\mathsf{Tm}}(\td{\alpha}{\psi})$ (since each $\alpha_\psi$ is a PER, by (2)). Other
cases are similar; for $\textsc{Fcom}$ we use that $\sigma$ is transitive.
\item \emph{Value-coherence.}
Define $\Phi = \{(\Psi,A_0,B_0,\phi) \mid
\ensuremath{\mathsf{PTy}}(\pre\mu(\nu,\sigma))(\Psi,A_0,B_0,\alpha)\text{ for some $\alpha$}\}$. Let us show that $\Phi$ is a
pre-fixed point of $P(\nu,\sigma,-)$. The property $P(\nu,\sigma,\Phi)\subseteq\Phi$ holds
trivially for base types $\textsc{Void}$, $\textsc{Nat}$\dots as well as
universes $\textsc{UPre}$ and $\textsc{UKan}$; we check $\textsc{Fun}$
($\textsc{Pair}$, $\textsc{Path}$, and $\textsc{Eq}$ are similar) and
$\textsc{V}$ ($\textsc{Fcom}$ is similar). It will follow that
$\pre\mu(\nu,\sigma)$ is value-coherent, by $\pre\mu(\nu,\sigma)\subseteq\Phi$.
Assume that $\textsc{Fun}(\Phi)(\Psi,\picl{a}{A}{B},\picl{a}{A'}{B'},\phi)$.
Then by $\ensuremath{\mathsf{PTy}}(\Phi)(\Psi,A,A',\alpha)$ and $\ensuremath{\mathsf{Coh}}(\alpha)$, we have
$\lift{\Phi}(\Psi',\td{A}{\psi},\td{A'}{\psi},\alpha_\psi)$,
$\td{A}{\psi_1}\ensuremath{\Downarrow} A_1$, $\td{A'}{\psi_1}\ensuremath{\Downarrow} A_1'$,
$\lift{\Phi}(\Psi_2,\td{A_1}{\psi_2},\td{A}{\psi_1\psi_2},\phi')$, and so forth.
Note that for values $A_0,B_0$, if $\ensuremath{\mathsf{PTy}}(\tau)(\Psi,A_0,B_0,\alpha)$ then
$\tau(\Psi,A_0,B_0,\alpha_{\id})$ by definition. Therefore
$\lift{\pre\mu(\nu,\sigma)}(\Psi',\td{A}{\psi},\td{A'}{\psi},\alpha_\psi)$, and
so forth. We get similar facts for each $\ensuremath{\mathsf{Tm}}(\td{\alpha}{\psi})(M,M')$ by
$\ensuremath{\mathsf{PTy}}(\Phi)(\Psi',\subst{\td{B}{\psi}}{M}{a},
\subst{\td{B'}{\psi}}{M'}{a},\beta^{\psi,M,M'})$ and $\ensuremath{\mathsf{Coh}}(\beta^{\psi,M,M'})$.
We must show $\Phi(\Psi,\picl{a}{A}{B},\picl{a}{A'}{B'},\phi)$, that
is, $\ensuremath{\mathsf{PTy}}(\pre\mu(\nu,\sigma))(\Psi,\picl{a}{A}{B},\picl{a}{A'}{B'},\gamma)$ for some $\gamma$. We
know $\sisval{\picl{a}{A}{B}}$, and by the above,
$\ensuremath{\mathsf{PTy}}(\pre\mu(\nu,\sigma))(\Psi,A,A',\alpha)$, $\ensuremath{\mathsf{Coh}}(\alpha)$, and when
$\ensuremath{\mathsf{Tm}}(\td{\alpha}{\psi})(M,M')$,
$\ensuremath{\mathsf{PTy}}(\pre\mu(\nu,\sigma))(\Psi',\subst{\td{B}{\psi}}{M}{a},
\subst{\td{B'}{\psi}}{M'}{a},\beta^{\psi,M,M'})$ and $\ensuremath{\mathsf{Coh}}(\beta^{\psi,M,M'})$.
The result holds because $\ensuremath{\mathsf{PTy}}$, $\ensuremath{\mathsf{Tm}}$, and $\ensuremath{\mathsf{Coh}}$ are closed under dimension
substitution.
The $\textsc{V}$ case is mostly similar, but not all instances of
$\ua{x}{A,B,E}$ have the same head constructor. Repeating the previous argument,
by $\textsc{V}(\Phi)(\Psi,\ua{x}{A,B,E},\ua{x}{A',B',E'})$ we have that
$\ensuremath{\mathsf{PTy}}(\pre\mu(\nu,\sigma))(\Psi,B,B',\beta)$ and for all $\psi$ with $\td{x}{\psi} =
0$, $\ensuremath{\mathsf{PTy}}(\pre\mu(\nu,\sigma))(\Psi',\td{A}{\psi},\td{A'}{\psi},\alpha^\psi)$. However,
in order to prove $\ensuremath{\mathsf{PTy}}(\pre\mu(\nu,\sigma))(\Psi,\ua{x}{A,B,E},\ua{x}{A',B',E'})$, we
must observe that when $\td{x}{\psi} = 0$,
$\ua{0}{\td{A}{\psi},\td{B}{\psi},\td{E}{\psi}}\ensuremath{\longmapsto}\td{A}{\psi}$; when
$\td{x}{\psi} = 1$,
$\ua{1}{\td{A}{\psi},\td{B}{\psi},\td{E}{\psi}}\ensuremath{\longmapsto}\td{B}{\psi}$; and for
every $\psi_1,\psi_2$ the appropriate relations hold in $\pre\mu(\nu,\sigma)$.
See \cref{rul:ua-form-pre} for the full proof, and \cref{lem:fcom-preform} for
the corresponding proof for $\textsc{Fcom}$.
\qedhere
\end{enumerate}
\end{proof}
\begin{theorem}
$\tau^\kappa_n$ and $\tau^\kappa_\omega$ are cubical type systems.
\end{theorem}
\begin{proof}\mbox{}
\paragraph{System $\tau^\kappa_n$.}
Use strong induction on $n$. Clearly $\nu_0$
is a cubical type system; by \cref{lem:cts-cts} so are $\Kan\tau_0$ and
thus $\pre\tau_0$. Suppose $\tau^\kappa_j$ are cubical type systems for $j<n$.
Then $\nu_n$ is a cubical type system: functionality, symmetry,
transitivity, and value-coherence are immediate; PER-valuation follows from the
previous $\tau^\kappa_j$ being cubical type systems. The induction step follows
by \cref{lem:cts-cts}.
\paragraph{System $\tau^\kappa_\omega$.}
Because each $\tau^\kappa_n$ is a cubical type system, so
is $\nu_\omega$ (as before), and so are $\tau^\kappa_\omega$.
\end{proof}
The cubical type systems employed by \citet{ah2016cubicaldep} are equivalent to
candidate cubical type systems satisfying conditions (1--4): define $A_0
\approx^\Psi B_0$ to hold when $\tau(\Psi,A_0,B_0,\phi)$, and $M_0
\approx^\Psi_{A_0} N_0$ when $\phi(M_0,N_0)$. Condition (5) is needed in the
construction of universes.
\subsection*{Acknowledgements}
We are greatly indebted to Steve Awodey, Marc Bezem, Evan Cavallo, Daniel
Gratzer, Simon Huber, Dan Licata, Ed Morehouse, Anders M\"ortberg, Jonathan
Sterling, and Todd Wilson for their contributions and advice.
This paper directly continues work previously described in
\citet{ahw2016cubical}, \citet{ah2016cubicaldep}, and \citet{ahw2017cubical},
whose primary antecedents are two-dimensional type theory \citep{lh2dtt}, the
\citet{bch} cubical model of type theory, and the cubical type theories of
\citet{cohen2016cubical} and \citet{licata2014cubical}.
The authors gratefully acknowledge the support of the Air Force Office of
Scientific Research through MURI grant FA9550-15-1-0053. Any opinions, findings
and conclusions or recommendations expressed in this material are those of the
authors and do not necessarily reflect the views of the AFOSR.
The second author would also like to thank the Isaac Newton Institute for
Mathematical Sciences for its support and hospitality during the program ``Big
Proof'' when part of work on this paper was undertaken. The program was supported by EPSRC
grant number EP/K032208/1.
\newpage
\input{opsem}
\newpage
\input{typesys}
\newpage
\input{meanings}
\newpage
\input{types}
\newpage
\input{rules}
\newpage
\input{future}
\newpage
\bibliographystyle{plainnat}
\section{Introduction}
Dual stable Grothendieck polynomials $g_{\l/\m}(x)$ were introduced in \cite{LP2007}. They are Hopf-dual to the stable Grothendieck polynomials, which represent some classes of the structure sheaves of Schubert varieties. The connection of stable and dual stable Grothendieck polynomials with the $K$-theory of the Grassmannian has been discussed in various papers including \cite{LS1982,FK1996,B2002}, and \cite{LP2007}. The paper \cite{LP2007} gives an explicit combinatorial rule for the coefficients of polynomials $g_{\lambda}(x)$ in the basis of Schur polynomials $s_{\mu}(x)$. We extend this result to the case of $g_{\l/\m}(x)$ for a skew shape ${\l/\m}$, and give a different rule (for straight shapes, it coincides with the rule of \cite{B2012}) that provides the same coefficients for straight shapes and extends the classical Littlewood-Richardson rule. We do this by constructing a crystal graph (see \cite{K1995}) on the set ${\mathcal{R}}({\l/\m})$ of all reverse plane partitions of shape ${\l/\m}$ with entries not exceeding a fixed number $m>0$.
\subsection{Main results}
To a reverse plane partition $T\in {\mathcal{R}}({\l/\m})$ we assign a reading word $r(T)$ in the following way: ignore each entry of $T$ that is equal to the entry directly below it; then read all the remaining entries in the left-to-right bottom-to-top order (the usual reading order for Young tableaux). After that we define a family of operators $e_1,e_2,\dots,e_{m-1}$ on the set ${\mathcal{R}}({\l/\m})$ which are essentially the usual parenthesization operators applied to the reading word (see \cite{LS1978}).
\begin{theorem}
\label{thm:crystal}
The operators $e_1,e_2,\dots, e_{m-1}$ satisfy the crystal axioms (which can be found in \cite{K1995} and will also be discussed in the sequel).
\end{theorem}
Therefore we get a crystal graph structure on ${\mathcal{R}}({\l/\m})$. As a direct application (see \cite{K1995}), we obtain a Littlewood-Richardson rule for reverse plane partitions:
\begin{corollary}
\label{cor:LR}
The dual stable Grothendieck polynomial $g_{\l/\m}(x)$ is expanded in terms of Schur polynomials $s_\nu(x)$ as follows:
$$ g_{\l/\m}(x)=\sum_\nu h_{\l/\m}^\nu s_\nu(x),$$
where the sum is over all Young diagrams $\nu$, and the coefficient $h_{\l/\m}^\nu$ is equal to the number of reverse plane partitions $T$ of shape ${\l/\m}$ and weight $\nu$ such that the reading word $r(T)$ is a lattice word.
\end{corollary}
We also give a self-contained proof of this Corollary without using the theory of crystal graphs. Note that the highest degree homogeneous component of $g_{\l/\m}(x)$ is the skew-Schur polynomial $s_{\l/\m}(x)$, so Corollary \ref{cor:LR} is an extension of the Littlewood-Richardson rule for skew-Schur polynomials.
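For concreteness, recall that $r(T)$ is a \textit{lattice word} if, for every $i$, each prefix of $r(T)$ contains at least as many $i$'s as $(i+1)$'s. This condition is straightforward to test prefix by prefix; the following short Python sketch (ours, with hypothetical names) does exactly that.
\begin{verbatim}
from collections import Counter

def is_lattice_word(w):
    """True if every prefix of w has at least as many i's as (i+1)'s."""
    seen = Counter()
    for letter in w:
        seen[letter] += 1
        if letter > 1 and seen[letter] > seen[letter - 1]:
            return False
    return True

assert is_lattice_word((1, 1, 2, 1, 3, 2))
assert not is_lattice_word((1, 2, 2))
\end{verbatim}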
\begin{remark}
\def{\mathrm{ceq}}{{\mathrm{ceq}}}
In \cite{GGL2015}, the following refinement $\tilde g_{\l/\m}(x;t)$ of $g_{\l/\m}(x)$ was introduced. For a reverse plane partition $T\in{\mathcal{R}}({\l/\m})$, let ${\mathrm{ceq}}(T):=(c_1,c_2,\dots)$ be the weak composition whose $i$-th entry $c_i$ is equal to the number of columns $j$ such that the boxes $(i,j)$ and $(i+1,j)$ both belong to ${\l/\m}$ and the entries of $T$ in these boxes are the same. Let $t=(t_1,t_2,\dots)$ be a vector of indeterminates, and put $t^{{\mathrm{ceq}}(T)}:=t_1^{c_1}t_2^{c_2}\dots$. Then the bounded-degree power series $\tilde g_{\l/\m}(x;t)$ is defined as the sum of $x^Tt^{{\mathrm{ceq}}(T)}$ over all reverse plane partitions $T$ of shape ${\l/\m}$. It will be clear later that the operators $e_1,e_2,\dots,e_{m-1}$ preserve this ${\mathrm{ceq}}$-statistic; therefore, Corollary \ref{cor:LR} also admits a refinement:
$$\tilde g_{\l/\m}(x;t)=\sum_\alpha t^\alpha \sum_\nu h_{\l/\m}^{\nu,\alpha} s_\nu(x),$$
where the first sum is over all weak compositions $\alpha$, and $h_{\l/\m}^{\nu,\alpha}$ counts the number of reverse plane partitions $T$ of shape ${\l/\m}$ and weight $\nu$ such that the reading word $r(T)$ is a lattice word with an extra property that ${\mathrm{ceq}}(T)=\alpha$.
\end{remark}
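As a quick illustration of the statistic (ours, not from \cite{GGL2015}), if a reverse plane partition is stored as a dictionary mapping boxes $(i,j)$ in matrix coordinates to their entries, then $\mathrm{ceq}(T)$ can be computed in one pass:
\begin{verbatim}
from collections import Counter

def ceq(T):
    """c_i = number of columns j such that boxes (i, j) and (i+1, j)
    both belong to the shape and carry equal entries of T."""
    c = Counter()
    for (i, j), v in T.items():
        if T.get((i + 1, j)) == v:   # absent boxes give None != v
            c[i] += 1
    return c
\end{verbatim}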
\subsection{Previous research}
There already is a combinatorial rule for the coefficients $h_{\l/\m}^\nu$ in \cite{LP2007} for the case when ${\mu}=\emptyset$ and ${\l/\m}={\lambda}$ is a straight shape. Namely, $h_{\lambda}^\nu$ equals the number $f_{\lambda}^\nu$ of \textit{elegant fillings} of ${\lambda}/\nu$, that is, the number of semi-standard Young tableaux $T$ of shape ${\lambda}/\nu$ such that all entries in the $i$-th row of $T$ are strictly less than $i$. This formula is Hopf-dual to the corresponding formula for stable Grothendieck polynomials that appeared earlier in \cite[Theorem 2.16]{L2000}, which implies that the dual stable Grothendieck polynomials are indeed Hopf-dual to the usual stable Grothendieck polynomials. To prove this rule, Lam and Pylyavskyy in \cite{LP2007} construct a weight-preserving bijection between reverse plane partitions of shape ${\lambda}$ and pairs $(S,U)$, where $S$ is a semi-standard Young tableau of some shape $\mu$ and $U$ is an elegant filling of ${\lambda}/\mu$. Following this bijection, one can deduce that $T$ is a reverse plane partition of shape ${\lambda}$ and weight $\nu$ whose reading word is a lattice word if and only if it corresponds to a pair $(S,U)$ such that $S$ is the filling of the shape $\nu$ with all entries in the $i$-th row equal to $i$, and $U$ is an elegant tableau of shape ${\lambda}/\nu$. Therefore the bijection from \cite{LP2007}, restricted to the reverse plane partitions whose reading word is a lattice word, proves the equality of the numbers $h_{\lambda}^\nu$ and $f_{\lambda}^\nu$.
For straight shapes, a combinatorial rule that involved the coefficients $h_{\lambda}^\nu$ instead of $f_{\lambda}^\nu$ was given in \cite[Proposition 5.3]{B2012} together with bijections that also show the equality of the numbers $h_{\lambda}^\nu$ and $f_{\lambda}^\nu$.
\subsection{The structure of the paper}
The rest of this section contains some background information about dual stable Grothendieck polynomials, crystal graphs, and an introduction to the operators $e_i$ that occur in the statement of Theorem \ref{thm:crystal}.
The second section is dedicated to the proof of Theorem \ref{thm:crystal} and Corollary \ref{cor:LR}, by exploring further properties of the reading words of reverse plane partitions and their connection with the action of the operators $e_i$.
\subsection{Preliminaries}
\subsubsection{Reverse plane partitions}
We follow the notation of \cite{LP2007}. Let ${\l/\m}$ be a skew shape and $m$ a positive integer. A \textit{reverse plane partition} $T$ of shape ${\l/\m}$ with entries in $[m]:=\{1,\dots,m\}$ is a tableau of this shape such that its entries do not exceed $m$ and weakly increase both in rows and in columns. For $i\in [m]$, by $T(i)$ we denote the number of columns of $T$ that contain $i$. To each reverse plane partition $T$ we attach a monomial $x^T=\prod_{i\in[m]} x_i^{T(i)}$. For a skew shape ${\l/\m}$, define the dual stable Grothendieck polynomial $g_{\l/\m}(x_1,\dots,x_m)$ as the sum of the weights $x^T$ over all reverse plane partitions $T$ of shape ${\l/\m}$ with entries in $[m]$:
$$g_{\l/\m}(x)=\sum_T x^T.$$
As it was shown in \cite{LP2007}, these polynomials are symmetric.
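A brute-force Python sketch (ours; exponential in the number of boxes, so suitable only for very small shapes) that enumerates ${\mathcal{R}}({\l/\m})$ and tabulates the exponent vectors of the monomials of $g_{\l/\m}$, with a skew shape encoded as a set of boxes $(r,c)$ in matrix coordinates:
\begin{verbatim}
from itertools import product
from collections import Counter

def rpps(shape, m):
    """All reverse plane partitions of the shape with entries in 1..m."""
    boxes = sorted(shape)
    for vals in product(range(1, m + 1), repeat=len(boxes)):
        T = dict(zip(boxes, vals))
        if all(v <= T.get((r, c + 1), m) and v <= T.get((r + 1, c), m)
               for (r, c), v in T.items()):   # weakly increasing
            yield T

def weight(T, m):
    """(T(1), ..., T(m)) with T(i) = #columns of T containing an i."""
    cols = {}
    for (r, c), v in T.items():
        cols.setdefault(c, set()).add(v)
    return tuple(sum(1 for s in cols.values() if i in s)
                 for i in range(1, m + 1))

def g_monomials(shape, m):
    return Counter(weight(T, m) for T in rpps(shape, m))
\end{verbatim}
For instance, for the one-column shape $\{(1,1),(2,1)\}$ and $m=2$ this returns the exponent vectors of $x_1+x_2+x_1x_2$, i.e., of $s_{(1,1)}+s_{(1)}$ in two variables.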
\subsubsection{Crystal graphs}
\label{subsubsection:crystal}
\def{\mathcal{S}}{{\mathcal{S}}}
Crystal graphs are important for the representation theory of certain quantized universal enveloping algebras, and have been a fruitful topic of research for the past two decades. We give a brief adaptation of crystal graph theory based on \cite{K1995,S2003,L1994}, at a level of detail that is minimal yet sufficient for the rest of this paper.
A crystal graph $G$ can be viewed as a set $V$ of vertices together with a set $e_1,\dots,e_{m-1}:V\to V\cup \{0\}$ of operators that act on the vertices of $G$ and return either a vertex of $G$ or zero. In addition, these operators are required to satisfy a set of simple \textit{crystal axioms}. If they do, then they are called \textit{crystal operators}, and $G$ is called \textit{a crystal graph}.
Instead of providing the list of these axioms, we give an important example of a crystal graph, which is the only crystal graph that we will be interested in. Fix $n>0$. Let ${\mathcal{S}}:=[m]^n$ be the set of all strings of length $n$ in the alphabet $[m]$. For $s=(s_1,s_2,\dots,s_n)\in{\mathcal{S}}$, the weight $w(s)=(w_1(s),\dots,w_m(s))$ is defined as
$$w_i(s):=\# \{j\in[n]:s_j=i\}.$$
For $i\in [m-1]$ we define the operator $E_i:{\mathcal{S}}\to{\mathcal{S}}\cup\{0\}$. For $s:=(s_1,s_2,\dots, s_n)\in {\mathcal{S}}$ the value $E_i(s)$ is evaluated using the following algorithm:
\begin{enumerate}
\item \label{step:ignore}Ignore all entries of $s$ other than the ones equal to $i$ or to $i+1$;
\item \label{step:pair}Ignore all occurrences of $i+1$ immediately followed by $i$;
\item \label{step:replace}After doing the previous step as many times as possible we obtain a string that consists of several $i$'s followed by several $i+1$'s. If there is at least one $i+1$, then $E_i$ replaces the leftmost $i+1$ by an $i$, and otherwise we set $E_i(s):=0$.
\end{enumerate}
In other words, $E_i$ labels each $i$ by a closing parenthesis, each $i+1$ by an opening parenthesis, and then it replaces the leftmost unmatched opening parenthesis by a closing one if there are any unmatched opening parentheses present. As an example, let $i=1$, $m=3$, $n=13$, and consider the following string $s:=(1,2,2,3,1,3,2,2,2,1,3,1,2)$. After step (\ref{step:ignore}) we ignore all $3$'s, so the string $s$ becomes $(1,2,2,*,1,*,2,2,2,1,*,1,2)$. Here the ignored entries are represented as stars. Next, we perform step (\ref{step:pair}) as many times as needed, so our string is modified as follows:
\begin{eqnarray*}
s&=& (1,2,2,3,1,3,2,2,2,1,3,1,2)\\
&\to& (1,2,2,*,1,*,2,2,2,1,*,1,2)\\
&\to& (1,2,*,*,*,*,2,2,2,1,*,1,2)\\
&\to& (1,2,*,*,*,*,2,2,*,*,*,1,2)\\
&\to& (1,2,*,*,*,*,2,*,*,*,*,*,2).
\end{eqnarray*}
Now we can easily calculate the $E_1$-orbit of $s$:
\begin{eqnarray*}
E_1^0(s)&=& (1,2,2,3,1,3,2,2,2,1,3,1,2)\\
E_1^1(s)&=& (1,\mathbf{1},2,3,1,3,2,2,2,1,3,1,2)\\
E_1^2(s)&=& (1,1,2,3,1,3,\mathbf{1},2,2,1,3,1,2)\\
E_1^3(s)&=& (1,1,2,3,1,3,1,2,2,1,3,1,\mathbf{1})\\
E_1^4(s)&=& 0.
\end{eqnarray*}
\def{\textrm{Im}}{{\textrm{Im}}}
\def{\textrm{Id}}{{\textrm{Id}}}
Similarly, define the operators $F_i$ to be the operators that replace the rightmost unmatched closing parenthesis by an opening one. The operators $E_i$ and $F_i$ are ``inverse to each other'' in the sense that for any two strings $u,v\in {\mathcal{S}}$, $E_i(u)=v$ if and only if $F_i(v)=u$.
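Since steps (\ref{step:pair}) and (\ref{step:replace}) amount to the usual stack-based parenthesis matching, both operators admit a short Python sketch (ours; words are tuples of letters, and None plays the role of $0$), and the string $s$ from the example above can serve as a test:
\begin{verbatim}
def E(i, s):
    """Turn the leftmost unmatched i+1 (an unmatched opening
    parenthesis) into an i; None if every i+1 is matched."""
    stack = []                        # positions of unmatched (i+1)'s
    for pos, letter in enumerate(s):
        if letter == i + 1:
            stack.append(pos)
        elif letter == i and stack:
            stack.pop()               # cancels an "(i+1) i" pair
    if not stack:
        return None
    t = list(s)
    t[stack[0]] = i
    return tuple(t)

def F(i, s):
    """Turn the rightmost unmatched i (an unmatched closing
    parenthesis) into an i+1; None if every i is matched."""
    opens, unmatched = 0, []          # unmatched i's seen so far
    for pos, letter in enumerate(s):
        if letter == i + 1:
            opens += 1
        elif letter == i:
            if opens:
                opens -= 1
            else:
                unmatched.append(pos)
    if not unmatched:
        return None
    t = list(s)
    t[unmatched[-1]] = i + 1
    return tuple(t)

s = (1, 2, 2, 3, 1, 3, 2, 2, 2, 1, 3, 1, 2)
assert E(1, s) == (1, 1, 2, 3, 1, 3, 2, 2, 2, 1, 3, 1, 2)
assert F(1, E(1, s)) == s
\end{verbatim}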
These operators satisfy the crystal axioms and therefore enjoy a number of useful properties, which we summarize in the following lemma:
\begin{lemma}
\label{lemma:crystal}
\begin{enumerate}
\item Each connected component of the corresponding edge-colored graph has exactly one vertex $v\in {\mathcal{S}}$ such that for every $i\in [m-1]$, $E_i(v)=0$.
\item This component is completely determined (up to an isomorphism of edge-colored graphs) by the weight $w(v)$, which is clearly a weakly decreasing sequence of integers.
\item The sum of $x^{w(u)}$ over all vertices $u$ in this connected component is equal to the Schur polynomial $s_{w(v)}$.
\end{enumerate}
\end{lemma}
Even though all of these properties follow from the fact that $E_i$ and $F_i$ satisfy crystal axioms, we prove them just to make the proof of Corollary \ref{cor:LR} self-contained. Note that a somewhat related proof can be found in \cite{RS1998}.
\begin{proof}
Note that if the words $u,u'\in {\mathcal{S}}$ are Knuth equivalent (see \cite{K1970}), then the words $E_i(u)$ and $E_i(u')$ are Knuth equivalent (or both zero), and likewise the words $F_i(u)$ and $F_i(u')$ are Knuth equivalent (or both zero). Moreover, for each word $u\in{\mathcal{S}}$ there is exactly one word $u'\in{\mathcal{S}}$ that is Knuth equivalent to $u$ and is the reading word of some semi-standard Young tableau $T$. Finally, the operators $E_i$ and $F_i$ applied to the reading word of $T$ produce a reading word of some other tableau of the same shape as $T$.
Now all three properties follow from the fact that any two semi-standard Young tableaux of the same straight shape can be obtained from one another by applying a sequence of operators $E_i$ and $F_i$. To show this, consider a tableau $T_0$ of shape ${\lambda}$ such that for every $j$, all of its entries in the $j$-th row are equal to $j$. Consider an integer $k\geq 1$, and let $T$ be a tableau of shape ${\lambda}$ such that for $j\geq k$, all entries of $T$ in the $j$-th row are equal to $j$, and such that for $j<k$, the entries of $T$ in the $j$-th row are less than or equal to $k$. Then we claim that such $T$ can be obtained from $T_0$ by applying a sequence of operators $F_i$ for various $i$'s. This claim is trivially true for $k=1$ and follows for all $k\geq 1$ by an easy induction on $k$; taking $k=m$ yields every semi-standard Young tableau of shape ${\lambda}$ with entries in $[m]$.
\end{proof}
\subsection{The crystal operators for reverse plane partitions}
\subsubsection{The descent-resolution algorithm}
We describe the descent-resolution algorithm for reverse plane partitions from \cite{GGL2015}, where it was used to describe the analogue of the Bender-Knuth involution for reverse plane partitions. Let ${\l/\m}$ be a skew shape, and fix $i\in[m-1]$. For a tableau $T'$ of shape ${\l/\m}$ such that the entries of $T'$ are equal to either $i$ or $i+1$ and weakly increase in columns but not necessarily in rows, we say that a column of $T'$ is \textit{$i$-pure} if it contains an $i$ but does not contain an $i+1$. Similarly, we call a column \textit{$i+1$-pure} if it contains an $i+1$ but does not contain an $i$. If a column contains both $i$ and $i+1$, then we call this column \textit{mixed}.
\begin{definition}[see \cite{GGL2015}]
A tableau $T'$ is a \textit{benign tableau} if the entries of $T'$ weakly increase in columns and for every two mixed columns $A$ and $B$ ($A$ to the left of $B$), the lowest $i$ in $A$ is not higher than the lowest $i$ in $B$. In other words, the borders between the $i$'s and the $i+1$'s in the mixed columns move weakly upward as we pass from left to right (see Figure \ref{fig:benign}).
\end{definition}
\newcommand{\multicolumn{1}{|c|}{}}{\multicolumn{1}{|c|}{}}
\newcommand{\mm}[1]{\multicolumn{1}{|c|}{#1}}
\begin{figure}[here]
\centering
\begin{tabular}{||ccc||ccc||ccc||}\hline
& & & & & & & & \\
&
\begin{tabular}{ccc}
\cline{3-3} & & \multicolumn{1}{|c|}{} \\
\cline{1-2} \mm{1} & \multicolumn{1}{|c|}{} & \mm{1}\\
\cline{1-1} \multicolumn{1}{|c|}{} & \mm{2} & \multicolumn{1}{|c|}{} \\
\cline{3-3} \mm{2} & \multicolumn{1}{|c|}{} & \mm{2}\\
\cline{2-3} \multicolumn{1}{|c|}{} & & \\
\cline{1-1}\\
\end{tabular}& &
&
\begin{tabular}{ccc}
\cline{3-3} & & \multicolumn{1}{|c|}{} \\
\cline{1-2} \multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{} & \mm{1}\\
\cline{3-3} \mm{2} & \mm{1} & \multicolumn{1}{|c|}{} \\
\cline{2-2} \multicolumn{1}{|c|}{} & \mm{2} & \mm{2}\\
\cline{2-3} \multicolumn{1}{|c|}{} & & \\
\cline{1-1}\\
\end{tabular}& &
&
\begin{tabular}{ccc}
\cline{3-3} & & \multicolumn{1}{|c|}{} \\
\cline{1-2} \multicolumn{1}{|c|}{} & \mm{1} & \multicolumn{1}{|c|}{} \\
\cline{2-2} \mm{1} & \multicolumn{1}{|c|}{} & \mm{2}\\
\cline{1-1} \multicolumn{1}{|c|}{} & \mm{2} & \multicolumn{1}{|c|}{} \\
\cline{2-3} \mm{2} & & \\
\cline{1-1} \\
\end{tabular}&\\
& (a) & & & (b) & & & (c) & \\\hline
\end{tabular}
\caption{\label{fig:benign} The table (a) is not benign, (b) is benign but is not a reverse plane partition, (c) is a reverse plane partition.}
\end{figure}
The descent-resolution algorithm takes a benign tableau $T'$ and converts it into a reverse plane partition of the same shape and weight.
A benign tableau $T'$ may easily fail to be a reverse plane partition. More specifically, it may contain an $i+1$ with an $i$ directly to the right of it; we call such a situation a \textit{descent}. Let $A$ be the column containing this $i+1$, and let $A+1$ be the column directly to its right, containing the $i$. Then there are three possible types of descents, depending on the types of the columns $A$ and $A+1$ (the abbreviations below refer to the case $i=1$):
\begin{enumerate}
\item[(2M)] $A$ is $i+1$-pure and $A+1$ is mixed
\item[(M1)] $A$ is mixed and $A+1$ is $i$-pure
\item[(21)] $A$ is $i+1$-pure and $A+1$ is $i$-pure
\end{enumerate}
There would be a fourth type of descent, in which both columns are mixed, but the benign tableau property implies that such descents are impossible. For each of these three types of descents, \cite{GGL2015} provides a \textit{descent-resolution step}, which changes only the entries of $A$ and $A+1$ and resolves the descent.
For descents of the first two types, the descent-resolution step switches the roles of the columns but preserves the vertical coordinate of the lowest $i$ in the mixed column; this determines the operation uniquely. For a descent of the third type, it simply replaces all $i$'s by $i+1$'s and vice versa in both columns. It is clear that the resulting tableau will also be a benign tableau. The descent-resolution steps for $i=1$ are visualized in Figure \ref{fig:reduction}.
\def{\mathbf{1}}{{\mathbf{1}}}
\def{\mathbf{2}}{{\mathbf{2}}}
\begin{figure}[here]
\centering
\begin{tabular}{||ccc||ccc||ccc||}\hline
& & & & & & & & \\
\begin{tabular}{cc}
\cline{2-2} & \multicolumn{1}{|c|}{}\\
\cline{1-1} \multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{} \\
\multicolumn{1}{|c|}{1} & \multicolumn{1}{|c|}{1}\\
\cline{1-1} \multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{}\\
\multicolumn{1}{|c|}{2} & \multicolumn{1}{|c|}{}\\
\cline{2-2} \multicolumn{1}{|c|}{} \\
\cline{1-1}
\end{tabular}
&
$\rightarrow$
&
\begin{tabular}{cc}
\cline{2-2} & \multicolumn{1}{|c|}{}\\
\cline{1-1} \multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{1} \\
\multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{}\\
\cline{2-2} \multicolumn{1}{|c|}{1} & \multicolumn{1}{|c|}{}\\
\multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{2}\\
\cline{2-2} \multicolumn{1}{|c|}{} \\
\cline{1-1}
\end{tabular}
\begin{tabular}{cc}
\cline{2-2} & \multicolumn{1}{|c|}{}\\
\cline{1-1} \multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{1} \\
\multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{}\\
\cline{2-2} \multicolumn{1}{|c|}{2} & \multicolumn{1}{|c|}{}\\
\multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{2}\\
\cline{2-2} \multicolumn{1}{|c|}{} \\
\cline{1-1}
\end{tabular}
&
$\rightarrow$
&
\begin{tabular}{cc}
\cline{2-2} & \multicolumn{1}{|c|}{}\\
\cline{1-1} \multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{} \\
\multicolumn{1}{|c|}{1} & \multicolumn{1}{|c|}{2}\\
\cline{1-1} \multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{}\\
\multicolumn{1}{|c|}{2} & \multicolumn{1}{|c|}{}\\
\cline{2-2} \multicolumn{1}{|c|}{} \\
\cline{1-1}
\end{tabular}
\begin{tabular}{cc}
\cline{2-2} & \multicolumn{1}{|c|}{}\\
\cline{1-1} \multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{} \\
\multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{1}\\
\multicolumn{1}{|c|}{2} & \multicolumn{1}{|c|}{}\\
\multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{}\\
\cline{2-2} \multicolumn{1}{|c|}{} \\
\cline{1-1}
\end{tabular}
&
$\rightarrow$
&
\begin{tabular}{cc}
\cline{2-2} & \multicolumn{1}{|c|}{}\\
\cline{1-1} \multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{} \\
\multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{2}\\
\multicolumn{1}{|c|}{1} & \multicolumn{1}{|c|}{}\\
\multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{}\\
\cline{2-2} \multicolumn{1}{|c|}{} \\
\cline{1-1}
\end{tabular}\\
& (M1) & & & (2M) & & & (21) & \\\hline
\end{tabular}
\caption{\label{fig:reduction} The descent-resolution steps (taken from \cite{GGL2015}).}
\end{figure}
The descent-resolution algorithm performs these descent-resolution steps until there are no descents left, which means that we get a reverse plane partition. This algorithm terminates, because $i$-pure columns always move to the right while $i+1$-pure columns always move to the left. Also, it is shown in \cite{GGL2015} that the result of the algorithm does not depend on the order in which the descents are resolved.
\subsubsection{The definition of $e_i$'s and $f_i$'s}
Let ${\l/\m}$ be a skew shape, and fix $i\in[m-1]$. For a reverse plane partition $T$ of shape ${\l/\m}$ with entries in $[m]$, define $e_i(T)$ as follows. First, consider only the subtableau of $T$ that consists of entries equal to either $i$ or $i+1$. Then, label each $i$-pure column by a closing parenthesis and each $i+1$-pure column by an opening parenthesis (and ignore the mixed columns).
Choose the ($i+1$-pure) column $A$ that corresponds to the leftmost unmatched opening parenthesis (if all opening parentheses are matched, set $e_i(T):=0$). Replace all the $i+1$'s in $A$ by $i$'s, and then apply the descent-resolution algorithm to the resulting benign tableau.
Similarly, $f_i$ chooses the ($i$-pure) column $B$ that corresponds to the rightmost unmatched closing parenthesis and replaces all the $i$'s in it by $i+1$'s and then applies the descent-resolution algorithm.
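The column-selection stage of $e_i$ and $f_i$ can be sketched in Python as follows (ours; the subsequent descent-resolution pass, which restores weak increase along rows, is omitted here). A reverse plane partition is again a dictionary mapping boxes $(r,c)$ to entries.
\begin{verbatim}
def column_type(T, i, c):
    """'pure_i', 'pure_i1' (i+1-pure), 'mixed', or None, judging
    column c only by its entries equal to i or i+1."""
    vals = {v for (r, cc), v in T.items() if cc == c and v in (i, i + 1)}
    if vals == {i}:
        return 'pure_i'
    if vals == {i + 1}:
        return 'pure_i1'
    return 'mixed' if vals else None

def e_target_column(T, i):
    """Column changed by e_i: the i+1-pure column carrying the
    leftmost unmatched opening parenthesis (None if e_i(T) = 0)."""
    stack = []
    for c in sorted({c for (r, c) in T}):
        t = column_type(T, i, c)
        if t == 'pure_i1':
            stack.append(c)
        elif t == 'pure_i' and stack:
            stack.pop()
    return stack[0] if stack else None
\end{verbatim}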
We discuss the properties of the $e_i$'s and $f_i$'s and their connection to the reading word defined above in the next section.
\section{Properties of the reading words of reverse plane partitions}
Recall that the reading word $r(T)$ of a reverse plane partition $T$ of shape ${\l/\m}$ is defined as the usual left-to-right bottom-to-top Young tableaux reading word that ignores each entry that has the same entry below it. An example is shown in Figure \ref{fig:rw}.
\begin{figure}[h]
$$\young(::12,:114,1114,1334,235:,245,34)\to \young(::\ 2,:\ \ \ ,\ 11\ ,1\ 34,\ 3\ :,2\ 5,34)\to 34253134112$$
\caption{\label{fig:rw} The reading word of a skew-shaped reverse plane partition.}
\end{figure}
We assume that the coordinates of the boxes are given in matrix notation, so the vertical (row) coordinate increases downward. For a reverse plane partition $T$, define its \textit{height vector} $h(T)$ to be the sequence of vertical coordinates of the entries of $T$ that contribute to $r(T)$, arranged in the exact same order as they appear in the reading word. For example, for $T$ as in Figure \ref{fig:rw} we put $h(T):=(7,7,6,6,5,4,4,4,3,3,1)$. It is always a weakly decreasing sequence of positive integers. Similarly, we define the height vector of a benign tableau. Note that each descent-resolution step preserves the height vector, and therefore so do the operators $e_i$ and $f_i$.
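In code, $r(T)$ and $h(T)$ can be read off together in one pass; a Python sketch (ours), with $T$ again a dictionary over boxes in matrix coordinates:
\begin{verbatim}
def reading_data(T):
    """Return (r(T), h(T)): the entries that differ from the entry
    directly below them, listed in left-to-right bottom-to-top
    order, together with their row coordinates."""
    boxes = [b for b in T if T.get((b[0] + 1, b[1])) != T[b]]
    boxes.sort(key=lambda rc: (-rc[0], rc[1]))    # bottom-to-top rows
    return [T[b] for b in boxes], [r for (r, c) in boxes]
\end{verbatim}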
\begin{lemma}
\label{lemma:injective}
Fix a skew shape ${\l/\m}$ and a sequence $h$ of positive integers. Then for each reading word $r$ there is at most one reverse plane partition $T$ of shape ${\l/\m}$ with $r(T)=r$ and $h(T)=h$.
\end{lemma}
\begin{proof}
Suppose that there exists a reverse plane partition $T$ of shape ${\l/\m}$ with $r(T)=r$ and $h(T)=h$. Then $T$ can be uniquely reconstructed from $r$ and $h$ by filling the boxes of ${\l/\m}$ in the reading order:
\begin{enumerate}
\item Set $j=1$;
\item Let $B$ be the first (in the reading order) box of ${\l/\m}$ which is not filled with a number. Let $a$ be the value in the box directly below it, and let $c$ be the value in the box directly to the left of it (if there is no such box then we put $a:=+\infty$ or $c:=0$);
\item If the height of $B$ is not equal to $h_j$, then set the entry in the box $B$ equal to $a$ and proceed to the next box (go to step 2);
\item If the number $r_j$ does not satisfy $c\leq r_j<a$, then, again, set the entry in the box $B$ equal to $a$ and proceed to the next box;
\item Otherwise, we set the entry in the box $B$ equal to $r_j$, increase $j$ by $1$ and proceed to the next box.
\end{enumerate}
Note that if $r$ and $h$ are the reading word and the height vector of some reverse plane partition, then the entries of $h$ weakly decrease, and the entries of $r$ that have the same height weakly increase. We prove by induction that the first $k$ entries of $T$ (in the reading order) are the same as the first $k$ entries of the reverse plane partition that the algorithm produces. For $k=0$ it is true. Now, we want to put $r_j$ somewhere into the row $h_j$ so that the entry below it is strictly bigger than $r_j$ and so that the entries in the row weakly increase. Thus if $r_j$ cannot be put into the current box (because either $r_j\geq a$ or $r_j<c$), then this box should be ignored by the reading word, so its value should be the same as the value directly below it. If $c\leq r_j<a$, then $r_j$ has to be put into the current box, because if we put $r_j$ somewhere to the right, then we have to fill this box with the value directly below it (with $a$), which is strictly bigger than $r_j$, so the entries in the row will not be weakly increasing.
\end{proof}
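The reconstruction procedure in the proof can be transcribed directly. The following Python sketch (our own code, with the same conventions as above, and assuming that $r$ and $h$ indeed come from a reverse plane partition of the given shape) fills the boxes in reading order; $\lambda$ and $\mu$ are passed as lists of row lengths.
\begin{verbatim}
INF = float("inf")

# Hedged transcription of the algorithm in the proof: visit the boxes
# of lam/mu in reading order and perform steps (2)-(5).
def reconstruct(lam, mu, r, h):
    mu = mu + [0] * (len(lam) - len(mu))
    T = {}                                    # (row, col) -> entry
    k = 0                                     # the index j of the proof
    for i in range(len(lam), 0, -1):          # bottom-to-top
        for j in range(mu[i - 1] + 1, lam[i - 1] + 1):  # left-to-right
            a = T.get((i + 1, j), INF)        # value directly below
            c = T.get((i, j - 1), 0)          # value directly to the left
            if k < len(r) and h[k] == i and c <= r[k] < a:
                T[(i, j)] = r[k]              # r_k must be placed here
                k += 1
            else:
                T[(i, j)] = a                 # box ignored by the word
    return T
\end{verbatim}
For instance, with $\lambda=(4,4,4,4,3,3,2)$, $\mu=(2,1)$ and the word and heights of Figure \ref{fig:rw}, this rebuilds the tableau shown there.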
Recall that the operators $F_i$ are defined on the set ${\mathcal{S}}=[m]^n$ of all strings of length $n$, and replace the rightmost unmatched closing parenthesis (corresponding to an entry equal to $i$) by an opening parenthesis (by an $i+1$). Meanwhile, the operators $f_i$ act on ${\mathcal{R}}({\l/\m})$, which is the set of all reverse plane partitions of shape ${\l/\m}$ with entries less than or equal to $m$. It turns out that these two actions commute with the operation of taking the reading word:
\begin{lemma}
\label{lemma:intertw}
Let $T$ be a reverse plane partition. Then
$$F_i(r(T))=r(f_i(T)).$$
In particular, if $f_i(T)$ is zero then $F_i(r(T))$ is zero and the converse is also true.
\end{lemma}
And, because $e_i$ and $f_i$ are ``inverse to each other'' (in the same sense as above), and the same is true for $E_i$ and $F_i$, we get
\begin{corollary}
Let $T$ be a reverse plane partition. Then
$$E_i(r(T))=r(e_i(T)).$$
\end{corollary}
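Before giving the proof, it may help to see the bracket rule for $F_i$ in concrete form. The following Python sketch (our own code, not from the paper) follows the convention above: each $i$ is a closing and each $i+1$ an opening parenthesis, and the rightmost unmatched $i$ is raised to $i+1$, with \texttt{None} playing the role of zero.
\begin{verbatim}
# Hedged sketch of F_i on words in [m]^n:  i = ')',  i+1 = '('.
def F(i, word):
    open_count, rightmost_unmatched = 0, None
    for pos, letter in enumerate(word):
        if letter == i + 1:
            open_count += 1                 # an opening parenthesis
        elif letter == i:
            if open_count > 0:
                open_count -= 1             # matched with an earlier i+1
            else:
                rightmost_unmatched = pos   # unmatched closing so far
    if rightmost_unmatched is None:
        return None                         # F_i(word) is zero
    w = list(word)
    w[rightmost_unmatched] = i + 1
    return tuple(w)
\end{verbatim}
For example, $F_1$ applied to the word $211$ returns $212$: the first $1$ is matched by the initial $2$, so the second $1$ is the rightmost unmatched one.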
\begin{proof}[{Proof of Lemma \ref{lemma:intertw}}]
The operator $f_i$ labels $i$-pure columns by closing parentheses and $i+1$-pure columns by opening parentheses. Then it finds the rightmost unmatched closing parenthesis and replaces the corresponding $i$-pure column by an $i+1$-pure column. After that we get a benign tableau $T'$, and then we apply the descent-resolution algorithm to $T'$ which produces a reverse plane partition $T''=:f_i(T)$. Our proof consists of two parts:
\begin{enumerate}
\item $r(T')=r(T'')$;
\item $F_i(r(T))=r(T')$.
\end{enumerate}
\begin{remark}
Note that both of these parts are false for $e_i$ and $E_i$. To make them true, one needs to introduce the reading word that ignores each entry equal to the entry directly \textit{above} it, rather than directly below it.
\end{remark}
We start with the first part. Note that even though $T'$ and $T''$ differ by a sequence of descent-resolution steps, it is not true in general that the descent-resolution steps preserve the reading word. Fortunately, as we will see later, all the appearing descents are of the first type. And the corresponding descent-resolution step (see Figure \ref{fig:reduction1}) clearly does not change the reading word.
\begin{figure}[h]
\centering
\begin{tabular}{||ccc||}\hline
& & \\
\begin{tabular}{cc}
\cline{2-2} & \multicolumn{1}{|c|}{}\\
\cline{1-1} \multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{} \\
\multicolumn{1}{|c|}{1} & \multicolumn{1}{|c|}{1}\\
\cline{1-1} \multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{}\\
\multicolumn{1}{|c|}{2} & \multicolumn{1}{|c|}{}\\
\cline{2-2} \multicolumn{1}{|c|}{} \\
\cline{1-1}
\end{tabular}
&
$\rightarrow$
&
\begin{tabular}{cc}
\cline{2-2} & \multicolumn{1}{|c|}{}\\
\cline{1-1} \multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{1} \\
\multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{}\\
\cline{2-2} \multicolumn{1}{|c|}{1} & \multicolumn{1}{|c|}{}\\
\multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{2}\\
\cline{2-2} \multicolumn{1}{|c|}{} \\
\cline{1-1}
\end{tabular}\\ & & \\\hline
\end{tabular}
\caption{\label{fig:reduction1} The first descent-resolution step (M1).}
\end{figure}
The reason we only need this descent-resolution step is the definition of $f_i$. Namely, $f_i$ changes only one $i$-pure column $A$ into an $i+1$-pure column. And this column is required to be labeled by the rightmost unmatched closing parenthesis. Let $B$ be the leftmost $i+1$-pure column to the right of $A$, and let $C$ be the rightmost $i$-pure column to the left of $A$. If there was an $i$-pure column between $A$ and $B$, then it would also be unmatched, so $A$ would not be labeled by the rightmost unmatched closing parenthesis. Also, if there was an $i+1$-pure column $D$ between $C$ and $A$, then it would have to be matched to some $i$-pure column between $D$ and $A$, so $C$ would not be the rightmost $i$-pure column to the left of $A$. All in all we can see that all the columns between $C$ and $A$ and between $A$ and $B$ are mixed. If either $C$ or $B$ is undefined, then all the columns to the left (resp., to the right) of $A$ are mixed.
Now it is clear why only the descents of the first type appear while the descent-resolution steps are performed. The column $A$ becomes $i+1$-pure, so the only possible descent can occur between $A$ and $A+1$, and as we resolve it, the $i+1$-pure column moves to the right. But because it is surrounded by mixed columns, the only appearing descents are between this $i+1$-pure column and the mixed column to the right of it. And if this $i+1$-pure column moves to the position $B-1$, then there are no descents left, because $B$ is also $i+1$-pure. This finishes the proof of the first part.
The second part asks for a certain correspondence between two different matchings. The first one appears when we label $i$-pure columns by closing parentheses, $i+1$-pure columns by opening parentheses, and then say that two pure columns match each other if their labels (two parentheses) match each other in the parenthesis sequence. In this situation we say that these two columns \textit{match in the reverse plane partition}. The second matching appears when we label the entries of the reading word by parentheses and say that two entries of the reading word match each other if their labels match each other. In this situation we say that these two entries \textit{match in the reading word}.
The second part of the Lemma states that an $i$-pure column is labeled by the rightmost unmatched closing parenthesis in the reverse plane partition
if and only if the corresponding entry in the reading word is also labeled by the rightmost unmatched closing parenthesis in the reading word. Here we can restrict our attention to reverse plane partitions that are filled only with $i$'s and $i+1$'s. For a column $A$, let $j(A)$ be the position of the corresponding entry of the reading word if $A$ is either $i$- or $i+1$-pure. If $A$ is mixed, then set $j^-(A)$ to be the position of the entry of the reading word corresponding to $i$ and set $j^+(A)$ to be the position of the entry of the reading word corresponding to $i+1$.
We need to check three implications:
\begin{enumerate}
\item If a column $A$ is $i$-pure and unmatched in the reverse plane partition, then the entry $j(A)$ is unmatched in the reading word.
\item If a column $A$ is mixed, then the entry $j^-(A)$ is matched to something (not necessarily to $j^+(A)$) in the reading word.
\item If a column $A$ is $i$-pure and matched to some $i+1$-pure column $B$ in the reverse plane partition, then the entry $j(A)$ is also matched to something (not necessarily to $j(B)$) in the reading word.
\end{enumerate}
It is clear that these three properties together imply that the $i$-pure columns unmatched in the reverse plane partition correspond exactly to the unmatched $i$'s in the reading word. And because the reading word preserves the order of pure columns, the second part of the lemma reduces to proving these three implications.
Note that if a column $A$ is $i$-pure, then for every other column $B$ that is to the right (resp., left) of $A$, the entry $j(B)$ or $j^-(B)$ or $j^+(B)$ if defined is also to the right (resp. left) of $j(A)$. Another simple useful observation is that if we have any injective map that attaches to each $i+1$ in the reading word an $i$ to the right of it, then all the $i+1$'s in this reading word are matched. Now we are ready to check the implications (1)-(3).
(1) If a column $A$ is $i$-pure and unmatched, then we can just throw everything to the right of $A$ and $j(A)$ out. Now, every $i+1$-pure column to the left of $A$ is matched to something in the reverse plane partition, so for every $i+1$ to the left of $j(A)$ in the reading word we have an $i$ that is between it and $j(A)$, and for different $i+1$'s these $i$'s are also different. Therefore every $i+1$ to the left of $j(A)$ is matched in the reading word as well, so $j(A)$ is unmatched in the reading word.
(2) Suppose $A$ is mixed. If we throw out all the columns that are to the right of $A$, then several $i+1$'s between $j^+(A)$ and $j^-(A)$ will be thrown out of the reading word, but all the $i$'s to the left of $j^-(A)$ will remain untouched. Let $B$ be the rightmost $i$-pure column to the left of $A$. Now we throw out all the columns to the left of $B$ and also $B$ itself, which corresponds to erasing the part of the reading word from the beginning to $j(B)$ (if there was no such $B$ then we do not throw anything out of the reading word). Now we have a reverse plane partition that contains no $i$-pure columns, so by the counting argument $j^-(A)$ is matched in the reading word. But then it was also matched in the original reading word.
(3) Suppose $A$ is $i$-pure and is matched in the reverse plane partition to some $i+1$-pure column $B$ to the left of $A$. Let $C$ be the rightmost $i$-pure column to the left of $B$. We throw out everything that is to the right of $A$ or to the left of $C$, which corresponds to keeping all the entries of the reading word between $j(C)$ and $j(A)$. We also remove $C$ and $j(C)$. All the $i$-pure columns between $A$ and $B$ are matched in the reverse plane partition to some $i+1$-pure columns between $A$ and $B$, and there are no $i$-pure columns between $B$ and $C$, so the number of $i+1$'s between $j(C)$ and $j(A)$ is strictly bigger than the number of $i$'s between $j(C)$ and $j(A)$, so $j(A)$ has to be matched to something in the reading word. We finish the proof of the third implication, which finishes the proof of the second (last) part of the Lemma.
\end{proof}
Let ${\l/\m}$ be a skew shape, and let $h$ be a sequence of positive integers. Lemmas \ref{lemma:injective} and \ref{lemma:intertw} give a vertex-injective map from the graph of all reverse plane partitions $T$ of shape ${\l/\m}$ with $h(T)=h$ to the graph ${\mathcal{S}}$ of all strings of the same length as $h$, and this map takes the operators $e_i$ and $f_i$ to $E_i$ and $F_i$. Therefore each connected component of the graph of all reverse plane partitions is isomorphic to the corresponding connected component of the graph ${\mathcal{S}}$. Now the proof of Theorem \ref{thm:crystal} follows from the observations about crystal graphs made in Subsection \ref{subsubsection:crystal}, in particular, the proof of Corollary \ref{cor:LR} follows from Lemma \ref{lemma:crystal}. \qed
\section*{Acknowledgments}
I am grateful to Prof. Alex Postnikov and to Darij Grinberg for their valuable remarks.
\section{Introduction}
In this section, we will provide the background of the research, state the main results of the paper
and give an outline of the ideas for the proofs.
\subsection{Density problems}\
For a totally ergodic system $(X,\mathcal{X} ,\mu,T)$
(this means $T^k$ is ergodic for any positive integer $k$),
Furstenberg \cite{FH81} showed that for any non-constant integer polynomial
$p$ and $f \in L^2(\mu)$,
\begin{equation}\label{poly-total-ergodic}
\lim_{N\to \infty} \frac{1}{N} \sum_{n=0}^{N-1} f(T^{p(n)}x)=\int f\; \mathrm{d}\mu
\end{equation}
in $L^2$ norm,
where an integer polynomial is a
polynomial with rational coefficients taking integer values on the integers.
Bourgain \cite{JB89} showed that (\ref{poly-total-ergodic}) holds pointwise for any
$f \in L^r(\mu)$ with $r> 1$.
\medskip
For topological dynamics,
the following question is natural.
\begin{ques}\label{Q1}
Let $(X,T)$ be a totally minimal system
(this means $T^k$ is minimal for any positive integer $k$)
and let $p$ be a non-constant integer polynomial.
Is there a point $x\in X$
such that the set $\{T^{p(n)}x : n\in \mathbb{Z}\}$ is dense in $X$?
\end{ques}
Note that for Question \ref{Q1} we cannot use the results of Furstenberg and Bourgain on polynomial convergence for totally ergodic systems,
as not every minimal system admits a totally ergodic measure.
In addition,
the total minimality assumption is necessary,
as can be seen by considering a periodic orbit of period 3 and the integer polynomial $p(n)=n^2$.
In \cite{GHSWY20}, it was shown that the answer to Question \ref{Q1} is positive for any integer polynomial of degree 2.
In order to precisely state the equidistribution results for totally ergodic nilsystems
obtained by Frantzikinakis and Kra \cite{FK05},
we start with the following definition.
A family of integer polynomials $\{p_1,\ldots,p_d\}$ is said to be
{\it independent} if for all integers $m_1,\ldots,m_d$, not all zero,
the polynomial $\sum_{j=1}^{d}m_jp_j$ is not constant.
In \cite{FK05},
it was shown that for a totally ergodic nilsystem
(which, for nilsystems, is equivalent to being totally minimal; see for example \cite{LA,Le05A,PW}),
there exists some point whose orbit along an independent family of integer polynomials is
uniformly distributed and thus dense.
They also pointed out that the assumption that the polynomial family is independent is necessary,
as can be seen by considering an irrational rotation on the circle.
In this paper, we give an affirmative answer to Question \ref{Q1}.
We prove:
\begin{Maintheorem}\label{poly-orbit}
Let $(X,T)$ be a totally minimal system,
and assume that $\{p_1,\ldots,p_d\}$ is an independent family of integer polynomials.
Then there is a dense $G_\delta$ subset $\Omega$ of $ X$ such that
the set
\begin{equation}\label{product-space}
\{(T^{p_1(n)}x,\ldots, T^{p_d(n)}x):n\in \mathbb{Z}\}
\end{equation}
is dense in $X^d$ for every $x\in \Omega$.
\end{Maintheorem}
If one assumes that $(X,T)$ is minimal and weakly mixing,
Huang, Shao and Ye \cite{HSY19} showed that
for any family of distinct integer polynomials,
the set (\ref{product-space}) is dense in $X^d$.
\subsection{Topological characteristic factors}\
For a measure preserving system $(X,\mathcal{X} ,\mu,T)$ and $f_1,\ldots,f_d\in L^\infty(\mu)$,
the study of convergence of the {\it multiple ergodic averages}
\begin{equation}\label{MEA}
\frac{1}{N} \sum_{n=0}^{N-1}f_1(T^n x)\cdots f_d(T^{dn}x)
\end{equation}
started with Furstenberg's elegant proof of Szemer\'{e}di's Theorem \cite{Sz75}
via an ergodic theoretical analysis \cite{FH}.
After nearly 30 years of effort by many researchers, this
problem (for $L^2$ convergence) was finally solved in \cite{HK05,TZ07}.
The basic approach is to find an appropriate factor, called a characteristic factor,
that controls the limit behavior in $L^2(\mu)$ of the averages (\ref{MEA}).
For the origin of these ideas and this terminology, see \cite{FH}.
To be more precise, let $(X,\mathcal{X} ,\mu,T)$ be a measure preserving system
and let $(Y,\mathcal{Y},\nu,T)$ be a factor of $X$.
We say that $Y$ is a \emph{characteristic factor} of $X$ if for all
$f_1,\ldots,f_d\in L^\infty(\mu)$,
\[
\lim_{N\to \infty}\frac{1}{N} \sum_{n=0}^{N-1}f_1(T^n x)\cdots f_d(T^{dn}x)
-\frac{1}{N} \sum_{n=0}^{N-1}\mathbb{E}(f_1| \mathcal{Y})(T^nx)\cdots
\mathbb{E}(f_d| \mathcal{Y})(T^{dn}x)=0
\]
in $L^2$ norm.
The next step is to obtain a concrete description for some well chosen characteristic factor
in order to prove convergence.
The result in \cite{HK05,TZ07} shows that
such a characteristic factor can be described as an inverse limit of nilsystems,
which is also called a pro-nilfactor.
\medskip
The counterpart of characteristic factors for topological dynamics was first studied by
Glasner in \cite{GE94}. To state the result we need a notion called saturated subset.
Given a map $\pi:X\to Y$ of sets $X$ and $Y$, a subset $L$ of $X$ is called $\pi$-\emph{saturated} if
\[
\{ x\in L:\pi^{-1}(\{\pi(x)\})\subset L \}=L
\]
i.e., $L=\pi^{-1}(\pi(L))$.
Here is the definition of topological characteristic factors:
\begin{definition}\cite{GE94}\label{def-saturated}
Let $\pi:(X,T)\to (Y,T)$ be a factor map of topological dynamical systems and $d\in \mathbb{N}$.
$(Y,T)$ is said to be a \emph{$d$-step topological characteristic factor}
if there exists a
dense $G_\delta$ subset $\Omega$ of $X$ such that for each $x\in \Omega$ the orbit closure
\[
L_x^d(X):=\overline{\mathcal{O}}\big((\underbrace{x,\ldots,x}_{\text{$d$ times}}),T\times \ldots \times T^d\big)
\]
is $\underbrace{\pi\times \ldots\times \pi}_{\text{$d$ times}}$-saturated.
That is, $(x_1,\ldots,x_d)\in L_x^d(X)$ if and only if
$(x_1',\ldots,x_d')\in L_x^d(X)$ whenever $\pi(x_i)=\pi(x_i')$ for every $i=1,\ldots, d$.
\end{definition}
In \cite{GE94}, it was shown that for minimal systems, up to a canonically defined proximal
extension, a characteristic family for $T\times \ldots \times T^d$ is the family of canonical PI flows of class $(d-1)$.
In particular, if $(X,T)$ is distal, then its largest class $(d-1)$ distal factor
is its topological characteristic factor along $T\times \ldots \times T^d$. Moreover,
if $(X,T)$ is weakly mixing, then the trivial system is its topological characteristic factor.
For more related results we refer the reader to \cite{GE94}.
\medskip
On the other hand,
to get the corresponding pro-nilfactors for topological dynamics,
in a pioneering work, Host, Kra and Maass \cite{HKM10} introduced the notion of
{\it regionally proximal relation of order $d$}
for a topological dynamical system $(X,T)$, denoted by $\mathbf{RP}^{[d]}(X)$.
For $d\in\mathbb{N}$, we say that a minimal system $(X,T)$ is a \emph{d-step pro-nilsystem}
if $\mathbf{RP}^{[d]}(X)=\Delta$; this is equivalent to $(X,T)$ being
an inverse limit of minimal $d$-step nilsystems.
For a minimal distal system $(X,T)$, it was proved that
$\mathbf{RP}^{[d]}(X)$ is an equivalence relation and $X/\mathbf{RP}^{[d]}(X)$
is the maximal $d$-step pro-nilfactor \cite{HKM10}.
Later, Shao and Ye \cite{SY12} showed that in fact for
any minimal system, the regionally proximal relation of order $d$ is an equivalence
relation and it also has the so-called lifting property.
\medskip
Very recently,
the result in \cite{GHSWY20} significantly improves Glasner's result, replacing the PI factors by pro-nilfactors.
That is, they proved:
\begin{theorem}[Glasner-Huang-Shao-Weiss-Ye]\label{key-thm0}
Let $(X,T)$ be a minimal system, and let $\pi:X\rightarrow X/\mathbf{RP}^{[\infty]}(X)= X_\infty$ be the factor map.
Then there exist minimal systems $X^*$ and $X_\infty^*$ which are almost one to one
extensions of $X$ and $X_\infty$ respectively, and a commuting diagram below such that $X_\infty^*$ is a
$d$-step topological characteristic factor of $X^*$ for all $d\ge 2$.
\begin{equation}\label{commuting-diagram}
\xymatrix{
X \ar[d]_{\pi} & X^* \ar[d]^{\pi^*} \ar[l]_{\sigma^*} \\
X_\infty & X_\infty^* \ar[l]_{\tau^*}
}
\end{equation}
\end{theorem}
\medskip
In the theorem above, one can see that
for any open subsets $V_0,V_1,\ldots,V_d$ of $X^*$ with
$\bigcap_{i=0}^d \pi^*(V_i)\neq \emptyset$,
there is some $n\in \mathbb{Z}$
such that
\[
V_0\cap T^{-n}V_1\cap \ldots\cap T^{-dn} V_d\neq \emptyset.
\]
Based on this result,
in this paper we use PET-induction which was introduced by Bergelson in \cite{BV87},
to give a polynomial version of their work:
\begin{Maintheorem}\label{polynomial-TCF}
Let $(X,T)$ be a minimal system,
and let $\pi:X\to X/\mathbf{RP}^{[\infty]}(X)= X_{\infty}$ be the factor map.
Then there exist minimal systems $X^*$ and $X_\infty^*$
which are almost one to one extensions of $X$ and $X_\infty$ respectively,
and a commuting diagram as in (\ref{commuting-diagram}) such that
for any open subsets $V_0,V_1,\ldots,V_d$ of $X^*$ with
$\bigcap_{i=0}^d \pi^*(V_i)\neq \emptyset$ and any distinct non-constant integer polynomials $p_i$
with $p_i(0)=0$ for $i=1,\ldots,d$,
there exists some $n\in \mathbb{Z}$
such that
\[
V_0\cap T^{-p_1(n)}V_1\cap \ldots\cap T^{-p_d(n)} V_d\neq \emptyset.
\]
\end{Maintheorem}
\subsection{Strategy of the proofs}\
To prove Theorem \ref{poly-orbit},
by Theorem \ref{key-thm0}
it suffices to verify the statement for the system $(X^*,T)$,
which is also totally minimal.
Equivalently, we must prove that
for any given non-empty open subsets $V_0,V_1,\ldots,V_d$ of $X^*$,
there exists some $n\in \mathbb{Z}$
such that
\[
V_0\cap T^{-p_1(n)}V_1\cap \ldots\cap T^{-p_d(n)} V_d\neq \emptyset.
\]
Since $X_\infty^*$ is an
almost one to one extension of a totally minimal $\infty$-step pro-nilsystem,
which can be approximated arbitrarily
well by nilsystems (see \cite[Theorem 3.6]{DDMSY13}),
we get that $( X_{\infty}^*,T)$ satisfies Theorem \ref{poly-orbit} (Lemma \ref{equi-condition}),
which implies there is some $m\in \mathbb{Z}$ such that
\[
\pi^*(V_0)\cap T^{-p_1(m)}\pi^*(V_1)\cap\ldots \cap T^{-p_d(m)}\pi^*(V_d)\neq \emptyset.
\]
Using Theorem \ref{polynomial-TCF}
for open sets $V_0,T^{-p_1(m)}V_1,\ldots,T^{-p_d(m)}V_d$
and integer polynomials $p_1(\cdot+m)-p_1(m),\ldots,p_d(\cdot+m)-p_d(m)$,
there is some $k\in \mathbb{Z}$ such that
\[
V_0\cap T^{-p_1(k+m)}V_1\cap \ldots\cap T^{-p_d(k+m)} V_d\neq \emptyset,
\]
as was to be shown.
\medskip
To prove Theorem \ref{polynomial-TCF},
we use PET-induction, which was introduced by Bergelson in \cite{BV87},
where PET stands for {\it polynomial ergodic theorem} or
{\it polynomial exhaustion technique} (see \cite{BV87,BM00}).
See also \cite{BL96,BL99} for more on PET-induction.
Basically, we associate to any finite collection of integer polynomials a `complexity',
and reduce the complexity step by step to the trivial one.
Note that at some steps
the cardinality of the collection may increase while the complexity decreases.
When carrying out the induction procedure,
we find that the known results are not enough to
make it work,
and we need a stronger result (Theorem \ref{polynomial-case}).
After we introduce PET-induction in Subsection \ref{PET-induction-def}, we will explain the main ideas of the
proof by working out some simple examples.
\subsection{The organization of the paper}\
The paper is organized as follows.
In Section \ref{pre}, the basic notions used in the paper are introduced.
In Section \ref{pf-thm-A}, we first give a proof of Theorem \ref{poly-orbit} assuming Theorem \ref{polynomial-TCF},
whose proof is rather involved.
In Section \ref{pf-thm-B}, we prove Theorem \ref{polynomial-TCF}.
\bigskip
\noindent {\bf Acknowledgments.}
The author would like to thank Professors Wen Huang, Song Shao and Xiangdong Ye for helping discussions.
\section{Preliminaries}\label{pre}
In this section we gather definitions and preliminary results that
will be necessary later on.
Let $\mathbb{N}$ and $\mathbb{Z}$ be the sets of all positive integers
and integers respectively.
\subsection{Topological dynamical systems}\
A \emph{topological dynamical system}
(or \emph{system}) is a pair $(X,T)$,
where $X$ is a compact metric space with a metric $\rho$, and $T : X \to X$
is a homeomorphism.
For $x\in X$, the \emph{orbit} of $x$ is given by $\mathcal{O}(x,T)=\{T^nx: n\in \mathbb{Z}\}$.
For convenience, we denote the orbit closure of $x\in X$
under $T$ by $\overline{\mathcal{O}}(x,T)$,
instead of $\overline{\mathcal{O}(x,T)}$.
A system $(X,T)$ is said to be \emph{minimal} if
every point has a dense orbit,
and \emph{totally minimal} if $(X,T^k)$ is minimal for
any positive integer $k$.
\medskip
A \emph{homomorphism} between systems $(X,T)$ and $(Y,T)$ is a continuous onto map
$\pi:X\to Y$ which intertwines the actions; one says that $(Y,T)$ is a \emph{factor} of $(X,T)$
and that $(X,T)$ is an \emph{extension} of $(Y,T)$. One also refers to $\pi$ as a \emph{factor map} or
an \emph{extension} and one uses the notation $\pi : (X,T) \to (Y,T)$.
An extension $\pi$ is determined
by the corresponding closed invariant equivalence relation $R_\pi=\{(x,x')\in X\times X\colon \pi(x)=\pi(x')\}$.
\subsection{Regional proximality of higher order}\
For $\vec{n} = (n_1,\ldots, n_d)\in \mathbb{Z}^d$ and $\epsilon=(\epsilon_1,\ldots,\epsilon_d)\in \{0,1\}^d$, we
define
$\displaystyle\vec{n}\cdot \epsilon = \sum_{i=1}^d n_i\epsilon_i $.
\begin{definition}\cite{HKM10}\label{definition of pronilsystem and pronilfactor}
Let $(X,T)$ be a system and $d\in \mathbb{N}$.
The \emph{regionally proximal relation of order $d$} is the relation $\mathbf{RP}^{[d]}(X)$
defined by: $(x,y)\in\textbf{RP}^{[d]}(X)$ if
and only if for any $\delta>0$, there
exist $x',y'\in X$ and
$\vec{n}\in \mathbb{N}^d$ such that:
$\rho(x,x')<\delta,\rho(y,y')<\delta$ and
\[
\rho( T^{\vec{n}\cdot\epsilon} x', T^{\vec{n}\cdot\epsilon} y')<\delta,
\quad \forall\;\epsilon\in \{0,1\}^d\backslash\{ \vec{0}\}.
\]
A system is called a \emph{$d$-step pro-nilsystem}
if its regionally proximal relation of order $d$ is trivial,
i.e., coincides with the diagonal.
\end{definition}
It is clear that for any system $(X,T)$,
\[
\ldots\subset \mathbf{RP}^{[d+1]}(X)\subset \mathbf{RP}^{[d]}(X)\subset
\ldots \subset\mathbf{RP}^{[2]}(X)\subset \mathbf{RP}^{[1]}(X).
\]
\begin{theorem}\cite[Theorem 3.3]{SY12}\label{cube-minimal}
For any minimal system and $d\in \mathbb{N}$,
the regionally proximal relation of order $d$
is a closed invariant equivalence relation.
\end{theorem}
It follows that for any minimal system $(X,T)$,
\[
\mathbf{RP}^{[\infty]}(X)=\bigcap_{d\geq1}\mathbf{RP}^{[d]}(X)
\]
is also a closed invariant equivalence relation.
Now we formulate the definition of $\infty$-step pro-nilsystems.
\begin{definition}
A minimal system $(X,T)$ is an \emph{$\infty$-step pro-nilsystem},
if the equivalence relation $\mathbf{RP}^{[\infty]}(X)$ is trivial,
i.e., coincides with the diagonal.
\end{definition}
The regionally proximal relation of order $d$ allows us to construct the \emph{maximal $d$-step pro-nilfactor}
of a minimal system. That is, any factor of $(X,T)$ which is a $d$-step pro-nilsystem
factorizes through this system.
\begin{theorem}\label{lift-property}\cite[Theorem 3.8]{SY12}
Let $\pi :(X,T)\to (Y,T)$ be a factor map of minimal systems and $d\in \mathbb{N}\cup\{\infty\}$. Then,
\begin{enumerate}
\item $(\pi \times \pi) \mathbf{RP}^{[d]}(X)=\mathbf{RP}^{[d]}(Y)$.
\item $(Y,T)$ is a $d$-step pro-nilsystem if and only if $\mathbf{RP}^{[d]}(X)\subset R_\pi$.
\end{enumerate}
In particular, the quotient of $(X,T)$ under $\mathbf{RP}^{[d]}(X)$
is the maximal $d$-step pro-nilfactor of $X$.
\end{theorem}
\subsection{Nilpotent groups, nilmanifolds and nilsystems}\
Let $L$ be a group.
For $g,h\in L$, we write $[g,h]=ghg^{-1}h^{-1}$ for the commutator of $g$ and $h$,
and, for subsets $A,B$ of $L$, we write $[A,B]$ for the subgroup spanned by $\{[a,b]:a\in A,b\in B\}$.
The commutator subgroups $L_j,j\geq1$, are defined inductively by setting $L_1=L$
and $L_{j+1}=[L_j,L],j\geq1$.
Let $k\geq 1$ be an integer.
We say that $L$ is \emph{k-step nilpotent} if $L_{k+1}$ is the trivial subgroup.
Let $L$ be a $k$-step nilpotent Lie group and $\Gamma$ be a discrete cocompact subgroup of $L$.
The compact manifold $X=L/\Gamma$ is called a \emph{k-step nilmanifold.}
The group $L$ acts on $X$ by left translation and we write this action as $(g,x)\mapsto gx$.
Let $\tau\in L$ and $T$ be the transformation $x\mapsto \tau x$ of $X$.
Then $(X,T)$ is called a \emph{k-step nilsystem}.
We also make use of inverse limits of nilsystems and so we recall the definition of an inverse limit
of systems (restricting ourselves to the case of sequential inverse limits).
If $\{(X_i,T_i)\}_{i\in \mathbb{N}}$ are systems with $\text{diam}(X_i)\leq 1$ and $\phi_i:X_{i+1}\to X_i$
are factor maps, the \emph{inverse limit} of the systems is defined to be the compact subset of
$\prod_{i\in \mathbb{N}}X_i$
given by $\{(x_i)_{i\in\mathbb{N}}:\phi_i(x_{i+1})=x_i,i\in \mathbb{N}\}$,
which is denoted by $\lim\limits_{\longleftarrow}\{ X_i\}_{i\in \mathbb{N}}$.
It is a compact metric space endowed with the
distance $\rho(x,y)=\sum_{i\in \mathbb{N}} \rho_i(x_i,y_i)/2^i$.
We note that the maps $\{T_i\}_{i\in \mathbb{N}}$ induce a transformation $T$ on the inverse limit.
\medskip
The following structure theorems characterize inverse limits of nilsystems.
\begin{theorem}[Host-Kra-Maass]\cite[Theorem 1.2]{HKM10}\label{description}
Let $d\geq2$ be an integer.
A minimal system is a $d$-step pro-nilsystem
if and only if
it is an inverse limit of minimal $d$-step nilsystems.
\end{theorem}
\begin{theorem}\cite[Theorem 3.6]{DDMSY13}\label{system-of-order-infi}
A minimal system is an $\infty$-step pro-nilsystem
if and only if it is an inverse limit of minimal nilsystems.
\end{theorem}
\subsection{Some facts about hyperspaces and fundamental extensions}\
Let $X$ be a compact metric space with a metric $\rho$.
Let $2^X$ be the collection of non-empty closed subsets of $X$.
We may define a metric on $2^X$ as follows:
\begin{align*}
\rho_H(A,C) &= \inf\{ \epsilon>0:A\subset B(C,\epsilon),C\subset B(A,\epsilon) \} \\
& =\max\{ \max_{a\in A} \rho(a,C),\max_{c\in C} \rho(c,A)\},
\end{align*}
where $\rho(x,A)=\inf_{y\in A}\rho(x,y)$ and $B(A,\epsilon)=\{x\in X:\rho(x,A)<\epsilon\}$.
The metric $\rho_H$ is called the \emph{Hausdorff metric} on $2^X$.
\medskip
Let $\pi:(X,T)\to (Y,T)$ be a factor map of systems.
We say that:
\begin{enumerate}
\item $\pi$ is an \emph{open} extension if it is open as a map;
\item $\pi$ is an \emph{almost one to one} extension if there is a dense $G_\delta$ subset
$\Omega$ of $X$ such that $\pi^{-1}(\{\pi (x)\})=\{x\}$ for every $x\in \Omega$.
\end{enumerate}
The following is a well-known fact about open mappings (see \cite[Appendix A.8]{JDV} for example).
\begin{theorem}\label{open-map}
Let $\pi:(X,T)\to (Y,T)$ be a factor map of systems.
Then the map $\pi^{-1}:Y\to 2^X,y \mapsto \pi^{-1}(y)$ is continuous
if and only if $\pi$ is open.
\end{theorem}
\subsection{Polynomial orbits in minimal systems}\
We have the following characterization of polynomial orbits in minimal systems.
\begin{lemma}\label{equi-condition}
Let $(X,T)$ be a minimal system and let $p_1,\ldots,p_d$ be non-constant integer polynomials.
Then the following statements are equivalent:
\begin{enumerate}
\item There exists a dense $G_\delta$ subset $\Omega$
of $X$ such that
the set
\[
\{(T^{p_1(n)}x,\ldots, T^{p_d(n)}x):n\in \mathbb{Z}\}
\]
is dense in $X^d$ for every $x\in \Omega$.
\item There exists some $x\in X$ such that the set
\[
\{(T^{p_1(n)}x,\ldots, T^{p_d(n)}x):n\in \mathbb{Z}\}
\]
is dense in $X^d$.
\item For any non-empty open subsets $U,V_1,\ldots,V_d$ of $X$,
there is some $n\in \mathbb{Z}$ such that
\[
U\cap T^{-p_1(n)}V_1\cap\ldots \cap T^{-p_d(n)}V_d\neq \emptyset.
\]
\end{enumerate}
\end{lemma}
\begin{proof}
(1) $\Rightarrow$ (2) is obvious.
(2) $\Rightarrow$ (3):
Assume there is some $x\in X$ such that
the set
\[
\{(T^{p_1(n)}x,\ldots, T^{p_d(n)}x):n\in \mathbb{Z}\}
\]
is dense in $X^d$.
It is clear that for any $m\in \mathbb{Z}$, the set
\[
X(x,m):= \{\big(T^{p_1(n)}(T^mx),\ldots, T^{p_d(n)}(T^mx)\big):n\in \mathbb{Z}\}
\]
is dense in $X^d$.
Fix non-empty open subsets $U,V_1,\ldots,V_d$ of $X$.
As $(X,T)$ is minimal, there is some $m\in \mathbb{N}$ with $T^mx\in U$.
Since $X(x,m)$ is dense in $X^d$,
we can choose some $n\in \mathbb{Z}$ such that
$T^{p_i(n)}(T^mx)\in V_i$ for $i=1,\ldots,d$, which implies
\[
T^mx\in U\cap T^{-p_1(n)}V_1\cap\ldots \cap T^{-p_d(n)}V_d.
\]
(3) $\Rightarrow$ (1):
Assume that for any given non-empty open subsets $U,V_1,\ldots,V_d$ of $X$,
there is some $n\in \mathbb{Z}$ such that
$U\cap T^{-p_1(n)}V_1\cap\ldots \cap T^{-p_d(n)}V_d\neq \emptyset.$
Let $\mathcal{F}$ be a countable base of $X$, and let
\[
\Omega:=\bigcap_{V_1,\ldots,V_d\in \mathcal{F}} \bigcup_{n\in \mathbb{Z}}
T^{-p_1(n)}V_1\cap\ldots \cap T^{-p_d(n)}V_d.
\]
By (3), each of these open sets meets every non-empty open subset of $X$ and is therefore dense; thus $\Omega$ is a dense $G_\delta$ subset of $X$ with the required property.
\end{proof}
The following result can be derived from
\cite[Theorem 1.2]{FK05}.
\begin{cor}\label{uniform-dis}
Let $(X=L/\Gamma,T)$ be a totally minimal nilsystem,
and assume that $\{p_1,\ldots,p_d\}$ is an independent family of integer polynomials.
Then there is some $x\in X$ such that the set
\[
\{ (T^{p_1(n)}x,\ldots,T^{p_d(n)}x):n\in \mathbb{Z}\}
\]
is dense in $X^d$.
\end{cor}
By Theorem \ref{system-of-order-infi} and Corollary \ref{uniform-dis} we have:
\begin{cor}\label{uni-distributed-AA}
Let $(X,T)$ be a totally minimal system
and assume that $\{p_1,\ldots,p_d\}$ is an independent family of integer polynomials.
If $(X,T)$ is an almost one to one extension of an $\infty$-step pro-nilsystem,
then there is some $x\in X$ such that the set
\[
\{(T^{p_1(n)}x,\ldots, T^{p_d(n)}x):n\in \mathbb{Z}\}
\]
is dense in $X^d$.
\end{cor}
\subsection{Polynomial recurrence}\
\medskip
Recall that a collection $\mathcal{F}$ of subsets of $\mathbb{Z}$ is a \emph{family}
if it is hereditary upward, i.e.,
$F_1 \subset F_2$ and $F_1 \in \mathcal{F}$ imply $F_2 \in \mathcal{F}$.
A family $\mathcal{F}$ is called \emph{proper} if it is neither empty nor the entire power set of $\mathbb{Z}$,
or, equivalently, if $\mathbb{Z}\in \mathcal{F}$ and $\emptyset\notin \mathcal{F}$.
For a family $\mathcal{F}$ its \emph{dual} is the family
$\mathcal{F}^*:=\{ F\subset \mathbb{Z}: F\cap F' \neq \emptyset \;\mathrm{for} \; \mathrm{all}\; F'\in \mathcal{F} \} $.
It is not hard to see that
$\mathcal{F}^*=\{F\subset \mathbb{Z}:\mathbb{Z}\backslash F\notin \mathcal{F}\}$, from which we have that if $\mathcal{F}$ is a family then $(\mathcal{F}^*)^*=\mathcal{F}$.
If a family $\mathcal{F}$ is closed under finite intersections and is proper, then it is called a \emph{filter}.
A family $\mathcal{F}$ has the {\it Ramsey property} if $A = A_1\cup A_2 \in \mathcal{F}$ implies that $A_1 \in \mathcal{F}$
or $A_2 \in \mathcal{F}$. It is well known that a proper family has the Ramsey property if and
only if its dual $\mathcal{F}^*$ is a filter \cite{FH}.
\medskip
For $j\in \mathbb{N}$ and a finite subset $\{p_1, \ldots, p_j\}$ of $\mathbb{Z}$, the
\emph{finite IP-set of length $j$} generated by $\{p_{1}, \ldots, p_j\}$
is the set
\[
\big\{p_{1}\epsilon_{1}+ \ldots+ p_j\epsilon_j: \epsilon_1,\ldots,\epsilon_j\in \{0,1\}\big\} \backslash \{0\}.
\]
The collection of all sets containing finite IP-sets with arbitrarily long lengths is denoted by $\mathcal{F}_{fip}$.
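As a quick illustration, the following Python sketch (our own code) enumerates a finite IP-set directly from the definition:
\begin{verbatim}
from itertools import product

# Hedged sketch: the finite IP-set of length j generated by p_1,...,p_j.
def finite_ip_set(gens):
    sums = {sum(p * e for p, e in zip(gens, eps))
            for eps in product((0, 1), repeat=len(gens))}
    return sums - {0}

# finite_ip_set([1, 10, 100]) == {1, 10, 11, 100, 101, 110, 111}
\end{verbatim}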
\begin{lemma}\cite[Lemma 8.1.6]{HSY16}
$\mathcal{F}_{fip}$ has the Ramsey property.
\end{lemma}
Then we have:
\begin{cor}\label{filter}
$\mathcal{F}_{fip}^*$ is a filter.
\end{cor}
\medskip
For a system $(X,T)$, $x\in X$, a non-constant integer polynomial $p$ and a non-empty open subset $V$ of $X$, set
\[
N(x,V)=\{ n\in \mathbb{Z}:T^nx\in V\}\quad
\mathrm{and}\quad
N_p(x,V)=\{n\in \mathbb{Z}:T^{p(n)}x\in V\}.
\]
\begin{prop}\cite[Proposition 8.1.5]{HSY16}\label{RP-infi}
Let $(X,T)$ be a minimal system and $(x,y)\in X\times X\backslash \Delta$.
Then $(x,y)\in \mathbf{RP}^{[\infty]}(X)$ if and only if $N(x,V)\in \mathcal{F}_{fip}$
for every open neighborhood $V$ of $y$.
\end{prop}
The following proposition follows from the argument in the proof of \cite[Theorem 8.1.7]{HSY16},
which also can be derived from \cite[Theorem 0.2]{BL18}.
\begin{prop}\label{fip-family}
Let $(X,T)$ be a minimal $\infty$-step pro-nilsystem.
Then for any $x\in X$, $N(x,V)\in \mathcal{F}_{fip}^*$
for every open neighbourhood $V$ of $x$.
\end{prop}
The following proposition is from \cite[Section 2.11]{Le05B}.
\begin{prop}\label{poly-in-nilsystem}
Let $(X,T)$ be a nilsystem, $x\in X$ and an open neighborhood $U$ of $x$.
For any non-constant integer polynomial $p$ with $p(0)=0$,
we can find another `larger' nilsystem $(Y, S)$ with $y\in Y$ and
an open neighborhood $V$ of $y$ such that
\[
\{n\in \mathbb{Z}:T^{p(n)}x\in U\}\supset \{n\in \mathbb{Z}:S^ny\in V\}.
\]
\end{prop}
It follows from Theorem \ref{system-of-order-infi} that
a minimal $\infty$-step pro-nilsystem is an inverse limit of minimal nilsystems.
By Propositions \ref{fip-family} and \ref{poly-in-nilsystem}
we can get:
\begin{prop}\label{infi-poly-rec}
Let $(X,T)$ be a minimal $\infty$-step pro-nilsystem
and let $p$ be a non-constant integer polynomial with $p(0)=0$.
Then for any $x\in X$,
$N_{p}(x,V)\in \mathcal{F}_{fip}^*$
for every open neighbourhood $V$ of $x$.
\end{prop}
Since $\mathcal{F}_{fip}^*$ is a filter (Corollary \ref{filter}),
by Proposition \ref{infi-poly-rec} we have:
\begin{cor}\label{return-time-AA}
Let $(X,T)$ be an almost one to one extension of a minimal $\infty$-step pro-nilsystem
and let $p_1,\ldots,p_d$ be non-constant integer polynomials with $p_i(0)=0$ for $i=1,\ldots,d$.
Then there exists a dense $G_\delta$ subset $\Omega$ of $X$
such that for any $x\in \Omega$,
$\bigcap_{i=1}^dN_{p_i}(x,V)\in \mathcal{F}_{fip}^*$
for every open neighbourhood $V$ of $x$.
\end{cor}
\subsection{A useful lemma}\
To end this section we give a useful lemma which can be derived from
the proof of \cite[Theorem 5.6]{GHSWY20}.
For completeness, we include the proof here.
To do this, we need the following topological characteristic factor theorem.
\begin{theorem}\cite[Theorem 4.2]{GHSWY20}\label{key-thm}
Let $\pi:(X,T)\to (Y,T)$ be a factor map of minimal systems.
If $\pi$ is open and $X/ \mathbf{RP}^{[\infty]}(X)$ is a factor of $Y$,
then $Y$ is a $d$-step topological characteristic factor of $X$ for all $d\in \mathbb{N}$.
\end{theorem}
With the help of the above powerful theorem we are able to show:
\begin{lemma}\label{fip-infi-fiber}
Let $\pi:(X,T)\to (Y,T)$ be a factor map of minimal systems.
If $\pi$ is open and $X/ \mathbf{RP}^{[\infty]}(X)$ is a factor of $Y$,
then for any distinct positive integers $a_1,\ldots,a_s$,
there is a dense $G_\delta$ subset $\Omega$ of $X$ such that for
any open subsets $V_0,V_1,\ldots,V_s$ of $X$ with
$\bigcap_{i=0}^s \pi(V_i)\neq \emptyset$ and any $z\in V_0 \cap \Omega$ with $\pi(z)\in \bigcap_{i=0}^s \pi(V_i)$,
there exists some $A\in \mathcal{F}_{fip}$
such that $T^{a_in}z\in V_i$ for every $i=1,\ldots,s$ and $n\in A$.
\end{lemma}
\begin{proof}
By Theorem \ref{key-thm},
for every $d\in \mathbb{N}$
there is a dense $G_\delta$ subset $\Omega_d$ of $X$ such that for each $x\in \Omega_d$,
$L_x^d(X)=\overline{\mathcal{O}}((x,\ldots,x),T\times \ldots \times T^d)$
is $\pi\times\ldots\times \pi$-saturated.\footnote{See Definition \ref{def-saturated}.}
Set $\Omega=\bigcap_{d\in \mathbb{N}}\Omega_d$,
then $\Omega$ is a dense $G_\delta$ subset of $X$.
We next show that the set $\Omega$ meets our requirement.
Now fix distinct positive integers $a_1,\ldots,a_s$.
Let $V_0,V_1,\ldots,V_s$ be open subsets of $X$ with
$W:=\bigcap_{i=0}^s \pi(V_i)\neq \emptyset$,
then $\pi^{-1}(W)\cap V_i\neq \emptyset$ for every $i=0,1,\ldots,s$.
Let $z\in \Omega \cap V_0 \cap \pi^{-1}(W)$.
For $i=1,\ldots,s$, let $z_i\in \pi^{-1}(\{ \pi(z) \} )\cap V_i$ and choose $\delta>0$ with $B(z_i,\delta)\subset V_i$.
Set $N=\max\{a_1,\ldots,a_s\}$.
Let $\{b_j\}_{j\in \mathbb{N}}$ be a sequence of positive integers such that $b_{j+1}\geq N(b_1+\ldots+b_j)+1$,
and let $I_j$ be the finite IP-set generated by $\{b_1,\ldots,b_j\}$.
\medskip
\noindent {\bf Claim:}
For $i,i'\in\{1,\ldots,N\}$ and $m,m'\in I_j$,
$im=i'm'$ if and only if $i=i'$ and $m=m'$.
\begin{proof}[Proof of Claim]
Suppose for a contradiction that there exist
$i,i'\in\{1,\ldots,N\}$ with $i<i'$ and $m,m'\in I_j$ such that $im=i'm'$.
Let $\epsilon_1,\ldots,\epsilon_j,\epsilon_1',\ldots,\epsilon_j'\in \{0,1\}$
be such that $m=b_1\epsilon_1+\ldots+b_j\epsilon_j$ and $m'=b_1\epsilon_1'+\ldots+b_j\epsilon_j'$.
Let
\[
j_0=\max\{ 1\leq n \leq j: \epsilon_n+\epsilon_n'>0\}.
\]
If $j_0=1$, then $m=m'=b_1$,
and thus $im< i'm'$.
If $j_0\geq 2$, then we have
\begin{align*}
b_{j_0}\leq |i\epsilon_{j_0}-i'\epsilon_{j_0}'|b_{j_0} &=\big|i\sum_{n=1}^{j_0-1}b_n\epsilon_n-
i'\sum_{n=1}^{j_0-1}b_n\epsilon_n'\big| \\
& \leq \sum_{n=1}^{j_0-1}b_n|i\epsilon_n-i'\epsilon'_n|\\
& \leq N(b_1+\ldots+b_{j_0-1}),
\end{align*}
which is a contradiction
by the choice of $b_{j_0}$.
This shows the claim.
\end{proof}
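(For instance, when $N=2$ one may take $b_j=3^{j-1}$, since then $N(b_1+\ldots+b_j)+1=3^j=b_{j+1}$.)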
For $k\in \mathbb{N}$, let $B_k=b_1+\ldots+b_k$ and
let $\vec{z}_k=(z_1^{k},\ldots,z_{B_kN}^k)\in X^{B_k N} $
such that
\[
z_{j}^k=
\begin{cases}
z_i, & j=a_im,\;\mathrm{where}\; i\in \{1,\ldots,s\},\; m\in I_k;\\
z, & \hbox{otherwise.}
\end{cases}
\]
By the claim above, every $\vec{z}_k$ is well defined.
Recall that for any $d\in \mathbb{N}$,
$L_z^d(X)$
is $\pi\times \ldots\times \pi$-saturated,
then $\vec{y}\in L_z^d(X)$ for any $\vec{y}=(y_1,\ldots,y_d)\in X^d$ with
$\pi(z)=\pi(y_i)$ for $i=1,\ldots,d$.
In particular, $\vec{z}_k\in L_z^{B_kN}(X)$
which implies that there is some $n_k\in \mathbb{N}$
such that
$\rho(T^{jn_k}z,z_j^k)<\delta$ for $j=1,\ldots,B_kN$.
Let $A_k=\{m n_k :m\in I_k\}$ and $A=\bigcup_{k\in \mathbb{N}}A_k$.
Then $A\in \mathcal{F}_{fip}$ and
$T^{a_in}z\in B(z_i,\delta)\subset V_i$ for $i=1,\ldots,s$ and $n\in A$.
This completes the proof.
\end{proof}
\section{Proof of Theorem \ref{poly-orbit} assuming Theorem \ref{polynomial-TCF}}\label{pf-thm-A}
In this section, assuming Theorem \ref{polynomial-TCF} we give a proof of Theorem \ref{poly-orbit}.
We start with the following simple observation.
\begin{lemma}\label{total-minimality-proximal}
Let $\pi:(X,T)\to (Y,T)$ be an almost one to one extension of minimal systems.
Then $(X,T)$ is totally minimal if and only if $(Y,T)$ is totally minimal.
\end{lemma}
Now we are in position to show Theorem \ref{poly-orbit} assuming Theorem \ref{polynomial-TCF}.
\begin{proof}[Proof of Theorem \ref{poly-orbit} assuming Theorem \ref{polynomial-TCF}]
Let $(X,T)$ be a totally minimal system
and let $X_\infty=X/\mathbf{RP}^{[\infty]}(X)$.
It follows from Theorem \ref{polynomial-TCF} that there exist minimal systems $X^*$ and $X_\infty^*$
which are almost one to one extensions of $X$ and $X_\infty$ respectively,
and a commuting diagram below:
\[
\xymatrix{
X \ar[d]_{\pi} & X^* \ar[d]^{\pi^*} \ar[l]_{\sigma^*} \\
X_\infty & X_\infty^* \ar[l]_{\tau^*}
}
\]
By Lemma \ref{total-minimality-proximal}, $(X^*,T)$ and $(X_\infty^*,T)$ are both totally minimal.
It suffices to verify Theorem \ref{poly-orbit} for system $(X^*,T)$.
Let $\{p_1,\ldots,p_d\}$ be an independent family of integer polynomials,
and let $V_0,V_1,\ldots,V_d$ be non-empty open subsets of $X^*$.
As $\pi^*$ is open, $\pi^*(V_0),\pi^*(V_1),\ldots,\pi^*(V_d)$ are non-empty open subsets of $X^*_\infty$.
Notice that $X^*_\infty$ is an almost one to one extension of $X_\infty$
which is a minimal $\infty$-step pro-nilsystem,
thus by Lemma \ref{equi-condition} and Corollary \ref{uni-distributed-AA},
there is some $m\in \mathbb{N}$ such that
\begin{align*}
& \pi^*(V_0)\cap\pi^*( T^{-p_1(m)}V_1)\cap\ldots \cap \pi^*( T^{-p_d(m)}V_d) \\
= & \pi^*(V_0)\cap T^{-p_1(m)}\pi^*(V_1)\cap\ldots \cap T^{-p_d(m)}\pi^*(V_d)\neq \emptyset.
\end{align*}
For $i=1,\ldots,d$, let $p_i'(n)=p_i(n+m)-p_i(m)$.
Then every $p_i'$ is an integer polynomial with $p_i'(0)=0$.
Now using Theorem \ref{polynomial-TCF} for integer polynomials $p_1',\ldots,p_d'$
and open sets $V_0, T^{-p_1(m)}V_1,\ldots,T^{-p_d(m)}V_d$,
there exists some $k\in \mathbb{N}$ such that
\begin{align*}
& V_0\cap T^{-p_1(k+m)}V_1\cap\ldots \cap T^{-p_d(k+m)}V_d \\
= & V_0\cap T^{-p_1'(k)}(T^{-p_1(m)}V_1)\cap\ldots \cap T^{-p_d'(k)}( T^{-p_d(m)}V_d)\neq \emptyset,
\end{align*}
which implies Theorem \ref{poly-orbit} for system $(X^*,T)$ by Lemma \ref{equi-condition}.
This completes the proof.
\end{proof}
\section{Proof of Theorem \ref{polynomial-TCF}}\label{pf-thm-B}
In this section, we will prove Theorem \ref{polynomial-TCF}.
Let $\mathcal{P}^*$ be the set of all non-constant integer polynomials
{\bf taking the value zero at zero}.
A \emph{system} $\mathcal{A}$ is a finite subset of $\mathcal{P}^*$.
\subsection{The PET-induction}\label{PET-induction-def}\
Two integer polynomials $p,q$ will be called \emph{equivalent}
if they have the same degree and the same leading coefficient.
If $C$ is a set of equivalent integer polynomials,
its \emph{degree} $w(C)$ is the degree of any of its members.
For every system $\mathcal{A}$, we define its \emph{weight vector} $\phi(\mathcal{A})$
as follows.
Let $w_1<\ldots<w_k$ be the
distinct degrees of the equivalence classes appearing in $\mathcal{A}$.
For $1\leq i\leq k$, let $\phi(w_i)$ be the number of the equivalence classes of elements of $\mathcal{A}$
with degree $w_i$. Let the weight vector $\phi(\mathcal{A})$ be
\[
\phi(\mathcal{A})=\big((\phi(w_1),w_1),\ldots,(\phi(w_k),w_k)\big).
\]
\medskip
For example, the weight vector of $\{c_1n,\ldots,c_sn\}$ is $(s,1)$ if $c_1,\ldots,c_s$
are distinct non-zero integers;
the weight vector of $\{an^2+b_1n,\ldots,an^2+b_tn\}$ ($a$ is a non-zero integer) is $(1,2)$;
and
the weight vector of $\{an^2+b_1n,\ldots,an^2+b_tn,\;c_1n,\ldots,c_sn\}$
($a$ is a non-zero integer and $c_1,\ldots,c_s$
are distinct non-zero integers) is $\big((s,1),(1,2)\big)$;
and the weight vector of a general system of polynomials of degree at most 2
is $\big((s,1),(k,2)\big)$.
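These computations are easy to mechanize. The following Python sketch (our own encoding, not the paper's notation) computes the weight vector of a system, with a polynomial $c_1n+\ldots+c_kn^k$ ($c_k\neq 0$) stored as the tuple $(c_1,\ldots,c_k)$:
\begin{verbatim}
from collections import Counter

# Hedged sketch: equivalence classes are determined by the pair
# (degree, leading coefficient); count the classes of each degree.
def weight_vector(polys):
    classes = {(len(p), p[-1]) for p in polys}
    counts = Counter(deg for deg, _ in classes)
    return tuple((counts[w], w) for w in sorted(counts))

# weight_vector([(2,), (5,), (0, 1), (3, 1)]) == ((2, 1), (1, 2))
\end{verbatim}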
Let $\mathcal{A},\mathcal{A}'$ be two systems.
We say that $\mathcal{A}'$ \emph{precedes} $\mathcal{A}$
if there exists a degree $w$ such that $\phi(\mathcal{A}')(w)<\phi(\mathcal{A})(w)$ and
$\phi(\mathcal{A})(u)=\phi(\mathcal{A}')(u)$ for any degree $u>w$.
We denote it by $\phi(\mathcal{A}')\prec \phi(\mathcal{A})$.
Under the order of weight vectors, we have
\begin{align*}
&(1,1)\prec (2,1)\prec\ldots\prec (m,1)\prec\ldots \prec(1,2)\prec\big((1,1),(1,2)\big)\prec\ldots \prec\\
& \big((m,1),(1,2)\big)\prec\ldots\prec(2,2)\prec\big((1,1),(2,2)\big)\prec\ldots \prec \big((m,1),(2,2)\big)\prec\ldots \prec\\
&\big((m,1),(k,2)\big)\prec\ldots \prec(1,3)\prec\big((1,1),(1,3)\big)\prec\ldots\prec
\big((m,1),(k,2),(1,3)\big)\prec\ldots \prec\\
& (2,3)\prec\ldots \prec
\big((a_1,1),(a_2,2),\ldots,(a_k,k)\big)\prec\ldots.
\end{align*}
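The order $\prec$ thus compares weight vectors from the highest degree downwards. A small Python sketch (our own encoding, with a weight vector stored as a map from degrees to numbers of equivalence classes) makes this explicit:
\begin{verbatim}
# Hedged sketch of the order: phi' precedes phi iff, at the highest
# degree where the class counts differ, phi' has the smaller count.
def precedes(phi_prime, phi):
    for w in sorted(set(phi_prime) | set(phi), reverse=True):
        a, b = phi_prime.get(w, 0), phi.get(w, 0)
        if a != b:
            return a < b
    return False   # equal weight vectors do not precede each other

# e.g. ((7,1),(1,2)) precedes (2,2):
assert precedes({1: 7, 2: 1}, {2: 2})
\end{verbatim}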
For $p\in \mathcal{P}^*$ and $m\in \mathbb{Z}$,
define $(\partial_m p)(n):=p(n+m)-p(m)$.
It is clear that $\partial_m p \in \mathcal{P}^*$ for any $p\in \mathcal{P}^*$ and $m\in \mathbb{Z}$.
The following lemma can be found in \cite{BL96,Le94}.
\begin{lemma}\label{PET-induction}
Let $\mathcal{A}$ be a system and let $m_1,\ldots,m_d $ be distinct non-zero integers.
Let $p\in \mathcal{A}$ be an element of the minimal degree in $\mathcal{A}$ and let
\[
\mathcal{A}'=\{q-p,\;\partial_{m_j}q-p:\; q\in \mathcal{A},\; 1\leq j\leq d\},
\]
then $\phi(\mathcal{A}')\prec \phi(\mathcal{A})$.
\end{lemma}
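For instance, if $\mathcal{A}=\{n^2\}$, whose weight vector is $(1,2)$, and $p=n^2$, then $q-p=0$ is constant (hence not in $\mathcal{P}^*$ and discarded), while $\partial_{m_j}p-p=2m_jn$; so $\mathcal{A}'=\{2m_1n,\ldots,2m_dn\}$ has weight vector $(d,1)\prec(1,2)$. Compare the tracking along $\partial_{n_1}n^2=n^2+2n_1n$ in the construction below.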
\subsection{A stronger result}\
Throughout this section,
let $(X,T)$ and $(Y,T)$ be minimal systems, and
let
\[
X\stackrel{\pi}{\longrightarrow} Y\stackrel{\phi}{\longrightarrow} X/\mathbf{RP}^{[\infty]}(X)=:X_\infty
\]
be factor maps such that
$\pi$ is \textbf{open} and $\phi$ is \textbf{almost one to one}.
For systems $\mathcal{A}=\{p_1,\ldots,p_s\}$ and $\mathcal{C}$,
we say for convenience that $\pi$ has the property $\Lambda(\mathcal{A},\mathcal{C})$
if for any open subsets $V_0,V_1,\ldots,V_s$ of $X$ with
$\bigcap_{i=0}^s\pi(V_i)\neq \emptyset$,
there exist $z\in V_0$ and $n\in \mathbb{N}$
such that
\begin{enumerate}[itemsep=4pt,parsep=2pt,label=(\arabic*)]
\item $T^{p_i(n)}z\in V_i$ for $1\leq i\leq s$;
\item\label{AAAA1111} $T^{q(n)}\pi(z)\in \bigcap_{i=0}^s \pi(V_i)$ for $q\in \mathcal{C}$.
\end{enumerate}
It follows from Theorem \ref{key-thm0}
that to show Theorem \ref{polynomial-TCF},
it suffices to show the following stronger result:
\begin{theorem}\label{polynomial-case}
For any systems $\mathcal{A}$ and $\mathcal{C}$,
$\pi$ has the property $\Lambda(\mathcal{A},\mathcal{C})$.
\end{theorem}
\subsubsection{Ideas for the proof of Theorem \ref{polynomial-case}}\
To prove Theorem \ref{polynomial-case}, we will use induction on the weight vector of $\mathcal{A}$.
The first step is to show that
$\pi$ has the property $\Lambda(\mathcal{A},\mathcal{C})$
if the weight vector of $\mathcal{A}$ is $(s,1)$, i.e., $\mathcal{A}=\{c_1n,\ldots,c_sn\}$,
where $c_1,\ldots,c_s$ are distinct non-zero integers.
In the second
step we assume that $\pi$ has the property $\Lambda(\mathcal{A}',\mathcal{C}')$
for any system $\mathcal{C}'$ and any system $\mathcal{A}'$ whose weight vector precedes
$\big( (\phi(w_1),w_1),\ldots,(\phi(w_k),w_k)\big)$.
Then we show that
$\pi$ also has the property $\Lambda(\mathcal{A},\mathcal{C})$
for any system $\mathcal{C}$ and
the system $\mathcal{A}$
with weight vector $\big( (\phi(w_1),w_1),\ldots,(\phi(w_k),w_k)\big)$,
and hence the proof is completed.
Before giving the proof of the second step,
we show that Theorem \ref{polynomial-case} holds for systems $\mathcal{A}$ with weight vector $(1,2)$ or $\big((s,1),(1,2)\big)$,
as examples
to illustrate our basic ideas.
\subsubsection{The concrete construction}\
We use a simple example to describe how we prove Theorem \ref{polynomial-case}.
Let $(X,T)$ be a minimal system, and assume $\pi:X\to X/\mathbf{RP}^{[\infty]}(X)= X_{\infty}$
is open. For open subsets $U,V$ of $X$
with $\pi(U)\cap \pi(V)\neq\emptyset$, we aim to choose some $n\in \mathbb{Z}$ with
\begin{equation}\label{simple-example}
U\cap T^{-n^2}V\neq \emptyset.
\end{equation}
\noindent {\bf Construction.}
The classical idea (under suitable assumptions) for showing (\ref{simple-example}) is the following:\
\noindent {(i):} Cover $X$ by the orbits of $U,V$. That is, choose $d\in \mathbb{N}$
with $\bigcup_{i=1}^d T^iU=X=\bigcup_{i=1}^d T^iV$;\
\noindent {(ii):} Construct $x_1,\ldots,x_d\in X$ and $n_1,\ldots,n_d\in \mathbb{N}$
such that
\[
T^{(n_i+\ldots+n_j)^2}x_j\in T^iV,\quad \forall\; 1\leq i\leq j\leq d.
\]
Once we have achieved this, then $x_d\in T^lU$ for some $ l\in\{1,\ldots, d\}$
which implies
\[
U\cap T^{-(n_l+\ldots+n_d)^2}V\neq \emptyset.
\]
\medskip
For general minimal systems,
this construction needs some modifications.
Now consider the return time set
\begin{equation}\label{return time set-a}
N_k(W_1,W_2):=\{n\in \mathbb{Z}:W_1\cap T^{-kn}W_2\neq \emptyset\}.
\end{equation}
It follows from Theorem \ref{key-thm} that $N_k(W_1,W_2)$ is non-empty for any
non-zero integer $k$ and any open subsets $W_1,W_2$ of $X$ with $\pi(W_1)\cap \pi(W_2)\neq \emptyset$.
Thus to ensure the existence of the construction (i),
the first change we make is the following:
\medskip
\noindent {(i)*:} Cover some fixed fiber instead of the whole space.
That is,
choose some $x\in X$ with $\pi(x)\in \pi(U)\cap \pi(V)$ and choose $a_1,\ldots,a_d\in \mathbb{N}$ such that
\[
\pi^{-1}( \{\pi(x)\})\subset \big(\bigcup_{j=1}^dT^{a_j}U\big)
\cap
\big(\bigcup_{j=1}^dT^{a_j}V\big).
\]
Let us go into the details of the first two steps of construction (ii).
Choose $x_1\in X$ and $n_1\in \mathbb{N}$ with $ T^{n_1^2}x_1 \in T^{a_1}V$.
Note that by the Bergelson--Leibman theorem \cite{BL96} such a choice exists.
What additional conditions the point $x_1$ needs to satisfy remains to be determined.
For the choice of $x_2$,
a feasible method is to track
$x_1$ along $\partial_{n_1}n^2=n^2+2n_1n$.
To be more precise, it suffices to choose $x_2\in X$ and $n_2\in \mathbb{N}$
such that $T^{n_2^2}x_2\in T^{a_2}V$ and $T^{n_2^2+2n_1n_2}x_2\in V_{x_1}$,
where $V_{x_1}$ is an open neighbourhood of $x_1$ with $T^{n_1^2}V_{x_1}\subset T^{a_1}V$.
This implies that the return time set $\{n\in \mathbb{Z}:T^{a_2}V\cap T^{-2n_1n}V_{x_1}\neq \emptyset\}$
should be non-empty.
By the argument above about the set (\ref{return time set-a}),
a suitable condition is $\pi(T^{a_2}V)\cap \pi(V_{x_1})\neq \emptyset$.
Thus to guarantee the induction procedure,
the points we choose in construction (ii) need to be very close to the fixed fiber,
i.e., $T^{n_1^2}\pi(x_1),T^{n_2^2}\pi(x_2)\in \pi(U)\cap \pi(V)$:
\bigskip
\[
\begin{tikzpicture}
\draw [dashed,-] (-3,4)--(-3,-4);
\draw [dashed,-] (2.6,4)--(2.6,-4);
\draw [dashed,-] (3,4)--(3,-4);
\draw [dashed,-] (-6,-2)--(6,-2);
\node [right] at (-3,-3) {$\pi(x)$};
\node [left] at (-4.5,3) {$T^{a_1}V$};
\node [left] at (-4.5,2.2) {$T^{a_2}V$};
\node [left] at (-4.7,-1.5) {$T^{a_d}V$};
\node [left] at (1.5,3) {$T^{a_1}V$};
\node [left] at (1.5,2.2) {$T^{a_2}V$};
\node [left] at (1.3,-1.5) {$T^{a_d}V$};
\node [right] at (3,-3) {$\pi(x)$};
\node [below] at (-3.4,-0.5){$x_1$};
\node [below] at (2.6,-0.5){$x_1$};
\node [below] at (2,0){$x_2$};
\node [left] at (3,3) {$x$};
\node [left] at (-3,3) {$x$};
\fill
(-3,-3)circle (2pt)
(3,-3)circle (2pt)
(3,3)circle (2pt)
(-3.4,-0.5)circle (2pt)
(2.6,-0.5)circle (2pt)
(2.6,2)circle (2pt)
(-3,3)circle (2pt)
(2,0)circle (2pt);
\draw[thick,red,->] (-3.4,-0.5) to[out=10, in=360] node[right, midway] {$n_1^2$} (-2.6,3);
\draw[thick,red,->] (2.6,-0.5) to[out=10, in=360] node[right, midway] {$n_1^2$} (3.4,3);
\draw[thick,blue,->] (2.6,2) to[out=-20, in=40] node[left, midway] {$2n_1n_2$} (2.6,-0.37);
\draw[thick,red,->] (2,0) to[out=170, in=170] node[left, midway] {$n_2^2$} (2.6,2);
\draw[ thick] (-3, -3) ellipse (1.5 and 0.7);
\draw[thick] (3, -3) ellipse (1.5 and 0.7);
\draw[ thick] (3, 3) ellipse (1.5 and 0.5);
\draw[ thick] (3, 2.2) ellipse (1.6 and 0.4);
\draw[thick] (3, -1.5) ellipse (1.7 and 0.2);
\draw[ thick] (-3, 3) ellipse (1.5 and 0.5);
\draw[ thick] (-3, 2.2) ellipse (1.6 and 0.4);
\draw[ thick] (-3, -1.5) ellipse (1.7 and 0.2);
\node [right,font=\Huge] at (5.5,-2.8) {$X_\infty$};
\node [right,font=\Huge] at (5.5,2.5) {$X$};
\end{tikzpicture}
\]
\medskip
Furthermore,
we will reduce a system of any given complexity to one of lower complexity,
so the points chosen in construction (ii) also need to be close to the fixed fiber
along polynomials of higher degrees;
this is why we additionally need property \ref{AAAA1111} in Theorem \ref{polynomial-case}.
We summarize this as follows:
\medskip
\noindent {(ii)*:}
For any system $\mathcal{C}$, construct $x_1,\ldots,x_d\in X$ and $n_1,\ldots,n_d\in \mathbb{N}$
such that
\begin{enumerate}
[itemsep=4pt,parsep=2pt,label=(\arabic*)]
\item $T^{(n_i+\ldots+n_j)^2}x_j\in T^{a_i}V$ for $1\leq i\leq j\leq d$;
\item $T^{q(n_i+\ldots+n_j)}\pi(x_j)\in \pi(U)\cap \pi(V)$ for $1\leq i\leq j\leq d$ and $q\in \mathcal{C}$.
\end{enumerate}
\medskip
In practice, we will use constructions (i)* and (ii)* in the general case to prove Theorem \ref{polynomial-case}.
When doing this, we find that if the collection of polynomials contains both linear
and non-linear elements, the argument becomes quite involved.
We will explain in Subsection \ref{exampless} how to overcome this difficulty by proving Case \ref{case2}.
\subsection{The first step: $\mathcal{A}=\{c_1n,\ldots,c_sn\}$}
\begin{lemma}\label{linear-case-with-constraint}
If there are distinct non-zero integers $c_1,\ldots,c_s$ such that $\mathcal{A}=\{c_1n,\ldots,c_sn\}$,
then $\pi$ has the property $\Lambda(\mathcal{A},\mathcal{C})$ for
any system $\mathcal{C}$.
\end{lemma}
\begin{proof}
Fix distinct non-zero integers $c_1,\ldots,c_s$.
Let $V_0,V_1,\ldots,V_s$ be open subsets of $X$ with
$W:=\bigcap_{i=0}^s\pi(V_i)\neq \emptyset$.
Recall that the map $\pi$ is open, thus $W$ is an open subset of $Y$
and $\pi^{-1}( W)\cap V_i\neq \emptyset$
for every $0\leq i\leq s$.
\medskip
\noindent {\bf Case 1:} $\min\{ c_1,\ldots,c_s\}>0$.
Fix a system $\mathcal{C}$.
By Corollary \ref{return-time-AA},
there is a dense $G_\delta$ subset $\Omega_Y$ of $Y$
such that for any $y\in \Omega_Y$,
$\bigcap_{q\in \mathcal{C}}N_q(y,V)\in \mathcal{F}_{fip}^*$
for every open neighbourhood $V$ of $y$.
By Lemma \ref{fip-infi-fiber}, there exists
a dense $G_\delta$ subset $\Omega_X$ of $X$ such that for any $x \in\Omega_X \cap V_0 \cap \pi^{-1}(W)$,
there exists $A\in \mathcal{F}_{fip}$
such that $T^{c_in}x\in V_i$ for $1\leq i\leq s$ and $n\in A$.
Let $z\in \pi^{-1}(\Omega_Y)\cap \Omega_X \cap V_0 \cap \pi^{-1}(W)$.
Then we have
\[
\{n\in \mathbb{Z}: \pi(z)\in \bigcap_{q\in \mathcal{C}}T^{-q(n)}W\}=
\bigcap_{q\in \mathcal{C}}N_q(\pi(z),W)\in \mathcal{F}_{fip}^*,
\]
and since $A\in \mathcal{F}_{fip}$ and every set in $\mathcal{F}_{fip}^*$ intersects every set in $\mathcal{F}_{fip}$,
we can choose some $n\in A$ such that
\begin{enumerate}[itemsep=4pt,parsep=2pt,label=(\arabic*)]
\item $T^{c_in}z\in V_i$ for $1\leq i\leq s$;
\item\label{AAAA1111} $T^{q(n)}\pi(z)\in W=\bigcap_{i=0}^s \pi(V_i)$ for $q\in \mathcal{C}$.
\end{enumerate}
Thus $\pi$ has the property $\Lambda(\mathcal{A},\mathcal{C})$.
\medskip
\noindent {\bf Case 2:} $\min\{ c_1,\ldots,c_s\}<0$.
Fix a system $\mathcal{C}$.
Assume $c_m=\min\{ c_1,\ldots,c_s\}<0$ for some $m\in \{1,\ldots,s\}$.
Let
\begin{align*}
\mathcal{A}' &=\{-c_mn,\; (c_1-c_m)n,\ldots,(c_{m-1}-c_m)n,\; (c_{m+1}-c_m)n,\ldots,(c_s-c_m)n\}, \\
\mathcal{C}'& =\{q(n)-c_mn:q\in \mathcal{C}\}.
\end{align*}
By Case 1, $\pi $ has the property $\Lambda(\mathcal{A}',\mathcal{C}')$.
Then for open sets $V_m,V_0,\ldots,V_{m-1},V_{m+1},\ldots,V_s$,
there exist $w\in V_m$ and $n\in \mathbb{Z}$ such that
$T^{-c_mn}w\in V_0,T^{(c_i-c_m)n}w\in V_i$ for $i\in \{1,\ldots,s\}\backslash\{m\}$
and $T^{q(n)-c_mn}\pi(w)\in \bigcap_{i=0}^s\pi(V_i)$ for $q\in \mathcal{C}$.
By letting $z=T^{-c_mn}w$, we deduce that $\pi$ has the property $\Lambda(\mathcal{A},\mathcal{C})$.
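Indeed, writing $z=T^{-c_mn}w$, a direct check using the displayed properties of $w$ and $n$ gives
\[
z\in V_0,\qquad T^{c_mn}z=w\in V_m,\qquad
T^{c_in}z=T^{(c_i-c_m)n}w\in V_i \quad \text{for } i\in \{1,\ldots,s\}\backslash\{m\},
\]
and $T^{q(n)}\pi(z)=T^{q(n)-c_mn}\pi(w)\in \bigcap_{i=0}^s\pi(V_i)$ for every $q\in \mathcal{C}$.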
This completes the proof.
\end{proof}
\subsection{Examples}\label{exampless}\
In this subsection,
we show that Theorem \ref{polynomial-case} holds for systems $\mathcal{A}$ with weight vector $(1,2)$
or $\big((s,1),(1,2)\big)$.
\begin{case}\label{case1}
$\phi(\mathcal{A})=(1,2)$.
\end{case}
\begin{proof}
Let $\mathcal{A}=\{an^2+b_1n,\ldots,an^2+b_tn \}$, where $a$ is a non-zero integer
and $b_1,\ldots,b_t$ are distinct integers.
Let $V_0,V_1,\ldots,V_t$ be open subsets of $X$ with
$W:= \bigcap_{m=0}^t\pi(V_m)\neq \emptyset.$
Recall that the map $\pi$ is open, thus $W$ is an open subset of $Y$.
For $0\leq m\leq t$,
by replacing $V_m$ by $V_m\cap \pi^{-1}(W)$
respectively,
we may assume without loss of generality that $\pi(V_m)=W$.
As $(X,T)$ is minimal, there is some $N\in \mathbb{N}$ such that
$\bigcup_{j=1}^NT^j V_m=X$ for every $0\leq m\leq t$.
Let $x\in X$ with $\pi(x)\in W$ and let
\[
\{1\leq j\leq N:\pi(x)\in T^jW\}
=\{a_1,\ldots,a_d\}.
\]
Then we have
\[
\pi^{-1}( \{\pi(x)\})\subset \bigcap_{m=0}^t\big(\bigcup_{j=1}^dT^{a_j}V_m\big) .
\]
As the map $\pi$ is open,
by Theorem \ref{open-map} we can choose $\delta>0$ such that
\begin{equation}\label{relation-1}
\pi^{-1}\big(B(\pi(x),\delta)\big)\subset \bigcap_{m=0}^t \big(\bigcup_{j=1}^dT^{a_j}V_m\big),
\end{equation}
and
\begin{equation}\label{relation-2}
B(\pi(x),\delta)\subset W \cap \big(\bigcap_{j=1}^d T^{a_j}W \big).
\end{equation}
Fix a system $\mathcal{C}$.
Write $p_m(n)=an^2+b_mn$ for $1\leq m\leq t$ and let $\eta=\delta/d$.
Inductively we will construct $x_1,\ldots,x_d\in X$
and $n_1,\ldots,n_d\in \mathbb{N}$
such that for $1\leq j\leq l \leq d$,
\begin{itemize}[itemsep=4pt,parsep=2pt]
\item $\pi(x_l)\in B(\pi(x),l\eta)$;
\item $T^{p_m(n_j+\ldots+n_l)}x_l\in T^{a_j}V_m$ for $1\leq m\leq t$;
\item $T^{q(n_j+\ldots+n_{l})}\pi(x_l)\in B(\pi(x),l\eta)$ for $q\in \mathcal{C}$.
\end{itemize}
Assume this has been achieved. Then for $1\leq j \leq d$,
\begin{itemize}[itemsep=4pt,parsep=2pt]
\item $\pi(x_d)\in B(\pi(x),d\eta)=B(\pi(x),\delta)$;
\item$T^{p_m(n_j+\ldots+n_d)}x_d\in T^{a_j}V_m$ for $1\leq m\leq t$;
\item $T^{q(n_j+\ldots+n_d)}\pi(x_d)\in B(\pi(x),d\eta)=
B(\pi(x),\delta)\subset \bigcap_{j=1}^d T^{a_j}W $ for $q\in \mathcal{C}$ by (\ref{relation-2}).
\end{itemize}
As $\pi(x_d)\in B(\pi(x),\delta)$,
it follows from (\ref{relation-1}) that $x_d\in T^{a_j}V_0$ for some $ j\in\{1,\ldots, d\}$.
Put $z=T^{-a_j}x_d$ and $n=n_j+\ldots+n_d$, then we have
\begin{itemize}[itemsep=4pt,parsep=2pt]
\item $z=T^{-a_j}x_d\in V_0$;
\item $T^{p_m(n)}z=T^{-a_j}(T^{p_m(n)}x_d)\in V_m$ for $1\leq m\leq t$;
\item $T^{q(n)}\pi(z)=T^{-a_j}(T^{q(n)}\pi(x_d))\in W = \bigcap_{m=0}^t\pi(V_m)$ for $q\in \mathcal{C}$.
\end{itemize}
This shows that $\pi$ has the property $\Lambda(\mathcal{A},\mathcal{C})$.
\medskip
{\bf We now return to the inductive construction of $x_1,\ldots,x_d$ and $n_1,\ldots,n_d$.}
\medskip
\noindent {\bf Step 1:}
Let $I_1=\pi^{-1}\big(B(\pi(x),\eta)\big)$. Then $I_1$ is an open subset of $X$ and
\[
\pi(x)\in \underbrace{\bigcap_{m=1}^t\pi( I_1\cap T^{a_1} V_m) }_{=:S_1}\subset B(\pi(x),\eta).
\]
\noindent {(i)} When $t=1$, i.e., $\mathcal{A}=\{p_1\}$.
By the Bergelson--Leibman Theorem \cite{BL96}, there exist $x_1\in I_1\cap T^{a_1}V_1$
and $n_1\in \mathbb{N}$ such that for $q\in \mathcal{C}$,
\[
T^{p_1(n_1)}x_1,\; T^{q(n_1)}x_1\in I_1\cap T^{a_1}V_1.
\]
Then for $q\in \mathcal{C}$ we have
\[
\pi(x_{1}),\; T^{q(n_{1})}\pi(x_{1})\in B(\pi(x),\eta).
\]
\noindent {(ii)} When $t\geq 2$.
Let
\begin{align*}
\mathcal{A}_1 &=\{ p_m-p_1:2\leq m\leq t\}=\{ (b_m-b_1) n:2\leq m \leq t\}, \\
\mathcal{C}_1 & =\{-p_1,\; q-p_1:q\in \mathcal{C}\}.
\end{align*}
By Lemma \ref{linear-case-with-constraint}, $\pi$ has the property $\Lambda(\mathcal{A}_1,\mathcal{C}_1)$.
Then for open sets $I_1\cap T^{a_1}V_1$ and $I_1\cap T^{a_1}V_2,\ldots,I_1\cap T^{a_1}V_t$,
there exist $y_1\in I_1\cap T^{a_1}V_1$ and $n_1\in \mathbb{N}$
such that
\begin{enumerate}[itemsep=4pt,parsep=2pt,label=(\arabic*)]
\item[$(1a)$] $T^{p_m(n_1)-p_1(n_1)}y_1\in I_1\cap T^{a_1} V_m$ for $2\leq m\leq t$;
\item[$(1c)$] $T^{-p_1( n_1)}\pi(y_1),\;T^{q(n_1)-p_1( n_1)}\pi(y_1)\in S_1$ for $q\in \mathcal{C}$.
\end{enumerate}
Set $x_1=T^{-p_1( n_1)}y_1$.
By $(1a)$, for $1\leq m\leq t$ we have
\[
T^{p_m(n_1)}x_1\in T^{a_1}V_m.
\]
By $(1c)$, for $ q\in \mathcal{C}$ we have
\[
\pi(x_{1}),\; T^{q(n_{1})}\pi(x_{1})\in S_1\subset B(\pi(x),\eta).
\]
Thus by (i) and (ii) we can choose $x_1\in X$ with $\pi(x_1)\in B(\pi(x),\eta)$ and $n_1\in \mathbb{N}$
such that
\begin{itemize}[itemsep=4pt,parsep=2pt]
\item $T^{p_m(n_{1})}x_{1}\in T^{a_1}V_m$ for $1\leq m\leq t$ ;
\item $T^{q(n_{1})}\pi(x_{1})\in B(\pi(x),\eta)$ for $q\in \mathcal{C}$.
\end{itemize}
\medskip
\noindent {\bf Step $l$:}
Let $l\geq 2$ be an integer and assume that we have already chosen
$x_1,\ldots,x_{l-1}\in X$ and $n_1,\ldots,n_{l-1}\in \mathbb{N}$ such that for $1\leq j \leq l-1$,
\begin{itemize}[itemsep=4pt,parsep=2pt]
\item $ \pi(x_{l-1})\in B(\pi(x),(l-1)\eta)\subset B(\pi(x),\delta)$;
\item $T^{p_m(n_j+\ldots+n_{l-1})}x_{l-1}\in T^{a_j}V_m$ for $1\leq m \leq t$;
\item $ T^{q(n_j+\ldots+n_{l-1})}\pi(x_{l-1})\in B(\pi(x),(l-1)\eta)$ for $q\in \mathcal{C}$.
\end{itemize}
Choose $\eta_l>0$ with $\eta_l<\eta$ such that for $1\leq j \leq l-1$,
\begin{align}
\label{newl1} T^{p_m(n_j+\ldots+n_{l-1})}B(x_{l-1},\eta_l)&\subset T^{a_j}V_m,
&\forall\; 1\leq m \leq t, \\
\label{newl2} T^{q(n_j+\ldots+n_{l-1})}B\big(\pi(x_{l-1}),\eta_l\big)&
\subset B(\pi(x),(l-1)\eta),&\forall \; q\in \mathcal{C}.
\end{align}
Let $I_l=\pi^{-1}\big(B(\pi(x_{l-1}),\eta_l)\big)$.
By (\ref{relation-2}),
we have $ \pi(x_{l-1})\in B(\pi(x),\delta)\subset \bigcap_{j=1}^dT^{a_j}W$
and
\[
\pi(x_{l-1})\in \underbrace{\bigcap_{m=1}^t\pi( I_l\cap T^{a_l} V_m)\cap
\pi\big(B(x_{l-1},\eta_l)\big) }_{=:S_l}\subset \pi(I_l)=
B(\pi(x_{l-1}),\eta_l)\subset B(\pi(x_{l-1}),\eta).
\]
Let
\begin{align*}
\mathcal{A}_l= &\{ p_m-p_1,\;\partial_{n_j+\ldots+n_{l-1}}p_m-p_1:1\leq m\leq t,1\leq j\leq l-1\} \\
= & \{ (b_m-b_1) n,\; (b_m-b_1+2an_j+\ldots+2an_{l-1}) n:1\leq m\leq t,1\leq j\leq l-1\},\\
\mathcal{C}_l=&\{-p_1,\; q-p_1,\;\partial_{n_j+\ldots+n_{l-1}}q-p_1:q\in \mathcal{C},1\leq j\leq l-1\}.
\end{align*}
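(Recall that $\partial_kp(n)=p(n+k)-p(k)$ for $p\in \mathcal{P}^*$ and $k\in \mathbb{Z}$.
In particular, for $p(n)=an^2+bn$ one has
\[
\partial_kp(n)=an^2+(b+2ak)n \qquad\text{and}\qquad p(k+n)=p(k)+\partial_kp(n),
\]
which gives the explicit form of $\mathcal{A}_l$ above; the second identity is the
telescoping relation used repeatedly in the estimates below.)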
By Lemma \ref{linear-case-with-constraint}, $\pi$ has the property $\Lambda(\mathcal{A}_l,\mathcal{C}_l)$.
Then for open sets
$I_l\cap T^{a_l}V_1,I_l\cap T^{a_l}V_2, \ldots,I_l\cap T^{a_l}V_t ,
\underbrace{B(x_{l-1},\eta_l),\ldots,B(x_{l-1},\eta_l)}_{t(l-1)\; \mathrm{times}}$,
there exist $y_l\in I_l\cap T^{a_l}V_1$ and $n_l\in \mathbb{N}$
such that for $1\leq j \leq l-1$,
\begin{enumerate}[itemsep=4pt,parsep=2pt,label=(\arabic*)]
\item[$(la)_1$]$ T^{p_m(n_l)-p_1(n_l)}y_l\in I_l\cap T^{a_l}V_m$ for $2\leq m\leq t$;
\item[$(la)_2$] $ T^{\partial_{n_j+\ldots+n_{l-1}}p_m(n_l)-p_1(n_l)}y_l\in B(x_{l-1},\eta_l)$
for $1\leq m\leq t$;
\item[$(lc)_1$] $ T^{-p_1(n_l)}\pi(y_l),\;T^{q(n_l)-p_1(n_l)}\pi(y_l)\in S_l$ for $q\in \mathcal{C}$;
\item[$(lc)_2$] $ T^{\partial_{n_j+\ldots+n_{l-1}}q(n_l)-p_1(n_l)}\pi(y_l)\in S_l
\subset B(\pi(x_{l-1}),\eta_l)$ for $q\in \mathcal{C}$.
\end{enumerate}
Set $x_l=T^{-p_1( n_l)}y_l$.
By $(la)_1$ for $1\leq m\leq t$ we have
\[
T^{p_m(n_l)}x_l\in T^{a_l}V_m.
\]
By $(la)_2$ and (\ref{newl1}), for $1\leq m\leq t,1\leq j\leq l-1$ we have
\begin{align*}
T^{p_m(n_j+\ldots+n_{l-1}+n_l)}x_l=& T^{p_m(n_j+\ldots+n_{l-1})}(T^{\partial_{n_j+\ldots+n_{l-1}}p_m(n_l)}x_l)\ \\
& \in T^{p_m(n_j+\ldots+n_{l-1})} B(x_{l-1},\eta_l)\subset T^{a_j}V_m.
\end{align*}
By $(lc)_1$, for $q\in \mathcal{C}$ we have
\[
\pi(x_l), \;T^{q(n_l)}\pi(x_l)\in S_l\subset B(\pi(x_{l-1}),\eta)
\subset B(\pi(x),l\eta).
\]
By $(lc)_2$ and (\ref{newl2}), for $q\in \mathcal{C},1\leq j\leq l-1$ we have
\begin{align*}
T^{q(n_j+\ldots+n_{l-1}+n_l)}\pi(x_l)=&T^{q(n_j+\ldots+n_{l-1})}(T^{\partial_{n_j+\ldots+n_{l-1}}q(n_l)}\pi(x_l))\ \\
& \in T^{q(n_j+\ldots+n_{l-1})} B(\pi(x_{l-1}),\eta_l)\subset B(\pi(x),l\eta).
\end{align*}
We finish the construction by induction.
\end{proof}
\medskip
\begin{case}\label{case2}
$\phi(\mathcal{A})=\big((s,1),(1,2)\big)$.
\end{case}
From $(la)_1$ and $(la)_2$ in the proof of Case \ref{case1}, we can see that the points
$T^{p_m(n_l)}x_l$ and $T^{\partial_{n_j+\ldots+n_{l-1}}p_m(n_l)}x_l$
should lie in different open sets.
However, this may fail for a family of polynomials containing linear ones.
The reason is clear:
for any linear polynomial $q\in \mathcal{P}^*$, say $q(n)=cn$,
one has $\partial_mq(n)=c(n+m)-cm=cn=q(n)$ for all $m\in \mathbb{Z}$.
In this case, we cannot use $q$ and $\partial_mq$ to track different open sets.
To overcome this difficulty, we divide the proof of Case \ref{case2} into
the following two claims; since their proofs are rather long, they are given after the proof of Case \ref{case2}.
For the general case, the idea is similar.
\begin{claim}\label{ex2}
Let $\mathcal{A}=\{an^2+b_1n,\ldots,an^2+b_tn \}$, where $a$ is a non-zero integer
and $b_1,\ldots,b_t$ are distinct integers,
and let $c_1,\ldots,c_s$ be distinct non-zero integers.
Then for any system $\mathcal{C}$ and open subsets $V_0,V_1,\ldots,V_t$ of $X$
with $\bigcap_{m=0}^t\pi(V_m)\neq \emptyset$,
there exist $z\in V_0$ and $n\in \mathbb{N}$
such that
\begin{itemize}[itemsep=4pt,parsep=2pt]
\item$ T^{c_i n}z\in V_0$ for $1\leq i\leq s$;
\item $ T^{an^2+b_mn}z\in V_m$ for $1\leq m\leq t$;
\item $T^{q(n)}\pi(z)\in \bigcap_{m=0}^t\pi(V_m)$ for $q\in \mathcal{C}$.
\end{itemize}
\end{claim}
\begin{claim}\label{ex-claim2}
Let $\mathcal{A}=\{an^2+b_1n,\ldots,an^2+b_tn \}$, where $a$ is a non-zero integer
and $b_1,\ldots,b_t$ are distinct integers,
and let $c_1,\ldots,c_s$ be distinct non-zero integers.
Then for any system $\mathcal{C}$ and open subsets $W_0,W_1,\ldots,W_s$ of $X$
with $\bigcap_{i=0}^s\pi(W_i)\neq \emptyset$,
there exist $z\in W_0$ and $n\in \mathbb{N}$
such that
\begin{itemize}[itemsep=4pt,parsep=2pt]
\item $ T^{an^2+b_mn}z\in W_0$ for $1\leq m\leq t$;
\item$ T^{c_in}z\in W_i$ for $1\leq i \leq s$;
\item $T^{q(n)}\pi(z)\in \bigcap_{i=0}^s\pi(W_i)$ for $q\in \mathcal{C}$.
\end{itemize}
\end{claim}
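Note that the two claims play complementary roles: in Claim \ref{ex2} the linear iterates
$T^{c_in}z$ stay inside the fixed set $V_0$ while the quadratic iterates visit the prescribed
sets $V_m$, whereas in Claim \ref{ex-claim2} these roles are reversed.
It is precisely this combination that allows us to handle the mixed system in Case \ref{case2}.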
Using Claims \ref{ex2} and \ref{ex-claim2}, we are able to give a proof of Case \ref{case2}.
\begin{proof}[Proof of Case \ref{case2} assuming Claims \ref{ex2} and \ref{ex-claim2}]
Fix a system $\mathcal{C}$.
Let
\[
\mathcal{A}=\{an^2+b_1n,\ldots,an^2+b_tn,\; c_1n,\ldots,c_s n \}
\]
where $a$ is a non-zero integer,
$b_1,\ldots,b_t$ are distinct integers, and
$c_1,\ldots,c_s$ are distinct non-zero integers.
Let $V_0,V_1,\ldots,V_s,U_1,\ldots,U_t$ be open subsets of $X$ with
\[
W:=\bigcap_{i=0}^s\pi(V_i)\cap\bigcap_{m=1}^t\pi(U_m)\neq \emptyset.
\]
Let $\mathcal{A}_1=\{an^2+b_1n,\ldots,an^2+b_tn\}$.
Using Claim \ref{ex2} for system $\mathcal{A}_1$ and integers $c_1,\ldots,c_s$,
then for system $\mathcal{C}$ and open sets $V_0\cap \pi^{-1}(W),U_1,\ldots,U_t$,
there exist $w\in V_0\cap \pi^{-1}(W)$ and $k\in \mathbb{N}$
such that
\begin{itemize}[itemsep=4pt,parsep=2pt]
\item $ T^{c_ik}w\in V_0\cap \pi^{-1}(W)$ for $1\leq i \leq s$;
\item $ T^{ak^2+b_mk}w\in U_m$ for $1\leq m\leq t$;
\item$T^{q(k)}\pi(w)\in \pi\big(V_0\cap \pi^{-1}(W)\big)\cap \bigcap_{m=1}^t\pi(U_m)\subset W$ for $q\in \mathcal{C}$.
\end{itemize}
For every $1\leq i \leq s$,
as $T^{c_ik}w\in V_0\cap \pi^{-1}(W)$,
there is some $w_i\in V_i$ with $\pi(T^{c_ik}w)=\pi(w_i)$ which implies
\begin{equation}\label{equal1}
\pi(w)=\pi(T^{-c_ik}w_i).
\end{equation}
Choose $\gamma>0$ such that
\begin{align}
\label{conA1} T^{c_ik}B(T^{-c_ik}w_i,\gamma)&\subset V_i, \quad\;\;\; \forall\; 1\leq i\leq s, \\
\label{conA2} T^{ak^2+b_mk}B(w,\gamma)&\subset U_m, \quad\; \forall\; 1\leq m\leq t,\\
\label{conA3} T^{q(k)}B\big(\pi(w),\gamma\big)&\subset W,\quad\;\; \;\forall\; q\in \mathcal{C}.
\end{align}
Let $W_0=B(w,\gamma)\cap \pi^{-1}\big( B(\pi(w),\gamma)\big)\cap V_0$ and
let $W_i=B(T^{-c_ik}w_i,\gamma)$ for $1\leq i\leq s$.
Then $W_0$ is a non-empty open set as $w\in W_0$,
and $\pi(w)\in \pi(W_i)$ by (\ref{equal1}).
Let
\begin{align*}
\mathcal{A}_2 &=\{\partial_k p:p\in \mathcal{A}_1\}=\{an^2+(b_1+2ak)n,\ldots,an^2+(b_t+2ak)n\}, \\
\mathcal{C}_1 & =\{\partial_k q:q\in \mathcal{C}\}.
\end{align*}
Now using Claim \ref{ex-claim2} for system $\mathcal{A}_2$ and integers $c_1,\ldots,c_s$,
then for system $\mathcal{C}_1$ and open sets $W_0,W_1,\ldots,W_s$,
there exist $z\in W_0$ and $l\in \mathbb{N}$ such that
\begin{align}
\label{qqq1} T^{al^2+(b_m+2ak)l}z&\in W_0\subset B(w,\gamma), \quad\quad\quad\quad\quad\quad\quad\quad\;\;\;\; \forall\; 1\leq m\leq t, \\
\label{qqq2} T^{c_il}z&\in W_i=B(T^{-c_ik}w_i,\gamma), \quad\quad\quad\quad\quad\quad\;\; \;\forall\; 1\leq i\leq s,\\
\label{qqq3}T^{\partial_kq(l)}\pi(z)&\in \bigcap_{i=0}^s\pi(W_i)\subset \pi(W_0)\subset B(\pi(w),\gamma),\quad\;\; \forall\; q\in \mathcal{C}.
\end{align}
By (\ref{conA1}) and (\ref{qqq2}), for $1\leq i \leq s$ we have
\[
T^{c_i(l+k)}z\in T^{c_ik}B(T^{-c_ik}w_i,\gamma)\subset V_i.
\]
By (\ref{conA2}) and (\ref{qqq1}), for $1\leq m\leq t$ we have
\[
T^{a(l+k)^2+b_m(l+k)}z=T^{ak^2+b_mk}(T^{al^2+(b_m+2ak)l}z)\in T^{ak^2+b_mk}B(w,\gamma)\subset U_m.
\]
By (\ref{conA3}) and (\ref{qqq3}), for $q\in \mathcal{C}$ we have
\[
T^{q(l+k)}\pi(z)=T^{q(k)}(T^{\partial_k q(l)}\pi(z))\in T^{q(k)}B(\pi(w),\gamma)\subset W.
\]
Put $n=l+k$, then we have
\begin{itemize}[itemsep=4pt,parsep=2pt]
\item $z\in V_0$;
\item$ T^{c_in}z\in V_i$ for $1\leq i \leq s$;
\item $ T^{an^2+b_mn}z\in U_m$ for $1\leq m\leq t$;
\item $T^{q(n)}\pi(z)\in W=\bigcap_{i=0}^s\pi(V_i)\cap\bigcap_{m=1}^t\pi(U_m)$ for $q\in \mathcal{C}$.
\end{itemize}
This completes the proof of Case \ref{case2}.
\end{proof}
We now proceed to the proof of Claims \ref{ex2} and \ref{ex-claim2}.
\begin{proof}[Proof of Claim \ref{ex2}]
We show this claim by induction on $s$.
When $s=0$, it follows from Case \ref{case1}.
Let $s\geq 1$ be an integer and
suppose that the statement of the claim is true for $s-1$.
Let $c_1,\ldots,c_s$ be distinct non-zero integers,
and let $V_0,V_1,\ldots,V_t$ be open subsets of $X$ with $W:=\bigcap_{m=0}^t\pi(V_m)\neq \emptyset$.
By an argument similar to the one in the proof of Case \ref{case1},
we may assume without loss of generality that $\pi(V_m)=W$ for $0\leq m\leq t$,
and there exist $x\in X$ with $\pi(x)\in W$, $a_1,\ldots,a_d\in \mathbb{N}$ and $\delta>0$ such that
\begin{equation}\label{case2-relation1}
\pi^{-1}\big(B(\pi(x),\delta)\big)\subset \bigcap_{m=0}^t\bigcup_{j=1}^dT^{a_j}V_m,
\end{equation}
and
\begin{equation}\label{case2-relation2}
B(\pi(x),\delta)\subset W \cap \big(\bigcap_{j=1}^d T^{a_j}W \big).
\end{equation}
Fix a system $\mathcal{C}$ and let $\eta=\delta/d$.
Write $p_m(n)=an^2+b_mn$ for $1\leq m\leq t$.
Inductively we will construct $x_1,\ldots,x_d\in X,k_1,\ldots,k_{d+1}\in \{1,\ldots,d\}$ with $k_1=1$
and $n_1,\ldots,n_d\in \mathbb{N}$
such that for every $1\leq j\leq l \leq d$,
\begin{itemize}[itemsep=4pt,parsep=2pt]
\item $x_l\in T^{a_{k_{l+1}}}V_0$;
\item $\pi(x_l)\in B(\pi(x),l\eta)$;
\item $T^{c_i(n_j+\ldots+n_l)}x_l\in T^{a_{k_j}}V_0$ for $1\leq i\leq s$;
\item $T^{p_m(n_j+\ldots+n_l)}x_l\in T^{a_{k_j}}V_m$ for $1\leq m\leq t$;
\item $T^{q(n_j+\ldots+n_l)}\pi(x_l)\in B(\pi(x),l\eta)$ for $q\in \mathcal{C}$.
\end{itemize}
Assume this has been achieved. Since $k_1,\ldots,k_{d+1}$ take values in $\{1,\ldots,d\}$,
by the pigeonhole principle there exist $1\leq j\leq l\leq d$ with $k_j=k_{l+1}$ such that
\begin{itemize}[itemsep=4pt,parsep=2pt]
\item $x_l\in T^{a_{k_{l+1}}}V_0=T^{a_{k_j}}V_0$;
\item $T^{c_i(n_j+\ldots+n_l)}x_l\in T^{a_{k_j}}V_0$ for $1\leq i\leq s$;
\item $T^{p_m(n_j+\ldots+n_l)}x_l\in T^{a_{k_j}}V_m$ for $1\leq m\leq t$;
\item $T^{q(n_j+\ldots+n_l)}\pi(x_l)\in B(\pi(x),l\eta)\subset B(\pi(x),\delta)\subset \bigcap_{j=1}^d T^{a_j}W $ for $q\in \mathcal{C}$
\ \ \ by (\ref{case2-relation2}).
\end{itemize}
Put $n=n_j+\ldots+n_l$ and $z=T^{-a_{k_{j}}}x_{l}$, then the claim follows.
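Indeed, $z=T^{-a_{k_j}}x_l\in V_0$, $T^{c_in}z\in V_0$ for $1\leq i\leq s$,
$T^{p_m(n)}z\in V_m$ for $1\leq m\leq t$, and
\[
T^{q(n)}\pi(z)=T^{-a_{k_j}}\big(T^{q(n)}\pi(x_l)\big)\in W=\bigcap_{m=0}^t\pi(V_m)
\quad \text{for } q\in \mathcal{C}.
\]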
\medskip
{\bf
We now return to the inductive construction of $x_1,\ldots,x_d,k_1,\ldots,k_{d+1}$ and $n_1,\ldots,n_d$.
}
\medskip
\noindent {\bf Step 1:}
Let $I_1=\pi^{-1}\big(B(\pi(x),\eta)\big)$.
Then $I_1$ is an open subset of $X$ and
\[
\pi(x)\in \underbrace{\bigcap_{m=0}^t\pi( I_1\cap T^{a_1} V_m) }_{=:S_1}\subset \pi(I_1)=
B(\pi(x),\eta).
\]
Let
\[
\mathcal{A}_1 =\{p_m(n)-c_1n:1\leq m\leq t\} \quad\mathrm{and}\quad
\mathcal{C}_1 =\{-c_1n,\; q(n)-c_1n:q\in \mathcal{C}\}.
\]
By our inductive hypothesis,
the conclusion of Claim \ref{ex2} holds for system $\mathcal{A}_1$ and integers $c_2-c_1,\ldots,c_s-c_1$.
Then for system $\mathcal{C}_1$ and open sets $I_1\cap T^{a_1} V_0,I_1\cap T^{a_1} V_1, \ldots,I_1\cap T^{a_1} V_t$,
there exist $y_1\in I_1\cap T^{a_1}V_0$ and $n_1\in \mathbb{N}$
such that
\begin{enumerate}[itemsep=4pt,parsep=2pt,label=(\arabic*)]
\item[$(1a)_1$] $T^{(c_i-c_1) n_1}y_1\in I_1\cap T^{a_1} V_0$ for $2\leq i\leq s$;
\item[$(1a)_2$] $T^{p_m(n_1)-c_1n_1}y_1\in I_1\cap T^{a_1} V_m$ for $1\leq m\leq t$;
\item[$(1c)_{\;\;}$] $T^{-c_1 n_1}\pi(y_1),\; T^{q(n_1)-c_1 n_1}\pi(y_1)\in S_1$ for $q\in \mathcal{C}$.
\end{enumerate}
Set $x_1=T^{-c_1 n_1}y_1$.
By $(1a)_1$, for $1\leq i\leq s$ we have
\[
T^{c_i n_1}x_1\in T^{a_1}V_0.
\]
By $(1a)_2$, for $1\leq m\leq t$ we have
\[
T^{p_m(n_1)}x_{1}\in T^{a_1}V_m.
\]
By $(1c)$, for $q\in \mathcal{C}$ we have
\begin{equation}\label{case2-001}
\pi(x_{1}),\;T^{q(n_{1})}\pi(x_{1})\in S_1\subset B(\pi(x),\eta)\subset B(\pi(x),\delta).
\end{equation}
There is some $k_2\in \{1,\ldots,d\}$ with $x_1\in T^{a_{k_2}}V_0$ by (\ref{case2-relation1}) and (\ref{case2-001}).
\medskip
\noindent {\bf Step $l$:}
Let $l\geq2$ be an integer and assume that we have already chosen
$x_1,\ldots,x_{l-1}\in X$, $k_1,\ldots,k_{l-1}\in \{1,\ldots,d\}$ and $n_1,\ldots,n_{l-1}\in \mathbb{N}$ such that
for $1\leq j\leq l-1$,
\begin{itemize}[itemsep=4pt,parsep=2pt]
\item $\pi(x_{l-1})\in B(\pi(x),(l-1)\eta)$;
\item $T^{c_i(n_j+\ldots+n_{l-1})}x_{l-1}\in T^{a_{k_{j}}}V_0$ for $1\leq i\leq s$;
\item $T^{p_m(n_j+\ldots+n_{l-1})}x_{l-1}\in T^{a_{k_{j}}}V_m$ for $1\leq m\leq t$;
\item $T^{q(n_j+\ldots+n_{l-1})}\pi(x_{l-1})\in B(\pi(x),(l-1)\eta)$ for $q\in \mathcal{C}$.
\end{itemize}
As $\pi(x_{l-1})\in B(\pi(x),(l-1)\eta)\subset B(\pi(x),d\eta)=B(\pi(x),\delta)$,
by (\ref{case2-relation1})
there is some $k_l\in \{1,\ldots,d\}$
such that $x_{l-1}\in T^{a_{k_l}}V_0$.
Choose $\eta_l>0$ with $\eta_l<\eta$ such that for $1\leq j\leq l-1$,
\begin{align}
\label{hhh1} B( x_{l-1},\eta_l)&\subset T^{a_{k_l}}V_0,& \\
\label{hhh2} T^{c_i(n_j+\ldots+n_{l-1})} B(x_{l-1},\eta_l)&\subset T^{a_{k_{j}}}V_0,
&\forall \;1\leq i\leq s, \\
\label{hhh3} T^{p_m(n_j+\ldots+n_{l-1})}B(x_{l-1},\eta_l)&
\subset T^{a_{k_{j}}}V_m ,& \forall \;1\leq m\leq t,\\
\label{hhh4} T^{q(n_j+\ldots+n_{l-1})}B(\pi(x_{l-1}),\eta_l)&
\subset B(\pi(x),(l-1)\eta) ,& \forall \;q\in \mathcal{C}.
\end{align}
\medskip
Let $I_l=\pi^{-1}\big(B(\pi(x_{l-1}),\eta_l)\big)$.
By (\ref{case2-relation2}),
we have $ \pi(x_{l-1})\in B(\pi(x),\delta)\subset \bigcap_{j=1}^dT^{a_j}W$
and
\[
\pi(x_{l-1})\in \underbrace{\bigcap_{m=1}^t\pi( I_l\cap T^{a_{k_l}}V_m)\cap
\pi\big(B(x_{l-1},\eta_l)\big) }_{=:S_l} \subset \pi(I_l)=
B(\pi(x_{l-1}),\eta_l)\subset B(\pi(x_{l-1}),\eta).
\]
Let
\begin{align*}
\mathcal{A}_l& =\{p_m(n)-c_1n,\;\partial_{n_j+\ldots+n_{l-1}}p_m(n)-c_1n:1\leq m\leq t,1\leq j\leq l-1\} ,\\
\mathcal{C}_l& =\{-c_1n,\;q(n)-c_1n,\;\partial_{n_j+\ldots+n_{l-1}}q(n)-c_1n:q\in \mathcal{C},1\leq j\leq l-1\}.
\end{align*}
By our inductive hypothesis,
the conclusion of Claim \ref{ex2} holds for system $\mathcal{A}_l$ and integers $c_2-c_1,\ldots,c_s-c_1$.
Then for system $\mathcal{C}_l$ and open sets $B(x_{l-1},\eta_l),I_l\cap T^{a_{k_l}}V_1,\ldots,I_l\cap T^{a_{k_l}}V_t,\underbrace{B(x_{l-1},\eta_l),\ldots,B(x_{l-1},\eta_l)}_{t(l-1)\;\mathrm{times}}$,
there exist $y_l\in B(x_{l-1},\eta_l)$ and $n_l\in \mathbb{N}$
such that for $1\leq j\leq l-1$,
\begin{enumerate}[itemsep=4pt,parsep=2pt,label=(\arabic*)]
\item[$(la)_1$] $ T^{(c_i-c_1)n_l}y_l\in B(x_{l-1},\eta_l)$ for $2\leq i \leq s$;
\item[$(la)_2$] $ T^{p_m(n_l)-c_1n_l}y_l\in I_l\cap T^{a_{k_l}}V_m$ for $1\leq m \leq t$;
\item[$(la)_3$] $ T^{\partial_{n_j+\ldots+n_{l-1}}p_m(n_l)-c_1
n_l}y_l\in B(x_{l-1},\eta_l)$ for $1\leq m\leq t$;
\item[$(lc)_1$] $ T^{-c_1n_l}\pi(y_l),\;T^{q(n_l)-c_1 n_l}\pi(y_l)\in S_l$ for $q\in \mathcal{C}$;
\item[$(lc)_2$] $ T^{\partial_{n_j+\ldots+n_{l-1}}q(n_l)-c_1n_l}\pi(y_l)\in S_l\subset B(\pi(x_{l-1}),\eta_l)$
for $q\in \mathcal{C}$.
\end{enumerate}
\medskip
Set $x_l=T^{-c_1 n_l}y_l$.
By $(la)_1$ and (\ref{hhh1}), for $1\leq i\leq s$ we have
\begin{equation}\label{AQ}
T^{c_i n_l}x_l\in B(x_{l-1},\eta_l)\subset T^{a_{k_l}}V_0.
\end{equation}
By (\ref{hhh2}) and (\ref{AQ}), for $1\leq i\leq s,1\leq j\leq l-1$ we have
\[
T^{c_i(n_j+\ldots+n_{l-1}+n_l)}x_l\in T^{c_i(n_j+\ldots+n_{l-1})}B(x_{l-1},\eta_l)
\subset
T^{a_{k_j}}V_0 .
\]
By $(la)_2$, for $1\leq m\leq t$ we have
\[
T^{p_m(n_l)}x_l\in T^{a_{k_l}}V_m.
\]
By $(la)_3$ and (\ref{hhh3}), for $1\leq m\leq t$ and $1\leq j\leq l-1$ we have
\begin{align*}
T^{p_m(n_j+\ldots+n_{l-1}+n_l)}x_l=& T^{p_m(n_j+\ldots+n_{l-1})}(T^{\partial_{n_j+\ldots+n_{l-1}}p_m(n_l)}x_l)\ \\
& \in T^{p_m(n_j+\ldots+n_{l-1})} B(x_{l-1},\eta_l)\subset T^{a_{k_j}}V_m.
\end{align*}
By $(lc)_1$, for $q\in \mathcal{C}$ we have
\begin{equation}\label{1212}
\pi(x_l), \;T^{q(n_l)}\pi(x_l)\in S_l\subset B(\pi(x_{l-1}),\eta)
\subset B(\pi(x),l\eta).
\end{equation}
By $(lc)_2$ and (\ref{hhh4}), for $q\in \mathcal{C}$ and $1\leq j\leq l-1$ we have
\begin{align*}
T^{q(n_j+\ldots+n_{l-1}+n_l)}\pi(x_l)=& T^{q(n_j+\ldots+n_{l-1})}(T^{\partial_{n_j+\ldots+n_{l-1}}q(n_l)}\pi(x_l))\ \\
& \in T^{q(n_j+\ldots+n_{l-1})} B(\pi(x_{l-1}),\eta_l)\subset B(\pi(x),l\eta).
\end{align*}
There is some $k_{l+1}\in \{1,\ldots,d\}$ with $x_l\in T^{a_{k_{l+1}}}V_0$ by (\ref{case2-relation1})
and (\ref{1212}).
We finish the construction by induction.
\end{proof}
Using Claim \ref{ex2}, we are able to give a proof of Claim \ref{ex-claim2}.
\begin{proof}[Proof of Claim \ref{ex-claim2}]
Let
\begin{align*}
\mathcal{A}_1= & \{-an^2-b_1n,\;-an^2+(c_1-b_1)n,\ldots,\;-an^2+(c_s-b_1)n\}, \\
\mathcal{C}_1= & \{q(n)-an^2-b_1n:q\in \mathcal{C}\}.
\end{align*}
Using Claim \ref{ex2} for system $\mathcal{A}_1$ and integers $b_2-b_1,\ldots,b_t-b_1$,
then for system $\mathcal{C}_1$ and open sets $W_0,W_0,W_1,\ldots,W_s$,
there exist $w\in W_0$ and $n\in \mathbb{N}$
such that
\begin{itemize}[itemsep=4pt,parsep=2pt]
\item$ T^{(b_m-b_1)n}w\in W_0$ for $2\leq m\leq t$;
\item $ T^{-an^2-b_1n}w\in W_0$;
\item $ T^{-an^2+(c_i-b_1)n}w\in W_i$ for $1\leq i\leq s$;
\item $T^{q(n)-an^2-b_1n}\pi(w)\in \bigcap_{i=0}^s\pi(W_i)$ for $q\in \mathcal{C}$.
\end{itemize}
Put $z=T^{-an^2-b_1n}w$, then the claim follows.
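Indeed, $z\in W_0$ and $T^{an^2+b_1n}z=w\in W_0$; for $2\leq m\leq t$ we have
$T^{an^2+b_mn}z=T^{(b_m-b_1)n}w\in W_0$; for $1\leq i\leq s$ we have
$T^{c_in}z=T^{-an^2+(c_i-b_1)n}w\in W_i$; and for $q\in \mathcal{C}$ we have
$T^{q(n)}\pi(z)=T^{q(n)-an^2-b_1n}\pi(w)\in \bigcap_{i=0}^s\pi(W_i)$.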
\end{proof}
\subsection{Proofs of Theorems \ref{polynomial-case} and \ref{polynomial-TCF}}\
Now we are able to give proofs of the main results of this section.
To prove them, we need two intermediate claims
whose proofs, being rather long, will be given at the end of this subsection.
\medskip
For a weight vector $\vec{w}=\big((\phi(w_1),w_1),\ldots,(\phi(w_k),w_k)\big)$,
define $\min (\vec{w})=w_1$ and $\max (\vec{w})=w_k$. That is,
for any system $\mathcal{A}$ with weight vector $\vec{w}$,
there are $a,b\in \mathcal{A}$
such that for all $p\in \mathcal{A}$,
\[
\min (\vec{w})=w_1=\mathrm{deg}(a)\leq \mathrm{deg}(p) \leq \mathrm{deg}(b)= w_k=\max (\vec{w}).
\]
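For instance, if $\mathcal{A}$ contains polynomials of degrees $1$, $2$ and $3$, and of no
other degrees, then $\min(\vec{w})=1$ and $\max(\vec{w})=3$.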
\begin{claim}\label{gene-claim1}
Assume $\pi$ has the property
$\Lambda(\mathcal{A}',\mathcal{C}')$
for all systems $\mathcal{A}'$ and $\mathcal{C}'$
for which the weight vector of $\mathcal{A}'$ is $\vec{w}$.
Let $\mathcal{A}=\{p_1,\ldots,p_t\}$ be a system with weight vector $\vec{w}$,
and let $\mathcal{B}$ be a system such that $\mathrm{deg}(b)<\min(\vec{w})$ for
all $b\in \mathcal{B}$.
Then for any system $\mathcal{C}$ and open subsets $V_0,V_1,\ldots,V_t$ of $X$
with $\bigcap_{m=0}^t\pi(V_m)\neq \emptyset$,
there exist $z\in V_0$ and $n\in \mathbb{N}$
such that
\begin{itemize}[itemsep=4pt,parsep=2pt]
\item $ T^{b(n)}z\in V_0$ for $b\in \mathcal{B}$;
\item $ T^{p_m(n)}z\in V_m$ for $1\leq m\leq t$;
\item $T^{q(n)}\pi(z)\in \bigcap_{m=0}^t\pi(V_m)$ for $q\in \mathcal{C}$.
\end{itemize}
\end{claim}
\begin{claim}\label{gene-claim2}
Assume $\pi$ has the property
$\Lambda(\mathcal{A}',\mathcal{C}')$
for all systems $\mathcal{A}'$ and $\mathcal{C}'$
for which the weight vector of $\mathcal{A}'$ is $\vec{w}$.
Let $\mathcal{A}$ be a system with weight vector $\vec{w}$,
and let $c_1,\ldots,c_s\in \mathcal{P}^*$ be distinct linear polynomials such that $c_i\notin \mathcal{A}$ for $1\leq i \leq s$.
Then for any system $\mathcal{C}$ and open subsets $V_0,V_1,\ldots,V_s$ of $X$
with $\bigcap_{i=0}^s\pi(V_i)\neq \emptyset$,
there exist $z\in V_0$ and $n\in \mathbb{N}$
such that
\begin{itemize}[itemsep=4pt,parsep=2pt]
\item$ T^{a(n)}z\in V_0$ for $a\in \mathcal{A}$;
\item $ T^{c_i(n)}z\in V_i$ for $1\leq i\leq s$;
\item $T^{q(n)}\pi(z)\in \bigcap_{i=0}^s\pi(V_i)$ for $q\in \mathcal{C}$.
\end{itemize}
\end{claim}
With the help of Claims \ref{gene-claim1} and \ref{gene-claim2},
we are able to show Theorem \ref{polynomial-case}.
\begin{proof}[Proof of Theorem \ref{polynomial-case} assuming Claims \ref{gene-claim1} and \ref{gene-claim2}]
For a system $\mathcal{A}$,
if $\phi(\mathcal{A})=(s,1)$ for some $s\in \mathbb{N}$,
then $\pi$ has the property $\Lambda(\mathcal{A},\mathcal{C})$ for any system $\mathcal{C}$
by Lemma \ref{linear-case-with-constraint}.
Now fix a system $\mathcal{A}$ with $(s,1)\prec \phi(\mathcal{A})$ for all $s\in \mathbb{N}$.
That is, there is some $p\in \mathcal{A}$ with $\mathrm{deg}(p)\geq 2$.
Assume $\pi$ has the property
$\Lambda(\mathcal{A}',\mathcal{C}')$
for all systems $\mathcal{A}'$ and $\mathcal{C}'$
for which $\phi(\mathcal{A}')\prec\phi(\mathcal{A})$.
Fix a system $\mathcal{C}$.
We next show that $\pi$ also has the property
$\Lambda(\mathcal{A},\mathcal{C})$.
\medskip
\noindent {\bf Case 1:} $\mathrm{deg}(p)\geq2 $ for every $p\in \mathcal{A}$.
Notice that for any non-zero integer $a$,
$\partial_{a}p\neq p$ for every $p\in \mathcal{A}$.
By a construction similar to the one in the proof of Case \ref{case1},
one can deduce that $\pi$ has the property $\Lambda(\mathcal{A},\mathcal{C})$.
\medskip
\noindent {\bf Case 2:}
There is some $p\in \mathcal{A}$ with $\mathrm{deg}(p)=1 $.
Let $\mathcal{A}=\{c_1,\ldots,c_s,\; p_1,\ldots,p_t\}$ such that $\mathrm{deg}(c_i)=1$ for $1\leq i\leq s$
and $\mathrm{deg}(p_m)\geq 2$ for $1\leq m\leq t$.
Let $V_0,V_1,\ldots,V_s,U_1,\ldots,U_t$ be open subsets of $X$ with
\[
W:= \bigcap_{i=0}^s\pi(V_i)\cap \bigcap_{m=1}^t\pi(U_m)\neq \emptyset.
\]
Let $\mathcal{A}'=\{p_1,\ldots,p_t\}$ and $\mathcal{B}=\{c_1,\ldots,c_s\}$. Then $\phi(\mathcal{A}')\prec \phi(\mathcal{A})$.
By our PET-induction hypothesis, $\pi$ has the property $\Lambda(\mathcal{A}',\mathcal{C})$.
Using Claim \ref{gene-claim1} for systems
$\mathcal{A}'$ and $\mathcal{B}$,
then for system $\mathcal{C}$ and open sets $V_0\cap \pi^{-1}(W),U_1,\ldots,U_t$,
there exist $w\in V_0\cap \pi^{-1}(W)$ and $k\in \mathbb{N}$
such that
\begin{itemize}[itemsep=4pt,parsep=2pt]
\item$ T^{c_i(k)}w\in V_0\cap \pi^{-1}(W)$ for $1\leq i \leq s$;
\item $ T^{p_m(k)}w\in U_m$ for $1\leq m\leq t$;
\item $T^{q(k)}\pi(w)\in \pi\big(V_0\cap \pi^{-1}(W)\big)\cap \bigcap_{m=1}^t\pi(U_m)\subset W$ for $q\in \mathcal{C}$.
\end{itemize}
For every $1\leq i \leq s$,
as $T^{c_i(k)}w\in V_0\cap \pi^{-1}(W)$,
there is some $w_i\in V_i$ with $\pi(T^{c_i(k)}w)=\pi(w_i)$ which implies
\begin{equation}\label{general-equal1}
\pi(w)=\pi(T^{-c_i(k)}w_i).
\end{equation}
Choose $\gamma>0$ such that
\begin{align}
\label{gene-conA1}T^{c_i(k)} B(T^{-c_i(k)}w_i,\gamma)&\subset V_i, \;\quad\;\forall\; 1\leq i\leq s, \\
\label{gene-conA2} T^{p_m(k)}B(w,\gamma)&\subset U_m,\quad \forall \;1\leq m\leq t,\\
\label{gene-conA3} T^{q(k)}B(\pi(w),\gamma)&\subset W,\;\quad\;\forall\; q\in \mathcal{C}.
\end{align}
Let $W_0=B(w,\gamma)\cap \pi^{-1}\big( B(\pi(w),\gamma)\big)\cap V_0$
and let $W_i=B(T^{-c_i(k)}w_i,\gamma)$ for $1\leq i\leq s$.
Then $W_0$ is a non-empty open set as $w\in W_0$ and $\pi(w)\in \pi(W_i)$ by (\ref{general-equal1}).
Let
\[
\mathcal{A}''=\{\partial_kp_m:1\leq m\leq t\}\quad\text{ and} \quad
\mathcal{C}'=\{\partial_kq:q\in \mathcal{C}\}.
\]
Then $\pi$ has the property $\Lambda(\mathcal{A}'',\mathcal{C}')$
as $\phi(\mathcal{A}'')=\phi(\mathcal{A}')\prec \phi(\mathcal{A})$.
Using Claim \ref{gene-claim2} for systems
$\mathcal{A}''$ and $\mathcal{B}$, then for system
$\mathcal{C}'$ and open sets $W_0,W_1,\ldots,W_s$,
there exist $z\in W_0\subset V_0$ and $l\in \mathbb{N}$ such that
\begin{align}
\label{gene-conC2} T^{\partial_kp_m(l)}z&\in W_0\subset B(w,\gamma) , &\forall\;1\leq m\leq t, \\
\label{gene-conC1}T^{c_i(l)}z&\in W_i =B(T^{-c_i(k)}w_i,\gamma),&\forall\;1\leq i\leq s, \\
\label{gene-conC3} T^{\partial_kq(l)}\pi(z)&\in \bigcap_{i=0}^s \pi(W_i)\subset \pi(W_0)\subset B(\pi(w),\gamma) ,&\forall\;q\in \mathcal{C}.
\end{align}
Recall that $\mathrm{deg}(c_i)=1$, so that $c_i(l+k)=c_i(l)+c_i(k)$.
By (\ref{gene-conA1}) and (\ref{gene-conC1}), for $1\leq i\leq s$ we have
\[
T^{c_i(l+k)}z=T^{c_i(k)}(T^{c_i(l)}z)\in T^{c_i(k)}B(T^{-c_i(k)}w_i,\gamma)\subset V_i.
\]
By (\ref{gene-conA2}) and (\ref{gene-conC2}), for $1\leq m \leq t$ we have
\[
T^{p_m(l+k)}z= T^{p_m(k)}(T^{\partial_kp_m(l)}z)\in T^{p_m(k)}B(w,\gamma)\subset U_m.
\]
By (\ref{gene-conA3}) and (\ref{gene-conC3}), for $q\in \mathcal{C}$ we have
\[
T^{q(l+k)}\pi(z)
= T^{q(k)}(T^{\partial_kq(l)}\pi(z))\in T^{q(k)}B(\pi(w),\gamma)\subset W.
\]
Now setting $n=l+k$, we get that
\begin{itemize}[itemsep=4pt,parsep=2pt]
\item $z\in V_0$;
\item$ T^{c_i(n)}z\in V_i$ for $1\leq i \leq s$;
\item $ T^{p_m(n)}z\in U_m$ for $1\leq m\leq t$;
\item $T^{q(n)}\pi(z)\in W=\bigcap_{i=0}^s\pi(V_i)\cap\bigcap_{m=1}^t\pi(U_m)$ for $q\in \mathcal{C}$.
\end{itemize}
This completes the proof.
\end{proof}
We are ready to show Theorem \ref{polynomial-TCF}.
\begin{proof}[Proof of Theorem \ref{polynomial-TCF}]
It follows from Theorems \ref{key-thm0} and \ref{polynomial-case}.
\end{proof}
We now proceed to the proof of Claims \ref{gene-claim1} and \ref{gene-claim2}.
\begin{proof}[Proof of Claim \ref{gene-claim1}]
Fix a weight vector $\vec{w}$ with $\min(\vec{w})\geq 2$;
otherwise the system $\mathcal{B}$ would be empty and there would be nothing to prove.
We prove this claim by PET-induction on the weight vector of the system $\mathcal{B}$.
Fix a non-empty system $\mathcal{B}$ such that $\mathrm{deg}(b)<\min(\vec{w})$ for all $b\in \mathcal{B}$,
and
suppose that the statement of the claim is true for all systems
$\mathcal{B}',\mathcal{A}',\mathcal{C}$ with $\phi(\mathcal{B}')\prec \phi(\mathcal{B})$ and $\phi(\mathcal{A}')=\vec{w}$.
Let $\mathcal{A}=\{p_1,\ldots,p_t\}$ be a system with weight vector $\vec{w}$,
and let $V_0,V_1,\ldots,V_t$ be open subsets of $X$
with $W:=\bigcap_{m=0}^t\pi(V_m)\neq \emptyset$.
By an argument similar to the one in the proof of Case \ref{case1},
we may assume without loss of generality that $\pi(V_m)=W$ for $0\leq m\leq t$,
and there exist $x\in X$ with $\pi(x)\in W$, $a_1,\ldots,a_d\in \mathbb{N}$ and $\delta>0$ such that
\begin{equation}\label{general-relation1}
\pi^{-1}\big(B(\pi(x),\delta)\big)\subset \bigcap_{m=0}^t\bigcup_{j=1}^dT^{a_j}V_m,
\end{equation}
and
\begin{equation}\label{general-relation2}
B(\pi(x),\delta)\subset W \cap \big(\bigcap_{j=1}^d T^{a_j}W \big).
\end{equation}
Fix a system $\mathcal{C}$ and
let $\eta=\delta/d$.
Inductively we will construct $x_1,\ldots,x_d\in X,k_1,\ldots,k_{d+1}\in \{1,\ldots,d\}$ with $k_1=1$
and $n_1,\ldots,n_d\in \mathbb{N}$
such that for every $1\leq j\leq l \leq d$,
\begin{itemize}[itemsep=4pt,parsep=2pt]
\item $x_l\in T^{a_{k_{l+1}}}V_0$;
\item $\pi(x_l)\in B(\pi(x),l\eta)$;
\item $T^{b(n_j+\ldots+n_l)}x_l\in T^{a_{k_j}}V_0$ for $b\in \mathcal{B}$;
\item $T^{p_m(n_j+\ldots+n_l)}x_l\in T^{a_{k_j}}V_m$ for $1\leq m\leq t$;
\item $T^{q(n_j+\ldots+n_l)}\pi(x_l)\in B(\pi(x),l\eta)$ for $q\in \mathcal{C}$.
\end{itemize}
Assume this has been achieved. Since $k_1,\ldots,k_{d+1}$ take values in $\{1,\ldots,d\}$,
by the pigeonhole principle we can choose $1\leq j\leq l\leq d$ with $k_j=k_{l+1}$ such that
\begin{itemize}[itemsep=4pt,parsep=2pt]
\item $x_l\in T^{a_{k_{l+1}}}V_0=T^{a_{k_j}}V_0$;
\item $T^{b(n_j+\ldots+n_l)}x_l\in T^{a_{k_j}}V_0$ for $b\in \mathcal{B}$;
\item $T^{p_m(n_j+\ldots+n_l)}x_l\in T^{a_{k_j}}V_m$ for $1\leq m\leq t$;
\item $T^{q(n_j+\ldots+n_l)}\pi(x_l)\in B(\pi(x),l\eta)
\subset B(\pi(x),\delta)\subset \bigcap_{j=1}^d T^{a_j}W $ for $q\in \mathcal{C}$ \ \ \ by (\ref{general-relation2}).
\end{itemize}
Put $n=n_j+\ldots+n_l$ and $z=T^{-a_{k_{j}}}x_{l}$, then the claim follows.
\medskip
{\bf
We now return to the inductive construction of $x_1,\ldots,x_d,k_1,\ldots,k_{d+1}$ and $n_1,\ldots,n_d$.
}
\medskip
Let $b_1\in \mathcal{B}$ be an element of minimal weight in $\mathcal{B}$.
\medskip
\noindent {\bf Step 1:}
Let $I_1=\pi^{-1}\big(B(\pi(x),\eta)\big)$.
Then $I_1$ is an open subset of $X$ and
\[
\pi(x)\in \underbrace{\bigcap_{m=0}^t\pi( I_1\cap T^{a_1} V_m) }_{=:S_1}\subset \pi(I_1)=
B(\pi(x),\eta).
\]
Let
\begin{align*}
\mathcal{B}_1 &=\{b-b_1:b\in \mathcal{B}\}, \\
\mathcal{A}_1 & =\{p_m-b_1:1\leq m\leq t\}, \\
\mathcal{C}_1 &=\{-b_1,q-b_1:q\in \mathcal{C}\}.
\end{align*}
Then $\phi(\mathcal{B}_1)\prec \phi(\mathcal{B})$ by Lemma \ref{PET-induction},
and $\phi(\mathcal{A}_1)=\phi(\mathcal{A})$ as $\mathrm{deg}(b_1)<\mathrm{deg}(p_m)$ for every $1\leq m\leq t$.
By our PET-induction hypothesis,
the conclusion of Claim \ref{gene-claim1} holds for systems $\mathcal{A}_1$ and $\mathcal{B}_1$.
Then for system $\mathcal{C}_1$ and open sets $I_1\cap T^{a_1} V_0,I_1\cap T^{a_1} V_1,\ldots,I_1\cap T^{a_1} V_t$,
there exist $y_1\in I_1\cap T^{a_1}V_0$ and $n_1\in \mathbb{N}$
such that
\begin{enumerate}[itemsep=4pt,parsep=2pt,label=(\arabic*)]
\item[$(1b)$] $T^{b(n_1)-b_1 (n_1)}y_1\in I_1\cap T^{a_1} V_0$ for $b\in \mathcal{B}$;
\item[$(1a)$] $T^{p_m(n_1)-b_1(n_1)}y_1\in I_1\cap T^{a_1} V_m$ for $1\leq m\leq t$;
\item[$(1c)$] $T^{-b_1 (n_1)}\pi(y_1),\; T^{q(n_1)-b_1 (n_1)}\pi(y_1)\in S_1$ for $q\in \mathcal{C}$.
\end{enumerate}
Set $x_1=T^{-b_1 (n_1)}y_1$.
By $(1b)$, for $b\in \mathcal{B}$ we have
\[
T^{b(n_1)}x_{1}\in T^{a_1}V_0.
\]
By $(1a)$, for $1\leq m\leq t$ we have
\[
T^{p_m(n_1)}x_{1}\in T^{a_1}V_m.
\]
By $(1c)$, for $q\in \mathcal{C}$ we have
\begin{equation}\label{general0000}
\pi(x_{1}),\; T^{q(n_{1})}\pi(x_{1})\in S_1\subset B(\pi(x),\eta) \subset B(\pi(x),\delta).
\end{equation}
There is some $k_2\in \{1,\ldots,d\}$ with $x_1\in T^{a_{k_2}}V_0$ by (\ref{general-relation1}) and (\ref{general0000}).
\medskip
\noindent {\bf Step $l$:}
Let $l\geq2$ be an integer and assume that we have already chosen
$x_1,\ldots,x_{l-1}\in X$, $k_1,\ldots,k_{l-1}\in \{1,\ldots,d\}$ and $n_1,\ldots,n_{l-1}\in \mathbb{N}$ such that for $1\leq j\leq l-1$,
\begin{itemize}[itemsep=4pt,parsep=2pt]
\item $\pi(x_{l-1})\in B(\pi(x),(l-1)\eta)$;
\item $T^{b(n_j+\ldots+n_{l-1})}x_{l-1}\in T^{a_{k_{j}}}V_0$ for $b\in \mathcal{B}$;
\item $T^{p_m(n_j+\ldots+n_{l-1})}x_{l-1}\in T^{a_{k_{j}}}V_m$ for $1\leq m\leq t$;
\item $T^{q(n_j+\ldots+n_{l-1})}\pi(x_{l-1})\in B(\pi(x),(l-1)\eta)$ for $q\in \mathcal{C}$.
\end{itemize}
As $\pi(x_{l-1})\in B(\pi(x),(l-1)\eta)\subset B(\pi(x),d\eta)=B(\pi(x),\delta)$,
by (\ref{general-relation1})
there is some $k_l\in \{1,\ldots,d\}$ such that $x_{l-1}\in T^{a_{k_l}}V_0$.
Choose $\eta_l>0$ with $\eta_l<\eta$ such that for $1\leq j\leq l-1$,
\begin{align}
\label{generalhhh1} B( x_{l-1},\eta_l)&\subset T^{a_{k_l}}V_0,& \\
\label{generalhhh2} T^{b(n_j+\ldots+n_{l-1})}B(x_{l-1},\eta_l)&\subset
T^{a_{k_{j}}}V_0,&\forall\; b\in \mathcal{B}, \\
\label{generalhhh3} T^{p_m(n_j+\ldots+n_{l-1})}B(x_{l-1},\eta_l)&\subset
T^{a_{k_{j}}}V_m, &\forall\; 1\leq m\leq t,\\
\label{generalhhh4} T^{q(n_j+\ldots+n_{l-1})}B(\pi(x_{l-1}),\eta_l)&\subset
B(\pi(x),(l-1)\eta), &\forall\;q\in \mathcal{C}.
\end{align}
\medskip
Let $I_l=\pi^{-1}\big(B(\pi(x_{l-1}),\eta_l)\big)$.
By (\ref{general-relation2}),
we have $ \pi(x_{l-1})\in B(\pi(x),\delta)\subset \bigcap_{j=1}^dT^{a_j}W$
and
\[
\pi(x_{l-1})\in \underbrace{\bigcap_{m=1}^t\pi( I_l\cap T^{a_{k_l}}V_m)\cap
\pi\big(B(x_{l-1},\eta_l)\big) }_{=:S_l} \subset \pi(I_l)=
B(\pi(x_{l-1}),\eta_l)\subset B(\pi(x_{l-1}),\eta).
\]
Let
\begin{align*}
\mathcal{B}_l &=\{b-b_1,\;\partial_{n_j+\ldots+n_{l-1}}b-b_1:b\in \mathcal{B},1\leq j\leq l-1\}, \\
\mathcal{A}_l & = \{p_m-b_1,\;\partial_{n_j+\ldots+n_{l-1}}p_m-b_1:1\leq m\leq t,1\leq j\leq l-1\} ,\\
\mathcal{C}_l &=\{-b_1,\;q-b_1,\;\partial_{n_j+\ldots+n_{l-1}}q-b_1:q\in \mathcal{C},1\leq j\leq l-1\}.
\end{align*}
Then $\phi(\mathcal{B}_l)\prec \phi(\mathcal{B})$ by Lemma \ref{PET-induction},
and $\phi(\mathcal{A}_l)=\phi(\mathcal{A}_1)=\phi(\mathcal{A})$.
By our PET-induction hypothesis,
the conclusion of Claim \ref{gene-claim1} holds for systems $\mathcal{A}_l,\mathcal{B}_l$.
Notice that for any non-zero integer $a$,
$\partial_{a}p_m\neq p_m$ for $1\leq m\leq t$.
Hence for system $\mathcal{C}_l$ and open sets $B(x_{l-1},\eta_l),I_l\cap T^{a_{k_l}}V_1,\ldots,I_l\cap T^{a_{k_l}}V_t,\underbrace{B(x_{l-1},\eta_l),\ldots,B(x_{l-1},\eta_l)}_{t(l-1) \; \mathrm{times}}$,
there exist $y_l\in B(x_{l-1},\eta_l)$ and $n_l\in \mathbb{N}$
such that for $1\leq j\leq l-1$,
\begin{enumerate}[itemsep=4pt,parsep=2pt,label=(\arabic*)]
\item[$(lb)_{\; \;}$]$ T^{b(n_l)-b_1(n_l)}y_l,\;
T^{\partial_{n_j+\ldots+n_{l-1}}b(n_l)-b_1(n_l)}y_l\in B(x_{l-1},\eta_l)$ for $b\in \mathcal{B}$;
\item[$(la)_1$] $ T^{p_m(n_l)-b_1(n_l)}y_l\in I_l\cap T^{a_{k_l}}V_m$ for $1\leq m \leq t$;
\item[$(la)_2$] $ T^{\partial_{n_j+\ldots+n_{l-1}}p_m(n_l)-b_1(n_l)}y_l\in B(x_{l-1},\eta_l)$
for $1\leq m\leq t$;
\item[$(lc)_1$] $ T^{-b_1(n_l)}\pi(y_l),\; T^{q(n_l)-b_1(n_l)}\pi(y_l)\in S_l$ for $q\in \mathcal{C}$;
\item[$(lc)_2$] $ T^{\partial_{n_j+\ldots+n_{l-1}}q(n_l)-b_1(n_l)}\pi(y_l)\in S_l\subset B(\pi(x_{l-1}),\eta_l)$
for $q\in \mathcal{C}$.
\end{enumerate}
Set $x_l=T^{-b_1 (n_l)}y_l$.
By $(lb)$ and (\ref{generalhhh1}), for $b\in \mathcal{B}$ we have
\[
T^{b (n_l)}x_l\in B(x_{l-1},\eta_l)\subset T^{a_{k_l}}V_0.
\]
By $(lb)$ and (\ref{generalhhh2}), for $b\in \mathcal{B},1\leq j\leq l-1$ we have
\begin{align*}
T^{b(n_j+\ldots+n_{l-1}+n_l)}x_l=& T^{b(n_j+\ldots+n_{l-1})}(T^{\partial_{n_j+\ldots+n_{l-1}}b(n_l)}x_l)\ \\
& \in T^{b(n_j+\ldots+n_{l-1})} B(x_{l-1},\eta_l)\subset T^{a_{k_{j}}}V_0.
\end{align*}
By $(la)_1$, for $1\leq m\leq t$ we have
\[
T^{p_m(n_l)}x_l\in T^{a_{k_l}}V_m.
\]
By $(la)_2$ and (\ref{generalhhh3}), for $1\leq m\leq t,1\leq j\leq l-1$ we have
\begin{align*}
T^{p_m(n_j+\ldots+n_{l-1}+n_l)}x_l=& T^{p_m(n_j+\ldots+n_{l-1})}(T^{\partial_{n_j+\ldots+n_{l-1}}p_m(n_l)}x_l)\ \\
& \in T^{p_m(n_j+\ldots+n_{l-1})} B(x_{l-1},\eta_l)\subset T^{a_{k_{j}}}V_m.
\end{align*}
By $(lc)_1$, for $q\in \mathcal{C}$ we have
\begin{equation}\label{general1212}
\pi(x_l), \;T^{q(n_l)}\pi(x_l)\in S_l\subset B(\pi(x_{l-1}),\eta)
\subset B(\pi(x),l\eta).
\end{equation}
By $(lc)_2$ and (\ref{generalhhh4}), for $q\in \mathcal{C},1\leq j\leq l-1$ we have
\begin{align*}
T^{q(n_j+\ldots+n_{l-1}+n_l)}\pi(x_l)=&T^{q(n_j+\ldots+n_{l-1})}(T^{\partial_{n_j+\ldots+n_{l-1}}q(n_l)}\pi(x_l))\ \\
& \in T^{q(n_j+\ldots+n_{l-1})} B(\pi(x_{l-1}),\eta_l)\subset B(\pi(x),l\eta).
\end{align*}
There is some $k_{l+1}\in \{1,\ldots,d\}$ with $x_l\in T^{a_{k_{l+1}}}V_0$ by (\ref{general-relation1})
and (\ref{general1212}).
We finish the construction by induction.
\end{proof}
\medskip
Using Claim \ref{gene-claim1}, we are able to give a proof of Claim \ref{gene-claim2}.
\begin{proof}[Proof of Claim \ref{gene-claim2}]
Fix a weight vector
$\vec{w}=\big((\phi(w_1),w_1),\ldots,(\phi(w_k),w_k)\big)$.
Let $\mathcal{A}$ be a system with weight vector $\vec{w}$,
and let $c_1,\ldots,c_s\in \mathcal{P}^*$ be distinct linear polynomials such that $c_i\notin \mathcal{A}$ for $1\leq i \leq s$.
Assume that $\pi$ has the property $\Lambda(\mathcal{A},\mathcal{C})$ for any system $\mathcal{C}$.
When $w_k=1$, that is, when every polynomial in $\mathcal{A}$
is linear, the claim follows from Lemma \ref{linear-case-with-constraint}.
Now assume $w_k\geq 2$.
Let $p\in \mathcal{A}$ with $\mathrm{deg}(p)=w_k$ and let
\[
\mathcal{A}_p=\{a\in \mathcal{A}:a,p\; \text{are equivalent}\} \quad
\mathrm{and}\quad
\mathcal{A}_r =\mathcal{A}\backslash\mathcal{A}_p.
\]
Fix a system $\mathcal{C}$ and let
\begin{align*}
\mathcal{ B }\;& =\{a-p:a\in \mathcal{A}_{p}\}, \\
\mathcal{A}' &=\{c_i-p,\;-p,\;a-p:1\leq i\leq s,\;a\in \mathcal{A}_r\}, \\
\mathcal{C}' &= \{q-p:q\in \mathcal{C}\}.
\end{align*}
It is easy to see that $\phi(\mathcal{A}')=(\phi(w_k),w_k)\prec \phi(\mathcal{A})$,
thus $\pi$ has the property $\Lambda(\mathcal{A}',\mathcal{C}')$.
For any $b\in \mathcal{B}$, there is some $a\in \mathcal{A}_p$ such that
$\mathrm{deg}(b)=\mathrm{deg}(a-p)<\mathrm{deg}(p)=w_k=\min( \phi(\mathcal{A}'))$.
Using Claim \ref{gene-claim1} for systems
$\mathcal{A}'$ and $\mathcal{B}$,
then for system $\mathcal{C}'$
and open sets
$V_0,V_1,\ldots,V_s,\underbrace{V_0,\ldots,V_0}_{(|\mathcal{A}'|-s )\;\mathrm{times}}$,
there exist $w\in V_0$ and $n\in \mathbb{N}$
such that
\begin{itemize}[itemsep=4pt,parsep=2pt]
\item $ T^{a(n)-p(n)}w\in V_0$ for $a\in \mathcal{A}_{p}$;
\item $ T^{c_i(n)-p(n)}w\in V_i$ for $1\leq i\leq s$;
\item $ T^{-p(n)}w,\; T^{a(n)-p(n)}w\in V_0$ for $a\in \mathcal{A}_r$;
\item $T^{q(n)-p(n)}\pi(w)\in \bigcap_{i=0}^s\pi(V_i)$ for $q\in \mathcal{C}$.
\end{itemize}
Now put $z=T^{-p(n)}w\in V_0$, then we have
\begin{itemize}[itemsep=4pt,parsep=2pt]
\item $ T^{a(n)}z\in V_0$ for $a\in \mathcal{A}_p\cup \mathcal{A}_{r}= \mathcal{A}$;
\item $ T^{c_i(n)}z\in V_i$ for $1\leq i\leq s$;
\item $T^{q(n)}\pi(z)\in \bigcap_{i=0}^s\pi(V_i)$ for $q\in \mathcal{C}$.
\end{itemize}
This completes the proof.
\end{proof}
\bibliographystyle{amsplain}
\section{Introduction}
\label{sec:intro}
A major open question in the study of exoplanets is the origin of their apparent
obliquity properties---the distribution of the angle $\lambda$ between the
stellar spin and the planet's orbital angular momentum vectors as projected on
the sky (see, e.g., the review by \citealt{WinnFabrycky15}).
Measurements of the Rossiter--McLaughlin effect in hot Jupiters (HJs, defined
here as planets with masses $M_\mathrm{p}\gtrsim0.3\,M_\mathrm{J}$ that have
orbital periods $P_\mathrm{orb} \lesssim 10\,$days) have indicated that
$\lambda$ spans the entire range from~$0^\circ$ to~$180^\circ$, in stark
contrast with the situation in the solar system (where the angle between the
planets' total angular momentum vector and that of the Sun is only
$\sim$$6^\circ$).
In addition, there is a marked difference in the distribution of $\lambda$
between G~stars, where $\sim$$1/2$ of systems are well aligned
($\lambda < 20^\circ$) and the rest are spread out roughly uniformly over the
remainder of the $\lambda$ range, and F~stars of effective temperature
$T_\mathrm{eff} \gtrsim 6250\,$K, which exhibit only a weak excess of
well-aligned systems. There is, however, also evidence for a dependence of the
obliquity distribution on the properties of the planets and not just on those of
the host star; in particular, only planets with $M_\mathrm{p} < 3\,M_\mathrm{J}$
have apparent retrograde orbits ($\lambda > 90^\circ$).
Various explanations have been proposed to account for the broad range of
observed obliquities, but the inferred dependences on $T_\mathrm{eff}$ and
$M_\mathrm{p}$ provide strong constraints on a viable model. In one scenario
\cite[][]{Winn+10, Albrecht+12}, HJs arrive in the vicinity of the host star on
a misaligned orbit and subsequently act to realign the host through a tidal
interaction, which is more effective in cool stars than in hot ones.
In this picture, HJs form at large radii and either migrate inward through their
natal disk while maintaining nearly circular orbits or are placed on a
high-eccentricity orbit after the gaseous disk dissipates---which enables them
to approach the center and become tidally trapped by the star (with their orbits
getting circularized by tidal friction; e.g., \citealt{FordRasio06}).\footnote{
The possibility of HJs forming at their observed locations has also been
considered in the literature \citep[e.g.,][]{Boley+16,Batygin+16}, but the
likelihood of this scenario is still being debated.}
The processes that initiate high-eccentricity migration (HEM), which can be
either planet--planet scattering \citep[e.g.,][]{Chatterjee+08, JuricTremaine08,
BeaugeNesvorny12} or secular interactions that involve a stellar binary
companion or one or more planetary companions (such as Kozai--Lidov
oscillations---e.g., \citealt{WuMurray03, FabryckyTremaine07, Naoz+11, Petrovich15b}---and
secular chaos---e.g., \citealt{WuLithwick11, LithwickWu14, Petrovich15a,
Hamers+17}), all give rise to HJs with a distribution of misaligned orbits.
In the case of classical disk migration, the observed obliquities can be
attributed to a primordial misalignment of the natal disk that occurred during
its initial assembly from a turbulent interstellar gas \citep[e.g.,][]{Bate+10,
Fielding+15} or as a result of magnetic and/or gravitational torques induced,
respectively, by a tilted stellar dipolar field and a misaligned companion
\citep[e.g.,][]{Lai+11, Batygin12, BatyginAdams13, Lai14, SpaldingBatygin14}.
The tidal realignment hypothesis that underlies the above modeling framework
was challenged by the results of \citet{Mazeh+15}, who examined the rotational
photometric modulations of a large number of {\it Kepler}\/ sources.
Their analysis indicated that the common occurrence of aligned systems around
cool stars characterizes the general population of planets and not just HJs,
and, moreover, that this property extends to orbital periods as long as
$\sim$$50\,$days, about an order of magnitude larger than the maximum value of
$P_\mathrm{orb}$ for which tidal interaction with the star remains important.
To reconcile this finding with the above scenario, \citet{MatsakosKonigl15}
appealed to the results of planet formation and evolution models, which predict
that giant planets form efficiently in protoplanetary disks and that most of
them migrate rapidly to the disk's inner edge, where, if the arriving planet's
mass is not too high ($\lesssim 1\,M_\mathrm{J}$), it could remain stranded near
that radius for up to $\sim$$1\,$Gyr---until it gets tidally ingested by the
host star.
They proposed that the ingestion of a stranded HJ (SHJ)---which is accompanied
by the transfer of its orbital angular momentum to the star---is the dominant
spin-realignment mechanism.
In this picture, the dichotomy in the obliquity properties between cool and hot
stars is a direct consequence of the higher efficiency of magnetic braking and
lower moment of inertia of the former in comparison with the latter.
By applying a simple dynamical model to the observed HJ distributions in~G and
F~stars, \citet{MatsakosKonigl15} inferred that $\sim$50\% of planetary systems
harbor an SHJ with a typical mass of $\sim$$0.6\,M_\mathrm{J}$.
In this picture, the obliquity properties of currently observed HJs---and the
fact that they are consistent with those of lower-mass and more distant
planets---are most naturally explained if most of the planets in a given
system---including any SHJ that may have been present---are formed in, and
migrate along the plane of, a primordially misaligned disk.\footnote{
This explanation does not necessarily imply that all planets that reached the
vicinity of the host star must have moved in by classical migration, although
SHJs evidently arrived in this way.
In fact, \citet{MatsakosKonigl16} inferred that most of the planets that
delineate the boundary of the so-called sub-Jovian desert in the
orbital-period--planet-mass plane got in by a secular HEM process (one that,
however, did not give rise to high orbital inclinations relative to the natal
disk plane).}
This interpretation is compatible with the properties of systems like Kepler-56,
in which two close-in planets have $\lambda \approx 45^\circ$ and yet are nearly
coplanar \citep{Huber+13}, and 55~Cnc, a coplanar five-planet system with
$\lambda \approx 72^\circ$ \citep[e.g.,][]{Kaib+11, BourrierHebrard14}.\footnote{
The two-planet system KOI-89 \citep{Ahlers+15} may be yet another example.}
It is also consistent with the apparent lack of a correlation between the
obliquity properties of observed HJs and the presence of a massive companion
\citep[e.g.,][]{Knutson+14, Ngo+15, Piskorz+15}.
In this paper we explore a variant of the primordial disk misalignment model
first proposed by \citet{Batygin12}, in which, instead of the tilting of the
entire disk by a distant ($\sim$500\,au) stellar companion on an inclined orbit,
we consider the gravitational torque exerted by a much closer ($\sim$5\,au)
\emph{planetary} companion on such an orbit, which acts to misalign \emph{only
the inner region} of the protoplanetary disk.
This model is motivated by the inferences from radial velocity surveys and
adaptive-optics imaging data (\citealt{Bryan+16}; see also \citealt{Knutson+14})
that $\sim$70\% of planetary systems harboring a transiting HJ have a companion
with mass in the range 1--13\,$M_\mathrm{J}$ and semimajor axis in the range
$1$--$20$\,au, and that $\sim$50\% of systems harboring one or two planets
detected by the radial velocity method have a companion with mass in the range
$1$--$20\,M_\mathrm{J}$ and semimajor axis in the range $5$--$20$\,au.
Further motivation is provided by the work of \citet{LiWinn16}, who re-examined
the photometric data analyzed by \citet{Mazeh+15} and found indications that the
good-alignment property of planets around cool stars does not hold for large
orbital periods, with the obliquities of planets with
$P_\mathrm{orb} \gtrsim 10^2\,$days appearing to tend toward a random
distribution.
One possible origin for a giant planet on an inclined orbit with a semimajor
axis $a$ of a few au is planet--planet scattering in the natal disk.
Current theories suggest that giant planets may form in tightly packed
configurations that can become dynamically unstable and undergo orbit crossing
(see, e.g., \citealt{Davies+14} for a review).
The instabilities start to develop before the gaseous disk component dissipates
\citep[e.g.,][]{Matsumura+10, Marzari+10}, and it has been argued
\citep{Chatterjee+08} that the planet--planet scattering process may, in fact,
peak before the disk is fully depleted of gas (see also \citealt{Lega+13}).
A close encounter between two giant planets is likely to result in a collision
if the ratio $(M_\mathrm{p}/M_*)(a/R_\mathrm{p})$ (the Safronov number) is $< 1$
(where $M_*$ is the stellar mass and $R_\mathrm{p}$ is the planet's radius), and
in a scattering if this ratio is $> 1$ \citep[e.g.,][]{FordRasio08}.
The scattering efficiency is thus maximized when a giant planet on a
comparatively wide orbit is involved \citep[cf.][]{Petrovich+14}.
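For a rough sense of scale (adopting illustrative Jovian parameters rather than values for
any specific system), a Jupiter-mass planet around a solar-mass star has
$M_\mathrm{p}/M_* \approx 10^{-3}$ and $R_\mathrm{p} \approx 7\times10^{9}\,$cm, so that at
$a = 5\,$au $\approx 7.5\times10^{13}\,$cm the Safronov number is
$\approx 10^{-3}\times(7.5\times10^{13}/7\times10^{9})\approx 10$, placing such encounters
well within the scattering regime.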
High inclinations might also be induced by resonant excitation in giant planets
that become trapped in a mean-motion resonance through classical (Type II) disk
migration \citep{ThommesLissauer03, LibertTsiganis09}, and this process could,
moreover, provide an alternative pathway to planet--planet scattering
\citep{LibertTsiganis11}.
In these scenarios, the other giant planets that were originally present in the
disk can be assumed to have either been ejected from the system in the course of
their interaction with the remaining misaligned planet or else reached the star
at some later time through disk migration.
As we show in this paper, a planet on an inclined orbit can have a significant
effect on the orientation of the disk region interior to its orbital radius when
the mass of that region decreases to the point where the inner disk's angular
momentum becomes comparable to that of the planet.
For typical mass depletion rates in protoplanetary disks \citep[e.g.,][]
{BatyginAdams13}, this can be expected to happen when the system's age is
$\sim$$10^6$--$10^7\,$yr, which is comparable to the estimated formation time of
Jupiter-mass planets at $\gtrsim 5\,$au.
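As a rough, purely illustrative estimate (assuming a surface density profile
$\Sigma \propto r^{-1}$, which is not required by our model), an inner disk of mass
$M_\mathrm{d}$ extending from $r_\mathrm{in} \ll a$ out to the planet's orbital radius has
angular momentum
\[
L_\mathrm{d} = \int_{r_\mathrm{in}}^{a} \sqrt{GM_*r}\,\Sigma(r)\,2\pi r\,dr
\simeq \tfrac{2}{3}\,M_\mathrm{d}\sqrt{GM_*a}\,,
\]
which becomes comparable to the planet's orbital angular momentum
$M_\mathrm{p}\sqrt{GM_*a}$ when $M_\mathrm{d}$ drops to $\sim M_\mathrm{p}$.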
In the proposed scenario, a planet of mass $M_\mathrm{p} \gtrsim M_\mathrm{J}$
is placed on a high-inclination orbit at a time $t_0 \gtrsim 1\,$Myr that, on
the one hand, is late enough for the disk mass interior to the planet's location
to have decreased to a comparable value, but that, on the other hand, is early
enough for the inner disk to retain sufficient mass after becoming misaligned to
enforce the orbital misalignment of existing planets and/or form new planets in
its reoriented orbital plane (including any Jupiter-mass planets destined to
become an HJ or an SHJ).
The dynamical model adopted in this paper is informed by the
smooth-particle-hydrodynamics simulations carried out by
\citet{Xiang-GruessPapaloizou13}.
They considered the interaction between a massive ($1$--$6\,M_\mathrm{J}$)
planet that is placed on an inclined, circular orbit of radius $5$\,au and a
low-mass ($0.01\,M_*$) protoplanetary disk that extends to $25$\,au.
A key finding of these simulations was that the disk develops a warped
structure, with the regions interior and exterior to the planet's radial
location behaving as separate, rigid disks with distinct inclinations; in
particular, the inner disk was found to exhibit substantial misalignment with
respect to its initial direction when the planet's mass was large enough and its
initial inclination was intermediate between the limits of $0^\circ$ and
$90^\circ$ at which no torque is exerted on the disk.
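(In quadrupole-order secular theory, the mutual gravitational torque between an inclined
orbit and a ring scales as $\sin\psi\cos\psi = \tfrac{1}{2}\sin2\psi$, which vanishes at
both of these limits and peaks at $\psi = 45^\circ$, providing a simple interpretation of
this trend.)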
Motivated by these results, we construct an analytic model for the gravitational
interaction between the planet and the two separate parts of the disk.
The general effect of an interaction of this type between a planet on an
inclined orbit and a rigid disk is to induce a precession of the planet's orbit
about the total angular momentum vector.
In contrast with \citet{Xiang-GruessPapaloizou13}, whose simulations only
extended over a fraction of a precession period, we consider the long-term
evolution of such systems.
In particular, we use our analytic model to study how the ongoing depletion of
the disk's mass affects the orbital orientations of the planet and of the disk's
two parts.
We describe the model in Section~\ref{sec:model} and present our calculations in
Section~\ref{sec:results}.
We discuss the implications of these results to planet obliquity measurements
and to the alignment properties of debris disks in Section~\ref{sec:discussion},
and summarize in Section~\ref{sec:conclusion}.
\section{Modeling approach}
\label{sec:model}
\subsection{Assumptions}
\label{subsec:assumptions}
\begin{figure}
\includegraphics[width=\columnwidth]{initial_fig1.eps}
\caption{
Schematic representation (not to scale) of the initial configuration of our
model.
See text for details.
\label{fig:initial}}
\end{figure}
The initial configuration that we adopt is sketched in Figure~\ref{fig:initial}.
We consider a young star (subscript s) that is surrounded by a Keplerian
accretion disk, and a Jupiter-mass planet (subscript p) on a circular orbit.
The disk consists of two parts: an inner disk (subscript d) that extends between
an inner radius $r_\mathrm{d,in}$ and an outer radius $r_\mathrm{d,out}$, and an
outer disk (subscript h) that extends between $r_\mathrm{h,in}$ and
$r_\mathrm{h,out}$; they are separated by a narrow gap that is centered on the
planet's orbital radius $a$.
The two parts of the disk are initially coplanar, with their normals aligned
with the stellar angular momentum vector $\boldsymbol{S}$, whereas the planet's
orbital angular momentum vector $\boldsymbol{P}$ is initially inclined at an
angle $\psi_\mathrm{p0}$ with respect to $\boldsymbol{S}$ (where the subscript
$0$ denotes the time $t = t_0$ at which the planet is placed on the inclined
orbit).
We assume that, during the subsequent evolution, each part of the disk maintains
a flat geometry and precesses as a rigid body.
The rigidity approximation is commonly adopted in this context and is attributed
to efficient communication across the disk through the propagation of bending
waves or the action of a viscous stress (e.g., \citealt{Larwood+96}; see also
\citealt{Lai14} and references therein).\footnote{
One should, however, bear in mind that real accretion disks are inherently fluid
in nature and therefore cannot strictly obey the rigid-body approximation; see,
e.g., \citet{Rawiraswattana+16}.}
Based on the simulation results presented in \citet{Xiang-GruessPapaloizou13},
we conjecture that this communication is severed at the location of the planet.
This outcome is evidently the result of the planet's opening up a gap in the
disk, although it appears that the gap need not be fully evacuated for this
process to be effective.
In fact, the most strongly warped simulated disk configurations correspond to
comparatively high initial inclination angles, for which the planet spends a
relatively small fraction of the orbital time inside the disk, resulting in gaps
that are shallower and narrower than in the fully embedded case.
Our calculations indicate that, during the disk's subsequent evolution, its
inner and outer parts may actually detach as a result of the precessional
oscillation of the inner disk.
This oscillation is particularly strong in the case of highly mass-depleted
disks on which we focus attention in this paper: in the example shown in
Figure~\ref{fig:all-m} below, the initial amplitude of this oscillation is
$\sim$$40^\circ$.
The planet's orbital inclination is subject to damping by dynamical friction
\citep{Xiang-GruessPapaloizou13}, although the damping rate is likely low for
the high values of $\psi_\mathrm{p0}$ that are of particular interest to us
\citep{Bitsch+13}.
Furthermore, in cases where the precessional oscillation of the inner disk
causes the disk to split at the orbital radius of the planet, one can plausibly
expect the local gas density to become too low for dynamical friction to
continue to play a significant role on timescales longer than the initial
oscillation period ($\sim$$10^4$\,yr for the example shown in
Figure~\ref{fig:all-m}).
In light of these considerations, and in the interest of simplicity, we do not
include the effects of dynamical friction in any of our presented models.
As a further simplification, we assume that the planet's orbit remains circular.
A planet ejected from the disk by either of the two mechanisms mentioned in
Section~\ref{sec:intro} may well have a nonnegligible initial eccentricity.
However, the simulations performed by \citet{Bitsch+13} indicate that the
dynamical friction process damps eccentricities much faster than inclinations,
so that the orbit can potentially be circularized on a timescale that is shorter
than the precession time (i.e., before the two parts of the disk can become
fully separated).
On the other hand, even if the initial eccentricity is zero, it may be pumped up
by the planet's gravitational interaction with the outer disk if
$\psi_\mathrm{p0}$ is high enough ($\gtrsim 20^\circ$;
\citealt{Teyssandier+13}).
This is essentially the Kozai-Lidov effect, wherein the eccentricity undergoes
periodic oscillations in antiphase with the orbital inclination
\citep{TerquemAjmia10}.
These oscillations were noticed in the numerical simulations of
\citet{Xiang-GruessPapaloizou13} and \citet{Bitsch+13}.
Their period can be approximated by $\tau_\mathrm{KL} \sim (r_\mathrm{h,out}/
r_\mathrm{h,in})^2 (2\pi/|\Omega_\mathrm{ph}|)$ \citep{TerquemAjmia10}, where we
used the expression for the precession frequency $\Omega_\mathrm{ph}$
(Equation~(\ref{eq:omega_ph})) that corresponds to the torque exerted by the
outer disk on the misaligned planet.
For the parameters of the representative mass-depleted disk model shown in
Figure~\ref{fig:all-m}, $\tau_\mathrm{KL} \sim 10^6$\,yr.
This time is longer by a factor of $\sim$$10^2$ than the initial precession
period of the inner disk in this example, implying that the Kozai-Lidov process
will have little effect on the high-amplitude oscillations of $\psi_\mathrm{p}$.
Kozai-Lidov oscillations might, however, modify the details of the long-term
behavior of the inner disk, since $\tau_\mathrm{KL}$ is comparable to the
mass-depletion time $\tau$ (Equation~(\ref{eq:deplete})) that underlies the
secular evolution of the system.
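The numbers quoted above combine consistently: with
$r_\mathrm{h,out}/r_\mathrm{h,in} = 50\,\mathrm{au}/5\,\mathrm{au} = 10$ and an
initial precession period $2\pi/|\Omega_\mathrm{ph}|$ of order the
$\sim$$10^4$\,yr oscillation period cited above for this model,
\[
\tau_\mathrm{KL} \sim 10^2 \times 10^4\,\mathrm{yr} \sim 10^6\,\mathrm{yr}\,.
\]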
Our model takes into account the tidal interaction of the spinning star with the
inner and outer disks and with the planet, which was not considered in the
aforementioned simulations.
The inclusion of this interaction is motivated by the finding
\citep{BatyginAdams13, Lai14, SpaldingBatygin14} that an evolving protoplanetary
disk with a binary companion on an inclined orbit can experience a resonance
between the disk precession frequency (driven by the companion) and the stellar
precession frequency (driven by the disk), and that this resonance crossing can
generate a strong misalignment between the angular momentum vectors of the disk
and the star.
As it turns out (see Section~\ref{sec:results}), in the case that we
consider---in which the companion is a Jupiter-mass planet with an orbital
radius of a few au rather than a solar-mass star at a distance of a few hundred
au---this resonance is not encountered.
We also show that, even in the case of a binary companion, the misalignment
effect associated with the resonance crossing is weaker than that inferred in
the above works when one also takes into account the torque that the \emph{star}
exerts on the inner disk (see Appendix~\ref{app:resonance}).
\subsection{Equations}
We model the dynamics of the system by following the temporal evolution of the
angular momenta ($\boldsymbol{S}$, $\boldsymbol{D}$, $\boldsymbol{P}$, and
$\boldsymbol{H}$) of the four constituents (the star, the inner disk, the
planet, and the outer disk, respectively) due to their mutual gravitational
torques.
Given that the orbital period of the planet is much shorter than the
characteristic precession time scales of the system, we approximate the planet
as a ring of uniform density, with a total mass equal to that of the planet and
a radius equal to its semimajor axis.
The evolution of the angular momentum $\boldsymbol L_k$ of an object $k$ under
the influence of a torque $\boldsymbol T_{ik}$ exerted by an object $i$ is given
by $d\boldsymbol L_k/dt = \boldsymbol T_{ik}$.
The set of equations that describes the temporal evolution of the four angular
momenta is thus
\begin{equation}
\frac{d\boldsymbol S}{dt} = \boldsymbol T_\mathrm{ds}
+ \boldsymbol T_\mathrm{ps} + \boldsymbol T_\mathrm{hs}\,,
\end{equation}
\begin{equation}
\frac{d\boldsymbol D}{dt} = \boldsymbol T_\mathrm{sd}
+ \boldsymbol T_\mathrm{pd} + \boldsymbol T_\mathrm{hd}\,,
\end{equation}
\begin{equation}
\frac{d\boldsymbol P}{dt} = \boldsymbol T_\mathrm{sp}
+ \boldsymbol T_\mathrm{dp} + \boldsymbol T_\mathrm{hp}\,,
\end{equation}
\begin{equation}
\frac{d\boldsymbol H}{dt} = \boldsymbol T_\mathrm{sh}
+ \boldsymbol T_\mathrm{dh} + \boldsymbol T_\mathrm{ph}\,,
\end{equation}
where $\boldsymbol T_{ik} = -\boldsymbol T_{ki}$.
The above equations can also be expressed in terms of the precession frequencies
$\Omega_{ik}$:
\begin{equation}
\frac{d\boldsymbol L_k}{dt}
= \sum_i\boldsymbol T_{ik}
= \sum_i\Omega_{ik}\frac{\boldsymbol L_i\times\boldsymbol L_k}{J_{ik}}\,,
\label{eq:precession}
\end{equation}
where $J_{ik} = |\boldsymbol L_i + \boldsymbol L_k|
= (L_i^2 + L_k^2 + 2L_iL_k\cos{\theta_{ik}})^{1/2}$ and
$\Omega_{ik} = \Omega_{ki}$.
In Appendix~\ref{app:torques} we derive analytic expressions for the torques
$\boldsymbol T_{ik}$ and the corresponding precession frequencies $\Omega_{ik}$.
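To make the integration procedure concrete, the following minimal Python
sketch (illustrative only; it is not the code used to produce the figures in
this paper) advances the four angular momentum vectors according to
Equation~(\ref{eq:precession}). The constant returned by \texttt{omega} is a
placeholder that must be replaced by the expressions for $\Omega_{ik}$ derived
in Appendix~\ref{app:torques}; the initial magnitudes are the fiducial values
given in the next subsection.
\begin{verbatim}
# Illustrative sketch: integrate Equation (eq:precession) for the four
# angular momenta (rows: S, D, P, H).  omega() is a placeholder; the
# actual Omega_ik (Appendix A) depend on masses, radii, and mutual angles.
import numpy as np
from scipy.integrate import solve_ivp

TAU = 5.0e5                     # depletion timescale tau, in yr

def omega(i, k, L):
    """Placeholder precession frequency Omega_ik, in rad/yr."""
    return 1.0e-4               # substitute the Appendix A expressions

def rhs(t, y):
    L = y.reshape(4, 3)
    dL = np.zeros_like(L)
    for k in range(4):
        for i in range(4):
            if i != k:
                J_ik = np.linalg.norm(L[i] + L[k])
                dL[k] += omega(i, k, L) * np.cross(L[i], L[k]) / J_ik
    dL[1] -= L[1] / (TAU + t)   # inner-disk depletion (Eq. (eq:dDdt))
    dL[3] -= L[3] / (TAU + t)   # outer-disk depletion (models all-M, all-m)
    return dL.ravel()

# S, D, H initially along z; P tilted by psi_p0 = 60 deg about the x axis
# (line of nodes along x).  Magnitudes in erg s (fiducial values).
psi0 = np.radians(60.0)
y0 = np.array([[0.0, 0.0, 1.71e50],                                  # S
               [0.0, 0.0, 1.32e51],                                  # D
               [0.0, 1.89e50*np.sin(psi0), 1.89e50*np.cos(psi0)],    # P
               [0.0, 0.0, 3.76e52]]).ravel()                         # H

sol = solve_ivp(rhs, (0.0, 1.0e7), y0, rtol=1e-8)   # integrate for 10 Myr
D_end = sol.y[3:6, -1]
psi_d_end = np.degrees(np.arccos(D_end[2] / np.linalg.norm(D_end)))
\end{verbatim}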
\subsection{Numerical Setup}
The host is assumed to be a protostar of mass $M_* = M_\odot$,
radius $R_* = 2R_\odot$, rotation rate $\Omega_* = 0.1(GM_*/R_*^3)^{1/2}$, and
angular momentum
\begin{eqnarray}
S &=& k_*M_*R_*^2\Omega_* =
1.71 \times 10^{50}\\
&\times& \left(\frac{k_*}{0.2}\right) \left(\frac{M_*}{M_\odot}\right)
\left(\frac{R_*}{2R_\odot}\right)^2
\left(\frac{\Omega_*}{0.1\sqrt{GM_\odot/(2R_\odot)^3}}\right)\,
\mathrm{erg\,s}\nonumber\,,
\end{eqnarray}
where $k_* \simeq 0.2$ for a fully convective star (modeled as a polytrope of
index $n = 1.5$).
The planet is taken to have Jupiter's mass and radius,
$M_\mathrm{p} = M_\mathrm{J}$ and $R_\mathrm{p} = R_\mathrm{J}$, and a
fixed semimajor axis, $a = 5$\,au, so that its orbital angular momentum is
\begin{eqnarray}
P &=& M_\mathrm{p}(GM_*a)^{1/2} =
1.89 \times 10^{50}
\label{eq:P}\\
&&\times \left(\frac{M_\mathrm{p}}{M_\mathrm{J}}\right)
\left(\frac{M_*}{M_\odot}\right)^{1/2}
\left(\frac{a}{5\,\mathrm{au}}\right)^{1/2}\,\mathrm{erg\,s}\,.\nonumber
\end{eqnarray}
We consider two values for the total initial disk mass: (1)
$M_\mathrm{t0} = 0.1\,M_*$, corresponding to a comparatively massive disk, and
(2) $M_\mathrm{t0} = 0.02\,M_*$, corresponding to a highly evolved system that
has entered the transition-disk phase.
In both cases we take the disk surface density to scale with radius as $r^{-1}$.
The inner disk extends from $r_\mathrm{d,in} = 4R_\odot$ to
$r_\mathrm{d,out} = a$, and initially has $10\%$ of the total mass.
Its angular momentum is
\begin{eqnarray}
D &=& \frac{2}{3}M_\mathrm{d}\left(GM_*\right)^{1/2}
\frac{r_\mathrm{d,out}^{3/2} - r_\mathrm{d,in}^{3/2}}
{r_\mathrm{d,out} - r_\mathrm{d,in}}
\label{eq:D}\\
&\simeq& 1.32 \times 10^{51}\,
\left(\frac{M_\mathrm{d}}{0.01M_\odot}\right)
\left(\frac{M_*}{M_\odot}\right)^{1/2}
\left(\frac{a}{5\,\mathrm{au}}\right)^{1/2}\, \mathrm{erg\,s} \nonumber \,.
\end{eqnarray}
The outer disk has edges at $r_\mathrm{h,in} = a$ and
$r_\mathrm{h,out} = 50$\,au, and angular momentum
\begin{eqnarray}
H &=& \frac{2}{3}M_\mathrm{h}\left(GM_*\right)^{1/2}
\frac{r_\mathrm{h,out}^{3/2} - r_\mathrm{h,in}^{3/2}}
{r_\mathrm{h,out} - r_\mathrm{h,in}}\\
&\simeq& 3.76 \times 10^{52}\,
\left(\frac{M_\mathrm{h}}{0.09M_\odot}\right)
\left(\frac{M_*}{M_\odot}\right)^{1/2}
\left(\frac{r_\mathrm{h,out}}{50\,\mathrm{au}}\right)^{1/2}\, \mathrm{erg\,s}
\nonumber\,.
\end{eqnarray}
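The numerical coefficients in the above expressions are easy to verify; the
following few lines of Python (CGS units, fiducial parameters) reproduce them.
Note that the exact two-edge factor for $H$ gives a value $\sim$8\% larger
than the quoted coefficient, which corresponds to the
$r_\mathrm{h,in} \to 0$ limit of the bracketed factor.
\begin{verbatim}
# Verify the coefficients in the expressions for S, P, D, and H
# (CGS units; fiducial parameters of this subsection).
import numpy as np

G, Msun, Rsun, au = 6.674e-8, 1.989e33, 6.957e10, 1.496e13
Mj = 1.898e30

Ms, Rs = Msun, 2.0 * Rsun
Omega_s = 0.1 * np.sqrt(G * Ms / Rs**3)
S = 0.2 * Ms * Rs**2 * Omega_s            # -> ~1.7e50 erg s

a = 5.0 * au
P = Mj * np.sqrt(G * Ms * a)              # -> ~1.9e50 erg s

def ring_L(M, r_in, r_out):
    """Angular momentum of a Keplerian disk with Sigma propto 1/r."""
    return (2.0/3.0) * M * np.sqrt(G * Ms) \
        * (r_out**1.5 - r_in**1.5) / (r_out - r_in)

D = ring_L(0.01 * Msun, 4.0 * Rsun, a)    # -> ~1.3e51 erg s
H = ring_L(0.09 * Msun, a, 50.0 * au)     # -> ~4.1e52 erg s (exact edges);
                                          #    ~3.8e52 for r_in -> 0
print(f"S={S:.2e}  P={P:.2e}  D={D:.2e}  H={H:.2e}")
\end{verbatim}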
We model mass depletion in the disk using the expression first employed in this
context by \citet{BatyginAdams13},
\begin{equation}
M_\mathrm{t}(t) = \frac{M_{\mathrm{t}}(t=0)}{1 + t/\tau}\,,
\label{eq:deplete}
\end{equation}
where we adopt $M_{\mathrm{t}}(t=0)=0.1\,M_\sun$ and $\tau = 0.5$\,Myr as in
\citet{Lai14}.
We assume that this expression can also be applied separately to the inner and
outer parts of the disk.
The time evolution of the inner disk's angular momentum due to mass depletion is
thus given by
\begin{equation}
\label{eq:dDdt}
\left(\frac{d\boldsymbol{D}}{dt}\right)_\mathrm{depl}
= -\frac{D_0}{\tau(1 + t/\tau)^2}\hat{\boldsymbol{D}}
= -\frac{\boldsymbol{D}}{\tau+t}\,.
\end{equation}
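Both equalities in Equation~(\ref{eq:dDdt}) follow directly from
Equation~(\ref{eq:deplete}): since $D(t) = D_0/(1 + t/\tau)$,
\[
\frac{dD}{dt} = -\frac{D_0}{\tau\,(1 + t/\tau)^2}
= -\frac{1}{\tau + t}\,\frac{D_0}{1 + t/\tau}
= -\frac{D}{\tau + t}\,.
\]
The same law also fixes the correspondence between $M_\mathrm{t0}$ and $t_0$
invoked below; for example,
$M_\mathrm{t}(2\,\mathrm{Myr}) = 0.1\,M_\sun/(1 + 2/0.5) = 0.02\,M_\sun$.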
For the outer disk we assume that the presence of the planet inhibits efficient
mass accretion, and we consider the following limits: (1) the outer disk's mass
remains constant, and (2) the outer disk loses mass (e.g., through
photoevaporation) at the rate given by Equation~(\ref{eq:deplete}).\footnote{
After the inner disk tilts away from the outer disk, the inner rim of the outer
disk becomes exposed to the direct stellar radiation field, which accelerates
the evaporation process \citep{Alexander+06}.
According to current models, disk evaporation is induced primarily by X-ray and
FUV photons and occurs at a rate of
$\sim$$10^{-9}$--$10^{-8}\,M_\sun\,\mathrm{yr}^{-1}$ for typical stellar
radiation fields (see \citealt{Gorti+16} for a review).
Even if the actual rate is near the lower end of this range, the outer disk in
our low-$M_{\rm t0}$ models would be fully depleted of mass on a timescale of
$\sim$$10$\,Myr; however, a similar outcome for the high-$M_\mathrm{t0}$ models
would require the mass evaporation rate to be near the upper end of the
estimated range.}
We assume that any angular momentum lost by the disk is transported out of the
system (for example, by a disk wind).
We adopt a Cartesian coordinate system ($x,\,y,\,z$) as the ``lab'' frame of
reference (see Figure~\ref{fig:initial}).
Initially, the equatorial plane of the star and the planes of the inner and
outer disks coincide with the $x$--$y$ plane (i.e.,
$\psi_\mathrm{s0} = \psi_\mathrm{d0} = \psi_\mathrm{h0} = 0$, where $\psi_k$
denotes the angle between $\boldsymbol{L}_k$ and the $z$ axis), and only the
orbital plane of the planet has a finite initial inclination
($\psi_\mathrm{p0}$).
The $x$ axis is chosen to coincide with the initial line of nodes of the
planet's orbital plane.
\begin{table*}
\begin{center}
\caption{Model parameters\label{tab:models}}
\begin{tabular}{l|cccccccccc}
\hline\hline
Model & $\boldsymbol{S}$ & $\boldsymbol{D}$ & $\boldsymbol{P}$ & $\boldsymbol{H}$
& $M_\mathrm{d0} \ [M_*] $ & $M_\mathrm{h0}\ [M_*]$ & $M_\mathrm{t0}\ [M_*]$
& $M_\mathrm{p}$ & $a$ [au] & $\psi_\mathrm{p0}\ [^\circ]$ \\
\hline
\texttt{DP-M} & -- & $\surd$ & $\surd$ & -- & $0.010\downarrow$ & -- & -- & $M_\mathrm{J}$ & $5$ & $60$ \\
\texttt{DP-m} & -- & $\surd$ & $\surd$ & -- & $0.002\downarrow$ & -- & -- & $M_\mathrm{J}$ & $5$ & $60$ \\
\texttt{all-M} & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $0.010\downarrow$ & $0.090\downarrow$ & $0.10$ & $M_\mathrm{J}$ & $5$ & $60$ \\
\texttt{all-m} & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $0.002\downarrow$ & $0.018\downarrow$ & $0.02$ & $M_\mathrm{J}$ & $5$ & $60$ \\
\texttt{all-Mx} & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $0.010\downarrow$ & $0.090$ -- & $0.10$ & $M_\mathrm{J}$ & $5$ & $60$ \\
\texttt{all-mx} & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $0.002\downarrow$ & $0.018$ -- & $0.02$ & $M_\mathrm{J}$ & $5$ & $60$ \\
\texttt{retrograde} & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $0.002\downarrow$ & $0.018\downarrow$ & $0.02$ & $M_\mathrm{J}$ & $5$ & $110$ \\
\texttt{binary} & $\surd$ & $\surd$ & $\surd$ & -- & -- & -- & $\ \ \,0.10\downarrow$ & $M_\odot$ & $300$ & $10$ \\
\hline
\end{tabular}
\end{center}
\end{table*}
Table~\ref{tab:models} presents the models we explore and summarizes the
relevant parameters.
Specifically, column 1 contains the models' designations (with the letters
\texttt{M} and \texttt{m} denoting, respectively, high and low disk masses at
time $t=t_0$), columns 2--5 indicate which system components are being
considered, columns 6--9 list the disk and planet masses (with the arrow
indicating active mass depletion), and columns 10 and~11 give the planet's
semimajor axis and initial misalignment angle, respectively.
The last listed model (\texttt{binary}) does not correspond to a planet
misaligning the inner disk but rather to a binary star tilting the entire disk.
This case is considered for comparison with the corresponding model in
\citet{Lai14}.
\section{Results}
\label{sec:results}
The gravitational interactions among the different components of the system that
we consider (star, inner disk, planet, and outer disk) can result in a highly
nonlinear behavior.
To gain insight into these interactions we start by analyzing a much simpler
system, one consisting only of the inner disk and the (initially misaligned)
planet.
The relevant timescales that characterize the evolution of this system are the
precession period $\tau_\mathrm{dp} \equiv 2\pi/\Omega_\mathrm{dp}$
(Equation~(\ref{eq:omega_dp})) and the mass depletion timescale
$\tau = 5\times 10^5\,$yr (Equation~(\ref{eq:deplete})).
\begin{figure*}
\includegraphics[width=\textwidth]{DP-M_fig2.eps}
\caption{
Time evolution of a ``reduced'' system, consisting of just a planet and an
inner disk, for an initial disk mass $M_\mathrm{d0} = 0.01\,M_*$
(model~\texttt{DP-M}).
Top left: the angles that the angular momentum vectors $\boldsymbol{D}$,
$\boldsymbol{P}$ and $\boldsymbol{J}_\mathrm{dp}$ form with the $z$ axis
(the initial direction of $\boldsymbol{D}$), as well as the angle between
$\boldsymbol{D}$ and $\boldsymbol{P}$.
Top right: the projections of the angular momentum unit vectors onto the
$x$--$y$ plane.
Bottom left: the characteristic precession frequency.
Bottom right: the magnitudes of the angular momentum vectors.
In the left-hand panels, the initial $0.1$\,Myr of the evolution is
displayed at a higher resolution.
\label{fig:DP-M}}
\end{figure*}
Figure~\ref{fig:DP-M} shows the evolution of such a system for the case
(model~\texttt{DP-M}) where a Jupiter-mass planet on a misaligned orbit
($\psi_\mathrm{p0} = 60^\circ$) torques an inner disk of initial mass
$M_\mathrm{d0} = 0.01\,M_*$ (corresponding to $M_\mathrm{t0} = 0.1\,M_*$, i.e.,
to $t_0 = 0$ when $M_* = M_\sun$; see Equation~(\ref{eq:deplete})).
The top left panel exhibits the angles $\psi_\mathrm{d}$ and $\psi_\mathrm{p}$
(blue: inner disk; red: planet) as a function of time.
In this and the subsequent figures, we show results for a total duration of
$10$\,Myr.
This is long enough in comparison with $\tau$ to capture the secular evolution
of the system, which is driven by the mass depletion in the inner disk.
To capture the details of the oscillatory behavior associated with the
precession of the individual angular momentum vectors ($\boldsymbol{D}$ and
$\boldsymbol{P}$) about the total angular momentum vector
$\boldsymbol{J}_\mathrm{dp} = \boldsymbol{D} + \boldsymbol{P}$ (subscript
j)---which takes place on the shorter timescale $\tau_\mathrm{dp}$
($\simeq 9\times 10^3$\,yr at $t = t_0$)---we display the initial $0.1$\,Myr in
the top left panel using a higher time resolution and, in addition, show the
projected trajectories of the unit vectors $\hat{\boldsymbol{D}}$,
$\hat{\boldsymbol{P}}$, and $\hat{\boldsymbol{J}}_\mathrm{dp}$ in the $x$--$y$
plane during this time interval in the top right panel.
Given that $0.1\,{\rm Myr} \ll \tau$, the vectors $\hat{\boldsymbol{D}}$ and
$\hat{\boldsymbol{P}}$ execute a circular motion about
$\hat{\boldsymbol{J}}_\mathrm{dp}$ with virtually constant inclinations with
respect to the latter vector (given by the angles $\theta_\mathrm{jd}$ and
$\theta_\mathrm{jp}$, respectively), and the orientation of
$\hat{\boldsymbol{J}}_\mathrm{dp}$ with respect to the $z$ axis (given by the
angle $\psi_\mathrm{j}$) also remains essentially unchanged.
(The projection of $\hat{\boldsymbol{J}}_\mathrm{dp}$ on the $x$--$y$ plane is
displaced from the center along the $y$ axis, reflecting the fact that the
planet's initial line of nodes coincides with the $x$ axis.)
As the vectors $\hat{\boldsymbol{D}}$ and $\hat{\boldsymbol{P}}$ precess about
$\hat{\boldsymbol{J}}_\mathrm{dp}$, the angles $\psi_\mathrm{d}$ and
$\psi_\mathrm{p}$ oscillate in the ranges
$|\psi_\mathrm{j} - \theta_\mathrm{jd}| \leq \psi_\mathrm{d} \leq
\psi_\mathrm{j} + \theta_\mathrm{jd}$ and
$|\psi_\mathrm{j} - \theta_\mathrm{jp}| \leq \psi_\mathrm{p} \leq
\psi_\mathrm{j} + \theta_\mathrm{jp}$, respectively.
\begin{figure}
\begin{center}
\includegraphics{misalignment_fig3.eps}
\end{center}
\caption{
Schematic sketch of the change in the total angular momentum vector
$\boldsymbol{J}_\mathrm{dp}$ that is induced by mass depletion from the disk
in the limit where the precession period $\tau_{\rm dp}$ is much shorter
than the characteristic depletion time $\tau$.
The two depicted configurations are separated by $0.5\,\tau_\mathrm{dp}$.
\label{fig:vectors}}
\end{figure}
A notable feature of the evolution of this system on a timescale $\gtrsim \tau$
is the increase in the angle $\psi_\mathrm{d}$ (blue line in the top left
panel)---indicating progressive misalignment of the disk with respect to its
initial orientation---as the magnitude of the angular momentum $\boldsymbol{D}$
decreases with the loss of mass from the disk (blue line in the bottom right
panel).
At the same time, the orbital plane of the planet (red line in the top left
panel) tends toward alignment with $\boldsymbol{J}_\mathrm{dp}$.
The magenta lines in the top left and bottom right panels indicate that the
orientation of the vector $\boldsymbol{J}_\mathrm{dp}$ remains fixed even as its
magnitude decreases (on a timescale $\gtrsim \tau$) on account of the decrease
in the magnitude of $\boldsymbol{D}$.
As we demonstrate analytically in Appendix~\ref{app:Jdp}, the constancy of
$\psi_\mathrm{j}$ is a consequence of the inequality
$\tau_\mathrm{dp} \ll \tau$.
To better understand the evolution of the disk and planet orientations, we
consider the (small) variations in $\boldsymbol{D}$ and
$\boldsymbol{J}_\mathrm{dp}$ that are induced by mass depletion over a small
fraction of the precession period.
On the left-hand side of Figure~\ref{fig:vectors} we show a schematic sketch of
the orientations of the vectors $\boldsymbol{D}$, $\boldsymbol{P}$, and
$\boldsymbol{J}_\mathrm{dp}$ at some given time (denoted by the subscript 1) and
a short time later (subscript 2).
During that time interval the vector $\boldsymbol{J}_\mathrm{dp}$ tilts slightly
to the left, and as a result it moves away from $\boldsymbol{D}$ and closer to
$\boldsymbol{P}$.
The sketch on the right-hand side of Figure~\ref{fig:vectors} demonstrates that,
if we were to consider the same evolution a half-cycle later, the same
conclusion would be reached: in this case the vector
$\boldsymbol{J}_{\mathrm{dp}3}$ moves slightly to the right (to become
$\boldsymbol{J}_{\mathrm{dp}4}$), with the angle between
$\boldsymbol{J}_\mathrm{dp}$ and $\boldsymbol{D}$ again increasing even as the
angle between $\boldsymbol{J}_\mathrm{dp}$ and $\boldsymbol{P}$ decreases.
The angles between the total angular momentum vector and the vectors
$\boldsymbol{D}$ and $\boldsymbol{P}$ are thus seen to undergo a systematic,
secular variation.
The sketch in Figure~\ref{fig:vectors} also indicates that the vector
$\boldsymbol{J}_\mathrm{dp}$ undergoes an oscillation over each precession
cycle.
However, when $\tau_\mathrm{dp} \ll \tau$ and the fractional decrease in
$M_\mathrm{d}$ over a precession period remains $\ll 1$, the amplitude of the
oscillation is very small and $\boldsymbol{J}_\mathrm{dp}$ practically maintains
its initial direction (see Appendix~\ref{app:Jdp} for a formal demonstration of
this result).
In the limit where the disk mass becomes highly depleted and $D \to 0$,
$\boldsymbol{J}_\mathrm{dp} \to \boldsymbol{P}$, i.e., the planet aligns with
the initial direction of $\boldsymbol{J}_\mathrm{dp}$
($\theta_\mathrm{jp} \to 0$ and $\psi_\mathrm{p} \to \psi_\mathrm{j}$).
The disk angular momentum vector then precesses about $\boldsymbol{P}$, with its
orientation angle $\psi_\mathrm{d}$ (blue line in top left panel of
Figure~\ref{fig:DP-M}) oscillating between
$|\psi_\mathrm{p} - \theta_\mathrm{dp}|$ and
$\psi_\mathrm{p} + \theta_\mathrm{dp}$.\footnote{
The angle $\theta_\mathrm{dp}$ between $\boldsymbol{D}$ and $\boldsymbol{P}$
(cyan line in the top left panel of Figure~\ref{fig:DP-M}) remains constant
because there are no torques that can modify it.}
Note that the precession frequency is also affected by the disk's mass depletion
and decreases with time (see Equation~(\ref{eq:omega_dp})); the time evolution
of $\Omega_{\rm dp}$ is shown in the bottom left panel of Figure~\ref{fig:DP-M}.
\begin{figure*}
\includegraphics[width=\textwidth]{DP-m_fig4.eps}
\caption{
Same as Figure~\ref{fig:DP-M}, except that $M_\mathrm{d0} = 0.002\,M_*$
(model~\texttt{DP-m}).
\label{fig:DP-m}}
\end{figure*}
Figure~\ref{fig:DP-m} shows the evolution of a similar
system---model~\texttt{DP-m}---in which the inner disk has a lower initial mass,
$M_\mathrm{d0} = 0.002\,M_*$ (corresponding to $M_\mathrm{t0} = 0.02\,M_*$,
i.e., to $t_0=2$\,Myr when $M_*=M_\sun$; see Equation~(\ref{eq:deplete})).
The initial oscillation frequency in this case is lower than in model
\texttt{DP-M}, as expected from Equation~(\ref{eq:omega_dp}), but it attains the
same asymptotic value (bottom left panel), corresponding to the limit
$J_\mathrm{dp} \to P$ in which $\Omega_\mathrm{dp}$ becomes independent of
$M_\mathrm{d}$.
The initial value of $J_\mathrm{dp}/D$ is higher in the present model than in
the model considered in Figure~\ref{fig:DP-M} ($\simeq 1.5$ vs. $\simeq 1.1$;
see Equations~(\ref{eq:P}) and~(\ref{eq:D})), which results in a higher value of
$\psi_\mathrm{j}$ (and, correspondingly, a higher initial value of
$\theta_\mathrm{jd}$ and lower initial value of $\theta_\mathrm{jp}$).
The higher value of $\psi_\mathrm{j}$ is the reason why the oscillation
amplitude of $\psi_\mathrm{d}$ and the initial oscillation amplitude of
$\psi_\mathrm{p}$ (top left panel) are larger in this case.
The higher value of $J_\mathrm{dp}/D_0$ in Figure~\ref{fig:DP-m} also accounts
for the differences in the projection map shown in the top right panel (a larger
$y$ value for the projection of $\hat{\boldsymbol{J}}_\mathrm{dp}$, a larger
area encircled by the projection of $\hat{\boldsymbol{D}}$, and a smaller area
encircled by the projection of $\hat{\boldsymbol{P}}$).
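These values of $J_\mathrm{dp}/D$ and $\psi_\mathrm{j}$ follow directly from
the fiducial angular momenta; a quick check (in the same illustrative Python
setting as above):
\begin{verbatim}
# J_dp/D and psi_j for models DP-M and DP-m (psi_p0 = 60 deg;
# D from Eq. (eq:D) scaled to M_d0, P from Eq. (eq:P)).
import numpy as np

P = 1.89e50                                   # erg s
psi0 = np.radians(60.0)
for name, D in (("DP-M", 1.32e51), ("DP-m", 0.2 * 1.32e51)):
    J = np.sqrt(D**2 + P**2 + 2.0 * D * P * np.cos(psi0))
    psi_j = np.degrees(np.arccos((D + P * np.cos(psi0)) / J))
    print(f"{name}: J/D = {J/D:.2f}, psi_j = {psi_j:.1f} deg")
# -> DP-M: J/D = 1.08, psi_j = 6.6 deg
# -> DP-m: J/D = 1.49, psi_j = 24.5 deg
\end{verbatim}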
\begin{figure*}
\includegraphics[width=\textwidth]{all-M_fig5.eps}
\caption{
Time evolution of the full system (star, inner disk, planet, outer disk) for
an initial inner disk mass $M_\mathrm{d0} = 0.01\,M_*$ and initial total
disk mass $M_\mathrm{t0} = 0.1\,M_*$ (model~\texttt{all-M}).
Panel arrangement is the same as in Figure~\ref{fig:DP-M}, although the
details of the displayed quantities---which are specified in each panel and
now also include the angular momenta of the star ($\boldsymbol{S}$) and the
outer disk ($\boldsymbol{H}$)---are different.
\label{fig:all-M}}
\end{figure*}
\begin{figure*}
\includegraphics[width=\textwidth]{all-m_fig6.eps}
\caption{
Same as Figure~\ref{fig:all-M}, except that $M_\mathrm{d0} = 0.002\,M_*$ and
$M_\mathrm{t0} = 0.02\,M_*$ (model~\texttt{all-m}).
\label{fig:all-m}}
\end{figure*}
We now consider the full system for two values of the total disk mass:
$M_\mathrm{t0} = 0.1\,M_*$ (model~\texttt{all-M}, corresponding to $t_0 = 0$;
Figure~\ref{fig:all-M}) and $M_\mathrm{t0} = 0.02\,M_*$ (model~\texttt{all-m},
corresponding to $t_0 = 2$\,Myr; Figure~\ref{fig:all-m}), assuming that both
parts of the disk lose mass according to the relation given by
Equation~(\ref{eq:deplete}).
The inner disks in these two cases correspond, respectively, to the disk masses
adopted in model~\texttt{DP-M} (Figure~\ref{fig:DP-M}) and model~\texttt{DP-m}
(Figure~\ref{fig:DP-m}).
The merit of first considering the simpler systems described by the latter
models becomes apparent from a comparison between the respective figures.
It is seen that the basic behavior of model~\texttt{all-M} is similar to that of
model~\texttt{DP-M}, and that the main differences between model~\texttt{all-M}
and model~\texttt{all-m} are captured by the way in which model~\texttt{DP-m} is
distinct from model~\texttt{DP-M}.
The physical basis for this correspondence is the centrality of the torque
exerted on the inner disk by the planet.
According to Equation~(\ref{eq:precession}), the relative magnitudes of the
torques acting on the disk at sufficiently late times (after $D$ becomes smaller
than the angular momentum of each of the other system components) are reflected
in the magnitudes of the corresponding precession frequencies.
The dominance of the planet's contribution can thus be inferred from the plots
in the bottom left panels of Figures~\ref{fig:all-M} and~\ref{fig:all-m}, which
show that, after the contribution of $D$ becomes unimportant (bottom right
panels), the precession frequency induced by the planet exceeds those induced by
the outer disk and by the star.\footnote{
The star--planet and star--outer-disk precession frequencies
($\Omega_\mathrm{sp}$ and~$\Omega_\mathrm{sh}$; see
Equations~(\ref{eq:omega_sp}) and~(\ref{eq:omega_sh})) are not shown in these
figures because they are too low to fit in the plotted range.}
While the basic disk misalignment mechanism is the same as in the
planet--inner-disk system, the detailed behavior of the full system is
understandably more complex.
One difference that is apparent from a comparison of the left-hand panels in
Figures~\ref{fig:all-M} and~\ref{fig:DP-M} is the higher oscillation frequency
of $\psi_\mathrm{p}$ and $\psi_\mathrm{d}$ in the full model (with the same
frequency also seen in the timeline of $\psi_\mathrm{s}$).
In this case the planet--outer-disk precession frequency $\Omega_\mathrm{ph}$
(Equation~(\ref{eq:omega_ph})) and the inner-disk--outer-disk precession
frequency $\Omega_\mathrm{dh}$ (Equation~(\ref{eq:omega_dh})) are initially
comparable and larger than $\Omega_\mathrm{dp}$, and $\Omega_\mathrm{ph}$
remains the dominant frequency throughout the system's evolution.
The fact that the outer disk imposes a precession on both $\boldsymbol{P}$ and
$\boldsymbol{D}$ has the effect of weakening the interaction between the planet
and the inner disk, which slows down the disk misalignment process.
Another difference is revealed by a comparison of the top right panels: in the
full system, $\hat{\boldsymbol{J}}_\mathrm{dp}$ precesses on account of the
torque induced by the outer disk, so it no longer corresponds to just a single
point in the $x$--$y$ plane.
This, in turn, increases the sizes of the regions traced in this plane by
$\hat{\boldsymbol{D}}$ and $\hat{\boldsymbol{P}}$.
The behavior of the lower-$M_\mathrm{t0}$ model shown in Figure~\ref{fig:all-m}
is also more involved.
In this case, in addition to the strong oscillations of the angles $\psi_i$
already manifested in Figure~\ref{fig:DP-m}, the different precession
frequencies $\Omega_{ik}$ also exhibit large-amplitude oscillations, reflecting
their dependence on the angles $\theta_{ik}$ between the angular momentum
vectors.
In both of the full-system models, the strongest influence on the star is
produced by its interaction with the inner disk, but the resulting precession
frequency ($\Omega_\mathrm{sd}$) remains low.
Therefore, the stellar angular momentum vector essentially retains its original
orientation, which implies that the angle $\psi_\mathrm{d}$ is a good proxy for
the angle between the primordial stellar spin and the orbit of any planet that
eventually forms in the inner disk.
\begin{figure}
\includegraphics[width=\columnwidth]{all-Mx_fig7.eps}
\caption{
Time evolution of the full system in the limit where only the inner disk
undergoes mass depletion and the mass of the outer disk remains unchanged,
for the same initial conditions as in Figure~\ref{fig:all-M}
(model~\texttt{all-Mx}).
The top and bottom panels correspond, respectively, to the top left and
bottom left panels of Figure~\ref{fig:all-M}, but in this case the initial
$0.1$\,Myr of the evolution is not displayed at a higher resolution.
\label{fig:all-Mx}}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{all-mx_fig8.eps}
\caption{
Same as Figure~\ref{fig:all-Mx}, but for the initial conditions of
Figure~\ref{fig:all-m} (model~\texttt{all-mx}).
\label{fig:all-mx}}
\end{figure}
We repeated the calculations shown in Figures~\ref{fig:all-M}
and~\ref{fig:all-m} under the assumption that only the inner disk loses mass
while $M_\mathrm{h}$ remains constant (models~\texttt{all-Mx}
and~\texttt{all-mx}; Figures~\ref{fig:all-Mx} and~\ref{fig:all-mx},
respectively).
At the start of the evolution, the frequencies $\Omega_\mathrm{ph}$ and
$\Omega_\mathrm{dh}$ are $\propto$$M_\mathrm{h}$, whereas $\Omega_\mathrm{dp}$
scales linearly (or, in the case of the lower-$M_\mathrm{d0}$ model, close to
linearly) with $M_\mathrm{d}$ (see Appendix~\ref{app:torques}).
In the cases considered in Figures~\ref{fig:all-M} and~\ref{fig:all-m} all these
frequencies decrease with time, so the relative magnitude of
$\Omega_\mathrm{dp}$ remains comparatively large throughout the evolution.
In contrast, in the cases shown in Figures~\ref{fig:all-Mx} and~\ref{fig:all-mx}
the frequencies $\Omega_\mathrm{ph}$ and $\Omega_\mathrm{dh}$ remain constant
and only $\Omega_\mathrm{dp}$ decreases with time.
As the difference between $\Omega_\mathrm{dp}$ and the other two frequencies
starts to grow, the inner disk misalignment process is aborted, and thereafter
the mean values of $\psi_\mathrm{d}$ and $\psi_\mathrm{p}$ remain constant.
This behavior is consistent with our conclusion about the central role that the
torque exerted by the planet plays in misaligning the inner disk: when the fast
precession that the outer disk induces in the orbital motions of both the planet
and the inner disk comes to dominate the system dynamics, the direct coupling
between the planet and the inner disk is effectively broken and the misalignment
process is halted.
Note, however, from Figure~\ref{fig:all-mx} that, even in this case, the angle
$\psi_\mathrm{d}$ can attain a high value (as part of a large-amplitude
oscillation) when $M_\mathrm{t0}$ is small.
\begin{figure*}
\includegraphics[width=\textwidth]{retrograde_fig9.eps}
\caption{
Time evolution with the same initial conditions as in
Figure~\ref{fig:all-m}, except that the planet is initially on a retrograde
orbit ($\psi_{\mathrm{p}0}$ is changed from $60^\circ$ to $110^\circ$;
model~\texttt{retrograde}).
The display format is the same as in Figure~\ref{fig:all-Mx}, but in this
case the panels also show a zoomed-in version of the evolution around the
time of the jumps in $\psi_\mathrm{p}$ and $\psi_\mathrm{d}$.
The dashed line in the top panel marks the transition between prograde and
retrograde orientations ($90^\circ$).
\label{fig:retrograde}}
\end{figure*}
To determine whether the proposed misalignment mechanism can also account for
disks (and, eventually, planets) on retrograde orbits, we consider a system in
which the companion planet is placed on such an orbit
(model~\texttt{retrograde}, which is the same as model~\texttt{all-m} except
that $\psi_{\mathrm{p}0}$ is changed from $60^\circ$ to $110^\circ$).
As Figure~\ref{fig:retrograde} demonstrates, the disk in this case evolves to a
retrograde configuration ($\psi_\mathrm{d} > 90^\circ$) at late times even as
the planet's orbit transitions to prograde motion.
A noteworthy feature of the plotted orbital evolution (shown in the
high-resolution portion of the figure) is the rapid increase in the value of
$\psi_\mathrm{d}$ (which is an adequate proxy for $\theta_\mathrm{sd}$ also in
this case)---and corresponding fast decrease in the value of
$\psi_\mathrm{p}$---that occurs when the planet's orbit transitions from a
retrograde to a prograde orientation.
This behavior can be traced to the fact that $\cos{\theta_\mathrm{ph}}$ vanishes
at essentially the same time that $\psi_\mathrm{p}$ crosses $90^\circ$ because
the outer disk (which dominates the total angular momentum) remains well aligned
with the $z$ axis.
This, in turn, implies (see Equation~(\ref{eq:omega_ph})) that, at the time of
the retrograde-to-prograde transition, the planet becomes dynamically decoupled
from the outer disk and only retains a coupling to the inner disk.
Its evolution is, however, different from that of a ``reduced'' system, in which
only the planet and the inner disk interact, because the inner disk remains
dynamically ``tethered'' to the outer disk ($\theta_\mathrm{dh}\ne 90^\circ$).
As we verified by an explicit calculation, the evolution of the reduced system
remains smooth when $\psi_\mathrm{p}$ crosses $90^\circ$.
The jump in $\psi_\mathrm{p}$ exhibited by the full system leads to a
significant increase in the value of $\cos{\theta_\mathrm{ph}}$ and hence of
$\Omega_\mathrm{ph}$, which, in turn, restores (and even enhances) the planet's
coupling to the outer disk after its transition to retrograde motion (see bottom
panel of Figure~\ref{fig:retrograde}).
The maximum value attained by $\theta_\mathrm{sd}$ in this example is
$\simeq 172^\circ$, which, just as in the prograde case shown in
Figure~\ref{fig:all-m}, exceeds the initial misalignment angle of the planetary
orbit (albeit to a much larger extent in this case).
It is, however, worth noting that not all model systems in which the planet is
initially on a retrograde orbit give rise to a retrograde inner disk at the end
of the prescribed evolution time; in particular, we found that the outcome of
the simulated evolution (which depends on whether $\psi_\mathrm{p}$ drops below
$90^\circ$) is sensitive to the value of the initial planetary misalignment
angle $\psi_{\mathrm{p}0}$ (keeping all other model parameters unchanged).
In concluding this section it is instructive to compare the results obtained
for our model with those found for the model originally proposed by
\citet{Batygin12} (see Section~\ref{sec:intro} for references to additional work
on that model).
We introduced our proposed scenario as a variant of the latter model, with a
close-by giant planet taking the place of a distant stellar companion.
In the original proposal the disk misalignment was attributed to the
precessional motion that is induced by the torque that the binary companion
exerts on the disk.
In this picture the spin--orbit angle oscillates (on a timescale $\sim$1\,Myr
for typical parameters) between $0^\circ$ and roughly twice the binary orbital
inclination, so it can be large if observed at the ``right'' time.
Our model retains this feature of the earlier proposal, particularly in cases
where the companion planet is placed on a high-inclination orbit after the disk
has already lost much of its initial mass, but it also exhibits a novel feature
that gives rise to a secular (rather than oscillatory) change in the spin--orbit
angle (which can potentially lead to a substantial increase in this angle).
This new behavior represents an ``exchange of orientations'' between the planet
and the inner disk that is driven by the mass loss from the inner disk and
corresponds to a decrease of the inner disk's angular momentum from a value
higher than that of the planet to a lower value (with the two remaining within
an order of magnitude of each other for representative parameters).
This behavior is not found in a binary system because of the large mismatch
between the angular momenta of the companion and the disk in that case (and, in
fact, it is also suppressed in the case of a planetary companion when the mass
of the outer disk is not depleted).
As we already noted in Section~\ref{subsec:assumptions}, \citet{BatyginAdams13}
suggested that the disk misalignment in a binary system can be significantly
increased due to a resonance between the star--disk and binary--disk precession
frequencies.
(We can use Equations~(\ref{eq:omega_sd}) and~(\ref{eq:omega_dp}), respectively,
to evaluate these frequencies, plugging in values for the outer disk radius,
companion orbital radius, and companion mass that are appropriate for the binary
case.)
\citet{Lai14} clarified the effect of this resonance and emphasized that, for
plausible system parameters, it can be expected to be crossed as the disk
becomes depleted of mass.
However, for the planetary-companion systems considered in this paper the ratio
$|\Omega_\mathrm{sd}/\Omega_\mathrm{dp}|$ remains $< 1$ throughout the
evolution, so no such resonance is encountered in this case.
In both of these systems $\Omega_\mathrm{sd}$ is initially
$\propto M_\mathrm{d}$, so it decreases during the early evolution.
The same scaling also characterizes $\Omega_\mathrm{dp}$ in the planetary case,
which explains why the corresponding curves do not cross.
In contrast, in the binary case (for which the sum of the disk and companion
angular momenta is dominated by the companion's contribution) the frequency
$\Omega_\mathrm{dp}$ does not scale with the disk mass and it thus remains
nearly constant, which makes it possible for the corresponding curves to cross
(see Figure~\ref{fig:binary} in Appendix~\ref{app:resonance}).
Since our formalism also encompasses the binary case, we examined one such
system (model~\texttt{binary})---using the parameters adopted in figure~3 of
\citet{Lai14}---for comparison with the results of that work.
Our findings are presented in Appendix~\ref{app:resonance}.
\section{Discussion}
\label{sec:discussion}
The model considered in this paper represents a variant of the primordial disk
misalignment scenario of \citet{Batygin12} in which the companion is a nearby
planet rather than a distant star and only the inner region of the
protoplanetary disk (interior to the planet's orbit) becomes inclined.
In this section we assess whether this model provides a viable framework for
interpreting the relevant observations.
The first---and most basic---question that needs to be addressed is whether the
proposed misalignment mechanism is compatible with the broad range of apparent
spin--orbit angles indicated by the data.
In Section~\ref{sec:results} we showed that the spin--orbit angle
$\theta_\mathrm{sd}$ can deviate from its initial value of $0^\circ$ either
because of the precessional motion that is induced by the planet's torque on the
disk or on account of the secular variation that is driven by the mass depletion
process.
In the ``reduced'' disk--planet model considered in Figures~\ref{fig:DP-M}
and~\ref{fig:DP-m}, for which the angle $\psi_\mathrm{d}$ is taken as a proxy
for the intrinsic spin--orbit angle, the latter mechanism increases
$\theta_\mathrm{sd}$ to $\sim$$45^\circ$--$50^\circ$ on a timescale of $10$\,Myr
for an initial planetary inclination $\psi_\mathrm{p0} = 60^\circ$.
The maximum disk misalignment is, however, increased above this value by the
precessional oscillation, whose amplitude increases as the initial disk mass
decreases.
Based on the heuristic discussion given in connection with
Figure~\ref{fig:vectors}, the maximum possible value of $\psi_\mathrm{d}$
(corresponding to the limit $J_\mathrm{dp} \to P$) is given by
\begin{equation}
\label{eq:psi_max}
\psi_\mathrm{d,max} = \arccos\frac{D_0 + P\cos\psi_\mathrm{p0}}
{(D_0^2 + P^2 + 2D_0P\cos\psi_\mathrm{p0})^{1/2}} + \psi_\mathrm{p0}\,.
\end{equation}
For the parameters of Figure~\ref{fig:DP-m},
$\psi_\mathrm{d,max} \approx 84.5^\circ$, which can be compared with the actual
maximum value ($\simeq 72^\circ$) attained over the course of the $10$-Myr
evolution depicted in this figure.\footnote{
The intrinsic spin--orbit angle is not directly measurable, so its value must be
inferred from that of the apparent (projected) misalignment angle $\lambda$
\citep{FabryckyWinn09}.
In the special case of a planet whose orbital plane contains the line of
sight---an excellent approximation for planets observed by the transit
method---the apparent obliquity cannot exceed the associated intrinsic
misalignment angle (i.e., $\lambda \le \theta_\mathrm{sd}$).}
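The quoted maximum can be checked directly from Equation~(\ref{eq:psi_max});
a minimal Python evaluation (with the fiducial $D_0$ and $P$ of
model~\texttt{DP-m}):
\begin{verbatim}
# psi_d,max (Eq. (eq:psi_max)) for the parameters of Figure (DP-m):
# D_0 = 0.2 x 1.32e51 erg s, P = 1.89e50 erg s, psi_p0 = 60 deg.
import numpy as np

D0, P = 0.2 * 1.32e51, 1.89e50
psi0 = np.radians(60.0)
J = np.sqrt(D0**2 + P**2 + 2.0 * D0 * P * np.cos(psi0))
psi_max = np.degrees(np.arccos((D0 + P * np.cos(psi0)) / J)) + 60.0
print(f"psi_d,max = {psi_max:.1f} deg")       # -> 84.5 deg
\end{verbatim}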
Although the behavior of the full system (which includes also the outer disk and
the star) is more complicated, we found (see Figures~\ref{fig:all-M}
and~\ref{fig:all-m}) that, if the outer disk also loses mass, the maximum value
attained by $\theta_{\rm sd}$ ($\simeq 67^\circ$) is not much smaller than in
the simplified model.
Note that in the original primordial-misalignment scenario the maximum value of
$\theta_\mathrm{sd}$ ($\simeq 2\,\psi_\mathrm{p0}$) would have been considerably
higher ($\simeq 120^\circ$) for the parameters employed in our example.
However, as indicated by Equation~(\ref{eq:psi_max}), the maximum value
predicted by our model depends on the ratio $P/D_0$ and can in principle exceed
the binary-companion limit if $D_0$ is small and $P$ is sufficiently
large.\footnote{
$D_0$, the magnitude of the initial angular momentum of the inner disk, cannot
be much smaller than the value adopted in models~\texttt{DP-m}
and~\texttt{all-m} in view of the minimum value of $M_\mathrm{d0}$ that is
needed to account for the observed misaligned planets in the
primordial-disk-misalignment scenario (and also for the no-longer-present HJ in
the SHJ picture).}
Repeating the calculations shown in Figure~\ref{fig:all-m} for higher values of
$M_\mathrm{p}$, we found that the maximum value of $\theta_\mathrm{sd}$ is
$\sim$$89^\circ$, $104^\circ$, and~$125^\circ$ when $M_\mathrm{p}/M_\mathrm{J}$
increases from~1 to~2, 3, and~4, respectively.
These results further demonstrate that the disk can be tilted to a retrograde
configuration even when $\psi_\mathrm{p0} < 90^\circ$ if the planet is
sufficiently massive, although a retrograde disk orientation can also be
attained (including in the case of $M_\mathrm{p} \lesssim M_\mathrm{J}$) if the
planet's orbit is initially retrograde (see Figure~\ref{fig:retrograde}).
A low initial value of the disk angular momentum $D$ arises naturally in the
leading scenarios for placing planets in inclined orbits, which favor
comparatively low disk masses (see Section~\ref{sec:intro}).
The distribution of $\psi_\mathrm{p0}$ as well as those of the occurrence rate,
mass, and orbital radius of planets on inclined orbits are required for
determining the predicted distribution of primordial inner-disk misalignment
angles in this scenario, for comparison with observations.\footnote{
\citet{MatsakosKonigl15} were able to reproduce the observed obliquity
distributions of HJs around G and F stars within the framework of the SHJ model
under the assumption that the intrinsic spin--orbit angle has a random
distribution (corresponding to a flat distribution of $\lambda$; see
\citealt{FabryckyWinn09}).}
However, this information, as well as data on the relevant values of
$M_\mathrm{d0}$, are not yet available, so our results for $\theta_\mathrm{sd}$
are only a first step (a proof of concept) toward validating this interpretation
of the measured planet obliquities.
Our proposed misalignment mechanism is most effective when the disk mass within
the planetary orbit drops to $\sim$$M_\mathrm{p}$.
In the example demonstrating this fact (Figure~\ref{fig:all-m}),
$M_\mathrm{d0} \approx 2\,M_\mathrm{J}$.
In the primordial disk misalignment scenario, $M_\mathrm{d0}$ includes the mass
that would eventually be detected in the form of an HJ (or a lower-mass planet)
moving around the central star on a misaligned orbit.
Furthermore, if the ingestion of an HJ on a misaligned orbit is as ubiquitous as
inferred in the SHJ picture, that mass, too, must be included in the tally.
These requirements are consistent with the fact that the typical disk
misalignment time in our model (a few Myr) is comparable to the expected
giant-planet formation time, but this similarity also raises the question of
whether the torque exerted by the initially misaligned planet has the same
effect on the gaseous inner disk and on a giant planet embedded within it.
This question was considered by several authors in the context of a binary
companion \citep[e.g.,][]{Xiang-GruessPapaloizou14, PicognaMarzari15,
Martin+16}.
A useful gauge of the outcome of this dynamical interaction is the ratio of the
precession frequency induced in the embedded planet (which we label
$\Omega_\mathrm{pp}$) to $\Omega_\mathrm{dp}$ \citep{PicognaMarzari15}.
We derive an expression for $\Omega_\mathrm{pp}$ by approximating the inclined
and embedded planets as two rings with radii $a$ and $a_1 < a$, respectively
(see Appendix~\ref{app:torques}), and evaluate $\Omega_\mathrm{dp}$ under the
assumption that the disk mass has been sufficiently depleted for the planetary
contribution ($P$) to dominate $J_\mathrm{dp}$.
This leads to
$\Omega_\mathrm{pp}/\Omega_\mathrm{dp} \simeq 2\,(a_1/r_\mathrm{d,out})^{3/2}$,
which is the same as the estimate obtained by \citet{PicognaMarzari15} for a
binary system.
In the latter case, this ratio is small ($\lesssim 0.1$) for typical parameters,
implying that the embedded planet cannot keep up with the disk precession and
hence that its orbit develops a significant tilt with respect to the disk's
plane.
However, when the companion is a planet, the above ratio equals $(a_1/a)^{3/2}$
and may be considerably larger ($\lesssim 1$), which suggests that the embedded
planet can remain coupled to the disk in this case.
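To illustrate with representative numbers (chosen here purely for
concreteness): a binary configuration with $a_1 = 5$\,au and
$r_\mathrm{d,out} = 50$\,au yields
$\Omega_\mathrm{pp}/\Omega_\mathrm{dp} \simeq 2\,(0.1)^{3/2} \approx 0.06$,
whereas a planetary companion at $a = 5$\,au with an embedded planet at
$a_1 = 3$\,au yields $(3/5)^{3/2} \approx 0.46$, an order of magnitude larger.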
A key prediction of our proposed scenario---which distinguishes it from the
original \citet{Batygin12} proposal---is that there would in general be a
difference in the obliquity properties of ``nearby'' and ``distant'' planets,
corresponding to the different orientations attained, respectively, by the inner
and outer disks.
This prediction is qualitatively consistent with the finding of \citet{LiWinn16}
that the good spin--orbit alignment inferred in cool stars from an analysis of
rotational photometric modulations in \textit{Kepler} sources \citep{Mazeh+15}
becomes weaker (with the inferred orientations possibly tending toward a nearly
random distribution) at large orbital periods
($P_\mathrm{orb} \gtrsim 10^2\,$days).
The interpretation of these results in our picture is that the outer planets
remain aligned with the original stellar-spin direction, whereas the inner
planets---and, according to the SHJ model, also the stellar spin in $\sim$50\%
of sources---assume the orientation of the misaligned inner disk (which samples
a broad range of angles with respect to the initial spin direction).
Further observations and analysis are required to corroborate and refine these
findings so that they can be used to place tighter constraints on the models.
The result reported by \citet{LiWinn16} is seemingly at odds with another set of
observational findings---the discovery that the orbital planes of debris disks
(on scales $\gtrsim 10^2\,$au) are by and large well aligned with the spin axis
of the central star \citep{Watson+11, Greaves+14}.
This inferred alignment also seemingly rules out any interpretation of the
obliquity properties of exoplanets (including the SHJ model) that appeals to a
tidal realignment of the host star by a misaligned HJ.
These apparent difficulties can, however, be alleviated in the context of the
SHJ scenario and our present model.
Specifically, in the SHJ picture the realignment of the host star occurs on a
relatively long timescale ($\lesssim 1\,$Gyr; see \citealt{MatsakosKonigl15}).
This is much longer than the lifetime ($\sim$1--10\,Myr) of the gaseous disk
that gives rise to both the misaligned ``nearby'' planets and the debris disk
(which, in the scenario considered in this paper, are associated with the inner
and outer parts of the disk, respectively).
The inferred alignment properties of debris disks can be understood in this
picture if these disks are not much older than $\sim$1\,Gyr, so that the stellar
spin axis still points roughly along its original direction (which coincides
with the symmetry axis of the outer disk).
We searched the literature for age estimates of the 11 uniformly observed debris
disks tabulated in \citet{Greaves+14} and found that only two (10~CVn and
61~Vir) are definitely much older than $1$\,Gyr.
Now, \citet{MatsakosKonigl15} estimated that $\sim$50\% of systems ingest an SHJ
and should exhibit spin--orbit alignment to within $20^\circ$, with the rest
remaining misaligned.
Thus, the probability of observing an aligned debris disk in an older system is
$\sim 1/2$, implying that the chance of detecting 2 out of 2 such systems is
$\sim 1/4$.
It is, however, worth noting that the two aforementioned systems may not
actually be well aligned: based on the formal measurement uncertainties quoted
in \citet{Greaves+14}, the misalignment angle could be as large as $36^\circ$ in
10~CVn and $31^\circ$ in 61~Vir.
Further measurements that target old systems might be able to test the proposed
explanation, although one should bear in mind that additional factors may affect
the observational findings.
For example, in the tidal-downsizing scenario of planet formation, debris disks
are less likely to exist around stars that host giant planets \citep[see][]
{FletcherNayakshin16}.
\section{Conclusion}
\label{sec:conclusion}
In this paper we conduct a proof-of-concept study of a variant of the primordial
disk misalignment model of \citet{Batygin12}.
In that model, a binary companion with an orbital radius of a few hundred au
exerts a gravitational torque on a protoplanetary disk that causes its plane to
precess and leads to a large-amplitude oscillation of the spin--orbit angle
$\theta_\mathrm{sd}$ (the angle between the angular momentum vectors of the disk
and the central star).
Motivated by recent observations, we explore an alternative model in which the
role of the distant binary is taken by a giant planet with an orbital radius of
just a few au.
Such a companion likely resided originally in the disk, and its orbit most
probably became inclined away from the disk's plane through a gravitational
interaction with other planets (involving either scattering or resonant
excitation).
Our model setup is guided by indications from numerical simulations
\citep{Xiang-GruessPapaloizou13} that, in the presence of the misaligned planet,
the disk separates at the planet's orbital radius into inner and outer parts
that exhibit distinct dynamical behaviors even as each can still be well
approximated as a rigid body.
We integrate the secular dynamical evolution equations in the quadrupole
approximation for a system consisting of the inclined planet, the two disk
parts, and the spinning star, with the disk assumed to undergo continuous mass
depletion.
We show that this model can give rise to a broad range of values for the angle
between the angular momentum vectors of the inner disk and the star (including
values of $\theta_\mathrm{sd}$ in excess of $90^\circ$), but that the
orientation of the outer disk remains virtually unchanged.
We demonstrate that the misalignment is induced by the torque that the planet
exerts on the inner disk and that it is suppressed when the mass depletion time
in the outer disk is much longer than in the inner disk, so that the outer disk
remains comparatively massive and the fast precession that it induces in the
motions of the inner disk and the planet effectively breaks the dynamical
coupling between the latter two.
Our calculations reveal that the largest misalignments are attained when the
initial disk mass is low (on the order of that of observed systems at the onset
of the transition-disk phase).
We argued that, when the misalignment angle is large, the inner and outer parts
of the disk become fully detached and damping of the planet's orbital
inclination by dynamical friction effectively ceases.
This suggests a consistent primordial misalignment scenario: the inner region of
a protoplanetary disk can be strongly misaligned by a giant planet on a
high-inclination orbit if the disk's mass is low (i.e., late in the disk's
evolution); in turn, the planet's orbital inclination is least susceptible to
damping in a disk that undergoes a strong misalignment.
We find that, in addition to the precession-related oscillations seen in the
binary-companion model, the spin--orbit angle also exhibits a secular growth in
the planetary-companion case, corresponding to a monotonic increase in the angle
between the inner disk's and the total (inner disk plus planet) angular momentum
vectors (accompanied by a monotonic decrease in the angle between the planet's
and the total angular momentum vectors).
This behavior arises when the magnitude of the inner disk's angular momentum is
initially comparable to that of the planet but drops below it as a result of
mass depletion (on a timescale that is long in comparison with the precession
period).
This does not happen when the companion is a binary, since in that case the
companion's angular momentum far exceeds that of the inner disk at all times.
On the other hand, in the binary case the mass depletion process can drive the
system to a resonance between the disk--planet and star--disk precession
frequencies, which has the potential of significantly increasing the maximum
value of $\theta_\mathrm{sd}$ \citep[e.g.,][]{BatyginAdams13, Lai14}.
We show that this resonance is not encountered when the companion is a nearby
planet because---in contrast with the binary-companion case, in which the
disk--binary precession frequency remains constant---both of these precession
frequencies decrease with time in the planetary-companion case. However, we
also show that when the torque that the star exerts on the disk is
taken into account (and not just that exerted by the companion, as in previous
treatments), the misalignment effect of the resonance crossing in the binary
case is measurably weaker.
A key underlying assumption of the primordial disk-misalignment model is that
the planets embedded in the disk remain confined to its plane as the disk's
orientation shifts, so that their orbits become misaligned to the same extent as
that of the gaseous disk.
However, the precession frequency that a binary companion induces in the disk
can be significantly higher than the one induced by its direct interaction with
an embedded planet, which would lead to the planet's orbital plane separating
from that of the disk: this argument was used to critique the original version
of the primordial misalignment model \citep[e.g.,][]{PicognaMarzari15}.
This potential difficulty is mitigated, however, in the planetary-companion
scenario, where the ratio of these two frequencies is typically substantially
smaller.
The apparent difference in the obliquity properties of HJs around cool and hot
stars can be attributed to the tidal realignment of a cool host star by an
initially misaligned HJ \citep[e.g.,][]{Albrecht+12}.
The finding \citep{Mazeh+15} that this dichotomy is exhibited also by lower-mass
planets and extends to orbital distances where tidal interactions with the star
are very weak motivated the SHJ proposal \citep{MatsakosKonigl15}, which
postulates that $\sim$50\% of systems contain an HJ that arrives through
migration in the protoplanetary disk and becomes stranded near its inner edge
for a period of $\lesssim 1$\,Gyr---during which time the central star continues
to lose angular momentum by magnetic braking---until the tidal interaction with
the star finally causes it to be ingested (resulting in the transfer of the
planet's orbital angular momentum to the star and in the realignment of the
stellar spin in the case of cool stars).
This picture fits naturally with the primordial misalignment model discussed in
this paper.
In this broader scenario, the alignment properties of currently observed planets
(which do not include SHJs) can be explained if these planets largely remain
confined to the plane of their primordial parent disk.
In the case of cool stars the planets exhibit strong alignment on account of the
realignment action of a predecessor SHJ, whereas in the case of hot stars they
exhibit a broad range of spin--orbit angles, reflecting the primordial range of
disk misalignment angles that was preserved on account of the ineffectiveness of
the tidal realignment process in these stars.
A distinguishing prediction of the planetary-companion variant of the primordial
misalignment model in the context of this scenario arises from the expected
difference in the alignment properties of the inner and outer disks, which
implies that the good alignment exhibited by planets around cool stars should
give way to a broad range of apparent spin--orbit angles above a certain orbital
period.
There is already an observational indication of this trend \citep{LiWinn16}, but
additional data are needed to firm it up.
A complementary prediction, which is potentially also testable, is that the
range of obliquities exhibited by planets around hot stars would narrow toward
$\lambda=0^\circ$ at large orbital periods.
This scenario may also provide an explanation for another puzzling observational
finding---that large-scale debris disks are by and large well aligned with the
spin vector of the central star---which, on the face of it, seems inconsistent
with the spin-realignment hypothesis.
In this interpretation, debris disks are associated with the outer parts of
protoplanetary disks and should therefore remain aligned with the central
star---as a general rule for hot stars, but also in the case of cool hosts that
harbor a stranded HJ if they are observed before the SHJ realigns the star.
This explanation is consistent with the fact that the great majority of observed
debris disks have inferred ages $\ll 1$\,Gyr, but the extent to which it
addresses the above finding can be tested through its prediction that a
sufficiently large sample of older systems should also contain misaligned disks.
\acknowledgements
We are grateful to Dan Fabrycky, Tsevi Mazeh, and Sean Mills for fruitful
discussions.
We also thank Gongjie Li and Josh Winn for helpful correspondence, and the
referee for useful comments.
This work was supported in part by NASA ATP grant NNX13AH56G and has made
use of NASA's Astrophysics Data System Bibliographic Services and of
\texttt{matplotlib}, an open-source plotting library for Python
\citep{Hunter07}.
\bibliographystyle{apj}
| {'timestamp': '2016-12-07T02:10:22', 'yymm': '1612', 'arxiv_id': '1612.01985', 'language': 'en', 'url': 'https://arxiv.org/abs/1612.01985'} |
\section{Introduction}
Neural networks and especially deep learning architectures have become more and more popular recently \cite{lecun2015deep}. We believe that deep neural networks are the most powerful tools in a majority of classification problems (as in the case of image classification \cite{resnet}). Unfortunately, the use of neural networks in regression tasks is limited and it has recently been shown that a softmax distribution of clustered values tends to work better, even when the target is continuous \cite{wavenet}. In some cases seemingly continuous values may be understood as categorical ones (e.g. image pixel intensities) and the transformation between the types is straightforward \cite{Oord16}. However, sometimes this transformation cannot be simply incorporated (as in the case when targets span a huge set of possible values). Furthermore, forcing a neural network to predict multiple targets instead of just a single one makes the evaluation slower.
We want to present a method which fulfils the following requirements:
\begin{itemize}
\item gains an advantage from a categorical distribution, which makes predictions more accurate,
\item outputs a single value which is a solution to a given regression task,
\item may be evaluated as quickly as in the case of the original regression neural network.
\end{itemize}
The method proposed, called \emph{drawering}, is based on temporarily extending a given neural network that solves a regression task. The modified neural network has properties which improve learning. Once training is done, the original neural network is used standalone. The knowledge from the extended neural network appears to be transferred, and the original neural network achieves better results on the regression task.
The method presented is general and may be used to enhance any given neural network which is trained to solve any regression task. It affects only the learning procedure.
\section{Main idea}
\subsection{Assumptions}
The method presented may be applied to a regression task, hence we assume:
\begin{itemize}
\item the data $D$ consists of pairs $(x_i,y_i)$ where the input $x_i$ is a fixed-size real-valued vector and the target $y_i$ has a continuous value,
\item the neural network architecture $f(\cdot)$ is trained to find a relation between input $x$ and target $y$, for \mbox{$(x,y) \in D$},
\item a loss function $\mathcal{L}_f$ is used to assess the performance of $f(\cdot)$ by scoring $\sum_{(x,y)\in D} \mathcal{L}_f(f(x), y)$, the lower the better.
\end{itemize}
\subsection{Neural network modification} \label{firtsMentionOfPercentiles}
In this setup, any given neural network $f(\cdot)$ may be understood as a composition $f(\cdot) = g(h(\cdot))$, where $g(\cdot)$ is the last part of the neural network $f(\cdot)$, i.e. $g(\cdot)$ applies one matrix multiplication and optionally a non-linearity. In other words, the vector $z = h(x)$ is the value of the last hidden layer for an input $x$, and the value $g(z)$ may be written as $g(z) = \sigma (Gh(x))$ for a matrix $G$ and some function $\sigma$ (one can notice that $G$ is in fact just a vector). The job done by $g(\cdot)$ is simply to squeeze all the information from the last hidden layer into one value.
In simple words, the neural network $f(\cdot)$ may be divided into two parts: the first, the core $h(\cdot)$, performs the majority of the calculations, and the second, tiny one, $g(\cdot)$, calculates a single value, a prediction, based on the output of $h(\cdot)$.
Our main idea is to extend the neural network $f(\cdot)$. For every input $x$ the value of the last hidden layer $z = h(x)$ is duplicated and processed by two independent, parameterized functions. The first of them is $g(\cdot)$ as before and the second one is called $s(\cdot)$. The original neural network $g(h(\cdot))$ is trained to minimize the given loss function $\mathcal{L}_f$ and the neural network $s(h(\cdot))$ is trained with a new loss function $\mathcal{L}_s$.
An example of the described extension is presented in Figure \ref{drawering_example}.
For the sake of consistency the loss function $\mathcal{L}_f$ will be called $\mathcal{L}_g$.
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{drawering_example}
\caption{A sample extension of the function $f(\cdot)$. The function $g(\cdot)$ always squeezes the last hidden layer into one value. On the other hand, the function $s(\cdot)$ may have hidden layers, but the simplest architecture is presented.}
\label{drawering_example}
\end{figure}
Note that the functions $g(h(\cdot))$ and $s(h(\cdot))$ share parameters, because they are compositions having the same inner function $h(\cdot)$. Since the parameters of $h(\cdot)$ are shared, learning $s(h(\cdot))$ influences $g(h(\cdot))$ (and the other way around). We want to train all these functions jointly, which may be hard in general, but the function $s(\cdot)$ and the loss function $\mathcal{L}_s$ are constructed in a special way, presented below.
All real values are clustered into $n$ consecutive intervals i.e. disjoint sets $e_1, e_2, ..., e_n$ such that
\begin{itemize}
\item $\cup_{i=1}^n e_i$ covers all real numbers,
\item $r_j < r_k$ for $r_j \in e_j, r_k \in e_k$, when $j < k$.
\end{itemize}
The function $s(h(\cdot))$ (evaluated for an input $x$) is trained to predict which of the sets $(e_i)_{i=1}^n$ contains $y$ for a pair $(x,y) \in D$. The loss function $\mathcal{L}_s$ may be defined as the (multi-class) cross-entropy loss which is typically used in classification problems. In the simplest form the function $s(\cdot)$ may be just a multiplication by a matrix $S$ (whose first dimension is $n$).
To sum up, \emph{drawering} in its basic form is to add an additional, parallel layer which takes as input the value of the last hidden layer of the original neural network $f(\cdot)$. The modified (\emph{drawered}) neural network is trained to predict not only the original target, but also an additional one which captures the order of magnitude of the original target. As a result, the extended neural network simultaneously solves the regression task and a related classification problem.
One possibility to define sets $e_i$, called \emph{drawers}, is to take suitable percentiles of target values to make each $e_i$ contain roughly the same number of them.
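For illustration, the basic \emph{drawered} setup can be written in a few lines of PyTorch. The sketch below is ours, not part of the original implementation; the sizes \texttt{D\_IN}, \texttt{D\_HID} and \texttt{N\_DRAWERS} are placeholder values:
\begin{verbatim}
import torch.nn as nn

D_IN, D_HID, N_DRAWERS = 32, 128, 10   # placeholder sizes

# h(.): the core of the network, outputs the last hidden layer z
h = nn.Sequential(nn.Linear(D_IN, D_HID), nn.ReLU())
# g(.): the original regression head, squeezes z into one value
g = nn.Linear(D_HID, 1)
# s(.): the added parallel head, classifies the target into a drawer
s = nn.Linear(D_HID, N_DRAWERS)

def forward(x):
    z = h(x)              # shared computation
    return g(z), s(z)     # regression prediction and drawer logits
\end{verbatim}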
\subsection{Training functions $g(h(\cdot))$ and $s(h(\cdot))$ jointly} \label{howToTrainFunctions}
Training is done using gradient descent, hence it is sufficient to obtain the gradients of all the functions defined, i.e. $h(\cdot)$, $g(\cdot)$ and $s(\cdot)$. For a given pair $(x,y) \in D$ the forward pass for $g(h(x))$ and $s(h(x))$ is calculated (note that the majority of the calculations is shared). Afterwards, two backpropagations are processed.
The backpropagation for the composition $g(h(x))$ using loss function $\mathcal{L}_g$ returns a vector which is a concatenation of two vectors $grad_g$ and $grad_{h,g}$, such that $grad_g$ is the gradient of function $g(\cdot)$ at the point $h(x)$ and $grad_{h,g}$ is the gradient of function $h(\cdot)$ at the point $x$. Similarly, the backpropagation for $s(h(x))$ using loss function $\mathcal{L}_s$ gives two gradients $grad_s$ and $grad_{h,s}$ for functions $s(\cdot)$ and $h(\cdot)$, respectively.
The computed gradients of $g(\cdot)$ and $s(\cdot)$ parameters (i.e. $grad_g$ and $grad_s$) can be applied as in the normal case -- each one of those functions takes part in only one of the backpropagations.
Updating the parameters belonging to the $h(\cdot)$ part is more complex, because we obtain two different gradients, $grad_{h,g}$ and $grad_{h,s}$. It is worth noting that the $h(\cdot)$ parameters are the only common parameters of the compositions $g(h(x))$ and $s(h(x))$. We want to take an average of the gradients $grad_{h,g}$ and $grad_{h,s}$ and apply it (i.e., update the $h(\cdot)$ parameters). Unfortunately, their orders of magnitude may be different, so taking an unweighted average may result in minimizing only one of the loss functions $\mathcal{L}_g$ or $\mathcal{L}_s$. To address this problem, the overall magnitudes $a_g$ and $a_s$ of both gradients are calculated.
Formally, the norm $L^1$ is used to define:
\begin{equation}
a_g = \left\lVert grad_{h,g} \right\rVert_1,
\end{equation}
\begin{equation*}
a_s = \left\lVert grad_{h,s} \right\rVert_1.
\end{equation*}
The values $a_g$ and $a_s$ approximately describe the impacts of the loss functions $\mathcal{L}_g$ and $\mathcal{L}_s$, respectively. The final vector $grad_h$, which will be used as the gradient of the $h(\cdot)$ parameters in the gradient descent procedure, equals:
\begin{equation}
grad_h = \alpha grad_{h,g} + (1 - \alpha) \frac{a_g}{a_s} grad_{h,s}
\end{equation}
for a hyperparameter $\alpha \in (0,1)$, typically $\alpha = 0.5$. This strategy makes the updates of the $h(\cdot)$ parameters have the same order of magnitude as in the process of learning the original neural network $f(\cdot)$ (without \emph{drawering}).
One can also normalize the gradient $grad_{h,g}$ instead of the gradient $grad_{h,s}$, but it may need more adjustments in the hyperparameters of the learning procedure (e.g. learning rate alteration may be required).
Note that for $\alpha = 1$ the learning procedure is identical to the original case, where the function $f$ is trained using the loss function $\mathcal{L}_g$ only.
It is useful to bear in mind that both backpropagations also share a lot of calculations. In the extreme case, when the ratio $\frac{a_g}{a_s}$ is known in advance, one backpropagation may be performed simultaneously for the loss function $\mathcal{L}_g$ and the weighted loss function $\mathcal{L}_s$. We noticed that the ratio needed is roughly constant between batch iterations, therefore it may be calculated in the initial phase of learning. Afterwards it may be checked and updated from time to time.
\textit{In this section we slightly abused the notation -- a value of gradient at a given point is called just a gradient since it is obvious what point is considered.}
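For concreteness, a whole training step may be sketched as follows (PyTorch; a minimal sketch of ours, assuming the modules \texttt{h}, \texttt{g} and \texttt{s} from the earlier sketch and suitable loss functions, with the ratio $\frac{a_g}{a_s}$ recomputed at every step):
\begin{verbatim}
import torch

ALPHA = 0.5   # hyperparameter weighting the two gradients

def training_step(x, y, drawer_target, optimizer, loss_g_fn, loss_s_fn):
    optimizer.zero_grad()
    z = h(x)                                 # shared forward pass
    loss_g = loss_g_fn(g(z), y)              # regression loss L_g
    loss_s = loss_s_fn(s(z), drawer_target)  # classification loss L_s

    h_params = list(h.parameters())
    g_params = list(g.parameters())
    s_params = list(s.parameters())

    # two backpropagations through the shared core h(.)
    grad_hg = torch.autograd.grad(loss_g, h_params, retain_graph=True)
    grad_hs = torch.autograd.grad(loss_s, h_params, retain_graph=True)

    # the private heads each take part in a single backpropagation
    for p, gr in zip(g_params, torch.autograd.grad(loss_g, g_params,
                                                   retain_graph=True)):
        p.grad = gr
    for p, gr in zip(s_params, torch.autograd.grad(loss_s, s_params)):
        p.grad = gr

    # L1 magnitudes a_g, a_s and the weighted average grad_h
    a_g = sum(gr.abs().sum() for gr in grad_hg)
    a_s = sum(gr.abs().sum() for gr in grad_hs)
    for p, gg, gs in zip(h_params, grad_hg, grad_hs):
        p.grad = ALPHA * gg + (1 - ALPHA) * (a_g / a_s) * gs

    optimizer.step()
    return loss_g.item()
\end{verbatim}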
\subsection{Defining \emph{drawers}}
\subsubsection{Regular and uneven}\label{evenUneven}
We mentioned in Subsection \ref{firtsMentionOfPercentiles} that the simplest way of defining \emph{drawers} is to take intervals whose endpoints are suitable percentiles that distribute the target values uniformly. In this case $n$ \emph{regular drawers} are defined in the following way:
\begin{equation}
e_i = (q_{i-1,n}, q_{i,n}]
\end{equation}
where $q_{i,n}$ is the $\frac{i}{n}$-quantile of the targets $y$ from the training set (the values $q_{0,n}$ and $q_{n,n}$ are defined as \emph{minus infinity} and \emph{plus infinity}, respectively).
This way of defining \emph{drawers} makes each interval $e_i$ contain approximately the same number of target values.
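In code, the \emph{regular drawer} boundaries and the \emph{drawer} index of a target may be computed as follows (Python with NumPy; the helper names are ours):
\begin{verbatim}
import numpy as np

def regular_drawer_edges(train_targets, n):
    """Endpoints q_{1,n}, ..., q_{n-1,n}: the i/n-quantiles of the
    training targets (the outer endpoints are -inf and +inf)."""
    return np.quantile(train_targets, [i / n for i in range(1, n)])

def drawer_index(y, edges):
    """0-based index i such that y lies in e_{i+1} = (q_{i,n}, q_{i+1,n}]."""
    return int(np.searchsorted(edges, y, side='left'))
\end{verbatim}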
However, we noticed that an alternative way of defining the $e_i$'s may be proposed, which tends to support the classical mean square error (MSE) loss better. The MSE loss penalizes more when the difference between the given target and the prediction is larger. To account for this, \emph{drawers} may be defined in a way which encourages the learning procedure to focus on extreme values: \emph{drawers} should group the middle values in bigger clusters while placing extreme values in smaller ones. The definition of $2n$ \emph{uneven drawers} is as follows:
\begin{equation}
e_i = (q_{1,2^{n-i+2}}, q_{2,2^{n-i+2}}], \text{ for } i \leq n,
\end{equation}
\begin{equation*}
e_i = (q_{2^{i-n+1}-2,2^{i-n+1}}, q_{2^{i-n+1}-1,2^{i-n+1}}], \text{ for } i>n.
\end{equation*}
In this case every \emph{drawer} $e_{i+1}$ contains approximately twice as many target values as \emph{drawer} $e_i$ for $i<n$ (as before, the extreme quantiles are taken to be $\mp\infty$, so that the outermost \emph{drawers} cover the tails). Finally, $e_n$ and $e_{n+1}$ each contain $25\%$ of all target values, the maximum. Symmetrically to the ascending intervals in the first half, the $e_i$ are descending for $i>n$, i.e. they contain fewer and fewer target values.
The number of \emph{drawers} $n$ is a hyperparameter. The bigger $n$, the more complex the distribution that may be modeled. On the other hand, each \emph{drawer} has to contain enough representatives among the targets from the training set. In our experiments each \emph{drawer} contained at least 500 target values.
\subsubsection{Disjoint and nested} \label{disjointNested}
We observed that sometimes it may be better to train $s(h(\cdot))$ to predict whether the target is in a set $f_j$, where $f_j = \cup_{i=j}^n e_i$. In this case $s(h(\cdot))$ has to answer a simpler question: \textit{``Is the target higher than a given value?''} instead of bounding the target value from both sides. Of course, in this case $s(h(x))$ no longer solves a single multi-class classification problem, but every value of $s(h(x))$ may be assessed independently by a binary cross-entropy loss (see the sketch at the end of this subsection).\\
Therefore, \emph{drawers} may be:
\begin{itemize}
\item \emph{regular} or \emph{uneven},
\item \emph{nested} or \emph{disjoint}.
\end{itemize}
These divisions are orthogonal. In all experiments described in this paper (Section \ref{experiments}) \emph{uneven drawers} were used.
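In the \emph{nested} case the classification target becomes a vector of independent binary labels, one per \emph{drawer} boundary, matching a sigmoid output layer trained with a binary cross-entropy loss. A sketch (our helper, reusing the boundary array \texttt{edges} from the previous sketch):
\begin{verbatim}
import numpy as np

def nested_targets(y, edges):
    """One binary label per drawer boundary, each answering:
    "is the target higher than this boundary?"  (The trivial set
    f_1, which contains every target, needs no output.)"""
    return (y > edges).astype(np.float32)
\end{verbatim}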
\section{Logic behind our idea}
We believe that \emph{drawering} improves learning by providing the following properties.
\begin{itemize}
\item The extension $s(\cdot)$ gives additional expressive power to a given neural network. It is used to predict an additional target, but since this target is closely related to the original one, we believe that the gained knowledge is transferred to the core of the given neural network, $h(\cdot)$.
\item Since categorical distributions do not assume any particular shape, they can model arbitrary distributions -- they are more flexible.
\item We argue that classification loss functions provide better-behaved gradients than regression ones. As a result, the evolution of a classification neural network is smoother during learning.
\item An additional target (even a closely related one) works as a regularization, as is typical in multitask learning \cite{thrun1996learning}.
\end{itemize}
\section{Model comparison}
The effectiveness of the presented method was established by comparison. The original and the \emph{drawered} neural network were trained on the same dataset and, once the trainings were completed, the neural networks' performances on a given test set were measured. Since \emph{drawering} affects only the learning procedure, the comparison is fair.
All learning procedures depend on random initialization, hence, to obtain reliable results, several learning procedures were performed in both setups. Adam \cite{adam} was chosen for stochastic optimization.
The comparison was done on two datasets described in the following section. The results are described in Section \ref{experiments}.
\section{Data}
The method presented was tested on two datasets.
\subsection{Rossmann Store Sales}
The first dataset is public and was used during the \emph{Rossmann Store Sales} competition on the well-known platform \emph{kaggle.com}. The official description starts as follows:
\begin{quote}
Rossmann operates over 3,000 drug stores in 7 European countries. Currently, Rossmann store managers are tasked with predicting their daily sales for up to six weeks in advance. Store sales are influenced by many factors, including promotions, competition, school and state holidays, seasonality, and locality. With thousands of individual managers predicting sales based on their unique circumstances, the accuracy of results can be quite varied.
\end{quote}
The dataset contains mainly categorical features like information about state holidays, an indicator whether a store is running a promotion on a given day etc.
Since we needed ground-truth labels, only the train part of the dataset was used (in \emph{kaggle.com} notation). We split this data into a new training set, a validation set and a test set by time. The training set ($648k$ records) consists of all observations before the year 2015. The validation set ($112k$ records) contains all observations from January, February, March and April 2015. Finally, the test set ($84k$ records) covers the rest of the observations from the year 2015.
In our version of this task the target $y$ is the normalized logarithm of the turnover for a given day. The logarithm was used since the turnovers are exponentially distributed. An input $x$ consists of all information provided in the original dataset except for the \emph{Promo2}-related information. The day and the month were extracted from a given date (the year was ignored).
The biggest challenge linked with this dataset is not to overfit the trained model, because the dataset is relatively small and encoding layers have to be used to cope with the categorical variables. The differences between the scores on the train, validation and test sets were significant and seemed to grow during learning. We believe that \emph{drawering} prevents overfitting -- it works as a regularization in this case.
\subsection{Conversion value task}
This private dataset depicts the conversion value task, i.e. a regression problem where one wants to predict the value of the next item bought by a given customer who clicked on a displayed ad.
The dataset describes the states of customers at the time of ad impressions. The state (input $x$) is a vector of mainly continuous features like the price of the last item seen, the value of the previous purchase, the number of items in the basket, etc. The target $y$ is the price of the next item bought by the given user. The price is always positive, since only users who clicked an ad and converted are incorporated into the dataset.
The dataset was split into a training set ($2.1$ million records) and a validation set ($0.9$ million records). Initially there was also a test set extracted from the validation set, but it turned out that the scores on the validation and test sets are almost identical.
We believe that the biggest challenge while working on the conversion value task is to tame gradients which vary a lot. That is to say, for two pairs $(x_1, y_1)$ and $(x_2, y_2)$ from the dataset, the inputs $x_1$ and $x_2$ may be close to each other or even identical, but the targets $y_1$ and $y_2$ may not even have the same order of magnitude. As a result, gradients may remain relatively high even during the last phase of learning, and the model may tend to predict the last encountered target ($y_1$ or $y_2$) instead of predicting an average of them. We argue that \emph{drawering} helps to find general patterns by providing better-behaved gradients.
\section{Experiments}\label{experiments}
In this section the results of the comparisons described in the previous section are presented.
\subsection{Rossmann Store Sales}
In this case the original neural network $f(\cdot)$ takes an input which is produced from 14 values -- 12 categorical and 2 continuous ones. Each categorical value is encoded into a vector of size $\min(k,10)$, where $k$ is the number of all possible values of the given categorical variable. The minimum is applied to avoid incorporating redundancy. Both continuous features are normalized. The concatenation of all encoded features and the two continuous variables produces the input vector $x$ of size 75.
The neural network $f(\cdot)$ has a sequential form and is defined as follows:
\begin{itemize}
\item an input is processed by $h(\cdot)$ which is as follows:
\begin{itemize}
\item $Linear(75, 64)$,
\item $ReLU$,
\item $Linear(64, 128)$,
\item $ReLU$,
\end{itemize}
\item afterwards an output of $h(\cdot)$ is fed to a simple function $g(\cdot)$ which is just a $Linear(128, 1)$.
\end{itemize}
The \emph{drawered} neural network with incorporated $s(\cdot)$ is as follows:
\begin{itemize}
\item as in the original $f(\cdot)$, the same $h(\cdot)$ processes an input,
\item an output of $h(\cdot)$ is duplicated and processed independently by $g(\cdot)$ which is the same as in the original $f(\cdot)$ and $s(\cdot)$ which is as follows:
\begin{itemize}
\item $Linear(128, 1024)$,
\item $ReLU$,
\item $Dropout(0.5)$,
\item $Linear(1024, 19)$,
\item $Sigmoid$.
\end{itemize}
\end{itemize}
\emph{The Torch notation is used here:
\begin{itemize}
\item $Linear(a, b)$ is a linear transformation -- vector of size $a$ into vector of size $b$,
\item $ReLU$ is the rectifier function applied pointwise,
\item $Sigmoid$ is the sigmoid function applied pointwise,
\item $Dropout$ is a dropout layer \cite{srivastava2014dropout}.
\end{itemize}
}
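Rendered in PyTorch (our transcription of the specification above, not the authors' code), the \emph{drawered} neural network reads:
\begin{verbatim}
import torch.nn as nn

h = nn.Sequential(nn.Linear(75, 64), nn.ReLU(),
                  nn.Linear(64, 128), nn.ReLU())
g = nn.Linear(128, 1)
s = nn.Sequential(nn.Linear(128, 1024), nn.ReLU(), nn.Dropout(0.5),
                  nn.Linear(1024, 19), nn.Sigmoid())
\end{verbatim}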
The \emph{drawered} neural network has roughly $150k$ more parameters. This is a significant advantage, but these additional parameters are used only to calculate the new target, and the additional calculations may be skipped during evaluation. We believe that the patterns found to answer the additional target, which is related to the original one, are transferred to the core part $h(\cdot)$.
We used dropout only in $s(\cdot)$, since incorporating dropout into $h(\cdot)$ causes instability in learning. While working on regression tasks we noticed that this may be a general issue and that it should be investigated, but that is beyond the scope of this paper.
Fifty learning procedures for both the original and the extended neural network were performed. They were stopped after fifty iterations without any progress on the validation set (and after at least one hundred iterations in total). The iteration of the model which performed best on the validation set was chosen and evaluated on the test set. The loss function used was the classic squared error loss.
The minimal error on the test set achieved by the \emph{drawered} neural network is $4.481$, which is $7.5\%$ better than the best original neural network. The difference between the averages of the Top5 scores is also around $7.5\%$ in favor of \emph{drawering}. When analyzing the average of all fifty models per method, the difference seems to be blurred. This is caused by the fact that a few learning procedures overfitted too much and achieved unsatisfying results. But even in this case the average for the \emph{drawered} neural networks is about $3.8\%$ better. All these scores, with standard deviations, are shown in Table \ref{rossmannScores}.
\begin{table}[!h]
\renewcommand{\arraystretch}{1.3}
\caption{Rossmann Store Sales Scores}
\label{rossmannScores}
\centering
\begin{tabular}{c||c|c|c|c|c}
Model & Min & Top5 mean & Top5 std & All mean & All std\\
\hline
Original & $4.847$ & $4.930$ & $0.113$ & $5.437$ & $0.259$\\
Extended & $4.481$ & $4.558$ & $0.095$ & $5.232$ & $0.331$\\
\end{tabular}
\end{table}
\textit{We have to note that extending $h(\cdot)$ by an additional $150k$ parameters may result in even better performance, but it would drastically slow down evaluation. Moreover, we noticed that simple extensions of the original neural network $f(\cdot)$ tend to overfit and did not achieve better results.}
The train errors may also be investigated. In this case the original neural network performs better, which supports our thesis that \emph{drawering} works as a regularization. Detailed results are presented in Table \ref{rossmannScoresTrain}.
\begin{table}[!h]
\renewcommand{\arraystretch}{1.3}
\caption{Rossmann Store Sales Scores on Training Set}
\label{rossmannScoresTrain}
\centering
\begin{tabular}{c||c|c|c|c|c}
Model & Min & Top5 mean & Top5 std & All mean & All std\\
\hline
Original & $3.484$ & $3.571$ & $0.059$ & $3.494$ & $0.009$\\
Extended & $3.555$ & $3.655$ & $0.049$ & $3.561$ & $0.012$\\
\end{tabular}
\end{table}
\subsection{Conversion value task}
This dataset provides detailed user descriptions which consist of 6 categorical features and more than 400 continuous ones. After encoding, the original neural network $f(\cdot)$ takes an input vector of size 700. The core part $h(\cdot)$ is a neural network with 3 layers that outputs a vector of size 200. The function $g(\cdot)$ and the extension $s(\cdot)$ are simple: $Linear(200,1)$ and $Linear(200, 21)$, respectively.
In the case of the conversion value task we do not provide a detailed model description, since the dataset is private and this experiment cannot be reproduced. However, we decided to incorporate this comparison into the paper because two versions of \emph{drawers} were tested on this dataset (\emph{disjoint} and \emph{nested}). We also want to point out that we invented the \emph{drawering} method while working on this dataset and afterwards decided to test the method on public data. We were unable to achieve superior results without \emph{drawering}. Therefore, we believe that the work done on this dataset (despite its privacy) should be presented.
To obtain more reliable results ten learning procedures were performed for each setup:
\begin{itemize}
\item \textit{Original} -- the original neural network $f(\cdot)$,
\item \textit{Disjoint} -- \emph{drawered} neural network for \emph{disjoint drawers},
\item \textit{Nested} -- \emph{drawered} neural network for \emph{nested drawers}.
\end{itemize}
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{cv}
\caption{Sample evolutions of scores on the validation set during the learning procedures.}
\label{cvcurves}
\end{figure}
In Figure \ref{cvcurves} six learning curves are shown. For each of the three setups the best and the worst ones were chosen; the minima of the remaining eight runs lie between the representatives shown. The first 50 iterations were skipped to make the figure more legible. Each learning procedure was finished after 30 iterations without any progress on the validation set.
It may easily be inferred that all twenty \emph{drawered} neural networks performed significantly better than the neural networks trained without the extension. The difference between the \textit{Disjoint} and \textit{Nested} versions is also noticeable, and \textit{Disjoint} \emph{drawers} tend to perform slightly better.
In the Rossmann Store Sales case we experienced the opposite, hence the version of \emph{drawers} may be understood as a hyperparameter. We suppose that it may be related to the size of the given dataset.
\section{Analysis of $s(h(x))$ values}
The values of $s(h(x))$ may be analyzed. For a pair \mbox{$(x,y) \in D$}, the $i$-th value of the vector $s(h(x))$ is the probability that the target $y$ belongs to the set $f_i$. In this section we assume that the \emph{drawers} are nested, hence the values of $s(h(x))$ should be descending. Notice that we do not force this property through the architecture of the \emph{drawered} neural network, so it is a side effect of the nested structure of the \emph{drawers}.
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{rss}
\caption{Sample values of $s(h(x))$ for a randomly chosen model solving Rossmann Store Sales problem (nested \emph{drawers}).}
\label{rss}
\end{figure}
In Figure \ref{rss} a few sample distributions are shown. Each accompanying label is the ground truth (the $i$ such that $e_i$ contains the target value). The values of $s(h(x))$ are clearly monotonic, as expected. It seems that $s(h(x))$ performs well -- the values are close to one in the beginning and to zero in the end. The switch occurs in the right place, close to the ground-truth label, missing by at most one \emph{drawer}.
\section{Conclusion}
The method presented, \emph{drawering}, extends a given regression neural network in a way that makes training more effective. The modification affects the learning procedure only, hence once a \emph{drawered} model is trained, the extension may be easily omitted during evaluation without any change in prediction. It means that the modified model may be evaluated as fast as the original one, but it tends to perform better.
\newpage
We believe that this improvement is possible because the \emph{drawered} neural network has greater expressive power, is provided with better-behaved gradients, can model arbitrary distributions and is regularized. It turns out that the knowledge gained by the modified neural network is contained in the parameters shared with the given neural network.
Since the only cost is an increase in learning time, we believe that in cases when better performance is more important than training time, \emph{drawering} should be incorporated into a given regression neural network.
\bibliographystyle{IEEEtran}
| {'timestamp': '2016-12-07T02:01:38', 'yymm': '1612', 'arxiv_id': '1612.01589', 'language': 'en', 'url': 'https://arxiv.org/abs/1612.01589'} |
\section{Introduction}
At the meeting of the American Mathematical Society in Hayward, California, in April 1977, Olga Taussky-Todd \cite{TausskyTodd} asked whether one could characterize the values of the group determinant when the entries are all integers.
For a prime $p,$ a complete description was obtained for $\mathbb Z_{p}$ and $\mathbb Z_{2p}$, the cyclic groups of order $p$ and $2p$, in \cite{Newman1} and \cite{Laquer}, and for $D_{2p}$ and $D_{4p}$ the dihedral groups of order $2p$ and $4p$ in \cite{dihedral}. The values for $Q_{4n}$, the dicyclic group of order $4n$ were explored in \cite{dicyclic}
with a near complete description for $Q_{4p}$. In general though this quickly becomes a hard problem,
with only partial results known even for $\mathbb Z_{p^2}$ once $p\geq 7$ (see \cite{Newman2} and \cite{Mike}).
The remaining groups of order less than 15 were tackled in \cite{smallgps} and $\mathbb Z_{15}$ in \cite{bishnu1}.
The integer group determinants have been determined for all five abelian groups of order 16 ($\mathbb Z_2 \times \mathbb Z_8$, $\mathbb Z_{16}$, $\mathbb Z_2^4$, $\mathbb Z_4^2$, $\mathbb Z_2^2 \times\mathbb Z_4$ in \cite{Yamaguchi1,Yamaguchi2,Yamaguchi3,Yamaguchi4,Yamaguchi5}), and for three of the non-abelian groups
($D_{16}$, $\mathbb Z_2\times D_8$, $\mathbb Z_2 \times Q_8$ in \cite{dihedral,ZnxH}).
Here we determine the group determinants for $Q_{16}$, the dicyclic or generalized quaternion group of order 16.
$$ Q_{16}=\langle X,Y \; | \; X^8=1,\; Y^2=X^4,\; XY=YX^{-1}\rangle. $$
This leaves five unresolved non-abelian groups of order 16.
\begin{theorem} The even integer group determinants for $Q_{16}$ are exactly the multiples of $2^{10}$.
The odd integer group determinants are all the integers $n\equiv 1$ mod 8 plus those $n\equiv 5$ mod 8 of the form
$n=mp^2$ where $m\equiv 5$ mod 8 and $p\equiv 7$ mod $8$ is prime.
\end{theorem}
We shall think here of the group determinant as being defined on elements of the group ring $\mathbb Z [G]$
$$ \mathcal{D}_G\left( \sum_{g\in G} a_g g \right)=\det\left( a_{gh^{-1}}\right) .$$
\begin{comment}
We observe the multiplicative property
\begin{equation} \label{mult} \mathcal{D}_G(xy)= \mathcal{D}_G(x)\mathcal{D}_G(y), \end{equation}
using that
$$ x=\sum_{g \in G} a_g g,\;\;\; y=\sum_{g \in G} b_g g \; \Rightarrow \; xy=\sum_{g\in G} \left(\sum_{hk=g}a_hb_k\right) g. $$
\end{comment}
Frobenius \cite{Frob} observed that the group determinant can be factored using the groups representations (see for example \cite{Conrad} or \cite{book})
and an explicit expression for a dicyclic group determinant was given in \cite{smallgps}. For $Q_{16}$, arranging the
16 coefficients into two polynomials of degree 7
$$ f(x)=\sum_{j=0}^7 a_j x^j,\;\; g(x)=\sum_{j=0}^7 b_jx^j, $$
and writing the primitive 8th root of unity $\omega:=e^{2\pi i/8}=\frac{\sqrt{2}}{2}(1+i)$, this becomes
\begin{equation} \label{form}\mathcal{D}_G\left( \sum_{j=0}^7 a_j X^j + \sum_{j=0}^7 b_j YX^j\right) =ABC^2D^2 \end{equation}
with integers $A,B,C,D$ from
\begin{align*}
A=& f(1)^2- g(1)^2\\
B=& f(-1)^2-g(-1)^2\\
C=& |f(i)|^2-|g(i)|^2 \\
D=& \left(|f(\omega)|^2+|g(\omega)|^2\right)\left(|f(\omega^3)|^2+|g(\omega^3)|^2\right).
\end{align*}
From \cite[Lemma 5.2]{dicyclic} we know that the even values must be multiples of $2^{10}$. The odd values must be
1 mod 4 (plainly $f(1)$ and $g(1)$ must be of opposite parity and $A\equiv B\equiv \pm 1$ mod 4 with $(CD)^2\equiv 1$ mod 4).
\section{Achieving the values $n\not \equiv 5$ mod 8}
We can achieve all the multiples of $2^{10}$.
Writing $h(x):=(x+1)(x^2+1)(x^4+1),$ we achieve the $2^{10}(-3+4m)$ from
$$
f(x) = (1-m)h(x),\quad
g(x)=1+x^2+x^3+x^4-mh(x), $$
the $2^{10}(-1+4m)$ from
$$ f(x)= 1+x+x^4+x^5-mh(x),\;\;\;\;
g(x)= 1+x-x^3-x^7-mh(x), $$
the $2^{11}(-1+2m)$ from
$$ f(x)= 1+x+x^2+x^3+x^4+x^5-mh(x),\;\;\quad
g(x)=1+x^4-mh(x), $$
and the $2^{12}m$ from
$$ f(x)= 1+x+x^4+x^5-x^6-x^7-mh(x),\;\;
g(x)= 1+x-x^3+x^4+x^5-x^7+mh(x). $$
We can achieve all the $n\equiv 1$ mod 8; the $1+16m$ from
$$ f(x)=1+mh(x),\;\; g(x)=mh(x), $$
and the $-7+16m$ from
$$f(x)= 1-x+x^2+x^3+x^7- mh(x),\;\;
g(x)= 1+x^3+x^4+x^7-mh(x). $$
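These assignments are easy to verify numerically. For instance, with $m=0$ in the first family ($f=h$, $g=1+x^2+x^3+x^4$), the following Python sketch (ours, using floating-point arithmetic) evaluates \eqref{form} and recovers $2^{10}\cdot(-3)=-3072$:
\begin{verbatim}
import cmath

def P(c, x):
    """Evaluate the polynomial with coefficient list c at x."""
    return sum(a * x**k for k, a in enumerate(c))

f = [1, 1, 1, 1, 1, 1, 1, 1]      # f(x) = h(x) = (x+1)(x^2+1)(x^4+1)
g = [1, 0, 1, 1, 1, 0, 0, 0]      # g(x) = 1 + x^2 + x^3 + x^4
w = cmath.exp(2j * cmath.pi / 8)  # primitive 8th root of unity

A = P(f, 1)**2 - P(g, 1)**2
B = P(f, -1)**2 - P(g, -1)**2
C = abs(P(f, 1j))**2 - abs(P(g, 1j))**2
D = ((abs(P(f, w))**2 + abs(P(g, w))**2)
     * (abs(P(f, w**3))**2 + abs(P(g, w**3))**2))
print(round(A * B * C**2 * D**2))  # -3072 == 2**10 * (-3)
\end{verbatim}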
\section{ The form of the $n\equiv 5$ mod 8}
This leaves the $n\equiv 5$ mod 8. Since $(CD)^2\equiv 1$ mod 8 we must have $AB\equiv 5$ mod 8. Switching $f$ and $g$ as necessary we assume that $f(1),f(-1)$ are odd and $g(1),g(-1)$ even. Replacing $x$ by $-x$ if needed we can assume that $g(1)^2\equiv 4$ mod 8 and $g(-1)^2\equiv 0$ mod 8.
We write
$$ F(x)=f(x)f(x^{-1})= \sum_{j=0}^7 c_j (x+x^{-1})^j, \quad G(x)=g(x)g(x^{-1})= \sum_{j=0}^7 d_j (x+x^{-1})^j, $$
with the $c_j,d_j$ in $\mathbb Z$.
From $F(1),F(-1)\equiv 1$ mod 8 we have
$$ c_0+2c_1+4c_2 \equiv 1 \text{ mod }8, \quad c_0-2c_1+4c_2 \equiv 1 \text{ mod }8, $$
and $c_0$ is odd and $c_1$ even.
From $G(1)\equiv 4$, $G(-1)\equiv 0$ mod 8 we have
$$ d_0+2d_1+4d_2 \equiv 4 \text{ mod 8}, \quad d_0-2d_1+4d_2 \equiv 0 \text{ mod } 8, $$
and $d_0$ is even and $d_1$ is odd.
Since $\omega+\omega^{-1}=\sqrt{2}$ we get
\begin{align*} F(\omega) & = (c_0+2c_2+4c_4+\ldots ) + \sqrt{2}(c_1+2c_3+4c_5+\cdots),\\
G(\omega) & = (d_0+2d_2+4d_4+\ldots ) + \sqrt{2}(d_1+2d_3+4d_5+\cdots),
\end{align*}
and
$$|f(\omega)|^2+|g(\omega)|^2= F(\omega)+G(\omega) = X+ \sqrt{2} Y>0, \quad X, Y \text{ odd}, $$
with $ |f(\omega^3)|^2+|g(\omega^3)|^2=F(\omega^3)+G(\omega^3) = X- \sqrt{2} Y>0$. Hence the positive integer $D=X^2-2Y^2\equiv -1$ mod 8.
Notice that primes 3 and 5 mod 8 do not split in $\mathbb Z[\sqrt{2}]$ so only their squares can occur in $D$. Hence
$D$ must contain at least one prime $p\equiv 7$ mod 8, giving the claimed form of the values 5 mod 8.
\section{Achieving the specified values 5 mod 8}
Suppose that $p\equiv 7$ mod 8 and $m\equiv 5$ mod 8. We need to achieve $mp^2$.
Since $p\equiv 7$ mod 8 we know that $\left(\frac{2}{p}\right)=1$ and $p$ splits in $\mathbb Z[\sqrt{2}].$ Since $\mathbb Z[\sqrt{2}]$ is
a UFD, a generator for the prime factor gives a solution to
$$ X^2-2Y^2=p, \;\; X,Y\in \mathbb N. $$
Plainly $X,Y$ must both be odd and $X+\sqrt{2}Y$ and $X-\sqrt{2}Y$ both positive.
Since $(X+\sqrt{2}Y)(3+2\sqrt{2})=(3X+4Y)+\sqrt{2}(2X+3Y)$ there will be $X,Y$ with $X\equiv 1$ mod 4 and with
$X\equiv -1$ mod 4.
Cohn \cite{Cohn} showed that $a+b\sqrt{2}$ in $\mathbb Z[\sqrt{2}]$ is a sum of four squares in $\mathbb Z[\sqrt{2}]$ if and only if $2\mid b$. Hence we can write
$$ 2(X+\sqrt{2}Y)= \sum_{j=1}^4 (\alpha_j + \beta_j\sqrt{2})^2, \;\;\alpha_j,\beta_j\in \mathbb Z. $$
That is,
$$ 2X=\sum_{j=1}^4 \alpha_j^2+ 2\sum_{j=1}^4 \beta_j^2,\;\;\quad Y=\sum_{j=1}^4\alpha_j\beta_j.$$
Since $Y$ is odd we must have at least one pair, $\alpha_1$, $\beta_1$ say, both odd. Since $2X$ is even we must have two
or four of the $\alpha_i$ odd. Suppose that $\alpha_1$, $\alpha_2$ are odd and $\alpha_3,\alpha_4$ have the same parity.
We get
\begin{align*} X+\sqrt{2}Y & = \left( \frac{\alpha_1+\alpha_2}{2} + \frac{\sqrt{2}}{2}(\beta_1+\beta_2)\right)^2+ \left( \frac{\alpha_1-\alpha_2}{2} + \frac{\sqrt{2}}{2}(\beta_1-\beta_2)\right)^2 \\
& \quad + \left( \frac{\alpha_3+\alpha_4}{2} + \frac{\sqrt{2}}{2}(\beta_3+\beta_4)\right)^2+ \left( \frac{\alpha_3-\alpha_4}{2} + \frac{\sqrt{2}}{2}(\beta_3-\beta_4)\right)^2.
\end{align*}
Writing
$$ f(\omega)=a_0+a_1\omega+a_2\omega^2+a_3\omega^3=a_0+ \frac{\sqrt{2}}{2}(1+i)a_1+a_2i+ \frac{\sqrt{2}}{2}(-1+i)a_3,$$
we have
$$ \abs{f(\omega)}^2 =\left(a_0+ \frac{\sqrt{2}}{2}(a_1-a_3)\right)^2 + \left(a_2+ \frac{\sqrt{2}}{2}(a_1+a_3)\right)^2 $$
and can make
$$ |f(\omega)|^2+|g(\omega)|^2 = X + \sqrt{2}Y $$
with the selection of integer coefficients for $f(x)=\sum_{j=0}^3a_jx^j$ and $g(x)=\sum_{j=0}^3 b_jx^j$
\begin{align*} a_0=&\frac{1}{2}(\alpha_1-\alpha_2),\quad a_1 =\beta_1,\quad a_2=\frac{1}{2}(\alpha_1+\alpha_2), \quad a_3=\beta_2, \\
b_0=& \frac{1}{2}(\alpha_3-\alpha_4),\quad b_1 =\beta_3,\quad b_2=\frac{1}{2}(\alpha_3+\alpha_4), \quad b_3=\beta_4.
\end{align*}
These $f(x)$, $g(x)$ will then give $D=p$ in \eqref{form}.
We can also determine the parity of the coefficients.
\vskip0.1in
\noindent
{\bf Case 1}: the $\alpha_i$ are all odd.
Notice that $a_0$ and $a_2$ have opposite parity, as do $b_0$ and $b_2$. Since $Y$ is odd we must have one or three of the
$\beta_i$ odd.
If $\beta_1$ is odd and $\beta_2,\beta_3,\beta_4$ all even, then $2X\equiv 6$ mod 8 and $X\equiv -1$ mod 4.
Then $a_0,a_1,a_2,a_3$ are either odd, odd, even, even or even, odd, odd, even and $f(x)=u(x)+2k(x)$
with $u(x)=1+x$ or $x(1+x)$. Likewise $b_0,b_1,b_2,b_3$ are odd, even, even, even or even, even, odd, even
and $g(x)=v(x)+2s(x)$ with $v(x)=1$ or $x^2$. Hence if we take
\begin{equation} \label{shift} f(x)=u(x)+(1-x^4)k(x)-mh(x),\quad g(x)=v(x)+(1-x^4)s(x)-mh(x), \end{equation}
we get $A=3-16m$, $B=-1$, $C=1$, $D=p$ and we achieve $(16m-3)p^2$ in \eqref{form}.
If three $\beta_i$ are odd then $2X\equiv 2$ mod 8 and $X\equiv 1$ mod 4. We assume $\beta_1,\beta_2,\beta_3$ are
odd and $\beta_4$ even. Hence $a_0,a_1,a_2,a_3$ are either odd, odd, even, odd or even, odd, odd, odd and
$f(x)=u(x)+2k(x)$ with $u(x)=1+x+x^3$ or $x(1+x+x^2)$ and $b_0,b_1,b_2,b_3$ are odd, odd, even, even or even, odd, odd, even
and $g(x)=v(x)+2s(x)$ with $v(x)=1+x$ or $x(1+x)$. In this case \eqref{shift} gives
$A=(5-16m)$, $B=1$, $C=-1$, $D=p$ achieving $(5-16m)p^2$.
\vskip0.1in
\noindent
{\bf Case 2}: $\alpha_1$, $\alpha_2$ are odd, $\alpha_3$, $\alpha_4$ are even.
In this case $a_0$, $a_2$ will have opposite parity and $b_0$, $b_2$ the same parity.
Since $Y$ is odd we must have $\beta_1$ odd, $\beta_2$ even. Since $2X\equiv 2$ mod 4 we must have one more odd $\beta_i$, say $\beta_3$ odd and $\beta_4$ even.
If $\alpha_3\equiv \alpha_4$ mod 4 then $2X\equiv 6$ mod 8 and $X\equiv -1$ mod 4. Hence
$a_0,a_1,a_2,a_3$ are either odd, odd, even, even or even, odd, odd, even, that is $u(x)=1+x$ or $x(1+x)$, and $b_0,b_1,b_2,b_3$ are even, odd, even, even and $v(x)=x$, and again \eqref{shift} gives $(16m-3)p^2$.
If $\alpha_3\not\equiv \alpha_4$ mod 4 then $2X\equiv 2$ mod 8 and $X\equiv 1$ mod 4. In this case
$a_0,a_1,a_2,a_3$ are either odd, odd, even, even or even, odd, odd, even, that is $u(x)=1+x$ or $x(1+x)$ and $b_0,b_1,b_2,b_3$ are odd, odd, odd, even and $v(x)=1+x+x^2$ and again \eqref{shift} gives $(5-16m)p^2$.
Hence, in either case, starting with an $X\equiv 1$ mod 4 gives the $mp^2$ with $m\equiv 5$ mod 16 and
an $X\equiv -1$ mod 4 the $mp^2$ with $m\equiv -3$ mod 16.
\section*{Acknowledgement}
\noindent
We thank Craig Spencer for directing us to Cohn's four squares theorem in $\mathbb Z[\sqrt{2}]$.
| {'timestamp': '2023-02-24T02:03:13', 'yymm': '2302', 'arxiv_id': '2302.11688', 'language': 'en', 'url': 'https://arxiv.org/abs/2302.11688'} |
\section{Introduction}
Seven institutes\footnote{CEA Saclay, Paris, France; CAB-INTA, Madrid, Spain; MPIA, Heidelberg, Germany; University College London, U.K.; University of Leicester, U.K.; SRON, Utrecht, NL; Universitat Wien, Austria}
in Europe have combined their expertise in the field of exoplanetary research to develop the European Horizon-2020
ExoplANETS-A\footnote{{\it Exoplanet Atmosphere New Emission Transmission Spectra Analysis};
https://cordis.europa.eu/project/rcn/212911\_en.html ;
The ExoplANETS-A project has received funding from the EU's Horizon-2020 programme; Grant Agreement no.~776403.}
project under the coordination of CEA Saclay. In the framework of the project, novel data calibration and spectral extraction tools, as well as novel retrieval tools, based on 3D models of exoplanet atmospheres, will be developed to exploit archival data from space- and ground-based observatories, and produce a homogeneous and reliable characterization of the atmospheres of transiting exoplanets. Additionally, to model the exoplanet atmosphere successfully, it is necessary to have a sound knowledge of the host star. To this end, we will collect a coherent and uniform database of the relevant properties of host stars from online archives (e.g. XMM-Newton, Gaia) and publications. These exoplanet and host-star catalogues will be accompanied by computer models to assess the importance of star--planet interactions, for example the `space weather' effects of the star on its planetary system. The knowledge gained from this project will be published through peer-reviewed scientific journals and modelling tools will be publicly released.
The project has six work packages (WPs); the focus in this paper is on the WP `Host-star properties: the active environment of exoplanets'. Fig.\ref{fig_dataflow}(a) illustrates the flow of data through the host-stars WP, and interfaces to the overall project.
We outline the activities concerning the host stars, and present early results from the host-star investigations, including the basic stellar observational and physical properties, and indications of future observations needed to maximize the coverage of the target list of $\sim 100$ stars. We also discuss some of the modelling aspects.
\begin{figure}[h]
\begin{center}
\includegraphics[width=50mm]{ExoplanetsA-HostStarDataFlow-Fig-v01.eps}
\includegraphics[width=80mm]{SpType_Histogram_v03.eps}
\caption{ {\it (a, left).} Schematic of the flow of data through the host-stars workpackage, and interfaces to the overall project. {\it (b, right).} The target sample, in terms of the distribution of host stars with spectral type, showing the breakdown by archival-observation types.}
\label{fig_dataflow}
\end{center}
\end{figure}
\section{The host-stars activities}
In addition to compiling the observational data, we will, where necessary (e.g.\ in the EUV), interpolate, extrapolate and scale spectral and other information to cover gaps in the observational data, in order to provide the full XUV spectral range for modelling the host-star influence on the exoplanet atmosphere (e.g.\ Nemec \etal\ 2019).
This work will be accompanied by computer models to assess the importance of star--planet interactions, for example the `space weather' effects of the star on its planetary system. We will also model the possible evolutionary scenarios for the stellar activity over the star's lifetime, in order to gain insight into the past environment of the exoplanet (e.g.\ Johnstone \etal\ 2015).
\section{The sample}
The sample of exoplanets and host stars considered by the project comprises all transiting-exoplanet systems observed by HST or Spitzer. Currently, this corresponds to 135 exoplanets, of which 85 have HST data; the associated number of stars is 113, with 76 having HST data (Fig.\ref{fig_dataflow}(b)).
For the host stars, the primary online archival databases to be used are HST (for UV spectra), XMM-Newton (for X-ray), Gaia (for astrometric properties) and SIMBAD (for spectral types etc). The searches also include: GALEX (UV photometry); Chandra, ROSAT and other X-ray catalogues, together with results retrieved from the published literature. At X-ray wavelengths, most of the detections are from the 3XMM (DR8) serendipitous source catalogue (Rosen \etal\ 2016).
In summary, to date (November 2018), we have the following statistics:
\begin{itemize}
\item X-ray (3XMM-DR8 cat.): 31 stars observed, with 17 detections in the public archive, and the outcomes awaited for the others;
\item UV photometry (GALEX, GR6 cat.): 70 stars detected (51 in the HST sample);
\item UV spectra (HST COS and/or STIS): 26 stars observed;
\item X-ray observation \& UV photometry: 23 stars;
\item X-ray observation \& UV spectra: 20 stars.
\end{itemize}
All the stars with XMM observations also have HST visible/IR spectra.
Fig.\ref{fig_examples} illustrates a few of the stellar parameters from our initial assessments to date.
\begin{figure}[h]
\begin{center}
\includegraphics[width=50mm]{Dist_AppMagGaia_v03.eps}
\includegraphics[width=40mm]{AppMagGaia_Xflux_v01.eps}
\includegraphics[width=40mm]{Dist_Lx_v01.eps}
\caption{Examples of host-star properties, plotted from the catalogue content. }
\label{fig_examples}
\end{center}
\end{figure}
In order to fill in gaps in the stellar parameter space (e.g.\ to have adequate UV and X-ray measurements across spectral types) we are planning observing proposals, principally to HST and XMM-Newton.
\section{Determination of the physical parameters of the host stars}
A coherent and uniform determination of the stellar properties is essential to avoid any bias in the final results; use of Virtual Observatory (VO) tools facilitates this goal.
For example, we have used VOSA (http://svo2.cab.inta-csic.es/theory/vosa/) to build observational Spectral Energy Distributions (SED) and compare them with theoretical SEDs (Fig.\ref{fig_sed_rot}) to obtain physical parameters such as effective temperature,
stellar radius and luminosity.
Effective temperatures range from 3000 to 6500K.
We have also used VOSA to search for infrared excess in our sources, with a negative outcome.
Medium/high resolution spectra for 48 of our sources
observed with FEROS, HARPS, FORS 1-2, X-SHOOTER, FLAMES and UVES were gathered from the ESO archive.
Visible-light photometry from Kepler/K2 and TESS will also be used to help characterise the stars in terms of rotational and aperiodic variability.
One third of our sources have
Kepler light-curves while TESS will probably provide data for the remainder.
\begin{figure}[h]
\begin{center}
\includegraphics[width=65mm]{sed_example_v01.eps}
\includegraphics[width=65mm]{RotCurves_v02.eps}
\caption{{\it (a, left).} SED fitting using VOSA. Blue spectrum represents the theoretical model (Allard \etal\ 2012) that best fits while red dots represent the observed photometry. {\it (b, right).} Stellar rotation rate (days) as a function of log(age, years) for 3 cases: wind + Alf\'en wings torques (light blue curve), wind + tides (red) and wind + tides + Alfv\'en wings torques (dark blue) (Ahuir \etal\ 2019 in prep; see Benbakoura \etal\ 2018 for details of the ESPEM code used for the study). }
\label{fig_sed_rot}
\end{center}
\end{figure}
The ExoplANETS-A database of stellar properties and spectral models will be made publicly available, via a web-based interface, by the end of 2020.
\section{Scientific interpretation of the data: star--planet interactions}
Most exoplanets live around active stars, whose magnetism and intense activity can have
a direct impact on habitability conditions and more generally on the exosphere of the planet.
Intense wind and storms can lead to atmospheric loss. Further, a large fraction of exoplanets live
close to their host star (within 10--20 solar radii), to a point where in many cases they orbit within the star's Alfv\'en surface.
This has direct consequences for star--planet interaction and planet migration, as magnetic torques through
Alfv\'en wings directly connecting the planet to its host star can occur. We have developed both
ab-initio 3D MHD simulations of such close-in systems (Strugarek \etal\ 2015, Strugarek \etal\ 2017) as well as a simpler 1D secular evolution model
of star--planet systems (Benbakoura \etal\ 2018).
In Fig.\ref{fig_sed_rot}(b) we show the evolution of a star--planet system subject to intense
magnetic torques through direct Alfv\'en wings connection (on top of tides and wind effect) and compare it to the case with magnetic torques and stellar wind only (light blue curve)
or tides + wind (red curve).
We note that adding all the effects leads to a quick demise of the planet, which impacts the star's rotation rate.
Such Alfv\'en wings can also lead to hot spots on the star's surface, directly impacting the stellar light-curve and hence the transit curves.
We intend to develop 3D and secular models for the most important systems identified in the ExoplANETS-A project,
as well as obtaining spectropolarimetric magnetic maps of the host star in order to model the stellar wind and assess as accurately as possible the
space environment around these exoplanetary systems.
| {'timestamp': '2019-03-04T02:12:08', 'yymm': '1903', 'arxiv_id': '1903.00234', 'language': 'en', 'url': 'https://arxiv.org/abs/1903.00234'} |
\section{Introduction}
The Fock space representation of the
quantum affine algebra $U_q(\widehat{sl}_n)=U_q(A^{(1)}_{n-1})$
was constructed by Hayashi \cite{H}.
A combinatorial version of this construction was then used by
Misra and Miwa \cite{MM} to describe Kashiwara's crystal basis of
the basic representation $V(\Lambda_0)$.
This made it possible to compute the global crystal basis of
$V(\Lambda_0)$ \cite{LLT}. Then, it was conjectured
that the degree $m$ part of the transition matrices
giving the coefficients of the global basis
on the natural basis of the Fock space
were $q$-analogues of the decomposition matrices of the type $A$
Hecke algebras $H_m$ at an $n$th root of unity \cite{LLT}.
According to a conjecture
of James \cite{J}, these should coincide, for $n$ prime and
large enough,
with the decomposition matrices of symmetric groups ${\rm S}_m$ over a field
of characteristic $n$.
The conjecture of \cite{LLT} has been proved by Ariki \cite{Ar},
and by Grojnowski \cite{Gr} using the results of \cite{G}.
There is another approach to the calculation of decomposition
matrices of type $A$ Hecke algebras, relying upon Soergel's
results on tilting modules for quantum groups at roots of
unity \cite{Soe1,Soe2}.
This approach also leads to $q$-analogues of decomposition
numbers expressed in terms of Kazhdan-Lusztig polynomials.
It seems that these $q$-analogues are the same as those
of \cite{LLT} but there is no proof of this coincidence.
In fact, the relationship between the two approaches is still unclear.
The results of \cite{LLT,Ar,Gr} have been applied recently
by Foda {\it et al.} \cite{FLOTW} to determine which simple
$H_m$-modules remain simple after restriction to $H_{m-1}$
and to show that this problem is equivalent to the decomposition
of a tensor product of level 1 $A_{n-1}^{(1)}$-modules.
This provided an explanation for an intriguing correspondence
previously observed in \cite{FOW} between a class of RSOS models
and modular representations of symmetric groups.
Another description of the $U_q(A^{(1)}_{n-1})$-Fock space,
as a deformation of the infinite wedge realization of
the fermionic Fock space, was obtained by Stern \cite{St}.
In \cite{KMS}, the $q$-bosons needed for the decomposition
of the Fock space into irreducible $U_q(A^{(1)}_{n-1})$-modules
were introduced. This construction was used in \cite{LLTrib}
to give a combinatorial formula for the highest weight
vectors, and in \cite{LT} to define a canonical basis
of the whole Fock space which was conjectured to
yield the decomposition matrices
of $q$-Schur algebras at roots of unity.
Moreover, strong support in favor of this conjecture was
obtained by establishing its compatibility with a version
of the Steinberg tensor product theorem proved by James
in this context \cite{J,LT}.
Recently, the theory of perfect crystals \cite{KMN1,KMN2} allowed
Kashiwara {\it et al.} \cite{KMPY} to define a general
notion of $q$-Fock space, extending the results of \cite{KMS}
to several series of affine algebras.
Their results apply in particular to the twisted affine algebra
of type $A^{(2)}_{2n}$, which is the case considered in this note.
It has been noticed by Nakajima and Yamada \cite{NY} that the combinatorics
of the basic representation
$V(\Lambda_n)$ of $A^{(2)}_{2n}$ was similar to the
one encountered in the $(2n+1)$-modular representation theory of the spin
symmetric groups ${\rm \widehat{S}}_m$ by Morris \cite{Mo1} as early as 1965.
This can be explained to a certain extent by observing that
the $(r,\bar{r})$-inducing operators of Morris and Yaseen \cite{MY}
coincide with the Chevalley lowering operators of the
Fock space representation of $A^{(2)}_{2n}$. This provides
a further example of the phenomenon observed in \cite{LLT}
in the case of symmetric groups and $A_{n-1}^{(1)}$-algebras.
In this note, we give the analogues for $U_q(A^{(2)}_{2n})$ of the
results of \cite{LLT}.
Using the level~1 $q$-Fock spaces of \cite{KMPY},
we describe an algorithm for computing
the canonical basis of the basic representation $V(\Lambda_n)$, which
allows us to prove that this basis is in the ${\bf Z}[q]$-lattice
spanned by the natural basis of the $q$-Fock space, and that
the transition matrices have an upper triangle of zeros
(Theorem 4.1).
We conjecture that the specialization $q=1$ gives, up to splitting of rows and
columns for pairs of associate characters, and for sufficiently
large primes $p=2n+1$, the decomposition matrices of spin symmetric groups.
However, the reduction $q=1$ is more tricky than in the $A_{n-1}^{(1)}$ case.
Indeed, the $q$-Fock space of $A^{(2)}_{2n}$ is strictly larger than
the classical one, and one has to factor out the null space
of a certain quadratic form \cite{KMPY} to recover the usual
description.
The missing ingredient in the spin case when we compare it to
\cite{LLT} is that, since the spin symmetric groups are
not Coxeter groups, there is no standard way of associating
to them a Hecke algebra, and this is an important obstruction
for proving our conjecture.
What we can actually prove is that all self-associate
projective characters of ${\rm \widehat{S}}_m$ are linear combinations
of characters obtained from smaller groups by a sequence
of $(r,\overline r)$-inductions (Theorem 6.1).
This proof is constructive in the sense that the intermediate
basis $\{A(\mu)\}$ of our algorithm for the canonical basis,
suitably specialized at $q=1$, is a basis for the space spanned
by such characters.
This should have implications for the labelling of the
irreducible modular spin representations of ${\rm \widehat{S}}_m$.
Up to now, a coherent labelling scheme has been found
only for $p=3$ \cite{BMO} and $p=5$ \cite{ABO}.
The case $p\ge 7$ led to formidable difficulties.
To overcome this problem, we propose to use the labels
of the crystal graph of $V(\Lambda_n)$, which may contain
partitions with repeated parts not arising in the
representation theory of ${\rm \widehat{S}}_m$, and corresponding to ghost vectors
of the $q$-Fock space at $q=1$.
\section{The Fock space representation of $U_q(A^{(2)}_{2n})$}
The Fock space representation of the affine Lie algebra
$A^{(2)}_{2n}$ can be constructed by means of its
embedding in $b_\infty=\widehat{go}_\infty$, the completed infinite
rank affine Lie algebra of type $B$ \cite{DJKM1,DJKM2}.
The (bosonic) Fock space of type $B$ is
the polynomial algebra ${\cal F} = {\bf C}[p_{2j+1}, j\ge 0 ]$ in an infinite
number of generators $p_{2j+1}$ of odd degree $2j+1$. If one identifies
$p_k$ with the power sum symmetric function $p_k=\sum_i x_i^k$
in some infinite set of variables, the natural basis of weight
vectors for $b_\infty$ is given by Schur's $P$-functions $P_\lambda$
(where $\lambda$ runs over the set ${\rm DP}$ of partitions
into distinct parts) \cite{DJKM1,You,JY}.
The Chevalley generators $e^\infty_i$, $f^\infty_i$ ($i\ge 0$)
of $b_\infty$ act on $P_\lambda$ by
\begin{equation}\label{FP}
e^\infty_i P_\lambda = P_\mu \ , \qquad f^\infty_i P_\lambda = P_\nu
\end{equation}
where $\mu$ (resp. $\nu$) is obtained from $\lambda$ by replacing its part $i+1$
by $i$ (resp. its part $i$ by $i+1$), the result being $0$
if $i+1$ (resp. $i$) is not a part of $\lambda$.
Also, it is
understood that $P_\mu=0$ as soon as $\mu$ has a multiple part.
For example, $f^\infty_0 P_{32}=P_{321}$,
$f^\infty_3 P_{32}=P_{42}$,
$e^\infty_1 P_{32}= P_{31}$ and $e^\infty_2 P_{32}=P_{22}=0$.
Let $h=2n+1$.
The Chevalley generators $e_i$, $f_i$ of $A^{(2)}_{2n}$
will be realized as
\begin{equation}\label{FNP}
f_i=\sum_{j\equiv n\pm i} f^\infty_j \qquad
(i=0,\ldots ,n) \,,
\end{equation}
\begin{equation}
e_i=\sum_{j\equiv n\pm i } e^\infty_j \qquad
(i=0,\ldots, n-1)\,,\qquad
e_n=e^\infty_0 +
2 \sum_{\scriptstyle j>0 \atop \scriptstyle j \equiv 0,-1} e^\infty_j \,,
\end{equation}
where all congruences are taken modulo $h$.
Let $A_{2n}^{(2)}{}'$ be the derived algebra of $A_{2n}^{(2)}$
(obtained by omitting the degree operator $d$).
The action of $A_{2n}^{(2)}{}'$ on ${\cal F}$ is centralized
by the Heisenberg algebra generated by the operators
$\displaystyle{\partial\over\partial p_{hs}}$ and $p_{hs}$ for odd $s\ge 1$.
This implies that the Fock space decomposes under $A_{2n}^{(2)}$ as
\begin{equation}\label{DEC1}
{\cal F} = \bigoplus_{k\ge 0} V(\Lambda_n-k\delta)^{\oplus p^* (k)}
\end{equation}
where $p^*(k)$ is the number of partitions of $k$ into odd parts.
In particular, the subrepresentation generated by the vacuum vector
$|0\rangle=P_0 = 1$ is the basic
representation $V(\Lambda_n)$ of $A_{2n}^{(2)}$, and its principally
specialized character is \cite{KKLW}
\begin{equation}\label{CHAR}
{\rm ch}_t\,V(\Lambda_n) =
\sum_{m\ge 0}\dim V(\Lambda_n)_m\,t^m =
\prod_{\scriptstyle i \ {\rm odd} \atop \scriptstyle i\not\equiv 0{\ \rm mod\ } h}
{1\over 1-t^i}\,.
\end{equation}
The $q$-deformation of this situation has been discovered
by Kashiwara {\it et al.} \cite{KMPY}.
Contrary to the case of
$A^{(1)}_{n-1}$, the $q$-Fock space is strictly larger than
the classical one.
We recall here briefly their construction, referring to
\cite{KMPY} for details and notation.
Let ${\rm DP}_h(m)$ be the set
of partitions $\lambda=(1^{m_1}2^{m_2}\ldots r^{m_r})$
of $m$ for which $m_i\le 1$ when $i\not\equiv 0 {\ \rm mod\ } h$.
For example, ${\rm DP}_3(7)=\{(7),(61),(52),(43),(421),(331)\}$.
Set ${\rm DP}_h=\bigcup_m {\rm DP}_h(m)$.
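These sets are small enough to enumerate by brute force. The following
minimal Python sketch, with partitions represented as weakly decreasing
tuples, reproduces the example ${\rm DP}_3(7)$ above.
\begin{verbatim}
from collections import Counter

def partitions(m, max_part=None):
    # generate all partitions of m as weakly decreasing tuples
    if max_part is None:
        max_part = m
    if m == 0:
        yield ()
        return
    for first in range(min(m, max_part), 0, -1):
        for rest in partitions(m - first, first):
            yield (first,) + rest

def in_DP_h(lam, h):
    # lam lies in DP_h iff every part not divisible by h occurs at most once
    return all(k % h == 0 or v <= 1 for k, v in Counter(lam).items())

# reproduces DP_3(7) = {(7), (6,1), (5,2), (4,3), (4,2,1), (3,3,1)}
print([lam for lam in partitions(7) if in_DP_h(lam, 3)])
\end{verbatim}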
Then, the $q$-Fock space of type $A_{2n}^{(2)}$ is
\begin{equation}
{\cal F}_q = \bigoplus_{\lambda\in {\rm DP}_h} {\bf Q}(q)\, |\lambda\>
\end{equation}
where for $\lambda=(\lambda_1,\ldots,\lambda_r)$,
$|\lambda\>$ denotes the infinite $q$-wedge product
\[
|\lambda\> = u_\lambda = u_{\lambda_1}\wedge_q u_{\lambda_2}\wedge_q\cdots\wedge_q
u_{\lambda_r}\wedge_q u_0 \wedge_q u_0 \wedge_q \cdots
\]
of basis vectors $u_i$ of the representation $V_{\rm aff}$.
The quantum affine algebra $U_q(A_{2n}^{(2)})$ acts on
$V_{\rm aff}=\bigoplus_{i\in{\bf Z}}{\bf Q}(q) u_i$
by
\begin{eqnarray}
f_i u_j = \cases{ u_{j+1} & if $j\equiv n\pm i{\ \rm mod\ } h$ \\
0 & otherwise \\}
\qquad (i=0,\ldots,n-1)
\\
\label{ACTF}
f_n u_j = \cases{ u_{j+1} & if $j\equiv -1 {\ \rm mod\ } h$ \\
(q+q^{-1}) u_{j+1} & if $j\equiv 0 {\ \rm mod\ } h$ \\
0 & otherwise \\}
\\
e_i u_j = \cases{ u_{j-1} & if $j\equiv n+1\pm i{\ \rm mod\ } h$ \\
0 & otherwise \\}
\qquad (i=0,\ldots,n-1)
\\
e_n u_j = \cases{ u_{j-1} & if $j\equiv 1 {\ \rm mod\ } h$ \\
(q+q^{-1}) u_{j-1} & if $j\equiv 0 {\ \rm mod\ } h$ \\
0 & otherwise \\}
\\
t_0 u_j = \cases{ q^4 u_j & if $j\equiv n{\ \rm mod\ } h$ \\
q^{-4} u_j & if $j\equiv n+1 {\ \rm mod\ } h$ \\
u_j & otherwise \\}
\\
t_i u_j = \cases{ q^2 u_j & if $j\equiv n\pm i{\ \rm mod\ } h$ \\
q^{-2} u_j & if $j\equiv n+1\pm i {\ \rm mod\ } h$ \\
u_j & otherwise \\ }
\qquad (i=1,\ldots,n-1)
\\
t_n u_j = \cases{ q^2 u_j & if $j\equiv -1{\ \rm mod\ } h$ \\
q^{-2} u_j & if $j\equiv 1 {\ \rm mod\ } h$ \\
u_j & otherwise \\}
\end{eqnarray}
The only commutation rules we will need to describe the
action of $e_i$ and $f_i$ on ${\cal F}_q$ are:
\begin{eqnarray}
u_j \wedge_q u_j &=& 0 \ {\rm if}\ j\not\equiv 0 {\ \rm mod\ } h \\
u_j \wedge_q u_{j+1} &=& -q^2 u_{j+1}\wedge_q u_j \ {\rm if} \label{STR2}
j\equiv 0,-1 {\ \rm mod\ } h \ .
\end{eqnarray}
The action on the vacuum vector
$|0\> = u_0\wedge_qu_0\wedge_q\cdots $
is given by
\begin{equation}
e_i|0\> = 0, \qquad
f_i|0\> = \delta_{i n}|1\>, \qquad
t_i|0\> = q^{\delta_{i n}}|0\>,
\end{equation}
and on a $q$-wedge
$|\lambda\>=u_{\lambda_1}\wedge_q\cdots\wedge_q u_{\lambda_r}
\wedge_q |0\>$,
\begin{eqnarray}
f_i |\lambda\>
=&
f_iu_{\lambda_1}\wedge_q t_iu_{\lambda_2}\wedge_q\cdots t_iu_{\lambda_r}
\wedge_q t_i|0\> \nonumber \\
& +
u_{\lambda_1}\wedge_q f_iu_{\lambda_2}\wedge_q\cdots t_iu_{\lambda_r}
\wedge_q t_i|0\> \nonumber \\
& + \cdots +
u_{\lambda_1}\wedge_q u_{\lambda_2}\wedge_q\cdots u_{\lambda_r}
\wedge_q f_i|0\>
\end{eqnarray}
\begin{eqnarray}
e_i |\lambda\>
=&
t_i^{-1} u_{\lambda_1}\wedge_q t_i^{-1}u_{\lambda_2}\wedge_q\cdots t_i^{-1}u_{\lambda_r}
\wedge_q e_i|0\> \nonumber \\
& +
t_i^{-1}u_{\lambda_1}\wedge_q t_i^{-1}u_{\lambda_2}\wedge_q\cdots e_iu_{\lambda_r}
\wedge_q |0\> \nonumber \\
& + \cdots +
e_i u_{\lambda_1}\wedge_q u_{\lambda_2}\wedge_q\cdots u_{\lambda_r}
\wedge_q |0\>
\end{eqnarray}
\begin{equation}
t_i |\lambda\> =
t_i u_{\lambda_1}\wedge_q t_i u_{\lambda_2}\wedge_q\cdots\wedge_q
t_i u_{\lambda_r}\wedge_q t_i|0\> \ .
\end{equation}
For example, with $n=2$, one has
\[
f_2 |542\>= (q^4+q^2)|642\>+q|552\>+|5421\>,
\]
and
\[
f_2 |552\> = (q^2+1)(|652\>+|562\>)+|5521\>
= (1-q^4)|652\>+|5521\>,
\]
the last equality resulting from (\ref{STR2}).
It is proved in \cite{KMPY} that ${\cal F}_q$ is an integrable
highest weight $U_q(A_{2n}^{(2)})$-module whose decomposition
into irreducible components, obtained by means of $q$-bosons, is
\begin{equation}\label{DEC2}
{\cal F}_q = \bigoplus_{k\ge 0} V(\Lambda_n-k\delta)^{\oplus p(k)}
\end{equation}
where $p(k)$ is now the number of all partitions of $k$
(compare (\ref{DEC1})).
Thus, the submodule $U_q(A_{2n}^{(2)}) \,|0\>$ is a realization
of the basic representation $V(\Lambda_n)$.
\section{The crystal graph of the $q$-Fock space}
The first step in computing the global basis
of $V(\Lambda_n) \subset {\cal F}_q$ is to determine
the crystal basis of ${\cal F}_q$ whose description
follows from \cite{KMPY,KMN1,KMN2}.
Let $A$ denote the subring of ${\bf Q}(q)$ consisting of
rational functions without pole at $q=0$.
The crystal lattice of ${\cal F}_q$ is
$L = \bigoplus_{\lambda\in {\rm DP}_h} A\,|\lambda\>$,
and the crystal basis of the ${\bf Q}$-vector space $L/qL$ is
$B=\{|\lambda\> {\ \rm mod\ } qL, \lambda \in {\rm DP}_h\}$.
We shall write $\lambda$ instead of $|\lambda\> {\ \rm mod\ } qL$.
The Kashiwara operators $\tilde{f}_i$ act on $B$ in
a simple way recorded on the crystal graph $\Gamma({\cal F}_q)$.
To describe this graph, one starts with the crystal graph
$\Gamma(V_{\rm aff})$ of $V_{\rm aff}$. This is the graph with vertices
$j\in {\bf Z}$, whose arrows labelled by $i\in \{0,1,\ldots ,n\}$
are given, for $i \not = n$, by
\[
j \stackrel{i}{\longrightarrow} j+1 \quad \Longleftrightarrow \quad
j \equiv n \pm i {\ \rm mod\ } h \,,
\]
and for $i=n$ by
\[
j \stackrel{n}{\longrightarrow} j+1 \quad \Longleftrightarrow \quad
j \equiv -1,0 {\ \rm mod\ } h \,.
\]
Thus for $n=2$ this graph is
\[
\cdots \stackrel{1}{\longrightarrow} -1
\stackrel{2}{\longrightarrow} 0
\stackrel{2}{\longrightarrow} 1
\stackrel{1}{\longrightarrow} 2
\stackrel{0}{\longrightarrow} 3
\stackrel{1}{\longrightarrow} 4
\stackrel{2}{\longrightarrow} 5
\stackrel{2}{\longrightarrow} 6
\stackrel{1}{\longrightarrow} 7
\stackrel{0}{\longrightarrow}
\cdots
\]
The graph $\Gamma({\cal F}_q)$ is obtained inductively
from $\Gamma(V_{\rm aff})$ using the following rules.
Let $\lambda = (\lambda_1,\ldots ,\lambda_r)\in B$,
and write $\lambda = (\lambda_1,\lambda^*)$
where $\lambda^* = (\lambda_2,\ldots ,\lambda_r)$.
Then one has $\tilde{f}_i (0) = \delta_{in} (1)$,
$\varphi_i(0)= \delta_{in}$, and
\[
\tilde{f}_i\lambda = \left\{
\matrix{(\tilde{f}_i \lambda_1, \lambda^*)
\ {\rm if} \ \varepsilon_i(\lambda_1) \ge \varphi_i(\lambda^*), \cr
(\lambda_1,\tilde{f}_i\lambda^*)
\ {\rm if} \ \varepsilon_i(\lambda_1) < \varphi_i(\lambda^*). } \right.
\]
Here, $\varepsilon_i(\lambda_1)$ means the distance in $\Gamma(V_{\rm aff})$
from $\lambda_1$ to the origin of its
$i$-string, and $\varphi_i(\lambda^*)$ means the distance in
$\Gamma({\cal F}_q)$ from $\lambda^*$ to the end of its $i$-string.
Thus for $n=1$ one computes successively the following $1$-strings
of $\Gamma({\cal F}_q)$
\[
(0) \stackrel{1}{\longrightarrow} (1)
\]
\[
(2)=(2,0)\stackrel{1}{\longrightarrow}(2,1)
\stackrel{1}{\longrightarrow} (3,1)
\stackrel{1}{\longrightarrow} (4,1)
\]
\[
(3,2)=(3,2,0)\stackrel{1}{\longrightarrow}(3,2,1)
\stackrel{1}{\longrightarrow} (3,3,1)
\stackrel{1}{\longrightarrow} (4,3,1)
\]
from which one deduces that $\tilde{f_1}(3,3,1) = (4,3,1)$
and $\varphi_1(3,3,1)=1$.
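These rules are mechanical and can be programmed directly. The following
Python sketch implements $\tilde{f}_i$ recursively, with partitions as
weakly decreasing tuples; the mutual recursion between $\varphi_i$ and
$\tilde{f}_i$ mirrors the inductive construction of $\Gamma({\cal F}_q)$,
and the sketch reproduces $\tilde{f_1}(3,3,1)=(4,3,1)$ and
$\varphi_1(3,3,1)=1$ above.
\begin{verbatim}
def arrow(j, i, n):
    # is there an i-arrow j -> j+1 in Gamma(V_aff)?  (j = n +/- i mod h)
    h = 2 * n + 1
    return j % h in ((n - i) % h, (n + i) % h)

def eps(j, i, n):
    # distance from j to the origin of its i-string in Gamma(V_aff)
    e = 0
    while arrow(j - 1, i, n):
        j, e = j - 1, e + 1
    return e

def phi(lam, i, n):
    # distance from lam to the end of its i-string in Gamma(F_q)
    k = 0
    while lam is not None:
        lam, k = f_tilde(lam, i, n), k + 1
    return k - 1

def f_tilde(lam, i, n):
    # Kashiwara operator on a partition (tuple); None stands for 0
    if not lam:
        return (1,) if i == n else None   # f_i(0) = delta_{in} (1)
    first, rest = lam[0], lam[1:]
    if eps(first, i, n) >= phi(rest, i, n):
        return ((first + 1,) + rest) if arrow(first, i, n) else None
    tail = f_tilde(rest, i, n)
    return None if tail is None else (first,) + tail

print(f_tilde((3, 3, 1), 1, 1), phi((3, 3, 1), 1, 1))   # (4, 3, 1)  1
\end{verbatim}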
The first layers of the crystal $\Gamma({\cal F}_q)$ for $n=1$
are shown in Fig.~\ref{FIG1}.
\begin{figure}[t]
\begin{center}
\leavevmode
\epsfxsize = 15cm
\epsffile{crystalFock.eps}
\end{center}
\caption{\label{FIG1} The graph $\Gamma({\cal F}_q)$ for $A_2^{(2)}$ up to degree $7$}
\end{figure}
One can observe that the decomposition of $\Gamma({\cal F}_q)$ into connected
components reflects the decomposition (\ref{DEC2})
of ${\cal F}_q$ into simple modules.
More precisely, the connected components of
$\Gamma({\cal F}_q)$ are all isomorphic as colored graphs
to the component $\Gamma(\Lambda_n)$ containing the
empty partition.
Their highest vertices are the partitions $\nu$ whose
parts are all divisible by $h$.
This follows from the fact, easily deduced from the
rules we have just explained, that if
$\nu = h\mu = (h\mu_1,\ldots ,h\mu_r)$ is such a partition,
then the map
\begin{equation}\label{MAP}
\lambda \mapsto \lambda + \nu = (\lambda_1+h\mu_1,\lambda_2+h\mu_2,\ldots \ )
\end{equation}
is a bijection from $\Gamma(\Lambda_n)$ onto the connected component
of $\Gamma({\cal F}_q)$ containing $\nu$, and this bijection commutes with
the operators $\tilde{e}_i$ and $\tilde{f}_i$.
This implies that the vertices of $\Gamma(\Lambda_n)$
are the partitions $\lambda=(\lambda_1,\ldots ,\lambda_r,0)\in {\rm DP}_h$ such that
for $i=1,2,\ldots ,r$, one has $\lambda_i- \lambda_{i+1} \le h$ and
$\lambda_i- \lambda_{i+1} < h$ if $\lambda_i \equiv 0 {\ \rm mod\ } h$.
We shall call a partition that satisfies these conditions
$h$-regular.
The set of $h$-regular partitions of $m$ will be denoted by ${\rm DPR}_h(m)$,
and we shall write ${\rm DPR}_h=\bigcup_m {\rm DPR}_h(m)$.
For example,
\[
{\rm DPR}_3(10) = \{ (3331), (4321), (532), (541) \} \,.
\]
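Membership in ${\rm DPR}_h$ is a local condition on successive parts and
is easily tested by machine; a minimal Python predicate (with an implicit
final zero part) reads as follows.
\begin{verbatim}
def is_h_regular(lam, h):
    # lam in DPR_h: successive differences d (last part compared with 0)
    # satisfy 0 < d <= h for parts not divisible by h,
    # and 0 <= d < h for parts divisible by h
    lam = tuple(lam) + (0,)
    for a, b in zip(lam, lam[1:]):
        d = a - b
        if a % h == 0:
            if not 0 <= d < h:
                return False
        elif not 0 < d <= h:
            return False
    return True

# the four members of DPR_3(10), then two non-members:
for lam in [(3,3,3,1), (4,3,2,1), (5,3,2), (5,4,1), (6,3,1), (4,3,3)]:
    print(lam, is_h_regular(lam, 3))
\end{verbatim}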
\section{The canonical basis of $V(\Lambda_n)$}\label{SECT4}
In this section, we describe an algorithm for computing
the canonical basis (global lower crystal basis) of the
basic representation $V(\Lambda_n)=U_q(A_{2n}^{(2)}) |0\>$
in terms of the natural basis $|\lambda\>$ of the
$q$-Fock space. To characterize the canonical basis, we
need the following notation
\begin{equation}
q_i =
\cases{q & if $i=n$ \\
q^2& if $1\le i<n$ \\
q^4& if $i=0$ \\ }
\qquad
t_i =
\cases{q^{h_n}& if $i=n$\\
q^{2h_i}& if $1\le i<n$ \\
q^{4h_0}& if $i=0$ \\}
\end{equation}
and
\begin{equation}
[k]_i = {q_i^k-q_i^{-k}\over q_i-q_i^{-1}}\ ,
\qquad
[k]_i! = [k]_i [k-1]_i \cdots [1]_i \ .
\end{equation}
The $q$-divided powers of the Chevalley generators are defined by
\begin{equation}
e_i^{(k)} = {e_i^k\over [k]_i!}\ ,\qquad
f_i^{(k)} = {f_i^k\over [k]_i!}\ .
\end{equation}
The canonical basis is defined in terms of an involution
$v\mapsto\overline{v}$ of $V(\Lambda_n)$.
Let $x\mapsto \overline{x}$ be the ring automorphism of
$U_q(A_{2n}^{(2)})$ such that $\overline{q}=q^{-1}$,
$\overline{q^h}=q^{-h}$ for $h$ in the Cartan subalgebra
of $A_{2n}^{(2)}$, and $\overline{e_i}=e_i$,
$\overline{f_i}=f_i$. Then, for $v=x|0\>\in V(\Lambda_n)$,
define $\overline{v}=\overline{x}|0\>$.
We denote by $U_{\bf Q}^-$ the sub-${\bf Q}[q,q^{-1}]$-algebra
of $U_q(A_{2n}^{(2)})$ generated by the $f_i^{(k)}$
and set $V_{\bf Q}(\Lambda_n)=U_{\bf Q}^-|0\>$.
Then, as shown by Kashiwara \cite{K}, there exists a unique
${\bf Q}[q,q^{-1}]$-basis $\{G(\mu), \mu\in {\rm DPR}_h\}$
of $V_{\bf Q}(\Lambda_n)$, such that
\begin{quote}
(G1) $G(\mu) \equiv |\mu\> {\ \rm mod\ } qL$,

(G2) $\overline{G(\mu)}= G(\mu)$.
\end{quote}
To compute $G(\mu)$, we follow the same strategy as in
\cite{LLT}. We first introduce an auxiliary basis
$A(\mu)$ satisfying (G2), from which we manage to construct
combinations satisfying also (G1). More precisely,
let ${\cal F}_q^m$ be the subspace of ${\cal F}_q$ spanned
by $|\lambda\>$ for $\lambda \in {\rm DP}_h(m)$ and set
$V(\Lambda_n)_m={\cal F}_q^m\cap V(\Lambda_n)$. Denote
by $\unlhd$ the natural order on partitions.
Then, the auxiliary
basis will satisfy
\begin{quote}
(A0) $\{A(\mu),\mu\in {\rm DPR}_h(m)\}$ is a ${\bf Q}[q,q^{-1}]$-basis
of $V_{\bf Q}(\Lambda_n)_m$,

(A1) $A(\mu)=\sum_\lambda a_{\lambda\mu}(q)|\lambda\>$,
where $a_{\lambda\mu}(q)=0$ unless $\lambda\unrhd\mu$,
$a_{\mu\mu}(q)=1$ and $a_{\lambda\mu}(q)\in{\bf Z}[q,q^{-1}]$,

(A2) $\overline{A(\mu)}=A(\mu)$.
\end{quote}
The basis $A(\mu)$ is obtained
by applying monomials in the $f_i^{(k)}$ to the highest weight vector,
that is, $A(\mu)$ is of the form
\begin{equation}\label{defA}
A(\mu) = f_{r_s}^{(k_s)}f_{r_{s-1}}^{(k_{s-1})}\cdots f_{r_1}^{(k_1)}|0\>
\end{equation}
so that (A2) is satisfied.
The two sequences $(r_1,\ldots,r_s)$ and $(k_1,\ldots,k_s)$ are, as
in \cite{LLT}, obtained by peeling off the $A_{2n}^{(2)}$-ladders
of the partition $\mu$, which are defined as follows. We first fill
the cells of the Young diagram $Y$ of $\mu$ with integers
(called residues), constant in
each column of $Y$. If $j\equiv n\pm i {\ \rm mod\ } h$
($0\le i\le n$), the numbers filling
the $j$-th column of $Y$ will be equal to $i$. A ladder of $\mu$
is then a sequence of cells with the same residue, located
in consecutive rows at horizontal distance $h$, except when the residue
is $n$, in which case two consecutive $n$-cells in a row belong also
to the same ladder. For example, with $n=3$ and $\mu=(11,7,7,4)$,
one finds $22$ ladders (indicated by subscripts), the longest
one being the 7th, containing three 3-cells:
\[
\young{
3_{19} & 2_{20} & 1_{21} & 0_{22} \cr
3_{13} & 2_{14} & 1_{15} & 0_{16} & 1_{17} & 2_{18} & 3_{19} \cr
3_{7}&2_8&1_9&0_{10}&1_{11}&2_{12}&3_{13}\cr
3_1&2_2&1_3&0_4&1_5&2_6&3_7&3_7&2_8&1_9&0_{10}\cr}
\]
Note that this definition of ladders agrees with that of \cite{BMO}
for $n=1$, but differs from that of \cite{ABO} for $n=2$.
Then, in (\ref{defA}), $s$ is the number of ladders,
$r_i$ the residue of the $i$th ladder, and $k_i$ the number of its
cells. Thus, proceeding with our example,
\[
\fl
A(11,7,7,4)=
f_0f_1f_2f_3^{(2)}f_2f_1f_0f_1f_2f_3^{(2)}f_2f_1f_0^{(2)}
f_1^{(2)}f_2^{(2)}f_3^{(3)} f_2f_1f_0f_1f_2f_3 |0\> \ .
\]
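The residue of a column depends only on its index modulo $h$; a short
Python sketch, with columns indexed from $0$ as in the diagram above,
recovers the filling of the row of length 11.
\begin{verbatim}
def residue(j, n):
    # residue of column j (0-based) for A^(2)_{2n}, h = 2n+1:
    # the unique i in {0,...,n} with j = n +/- i mod h
    h = 2 * n + 1
    return min((j - n) % h, (n - j) % h)

# the filled row of length 11 for mu = (11,7,7,4) with n = 3:
print([residue(j, 3) for j in range(11)])
# -> [3, 2, 1, 0, 1, 2, 3, 3, 2, 1, 0]
\end{verbatim}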
The proof of (A0) and (A1) can be readily adapted from
\cite{LLT}. In particular, (A1) follows from the fact that
a partition $\lambda$ belongs to ${\rm DPR}_h$ if and only if all cells of a given
ladder intersecting $\lambda$ occupy the highest
possible positions on this ladder.
Another choice of an intermediate basis,
more efficient for practical computations,
would be to use inductively the vectors $G(\nu)$ already computed
and to set $A(\mu)=f_{r_s}^{(k_s)} G(\nu)$, where
$\nu$ is the partition obtained from $\mu$ by removing
its outer ladder.
Define now the coefficients $b_{\nu\mu}(q)$ by
\begin{equation}
G(\mu) =\sum_\nu b_{\nu\mu}(q) A(\nu) \ .
\end{equation}
Still following \cite{LLT}, one can check that $b_{\nu\mu}(q)=0$
unless $\nu\ge \mu$, where $\ge$ denotes the lexicographic
ordering on partitions, and that $b_{\mu\mu}(q)=1$. Therefore,
one can apply the triangular process of \cite{LLT} as follows.
Let $\mu^{(1)} < \mu^{(2)} <\ldots < \mu^{(t)}$ be the set
${\rm DPR}_h(m)$ sorted in lexicographic order, so that
$A(\mu^{(t)})=G(\mu^{(t)})$. Suppose that the expansion
on the basis $|\lambda\>$ of $G(\mu^{(i+1)}),\ldots, G(\mu^{(t)})$
has already been calculated. Then,
\begin{equation}
G(\mu^{(i)})=
A(\mu^{(i)})-\gamma_{i+1} (q) G(\mu^{(i+1)}) - \cdots - \gamma_t(q)G(\mu^{(t)}) \ ,
\end{equation}
where the coefficients are determined by the conditions
\[
\gamma_s(q^{-1})=\gamma_s(q), \qquad G(\mu^{(i)}) \equiv |\mu^{(i)}\> {\ \rm mod\ } qL.
\]
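The whole process is easily mechanised once the $A(\mu)$ are known. The
following schematic (and unoptimised) Python implementation stores Laurent
polynomials in $q$ as dictionaries mapping exponents to integer
coefficients, and partitions as tuples compared lexicographically; only
the coefficients at partitions of ${\rm DPR}_h(m)$ need correcting, the
remaining ones being forced by bar-invariance, and each $\gamma$ is read
off as the bar-invariant part of the offending coefficient.
\begin{verbatim}
def bar_symmetric_part(c):
    # for a Laurent polynomial c = {exponent: coefficient}, the unique
    # bar-invariant gamma (gamma(1/q) = gamma(q)) with c - gamma in q.Z[q]
    g = {}
    for k, v in c.items():
        if k == 0:
            g[0] = g.get(0, 0) + v
        elif k < 0:
            g[k] = g.get(k, 0) + v
            g[-k] = g.get(-k, 0) + v
    return {k: v for k, v in g.items() if v}

def subtract_multiple(v, gamma, w):
    # v <- v - gamma * w, for vectors of Laurent polynomials
    for lam, c in w.items():
        acc = v.setdefault(lam, {})
        for k1, a in gamma.items():
            for k2, b in c.items():
                acc[k1 + k2] = acc.get(k1 + k2, 0) - a * b

def triangular_process(A, labels):
    # A: dict mu -> {lam: Laurent polynomial}, the basis (A0)-(A2);
    # labels: DPR_h(m) sorted in increasing lexicographic order.
    # Returns the canonical basis vectors G(mu).
    G = {}
    for i in reversed(range(len(labels))):
        mu = labels[i]
        v = {lam: dict(c) for lam, c in A[mu].items()}
        for nu in labels[i + 1:]:   # fix coefficients from nu upwards
            gamma = bar_symmetric_part(v.get(nu, {}))
            if gamma:
                subtract_multiple(v, gamma, G[nu])
        G[mu] = {}
        for lam, c in v.items():
            c = {k: x for k, x in c.items() if x}
            if c:
                G[mu][lam] = c
    return G
\end{verbatim}
For instance, run on the $A(\mu)$ of degree $9$ this reproduces the
subtraction $G(3321)=A(3321)-G(531)$ carried out below, the coefficient
$1+2q^2$ of $|531\>$ having bar-invariant part $1$.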
Thus, for $n=1$, the first partition for which $A(\mu)\not = G(\mu)$
is $\mu=(3321)$ and
\begin{eqnarray}
\fl A(3321) =
|3321\> + q|333\> + (q^2-q^6)|432\> + (1+2q^2)|531\> + (q^2+q^4)|54\>
\nonumber \\
\lo+ (2q^2+q^4)|621\> +2q^3 |63\> + (q^4+q^6)|72\> +q^4|81\> +q^5|9\>
\end{eqnarray}
Indeed, $A(3321)\equiv |3321\>+|531\> {\ \rm mod\ } qL$.
On the other hand, $ A(531)=|531\>+q^2|54\>+q^2|621\>+q^3|63\>+q^6|72\> $
is equal to $G(531)$,
and one finds by subtracting this from $A(3321)$ that
\begin{eqnarray}
\fl G(3321) = |3321\> + q|333\> + (q^2-q^6)|432\> + 2q^2|531\> + q^4|54\> \nonumber \\
\lo+ (q^2+q^4)|621\> +q^3 |63\> + q^4|72\> +q^4|81\> +q^5|9\> \ .
\end{eqnarray}
Since $A(432)=|432\>+q^4|531\>+q^2|72\>+q^6|81\>$ satisfies
(G1) and (G2), it has to be equal to
$G(432)$, which completes the determination of the canonical
basis for $m=9$. For $m=10$, the results are displayed as the
columns of Table \ref{TAB1}.
\begin{table}
\caption{\label{TAB1}
The canonical basis for $n=1$ and $m=10$.
}
\begin{indented}
\item[]\begin{tabular}{@{}llllllll}
\br
&$(3 3 3 1)$&$(4 3 2 1)$&$(5 3 2)$&$(5 4 1)$\\
\mr
$(3 3 3 1)$&1&0&0&0\\
$(4 3 2 1)$&$q-q^{5}$&1&0&0\\
$(4 3 3)$&$q^{2}$&$q$&0&0\\
$(5 3 2)$ &0&0&1&0\\
$(5 4 1)$&$q+q^{3}$&$q^{2}+q^{4}$&0&1\\
$(6 3 1)$&$2\ q^{2}$&$q^{3}$&0&$q$ \\
$(6 4)$&$q^{4}$&0&0&$q^{3}$\\
$(7 2 1)$&$q^{3}+q^{5}$&$q^{2}$&0&$q^{4}$\\
$(7 3)$&$q^{4 }$&$q^{3}$&0&$q^{5}$\\
$(8 2)$ & 0 & 0 & $q^2$ & 0 \\
$(9 1)$&$q^{4}$&$q^{5}$&0&0\\
$(10)$&$q^{6}$&0&0&0\\
\br
\end{tabular}
\end{indented}
\end{table}
In the Fock space representation of $A_{n-1}^{(1)}$,
the weight of a basis vector $|\lambda\>$ is determined by
the $n$-core of the partition $\lambda$ (and its degree) \cite{ANY,LLT}.
There is a similar result of Nakajima and Yamada \cite{NY}
for $A_{2n}^{(2)}$, in terms of the notion of $\overline{h}$-core
of a strict partition
introduced by Morris \cite{Mo1} in the context of the modular
representation theory of spin symmetric groups.
One way to see this is to use a theorem of \cite{MY1}
according to which $\lambda, \mu \in {\rm DP}(m)$
have the same $\overline{h}$-core if and only if
they have, for each $i$, the same number $n_i$ of nodes of residue $i$.
On the other hand, it follows from the
implementation of the Chevalley generators
that $|\lambda\>$ has $A_{2n}^{(2)}$-weight
$\Lambda_n - \sum_{0\le i \le n} n_i \alpha_i$,
and the statement follows.
The definition of $\overline{h}$-cores can be extended to ${\rm DP}_h$
by deciding that if $\lambda$ has repeated parts, its $\overline{h}$-core
is equal to that of the partition obtained by removing those repeated parts.
Then it is clear that if $|\lambda\>$ and $|\mu\>$ have the same
$U_q(A_{2n}^{(2)})$-weight, the two partitions $\lambda$ and $\mu$
have the same $\overline{h}$-core.
It follows, since $G(\mu)$ is obviously a weight vector, that its
expansion on the basis $|\lambda\>$ involves only partitions
$\lambda$ with the same $\overline{h}$-core as $\mu$.
Summarizing the discussion, we have:
\begin{theorem}\label{TH}
For $\mu \in {\rm DPR}_h(m)$, define $d_{\lambda\mu}(q)$ by
$\displaystyle G(\mu)=\sum_{\lambda \in {\rm DP}_h(m)} d_{\lambda\mu}(q)|\lambda\>$.
Then,
{\rm (i)} $d_{\lambda\mu}(q)\in {\bf Z}[q]$,

{\rm (ii)} $d_{\lambda\mu}(q)=0$ unless $\lambda\unrhd\mu$,
and $d_{\mu\mu}(q) = 1$,

{\rm (iii)} $d_{\lambda\mu}(q)=0$ unless $\lambda$ and $\mu$
have the same $\overline{h}$-core.
\end{theorem}
\section{The reduction $q=1$}
As observed by Kashiwara {\it et al.} \cite{KMPY}, to recover the
classical Fock space representation ${\cal F}$ of $A_{2n}^{(2)}$, one has
to introduce the inner product on ${\cal F}_q$ for
which the vectors $|\lambda\>$ are orthogonal and the adjoint
operators of the Chevalley generators are
\begin{equation} \label{ADJOINT}
f_i^{\dag} = q_i e_i t_i, \qquad
e_i^{\dag} = q_i f_i t_i^{-1}, \qquad
t_i^{\dag} = t_i.
\end{equation}
It can be checked that, for $\lambda \in {\rm DP}_h$,
\begin{equation} \label{NORM}
\<\lambda|\lambda\> = \prod_{k>0}\prod_{i=1}^{m_{kh}} (1-(-q^2)^i),
\end{equation}
where $m_{kh}$ is the multiplicity of the part $kh$ in $\lambda$.
Let ${\cal F}_1$ denote the $A_{2n}^{(2)}$-module obtained by specializing
$q$ to 1 as in \cite{KMPY}. This space is strictly larger than the classical Fock space
${\cal F}$, since the dimension of its $m$th homogeneous component
(in the principal gradation) is $|{\rm DP}_h(m)|$ whereas that of ${\cal F}$
is only $|{\rm DP}(m)|$.
Let ${\cal N} = {\cal F}_1^\perp$ denote the nullspace. It follows
from (\ref{ADJOINT}) that ${\cal N}$ is a $A_{2n}^{(2)}$-module, and
from (\ref{NORM}) that ${\cal N}$ is the subspace of ${\cal F}_1$
spanned by the wedge products $|\lambda\>$ labelled by $\lambda \in {\rm DP}_h - {\rm DP}$.
Therefore ${\cal F}_1/{\cal N}$ is a $A_{2n}^{(2)}$-module that can
be identified with ${\cal F}$.
In this identification one has, for
$\lambda=(\lambda_1,\ldots,\lambda_r) \in {\rm DP}$,
\begin{equation}
P_\lambda = 2^{\sum_{i=1}^r\lfloor (\lambda_i-1)/h \rfloor} |\lambda \>.
\end{equation}
The power of $2$ comes from the fact that if $\lambda_i = kh$ for $k>0$,
and $\nu$ denotes the partition obtained from $\lambda$ by replacing $\lambda_i$
by $\nu_i = \lambda_i+1$, then it follows from (\ref{FP}), (\ref{FNP})
that $f_n P_\lambda$ contains $P_\nu$ with
coefficient 1, while
$f_n |\lambda\>$ contains $|\nu\>$ with coefficient 2 by (\ref{ACTF}).
For later use we set
\begin{equation}\label{AHN}
a_h(\lambda) = \sum_{i=1}^r\left\lfloor {\lambda_i-1\over h} \right\rfloor \,.
\end{equation}
\section{Modular representations of ${\rm \widehat{S}}_m$}
We refer the reader to \cite{B} for an up-to-date review of the
representation theory of the spin symmetric groups
and their combinatorics.
Let ${\rm \widehat{S}}_m$ be the spin symmetric group as defined by Schur \cite{S},
that is,
the group of order $2\,m!$ with generators $z,s_1,\ldots,s_{m-1}$
and relations $z^2=1$, $zs_i=s_iz$, $s_i^2 = z$, $(1\le i\le m-1)$,
$s_is_j=zs_js_i$ ($|i-j|\ge 2$) and $(s_is_{i+1})^3=z$
($1\le i\le m-2)$.
On an irreducible representation of ${\rm \widehat{S}}_m$, the central element $z$ has to act
by $+1$ or by $-1$. The representations for which $z=1$ are
actually linear representations of the symmetric group ${\rm S}_m$,
and those with $z=-1$, called spin representations,
correspond to two-valued representations of ${\rm S}_m$. The irreducible spin
representations over a field of characteristic $0$
are labelled, up to association, by strict partitions
$\lambda\in {\rm DP}(m)$. More precisely, let ${\rm DP}_+(m)$ (resp. ${\rm DP}_-(m)$)
be the set of strict partitions of $m$ having an even (resp. odd)
number of even parts. Then, to each $\lambda\in{\rm DP}_+(m)$ corresponds
a self-associate irreducible spin character $\pr{\lambda}$, and to each
$\lambda\in{\rm DP}_-(m)$ a pair of associate irreducible spin characters denoted
by $\pr{\lambda}$ and $\pr{\lambda}'$.
According to Schur \cite{S}, the values $\pr{\lambda}(\rho)$
of the spin character $\pr{\lambda}$ on conjugacy classes of cycle-type
$\rho=(1^{m_1},3^{m_3},\ldots )$ are given by the expansion
of the symmetric function $P_\lambda$ on the basis of power sums,
namely
\begin{equation}
P_\lambda = \sum_\rho 2^{\lceil (\ell(\rho)-\ell(\lambda))/2\rceil}
\pr{\lambda}(\rho) {p_\rho\over z_\rho}
\end{equation}
where $z_\rho=\prod_j j^{m_j} m_j!$ and $\ell(\lambda)$ stands for the length
of $\lambda$, that is the number of parts of $\lambda$.
For $\lambda\in{\rm DP}(m)$, one introduces the self-associate spin character
\begin{equation}
\prh{\lambda} = \cases{\pr{\lambda} & if $\lambda\in{\rm DP}_+(m)$,\\
\pr{\lambda}+\pr{\lambda}'& if $\lambda\in{\rm DP}_-(m)$.\\}
\end{equation}
The branching theorem for spin characters of Morris \cite{Mo1} implies that
if $\prh{\lambda}$ gets identified with a weight vector of ${\cal F}$
by setting
\begin{equation}\label{IDENT}
P_\lambda = 2^{\lfloor (m - \ell(\lambda))/2\rfloor} \, \prh{\lambda} ,
\end{equation}
then the $b_\infty$-operator $f = \sum_{i\ge 0} f^{\infty}_i$ implements
the induction of self-associate spin characters
from ${\rm \widehat{S}}_m$ to ${\rm \widehat{S}}_{m+1}$.
Similarly, $e= e^{\infty}_0 + 2\sum_{i>0} e^{\infty}_i$ implements
the restriction from ${\rm \widehat{S}}_m$ to ${\rm \widehat{S}}_{m-1}$.
Thus, the Fock space representation of $b_\infty$ may be viewed
as the sum
${\cal F} = \bigoplus_m {\cal C}(m)$
of additive groups generated by self-associate spin characters
of ${\rm \widehat{S}}_m$ in characteristic 0.
In this setting, the Chevalley generators of $b_\infty$ act as
refined induction and restriction operators.
Now, similarly to the case $A_{n-1}^{(1)}$, the reduction from
$b_\infty$ to $A_{2n}^{(2)}$ parallels the reduction modulo $p=h=2n+1$
of representations of ${\rm \widehat{S}}_m$ (from now on we assume that $h$
is an odd prime).
More precisely, using (\ref{FP}), (\ref{FNP}) and (\ref{IDENT}),
one sees immediately that the Chevalley generators $f_i$ of
$A_{2n}^{(2)}$ act on $\prh{\lambda}$ as
the $(r,\overline{r})$-induction operators of Morris and
Yaseen $(r=n+1-i)$ \cite{MY}.
Hence the vectors of degree $m$ of
$V(\Lambda_n) = U(A_{2n}^{(2)})^-\,|0\>$ can be identified
with linear combinations of self-associate spin characters
obtained by a sequence of $(r,\overline{r})$-inductions.
It is known from modular representation theory that the maximal
number of linearly independent self-associate projective spin characters
of ${\rm \widehat{S}}_m$ in characteristic $p$ is equal to the number of partitions
of $m$ into odd summands prime to $p$.
Therefore the following result follows at once from (\ref{CHAR}).
\begin{theorem}
The self-associate projective spin characters
of ${\rm \widehat{S}}_m$ in characteristic $p$ are linear combinations
of characters obtained by a sequence of $(r,\overline{r})$-inductions.
\end{theorem}
This was proved by Bessenrodt {\it et al.} for $p=3$ \cite{BMO}
and Andrews {\it et al.} for $p=5$ \cite{ABO}, but the question
remained open for $p\ge 7$ \cite{B}.
Moreover, the construction of
Section~\ref{SECT4} gives an explicit basis for the space spanned
by such characters.
Denote by $\underline{A}(\mu)$ the column vector obtained
from $A(\mu)$ by reduction $q=1$ and expansion on the basis
$\prh{\lambda}$.
Then, $\underline{A}(\mu)$ is a projective character by
(\ref{defA}) and
$\{\underline{A}(\mu)\ | \ \mu \in {\rm DPR}_p(m) \}$ is a
basis of the ${\bf Q}$-vector space of self-associate projective spin characters
of ${\rm \widehat{S}}_m$ in characteristic $p$.
These observations and the results of \cite{LLT,Ar,Gr,LT} lead us to
formulate a conjecture relating the global basis of $V(\Lambda_n)$
and the decomposition matrices for spin characters of the groups ${\rm \widehat{S}}_m$.
Let $\mu\in {\rm DPR}_p(m)$ and let $\underline{G}(\mu)$ stand for the image of the
global basis $G(\mu)$ in ${\cal F}={\cal F}_1/{\cal N}$, that is,
\begin{equation}
\underline{G}(\mu) = \sum_{\lambda \in {\rm DP}(m)}
2^{b(\lambda) - a_p(\lambda)}
d_{\lambda\mu}(1) \prh{\lambda} \,,
\end{equation}
where $a_p(\lambda)$ is given by (\ref{AHN}) and
\begin{equation}
b(\lambda)= \left\lfloor {m - \ell(\lambda)\over 2}\right\rfloor\,.
\end{equation}
Then denote by $\underline{\underline{G}}(\mu)$ the vector obtained
by factoring out the largest power of $2$ dividing the coefficients
of $\underline{G}(\mu)$ on the basis $\prh{\lambda}$.
For simplicity of notation, we shall identify $\underline{\underline{G}}(\mu)$
with the column vector of its coordinates on $\prh{\lambda}$.
Finally, let us call reduced decomposition matrix of ${\rm \widehat{S}}_m$ in characteristic
$p$ the matrix obtained from the usual decomposition matrix for spin characters
by adding up pairs of associate columns and expanding the column vectors
so obtained on the basis $\prh{\lambda}$.
This is a matrix with $|{\rm DP}(m)|$ rows and $|{\rm DPR}_p(m)|$ columns.
The definition is illustrated in Table~\ref{TAB2} and Table~\ref{TAB3}.
(Table~\ref{TAB2} is taken from \cite{MY}, except for the column labels
which are ours and will be explained in the next section.)
\begin{table}
\caption{\label{TAB2}
The decomposition matrix of ${\rm \widehat{S}}_{10}$ in characteristic 3.}
\begin{indented}
\item[]\begin{tabular}{@{}llllllll}
\br
&(3331)&(3331)'&(4321)&(4321)'&(532)&(541)&(541)'\\
\mr
$\pr{4321}$ &0&0&1&1&0&0&0\\
$\pr{532}$ &0&0&0&0&1&0&0\\
$\pr{532}'$ &0&0&0&0&1&0&0\\
$\pr{541}$ &1&1&1&1&0&0&1\\
$\pr{541}'$ &1&1&1&1&0&1&0\\
$\pr{631}$ &2&2&1&1&0&1&1\\
$\pr{631}'$ &2&2&1&1&0&1&1\\
$\pr{64}$ &1&1&0&0&0&1&1\\
$\pr{721}$ &1&1&0&1&0&0&1\\
$\pr{721}'$ &1&1&1&0&0&1&0\\
$\pr{73}$ &1&1&1&1&0&1&1\\
$\pr{82}$ &0&0&0&0&1&0&0\\
$\pr{91}$ &1&1&1&1&0&0&0\\
$\pr{10}$ &0&1&0&0&0&0&0\\
$\pr{10}'$ &1&0&0&0&0&0&0\\
\br
\end{tabular}
\end{indented}
\end{table}
\begin{table}
\caption{\label{TAB3}
The reduced decomposition matrix of ${\rm \widehat{S}}_{10}$ in characteristic 3.}
\begin{indented}
\item[]\begin{tabular}{@{}lllll}
\br
&(3331)&(4321)&(532)&(541)\\
\mr
$\prh{4 3 2 1}$& 0 &2&0&0\\
$\prh{5 3 2}$ &0&0&1&0\\
$\prh{5 4 1}$&2&2&0&1\\
$\prh{6 3 1}$&4&2&0&2 \\
$\prh{6 4}$&2 &0&0&2\\
$\prh{7 2 1}$&2 &1 &0&1\\
$\prh{7 3}$&2 & 2&0& 2 \\
$\prh{8 2}$& 0&0&1&0\\
$\prh{9 1}$&2&2&0&0\\
$\prh{10}$&1 &0&0&0 \\
\br
\end{tabular}
\end{indented}
\end{table}
\begin{conjecture}
(i) The set of column vectors of the reduced decomposition matrix of ${\rm \widehat{S}}_m$
in odd characteristic $p$ such that $p^2 > m$
coincides with $\{\underline{\underline{G}}(\mu) \ | \ \mu \in {\rm DPR}_p(m)\}$.

(ii) For $p^2\le m$,
the reduced decomposition matrix of ${\rm \widehat{S}}_m$
is obtained by postmultiplying the matrix whose columns are
$\underline{\underline{G}}(\mu)$ by a unitriangular matrix with
nonnegative entries.
\end{conjecture}
Our conjecture has been checked on the numerical tables
computed by Morris and Yaseen ($p=3$) \cite{MY} and
Yaseen ($p=5,7,11$) \cite{Ya}.
Thus, for $p=3$, $m=11$, the columns of the reduced decomposition matrix
are
\[
\underline{\underline{G}}(3332),\
\underline{\underline{G}}(4331)+\underline{\underline{G}}(641),\
\underline{\underline{G}}(5321),\
\underline{\underline{G}}(542),\
\underline{\underline{G}}(641).
\]
\section{Labels for irreducible modular spin characters
and partition identities}
The labels for irreducible modular representations of symmetric
groups form a subset of the ordinary labels
\cite{JK}. It is therefore natural to look for a labelling scheme
for irreducible modular spin representations of
${\rm \widehat{S}}_m$ using a subset of ${\rm DP}(m)$. This was accomplished for
$p=3$ by Bessenrodt {\it et al.} \cite{BMO}, who found
that the Schur regular partitions of $m$ form a convenient
system of labels. These are the partitions
$\lambda=(\lambda_1,\ldots,\lambda_r)$ such that
$\lambda_i-\lambda_{i+1}\ge 3$ for $i=1,\ldots,r-1$, and
$\lambda_i -\lambda_{i+1}>3$ whenever $\lambda_i\equiv 0{\ \rm mod\ } 3$.
In \cite{BMO}, it was also conjectured that for $p=5$, the labels
should be the partitions $\lambda=(\lambda_1,\ldots,\lambda_r)$
satisfying the following conditions: (1) $\lambda_i>\lambda_{i+1}$
for $i\le r-1$,
(2) $\lambda_i -\lambda_{i+2} \ge 5$ for $i\le r-2$,
(3) $\lambda_i -\lambda_{i+2} > 5$ if $\lambda_i\equiv 0{\ \rm mod\ } 5$
or if $\lambda_i+\lambda_{i+1}\equiv 0{\ \rm mod\ } 5$ for $i\le r-2$,
and (4) there are no subsequences of the following types
(for some $j\ge 0$): $(5j+3,5j+2)$, $(5j+6,5j+4,5j)$,
$(5j+5,5j+1,5j-1)$, $(5j+6,5j+5,5j,5j-1)$.
This conjecture turned out to be equivalent to a
$q$-series identity conjectured long ago by Andrews
in the context of extensions of the Rogers-Ramanujan identities,
and was eventually proved by Andrews {\it et al.} \cite{ABO}.
The authors of \cite{ABO} observed however that such a labelling
scheme could not be extended to $p=7,11,13$ (see also \cite{B}).
In terms of canonical bases, the obstruction can be understood as
follows.
Assuming our conjecture and using the results of \cite{BMO,ABO},
one can see that for $p=3,5$, the labels of \cite{BMO} and \cite{ABO}
are exactly the partitions indexing the lowest nonzero entries
in the columns of the matrices $D_m(q) = [d_{\lambda\mu}(q)]_{\lambda,\mu\vdash m}$.
For example, in Table \ref{TAB1}, these are
$(10),(91),(82)$ and $(73)$, which are indeed the Schur regular partitions
of $10$. The problem is that for $p\ge 7$, it can happen that
two columns have the same partition indexing the lowest nonzero
entry. For example, with $p=7$ ($n=3$) and $m=21$, the two canonical basis
vectors
$
G(75432)
=
\ket{7 5 4 3 2}+q^{2}\ket{7 6 4 3 1}+q\ket{7 7 5 2}+q^{3}\ket{7 7 6 1}
+q^{2}\ket{8 6 4 3}+\left (q^{2}+q^{4}\right )\ket{8 6 5 2}+
q^{3}\ket{8 7 6}+q^{4}\ket{9 5 4 3}+\left (q^{4}+q^{6}\right )\ket{9 6 5 1}+
q^{5}\ket{9 7 5}
$
\noindent and
$
G(654321)
=
\ket{6 5 4 3 2 1}+q\ket{7 5 4 3 2}+ q\ket{7 6 4 3 1}+ q\ket{7 6 5 2 1}+
q^{2}\ket{7 7 4 3}+q^{2}\ket{7 7 5 2}+q^{2}\ket{7 7 6 1}+q^{3}\ket{7 7 7}+
\left (q^{3}+q^{5} \right )\ket{8 6 4 3}+
\left (q^{3}+q^{5}\right )\ket{8 6 5 2}+\left (q^{4}-q^{8}\right )\ket{8 7 6}+
\left (q^{3}+q^{5}\right )\ket{9 6 5 1}+\left (q^{4}+q^{6}\right )\ket{9 7 5}
$
\noindent have the same bottom partition $(975)$
(compare \cite{B}, end of Section 3).
On the other hand the partitions
indexing the highest nonzero entries in the columns of $D_m(q)$ are
the labels of the crystal graph (by Theorem \ref{TH}(ii)), so that
they are necessarily distinct. Therefore, we propose to use the set
\begin{eqnarray*}
{\rm DPR}_p(m) = &\{ \lambda = (\lambda_1,\ldots ,\lambda_r)\vdash m \ | \
0<\lambda_i-\lambda_{i+1}\le p \ {\rm if} \ \lambda_i \not \equiv 0 {\ \rm mod\ } p, \\
& 0 \le\lambda_i-\lambda_{i+1} < p \ {\rm if} \ \lambda_i \equiv 0 {\ \rm mod\ } p,
(1\le i \le r)\}
\end{eqnarray*}
for labelling the irreducible spin representations of ${\rm \widehat{S}}_m$
in characteristic $p$.
Indeed its definition is equally simple for all $p$.
Moreover, because of Theorem~\ref{TH}(iii), this labelling would be compatible with
the $p$-block structure, which can be read on the
$\overline{p}$-cores.
Also, it is adapted to the calculation of the vectors
$\underline{A}(\mu)$ which give an approximation to the
reduced decomposition matrix.
Finally, we note that since ${\rm DPR}_p$ provides the right number
of labels we have the following partition identity
\begin{equation}\label{PARTID}
\sum_{m\ge 0} | {\rm DPR}_p(m) | t^m
=
\prod_{\scriptstyle i \ {\rm odd} \atop \scriptstyle i\not\equiv 0{\ \rm mod\ } p}
{1\over 1-t^i}
\end{equation}
which for $p=3,5$ is a counterpart to the
Schur and Andrews-Bessenrodt-Olsson identities.
This happens to be a particular case of a theorem of Andrews
and Olsson \cite{AO}.
Namely, one gets (\ref{PARTID}) by taking
$A=\{1,2,3,\ldots ,p-1\}$ and $N=p$ in Theorem~2 of~\cite{AO}.
A combinatorial proof of a refinement of the Andrews-Olsson
partition identity has been given by Bessenrodt \cite{B1}.
One can also get a direct proof of (\ref{PARTID})
without using representation theory by
simply considering the bijections (\ref{MAP}).
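The identity~(\ref{PARTID}) is also easy to confirm numerically for small
$m$; the following self-contained Python check compares a brute-force
count of ${\rm DPR}_p(m)$ with the coefficients of the product.
\begin{verbatim}
def lhs(m, p):
    # |DPR_p(m)| by brute force, with the predicate defining DPR_p
    def ok(lam):
        lam = lam + (0,)
        for a, b in zip(lam, lam[1:]):
            d = a - b
            if a % p == 0:
                if not 0 <= d < p:
                    return False
            elif not 0 < d <= p:
                return False
        return True
    def gen(m, mx):
        if m == 0:
            yield ()
            return
        for f in range(min(m, mx), 0, -1):
            for rest in gen(m - f, f):
                yield (f,) + rest
    return sum(ok(lam) for lam in gen(m, m))

def rhs(m, p):
    # coefficient of t^m in the product over odd i, i not divisible by p,
    # of 1/(1 - t^i)
    ways = [1] + [0] * m
    for i in range(1, m + 1, 2):
        if i % p:
            for s in range(i, m + 1):
                ways[s] += ways[s - i]
    return ways[m]

assert all(lhs(m, 3) == rhs(m, 3) for m in range(1, 16))
assert all(lhs(m, 5) == rhs(m, 5) for m in range(1, 16))
\end{verbatim}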
\section{Discussion}
We have used the level 1 $q$-deformed Fock spaces of Kashiwara {\it et al.}
to compute the canonical basis of the basic representation
of $U_q(A_{2n}^{(2)})$, and we have formulated a conjectural
relation with the decomposition matrices of the spin symmetric groups
in odd characteristic $p=2n+1$.
As in the case of $A_{n-1}^{(1)}$,
it is reasonable to expect that in general,
that is when $2n+1$ is not required to be a prime, the canonical basis
is related to a certain family of Hecke
algebras at $(2n+1)$th roots of unity. A good candidate might be the
Hecke-Clifford superalgebra introduced by Olshanski \cite{Ol}.
The case of $2n$th roots of unity should then be related
to the Fock space representation of the affine Lie algebras
of type $D_{n+1}^{(2)}$.
In particular we believe that the fact used by Benson \cite{Be}
and Bessenrodt-Olsson \cite{BO} that the 2-modular irreducible
characters of ${\rm \widehat{S}}_m$ can be identified with the 2-modular irreducible
characters of ${\rm S}_m$ corresponds in the realm of affine Lie algebras
to the isomorphism $D_2^{(2)} \simeq A_1^{(1)}$.
\section*{Acknowledgements}
We thank T. Miwa and A.O. Morris for stimulating discussions,
and G.E. Andrews for bringing references \cite{AO,B1} to
our attention.
\section*{References}
\section{Introduction}
Multigrid algorithms are effective in the solution of elliptic
problems and have found many applications, especially in fluid
mechanics \cite[e.g.]{Mavriplis97}, chemical reactions in flows
\cite[e.g.]{Sheffer98} and flows in porous media \cite{Moulton98}.
Typically, errors in a solution may decrease by a factor of 0.1 each
iteration \cite[e.g.]{Mavriplis97b}. The simple algorithms I present
decrease errors by a factor of $0.05$ (see Tables~\ref{tbl:2d}
and~\ref{tbl:3dopt} on pages~\pageref{tbl:2d}
and~\pageref{tbl:3dopt}). Further gains in the rate of convergence
may come from future research.
Conventional multigrid algorithms use a hierarchy of grids whose grid
spacings are all proportional to $2^{-\ell}$ where $\ell$ is the level
of the grid \cite[e.g.]{Zhang98}. The promising possibility I report
on here is the use of a richer hierarchy of grids with levels of the
grids oriented diagonally to other levels. Specifically, in 2D I
introduce in Section~\ref{S2d} a hierarchy of grids with grid spacings
proportional to $2^{-\ell/2}$ and with grids aligned at $45^\circ$ to
adjacent levels, see Figure~\ref{Fgrid2d}
(p\pageref{Fgrid2d}).\footnote{This paper is best viewed and printed
in colour as the grid diagrams and the ensuing discussions are all
colour coded.} In 3D the geometry of the grids is much more
complicated. In Section~\ref{S3d} we introduce and analyse a
hierarchy of 3D grids with grid spacings roughly $2^{-\ell/3}$ on the
different levels, see Figure~\ref{Famal} (p\pageref{Famal}). Now
Laplace's operator is isotropic so that its discretisation is
straightforward on these diagonally oriented grids. Thus in this
initial work I explore only the solution of Poisson's equation
\begin{equation}
\nabla^2 u=f\,.
\label{Epois}
\end{equation}
Given an approximation $\tilde u$ to a solution, each complete
iteration of a multigrid scheme seeks a correction $v$ so that
$u=\tilde u+v$ is a better approximation to a solution of Poisson's
equation~(\ref{Epois}). Consequently the update $v$ has to
approximately satisfy a Poisson's equation itself, namely
\begin{equation}
\nabla^2v=r\,,
\quad\mbox{where}\quad
r=f-\nabla^2\tilde u\,,
\label{Evpois}
\end{equation}
is the residual of the current approximation. The multigrid
algorithms aim to estimate the error $v$ as accurately as possible
from the residual $r$. Accuracy in the ultimate solution $u$ is
determined by the accuracy of the spatial discretisation in the
computation of the residual $r$: here we investigate second-order and
fourth-order accurate discretisations \cite[e.g.]{Zhang98} but so far
only find remarkably rapid convergence for second-order
discretisations.
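For concreteness, the second-order residual evaluation on the finest grid
(the $7\,\mbox{flops}$ per point counted in the comparison below) may be
sketched in Python with numpy as follows; boundary values are assumed
held fixed (Dirichlet data), so only interior points are updated.
\begin{verbatim}
import numpy as np

def residual_5pt(u, f, h):
    # second-order residual r = f - lap(u) at interior points
    r = np.zeros_like(u)
    r[1:-1, 1:-1] = f[1:-1, 1:-1] - (
        u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
        - 4.0 * u[1:-1, 1:-1]) / h**2
    return r
\end{verbatim}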
The diagonal grids employed here are perhaps an alternative to the
semi-coarsening hierarchy of multigrids used by Dendy
\cite{Dendy97} in more difficult problems.
In this initial research we only examine the simplest reasonable
V-cycle on the special hierarchy of grids and use only one Jacobi
iteration on each grid. We find in Sections~\ref{SSsr2}
and~\ref{SSsr3} that the smoothing restriction step from one grid to
the coarser diagonally orientated grid is done quite simply. Yet the
effective smoothing operator from one level to that a factor of 2
coarser, being the convolution of two or three intermediate steps, is
relatively sophisticated. One saving in using these diagonally
orientated grids is that there is no need to do any interpolation.
Thus the transfer of information from a coarser to a finer grid only
involves the simple Jacobi iterations described in
Sections~\ref{SSjp2} and~\ref{SSjp3}. Performance is enhanced within
this class of simple multigrid algorithms by a little over relaxation
in the Jacobi iteration as found in Sections~\ref{SSopt2}
and~\ref{SSopt3}. The proposed multigrid algorithms are found to be
up to twice as fast as comparably simple conventional multigrid
algorithms.
\section{A diagonal multigrid for the 2D Poisson equation}
\label{S2d}
\begin{figure}[tbp]
\centering
\includegraphics{grid2d.eps} \caption{three levels of grids in the
2D multigrid hierarchy: the dotted green grid is the finest,
spacing $h$ say; the dashed red grid is the next finest diagonal
grid with spacing $\sqrt2h$; the solid blue grid is the coarsest
shown grid with spacing $2h$. Coarser levels of the multigrid
follow the same pattern.}
\label{Fgrid2d}
\end{figure}
To approximately solve Poisson's equation~(\ref{Evpois}) in
two-dimensions we use a novel hierarchy of grids in the multigrid
method. The length scales of the grid are $2^{-\ell/2}$. If the
finest grid is aligned with the coordinate axes with grid spacing $h$
say, the first coarser grid is at $45^\circ$ with spacing $\sqrt2h$, the
second coarser is once again aligned with the axes and of spacing $2h$,
as shown in Figure~\ref{Fgrid2d}, and so on for all other levels on
the multigrid. In going from one level to the next coarser level the
number of grid points halves.
\subsection{The smoothing restriction}
\label{SSsr2}
\begin{figure}[tbp]
\centering
{\tt \setlength{\unitlength}{0.25ex}
\begin{picture}(270,130)
\thicklines {\color{red}
\put(220,60){\line(1,-1){30}}
\put(170,110){\line(1,-1){30}}
\put(220,80){\line(1,1){30}}
\put(170,30){\line(1,1){30}}
\put(250,110){\framebox(20,20){1/8}}
\put(250,10){\framebox(20,20){1/8}}
\put(150,110){\framebox(20,20){1/8}}
\put(150,10){\framebox(20,20){1/8}}
}
\put(200,60){\color{blue}\framebox(20,20){1/2}}
{\color{green}
\put(70,30){\line(0,1){30}}
\put(30,70){\line(1,0){30}}
\put(80,70){\line(1,0){30}}
\put(70,80){\line(0,1){30}}
\put(60,10){\framebox(20,20){1/8}}
\put(110,60){\framebox(20,20){1/8}}
\put(60,110){\framebox(20,20){1/8}}
\put(10,60){\framebox(20,20){1/8}}
}
\put(60,60){\color{red}\framebox(20,20){1/2}}
\end{picture}}
\caption{restriction stencils are simple weighted averages of
neighbouring grid points on all levels of the grid.}
\label{Erest2}
\end{figure}
The restriction operator smoothing the residual from one grid to the next
coarser grid is the same at all levels. It is simply a weighted
average of the grid point and the four nearest neighbours on the finer
grid as shown in Figure~\ref{Erest2}. To restrict from a fine green
grid to the diagonal red grid
\begin{equation}
r_{i,j}^{\ell-1}=\frac{1}{8}\left( 4r_{i,j}^\ell +r_{i-1,j}^\ell
+r_{i,j-1}^\ell +r_{i+1,j}^\ell +r_{i,j+1}^\ell \right)\,,
\label{Erest2r}
\end{equation}
whereas to restrict from a diagonal red grid to the coarser blue grid
\begin{equation}
r_{i,j}^{\ell-1}=\frac{1}{8}\left( 4r_{i,j}^\ell +r_{i-1,j-1}^\ell
+r_{i+1,j-1}^\ell +r_{i+1,j+1}^\ell +r_{i-1,j+1}^\ell \right)\,.
\label{Erest2b}
\end{equation}
Each of these restrictions takes $6\,\mbox{flops}$ per grid element. Thus
assuming the finest grid is $n\times n$ with $N=n^2$ grid points, the
restriction to the next finer diagonal grid (red) takes approximately
$3N\,\mbox{flops}$, the restriction to the next finer takes approximately
$3N/2\,\mbox{flops}$, etc. Thus to restrict the residuals up $\ell=2L$ levels
to the coarsest grid spacing of $H=2^Lh$ takes
\begin{equation}
K_r\approx 6N\left(1-\frac{1}{4^L}\right)\,\mbox{flops} \approx 6N\,\mbox{flops}\,.
\label{Ekrest2}
\end{equation}
In contrast a conventional nine point restriction operator from one
level to another takes $11\,\mbox{flops}$ per grid point, which then totals to
approximately $3\frac{2}{3}N\,\mbox{flops}$ over the whole conventional
multigrid hierarchy. This is somewhat better than the proposed
scheme, but we make gains elsewhere. In restricting from the green
grid to the blue grid, via the diagonal red grid, the restriction
operation is equivalent to a 17-point stencil with a much richer and
more effective smoothing than the conventional 9-point stencil.
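Both restriction averages, (\ref{Erest2r}) and~(\ref{Erest2b}), vectorise
immediately. The following numpy sketch applies the two stencils at all
interior points of a square array; a full implementation would in
addition keep only the points that survive on the coarser grid, a
bookkeeping detail omitted here.
\begin{verbatim}
import numpy as np

def restrict_axis_to_diagonal(r):
    # 4*r plus the north, south, east and west neighbours, divided by 8
    s = np.zeros_like(r)
    s[1:-1, 1:-1] = (4.0 * r[1:-1, 1:-1]
                     + r[:-2, 1:-1] + r[2:, 1:-1]
                     + r[1:-1, :-2] + r[1:-1, 2:]) / 8.0
    return s

def restrict_diagonal_to_axis(r):
    # 4*r plus the four diagonal neighbours, divided by 8
    s = np.zeros_like(r)
    s[1:-1, 1:-1] = (4.0 * r[1:-1, 1:-1]
                     + r[:-2, :-2] + r[2:, :-2]
                     + r[:-2, 2:] + r[2:, 2:]) / 8.0
    return s
\end{verbatim}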
\subsection{The Jacobi prolongation}
\label{SSjp2}
\begin{figure}[tbp]
\centering
\includegraphics[width=\textwidth]{prol2d.eps}
\caption{the interpolation in a prolongation step is replaced
by simply a ``red-black'' Jacobi iteration: (a) compute the new
values at the red grid points, then refine the values at the blue
points; (b) compute the new values at the green points, then refine
those at the red points.}
\label{Fprol2d}
\end{figure}
One immediate saving is that there is no need to interpolate in the
prolongation step from one level to the next finer level. For
example, to prolongate from the blue grid to the finer diagonal red grid,
shown in Figure~\ref{Fprol2d}(a), estimate the new value of $v$ at
the red grid points on level $\ell$ by the red-Jacobi iteration
\begin{equation}
v_{i,j}^\ell=\frac{1}{4}\left( -2h^2r_{i,j}^\ell +v_{i-1,j-1}^{\ell-1}
+v_{i+1,j-1}^{\ell-1} +v_{i+1,j+1}^{\ell-1} +v_{i-1,j+1}^{\ell-1}
\right)\,,
\label{Ejacr}
\end{equation}
when the grid spacing on the red grid is $\sqrt2h$. Then the values
at the blue grid points are refined by the blue-Jacobi iteration
\begin{equation}
v_{i,j}^\ell=\frac{1}{4}\left( -2h^2r_{i,j}^\ell +v_{i-1,j-1}^\ell
+v_{i+1,j-1}^\ell +v_{i+1,j+1}^\ell +v_{i-1,j+1}^\ell \right)\,.
\label{Ejacb}
\end{equation}
A similar green-red Jacobi iteration will implicitly prolongate from
the red grid to the finer green grid shown in Figure~\ref{Fprol2d}(b).
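These sweeps also vectorise. The sketch below evaluates the
diagonal-neighbour Jacobi iteration at all interior points, leaving the
checkerboard (red then blue) masking to the caller; it includes the over
relaxation weight $p$ of Section~\ref{SSopt2}, with $p=1$ recovering
(\ref{Ejacr})--(\ref{Ejacb}).
\begin{verbatim}
import numpy as np

def jacobi_diagonal(v, r, h, p=1.0):
    # one Jacobi sweep on a diagonally oriented grid of spacing sqrt(2)*h:
    # v <- ( -2*p*h^2*r + sum of the four diagonal neighbours ) / 4
    w = v.copy()
    w[1:-1, 1:-1] = (-2.0 * p * h**2 * r[1:-1, 1:-1]
                     + v[:-2, :-2] + v[2:, :-2]
                     + v[:-2, 2:] + v[2:, 2:]) / 4.0
    return w
\end{verbatim}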
These prolongation-iteration steps take $6\,\mbox{flops}$ per grid point.
Thus to go from the red to the green grid takes $6N\,\mbox{flops}$. As each
level of the grid has half as many grid points as the next finer, the
total operation count for the prolongation over the hierarchy from
grid spacing $H=2^Lh$ is
\begin{equation}
K_p\approx 12N\left( 1-\frac{1}{4^L} \right)\,\mbox{flops} \approx 12N\,\mbox{flops}\,.
\label{Ekprol2}
\end{equation}
The simplest (bilinear) conventional interpolation direct from the
blue grid to the green grid would take approximately $2N\,\mbox{flops}$, to be
followed by $6N\,\mbox{flops}$ for a Jacobi iteration on the fine green grid
(using simply $\nu_1=0$ and $\nu_2=1$). Over the whole hierarchy this
takes approximately $10\frac{2}{3}N\,\mbox{flops}$. This is a little smaller
than that proposed here, but the proposed diagonal method achieves
virtually two Jacobi iterations instead of just one and so is more
effective.
\subsection{The V-cycle converges rapidly}
Numerical experiments show that although the operation count of the
proposed algorithm is a little higher than the simplest usual
multigrid scheme, the speed of convergence is much better. The
algorithm performs remarkably well on test problems such as those in
Gupta et al.\ \cite{Gupta97}. I report a quantitative comparison
between the algorithms that shows the diagonal scheme proposed here is
about twice as fast.
Both the diagonal and usual multigrid algorithms use $7N\,\mbox{flops}$ to
compute the residuals on the finest grid. Thus the proposed method
takes approximately $25N\,\mbox{flops}$ per V-cycle of the multigrid
iteration; although this is 17\% more than the $21\frac{1}{3}N\,\mbox{flops}$
of the simplest conventional algorithm, the convergence is much faster.
Table~\ref{tbl:2d} shows the rate of convergence $\bar\rho_0\approx
0.1$ for this diagonal multigrid based algorithm. The data is
determined using \matlab's sparse eigenvalue routine to find the
largest eigenvalue and hence the slowest decay on a $65\times 65$
grid. This should be more accurate than limited analytical methods
such as a bi-grid analysis \cite{Ibraheem96}. Compared with
correspondingly simple schemes based upon the usual hierarchy of
grids, the method proposed here takes much fewer iterations, even
though each iteration is a little more expensive, and so should be
about twice as fast.
\begin{table}[tbp]
\centering
\caption{comparison of cost, in flops, and performance for various
algorithms for solving Poisson's equation in two spatial
dimensions. The column headed ``per iter'' shows the number of
flops per iteration, whereas columns showing ``per dig'' are
$\,\mbox{flops}/\log_{10}\bar\rho$ and indicate the number of flops needed
to compute each decimal digit of accuracy. The right-hand columns
show the performance for the optimal over relaxation parameter
$p$.}
\begin{tabular}{|l|r|lr|llr|}
\hline
algorithm & per iter & $\bar\rho_0$ & per dig & $p$ &
$\bar\rho$ &
per dig \\
\hline
diagonal, $\Ord{h^2}$ & $25.0N$ & .099 & $25.0N$ & 1.052 &
.052 & $19.5N$ \\
usual, $\Ord{h^2}$ & $21.3N$ & .340 & $45.5N$ & 1.121 &
.260 & $36.4N$ \\
\hline
diagonal, $\Ord{h^4}$ & $30.0N$ & .333 & $62.8N$ & 1.200 &
.200 & $42.9N$ \\
usual, $\Ord{h^4}$ & $26.3N$ & .343 & $56.6N$ & 1.216 &
.216 & $39.4N$ \\
\hline
\end{tabular}
\label{tbl:2d}
\end{table}
Fourth-order accurate solvers in space may be obtained using the above
second-order accurate V-cycle as done by Iyengar \& Goyal
\cite{Iyengar90}. The only necessary change is to compute the
residual $r$ in~(\ref{Evpois}) on the finest grid with a fourth-order
accurate scheme, such as the compact ``Mehrstellen'' scheme
\begin{eqnarray}
r_{i,j}&=&\frac{1}{12}\left( 8f_{i,j}
+f_{i+1,j} +f_{i,j+1} +f_{i-1,j} +f_{i,j-1} \right)
\nonumber\\&&{}
-\frac{1}{6h^2}\left[ -20u_{i,j}
+4\left(u_{i,j-1} +u_{i,j+1} +u_{i-1,j} +u_{i+1,j}\right)
\right.\nonumber\\&&\quad\left.{}
+u_{i+1,j+1} +u_{i-1,j+1} +u_{i-1,j-1} +u_{i+1,j-1}
\right]\,.
\label{Efos2}
\end{eqnarray}
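In numpy the Mehrstellen residual~(\ref{Efos2}) is again a short stencil
evaluation at interior points; as before, boundary values are assumed
held fixed.
\begin{verbatim}
import numpy as np

def residual_mehrstellen(u, f, h):
    # fourth-order compact residual at interior points
    r = np.zeros_like(u)
    r[1:-1, 1:-1] = (
        (8.0 * f[1:-1, 1:-1]
         + f[2:, 1:-1] + f[1:-1, 2:] + f[:-2, 1:-1] + f[1:-1, :-2]) / 12.0
        - (-20.0 * u[1:-1, 1:-1]
           + 4.0 * (u[1:-1, :-2] + u[1:-1, 2:] + u[:-2, 1:-1] + u[2:, 1:-1])
           + u[2:, 2:] + u[:-2, 2:] + u[:-2, :-2] + u[2:, :-2]) / (6.0 * h**2))
    return r
\end{verbatim}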
Use the V-cycles described above to determine an approximate
correction $v$ to the field $u$ based upon these more accurate
residuals. The operation count is solely increased by the increased
computation in the residual, from $7N\,\mbox{flops}$ per iteration to
$12N\,\mbox{flops}$ (the combination of $f$ appearing on the right-hand side
of~(\ref{Efos2}) need not be computed each iteration). Numerical
experiments summarised in Table~\ref{tbl:2d} show that the multigrid
methods still converge, but the diagonal method has lost its
advantages. Thus fourth-order accurate solutions to Poisson's
equation are most quickly obtained by initially using the diagonal
multigrid method applied to the second-order accurate computation of
residuals, and then using a few multigrid iterations based upon the
fourth-order residuals to refine the numerical solution.
\subsection{Optimise parameters of the V-cycle}
\label{SSopt2}
The multigrid iteration is improved by introducing a small amount of
over relaxation.
First we considered the multigrid method applied to the second-order
accurate residuals. Numerical optimisation over a range of
introduced parameter values suggested that the simplest, most robust
and effective change was simply to introduce a parameter $p$ into the
Jacobi iterations~(\ref{Ejacr}--\ref{Ejacb}) to become
\begin{eqnarray}
v_{i,j}^\ell&=&\frac{1}{4}\left( -2ph^2r_{i,j}^\ell
+v_{i-1,j-1}^{\ell-1}
+v_{i+1,j-1}^{\ell-1} +v_{i+1,j+1}^{\ell-1} +v_{i-1,j+1}^{\ell-1}
\right)\,,
\label{Eajacr}\\
v_{i,j}^\ell&=&\frac{1}{4}\left( -2ph^2r_{i,j}^\ell +v_{i-1,j-1}^\ell
+v_{i+1,j-1}^\ell +v_{i+1,j+1}^\ell +v_{i-1,j+1}^\ell \right)\,,
\label{Eajacb}
\end{eqnarray}
on a diagonal red grid and similarly for a green grid. An optimal
value of $p$ was determined to be $p=1.052$. The parameter $p$ just
increases the weight of the residuals at each level by about 5\%.
This simple change, which does not increase the operation count,
improves the factor of convergence to $\bar\rho\approx 0.052$, which
decreases the necessary number of iterations to achieve a given
accuracy. As Table~\ref{tbl:2d} shows, this diagonal multigrid is
still far better than the usual multigrid even with its optimal choice
for over relaxation.
Then we considered the multigrid method applied to the fourth-order
accurate residuals. Numerical optimisation of the parameter $p$
in~(\ref{Eajacr}--\ref{Eajacb}) suggests that significantly more
relaxation is preferable, namely $p\approx 1.20$. With this one
V-cycle of the multigrid method generally reduces the residuals by a
factor $\bar\rho\approx 0.200$. This simple refinement reduces the
number of iterations required by about one-third in converging to the
fourth-order accurate solution.
\section{A diagonal multigrid for the 3D Poisson equation}
\label{S3d}
The hierarchy of grids we propose for solving Poisson's
equation~(\ref{Evpois}) in three-dimensions is significantly more
complicated than that in two-dimensions. Figure~\ref{Famal} shows the
three steps between levels that will be taken to go from a fine
standard grid (green) of spacing $h$, via two intermediate grids (red
and magenta), to a coarser regular grid (blue) of spacing $2h$. As we
shall discuss below, there is some unevenness in the hierarchy that
needs special treatment.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.9\textwidth]{grid3dgrmb.eps}
\caption{one cell of an amalgam of four levels of the hierarchy of
grids used to form the multigrid V-cycle in 3D: green is the finest
grid shown; red is the next level coarser grid; magenta shows the next
coarser grid; and the blue cube is the coarsest to be shown. This
stereoscopic view is to be viewed cross-eyed as this seems to be more
robust to changes of viewing scale.}
\label{Famal}
\end{figure}
\subsection{The smoothing restriction steps}
\label{SSsr3}
The restriction operation in averaging the residuals from one grid to
the next coarser grid is reasonably straightforward.
\begin{itemize}
\item
\begin{figure}[tbp]
\centering
\includegraphics[width=0.9\textwidth]{grid3dgr.eps}
\caption{the green and red grids superimposed showing the nodes
of the red grid at the corners and faces of the cube, and their
relationship to their six neighbouring nodes on the finer green grid.}
\label{Fggr}
\end{figure}
The nodes of the red grid are at the corners of the cube and the
centre of each of the faces as seen in Figure~\ref{Fggr}. They
each have six neighbours on the green grid so the natural
restriction averaging of the residuals onto the red grid is
\begin{eqnarray}
r_{i,j,k}^{\ell-1}&=&\frac{1}{12}\left( 6r_{i,j,k}^\ell
+r_{i+1,j,k}^\ell +r_{i-1,j,k}^\ell +r_{i,j+1,k}^\ell
+r_{i,j-1,k}^\ell
+\right.\nonumber\\&&\left.\quad{}
+r_{i,j,k+1}^\ell +r_{i,j,k-1}^\ell \right)\,,
\label{Erred}
\end{eqnarray}
for $(i,j,k)$ corresponding to the (red) corners and faces of the
coarse (blue) grid. When the fine green grid is $n\times n\times
n$ so that there are $N=n^3$ unknowns on the fine green grid, this
average takes $8\,\mbox{flops}$ for each of the approximately $N/2$ red
nodes. This operation count totals $4N\,\mbox{flops}$.
Note that throughout this discussion of restriction from the green
to blue grids via the red and magenta, we index variables using
subscripts appropriate to the fine green grid. This also holds for
the subsequent discussion of the prolongation from blue to green grids.
\item
\begin{figure}[tbp]
\centering
\includegraphics[width=0.9\textwidth]{grid3drm.eps}
\caption{the red and magenta grids superimposed showing the
nodes of the magenta grid at the corners and the centre of the
(blue) cube.}
\label{Frmag}
\end{figure}
The nodes of the next coarser grid, magenta, are at the corners and
centres of the cube as seen in Figure~\ref{Frmag}. Observe that the
centre nodes of the magenta grid are not also nodes of the finer red
grid; this causes some complications in the treatment of the two
types of nodes. The magenta nodes at the corners are connected to
twelve
neighbours on the red grid so the natural average of the residuals
is
\begin{eqnarray}
r_{i,j,k}^{\ell-1}&=&\frac{1}{24}\left( 12r_{i,j,k}^\ell
+r_{i+1,j+1,k}^\ell +r_{i+1,j-1,k}^\ell
+r_{i-1,j-1,k}^\ell +r_{i-1,j+1,k}^\ell
+\right.\nonumber\\&&\left.\quad{}
+r_{i+1,j,k+1}^\ell +r_{i+1,j,k-1}^\ell
+r_{i-1,j,k-1}^\ell +r_{i-1,j,k+1}^\ell
+\right.\nonumber\\&&\left.\quad{}
+r_{i,j+1,k+1}^\ell +r_{i,j+1,k-1}^\ell
+r_{i,j-1,k-1}^\ell +r_{i,j-1,k+1}^\ell
\right)\,,
\label{Ermagc}
\end{eqnarray}
for $(i,j,k)$ corresponding to the magenta corner nodes. This
average takes $14\,\mbox{flops}$ for each of $N/8$ nodes. The magenta
node at the centre of the coarse (blue) cube is not connected to
red nodes by the red grid, see Figure~\ref{Frmag}. However, it
has six red nodes in close proximity, those at the centre
of the faces, so the natural average is
\begin{equation}
r_{i,j,k}^{\ell-1}=\frac{1}{6}\left( r_{i+1,j,k}^\ell
+r_{i-1,j,k}^\ell
+r_{i,j+1,k}^\ell +r_{i,j-1,k}^\ell +r_{i,j,k+1}^\ell
+r_{i,j,k-1}^\ell
\right)\,,
\label{Ermagm}
\end{equation}
for $(i,j,k)$ corresponding to the magenta centre nodes. This
averaging takes $6\,\mbox{flops}$ for each of $N/8$ nodes. The operation
count for all of this restriction step from red to magenta is
$2\frac{1}{2}N\,\mbox{flops}$.
\item
\begin{figure}[tbp]
\centering
\includegraphics[width=0.9\textwidth]{grid3dmb.eps}
\caption{the magenta and blue grids superimposed showing the
common nodes at the corners of the blue grid and the
connections to
the magenta centre node.}
\label{Frblu}
\end{figure}
The nodes of the coarse blue grid are at the corners of the shown
cube, see Figure~\ref{Frblu}. On the magenta grid they are
connected to eight neighbours, one for each octant, so the natural
average of residuals from the magenta to the blue grid is
\begin{eqnarray}
r_{i,j,k}^{\ell-1}&=&\frac{1}{16}\left( 8r_{i,j,k}^\ell
+r_{i+1,j+1,k+1}^\ell +r_{i+1,j+1,k-1}^\ell
+r_{i+1,j-1,k+1}^\ell
+\right.\nonumber\\&&\left.\quad{}
+r_{i+1,j-1,k-1}^\ell
+r_{i-1,j+1,k+1}^\ell +r_{i-1,j+1,k-1}^\ell
+\right.\nonumber\\&&\left.\quad{}
+r_{i-1,j-1,k+1}^\ell +r_{i-1,j-1,k-1}^\ell
\right)\,,
\label{Erblu}
\end{eqnarray}
for $(i,j,k)$ corresponding to the blue corner nodes. This
averaging takes $10\,\mbox{flops}$ for each of $N/8$ blue nodes which thus
totals $1\frac{1}{4}N\,\mbox{flops}$.
\end{itemize}
These three restriction steps, to go up three levels of grids, thus
total approximately $7\frac{3}{4}N\,\mbox{flops}$. Hence, summing the
geometric series $7\frac{3}{4}N\sum_{\ell=0}^{L-1}8^{-\ell}$ (each
triple of levels works on one-eighth as many unknowns as the previous
one), the entire restriction process, averaging the residuals, from a
finest grid of spacing $h$ up $3L$ levels to the coarsest grid of
spacing $H=2^Lh$ takes
\begin{equation}
K_r\approx\frac{62}{7}N\left( 1-\frac{1}{8^L} \right)\,\mbox{flops} \approx
{\textstyle 8\frac{6}{7}}N\,\mbox{flops}\,.
\label{Ekrest3}
\end{equation}
The simplest standard one-step restriction direct from the fine green
grid to the blue grid takes approximately $3\frac{3}{4}N\,\mbox{flops}$.
Over the whole hierarchy this totals $4\frac{2}{7}N\,\mbox{flops}$ which is
roughly half that of the proposed method. We anticipate that rapid
convergence of the V-cycle makes the increase worthwhile.
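To fix ideas, the first of these restriction steps, the green-to-red
average \eqref{Erred}, transcribes into the following minimal sketch
(our illustration; red nodes are assumed to be those with $i+j+k$ even,
and boundary layers are skipped for brevity):
\begin{verbatim}
import numpy as np

def restrict_green_to_red(r):
    # Average residuals from the fine green grid onto the red
    # nodes using the 7-point weighting of (Erred).
    out = np.zeros_like(r)
    nx, ny, nz = r.shape
    for i in range(1, nx - 1):
        for j in range(1, ny - 1):
            for k in range(1, nz - 1):
                if (i + j + k) % 2 == 0:      # assumed red node
                    out[i, j, k] = (6.0 * r[i, j, k]
                        + r[i+1, j, k] + r[i-1, j, k]
                        + r[i, j+1, k] + r[i, j-1, k]
                        + r[i, j, k+1] + r[i, j, k-1]) / 12.0
    return out
\end{verbatim}
The $8\,\mbox{flops}$ per red node quoted above are visible directly:
six additions, one multiplication by $6$, and one division by $12$.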
\subsection{The Jacobi prolongation steps}
\label{SSjp3}
As in 2D, with this rich structure of grids we have no need to
interpolate when prolongating from a coarse grid onto a finer grid; an
appropriate ``red-black'' Jacobi iteration of the residual
equation~(\ref{Evpois}) avoids interpolation. Given an estimate of
corrections $v_{i,j,k}^\ell$ at some blue level grid we proceed to the
finer green grid via the following three prolongation steps.
\begin{itemize}
\item Perform a magenta-blue Jacobi iteration on the nodes of the
magenta grid shown in Figure~\ref{Frblu}. See that each node on
the magenta grid is connected to eight neighbours distributed
symmetrically about it, each contributes to an estimate of the
Laplacian at the node. Thus, given initial approximations on the
blue nodes from the coarser blue grid,
\begin{eqnarray}
v_{i,j,k}^\ell&=&\frac{1}{8}\left( -4p_mh^2r_{i,j,k}^\ell
+v_{i+1,j+1,k+1}^{\ell-1} +v_{i+1,j+1,k-1}^{\ell-1}
+v_{i+1,j-1,k+1}^{\ell-1}
+\right.\nonumber\\&&\left.\quad{}
+v_{i+1,j-1,k-1}^{\ell-1}
+v_{i-1,j+1,k+1}^{\ell-1} +v_{i-1,j+1,k-1}^{\ell-1}
+\right.\nonumber\\&&\left.\quad{}
+v_{i-1,j-1,k+1}^{\ell-1} +v_{i-1,j-1,k-1}^{\ell-1}
\right)\,,
\label{Emprolm}
\end{eqnarray}
for $(i,j,k)$ on the centre magenta nodes. The following blue-Jacobi
iteration uses these updated values in the similar formula
\begin{eqnarray}
v_{i,j,k}^\ell&=&\frac{1}{8}\left( -4p_mh^2r_{i,j,k}^\ell
+v_{i+1,j+1,k+1}^{\ell} +v_{i+1,j+1,k-1}^{\ell}
+v_{i+1,j-1,k+1}^{\ell}
+\right.\nonumber\\&&\left.\quad{}
+v_{i+1,j-1,k-1}^{\ell}
+v_{i-1,j+1,k+1}^{\ell} +v_{i-1,j+1,k-1}^{\ell}
+\right.\nonumber\\&&\left.\quad{}
+v_{i-1,j-1,k+1}^{\ell} +v_{i-1,j-1,k-1}^{\ell}
\right)\,,
\label{Emprolb}
\end{eqnarray}
for $(i,j,k)$ on the corner blue nodes. In these formulae the
over relaxation parameter $p_m$ has been introduced for later fine
tuning; initially take $p_m=1$. The operation count for this
magenta-blue Jacobi iteration is $10\,\mbox{flops}$ on each of $N/4$ nodes
giving a total of $2\frac{1}{2}N\,\mbox{flops}$.
\item Perform a red-magenta Jacobi iteration on the nodes of the
red grid shown in Figure~\ref{Frmag}. However, because the centre
node (magenta) is not on the red grid, two features follow: it is
not updated in this prolongation step; and it introduces a little
asymmetry into the weights used for values at the nodes. The red
nodes in the middle of each face are surrounded by four magenta
nodes at the corners and two magenta nodes at the centres of the
cube. However, the nodes at the centres are closer and so have twice
the weight in the estimate of the Laplacian. Hence, given initial
approximations on the magenta nodes from the coarser grid,
\begin{eqnarray}
v_{i,j,k}^{\ell}&=&\frac{1}{8}\left( -2p_{r1}h^2r_{i,j,k}^\ell
+2\left[v_{i,j,k+1}^{\ell-1}+v_{i,j,k-1}^{\ell-1}\right]
+\right.\nonumber\\&&\left.\quad{}
+v_{i+1,j+1,k}^{\ell-1} +v_{i+1,j-1,k}^{\ell-1}
+v_{i-1,j-1,k}^{\ell-1} +v_{i-1,j+1,k}^{\ell-1}
\right)\,,
\label{Erprolr}
\end{eqnarray}
for $(i,j,k)$ corresponding to the red nodes on the centre of
faces normal to the $z$-direction. Similar formulae apply for red
nodes on the other faces, obtained by cyclically permuting the roles
of the indices.
The over relaxation parameters $p_{r1}$ and $p_{r2}$ are
introduced for later fine tuning; initially take
$p_{r1}=p_{r2}=1$. The following magenta-Jacobi iteration uses
these updated values. Each magenta corner node in
Figure~\ref{Frmag} is connected to twelve red nodes and so is
updated according to
\begin{eqnarray}
v_{i,j,k}^{\ell}&=&\frac{1}{12}\left(
-4p_{r2}h^2r_{i,j,k}^\ell
+\right.\nonumber\\&&\left.\quad{}
+v_{i+1,j+1,k}^\ell +v_{i+1,j-1,k}^\ell
+v_{i-1,j-1,k}^\ell +v_{i-1,j+1,k}^\ell
+\right.\nonumber\\&&\left.\quad{}
+v_{i+1,j,k+1}^\ell +v_{i+1,j,k-1}^\ell
+v_{i-1,j,k-1}^\ell +v_{i-1,j,k+1}^\ell
+\right.\nonumber\\&&\left.\quad{}
+v_{i,j+1,k+1}^\ell +v_{i,j+1,k-1}^\ell
+v_{i,j-1,k-1}^\ell +v_{i,j-1,k+1}^\ell
\right)\,,
\label{Erprolm}
\end{eqnarray}
for all $(i,j,k)$ corresponding to corner magenta nodes.
The operation count for this red-magenta Jacobi iteration
is $9\,\mbox{flops}$ on each of $3N/8$ nodes and $14\,\mbox{flops}$ on each
of $N/8$ nodes. These total $5\frac{1}{8}N\,\mbox{flops}$.
\item Perform a green-red Jacobi iteration on the nodes of
the fine green grid shown in Figure~\ref{Fggr}. The green
grid is a standard rectangular grid so the Jacobi
iteration is also standard. Given initial approximations on
the red nodes from the coarser red grid,
\begin{eqnarray}
v_{i,j,k}^{\ell}&=&\frac{1}{6}\left( -p_gh^2r_{i,j,k}^\ell
+v_{i+1,j,k}^{\ell-1} +v_{i-1,j,k}^{\ell-1}
+v_{i,j+1,k}^{\ell-1} +v_{i,j-1,k}^{\ell-1}
+\right.\nonumber\\&&\left.\quad{}
+v_{i,j,k+1}^{\ell-1} +v_{i,j,k-1}^{\ell-1} \right)\,,
\label{Egprolg}
\end{eqnarray}
for $(i,j,k)$ corresponding to the green nodes (edges and centre
of the cube). The over relaxation parameter $p_g$, initially
$p_g=1$, is introduced for later fine tuning. The red-Jacobi
iteration uses these updated values in the similar formula
\begin{eqnarray}
v_{i,j,k}^{\ell}&=&\frac{1}{6}\left( -p_gh^2r_{i,j,k}^\ell
+v_{i+1,j,k}^{\ell} +v_{i-1,j,k}^{\ell}
+v_{i,j+1,k}^{\ell} +v_{i,j-1,k}^{\ell}
+\right.\nonumber\\&&\left.\quad{}
+v_{i,j,k+1}^{\ell} +v_{i,j,k-1}^{\ell} \right)\,,
\label{Egprolr}
\end{eqnarray}
for the red nodes in Figure~\ref{Fggr}. This prolongation
step is a standard Jacobi iteration and takes $8\,\mbox{flops}$ on
each of $N$ nodes for a total of $8N\,\mbox{flops}$.
\end{itemize}
These three prolongation steps together thus total
$15\frac{5}{8}N\,\mbox{flops}$. To prolongate over $\ell=3L$ levels
from the coarsest grid of spacing $H=2^Lh$ to the finest grid thus takes
\begin{equation}
K_p\approx\frac{125}{7}N\left(1-\frac{1}{8^L}\right)\,\mbox{flops} \approx
{\textstyle 17\frac{6}{7}}N\,\mbox{flops}\,.
\label{Ekprol3}
\end{equation}
The simplest trilinear interpolation direct from the blue grid to the
green grid would take approximately $3\frac{1}{4}N\,\mbox{flops}$, to be
followed by $8N\,\mbox{flops}$ for a Jacobi iteration on the fine green grid.
Over the whole hierarchy this standard prolongation takes
approximately $12\frac{6}{7}N\,\mbox{flops}$. This total is smaller, but the
proposed diagonal grid achieves virtually three Jacobi
iterations instead of one and so is more effective.
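As an illustration of the last of these steps, the green--red Jacobi
iteration \eqref{Egprolg}--\eqref{Egprolr} may be sketched as follows
(our code, not the paper's implementation; the colouring by parity of
$i+j+k$ is an assumed convention, and boundaries are omitted):
\begin{verbatim}
import numpy as np

def green_red_jacobi(v, r, h, p_g=1.0):
    # Green sweep first, from the coarser-level values (Egprolg),
    # then a red sweep using the updated green values (Egprolr).
    for colour in (1, 0):          # 1 = green, 0 = red (assumed)
        w = v.copy()
        for i in range(1, v.shape[0] - 1):
            for j in range(1, v.shape[1] - 1):
                for k in range(1, v.shape[2] - 1):
                    if (i + j + k) % 2 == colour:
                        v[i, j, k] = (-p_g * h**2 * r[i, j, k]
                            + w[i+1, j, k] + w[i-1, j, k]
                            + w[i, j+1, k] + w[i, j-1, k]
                            + w[i, j, k+1] + w[i, j, k-1]) / 6.0
    return v
\end{verbatim}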
\subsection{The V-cycle converges well}
Numerical experiments show that, as in 2D, although the operation
count of the proposed algorithm is a little higher, the speed of
convergence is much better. Both algorithms use $9N\,\mbox{flops}$ to compute
second-order accurate residuals on the finest grid. Thus the proposed
method takes approximately $35\frac{5}{7}N\,\mbox{flops}$ for one V-cycle,
some 37\% more than the $26\frac{1}{7}N\,\mbox{flops}$ of the simplest
standard algorithm. It achieves a mean factor of convergence
$\bar\rho\approx0.140$. This rapid rate of convergence easily
compensates for the small increase in computation: the proposed method
takes about half the number of flops per decimal digit of accuracy.
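The ``per dig'' entries of Table~\ref{tbl:3d} follow from this
arithmetic, since each V-cycle gains $-\log_{10}\bar\rho$ accurate
decimal digits; a quick check (our illustration):
\begin{verbatim}
import math

for name, flops, rho in [("diagonal", 35.7, 0.140),
                         ("usual   ", 26.1, 0.477)]:
    per_digit = flops / (-math.log10(rho))
    print(name, round(per_digit), "N flops per digit")
# prints: diagonal 42 N, usual 81 N, matching the table
\end{verbatim}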
\begin{table}[tbp]
\centering
\caption{comparison of cost, in flops, and performance for
unoptimised algorithms for solving Poisson's equation in three spatial
dimensions on a $17^3$ grid. The column headed ``per iter'' shows
the number of flops per iteration, whereas the column headed
``per dig'' is $-\,\mbox{flops}/\log_{10}\bar\rho_0$ and indicates the
number of flops needed to compute each decimal digit of accuracy.}
\begin{tabular}{|l|r|lr|}
\hline
algorithm & per iter & $\bar\rho_0$ & per dig \\
\hline
diagonal, $\Ord{h^2}$ & $35.7N$ & 0.140 & $42N$ \\
usual, $\Ord{h^2}$ & $26.1N$ & 0.477 & $81N$ \\
\hline
diagonal, $\Ord{h^4}$ & $48.7N$ & 0.659 & $269N$ \\
usual, $\Ord{h^4}$ & $39.1N$ & 0.651 & $210N$ \\
\hline
\end{tabular}
\label{tbl:3d}
\end{table}
As in 2D, fourth-order accurate solvers may be obtained simply by
using the above second-order accurate V-cycle on the fourth-order
accurate residuals evaluated on the finest grid. A compact
fourth-order accurate scheme for the residuals is the 19~point
formula
\begin{eqnarray}
r_{i,j,k}&=&\frac{1}{12}\left( 6f_{i,j,k}
+f_{i+1,j,k} +f_{i,j+1,k} +f_{i-1,j,k} +f_{i,j-1,k}
+f_{i,j,k+1}
+\right.\nonumber\\&&\quad\left.{}
+f_{i,j,k-1}
\right)
-\frac{1}{6h^2}\left[ -24 u_{i,j,k}
+2\left(u_{i,j-1,k} +u_{i,j+1,k} +u_{i-1,j,k}
+\right.\right.\nonumber\\&&\quad\left.\left.{}
+u_{i+1,j,k}
+u_{i,j,k+1} +u_{i,j,k-1} \right)
+u_{i+1,j+1,k} +u_{i-1,j+1,k}
+\right.\nonumber\\&&\quad\left.{}
+u_{i-1,j-1,k} +u_{i+1,j-1,k}
+u_{i,j+1,k+1} +u_{i,j+1,k-1} +u_{i,j-1,k-1}
+\right.\nonumber\\&&\quad\left.{}
+u_{i,j-1,k+1}
+u_{i+1,j,k+1} +u_{i-1,j,k+1} +u_{i-1,j,k-1} +u_{i+1,j,k-1}
\right]\,.
\label{Efos4}
\end{eqnarray}
Then using the V-cycle described above to determine corrections $v$ to
the field $u$ leads to an increase in the operation count of
$13N\,\mbox{flops}$ solely from the extra computation in finding the finest
residuals. Numerical experiments show that the multigrid iteration
still converges, albeit more slowly, with $\bar\rho\approx 0.659$.
Table~\ref{tbl:3d} shows that the rate of convergence on the diagonal
hierarchy of grids is little different than that for the simplest
usual multigrid algorithm. As in 2D, high-accuracy, fourth-order
solutions to Poisson's equation are best found by employing a first
stage that finds second-order accurate solutions, which are then
refined in a second stage.
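For reference, the 19-point formula \eqref{Efos4} transcribes directly
into code; the following sketch (our illustration, interior nodes only,
boundary treatment omitted) may help in checking the coefficients:
\begin{verbatim}
import numpy as np

def residual4(u, f, h):
    # Fourth-order accurate residuals via the compact 19-point
    # formula (Efos4).
    r = np.zeros_like(u)
    for i in range(1, u.shape[0] - 1):
        for j in range(1, u.shape[1] - 1):
            for k in range(1, u.shape[2] - 1):
                faces_f = (f[i+1,j,k] + f[i-1,j,k] + f[i,j+1,k]
                           + f[i,j-1,k] + f[i,j,k+1] + f[i,j,k-1])
                faces_u = (u[i+1,j,k] + u[i-1,j,k] + u[i,j+1,k]
                           + u[i,j-1,k] + u[i,j,k+1] + u[i,j,k-1])
                edges_u = (u[i+1,j+1,k] + u[i-1,j+1,k]
                           + u[i-1,j-1,k] + u[i+1,j-1,k]
                           + u[i,j+1,k+1] + u[i,j+1,k-1]
                           + u[i,j-1,k-1] + u[i,j-1,k+1]
                           + u[i+1,j,k+1] + u[i-1,j,k+1]
                           + u[i-1,j,k-1] + u[i+1,j,k-1])
                r[i,j,k] = ((6.0*f[i,j,k] + faces_f) / 12.0
                    - (-24.0*u[i,j,k] + 2.0*faces_u + edges_u)
                      / (6.0*h*h))
    return r
\end{verbatim}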
\subsection{Optimise parameters of the V-cycle}
\label{SSopt3}
As in 2D, the multigrid algorithms are improved by introducing some
relaxation in the Jacobi iterations. The four parameters $p_m$,
$p_{r1}$, $p_{r2}$ and $p_g$ were introduced in the Jacobi iterations
(\ref{Emprolm}--\ref{Egprolr}) to do this, values bigger than 1
correspond to some over relaxation.
\begin{table}[tbp]
\centering
\caption{comparison of cost, in flops, and performance for
optimised algorithms for solving Poisson's equation in three
spatial dimensions on a $17^3$ grid, varying the over relaxation
parameters to determine the best rate of convergence. The column
headed ``per iter'' shows the number of flops per iteration,
whereas the column headed ``per dig'' is
$-\,\mbox{flops}/\log_{10}\bar\rho$
and indicates the number of flops needed to compute each decimal
digit of accuracy.}
\begin{tabular}{|l|r|lllllr|}
\hline
algorithm & per iter & $p_{m}$ & $p_{r1}$
& $p_{r2}$ & $p_{g}$ & $\bar\rho$ &
per dig \\
\hline
diag, $\Ord{h^2}$ & $35.7N$ & 1.11 & 1.42 & 1.08 &
0.99 & 0.043 & $26N$ \\
usual, $\Ord{h^2}$ & $26.1N$ & & & & 1.30 & 0.31 &
$51N$ \\
\hline
diag, $\Ord{h^4}$ & $48.7N$ & 0.91 & 0.80 & 0.70 &
1.77 & 0.39 & $119N$ \\
usual, $\Ord{h^4}$ & $39.1N$ & & & & 1.70 & 0.41 &
$101N$ \\
\hline
\end{tabular}
\label{tbl:3dopt}
\end{table}
The search for the optimum parameter set used the Nelder-Mead simplex
method encoded in the procedure \textsc{fmins} in \matlab{}. Searches
were started from optimum parameters found for coarser grids. As
tabulated in Table~\ref{tbl:3dopt} the optimum parameters on a $17^3$
grid\footnote{Systematic searches on a finer grid were infeasible
within one day's computer time due to the large number of unknowns:
approximately 30,000 components occur in the eigenvectors on a $33^3$
grid.} were $p_m=1.11$, $p_{r1}=1.42$, $p_{r2}=1.08$ and $p_g=0.99$
and achieve an astonishingly fast rate of convergence of
$\bar\rho\approx 0.043$. This ensures convergence to a specified
precision at half the cost of the similarly optimised, simple
conventional multigrid algorithm.
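A modern analogue of this parameter search is sketched below (our
illustration, using \textsc{scipy}'s Nelder--Mead in place of
\matlab{}'s \textsc{fmins}; the objective is a placeholder stub,
since measuring $\bar\rho$ requires the full V-cycle code, and the
stub's minimiser is simply pinned to the values reported above):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def mean_convergence_factor(params):
    # Placeholder: a real objective would run several V-cycles
    # with relaxation parameters (p_m, p_r1, p_r2, p_g) and
    # return the measured mean factor rho-bar.
    p_m, p_r1, p_r2, p_g = params
    return (abs(p_m - 1.11) + abs(p_r1 - 1.42)
            + abs(p_r2 - 1.08) + abs(p_g - 0.99))  # stub only

res = minimize(mean_convergence_factor,
               x0=[1.0, 1.0, 1.0, 1.0], method="Nelder-Mead")
print(res.x)
\end{verbatim}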
For the fourth-order accurate residuals an optimised diagonal
multigrid performs similarly to the optimised conventional multigrid
with a rate of convergence of $\bar\rho\approx 0.39$. Again fourth
order accuracy is best obtained after an initial stage in which second
order accuracy is used.
\section{Conclusion}
The use of a hierarchy of grids at angles to each other can halve the
cost of solving Poisson's equation to second order accuracy in grid
spacing. Each iteration of the optimised \emph{simplest} multigrid
algorithm decreases errors by a factor of at least 20. This is true
in both two and three dimensional problems. Further research is
needed to investigate the effectiveness of extra Jacobi iterations at
each level of the diagonal grid.
When compared with the amazingly rapid convergence obtained for the
second order scheme, the rate of convergence when using the fourth
order residuals is relatively pedestrian. This suggests that a
multigrid V-cycle specifically tailored on these diagonal grids for
the fourth order accurate problem may improve convergence markedly.
There is more scope for W-cycles to be effective using these diagonal
grids because there are many more levels in the multigrid hierarchy.
An exploration of this aspect of the algorithm is also left for
further research.
\paragraph{Acknowledgement:} This research has been
supported by a grant from the Australian Research Council.
\bibliographystyle{plain}
\section{Introduction}
\label{sec:intro}
This paper is devoted to the mathematical analysis of the following
class of parabolic systems:
\begin{align}\label{CH1}
& u_t - \Delta w = 0,\\
\label{CH2}
& w = \delta \Delta^2 u - a(u) \Delta u - \frac{a'(u)}2 |\nabla u|^2
+ f(u) + \epsi u_t,
\end{align}
on $(0,T) \times \Omega$, $\Omega$ being a bounded
smooth subset of $\RR^3$ and $T>0$ an assigned final
time. The restriction to the three-dimensional setting is
motivated by physical applications. Similar, or better, results
are expected to hold in space dimensions 1 and 2.
The system is coupled with
the initial and boundary conditions
\begin{align}\label{iniz-intro}
& u|_{t=0} = u_0,
\quext{in }\,\Omega,\\
\label{neum-intro}
& \dn u = \dn w = \delta \dn \Delta u = 0,
\quext{on }\,\partial\Omega,\
\quext{for }\,t\in(0,T)
\end{align}
and represents a variant of the
Cahn-Hilliard model for phase separation in binary materials.
The function $f$ stands for the derivative
of a {\sl singular}\/ potential $F$ of a {\sl double obstacle}\
type. Namely, $F$ is assumed to be $+\infty$ outside a bounded interval
(assumed equal to $[-1,1]$ for simplicity), where the extrema
correspond to the pure states. A physically significant
example is given by the so-called
Flory-Huggins logarithmic potential
\begin{equation}\label{logpot}
F(r)=(1-r)\log(1-r)+(1+r)\log(1+r) - \frac\lambda2 r^2,
\quad \lambda\ge 0.
\end{equation}
As in this example, we will assume $F$ to be at least
{\sl $\lambda$-convex}, i.e., convex up to a quadratic
perturbation. In this way, we can also allow for singular
potentials having more than two minima in the interval $[-1,1]$
(as it happens in the case of the oil-water-surfactant models
described below, where the third minimum appears in relation
to the so-called ``microemulsion'' phase).
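For instance, for the potential \eqref{logpot} a direct computation
gives $F''(r)=\frac{2}{1-r^2}-\lambda\ge 2-\lambda$ on $(-1,1)$, so
that $F$ is indeed convex up to the quadratic perturbation
$\frac\lambda2 r^2$.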
We assume the coefficients $\delta,\epsi$ to be $\ge 0$, with the
case $\delta>0$ giving rise to a {\sl sixth order}\ model
and the case $\epsi>0$ related to possible viscosity effects
that are likely to appear in several models of Cahn-Hilliard type
(see, e.g., \cite{Gu}).
The investigation of the limits as $\delta$ or $\epsi$ tends to zero
provides validation of these models as approximations of the limit
fourth order model.
The main novelty of system \eqref{CH1}-\eqref{CH2}
is related to the presence of the
nonlinear function $a$ in \eqref{CH2}, which is supposed
smooth, bounded, and strongly positive (i.e.,
everywhere larger than some constant $\agiu>0$). Mathematically,
the latter is an unavoidable assumption as we are mainly interested
in the behavior of the problem as $\delta$ tends
to $0$ and in the properties of the (fourth order)
limit system $\delta=0$. On the other hand, at least in the
physical context of the sixth order model, it would also be
meaningful to allow $a$ to take negative values, as may
happen in the presence of the ``microemulsion'' phase
(see \cite{GK93a,GK93b}). We will not deal with this
situation, but we just point out that, as long as $\delta>0$
is fixed, this should create no additional mathematical
difficulties since the nonlinear diffusion
term is then dominated by the sixth order term.
From the analytical point of view,
as a basic observation we can notice that this class of systems
has an evident variational structure. Indeed, (formally)
testing \eqref{CH1} by $w$, \eqref{CH2} by $u_t$,
taking the difference of the obtained relations,
integrating with respect to~space variables,
using the {\sl no-flux}\/ conditions \eqref{neum-intro},
and performing suitable integrations
by parts, one readily gets the {\sl a priori}\/ bound
\begin{equation}\label{energyineq}
\ddt\calE\dd(u) + \| \nabla w \|_{L^2(\Omega)}^2
+ \epsi \| u_t \|_{L^2(\Omega)}^2
= 0,
\end{equation}
which has the form of an {\sl energy equality} for the
{\sl energy functional}
\begin{equation}\label{defiE}
\calE\dd(u)=\io \Big( \frac\delta2 |\Delta u|^2
+ \frac{a(u)}2 |\nabla u|^2
+ F(u) \Big),
\end{equation}
where the interface (gradient) part contains the nonlinear
function $a$. In other words, the system \eqref{CH1}-\eqref{CH2}
arises as the $(H^1)'$-gradient flow problem for the
functional $\calE\dd$.
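In more detail, the formal computation behind \eqref{energyineq} runs
as follows (a sketch; boundary terms vanish thanks to
\eqref{neum-intro}, and the identity for the gradient term is
rigorously justified in Lemma~\ref{lemma:ipp} below). Testing
\eqref{CH1} by $w$ gives
$\duav{u_t,w}+\|\nabla w\|_{L^2(\Omega)}^2=0$, while testing
\eqref{CH2} by $u_t$ gives
$\duav{u_t,w}=\ddt\calE\dd(u)+\epsi\|u_t\|_{L^2(\Omega)}^2$,
where one uses
\begin{equation*}
  \ddt\io\frac{a(u)}2|\nabla u|^2
  = \io\frac{a'(u)}2|\nabla u|^2\,u_t + \io a(u)\nabla u\cdot\nabla u_t
  = \io\Big(-a(u)\Delta u - \frac{a'(u)}2|\nabla u|^2\Big)u_t,
\end{equation*}
since $\nabla\cdot(a(u)\nabla u)=a(u)\Delta u+a'(u)|\nabla u|^2$.
Subtracting the two identities yields \eqref{energyineq}.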
While the literature on the fourth order Cahn-Hilliard model
with logarithmic free energy is very wide (starting from the
pioneering work \cite{DD} up to more recent works like,
e.g., \cite{AW,GMS,MZ}, see also the recent review \cite{ChMZ} and
the references therein),
it seems that potentials of logarithmic type have
never been considered in the case of a nonconstant
coefficient $a$.
Similarly, the sixth order Cahn-Hilliard type equations, which appear
as models of various physical phenomena and
have recently attracted notable interest in the
mathematical literature (see the discussion below),
seem not to have been studied so far in the case of
logarithmic potentials.
The sixth order system \eqref{CH1}-\eqref{CH2} arises as a model
of dynamics of ternary oil-water-surfactant mixtures in which three
phases occupying a region
$\Omega$ in $ \RR^3$, microemulsion, almost pure oil and almost pure water,
can coexist in equilibrium.
The phenomenological Landau-Ginzburg theory for such mixtures has been proposed
in a series of papers by Gompper et~al.~(see, e.g.,
\cite{GK93a,GK93b,GZ92} and other references in \cite{PZ11}).
This theory is based on the free energy functional
\eqref{defiE} with constant $\delta>0$ (in general, however,
this coefficient can depend on $u$, see \cite{SS93}), and with
$F(u)$, $a(u)$ approximated, respectively, by a sixth and
a second order polynomial:
\begin{equation}\label{apprFa}
F(u)= (u+1)^2 (u^2+h_0) (u-1)^2, \qquad
a(u) = g_0 + g_2 u^2,
\end{equation}
where the constant parameters $h_0,g_0,g_2$ are adjusted experimentally,
$g_2>0$ and $h_0$, $g_0$ are of arbitrary sign.
In this model, $u$ is the scalar, conserved order parameter
representing the local difference
between oil and water concentrations; $u=-1$, $u=1$, and $u=0$ correspond to oil-rich,
water-rich and microemulsion phases, respectively, and the parameter $h_0$ measures
the deviation from oil-water-microemulsion coexistence.
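For orientation, a small numerical check (our illustration; the value
of $h_0$ is hypothetical) confirms that for $0<h_0<1/2$ the sextic
potential in \eqref{apprFa} has three local minima, at $u=0,\pm1$,
the middle one corresponding to the microemulsion phase:
\begin{verbatim}
import numpy as np

h0 = 0.1   # hypothetical value of the experimental parameter

def F(u):
    return (u + 1)**2 * (u**2 + h0) * (u - 1)**2

# F'(u) = 2u (u^2 - 1)(3u^2 + 2*h0 - 1), so the critical points are
crit = [0.0, 1.0, -1.0,
        np.sqrt((1 - 2*h0) / 3), -np.sqrt((1 - 2*h0) / 3)]
for u in sorted(crit):
    eps = 1e-4   # second difference as a cheap curvature test
    curv = (F(u + eps) - 2*F(u) + F(u - eps)) / eps**2
    print(f"u = {u:+.4f}  F(u) = {F(u):.4f}  "
          + ("local min" if curv > 0 else "local max"))
\end{verbatim}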
The associated evolution system \eqref{CH1}-\eqref{CH2} has the standard
Cahn-Hilliard structure. Equation \eqref{CH1} expresses the conservation law
\begin{equation}\label{conslaw}
u_t + \nabla \cdot j = 0
\end{equation}
with the mass flux $j$ given by
\begin{equation}\label{mflux}
j = - M\nabla w.
\end{equation}
Here $M > 0$ is the constant mobility (we set $M=1$ for simplicity),
and $w$ is the chemical potential difference between the oil and water phases.
The chemical potential is defined by the constitutive equation
\begin{equation}\label{chpot}
w = \frac{\delta \calE\dd(u)}{\delta u} + \epsi u_t,
\end{equation}
where $ \frac{\delta \calE\dd(u)}{\delta u}$ is the first variation of the functional
$ \calE\dd(u) $, and the constant $ \epsi \geq 0$
represents possible viscous effects.
For energy \eqref{defiE} equation \eqref{chpot} yields \eqref{CH2}.
We note also that the boundary conditions $ \dn u = \delta \dn \Delta u = 0 $
are standard in the frame of sixth order Cahn-Hilliard
models due to their mathematical simplicity. Moreover,
they are related to the variational structure of the
problem in terms of the functional \eqref{defiE}.
However, other types of boundary conditions for $u$ might
be considered as well, paying the price of technical
complications in the proofs. Concerning, instead,
the condition $ \dn w = 0$, in view of \eqref{mflux},
it simply represents the mass isolation at the
boundary of $\Omega$.
The system \eqref{CH1}-\eqref{neum-intro}
with functions $ F(u), a(u)$ in the polynomial form \eqref{apprFa},
and with no viscous term
($\epsi=0$) has been recently
studied in~\cite{PZ11}. It has been proved there that for a sufficiently
smooth initial datum $u_0$ the system admits a unique global solution in the strong sense.
The sixth order Cahn-Hilliard type equation with the same structure as
\eqref{CH1}-\eqref{CH2}, $\delta > 0$, polynomial $F(u)$, and negative constant $a$,
arises also as the so-called phase field crystal (PFC)
atomistic model of crystal growth, developed by Elder et al.,
see e.g., \cite{EG04, BGE06, BEG08}, and \cite{GDL09} for an overview
and up-to-date references. It is also worth mentioning
a class of sixth order convective Cahn-Hilliard type equations
with different (nonconservative) structure than \eqref{CH1}-\eqref{CH2}.
This type of equation arises in particular as a model of
the faceting of a growing crystalline surface,
derived by Savina et al.~\cite{S03}
(for a review of other convective 4th and 6th order Cahn-Hilliard models see \cite{KEMW08}).
In this model, contrary to~\eqref{CH1}-\eqref{CH2}, the
order parameter $u$ is not a conserved quantity due to the
presence of a force-like term related to the deposition rate.
Such a class of models has recently been studied mathematically
in the one- and two-dimensional cases by Korzec et al.~\cite{KEMW08, KNR11, KR11}.
Finally, let us note that in the case $\delta=0$, $a(u)=\const>0$,
the functional \eqref{defiE}
represents the classical Cahn-Hilliard free energy \cite{Ca,CH}.
The original Cahn-Hilliard free energy derivation
has been extended by Lass et al. \cite{LJS06}
to account for composition dependence of the
gradient energy coefficient $a(u)$. For a face-centered cubic
crystal the following expressions for $a(u)$ have been derived,
depending on the level of approximation of the nearest-neighbor
interactions:
\begin{equation}\label{appra}
a(u) = a_0 + a_1 u + a_2 u^2,
\end{equation}
where $a_0>0$, $a_1,a_2\in\RR$ in the case of four-body
interactions, $a_2=0$ in the case of three-body
interactions, and $a_1=a_2=0$ in the case of pairwise
interactions.
Numerical experiments in \cite{LJS06} indicate that these
three different approximations (all reflecting the face-centered
cubic crystal symmetry) have a substantial effect on the shape
of the equilibrium composition profile and the
interfacial energy.
A specific free energy with composition dependent gradient
energy coefficient $a(u)$ also arises in the modelling of phase
separation in polymers \cite{dG80}. This energy, known
as the Flory-Huggins-de Gennes one, has the form \eqref{defiE}
with $\delta=0$, $F(u)$ being the logarithmic potential
\eqref{logpot}, and the singular coefficient
\begin{equation}\label{adG}
a(u) = \frac1{(1-u)(1+u)}.
\end{equation}
We mention also that various formulations of phase-field models
with gradient energy coefficient dependent on the order parameter
(and possibly on other fields) appear, e.g., in~\cite{Aif86,BS96}.
Our objective in this paper is threefold.
First, we would like to extend the result
of \cite{PZ11} both to the viscous problem
($\epsi>0$) and to the case
when the configuration potential is {\sl singular}\
(e.g., of the form \eqref{logpot}).
While the first extension is almost straightforward, considering
constraint (singular) terms in fourth order equations
(\eqref{CH2}, in the specific case) gives rise
to regularity problems since it is not possible, to
our knowledge, to estimate all the terms of equation~\eqref{CH2}
in $L^p$-spaces. For this reason, the nonlinear term $f(u)$
has to be interpreted in a weaker form, namely, as a selection
of a nonlinear, and possibly multivalued, mapping acting
from $V=H^1(\Omega)$ to $V'$. This involves some monotone
operator techniques that are developed in a specific section
of the paper.
As a second step, we investigate the behavior of the solutions
to the sixth order system as the parameter $\delta$ tends
to $0$. In particular, we would like to show that, at least up to
subsequences, we can obtain in the limit suitably defined
solutions to the fourth order system obtained setting $\delta = 0$
in \eqref{CH2}. Unfortunately, we are able to prove this fact
only under additional conditions. The reason is that
the natural estimate required to control second space derivatives
of $u$, i.e., testing \eqref{CH2} by $-\Delta u$, is compatible with
the nonlinear term in $\nabla u$ only under additional assumptions
on $a$ (e.g., if $a$ is concave). This nontrivial fact depends
on an integration by parts formula devised by Dal Passo,
Garcke and Gr\"un in \cite{DpGG} in the frame
of the thin-film equation and whose use is necessary to control
the nonlinear gradient term. It is however likely that the use of more
refined integration by parts techniques may permit control of
the nonlinear gradient term under more general conditions on $a$.
Since we are able to take the limit $\delta\searrow0$ only in special
cases, in the subsequent part of the paper we address
the fourth order problem by using a direct approach.
In this way, we can obtain existence of a weak solution
under general conditions on $a$ (we notice that, however,
uniqueness is no longer guaranteed for $\delta=0$).
The proof of existence is based on an ``ad hoc''
regularization of the equations by means of a system of
phase-field type. This kind of approach has been proved
to be effective also in the frame of other types
of Cahn-Hilliard equations (see, e.g., \cite{BaPa05}).
Local existence for the regularized system
is then shown by means of the Schauder theorem, and, finally,
the regularization is removed by means of suitable a priori estimates
and compactness methods. This procedure involves
some technicalities since parabolic spaces of H\"older type
have to be used for the fixed point argument.
Indeed, the use of Sobolev techniques seems unsuitable due to
the nonlinearity in the highest order term, which prevents
compactness of the fixed point map with respect to
Sobolev norms. A further difficulty is related to the
necessity of estimating the second order space derivatives
of $u$ in presence of the nonlinear term in the gradient. This is
obtained by introducing a proper transformed variable,
and rewriting \eqref{CH2} in terms of it. Proceeding in
this way, we can get rid of that nonlinearity, but
at the same time, we can still
exploit the good monotonicity properties of $f$.
We note here that a different method, based on entropy estimates,
could also be used to estimate $\Delta u$ without making the change
of variable; the change of variable, however, seems the simpler
technique.
Finally, in the last section of the paper, we
discuss further properties
of weak solutions. More precisely, we address the problems
of uniqueness (only for the 4th order system,
since in the case $\delta>0$ it is always guaranteed)
and of parabolic time-regularization of solutions
(both for the 6th and for the 4th order system).
We are able to prove such properties only when the
energy functional $\calE\dd$ is $\lambda$-convex
(so that its gradient is monotone up to a linear
perturbation). In terms of the coefficient $a$,
this corresponds to asking that $a$ is a {\sl convex}\/
function and, moreover, $1/a$ is {\sl concave}\/
(cf.~\cite{DNS} for generalizations and further comments
regarding this condition). If these conditions
fail, then the gradient of the energy functional exhibits
a nonmonotone structure in terms of the space derivatives
of the highest order. For this reason, proving an
estimate of contractive type (which would be required for
having uniqueness) appears to be difficult in that case.
As a final result, we will show that, both in the
6th and in the {\sl viscous}\/ 4th order case,
all weak solutions satisfy the energy {\sl equality}\/
\eqref{energyineq}, at least in an integrated form
(and not just an energy inequality).
This property is the starting point for proving
existence of the global attractor for
the dynamical process associated
to system \eqref{CH1}-\eqref{CH2}, an issue that
we intend to investigate in a forthcoming paper.
Actually, it is not difficult to show that
the set of initial data having finite energy
constitutes a complete metric space
(see, e.g., \cite[Lemma~3.8]{RS}) which can
be used as a {\sl phase space}\/ for the system.
Then, by applying the so-called ``energy method''
(cf., e.g., \cite{MRW,Ba1}), one can see that
the energy equality implies precompactness
of trajectories for $t\nearrow\infty$ with
respect to the metric of the phase space. In
turn, this gives existence of the global attractor
with respect to the same metric.
On the other hand, the question whether the energy equality
holds in the nonviscous 4th order case seems
to be more delicate, and, actually, we could not
give a positive answer to it.
It is also worth noticing an important issue concerning
the sharp interface limit of the Cahn-Hilliard equation
with a nonlinear gradient energy coefficient $a(u)$.
To our knowledge this issue has not been addressed so far in
the literature. Let us mention that using the method of matched
asymptotic expansions the sharp interface limits of the
Cahn-Hilliard equation with constant coefficient $a$ have been
investigated by Pego \cite{Peg89} and rigorously by Alikakos et al.
\cite {ABC94}.
Such method has been also successfully applied to a number of
phase field models of phase transition problems, see e.g., \cite{CF88},
\cite{C90}.
In view of the various physical applications described above, it would
be of interest to apply the method of matched asymptotic expansions in
the case of a nonlinear coefficient $a(u)$ to investigate what kind of
corrections it may introduce to the conditions on the sharp interface.
The plan of the paper is as follows.
In the next Section~\ref{sec:main},
we will report our notation and hypotheses, together with
some general tools that will be used in the proofs.
Section~\ref{sec:6th} will contain
the analysis of the sixth order model. The limit
$\delta\searrow 0$ will then be analyzed in
Section~\ref{sec:6thto4th}.
Section~\ref{sec:4th} will be devoted to the analysis
of the fourth order model. Finally, in Section~\ref{sec:uniq}
uniqueness and regularization properties of the solutions
will be discussed, as well as the validity of the
energy equality.
\medskip
\noinden
{\bf Acknowledgment.}~~The authors are grateful to Prof.~Giuseppe
Savar\'e for fruitful discussions about the strategy of some proofs.
\section{Notations and technical tools}
\label{sec:main}
Let $\Omega$ be a smooth bounded domain of $\RR^3$
of boundary $\Gamma$, $T>0$ a given final time, and
let $Q:=(0,T)\times\Omega$. We let $H:=L^2(\Omega)$,
endowed with the standard scalar product $(\cdot,\cdot)$
and norm $\| \cdot \|$.
For $s>0$ and $p\in[1,\infty]$, we
use the notation $W^{s,p}(\Omega)$ to indicate
Sobolev spaces of positive (possibly fractional) order.
We also set $H^s(\Omega):=W^{s,2}(\Omega)$ and
let $V:=H^1(\Omega)$.
We note by $\duav{\cdot,\cdot}$ the duality between
$V'$ and $V$ and by $\|\cdot\|_X$ the norm in
the generic Banach space $X$.
We identify $H$ with
$H'$ in such a way that $H$ can be seen as a subspace of $V'$
or, in other words, $(V,H,V')$ form a Hilbert triplet.
We make the following assumptions on the nonlinear
terms in \eqref{CH1}-\eqref{CH2}:
\begin{align}\label{hpa1}
& a \in C^2_b(\RR;\RR), \quad \esiste \agiu,\asu>0:~~
\agiu \le a(r)\le \asu~~\perogni r\in \RR;\\
\label{hpa2}
& \esiste a_-,a_+\in [\agiu,\asu]:~~
a(r)\equiv a_-~~\perogni r\le-2, \quad
a(r)\equiv a_+~~\perogni r\ge 2;\\
\label{hpf1}
& f\in C^1((-1,1);\RR), \quad f(0)=0, \quad
\esiste\lambda\ge 0:~~f'(r)\ge -\lambda~~\perogni r\in (-1,1);\\
\label{hpf2}
& \lim_{|r|\to 1}f(r)r = \lim_{|r|\to 1}\frac{f'(r)}{|f(r)|}
= + \infty.
\end{align}
The latter condition in \eqref{hpf2} is just a technical hypothesis
which is actually verified in all significant cases.
We also notice that, due to the choice of a singular potential
(mathematically represented here by assumptions \eqref{hpf1}-\eqref{hpf2}),
any weak solution $u$ will take its values only in the physical
interval $[-1,1]$. For this reason, the behavior of
$a$ is also significant only in that interval and we have extended it
outside $[-1,1]$ just for the purpose of properly constructing
the approximating problem (see Subsection~\ref{subsec:appr} below).
Note that our assumptions on $a$ are not in conflict with
\eqref{apprFa} or \eqref{appra} since these conditions
(or, more generally, any condition on the values of $a(u)$
for large $u$) make sense in the different situation of a
function $f$ with polynomial growth (which does not constrain
$u$ in the interval $(-1,1)$).
It should be pointed out, however, that the assumptions \eqref{hpa1}-\eqref{hpf2}
do not admit the singular Flory-Huggins-de Gennes free energy model
with $a(u)$ given by \eqref{adG}. We expect that the analysis of such
a singular model could require different techniques.
In \eqref{hpa1}, $C^2_b$ denotes
the space of functions that are continuous and globally
bounded together with their derivatives up to the second order.
Concerning $f$, \eqref{hpf1} states that it can be written in the form
\begin{equation}\label{f0}
f(r)=f_0(r)-\lambda r,
\end{equation}
i.e., as the difference between a (dominating) monotone part $f_0$ and
a linear perturbation. By \eqref{hpf1}-\eqref{hpf2}, we can also set,
for $r\in(-1,1)$,
\begin{equation}\label{F0}
F_0(r):=\int_0^r f_0(s)\,\dis \qquext{and~}\,
F(r):=F_0(r)-\frac\lambda2 r^2,
\end{equation}
so that $F'=f$.
Notice that $F_0$ may be bounded in $(-1,1)$ (e.g., this occurs
in the case of the logarithmic potential \eqref{logpot}). If this
is the case, we extend it by continuity to $[-1,1]$. Then,
$F_0$ is set to be $+\infty$ either outside $(-1,1)$ (if it is
unbounded in $(-1,1)$) or outside $[-1,1]$ (if it is bounded
in $(-1,1)$). This standard procedure makes it possible to penalize the
non-physical values of the variable $u$ and to interpret $f_0$ as the
subdifferential of the (extended)
convex function $F_0:\RR\to[0,+\infty]$.
That said, we define a number of operators. First, we set
\begin{equation}\label{defiA}
A:V\to V', \qquad
\duav{A v, z}:= \io \nabla v \cdot \nabla z,
\quext{for }\, v,z \in V.
\end{equation}
Then, we define
\begin{equation}\label{defiW}
W:=\big\{z\in H^2(\Omega):~\dn z=0~\text{on }\Gamma\big\}
\end{equation}
and recall that (a suitable restriction of) $A$ can be seen
as an unbounded linear operator on $H$
having domain $W$. The space
$W$ is endowed with the natural $H^2$-norm. We then
introduce
\begin{equation}\label{deficalA}
\calA: W \to H, \qquad
\calA(z) := - a(z)\Delta z - \frac{a'(z)}2 |\nabla z|^2.
\end{equation}
It is a standard matter to check that, indeed, $\calA$ takes its
values in~$H$.
\subsection{Weak subdifferential operators}
\label{sec:weak}
To state the weak formulation of the 6th order system,
we need to introduce a proper
relaxed form of the maximal monotone
operator associated to the function $f_0$ and
acting in the duality between $V'$ and $V$ (rather than
in the scalar product of $H$). Actually, it is well known
(see, e.g., \cite[Ex.~2.1.3, p.~21]{Br})
that $f_0$ can be interpreted as a maximal monotone operator
on~$H$ by setting, for $v,\xi\in H$,
\begin{equation} \label{betaL2}
\xi = f_0(v)\quext{in $H$}~~~\Longleftrightarrow~~~
\xi(x) = f_0(v(x))\quext{a.e.~in $\Omega$}.
\end{equation}
If no danger of confusion occurs,
the new operator on $H$ will still be denoted by the letter $f_0$.
Correspondingly, $f_0$ is the $H$-subdifferential of the
convex functional
\begin{equation} \label{betaL2-2}
\calF_0:H\mapsto[0,+\infty], \qquad
\calF_0(v):= \io F_0(v(x)),
\end{equation}
where the integral might possibly be $+\infty$ (this happens,
e.g., when $|v|>1$ on a set of strictly positive Lebesgue measure).
The weak form of $f_0$ can be introduced by setting
\begin{equation} \label{betaV}
\xi\in \fzw(v) \Longleftrightarrow
\duav{\xi,z-v}\le \calF_0(z)-\calF_0(v)
\quext{for any $z\in V$}.
\end{equation}
Actually, this is nothing else than the
definition of the subdifferential
of (the restriction to $V$ of) $\calF_0$
with respect to the duality pairing
between $V'$ and $V$. In general, $\fzw$ can be a
{\sl multivalued}\/ operator; namely,
$\fzw(v)$ is a {\sl subset}\/ of $V'$ that
may contain more than one element.
It is not difficult to prove
(see, e.g., \cite[Prop.~2.5]{BCGG}) that, if $v\in V$
and $f_0(v)\in H$, then
\begin{equation} \label{betavsbetaw}
\{f_0(v)\}\subset\fzw(v).
\end{equation}
Moreover,
\begin{equation} \label{betavsbetaw2}
\text{if }\,v\in V~\,\text{and }\,
\xi \in \fzw(v) \cap H,
\quext{then }\,\xi = f_0(v)
~\,\text{a.e.~in }\,\Omega.
\end{equation}
In general, the inclusion in \eqref{betavsbetaw}
is strict and, for instance, it can happen that
$f_0(v)\not\in H$ (i.e., $v$ does not belong
to the $H$-domain of $f_0$),
while $\fzw(v)$ is nonempty. Nevertheless,
we still have some ``automatic'' gain of
regularity for any element of $\fzw(v)$:
\bepr\label{misura}
Let $v\in V$, $\xi\in\fzw(v)$. Then,
$\xi$ can be seen as an element of the
space ${\cal M}({\overline \Omega})=C^0(\barO)'$
of the bounded real-valued Borel measures on $\overline \Omega$.
More precisely, there exists $T\in {\cal M}({\overline \Omega})$,
such that
\begin{equation} \label{identif}
\duav{\xi,z}=\ibaro z\,\diT
\qquext{for any~\,$z\in V\cap C^0(\overline\Omega)$}.
\end{equation}
\empr
\begin{proof}
Let $z\in C^0(\overline \Omega)\cap V$ with $z\not = 0$.
Then, using definition \eqref{betaV},
it is easy to see that
\begin{align} \label{prova-meas}
\duav{\xi,z} & = 2\| z \|_{L^\infty(\Omega)} \duavg{\xi,\frac{z}{2\| z \|_{L^\infty(\Omega)}}}
\le 2\| z \|_{L^\infty(\Omega)} \bigg(
\duav{\xi,v}+\calF_0\Big( \frac{z}{2\| z \|_{L^\infty(\Omega)}} \Big)-\calF_0(v) \bigg)\\
& \le 2\| z \|_{L^\infty(\Omega)} \Big( |\duav{\xi,v}|+|\Omega|\big(F_0(-1/2)+F_0(1/2)\big) \Big).
\end{align}
This actually shows that the linear
functional $z\mapsto\duav{\xi,z}$
defined on $C^0(\overline \Omega)\cap V$ (that is a dense subspace
of $C^0(\overline \Omega)$, recall that $\Omega$ is smooth)
is continuous with respect to the sup-norm. Thus,
by the Riesz representation theorem, it can
be represented over $C^0(\barO)$
by a measure $T\in {\cal M}({\overline \Omega})$.
\end{proof}
\noinden
Actually, we can give a general definition, saying that a functional
$\xi\in V'$ belongs to the space
$V'\cap {\cal M}(\overline \Omega)$ provided that
$\xi$ is continuous
with respect to the sup-norm on $\overline \Omega$.
In this case, we can use \eqref{identif} and say
that the measure $T$ represents $\xi$ on
${\cal M}(\overline \Omega)$.
We now recall a result \cite[Thm.~3]{brezisart}
that will be exploited in the sequel.
\bete\label{teobrezis}
Let $v\in V$, $\xi\in \fzw(v)$. Then,
denoting by $\xi_a+\xi_s=\xi$ the Lebesgue decomposition
of $\xi$, with $\xi_a$ ($\xi_s$) standing for
the absolutely continuous (respectively, singular)
part of $\xi$, we have
\begin{align}\label{bre1}
& \xi_a v\in L^1(\Omega),\\
\label{bre2}
& \xi_a(x) = f_0(v(x)) \qquext{for a.e.~$x\in\Omega$,}\\
\label{bre3}
& \duav{\xi,v} - \io \xi_a v\,\dix
= \sup \bigg\{\ibaro z\,\dixi_s,~z\in C^0(\overline\Omega),~
z(\overline\Omega)\subset[-1,1] \bigg\}.
\end{align}
\ente
\noinden
Actually, in \cite{brezisart} a slightly different result is proved,
where $V$ is replaced by $H^1_0(\Omega)$ and, correspondingly,
${\cal M}(\overline \Omega)$ is replaced by ${\cal M}(\Omega)$
(i.e., the dual of $C_c^0(\Omega)$). Nevertheless, thanks to the
smoothness of $\Omega$, one can easily realize that the approximation
procedure used in the proof of the theorem can be extended to
cover the present situation. The only difference is given by
the fact that the singular part $\xi_s$ may be supported
also on the boundary.
\smallskip
Let us now recall that, given a pair $X,Y$ of Banach spaces,
a sequence of (multivalued)
operators ${\cal T}_n:X\to 2^Y$ is said to
G-converge (strongly) to ${\cal T}$ iff
\begin{equation}\label{defGconv}
\perogni (x,y)\in {\cal T}, \quad \esiste
(x_n,y_n)\in {\cal T}_n \quext{such that \,
$(x_n,y_n)\to(x,y)$~~strongly in }\, X\times Y.
\end{equation}
We would like to apply this condition to an approximation
of the monotone function $f_0$ that we now construct.
Namely, for $\sigma\in(0,1)$ (intended
to go to $0$ in the limit), we would like to
have a family $\{f\ssi\}$ of monotone functions such that
\begin{align}\label{defifsigma}
& f\ssi\in C^{1}(\RR), \qquad
f\ssi'\in L^{\infty}(\RR), \qquad
f\ssi(0)=0, \\
\label{convcomp}
& f\ssi\to f_0 \quext{uniformly on compact
subsets of }\,(-1,1).
\end{align}
Moreover, setting
\begin{equation}\label{defiFsigma}
F\ssi(r):=\int_0^r
f\ssi(s)\,\dis,
\quext{for }\,r\in\RR,
\end{equation}
we ask that
\begin{equation}\label{propFsigma}
F\ssi(r) \ge \lambda r^2 - c,
\end{equation}
for some $c\ge 0$ independent of $\sigma$ and for
all $r\in\RR$, $\sigma\in (0,1)$,
where $\lambda$ is as in
\eqref{hpf1} (note that the analogue
of the above property holds for $F$ thanks to
the first of \eqref{hpf2}).
Moreover, we ask the monotonicity
condition
\begin{equation}\label{defifsigma2}
F_{\sigma_1}(r)\le F_{\sigma_2}(r)
\qquext{if }\,\sigma_2\le \sigma_1
\quext{and for all }\, r\in \RR.
\end{equation}
Finally, on account of the last assumption
\eqref{hpf2}, we require that
\begin{equation}\label{goodmono}
\perogni m>0,~~\esiste C_m\ge 0:~~~
f\ssi'(r) - m |f\ssi(r)| \ge - C_m,
\quad \perogni r\in[-2,2]
\end{equation}
with $C_m$ being independent of $\sigma$.
Notice that it is sufficient to ask
the above property for $r\in [-2,2]$.
The details of the construction of a family $\{f\ssi\}$
fulfilling \eqref{defifsigma}-\eqref{goodmono} are
standard and hence we leave them to the reader.
For instance, one can first take Yosida regularizations
(see, e.g., \cite[Chap.~2]{Br}) and then mollify in order
to get additional smoothness.
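For concreteness, the Yosida step alone may be sketched numerically as
follows (our illustration for the logarithmic choice
$f_0(r)=\log\frac{1+r}{1-r}$; the subsequent mollification producing
$C^1$ regularity is omitted):
\begin{verbatim}
import math

def f0(r):
    # monotone part of the logarithmic potential (example choice)
    return math.log((1 + r) / (1 - r))

def f_sigma(r, sigma, tol=1e-12):
    # Yosida regularization: f_sigma(r) = (r - J)/sigma, where J
    # solves J + sigma*f0(J) = r; solved by bisection, since the
    # left-hand side is strictly increasing on (-1, 1).
    lo, hi = -1 + 1e-15, 1 - 1e-15
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid + sigma * f0(mid) < r:
            lo = mid
        else:
            hi = mid
    J = 0.5 * (lo + hi)
    return (r - J) / sigma

for sigma in (0.5, 0.1, 0.01):
    print(sigma, f_sigma(0.9, sigma))  # tends to f0(0.9) = 2.944...
\end{verbatim}
As expected from \eqref{convcomp}, the values approach $f_0(0.9)$ as
$\sigma\searrow0$, while each $f_\sigma$ is Lipschitz continuous (with
constant $1/\sigma$) on the whole real line.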
Thanks to the monotonicity property \eqref{defifsigma2},
we can apply \cite[Thm.~3.20]{At}, which gives that
\begin{align}\label{Gforte}
& f\ssi\quext{G-converges to }\,f_0
\quext{in \,$H\times H$},\\
\label{Gdebole}
& f\ssi\quext{G-converges to }\,\fzw
\quext{in \,$V\times V'$}.
\end{align}
A notable consequence of G-convergence is
the following property, whose proof can be obtained by
slightly modifying \cite[Prop.~1.1, p.~42]{barbu}:
\bele\label{limimono}
Let $X$ be a Hilbert space, ${\cal B}\ssi$, ${\cal B}$ be
maximal monotone operators in $X\times X'$ such that
\begin{equation}\label{Gastr}
{\cal B}\ssi\quext{G-converges to }\, {\cal B}
\quext{in }\,X\times X',
\end{equation}
as $\sigma\searrow0$. Let also, for any $\sigma>0$, $v\ssi\in X$ and
$\xi\ssi\in X'$ be such that $\xi\ssi\in{\cal B}\ssi(v\ssi)$.
Finally, let us assume that, for some $v\in X$,
$\xi\in X'$, there holds
\begin{align}\label{Gastrnew}
& v\ssi\to v\quext{weakly in }\, X, \qquad
\xi\ssi\to \xi\quext{weakly in }\,X',\\[1mm]
\label{Gastrnew-2}
& \limsup_{\sigma\searrow0} \duavg{\xi\ssi,v\ssi}_X
\le \duavg{\xi,v}_X.
\end{align}
Then, $\xi\in {\cal B}(v)$.
\enle
\noinden
Next, we present an integration by parts formula:
\bele\label{BSesteso}
Let $u\in W\cap H^3(\Omega)$, $\xi\in V'$ such that
$\xi\in \fzw(u)$. Then, we have that
\begin{equation}\label{majozero}
\duav{\xi,Au}\geq 0.
\end{equation}
\enle
\begin{proof}
Let us first note that the duality above surely makes
sense in the assigned regularity setting. Actually,
we have that $Au\in V$. We then consider the elliptic problem
\begin{equation}\label{elpromon}
u\ssi\in V, \qquad
u\ssi + A^2 u\ssi + f\ssi (u\ssi)
= u + A^2 u + \xi \text{~~~~in \,$V'$.}
\end{equation}
Since $f\ssi$ is monotone and Lipschitz continuous
and the above \rhs\ lies in $V'$,
it is not difficult to show that the above problem
admits a unique solution $u\ssi \in W \cap H^3(\Omega)$.
Moreover, the standard a priori estimates for $u\ssi$
lead to the following convergence relations, which hold,
for some $v\in V$ and $\zeta\in V'$,
up to the extraction of (non-relabelled)
subsequences (in fact uniqueness
guarantees them along the whole family $\sigma\searrow0$):
\begin{align}\label{stlemma11}
& u\ssi\longrightarrow v \quext{weakly in }\, H^3(\Omega)~~
\text{and strongly in }\,W,\\
\label{stlemma11.2}
& A^2 u\ssi\longrightarrow A^2 v \quext{weakly in }\, V',\\
\label{stlemma11.3}
& f\ssi(u\ssi)\longrightarrow \zeta \quext{weakly in }\, V'.
\end{align}
As a byproduct, the limit functions satisfy
$ v+A^2v+\zeta=u+A^2u+\xi$ in~$V'$. Moreover, we
deduce from (\ref{elpromon})
\begin{equation}\label{contolemma11}
\big(f\ssi(u\ssi),u\ssi\big)
= \duavg{ u + A^2 u + \xi - u\ssi - A^2u\ssi, u\ssi},
\end{equation}
whence
\begin{equation}\label{old11}
\lim_{\sigma \rightarrow 0} \,\big(f\ssi(u\ssi),u\ssi\big)
= \duavg{u + A^2 u + \xi - v - A^2 v, v}
= \duav{\zeta,v}.
\end{equation}
Then, on account of
\eqref{stlemma11}, \eqref{stlemma11.3},
and Lemma~\ref{limimono} (cf., in particular,
relation \eqref{Gdebole}) applied to the sequence
$\{f\ssi(u\ssi)\}$, we readily obtain that
$\zeta\in \fzw(v)$. By uniqueness, $v=u$ and $\zeta=\xi$.
Let us finally verify the required property.
Actually, for $\sigma>0$, thanks to monotonicity
of $f\ssi$ we have
\begin{equation}\label{contolemma12}
0 \leq \big(f\ssi(u\ssi), A u\ssi \big)
= \duavg{ u + A^2 u+\xi-u\ssi-A^2u\ssi, A u\ssi}.
\end{equation}
Taking the supremum limit, we then obtain
\begin{equation}\label{contolemma12b}
0 \leq \limsup_{\sigma\searrow 0} \duavg{ u + A^2 u+\xi-u\ssi-A^2u\ssi, A u\ssi}
= \duavg{ u + A^2 u+ \xi - u, Au} - \liminf_{\sigma\searrow 0} \duavg{A^2u\ssi, A u\ssi}.
\end{equation}
Then, using \eqref{stlemma11} and semicontinuity of norms
with respect to weak convergence,
\begin{equation}\label{contolemma12c}
- \liminf_{\sigma\searrow 0} \duavg{A^2u\ssi, A u\ssi}
= - \liminf_{\sigma\searrow 0} \| \nabla A u\ssi \|^2
\le - \| \nabla A u \|^2
= - \duavg{A^2 u , A u},
\end{equation}
whence we finally obtain
\begin{equation}\label{old12}
0 \le \duav{u+A^2u+\xi-u-A^2u,Au}
= \duav{\xi,Au},
\end{equation}
as desired.
\end{proof}
\noinden
Next, we recall a further integration by parts formula
that extends the classical result
\cite[Lemma~3.3, p.~73]{Br}
(see, e.g., \cite[Lemma~4.1]{RS} for a proof):
\bele\label{BResteso}
Let $T>0$ and let $\calJ:H\to [0,+\infty]$ be a convex,
lower semicontinuous and proper functional. Let
$u\in \HUVp \cap \LDV$, $\eta\in \LDV$ and let
$\eta(t)\in \de\calJ(u(t))$ for a.e.~$t\in(0,T)$,
where $\de\calJ$
is the $H$-subdifferential of $\calJ$. Moreover, let us
suppose the coercivity property
\begin{equation}\label{coerccalJ}
\esiste k_1>0,~k_2\ge 0
\quext{such that }\,\calJ(v) \ge k_1 \| v \|^2 - k_2
\quad\perogni v\in H.
\end{equation}
Then, the function $t\mapsto \calJ(u(t))$ is
absolutely continuous in $[0,T]$ and
\begin{equation}\label{ipepardiff}
\ddt \calJ(u(t)) = \duav{u_t(t),\eta(t)}
\quext{for a.e.~}\, t\in (0,T).
\end{equation}
In particular, integrating in time, we have
\begin{equation}\label{ipepars}
\int_s^t \duav{u_t(r),\eta(r)}\,\dir
= \calJ(u(t)) - \calJ(u(s))
\quad\perogni s,t\in [0,T].
\end{equation}
\enle
\noinden
We conclude this section by stating an integration by parts
formula for the operator $\calA$.
\bele\label{lemma:ipp}
Let $a$ satisfy \eqref{hpa1} and let either
\begin{equation} \label{x11}
v \in \HUH \cap \LDW \cap L^\infty(Q),
\end{equation}
or
\begin{equation} \label{x12}
v \in \HUVp \cap \LIW \cap L^2(0,T;H^3(\Omega)).
\end{equation}
Then, the function
\begin{equation} \label{x13}
t\mapsto \io \frac{a(v(t))}2 | \nabla v(t) |^2
\end{equation}
is absolutely continuous over $[0,T]$. Moreover,
for all $s,t\in [0,T]$ we have that
\begin{equation} \label{x14}
\int_s^t \big(\calA(v(r)),v_t(r)\big)\,\dir
= \io \frac{a(v(t))}2 |\nabla v(t)|^2
- \io \frac{a(v(s))}2 |\nabla v(s)|^2,
\end{equation}
where, in the case \eqref{x12}, the scalar product
in the integral on
the \lhs\ has to be replaced with the duality
$\duav{v_t(r),\calA(v(r))}$.
\enle
\begin{proof}
We first notice that \eqref{x13}-\eqref{x14}
surely hold if $v$ is smoother. Then, we can proceed
by first regularizing $v$ and then passing to the
limit. Namely, we define $v\ssi$, a.e.~in~$(0,T)$,
as the solution of the singular perturbation problem
\begin{equation} \label{co93}
v\ssi + \sigma A v\ssi = v,
\quext{for }\, \sigma\in (0,1).
\end{equation}
Then, in the case \eqref{x11}, we have
\begin{equation} \label{co93-b}
v\ssi \in H^1(0,T;W) \cap L^2(0,T;H^4(\Omega)),
\end{equation}
whereas, if \eqref{x12} holds, we get
\begin{equation} \label{co93-c}
v\ssi \in H^1(0,T;V) \cap L^\infty(0,T;H^4(\Omega)).
\end{equation}
Moreover, proceeding as in \cite[Appendix]{CGG}
(cf., in particular, Proposition~6.1 therein)
and applying the Lebesgue dominated convergence
theorem in order to control the dependence on the time variable,
we can easily prove that
\begin{equation} \label{y11}
v\ssi \to v \quext{strongly in }\,\HUH \cap \LDW
~~\text{and weakly star in }\, L^\infty(Q)
\end{equation}
(the latter condition following from the maximum principle),
if \eqref{x11} holds, or
\begin{equation} \label{y12}
v\ssi \to v \quext{strongly in }\,\HUVp \cap L^2(0,T;H^3(\Omega))
~~\text{and weakly star in }\, \LIW,
\end{equation}
if \eqref{x12} is satisfied instead.
Now, the functions $v\ssi$, being smooth,
surely satisfy the analogue of \eqref{x14}:
\begin{equation} \label{x14ssi}
\int_s^t \big(\calA(v\ssi(r)),v_{\sigma,t}(r)\big)\,\dir
= \io \frac{a(v\ssi(t))}2 |\nabla v\ssi(t)|^2
- \io \frac{a(v\ssi(s))}2 |\nabla v\ssi(s)|^2,
\end{equation}
for all $s,t\in[0,T]$.
Let us prove that we can take the limit $\sigma\searrow0$,
considering first the case \eqref{x11}. Then, using \eqref{y11}
and standard compactness results, it is not difficult
to check that
\begin{equation} \label{co93-b2}
\calA(v\ssi) \to \calA(v), \quext{(at least) weakly in }\,L^2(0,T;H).
\end{equation}
In particular, to control the square gradient term in $\calA$,
we use the Gagliardo-Nirenberg inequality (cf.~\cite{Ni})
\begin{equation}\label{ineq:gn}
\| \nabla z \|_{L^4(\Omega)} \le c\OO \| z \|_{W}^{1/2}
\| z \|_{L^\infty(\Omega)}^{1/2}
+ \| z \| \qquad \perogni z \in W,
\end{equation}
so that, thanks also to \eqref{hpa1},
\begin{equation}\label{conseq:gn}
\big\| a'(v\ssi) |\nabla v\ssi|^2 \big\|_{\LDH}
\le \| a'(v\ssi) \|_{L^\infty(Q)} \| \nabla v\ssi \|_{L^4(Q)}^2
\le c \| v\ssi \|_{L^\infty(Q)} \big( 1 +
\| A v\ssi \|_{\LDH} \big),
\end{equation}
and \eqref{co93-b2} follows.
Moreover, by \eqref{y11} and the continuous embedding
$H^1(0,T;H) \cap L^2(0,T;W) \subset C^0([0,T];V)$,
we also have that
\begin{equation} \label{co93-d}
v\ssi \to v \quext{strongly in }\,C^0([0,T];V).
\end{equation}
Combining \eqref{y11}, \eqref{co93-b2} and \eqref{co93-d}, we
can take the limit $\sigma\searrow 0$ in \eqref{x14ssi} and get
back \eqref{x14}. Then, the absolute continuity property
of the functional in \eqref{x13} follows from the summability
of the integrand on the \lhs\ of \eqref{x14}.
Finally, let us come to the case \eqref{x12}. Then,
\eqref{y12} and the Aubin-Lions theorem give directly
\eqref{co93-d}, so that we can pass to the limit
in the \rhs\ of \eqref{x14ssi}. To take
the limit of the \lhs, on account of the first
\eqref{y12}, it is sufficient to prove that
\begin{equation} \label{x15}
\calA(v\ssi) \to \calA(v) \quext{at least weakly in }\,L^2(0,T;V).
\end{equation}
Since weak convergence certainly holds in $\LDH$, it is then
sufficient to prove uniform boundedness in $\LDV$.
With this aim, we compute
\begin{align} \no
& \nabla \Big( a(v\ssi)\Delta v\ssi + \frac{a'(v\ssi)}2 |\nabla v\ssi|^2 \Big)\\
\label{x16}
& \mbox{}~~~~~
= a'(v\ssi) \nabla v\ssi \Delta v\ssi + a(v\ssi) \nabla \Delta v\ssi
+ \frac{a''(v\ssi)}2 | \nabla v\ssi |^2 \nabla v\ssi
+ a'(v\ssi) D^2 v\ssi \nabla v\ssi ,
\end{align}
and, using \eqref{y12}, \eqref{hpa1}, and standard embedding
properties of Sobolev spaces, it is standard
to verify that the \rhs\ is uniformly bounded in
$\LDH$ (and, consequently, so is the \lhs). This concludes
the proof.
\end{proof}
\section{The 6th order problem}
\label{sec:6th}
We start by introducing the concept
of {\sl weak solution}\ to the sixth order problem
associated with system \eqref{CH1}-\eqref{neum-intro}:
\bede\label{def:weaksol6th}
Let $\delta>0$ and $\epsi\ge 0$.
Let us consider the {\rm 6th order problem}
given by the system
\begin{align}\label{CH1w}
& u_t + A w = 0, \quext{in }\,V',\\
\label{CH2w}
& w = \delta A^2 u + \calA(u) + \xi - \lambda u + \epsi u_t,
\quext{in }\,V',\\
\label{CH3w}
& \xi \in \fzw(u)
\end{align}
together with the initial condition
\begin{equation}\label{init}
u|_{t=0}=u_0,
\quext{a.e.~in }\,\Omega.
\end{equation}
A (global in time) {\rm weak solution}
to the 6th order problem\/
\eqref{CH1w}-\eqref{init}
is a triplet $(u,w,\xi)$, with
\begin{align}\label{regou}
& u\in \HUVp\cap L^\infty(0,T;W) \cap L^2(0,T;H^3(\Omega)),
\qquad \epsi u\in \HUH,\\
\label{regoFu}
& F(u) \in L^\infty(0,T;L^1(\Omega)),\\
\label{regofu}
& \xi \in L^2(0,T;V'),\\
\label{regow}
& w\in L^2(0,T;V),
\end{align}
satisfying\/ \eqref{CH1w}-\eqref{CH3w} a.e.~in~$(0,T)$
together with~\eqref{init}.
\edde
\noinden
We can then state the main result of this section:
\bete\label{teoesi6th}
Let us assume\/ \eqref{hpa1}-\eqref{hpf2}. Let
$\epsi\ge 0$ and $\delta>0$. Moreover, let us suppose
that
\begin{equation}\label{hpu0}
u_0\in W, \quad
F(u_0)\in L^1(\Omega), \quad
(u_0)\OO \in (-1,1),
\end{equation}
where $(u_0)\OO$ is the spatial mean of $u_0$.
Then, the sixth order problem admits one and only one
weak solution.
\ente
\noinden
The proof of the theorem will be carried out in several steps,
presented as separate subsections.
\beos\label{rem:mean}
We observe that the last condition in \eqref{hpu0},
which is a common assumption when dealing with Cahn-Hilliard equations
with constraints (cf.~\cite{KNP} for more details),
does not simply follow from the requirement $F(u_0)\in L^1(\Omega)$.
Indeed, $F$ may be bounded over $[-1,1]$, as
happens, for instance, with the logarithmic
potential~\eqref{logpot}. In that case,
$F(u_0)\in L^1(\Omega)$ simply means $-1 \le u_0 \le 1$
almost everywhere and, without the last \eqref{hpu0},
we could have initial data that coincide almost everywhere
with either of the pure states $\pm1$. However, solutions that assume
(for example) the value $+1$ in a set of strictly positive
measure cannot be considered, at least in our regularity
setting. Indeed, if $|\{u=1\}|>0$, then the regularity property~\eqref{regofu}
(which is crucial for passing to the limit in our
approximation scheme) fails, because $f(r)$ is {\sl unbounded}\/
as $r\nearrow +1$ and $\xi$ is nothing but
a relaxed version of $f(u)$.
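As a concrete illustration (up to the normalization constants adopted
in \eqref{logpot}), consider the logarithmic density
$F(r)=(1+r)\log(1+r)+(1-r)\log(1-r)$, for which the monotone part of
$F'$ is
\begin{equation*}
f_0(r)=\log\frac{1+r}{1-r},
\quext{so that }\, f_0(r)\nearrow+\infty~~\text{as }\,r\nearrow 1.
\end{equation*}
Here $F$ is bounded on $[-1,1]$, whence $F(u_0)\in L^1(\Omega)$ holds
for {\sl any}\/ measurable $u_0$ with values in $[-1,1]$, including
$u_0\equiv 1$, which the last condition in \eqref{hpu0} is precisely
designed to exclude.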
\eddos
\subsection{Approximation and local existence}
\label{subsec:appr}
First of all, we introduce a suitable approximation
of the problem. The monotone function $f_0$ is regularized
by taking a family $\{f\ssi\}$, $\sigma\in(0,1)$,
defined as in Subsection~\ref{sec:weak}.
Next, we regularize $u_0$ by singular perturbation,
similarly as before (cf.~\eqref{co93}). Namely,
we take $u\zzs$ as the solution to the elliptic problem
\begin{equation}\label{defiuzzd}
u\zzs + \sigma A u\zzs = u_0,
\end{equation}
and we clearly have, by standard elliptic
regularity results,
\begin{equation}\label{regouzzd}
u\zzs \in D(A^2)
\quad\perogni \sigma\in(0,1).
\end{equation}
Other types of approximations of the initial datum are possible,
of course. The choice \eqref{defiuzzd}, beyond its
simplicity, has the advantage that it preserves the mean value.
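Indeed, since $A$ incorporates homogeneous Neumann boundary conditions
(whence $\io A v=0$ for every $v\in W$), an integration of
\eqref{defiuzzd} over $\Omega$ gives
\begin{equation*}
\io u\zzs = \io u\zzs + \sigma \io A u\zzs = \io u_0,
\quext{i.e., }\, (u\zzs)\OO=(u_0)\OO.
\end{equation*}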
\smallskip
\noinden
{\bf Approximate problem.}~~For $\sigma\in(0,1)$,
we consider the problem
\begin{align}\label{CH1appr}
& u_t + A w = 0,\\
\label{CH2appr}
& w = \delta A^2 u + \calA(u) + f\ssi(u) - \lambda u + (\epsi+\sigma) u_t,\\
\label{inisd}
& u|_{t=0}=u\zzs,
\quext{a.e.~in }\,\Omega.
\end{align}
We shall now show that it admits at least one local
in time weak solution. Namely, there holds
the following
\bele\label{teo:loc:appr}
Let us assume\/ \eqref{hpa1}-\eqref{hpf2}.
Then, for any $\sigma\in(0,1)$,
there exist $T_0\in(0,T]$
(possibly depending on $\sigma$)
and a pair $(u,w)$ with
\begin{align}\label{regovsd}
& u\in H^1(0,T_0;H)
\cap L^\infty(0,T_0;W) \cap L^2(0,T_0;D(A^2)),\\
\label{regowsd}
& w \in L^2(0,T_0;W),
\end{align}
such that \eqref{CH1appr}-\eqref{CH2appr} hold
a.e.~in~$(0,T_0)$ and the initial condition~\eqref{inisd}
is satisfied.
\enle
\begin{proof}
The lemma will be proved by means of the Schauder fixed point theorem.
We take
\begin{equation}\label{defiBR}
B_R:=\big \{ v\in L^2(0,T_0;W)\cap L^4(0,T_0;W^{1,4}(\Omega)) :
\| v \|_{L^2(0,T_0;W)} + \| v \|_{L^4(0,T_0;W^{1,4}(\Omega))}\le R \big\},
\end{equation}
for $T_0$ and $R$ to be chosen below.
Then, we take $\baru \in B_R$ and
consider the problem given by \eqref{inisd} and
\begin{align}\label{CH1schau}
& u_t + A w = 0, \quext{in }\,H,\\
\label{CH2schau}
& w = \delta A^2 u + \calA(\baru) + f\ssi(u) - \lambda u + (\epsi+\sigma) u_t,
\quext{in }\,H.
\end{align}
Then, since $\baru\in B_R$ is fixed,
we notice that
\begin{equation}\label{conto21c}
\| \calA(\baru) \|_{L^2(0,T_0;H)}^2
\le c \big( \| \baru \|_{L^2(0,T_0;W)}^2
+ \| \baru \|_{L^4(0,T_0;W^{1,4}(\Omega))}^4 \big)
\le Q(R).
\end{equation}
Here and below, $Q$ denotes a computable function,
possibly depending on $\sigma$, defined
for all nonnegative values of its arguments and
nondecreasing in each of them.
Substituting into \eqref{CH1schau} the expression
for $w$ given by \eqref{CH2schau}
and applying the inverse operator
$(\Id + (\epsi + \sigma )A )^{-1}$,
we obtain a parabolic equation in $u$ which is
linear up to the Lipschitz perturbation $f\ssi(u)$.
Hence, owing to the
regularity \eqref{conto21c} of the forcing term,
to the regularity \eqref{regouzzd}
of the initial datum, and to the standard Hilbert
theory of linear parabolic equations,
there exists a unique pair $(u,w)$ solving the problem
given by \eqref{CH1schau}-\eqref{CH2schau} and
the initial condition \eqref{inisd}. Such a pair
satisfies the regularity properties
\eqref{regovsd}-\eqref{regowsd} (as it will also
be apparent from the forthcoming a priori estimates).
We then denote by $\calK$ the map
$\calK: \baru \mapsto u$. To conclude the proof
we will have to show the following three properties:\\[2mm]
{\sl (i)}~~$\calK$ takes its values in $B_R$;\\[1mm]
{\sl (ii)}~~$\calK$ is continuous with respect to the $L^2(0,T_0;W)$
and the $L^4(0,T_0;W^{1,4}(\Omega))$ norms;\\[1mm]
{\sl (iii)}~~$\calK$ is a compact map.\\[2mm]
To prove these facts, we perform a couple of a priori estimates.
To start, we test \eqref{CH1schau} by $w$ and
\eqref{CH2schau} by $u_t$ (energy estimate). This gives
\begin{align} \no
& \ddt \bigg( \frac\delta2 \| A u \|^2
+ \io \Big( F\ssi(u) - \frac\lambda2 u^2 \Big) \bigg)
+ (\epsi+\sigma) \| u_t \|^2
+ \| \nabla w \|^2\\
\label{contox11}
& \mbox{}~~~~~
= - \big( \calA(\baru) , u_t \big)
\le \frac\sigma2 \| u_t \|^2
+ \frac1{2\sigma}\| \calA(\baru) \|^2
\end{align}
and, after integration in time,
the latter term can be estimated using
\eqref{conto21c}. Next, we observe
that, thanks to \eqref{propFsigma}, we have
\begin{equation}\label{contox12}
\frac\delta2 \| A u \|^2
+ \io \Big( F\ssi(u) - \frac\lambda2 u^2 \Big)
\ge \eta \| u \|_W^2 - c,
\end{equation}
for some $\eta>0$, $c\ge 0$ independent of $\sigma$ and
for all $u$ in $W$.
Thus, \eqref{contox11} provides the bounds
\begin{equation}\label{boundx11}
\| u \|_{L^\infty(0,T_0;W)}
+ \| u_t \|_{L^2(0,T_0;H)}
+ \| \nabla w \|_{L^2(0,T_0;H)} \le Q\big(R,T_0,\| u\zzs \|_W \big).
\end{equation}
Next, testing \eqref{CH2schau} by $A^2 u$ and
performing some standard computations (in particular,
the terms $(\calA(\baru),A^2 u)$ and $(f\ssi(u),A^2u)$
are controlled by using \eqref{conto21c}, H\"older's
and Young's inequalities, and the Lipschitz continuity
of $f\ssi$), we obtain the further bound
\begin{equation}\label{st21}
\| A^2 u \|_{L^2(0,T_0;H)}
\le Q\big(R,T_0,\|u\zzs\|_{W}\big).
\end{equation}
Hence, estimates \eqref{boundx11} and \eqref{st21}
and a standard application of the Aubin-Lions lemma
show that the range of $\calK$ is
relatively compact both in $L^2(0,T_0;W)$
and in $L^4(0,T_0;W^{1,4}(\Omega))$.
Thus, {\sl (iii)}\/ follows.
\medskip
Concerning {\sl (i)}, we can now simply observe that,
by \eqref{boundx11},
\begin{equation}\label{st31}
\| u \|_{L^2(0,T_0;W)}
\le T_0^{1/2} \| u \|_{L^\infty(0,T_0;W)}
\le T_0^{1/2} Q\big(R,T_0,\|u\zzs\|_{W}\big),
\end{equation}
whence the \rhs\ can be made smaller than $R$ if $T_0$ is chosen
small enough. A similar estimate works also for the
$L^4(0,T_0;W^{1,4}(\Omega))$-norm since $W\subset W^{1,4}(\Omega)$
continuously. Thus, also {\sl (i)}\ is proved.
\medskip
Finally, to prove condition {\sl (ii)},
we first observe that,
if $\{\baru_n\}\subset B_R$
converges strongly to $\baru$
in $L^2(0,T_0;W)\cap L^4(0,T_0;W^{1,4}(\Omega))$,
then, using suitable weak compactness theorems,
it is not difficult to prove that
\begin{equation}\label{conto31}
\calA(\baru_n)\to \calA(\baru)
\quext{weakly in }\,L^2(0,T_0;H).
\end{equation}
Consequently, if $u_n$ (respectively $u$)
is the solution to \eqref{CH1schau}-\eqref{CH2schau}
corresponding to $\baru_n$ (respectively $\baru$),
then estimates \eqref{boundx11}-\eqref{st21}
hold for the sequence $\{u_n\}$ with a function $Q$ independent
of $n$. Hence, standard weak compactness arguments
together with the Lipschitz continuity of $f\ssi$
yield
\begin{equation}\label{st33}
u_n=\calK(\baru_n) \to u=\calK(\baru)
\quext{strongly in }\,L^2(0,T_0;W) \cap L^4(0,T_0;W^{1,4}(\Omega)),
\end{equation}
i.e., condition {\sl (ii)}. The proof of the lemma
is concluded.
\end{proof}
\subsection{A priori estimates}
\label{sec:apriori}
In this section we will show that the local solutions
constructed in the previous section satisfy uniform
estimates with respect both to the approximation
parameter $\sigma$ and to
the time $T_0$. By standard extension methods this
will yield a global in time solution
(i.e., defined over the whole of $(0,T)$)
in the limit. However, to avoid
technical complications, we will directly assume that the
approximating solutions are already defined over $(0,T)$.
Of course, to justify this, we will have to take care
that all the constants appearing in the forthcoming
estimates be independent of $T_0$.
To be precise, in the sequel
we will denote by $c>0$ a computable
positive constant (whose value may vary from line to line)
independent of all approximation parameters
(in particular of $T_0$ and $\sigma$) and also of
the parameters $\epsi$ and $\delta$.
\smallskip
\noinden
{\bf Energy estimate.}~
First, integrating \eqref{CH1appr} in space
and recalling \eqref{defiuzzd},
we obtain the {\sl mass conservation}\/ property
\begin{equation}\label{consmedie}
(u(t))\OO = (u\zzs)\OO
= (u_0)\OO.
\end{equation}
Next, we can test \eqref{CH1appr} by $w$, \eqref{CH2appr} by $u_t$
and take the difference, arriving at
\begin{equation}\label{conto41}
\ddt \calE\ssid(u)
+ \| \nabla w \|^2
+ (\epsi+\sigma) \| u_t \|^2
= 0,
\end{equation}
where the ``approximate energy'' $\calE\ssid(u)$ is defined as
\begin{equation}\label{defiEssid}
\calE\ssid(u)=\io \Big( \frac\delta2 | A u |^2
+ \frac{a(u)}2 |\nabla u|^2 + F\ssi(u)
- \frac{\lambda}2u^2 \Big).
\end{equation}
Actually, it is clear that the high regularity of
approximate solutions (cf.~\eqref{regovsd}-\eqref{regowsd})
allows the integration by parts necessary to write \eqref{conto41}
(at least) almost everywhere in time.
Indeed, all the individual terms in \eqref{CH2appr} lie in
$\LDH$ and the same holds for the test function $u_t$.
Then, we integrate \eqref{conto41} in time and notice
that, by \eqref{propFsigma},
\begin{equation}\label{Essicoerc}
\calE\ssid(u) \ge \eta \big(
\delta \| u \|_W^2 + \| u \|_V^2
\big) - c \quad\perogni t\in(0,T).
\end{equation}
Consequently, \eqref{conto41} provides the bounds
\begin{align} \label{st41}
& \| u \|_{\LIV} + \delta^{1/2} \| u \|_{\LIW} + (\epsi+\sigma)^{1/2}
\| u_t \|_{\LDH} \le c,\\
\label{st43}
& \| \nabla w \|_{\LDH} \le c,\\
\label{st44}
& \| F\ssi(u) \|_{L^\infty(0,T;L^1(\Omega))} \le c,
\end{align}
where it is worth stressing once more
that the above constants $c$
depend explicitly neither on $\delta$ nor on $\epsi$.
\smallskip
\noinden
{\bf Second estimate.}~
We test \eqref{CH2appr} by $u-u\OO$, $u\OO$ denoting
the (constant in time) spatial mean of $u$. Integrating
by parts the term $\calA(u)$, we obtain
\begin{align}\no
& \delta \| A u \|^2
+ \io a(u) | \nabla u |^2
+ \io f\ssi(u)\big( u - u\OO \big)\\
\label{conto51}
& \mbox{}~~~~~
\le \big( w + \lambda u - (\epsi+\sigma) u_t, u - u\OO \big)
- \io \frac{a'(u)}2 | \nabla u |^2 ( u - u\OO )
\end{align}
and we have to estimate some terms. First
of all, we observe that there exists
a constant $c$, depending on the value of $u\OO$
(which is assigned once $u_0$ is fixed), but
{\sl independent of $\sigma$},
such that
\begin{equation}\label{conto52}
\io f\ssi(u)\big( u - u\OO \big)
\ge \frac12 \| f\ssi(u) \|_{L^1(\Omega)} - c.
\end{equation}
To prove this inequality, one basically
uses the monotonicity of $f\ssi$ and the fact
that $f\ssi(0)=0$ (cf.~\cite[Appendix]{MZ}
or \cite[Third a priori estimate]{GMS}
for the details). Next, by
\eqref{hpa2}, the function
$r\mapsto a'(r)(r-u\OO)$ is uniformly bounded,
whence
\begin{equation}\label{conto53}
- \io \frac{a'(u)}2 | \nabla u |^2 ( u - u\OO )
\le c \| \nabla u \|^2.
\end{equation}
Finally, using that $(w\OO+\lambda u\OO, u-u\OO)=0$
since $w\OO+\lambda u\OO$ is constant with respect to space variables,
and applying the Poincar\'e-Wirtinger inequality,
\begin{align}\no
& \big( w + \lambda u - (\epsi+\sigma) u_t, u - u\OO \big)
= \big( w - w\OO + \lambda (u-u\OO) - (\epsi+\sigma) u_t, u - u\OO \big)\\
\no
& \mbox{}~~~~~
\le c \| \nabla w \| \| \nabla u \|
+ c \| \nabla u \|^2
+ c (\epsi + \sigma) \| u_t \| \| \nabla u \| \\
\label{conto54}
& \mbox{}~~~~~
\le c\big( \| \nabla w \|
+ (\epsi + \sigma) \| u_t \|
+ 1 \big),
\end{align}
the latter inequality following from
estimate~\eqref{st41}.
Thus, squaring \eqref{conto51}, using
\eqref{conto52}-\eqref{conto54}, and integrating
in time, we arrive, after recalling
\eqref{st41} and \eqref{st43}, at
\begin{equation} \label{st51}
\| f\ssi(u) \|_{L^2(0,T;L^1(\Omega))} \le c.
\end{equation}
Next, integrating \eqref{CH2appr} with respect
to space variables (and, in particular, integrating
by parts the term $\calA(u)$), using
\eqref{st51}, and recalling \eqref{st43}, we obtain
(still for $c$ independent of $\epsi$ and $\delta$)
\begin{equation} \label{st52}
\| w \|_{L^2(0,T;V)} \le c.
\end{equation}
\noinden
{\bf Third estimate.}~
We test \eqref{CH2appr} by $Au$.
Using the monotonicity of
$f\ssi$ and \eqref{hpa1}, it is
not difficult to arrive at
\begin{equation}\label{conto61}
\frac{\epsi+\sigma}2\ddt \| \nabla u \|^2
+ \delta \| \nabla A u \|^2
+ \frac{\agiu}2 \| A u \|^2
\le \big( \nabla w + \lambda \nabla u , \nabla u \big)
+ c \| \nabla u \|_{L^4(\Omega)}^4.
\end{equation}
Using the continuous embedding
$H^{3/4}(\Omega)\subset L^4(\Omega)$
(so that, in particular,
$H^{7/4}(\Omega)\subset W^{1,4}(\Omega)$)
together with the interpolation inequality
\begin{equation}\label{new-interp}
\| v \|_{H^{7/4}(\Omega)}
\le \| v \|^{3/8}_{H^3(\Omega)} \| v \|^{5/8}_{H^1(\Omega)}
\quad \perogni v \in H^3(\Omega),
\end{equation}
and recalling estimate \eqref{st41},
the last term is treated as follows:
\begin{equation}\label{conto62}
c \| \nabla u \|_{L^4(\Omega)}^4
\le c \| u \|_{H^3(\Omega)}^{3/2}
\| u \|_{V}^{5/2}
\le \frac\delta2 \| \nabla A u \|^2
+ c(\delta).
\end{equation}
Note that the latter constant $c(\delta)$
may blow up as $\delta\searrow 0$ but, on
the other hand, is independent of $\sigma$.
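In detail, the first inequality in \eqref{conto62} follows by combining
$\|\nabla u\|_{L^4(\Omega)}\le c\|u\|_{H^{7/4}(\Omega)}$ with
\eqref{new-interp}, while the second one relies on the bound
$\|u\|_V\le c$ from \eqref{st41} and on Young's inequality with
exponents $4/3$ and $4$, namely
\begin{equation*}
c \| u \|_{H^3(\Omega)}^{3/2}
\le \omega \| u \|_{H^3(\Omega)}^{2} + c_\omega
\quext{for every }\,\omega>0;
\end{equation*}
choosing $\omega$ small and using the elliptic estimate
$\|u\|_{H^3(\Omega)}\le c\big(\|\nabla A u\| + \|u\|_V\big)$
(which holds thanks to the boundary conditions encoded in $A$),
the $\omega$-term is absorbed into
$\frac\delta2\|\nabla A u\|^2 + c(\delta)$.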
Next, noting that
\begin{equation}\label{conto63}
\big( \nabla w + \lambda \nabla u , \nabla u \big)
\le c \big( \| \nabla u \|^2 + \| \nabla w \|^2 ),
\end{equation}
from \eqref{conto61} we readily deduce
\begin{equation} \label{st61}
\| u \|_{L^2(0,T;H^3(\Omega))}
\le c(\delta).
\end{equation}
A similar (and even simpler)
argument shows that we also have
\begin{equation} \label{st62}
\| \calA(u) \|_{L^2(0,T;H)} \le c(\delta).
\end{equation}
Thus, using \eqref{st41}, \eqref{st52},
\eqref{st61}-\eqref{st62} and comparing terms in
\eqref{CH2appr}, we arrive at
\begin{equation} \label{st63}
\| f\ssi(u) \|_{L^2(0,T;V')} \le c(\delta).
\end{equation}
\subsection{Limit $\boldsymbol \sigma\searrow 0$}
\label{sec:sigma}
We now use the machinery introduced in
Subsection~\ref{sec:weak} to take the
limit $\sigma\searrow 0$ in
\eqref{CH1appr}-\eqref{CH2appr}. For convenience,
we now denote the solution by $(u\ssi,w\ssi)$. Then,
recalling estimates \eqref{st41}-\eqref{st44},
\eqref{st52} and \eqref{st61}-\eqref{st63},
and using the Aubin-Lions compactness lemma,
we deduce
\begin{align} \label{conv41}
& u\ssi \to u \quext{strongly in }\,
C^0([0,T];H^{2-\epsilon}(\Omega)) \cap
L^2(0,T;H^{3-\epsilon}(\Omega)),\\
\label{conv42}
& u\ssi \to u \quext{weakly star in }\, H^1(0,T;V') \cap
L^\infty(0,T;W) \cap
L^2(0,T;H^3(\Omega)),\\
\label{conv42b}
& (\epsi+\sigma) u_{\sigma,t} \to \epsi u_t
\quext{weakly in }\, L^2(0,T;H),\\
\label{conv43}
& w\ssi \to w \quext{weakly in }\, \LDV,\\
\label{conv44}
& f\ssi(u\ssi) \to \xi \quext{weakly in }\, \LDVp,
\end{align}
for suitable limit functions $u,w,\xi$, where $\epsilon>0$ is
arbitrarily small. It is readily checked that the above
relations (\eqref{conv41} in particular) are strong
enough to guarantee that
\begin{equation} \label{conv45}
\calA(u\ssi) \to \calA(u), \quext{strongly in }\, \LDH.
\end{equation}
This allows us to take the limit $\sigma\searrow 0$
in \eqref{CH1appr}-\eqref{inisd} (rewritten for $u\ssi,w\ssi$)
and get
\begin{align}\label{CH1delta}
& u_t + A w = 0, \quext{in }\,V',\\
\label{CH2delta}
& w = \delta A^2 u + \calA(u) + \xi - \lambda u + \epsi u_t,
\quext{in }\,V',\\
\label{iniz-delta}
& u|_{t=0} = u_0
\quext{a.e.~in }\,\Omega.
\end{align}
To identify $\xi$, we observe that,
thanks to \eqref{conv41}, \eqref{conv44}, and
Lemma~\ref{limimono} applied with the choices of
$X=V$, $X'=V'$, $\calB\ssi=f\ssi$, $\calB=\fzw$,
$v\ssi=u\ssi$, $v=u$ and $\xi\ssi=f\ssi(u\ssi)$,
it follows that
\begin{equation} \label{incldelta}
\xi\in \fzw(u).
\end{equation}
Namely, $\xi$ is identified in terms of the
weak (duality) realization $\fzw$ of the function $f_0$.
This concludes the proof of Theorem~\ref{teoesi6th}
as far as existence is concerned.
\subsection{Uniqueness}
\label{sec:uniq6th}
To conclude the proof of Theorem~\ref{teoesi6th}, it remains to
prove uniqueness. To this purpose, we write both \eqref{CH1w} and
\eqref{CH2w} for a couple of solutions $(u_1,w_1,\xi_1)$, $(u_2,w_2,\xi_2)$,
and take the difference. This gives
\begin{align}\label{CH1d0}
& u_t + A w = 0, \quext{in }\,V',\\
\no
& w = \delta A^2 u - a(u_1) \Delta u
- \big( a(u_1) - a(u_2) \big) \Delta u_2
- \frac{a'(u_1)}2 \big( | \nabla u_1 |^2 - | \nabla u_2 |^2 \big)\\
\label{CH2d0}
& \mbox{}~~~~~~~~~~
- \frac{a'(u_1) - a'(u_2)}2 | \nabla u_2 |^2
+ \xi_1 - \xi_2 - \lambda u + \epsi u_t,
\quext{in }\,V',
\end{align}
where we have set $(u,w,\xi):=(u_1,w_1,\xi_1)-(u_2,w_2,\xi_2)$.
Then, we test \eqref{CH1d0} by $A^{-1}u$, \eqref{CH2d0} by $u$,
and take the difference. Notice that, indeed, $u$ has zero mean value
by \eqref{consmedie}. Thus, the operator $A^{-1}$ makes
sense since $A$ is bijective from $W_0$ to $H_0$,
the subscript $0$ indicating the zero-mean condition.
A straightforward computation involving standard
embedding properties of Sobolev spaces then gives
\begin{align}\no
& \Big(- a(u_1) \Delta u - \big( a(u_1) - a(u_2) \big) \Delta u_2
- \frac{a'(u_1)}2 \big( | \nabla u_1 |^2 - | \nabla u_2 |^2 \big)
- \frac{a'(u_1) - a'(u_2)}2 | \nabla u_2 |^2 , u \Big) \\
\label{uniq22}
& \mbox{}~~~~~
\le Q\big( \| u_1 \|_{L^\infty(0,T;W)},\| u_2 \|_{L^\infty(0,T;W)} \big)
\| u \|_W \| u \|
\end{align}
and we notice that the norms inside the function $Q$ are controlled
thanks to \eqref{regou}. Thus, also on account of the monotonicity of
$\fzw$, we arrive at
\begin{align}\no
& \ddt\Big( \frac12 \| u \|_{V'}^2 + \frac\epsi2 \| u \|^2 \Big)
+ \delta \| A u \|^2
\le c \| u \|_W \| u \| + \lambda \| u \|^2\\
\label{uniq23}
& \mbox{}~~~~~
\le c \| u \|_W^{4/3} \| u \|_{V'}^{2/3}
\le \frac\delta2 \| A u \|^2 + c(\delta) \| u \|_{V'}^2,
\end{align}
where, to deduce the last two inequalities, we used
the interpolation inequality $\| u \| \le \| u \|_{V'}^{2/3}
\| u \|_W^{1/3}$ (note that $(V,H,V')$ form
a Hilbert triplet, cf., e.g., \cite[Chap.~5]{BrAF})
together with Young's inequality
and the fact that the function
$\| \cdot \|_{V'} + \| A \cdot \|$
is an equivalent norm on $W$; the computation is
sketched below. Thus, the assertion of
Theorem~\ref{teoesi6th} follows by applying Gronwall's
lemma to \eqref{uniq23}.
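For completeness, we sketch the computation behind the last two
inequalities in \eqref{uniq23}. The interpolation exponents are
consistent with the Sobolev scale: formally, $H$ sits one third of the
way from $V'$ to $W$, since $0=\frac23\,(-1)+\frac13\cdot 2$. Then,
Young's inequality with exponents $3/2$ and $3$ gives
\begin{equation*}
c \| u \|_W^{4/3} \| u \|_{V'}^{2/3}
\le \omega \| u \|_W^{2} + c_\omega \| u \|_{V'}^{2}
\quext{for every }\,\omega>0,
\end{equation*}
and the choice of a small $\omega$, combined with the equivalence of
norms recalled above, yields the last bound in \eqref{uniq23}.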
\section{From the 6th order to the 4th order model}
\label{sec:6thto4th}
In this section, we analyze the behavior of solutions to
the 6th order problem as $\delta$ tends to $0$. To start with,
we specify the concept of weak solution in the 4th order case:
\bede\label{def:weaksol4th}
Let $\delta=0$ and $\epsi\ge 0$.
Let us consider the\/ {\rm 4th order problem} given by
the system
\begin{align}\label{CH1w4th}
& u_t + A w = 0, \quext{in }\,V',\\
\label{CH2th}
& w = \calA(u) + f(u) + \epsi u_t,
\quext{in }\,H,
\end{align}
together with the initial condition~\eqref{init}.
A\/ (global in time) {\rm weak solution}
to the 4th order problem \eqref{CH1w4th}-\eqref{CH2th},
\eqref{init} is a pair $(u,w)$, with
\begin{align}\label{regou4}
& u\in \HUVp\cap L^\infty(0,T;V) \cap L^2(0,T;W),
\qquad \epsi u\in \HUH,\\
\label{regoFu4}
& F(u) \in L^\infty(0,T;L^1(\Omega)),\\
\label{regofu4}
& f_0(u) \in L^2(0,T;H),\\
\label{regow4}
& w\in L^2(0,T;V),
\end{align}
satisfying\/ \eqref{CH1w4th}-\eqref{CH2th} a.e.~in~$(0,T)$
together with\/ \eqref{init}.
\edde
\noinden
\bete\label{teo6thto4th}
Let us assume\/ \eqref{hpa1}-\eqref{hpf2} together with
\begin{equation}\label{aconcave}
a \quext{is concave on }\,[-1,1].
\end{equation}
Let also $\epsi\ge 0$ and let, for all $\delta\in(0,1)$,
$u\zzd$ be an initial datum satisfying\/ \eqref{hpu0}.
Moreover, let us suppose
\begin{equation}\label{convuzzd}
u\zzd\to u_0 \quext{strongly in }\,V, \qquad
\calE\dd(u\zzd)\to \calE_0(u_0),
\quext{where }\,(u_0)\OO\in(-1,1).
\end{equation}
Let, for any $\delta\in (0,1)$, $(u\dd,w\dd,\xi\dd)$ be a
weak solution to the 6th order system
in the sense of\/
{\rm Definition~\ref{def:weaksol6th}}. Then, we have that,
up to a (nonrelabelled) subsequence of $\delta\searrow 0$,
\begin{align}\label{co4th11}
& u\dd \to u \quext{weakly star in }\,\HUVp \cap \LIV \cap \LDW,\\
\label{co4th12}
& \epsi u\dd \to \epsi u \quext{weakly in }\,\HUH,\\
\label{co4th13}
& w\dd \to w \quext{weakly in }\,\LDV,\\
\label{co4th13b}
& \delta u\dd \to 0 \quext{strongly in }\,L^2(0,T;H^3(\Omega)),\\
\label{co4th14}
& \xi\dd \to f_0(u) \quext{weakly in }\,\LDVp,
\end{align}
and $(u,w)$ is a weak solution to the 4th order problem.
\ente
\noinden
\begin{proof}
The first part of the proof consists in repeating the
``Energy estimate'' and the ``Second estimate''
of the previous section. In fact, we could avoid this procedure
since we already noted that the constants appearing
in those estimates were independent of $\delta$.
However, we choose to perform once more the estimates
working directly on the 6th order problem
(rather than on its approximation) for various
reasons. First, this will show that the estimates
do not depend on the chosen regularization scheme.
Second, the procedure has an independent interest
since we will see that the use of ``weak''
subdifferential operators still allows one to rely
on suitable integration by parts formulas and
on monotonicity methods. Of course, many passages,
which were trivial in the ``strong'' setting,
now need a precise justification.
Finally, in this way we are able to prove, as a
byproduct, that any solution to the 6th order system
satisfies an energy {\sl equality}\/ (and not just
an inequality). Actually, this property may be useful for
addressing the long-time behavior of the system.
\smallskip
\noinden
{\bf Energy estimate.}~~
As before, we would like to test \eqref{CH1w} by $w\dd$,
\eqref{CH2w} by $u_{\delta,t}$, and take the difference. To justify
this procedure, we start by observing that $w\dd\in L^2(0,T;V)$
by \eqref{regow}. Actually, since \eqref{CH1w} is in
fact a relation in $L^2(0,T;V')$, the use of $w\dd$ as
a test function makes sense. The problem, instead, arises when
working on \eqref{CH2w} and, to justify the estimate, we
can just consider the (more difficult) case $\epsi=0$.
Then, it is easy to check that the assumptions of
Lemma~\ref{lemma:ipp} are satisfied. In particular, we have
\eqref{x12} thanks to \eqref{regou}.
Hence, \eqref{x14} gives
\begin{equation}\label{en-11}
\duavb{u_{\delta,t},\calA(u\dd)}
= \frac12 \ddt \io a(u\dd)|\nabla u\dd|^2,
\quext{a.e.~in }\,(0,T).
\end{equation}
Thus, it remains to show that
\begin{equation}\label{en-12}
\duavg{u_{\delta,t},\delta A^2 u\dd + \xi\dd}
= \ddt \io \Big( \frac\delta2 |A u\dd|^2
+ F(u\dd) \Big),
\quext{a.e.~in }\,(0,T).
\end{equation}
To prove this, we observe that
\begin{equation}\label{comparis}
\delta A^2 u\dd + \xi\dd \in \LDV \quad
\perogni \delta\in(0,1).
\end{equation}
Actually, we already noted above that $w\dd$, $\calA(u\dd)$
lie in $\LDV$. Since $u\dd\in \LDV$ by \eqref{regou}
and we assumed $\epsi=0$, \eqref{comparis} simply
follows by comparing terms in \eqref{CH2w}.
Thus, the duality on the \lhs\ of \eqref{en-12} makes sense.
Moreover, if we set
\begin{equation}\label{en-13}
\calJ\dd(v):= \io \Big( \frac\delta2 |A v|^2 + F(v) \Big),
\end{equation}
then a direct computation shows that
\begin{equation}\label{en-14}
\delta A^2 u\dd + \xi\dd
\in \de \calJ\dd ( u\dd )
\quext{a.e.~in }\,(0,T).
\end{equation}
Indeed, by definition
of $H$-subdifferential, this corresponds to
the relation
\begin{equation}\label{en-15}
\duavb { \delta A^2 u\dd + \xi\dd , v - u\dd }
\le \calJ\dd ( v ) - \calJ\dd ( u\dd )
\quad \perogni v\in H,
\end{equation}
and it is sufficient to check it for $v\in V$ since for
$v\in H\setminus V$ the \rhs\ is $+\infty$ and consequently
the relation is trivial. However, for $v\in V$,
\eqref{en-15} follows by definition of the relaxed
operator $\fzw$. Thanks to \eqref{en-14},
\eqref{en-12} is then a direct consequence of
inequality \eqref{ipepardiff} of Lemma~\ref{BResteso}.
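Explicitly, for every $v\in V$ with $\calJ\dd(v)<+\infty$, the
quadratic part of \eqref{en-15} is handled through the elementary
identity
\begin{equation*}
\frac\delta2 \| A v \|^2 - \frac\delta2 \| A u\dd \|^2
- \delta \big( A u\dd, A(v-u\dd) \big)
= \frac\delta2 \| A (v-u\dd) \|^2 \ge 0,
\end{equation*}
while the contribution of $\xi\dd$ is controlled by the very
definition of the relaxed operator $\fzw$.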
Thus, the above procedure shows that
(any) weak solution $(u\dd,w\dd,\xi\dd)$ to the
6th order problem satisfies the energy {\sl equality}
\begin{equation}\label{energy-6th}
\ddt \calE\dd(u(t))
+ \| \nabla w(t) \|^2
+ \epsi \| u_t(t) \|^2 = 0
\end{equation}
for almost all $t\in[0,T]$. As a consequence, we
get back the first two convergence relations
in \eqref{co4th11} as
well as \eqref{co4th12}. Moreover, we have
\begin{equation}\label{6to4-01}
\| \nabla w\dd \|_{\LDH} \le c.
\end{equation}
\smallskip
\noinden
{\bf Second estimate.}~~
Next, to get \eqref{co4th13} and \eqref{co4th14},
we essentially need to repeat the ``Second estimate''
of the previous section. Indeed, we see that $u\dd-(u\dd)\OO$ is
an admissible test function in \eqref{CH2w}. However,
we now have to obtain an estimate of $\xi\dd$ from
the duality product
\begin{equation}\label{6to4-21}
\duavb{\xi\dd, u\dd - (u\dd)\OO}.
\end{equation}
Actually, if $\xi\dd=\xi\dda+\xi\dds$ is the Lebesgue
decomposition of the {\sl measure}\ $\xi\dd$
given in Theorem~\ref{teobrezis},
then, noting that for all $t\in [0,T]$ we have
$u\dd(t)\in W\subset C^0(\barO)$, we can write
\begin{equation}\label{6to4-22}
\duavb{\xi\dd(t), u\dd(t) - (u\dd)\OO}
= \io \xi\dda(t) \big(u\dd(t) - (u\dd)\OO\big)\,\dix
+ \io \big(u\dd(t) - (u\dd)\OO \big) \dixi_{\delta,s}(t).
\end{equation}
Next, we notice that, as a direct
consequence of assumption~\eqref{convuzzd},
\begin{equation}\label{unifsep}
\esiste\mu\in(0,1):~~
-1+\mu \le (u\zzd)\OO \le 1-\mu,
\quad\perogni \delta\in(0,1),
\end{equation}
where $\mu$ is independent of $\delta$.
In other words, the spatial means $(u\zzd)\OO$ are
uniformly separated from $\pm1$.
Then, recalling \eqref{bre2} and proceeding
as in \eqref{conto52}, we have
\begin{equation}\label{conto52dd}
\io \xi\dda(t) \big(u\dd(t) - (u\dd)\OO\big)\,\dix
\ge \frac12\| \xi\dda(t) \|_{L^1(\Omega)} - c,
\end{equation}
where $c$ does not depend on $\delta$.
On the other hand, let us denote by
$\deriv\!\xi\dds=\phi\dds \dixis$ the {\sl polar}\
decomposition of $\xi\dds$, where $|\xi\dds|$ is
the {\sl total variation}\/ of $\xi\dds$
(cf., e.g., \cite[Chap.~6]{Ru}).
Then, introducing the bounded linear functional
$\calS\dd:C^0(\barO)\to \RR$ given by
\begin{equation}\label{deficalS}
\calS\dd(z):= \ibaro z \,\dixi_{\delta,s},
\end{equation}
using, e.g., \cite[Thm.~6.19]{Ru},
and recalling \eqref{bre3},
we can estimate the norm of $\calS\dd$ as
follows:
\begin{align}\no
|\xi\dds|(\barO)
& = \ibaro \dixis
= \| \calS\dd \|_{\calM(\barO)}\\
\no
& = \sup\Big\{\ibaro z\, \dixi_{\delta,s},~
z\in C^0(\overline\Omega),~
z(\barO)\subset[-1,1] \Big\}\\
\label{6to4-23}
& = \duav{\xi\dd,u\dd} - \io \xi\dda u\dd
= \ibaro u\dd \,\dixi_{\delta,s}
= \ibaro u\dd\phi\dds\,\dixis,
\end{align}
where we also used that $u\dd\in C^0(\barO)$.
Comparing terms, it then follows
\begin{equation}\label{6to4-24}
u\dd = \phi\dds,
\quad |\xi\dds|-\text{a.e.~in $\barO$}.
\end{equation}
Then, since it is clear that
\begin{equation}\label{6to4-25}
u\dd=\pm 1 \implica \frac{u\dd-(u\dd)\OO}{|u\dd-(u\dd)\OO|}=\pm1,
\end{equation}
coming back to \eqref{6to4-23} we deduce
\begin{equation}\label{6to4-26}
\ibaro \dixis
= \ibaro \phi\dds\frac{u\dd-(u\dd)\OO}{|u\dd-(u\dd)\OO|}\,\dixis
\le c \ibaro \phi\dds \big( u\dd-(u\dd)\OO \big)\,\dixis
= c \ibaro \big( u\dd-(u\dd)\OO \big) \, \dixi\dds.
\end{equation}
Here we used again in an essential way the
uniform separation property \eqref{unifsep}.
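More precisely, on the support of $|\xi\dds|$ we have
$u\dd=\phi\dds=\pm1$ by \eqref{6to4-24}, whereas
$(u\dd)\OO=(u\zzd)\OO$ keeps a distance at least $\mu$ from $\pm1$ by
\eqref{consmedie} and \eqref{unifsep}. Hence
\begin{equation*}
\phi\dds \big( u\dd-(u\dd)\OO \big)
= \big| {\pm}1 - (u\dd)\OO \big| \ge \mu,
\quad |\xi\dds|\text{-a.e.~in }\,\barO,
\end{equation*}
which shows that the middle inequality in \eqref{6to4-26} holds
with $c=1/\mu$.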
Collecting \eqref{6to4-22}-\eqref{6to4-26},
we then have
\begin{equation}\label{6to4-21b}
\duavb{\xi\dd, u\dd - (u\dd)\OO}
\ge \frac12 \| \xi\dda(t) \|_{L^1(\Omega)}
+ \eta \ibaro \dixis - c,
\end{equation}
for some $c\ge0$, $\eta>0$ independent of $\delta$.
On the other hand, mimicking \eqref{conto51}-\eqref{conto54},
we obtain
\begin{equation}\label{6to4-27}
\delta \| A u\dd \|^2
+ \agiu \| \nabla u\dd \|^2
+ \duavb{\xi\dd, u\dd - (u\dd)\OO}
\le c \big(\| \nabla w\dd \|
+ \epsi \| u_{\delta,t} \| + 1 \big),
\end{equation}
whence squaring, integrating in time,
and using \eqref{co4th12} and \eqref{6to4-01},
we obtain that the function
\begin{equation}\label{6to4-28}
t \mapsto \| \xi\dda(t) \|_{L^1(\Omega)}
+ \ibaro \dixist
\quext{is bounded in $L^2(0,T)$, independently of $\delta$.}
\end{equation}
Integrating now \eqref{CH2w} in space, we deduce
\begin{equation}\label{6to4-29}
\io w\dd
= \frac12 \io a'(u\dd) | \nabla u\dd |^2
+ \io \xi\dd
- \lambda (u\dd)\OO,
\end{equation}
whence
\begin{equation}\label{6to4-29b}
\bigg| \io w\dd \bigg|
\le c \bigg( \| \nabla u\dd \|^2
+ \| \xi\dda(t) \|_{L^1(\Omega)}
+ \ibaro \dixist + 1 \bigg).
\end{equation}
Thus, squaring, integrating in time, and recalling
\eqref{6to4-01} and \eqref{6to4-28}, we finally
obtain \eqref{co4th13}.
\smallskip
\noinden
{\bf Key estimate.}~
To take the limit $\delta\searrow 0$, we have to provide a
bound on $\calA(u\dd)$ independent of $\delta$.
This will be obtained by means of the following
integration by parts formula due to Dal Passo,
Garcke and Gr\"un (\cite[Lemma 2.3]{DpGG}):
\bele\label{lemma:dpgg}
Let $h\in W^{2,\infty}(\RR)$ and $z\in W$. Then,
\begin{align} \no
& \io h'(z) |\nabla z|^2 \Delta z
= -\frac13 \io h''(z) |\nabla z|^4\\
\label{byparts}
& \mbox{}~~~~~
+ \frac23 \io h(z) \big( |D^2 z|^2 - | \Delta z|^2 \big)
+ \frac23 \iga h(z) II( \nabla z ),
\end{align}
where $II(\cdot)$ denotes the second fundamental
form of $\Gamma$.
\enle
\noinden
We then test \eqref{CH2w} by $Au\dd$ in the duality between
$V'$ and $V$. This gives the relation
\begin{equation} \label{conto71}
\frac\epsi2\ddt \| \nabla u\dd \|^2
+ \delta \| \nabla A u\dd \|^2
+\big( \calA(u\dd), A u\dd \big)
+ \duavg{\xi\dd,Au\dd}
= \big(\nabla w\dd, \nabla u\dd\big)
+ \lambda \| \nabla u\dd \|^2
\end{equation}
and some terms have to be estimated. First, we note
that
\begin{equation} \label{conto71b}
\big( \calA(u\dd), Au\dd \big)
= \Big( a(u\dd)\Delta u\dd + \frac{a'(u\dd)}2 |\nabla u\dd|^2,
\Delta u\dd \Big).
\end{equation}
Thus, using Lemma~\ref{lemma:dpgg} with the choice
of $h(\cdot)=a'(\cdot)/2$, we obtain
\begin{align} \no
& \big( \calA(u\dd), Au\dd \big)
= \io a(u\dd) | \Delta u\dd |^2
+ \frac13 \io a(u\dd) \big( |D^2 u\dd|^2 - | \Delta u\dd|^2 \big)\\
\label{conto45}
& \mbox{}~~~~~~~~~~
- \frac16 \io a''(u\dd)|\nabla u\dd|^4
+ \frac13 \iga a(u\dd) II(\nabla u\dd).
\end{align}
Let us now point out that, since $\Gamma$ is smooth, we can estimate
\begin{equation} \label{conto46}
\frac13 \bigg| \iga a(u\dd) II(\nabla u\dd) \bigg|
\le c \| \nabla u\dd \|_{L^2(\Gamma)}^2
\le \omega \| u\dd \|_{W}^2
+ c_\omega \| u\dd \|^2,
\end{equation}
for small $\omega>0$ to be chosen below,
the last inequality following from the continuity of
the trace operator (applied to $\nabla u$)
from $H^s(\Omega)$ into $L^2(\Gamma)$ for $s\in(1/2,1)$
and the compactness of the embedding
$W\subset H^{1+s}(\Omega)$ for $s$ in the same range.
Thus, using the {\sl concavity}\/ assumption \eqref{aconcave}
on $a$ and the fact that $|u_\delta|\le 1$ almost everywhere
in $(0,T)\times\Omega$,
we get
\begin{equation}\label{elliptic3}
\big( \calA(u\dd), Au\dd \big)
\ge \eta \| A u\dd \|^2
- c,
\end{equation}
for proper strictly positive constants $\eta$ and $c$,
both independent of $\delta$.
Next, we observe that, by \eqref{incldelta} and
Lemma~\ref{BSesteso}, we obtain $\duavg{\xi\dd,Au\dd}\ge 0$.
Finally, we have
\begin{equation}\label{conto73}
- (\nabla w\dd, \nabla u\dd)
\le c \| \nabla w\dd \| \| \nabla u\dd \|,
\end{equation}
and the \rhs\ is readily estimated thanks
to \eqref{co4th12} and \eqref{co4th13}.
Thus, on account of \eqref{elliptic3},
integrating \eqref{conto71} in time,
we readily obtain
the last of \eqref{co4th11} as
well as \eqref{co4th13b}. Moreover, since
$-1\le u\dd\le 1$ almost everywhere, we immediately
have
\begin{equation} \label{st-key}
\| u\dd \|_{L^\infty((0,T)\times\Omega)}\le 1.
\end{equation}
Thus, using the Gagliardo-Nirenberg inequality
\eqref{ineq:gn}, we have also
\begin{equation} \label{conv54}
u\dd \to u \quext{weakly in }\,
L^4(0,T;W^{1,4}(\Omega)).
\end{equation}
This readily entails
\begin{equation} \label{conv55}
\calA(u\dd) \to \calA(u) \quext{weakly in }\,
L^2(0,T;H).
\end{equation}
Thus, a comparison of terms in \eqref{CH2w} gives also
\begin{equation} \label{conv56}
\xi\dd \to \xi \quext{weakly in }\,
L^2(0,T;V').
\end{equation}
Then, we can take the limit $\delta\searrow 0$ in
\eqref{CH1w} and get \eqref{CH1w4th}. On the other
hand, if we take the limit of \eqref{CH2w}, we
obtain
\begin{equation} \label{CH2provv}
w = \calA(u) + \xi - \lambda u + \epsi u_t
\end{equation}
and we have to identify $\xi$. Actually, \eqref{conv56},
the strong convergence $u\dd\to u$ in $\LDV$
(following from \eqref{co4th11} and the Aubin-Lions lemma)
and Lemma~\ref{limimono} make it possible to show that
\begin{equation} \label{incldelta2}
\xi\in \fzw(u) \quext{a.e.~in }\,(0,T).
\end{equation}
On the other hand, a comparison argument in
\eqref{CH2provv} shows that
$\xi\in \LDH$, whence, thanks to \eqref{betavsbetaw2},
we obtain that $\xi(t)=f_0(u(t))\in H$ for a.e.~$t\in(0,T)$.
This concludes the proof of Theorem~\ref{teo6thto4th}.
\end{proof}
\section{Analysis of the fourth order problem}
\label{sec:4th}
In this section, we will prove existence of
a weak solution to Problem~\eqref{CH1}-\eqref{neum-intro}
in the fourth order case $\delta =0$ by means of a direct
approach not relying on the 6th order approximation.
This will allow us to consider a general function
$a$ (without the concavity assumption \eqref{aconcave}).
More precisely, we have the following
\bete\label{teo:4th}
Let assumptions\/ \eqref{hpa1}-\eqref{hpf2} hold,
let $\epsi\ge 0$ and let
\begin{equation}\label{hpu0-4}
u_0\in V, \quad
F(u_0)\in L^1(\Omega), \quad
(u_0)\OO \in (-1,1).
\end{equation}
Then, there exists\/ {\rm at least} one weak
solution to the 4th order problem, in the sense
of\/ {\rm Definition~\ref{def:weaksol4th}.}
\ente
\noinden
The rest of the section is devoted to the proof of the
above result, which is divided into several steps.
\smallskip
\noinden
{\bf Phase-field approximation.}~~For
$\sigma\in(0,1)$, we consider the system
\begin{align}\label{CH1-4ap}
& u_t + \sigma w_t + A w = 0,\\
\label{CH2-4ap}
& w = \calA(u) + f\ssi(u) - \lambda u + (\epsi+\sigma) u_t.
\end{align}
This will be endowed with the initial conditions
\begin{equation}\label{init-4ap}
u|_{t=0} = u\zzs, \qquad
w|_{t=0} = 0.
\end{equation}
Similarly to before (compare with \eqref{defiuzzd}),
we have set
\begin{equation}\label{defiuzzs}
u\zzs + \sigma A^2 u\zzs = u_0
\end{equation}
and, by standard elliptic regularity, we have that
\begin{equation}\label{propuzzs}
u\zzs \in H^5(\Omega)\subset C^{3+\alpha}(\barO)
\quext{for }\,\alpha\in(0,1/2),
\qquad \dn u\zzs=\dn A u\zzs=0,~~\text{on }\,\Gamma.
\end{equation}
Moreover, of course, $u\zzs\to u_0$ in a suitable sense
as $\sigma\searrow0$.
\smallskip
\noinden
{\bf Fixed point argument.}~~We now prove existence of
a local solution to the phase-field
approximation by a further Schauder fixed point argument.
Namely, we introduce the system
\begin{align}\label{CH1-4pf}
& u_t + \sigma w_t + A w = 0,\\
\label{CH2-4pf}
& \barw = -a(\baru) \Delta u - \frac{a'(\baru)}2 | \nabla \baru |^2
+ f\ssi(\baru) - \lambda \baru + (\epsi + \sigma) u_t, \qquad
\dn u = 0~~\text{on }\,\Gamma,
\end{align}
which we still endow with the condition \eqref{init-4ap}.
Here, $f\ssi$ is chosen as in \eqref{defifsigma}.
Next, we set
\begin{equation}\label{deficalU}
\calU:=\left\{ u\in C^{0,1+\alpha} ([0,T_0]\times \barO):~
u|_{t=0}=u\zzs,~
\| u \|_{C^{0,1+\alpha}}\le 2R \right\},
\end{equation}
where $R:=\max\{1,\|u\zzs\|_{C^{1+\alpha}(\barO)}\}$
and $T_0$ will be chosen at the end of the argument.
It is clear that $R$ depends in fact on $\sigma$
(so that the same will happen for $T_0$). This dependence
is however not emphasized here.
For the definition of the parabolic H\"older spaces
used in this proof we refer the reader to
\cite[Chap.~5]{Lu}, whose notation is adopted.
Moreover, in the sequel,
in place of $C^{0,\alpha}([0,T_0]\times \barO)$
(and similar spaces) we will just write $C^{0,\alpha}$,
for brevity. We then also define
\begin{equation}\label{deficalW}
\calW:=\left\{ w\in C^{0,\alpha}:~
w|_{t=0}=0,~
\| w \|_{C^{0,\alpha}}\le R \right\},
\end{equation}
where $R$ is, for simplicity, the same number
as in \eqref{deficalU}.
Then, choosing $(\baru,\barw)$ in $\calU\times\calW$
and inserting it in \eqref{CH2-4pf}, we observe that,
by the Lipschitz regularity of $a$ (cf.~\eqref{hpa1})
and standard multiplication properties of
H\"older spaces, there exists a computable
monotone function $Q$, also depending on $\sigma$,
but independent of the time $T_0$, such that
\begin{equation}\label{prop-norme}
\|a(\baru)\|_{C^{0,\alpha}}
+ \big\|a'(\baru)|\nabla \baru|^2\big\|_{C^{0,\alpha}}
+ \|f\ssi(\baru)\|_{C^{0,\alpha}}
\le Q(R).
\end{equation}
Thanks to \cite[Thm.~5.1.21]{Lu}, there exists one and only one
solution $u$ to \eqref{CH2-4pf} with the first initial condition
\eqref{init-4ap}. This solution satisfies
\begin{equation}\label{regou-4pf}
\| u \|_{C^{1,2+\alpha}}
\le Q(R).
\end{equation}
Substituting $u_t$ into \eqref{CH1-4pf} and applying the same
theorem of \cite{Lu} to this equation with the second initial condition
\eqref{init-4ap}, we then obtain one and only one solution
$w$, with
\begin{equation}\label{regow-4pf}
\| w \|_{C^{1,2+\alpha}}
\le Q(R).
\end{equation}
We then denote by $\calT$ the map
$\calT: (\baru,\barw) \mapsto (u,w)$. As before,
we need to show that:\\[2mm]
{\sl (i)}~~$\calT$ takes its values in $\calU\times\calW$;\\[1mm]
{\sl (ii)}~~$\calT$ is continuous with respect to the
$C^{0,1+\alpha}\times C^{0,\alpha}$ norm
of $\calU\times\calW$;\\[1mm]
{\sl (iii)}~~$\calT$ is a compact map.\\[2mm]
First of all, let us prove {\sl (i)}. We only deal with the component
$u$, the argument for $w$ being analogous and in fact simpler.
We start observing that, if $u\in \Pi_1 (\calT (\calU\times \calW))$,
$\Pi_1$ denoting the projection on the first component, then
\begin{equation}\label{i-11}
\| u(t) \|_{C^\alpha(\barO)}
\le \| u_0 \|_{C^\alpha(\barO)}
+ \int_0^t \| u_t(s) \|_{C^\alpha(\barO)} \,\dis
\le R + T_0 Q(R),
\quad \perogni t\in[0,T_0],
\end{equation}
which is smaller than $2R$ if $T_0$ is chosen suitably.
Next, using the continuous embedding (cf.~\cite[Lemma~5.1.1]{Lu})
\begin{equation} \label{contemb}
C^{1,2+\alpha} \subset
C^{1/2}([0,T_0];C^{1+\alpha}(\barO))
\cap C^{\alpha/2}([0,T_0];C^2(\barO)),
\end{equation}
we obtain that, analogously,
\begin{equation}\label{i-12}
\| \nabla u(t) \|_{C^\alpha(\barO)}
\le \| \nabla u_0 \|_{C^\alpha(\barO)}
+ T_0^{1/2} \| u \|_{C^{1/2}([0,T_0];C^{1+\alpha}(\barO))}
\le R + T_0^{1/2} Q(R).
\end{equation}
Hence, passing to the supremum for $t\in[0,T_0]$, we see
that the norm of $u$ in $C^{0,1+\alpha}$ can be made
smaller than $2R$ if $T_0$ is small enough.
Thus, {\sl (i)}\ is proved.
\medskip
Let us now come to {\sl (iii)}. As before, we just deal with the
component $u$. Namely, on account of \eqref{regou-4pf},
we have to show that the space $C^{1,2+\alpha}$ is compactly
embedded into $C^{0,1+\alpha}$. Actually, by
\eqref{contemb} and using standard compact inclusion
properties of H\"older spaces, this relation is proved easily.
Hence, we have {\sl (iii)}.
\medskip
Finally, we have to prove {\sl (ii)}. This property
is however straightforward. Actually, taking
$(\baru_n,\barw_n)\to (\baru,\barw)$ in
$\calU\times \calW$, we have that the corresponding
solutions $(u_n,w_n)=\calT(\baru_n,\barw_n)$
are bounded in the sense of \eqref{regou-4pf}-\eqref{regow-4pf}
uniformly in $n$. Consequently, a standard weak
compactness argument, together with
the uniqueness property for the initial value problems
associated to \eqref{CH1-4pf} and to \eqref{CH2-4pf},
shows that the {\sl whole sequence}\/
$(u_n,w_n)$ converges to a unique limit point
$(u,w)$ solving \eqref{CH1-4pf}-\eqref{CH2-4pf}
with respect to the limit data $(\baru,\barw)$. Moreover, by the
compactness property proved in {\sl (iii)}, this
convergence holds with respect to the original topology
of $\calU\times \calW$. This proves that
$(u,w) = \calT (\baru,\barw)$, i.e., {\sl (ii)}\
holds.
\medskip
\noinden
{\bf A priori estimates.}~~For any $\sigma>0$, we have obtained
a local (i.e., with a final time $T_0$ depending on $\sigma$)
solution to \eqref{CH1-4ap}-\eqref{CH2-4ap} with the
initial conditions \eqref{init-4ap}. To emphasize the
$\sigma$-dependence, we will denote it by $(u\ssi,w\ssi)$
in the sequel. To let $\sigma\searrow 0$, we now
derive a priori estimates that are
uniform both with respect to $\sigma$ and with respect to
$T_0$. As before,
this will give a global solution in the limit and, to
avoid technicalities, we can
directly work on the time interval $[0,T]$.
Notice that the high regularity of $(u\ssi,w\ssi)$
justifies all the calculations performed below
(in particular, to all the integrations by parts).
That said, we repeat the ``Energy estimate'', exactly as
in the previous sections. This now gives
\begin{align} \label{st11ap}
& \| u\ssi \|_{\LIV} + \| F\ssi(u\ssi) \|_{L^\infty(0,T;L^1(\Omega))} \le c,\\
\label{st12ap}
& (\sigma+\epsi)^{1/2} \| u_{\sigma,t} \|_{\LDH} \le c,\\
\label{st13ap}
& \sigma^{1/2} \| w\ssi \|_{L^\infty(0,T;H)}
+ \| \nabla w\ssi \|_{\LDH} \le c.
\end{align}
Next, working as in the ``Second estimate'' of
Subsection~\ref{sec:apriori}, we obtain the analogue
of \eqref{st51} and \eqref{st52}.
To estimate $f\ssi(u\ssi)$ in $H$, we
now test \eqref{CH2-4ap} by $f\ssi(u\ssi)$, to get
\begin{align}\no
& \frac{\epsi+\sigma}2 \ddt \io F\ssi(u\ssi)
+ \io \Big( a(u\ssi) f\ssi'(u\ssi)
+ \frac{a'(u\ssi)}2 f\ssi(u\ssi) \Big) | \nabla u\ssi |^2
+ \| f\ssi(u\ssi) \|^2 \\
\label{conto51-4th}
& \mbox{}~~~~~
= \big( w\ssi + \lambda u\ssi, f\ssi(u\ssi) \big),
\end{align}
and it is a standard matter to estimate the \rhs\ by using
the last term on the \lhs, H\"older's and Young's inequalities,
and properties \eqref{st11ap} and \eqref{st52}.
Now, we notice that, thanks to \eqref{goodmono},
\begin{equation}\label{4th-21}
a(r) f\ssi'(r) + \frac{a'(r)}2 f\ssi(r)
\ge \agiu f\ssi'(r) - c | f\ssi(r) |
\ge \frac{\agiu}2 f\ssi'(r) - c
\quad \perogni r\in [-2,2],
\end{equation}
with the last $c$ being independent of $\sigma$.
On the other hand, for $r\not\in [-2,2]$
we have that $a'(r)=0$ by \eqref{hpa2}.
Hence, also thanks to \eqref{st11ap}, the second term
on the \lhs\ of \eqref{conto51-4th} can be controlled.
We then arrive at
\begin{equation} \label{st21ap}
\| f\ssi(u\ssi) \|_{L^2(0,T;H)} \le c.
\end{equation}
The key point is the next estimate, which is
used to control the second space derivatives of $u$. To
carry it out, we first perform a change of variables.
Namely, we set
\begin{equation} \label{defiphi}
\phi(s):=\int_0^s a^{1/2} (r) \, \dir, \qquad
z\ssi:=\phi(u\ssi)
\end{equation}
and notice that, by \eqref{hpa1}-\eqref{hpa2}, $\phi$ is monotone
and Lipschitz together with its inverse. Then, by \eqref{st11ap},
\begin{equation} \label{st21ap2}
\| z\ssi \|_{L^\infty(0,T;V)} \le c
\end{equation}
and it is straightforward to check that \eqref{CH2-4ap}
can be rewritten as
\begin{equation} \label{CH2-z}
w\ssi = - \phi'(u\ssi) \Delta z\ssi
+ f\ssi\circ\phi^{-1}(z\ssi) - \lambda u\ssi
+ (\epsi + \sigma) u_{\sigma,t},
\qquad \dn u\ssi = 0~~\text{on }\,\Gamma.
\end{equation}
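The computation behind \eqref{CH2-z} is elementary: by
\eqref{defiphi} we have $\nabla z\ssi = a^{1/2}(u\ssi)\nabla u\ssi$,
whence
\begin{equation*}
\phi'(u\ssi) \Delta z\ssi
= a(u\ssi) \Delta u\ssi + \frac{a'(u\ssi)}2 |\nabla u\ssi|^2
= - \calA(u\ssi),
\end{equation*}
the last equality following from the explicit expression of $\calA$
(cf.~\eqref{CH2-4pf}).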
By the H\"older continuity of $u\ssi$ up to its second space
derivatives and the Lipschitz continuity of $a$ and $a'$
(cf.~\eqref{hpa1}-\eqref{hpa2}), $-\Delta z\ssi$ is also H\"older
continuous in space.
Thus, we can use it as a test function in \eqref{CH2-z}.
Using the monotonicity of $f\ssi$ and $\phi^{-1}$,
and recalling \eqref{st21ap}, we then easily obtain
\begin{equation} \label{st31ap}
\| z\ssi \|_{L^2(0,T;W)} \le c.
\end{equation}
\smallskip
\noinden
{\bf Passage to the limit.}~~As a consequence
of \eqref{st11ap}-\eqref{st13ap},
\eqref{st51}-\eqref{st52} and
\eqref{st21ap}, we have
\begin{align} \label{co11ap}
& u\ssi \to u \quext{weakly star in }\,
\HUVp \cap \LIV,\\
\label{co12ap}
& (\sigma+\epsi) u_{\sigma,t} \to \epsi u_t
\quext{weakly in }\, \LDH,\\
\label{co13ap}
& f\ssi(u\ssi) \to \barf
\quext{weakly in }\, \LDH,\\
\label{co14ap}
& w\ssi \to w
\quext{weakly in }\, \LDV,\\
\label{co15ap}
& u_{\sigma,t} + \sigma w_{\sigma,t}
\to u_t \quext{weakly in }\, \LDVp,
\end{align}
for suitable limit functions $u$, $w$ and $\barf$.
Here and below, all convergence relations are
intended to hold up to (nonrelabelled) subsequences
of $\sigma\searrow0$. Now, by the Aubin-Lions lemma,
we have
\begin{equation} \label{co21ap}
u\ssi \to u \quext{strongly in }\, \CZH
\quext{and a.e.~in }\,Q.
\end{equation}
Then, \eqref{co13ap} and a standard monotonicity
argument (cf.~\cite[Prop.~1.1]{barbu})
imply that $\barf=f(u)$ a.e.~in $Q$.
Furthermore, by \eqref{hpa1}-\eqref{hpa2}
and the generalized Lebesgue theorem,
we have
\begin{equation} \label{co21ap2}
a(u\ssi) \to a(u),~~a'(u\ssi) \to a'(u),~~
\quext{strongly in }\,
L^q(Q)~~\text{for all }\,q\in[1,+\infty).
\end{equation}
Analogously, recalling \eqref{st21ap2},
$z\ssi=\phi(u\ssi)\to \phi(u)=:z$,
strongly in $L^q(Q)$ for all $q\in[1,6)$.
Actually, the latter relation holds also weakly
in $\LDW$ thanks to the bound \eqref{st31ap}.
Moreover, by \eqref{st21ap2}, \eqref{st31ap}
and interpolation, we obtain
\begin{equation} \label{co22ap}
\| \nabla z\ssi \|_{L^{10/3}(Q)} \le c,
\end{equation}
whence, clearly, we also have
\begin{equation} \label{co22ap2}
\| \nabla u\ssi \|_{L^{10/3}(Q)} \le c.
\end{equation}
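The exponent $10/3$ in \eqref{co22ap} comes from the standard
space-time interpolation of $L^\infty(0,T;H)\cap L^2(0,T;V)$:
assuming, consistently with the exponents appearing above, that
$\Omega$ is three-dimensional, the Gagliardo-Nirenberg inequality
$\| v \|_{L^{10/3}(\Omega)} \le c \| v \|_{H^1(\Omega)}^{3/5}
\| v \|^{2/5}$, applied to (the components of) $v=\nabla z\ssi(s)$,
gives
\begin{equation*}
\int_0^T \| \nabla z\ssi(s) \|_{L^{10/3}(\Omega)}^{10/3} \,\dis
\le c\, \| \nabla z\ssi \|_{L^\infty(0,T;H)}^{4/3}
\int_0^T \| \nabla z\ssi(s) \|_{H^1(\Omega)}^{2} \,\dis
\le c,
\end{equation*}
thanks to \eqref{st21ap2} and \eqref{st31ap}; then \eqref{co22ap2}
follows since $\phi'=a^{1/2}$ is bounded away from $0$
(cf.~\eqref{hpa1}).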
As a consequence, since
\begin{equation} \label{co22ap3}
- \Delta u\ssi = - \frac1{a^{1/2}(u\ssi)} \Delta z\ssi
+ \frac{a'(u\ssi)}{2a(u\ssi)} | \nabla u\ssi |^2,
\end{equation}
we also have that
\begin{equation} \label{co22ap4}
\Delta u\ssi \to \Delta u
\quext{weakly in }\, L^{5/3}(Q).
\end{equation}
Combining this with \eqref{co11ap} and using the
generalized Aubin-Lions lemma (cf., e.g., \cite{Si}),
we then arrive at
\begin{equation} \label{co22ap5}
u\ssi \to u
\quext{strongly in }\, L^{5/3}(0,T;W^{2-\epsilon,5/3}(\Omega))
\cap C^0([0,T];H^{1-\epsilon}(\Omega)),
\quad \perogni \epsilon > 0,
\end{equation}
whence, by standard interpolation and embedding properties
of Sobolev spaces, we obtain
\begin{equation} \label{co22ap6}
\nabla u\ssi \to \nabla u
\quext{strongly in }\, L^q(Q) \quext{for some }\,q>2.
\end{equation}
Consequently, recalling \eqref{co21ap2},
\begin{equation} \label{co22ap7}
a'(u\ssi) |\nabla u\ssi|^2 \to a'(u) |\nabla u|^2,
\quext{say, weakly in }\, L^1(Q).
\end{equation}
This is sufficient to take the limit $\sigma\searrow 0$
in \eqref{CH2-4ap} and get back \eqref{CH2th}.
To conclude the proof, it only remains to show the
regularity \eqref{regou4} as far as the second
space derivatives of $u$ are concerned. Actually, by
\eqref{st31ap} and the Gagliardo-Nirenberg inequality
\eqref{ineq:gn},
\begin{equation} \label{co22ap8}
z \in L^2(0,T;W) \cap L^\infty(Q) \subset
L^4(0,T;W^{1,4}(\Omega)).
\end{equation}
Thus, we have also $u \in L^4(0,T;W^{1,4}(\Omega))$ and,
consequently, a comparison of terms in \eqref{CH2th}
shows that $\Delta u \in L^2(0,T;H)$, whence
\eqref{regou4} follows from elliptic regularity.
The proof of Theorem~\ref{teo:4th} is concluded.
\section{Further properties of weak solutions}
\label{sec:uniq}
\subsection{Uniqueness for the 4th order problem}
\label{subsec:uniq}
We will now prove that, if the interfacial (i.e., gradient)
part of the free energy $\calE\dd$ satisfies a
{\sl convexity}\/ condition (in the viscous case $\epsi>0$)
or, respectively, a {\sl strict convexity}\/ condition
(in the non-viscous case $\epsi=0$), then the solution
is unique also in the 4th order case.
Actually, the stronger assumption
(corresponding to $\kappa>0$ in the statement below)
required in the non-viscous case is needed for the purpose
of controlling the nonmonotone part of $f(u)$, while in the
viscous case we can use the term $\epsi u_t$
for that purpose.
It is worth noting that,
also from a merely thermodynamical point of view,
the convexity condition is a rather natural requirement.
Indeed, it corresponds to requiring the second differential
of $\calE\dd$ to be positive definite, which ensures
that the stationary solutions are dynamically stable
(cf., e.g., \cite{Su} for more details).
\bete\label{teouniq}
Let the assumptions of\ {\rm Theorem~\ref{teo:4th}} hold
and assume that, in addition,
\begin{equation} \label{1aconc}
a''(r)\ge 0, \quad
\Big(\frac1a\Big)''(r)\le -\kappa,
\quad\perogni r\in[-1,1],
\end{equation}
where $\kappa>0$ if $\epsi=0$ and $\kappa\ge 0$
if $\epsi> 0$. Then, the 4th order problem admits a
unique weak solution.
\ente
\begin{proof}
Let us denote by $J$ the gradient part of the energy,
i.e.,
\begin{equation} \label{defiJ}
J:V\to [0,+\infty), \qquad
J(u):=\io \frac{a(u)}2 | \nabla u |^2.
\end{equation}
Then, we clearly have
\begin{equation} \label{Jprime}
\duavg{J'(u),v} =
\io \Big( a(u) \nabla u \cdot \nabla v
+ \frac{a'(u)}2 |\nabla u|^2 v \Big),
\end{equation}
and we can correspondingly compute the second derivative
of $J$ as
\begin{equation}\label{Jsecond}
\duavg{J''(u)v,z} =
\io \Big( \frac{a''(u)|\nabla u|^2 vz}2
+ a'(u) v \nabla u\cdot\nabla z
+ a'(u) z \nabla u\cdot\nabla v
+ a(u) \nabla v\cdot\nabla z \Big).
\end{equation}
To be more precise, we have that
$J'(u)\in V'$ and $J''(u)\in \calL(V,V')$
at least for $u\in W$ (this may instead not be true
if we only have $u\in V$, due to the quadratic terms
in the gradient). This is however the case for the
4th order system since for any weak solution
we have that $u(t) \in W$ at least for a.e.~$t\in(0,T)$.
From \eqref{Jsecond}, we then have in particular
\begin{align} \no
\duavg{J''(u)v,v}
& = \io \Big( \frac{a''(u)|\nabla u|^2 v^2}2
+ 2 a'(u) v \nabla u\cdot\nabla v
+ a(u) | \nabla v |^2 \Big)\\
\label{Jvv}
& \ge \io \Big( a(u) - \frac{2 a'(u)^2}{a''(u)} \Big)
| \nabla v|^2,
\end{align}
whence the functional $J$ is convex, at least when restricted
to functions $u$ such that
\begin{equation} \label{doveJconv}
u \in W, \quad u(\Omega)\subset [-1,1],
\end{equation}
provided that $a$ satisfies
\begin{equation} \label{aaprimo}
a(r)a''(r) - 2a'(r)^2 \ge 0
\quad\perogni r\in [-1,1].
\end{equation}
Noting that
\begin{equation} \label{1asecondo}
\Big(\frac1a\Big)''
= \frac{2(a')^2-aa''}{a^3},
\end{equation}
we have that $J$ is (strictly) convex
if $1/a$ is (strictly) concave, i.e.,
\eqref{1aconc} holds (cf.~also \cite[Sec.~3]{DNS}
for related results). Note that,
in deducing the last inequality in \eqref{Jvv},
we worked as if $a''>0$ everywhere. However, if
$a''(r)=0$ for some $r$, then also $a'(r)$
has to be $0$ due to \eqref{aaprimo}. So, this
means that in the set $\{u=r\}$ the first two
summands in the \rhs\ of the first line of
\eqref{Jvv} identically vanish.
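For completeness, the last inequality in \eqref{Jvv} follows, at the
points where $a''(u)>0$, from the pointwise Young inequality
\begin{equation*}
2\, |a'(u)|\, |v|\, |\nabla u|\, |\nabla v|
\le \frac{a''(u)}2\, v^2 |\nabla u|^2
+ \frac{2\, a'(u)^2}{a''(u)}\, |\nabla v|^2,
\end{equation*}
applied to the middle term on the first line.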
\smallskip
That said, let us write both \eqref{CH1w4th} and
\eqref{CH2th} for a couple of solutions $(u_1,w_1)$, $(u_2,w_2)$,
and take the difference.
Setting $(u,w):=(u_1,w_1)-(u_2,w_2)$, we obtain
\begin{align}\label{CH1d}
& u_t + A w = 0,\\
\label{CH2d}
& w = J'(u_1) - J'(u_2) + f(u_1) - f(u_2) + \epsi u_t.
\end{align}
Then, we can test \eqref{CH1d} by
$A^{-1}u$, \eqref{CH2d} by $u$, and take the difference.
Indeed, $u=u_1-u_2$ has zero mean value
by \eqref{consmedie}. We obtain
\begin{equation} \label{contod1}
\frac12 \ddt \Big( \| u \|_{V'}^2 + \epsi \| u \|^2 \Big)
+ \duavg{J'(u_1) - J'(u_2),u}
+ \big( f(u_1) - f(u_2), u \big) = 0
\end{equation}
and, using the convexity of $J$
coming from \eqref{1aconc} and the
$\lambda$-monotonicity of $f$
(see~\eqref{hpf1}), we have,
for some function $\xi$ belonging to $W$
a.e.~in time and taking its values in $[-1,1]$,
\begin{equation} \label{contod2}
\frac12 \ddt \Big( \| u \|_{V'}^2 + \epsi \| u \|^2 \Big)
+ \kappa \| \nabla u \|^2
\le \frac12 \ddt \Big( \| u \|_{V'}^2 + \epsi \| u \|^2 \Big)
+ \duavg{J''(\xi) u, u}
\le \lambda \| u \|^2.
\end{equation}
Thus, in the case $\epsi>0$ (where $\kappa=0$ is allowed),
we can just use Gronwall's lemma. Instead, if
$\epsi=0$ (so that we assumed $\kappa>0$),
by the Poincar\'e-Wirtinger inequality we
have
\begin{equation} \label{contod3}
\lambda \| u \|^2
\le \frac\kappa2 \| \nabla u \|^2
+ c \| u \|_{V'}^2,
\end{equation}
and the assertion follows again by applying
Gronwall's lemma to \eqref{contod2}.
\end{proof}
\subsection{Additional regularity}
\label{sec:add}
We prove here parabolic regularization properties
of the solutions to the 4th order system
holding in the case of a convex energy functional.
An analogous result would hold also for the 6th order
system under general conditions on $a$ since the
bilaplacian in that case dominates the lower order
terms (we omit the details).
\bete\label{teoreg}
Let the assumptions of\/ {\rm Theorem~\ref{teouniq}} hold.
Then, the solution satisfies the additional
regularity property
\begin{equation} \label{add-reg}
\| u \|_{L^\infty(\tau,T;W)}
+ \| u \|_{L^\infty(\tau,T;W^{1,4}(\Omega))}
\le Q(\tau^{-1}) \quad \perogni \tau>0,
\end{equation}
where $Q$ is a computable monotone function whose expression
depends on the data of the problem and, in particular,
on $u_0$.
\ente
\begin{proof}
The proof is based on a further a priori estimate, which
has unfortunately a formal character in the present
regularity setting. To justify it, one should proceed
by regularization. For instance, a natural choice would be
that of refining the fixed point argument leading to
existence of a weak solution (cf.~Sec.~\ref{sec:4th})
by showing (e.g., using a bootstrap regularity argument)
that, at least locally in time, the solution lies
in higher order H\"older spaces. We leave the details
to the reader.
That said, we test \eqref{CH1w4th} by $w_t$ and subtract
the result from the time derivative of \eqref{CH2th}
tested by $u_t$. We obtain
\begin{equation} \label{contoe1}
\frac 12 \ddt \| \nabla w \|^2
+ \frac\epsi2 \ddt \| u_t \|^2
+ \duavg{J''(u) u_t, u_t}
+ \io f'(u) u_t^2 \le 0.
\end{equation}
Then, by convexity of $J$,
\begin{equation} \label{contoe2}
\duavg{J''(u) u_t, u_t}
\ge \kappa \| u_t \|_{\LDV}^2.
\end{equation}
On the other hand, the
$\lambda$-monotonicity of $f$ gives
\begin{equation} \label{contoe3}
\io f'(u) u_t^2
\ge - \lambda \| u_t \|_{\LDH}^2
\end{equation}
and, if $\epsi=0$ (so that $\kappa>0$),
we have as before
\begin{equation} \label{contoe3-b}
- \lambda \| u_t \|_{\LDH}^2
\ge - \frac\kappa2 \| u_t \|_{\LDV}^2
- c \| u_t \|_{\LDVp}^2.
\end{equation}
Thus, recalling the first of \eqref{regou}
and applying the {\sl uniform}\/ Gronwall lemma
(cf.~\cite[Lemma~I.1.1]{Te}),
it is not difficult to infer
\begin{equation} \label{ste1}
\| \nabla w \|_{L^\infty(\tau,T;H)}
+ \epsi^{1/2} \| u_t \|_{L^\infty(\tau,T;H)}
+ \kappa \| u_t \|_{L^2(\tau,T;V)}
\le Q(\tau^{-1}) \quad \perogni \tau>0.
\end{equation}
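For the reader's convenience, we recall the statement of the uniform Gronwall lemma used above (cf.~\cite[Lemma~I.1.1]{Te}): if $y$, $g$, $h$ are nonnegative functions such that $y' \le g y + h$ and, for some $r>0$, $a_1$, $a_2$, $a_3 \ge 0$ and all $t \ge 0$,
\begin{equation*}
  \int_t^{t+r} g(s) \dis \le a_1, \qquad
  \int_t^{t+r} h(s) \dis \le a_2, \qquad
  \int_t^{t+r} y(s) \dis \le a_3,
\end{equation*}
then $y(t+r) \le \big( a_3/r + a_2 \big) e^{a_1}$ for all $t \ge 0$.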
Next, testing \eqref{CH2w} by $u-u\OO$ and proceeding
as in the ``Second estimate'' of Subsection~\ref{sec:apriori},
but taking now the essential supremum as time
varies in $[\tau,T]$, we arrive at
\begin{equation} \label{ste2}
\| w \|_{L^\infty(\tau,T;V)}
+ \| f(u) \|_{L^\infty(\tau,T;L^1(\Omega))}
\le Q(\tau^{-1}) \quad \perogni \tau>0.
\end{equation}
Thus, thanks to \eqref{ste2}, we can test \eqref{CH2th} by
$-\Delta z$, with $z=\phi(u)$ (cf.~\eqref{defiphi}). Proceeding
similarly to Section~\ref{sec:4th} (but taking now the
supremum over $[\tau,T]$ rather than integrating in time),
we easily get \eqref{add-reg}, which concludes the proof.
\end{proof}
\subsection{Energy equality}
\label{sec:long}
As noted in Section~\ref{sec:6thto4th},
any weak solution to the 6th order system satisfies
the energy {\sl equality} \eqref{energy-6th}.
We will now see that the same property holds
also in the {\sl viscous}\/ 4th order
case (i.e., if $\delta=0$ and $\epsi>0$).
More precisely, we can prove the
\bepr\label{prop:energy}
Let the assumptions of\/ {\rm Theorem~\ref{teo:4th}}
hold and let $\epsi>0$. Then, any weak solution to the
4th order system satisfies the\/ {\rm integrated} energy
equality
\begin{equation}\label{energy-4th-i}
\calE_0(u(t)) = \calE_0(u_0)
- \int_0^t \big( \| \nabla w(s) \|^2
- \epsi \| u_t(s) \|^2 \big) \dis
\quad \perogni t\in[0,T].
\end{equation}
\empr
\begin{proof}
As before, we proceed by testing \eqref{CH1w4th} by $w$,
\eqref{CH2th} by $u_t$ and taking the difference. As
$u_t\in \LDH$ and $f_0(u)\in \LDH$
(cf.~\eqref{regou4} and \eqref{regofu4}),
then the integration by parts
\begin{equation} \label{co91}
\big(f(u),u_t\big)
= \ddt \io F(u), \quext{a.e.~in }\,(0,T)
\end{equation}
is straightforward (it follows directly
from \cite[Lemma~3.3, p.~73]{Br}).
Moreover, in view of \eqref{regou4},
assumption \eqref{x11} of Lemma~\ref{lemma:ipp}
is satisfied. Hence, by \eqref{x14},
we deduce that
\begin{equation} \label{co92}
\int_0^t \big(\calA(u(s)),u_t(s)\big)\,\dis
= \io \frac{a(u(t))}2 |\nabla u(t)|^2
- \io \frac{a(u_0)}2 |\nabla u_0|^2.
\end{equation}
Combining \eqref{co91} and \eqref{co92}, we immediately get
the assertion.
\end{proof}
\noinden
It is worth noting that the energy equality obtained
above has a key relevance in the investigation
of the long-time behavior of
the system. In particular, given $m\in(-1,1)$ (the spatial mean
of the initial datum, which is a conserved quantity due
to~\eqref{consmedie}), we can define the {\sl phase space}
\begin{equation} \label{defiXd}
\calX\ddm:=\big\{u\in V:~\delta u\in W,~F(u)\in L^1(\Omega),~
u\OO = m\big\}
\end{equation}
and view the system (both for $\delta>0$ and for $\delta=0$)
as a (generalized) dynamical process in $\calX\ddm$.
Then, \eqref{energy-4th-i} (or its 6th order analogue)
stands at the basis of the so-called {\sl energy method}\/
(cf.~\cite{Ba1,MRW}) for proving existence of the
{\sl global attractor} with respect to the ``strong''
topology of the phase space.
This issue will be analyzed in a forthcoming work.
\beos\label{nonviscous}
Whether the equality \eqref{energy-4th-i} still holds
in the nonviscous case $\epsi=0$ seems to be
a nontrivial question.
The answer would be positive in case one could prove
the integration by parts formula
\begin{equation} \label{co101}
\itt\duavb{u_t,\calA(u)+f(u)}
= \io\Big(\frac{a(u(t))}2 |\nabla u(t)|^2 + F(u(t))\Big)
- \io\Big(\frac{a(u_0)}2|\nabla u_0|^2 + F(u_0)\Big),
\end{equation}
under the conditions
\begin{equation} \label{co102}
u \in \HUVp \cap L^2(0,T;W) \cap L^\infty(Q),
\qquad \calA(u)+f(u) \in \LDV,
\end{equation}
which are satisfied by our solution (in particular
the latter \eqref{co102} follows by a comparison of
terms in \eqref{CH2th}, where now $\epsi=0$).
Actually, if \eqref{co102} holds, then both sides
of \eqref{co101} make sense. However, devising
an approximation argument suitable for proving
\eqref{co101} could be a rather delicate problem.
\eddos
\section{Introduction}\medskip
A hallmark of the animal brain is the capability to form decisions from sensory inputs that guide meaningful behavioral responses. Understanding the relationship between behavioral responses and how they are encoded in the brain is a major goal in neuroscience.
To this end, behavioral training of nonhuman primates has been studied in a variety of decision tasks, such as perceptual discrimination \citep{shadlen2001neural}.
These electrophysiological experiments have revealed that neural signals at the single-neuron level correlate with specific aspects of decision computation. However, in the mammalian brain, a decision is made not by a single neuron but by the collective dynamics of neural circuits. Unfortunately, animal-based experiments do not allow us to access all of the relevant neural circuits in the brain. To address this problem, neural circuit modeling with recurrent neural networks has been used to uncover circuit mechanisms underlying complex behaviors \citep{mante2013context}.
The contributions of the prefrontal cortex-basal ganglia circuit to complex behaviors are still not completely understood. A wide array of evidence~\citep{o2004dissociable,sohal2009parvalbumin} shows that the prefrontal cortex-basal ganglia circuit appears to implement an RL algorithm driven by a reward prediction error (RPE). This RPE signal, conveyed by dopamine, is thought to gate Hebbian synaptic plasticity in the striatum \citep{montague1996framework}. Over the last decade, many explicit RL models have been proposed to understand the functions of dopamine and prefrontal cortex-basal ganglia circuits~\citep{cohen2009neurocomputational,maia2009reinforcement}. Recent functional magnetic resonance imaging (fMRI) studies in humans revealed that activation in the hippocampus, a central structure for storing episodic memory \citep{paller2002observing}, is modulated by reward, demonstrating a link between episodic memory and RL \citep{wittmann2005reward,krebs2009novelty}. However, existing RL models do not take into account the effect of episodic memory, which is necessary for exploring decision-making through circuit modeling.
In this paper, we construct an Actor-Critic framework (\textcolor{blue}{Fig.~\ref{fig1}}, \textit{right}) based on RL theories of prefrontal cortex-basal ganglia systems (\textcolor{blue}{Fig.~\ref{fig1}}, \textit{left}) and on RL algorithms for artificial systems. The Actor-Critic framework is modeled with recurrent neural networks, a natural class of models for studying mechanisms in systems neuroscience because they are both dynamical and computational \citep{mante2013context}.
This framework was trained on two classical decision tasks, \textit{i.e.}, the random dots motion (RDM) direction discrimination task \citep{roitman2002response} and the value-based economic choice task \citep{padoa2006neurons}. In the RDM task, a monkey is asked to judge the net direction (left or right) of a flow of moving dots (\textcolor{blue}{Fig.~\ref{rdm_task}}a). We show that the agent reproduces the qualitative results; that is, behavioral data generated by our framework can be fitted with (i) a psychometric function, a tool for analyzing the relationship between accuracy and stimulus strength (\textcolor{blue}{Fig.~\ref{rdm_task}}b, top), and (ii) a chronometric function, a tool for analyzing the relationship between response time and stimulus strength (\textcolor{blue}{Fig.~\ref{rdm_task}}b, bottom). In the value-based economic choice task, a monkey is asked to choose between two types of juice offered in different amounts (\textcolor{blue}{Fig.~\ref{fig3}}).
The activity of units in the critic network shows response types similar to those observed in the orbitofrontal cortex of monkeys (\textcolor{blue}{Fig.~\ref{fig4}}). These results confirm that our framework can serve as a platform for studying diverse cognitive computations and mechanisms.
Moreover, anatomical and electrophysiological studies in animals, including humans, suggest that episodic memory in the hippocampus is critical for adaptive behavior. In particular, recent research suggests that the hippocampus supports deliberation during the value-based economic choice task \citep{bakkour2019hippocampus}; our computational framework also supports this experimental conclusion (\textcolor{blue}{Fig.~\ref{fig5}}). Yet how the brain selects experiences, from many possible options, to govern decisions remains an open question. To address this gap, we investigated which episodic memories should be accessed to govern future decisions by conducting experiments on the validated Actor-Critic framework in Section~\ref{investigate}. The results show that salient events sampled from episodic memory shorten deliberation time more effectively than common events in the decision-making process, suggesting that salient events stored in the hippocampus could be prioritized to propagate reward information and guide decisions.
\section{Background}\medskip
In the present work, we first trained our RNN-based Actor-Critic model on two classical decision tasks and then conducted experiments on the optimized model to explore how episodic memory governs decision-making. The framework we designed is based on the four assumptions listed below:
1. \textbf{Actor-critic architecture for RL in biological systems.} This assumption states that the cortex-basal ganglia circuit (PFC-BG) can be modeled as an actor-critic architecture \citep{dayan2002reward,o2004dissociable,haber2014place}. In this process, the midbrain dopamine neurons play a central role by coding a reinforcement prediction error. The actor-critic view of action selection in the brain suggests that the dorsal striatum in the PFC-BG circuit is responsible for learning stimulus-response associations and can be thought of as the `actor' in the actor-critic architecture. The ventral striatum in the basal ganglia, together with the cortex, mainly learns state values, which is akin to the `critic'~\citep{maia2009reinforcement,maia2010two}.
2. \textbf{Recurrent neural networks reproduce neural population dynamics.} This assumption states that we can conceptualize the PFC-BG system using recurrent neural networks (RNNs), for both the actor and the critic. RNNs are a class of artificial neural networks (ANNs) with feedback connections, which have been successfully applied in both artificial intelligence~\citep{ijcai2018-98,liu2019GPN,10.1145/3390891} and computational neuroscience. There are essential similarities between RNNs and biological neural circuits: first, RNN units are nonlinear and numerous; second, the units have feedback connections, which allows them to generate temporally dynamic behavior within the circuit; third, individual units are simple, so they need to work together in a parallel and distributed manner to implement complex computations. Both the dynamical and the computational features of RNNs make them an ideal model for studying the mechanisms of systems neuroscience \citep{rajan2016recurrent,sussillo2014neural,mante2013context}. Since the basal ganglia can perform dynamic gating via reinforcement learning mechanisms (\textcolor{blue}{Fig.~\ref{fig1}}, \textit{left}), here we consider more sophisticated units, \textit{i.e.}, gated recurrent units (GRUs), to implement this gating mechanism.
3. \textbf{Episodic memory contributes to decision-making process.} This assumption states that episodic memory, depending crucially on the
hippocampus and surrounding medial temporal lobe
(MTL) cortices, can be used as a complementary system for reinforcement learning to influence decisions. First, in addition to its role in remembering the past, the MTL also supports the ability to imagine specific episodes in the future \citep{hassabis2007patients}, with direct implications for decision making \citep{peters2010episodic}. Second, episodic memories are constructed in a way that allows relevant elements of a past event to guide future decisions \citep{shohamy2008integrating}.
4. \textbf{There are two different forms of learning in biological systems: slow learning and fast learning.} Much evidence suggests that cortex-basal ganglia circuits implement reinforcement learning \citep{frank2004carrot}. Hence, the synaptic weights of dopamine targets (the striatum in the BG) in the circuit, including the PFC network, can be modulated by a model-free RL procedure. This method of incremental parameter adjustment makes it a slow form of learning. On the other hand, as mentioned above, episodic memories stored in the hippocampus impact reward-based learning, suggesting that the hippocampus can serve as a supplementary system for reinforcement learning. Accordingly, episodic memories in a replay buffer (a function similar to the hippocampus) can be used to estimate the value of actions and states to guide reward-based decision-making \citep{wimmer2014episodic}, which is a fast form of learning.
These assumptions are all based on existing research. For demonstration, we abstract the neural basis of RL in biological systems (\textcolor{blue}{Fig.~\ref{fig1}}, \textit{left}) into a simple computational model (\textcolor{blue}{Fig.~\ref{fig1}}, \textit{right}): an actor-critic architecture equipped with episodic memory, in which the actor network leverages noisy and incomplete perceptual information about the environment to make a choice, while the critic network estimates the value of the selected option. We exploit recent advances in deep RL, specifically the application of policy gradient algorithms to RNNs \citep{bakker2002reinforcement}, to train our model to perform decision-making tasks.
\section{Methods}\medskip
\subsection{Computational Model}\smallskip
\textbf{RNN unit.} The Actor architecture used in our framework, a particular form of RNN, is depicted in \textcolor{blue}{Fig.~\ref{fig1}}c.
RNNs were introduced into systems neuroscience to describe the average firing rate of neural populations within a biological context \citep{wilson1972excitatory}. A general definition of an RNN unit is given by \cite{sussillo2014neural}:
\begin{align}
\mathrm {\tau} \frac{\mathrm d \bm{\mathrm x}}{\mathrm d t}=-\bm{\mathrm x}+{\bm{\mathrm W}}_{rec}\bm{\mathrm r}+{\bm{\mathrm W}}_{in}{\bm{\mathrm u}}+\bm {\mathrm b},
\label{eq:general-rnn}
\end{align}%
where $\bm{\mathrm x}$ is a vector whose $i$th component ${x}_i$ can be viewed as the sum of the filtered synaptic currents at the soma of a biological neuron. The variable ${r}_i$ denotes the instantaneous, positive `firing rate', obtained through the threshold-linear activation function $[x]^{+}=\max(0,x)$; the vector $\bm{\mathrm u}$ represents the external inputs provided to the network; $b_i$ is the bias each unit in the network receives; and the time constant ${\mathrm \tau}$ sets the timescale of the network. In our model, we use gated recurrent units (GRUs), a variant of the RNN architecture introduced by \cite{Chung2014Empirical}. GRUs use gating mechanisms to control and manage the flow of information between cells in the neural network. There are two main reasons for using GRUs: (1) since the basal ganglia in the brain can perform dynamic gating via RL mechanisms, this gating mechanism can be implemented using GRUs; (2) a parallel neural system allows biological agents to solve learning problems on different timescales, and learning with multiple timescales has been shown to improve performance and speed up the learning process in theoretical and modeling studies \citep{o2006making, neil2016phased}. This multiplicity of timescales is also an important feature of GRUs, as indicated by \cite{Chung2014Empirical}, in which each unit learns to adaptively capture dependencies over different time scales. In this work, we slightly modify the GRUs in accordance with Equation~(\ref{eq:general-rnn}). A continuous-time form of the modified GRUs is described as follows.
\begin{equation}
\begin{split}
\bm{\mathrm \alpha} &=\sigma(\bm{\mathrm W}^\alpha_{rec}{\bm{\mathrm r}}+\bm{\mathrm W}^\alpha_{in}{\bm{\mathrm u}}+\bm{\mathrm b}^\alpha),\\
\bm{\mathrm \beta} &=\sigma(\bm{\mathrm W}^\beta_{rec}{\bm{\mathrm r}}+\bm{\mathrm W}^\beta_{in}\bm{\mathrm u}+\bm{\mathrm b}^\beta),\\
{\mathrm \tau}{\frac{\mathrm d {\bm{\mathrm x}}}{\mathrm d {\bm{t}}}} &=-{\bm{ \alpha}} \circ {\bm{\mathrm x}}+{\bm{{ \alpha}}} \circ (\bm{\mathrm W}_{rec}(\bm{\beta} \circ {\bm{\mathrm r}})+\bm{\mathrm W}_{in}{\bm{\mathrm u}}+{\bm{\mathrm b}}+\sqrt{2\mathrm \tau { k_{rec}^2}}{\bm{\mathrm \xi}}),\\
{\bm{\mathrm r}}&=[{\bm{\mathrm x}}]^{+}
\end{split}
\label{equ:2}
\end{equation}
where $\circ$ denotes the Hadamard product and $\sigma(x)=\frac{1}{1+e^{-x}}$ is the sigmoid function. The vector $\bm{\mathrm \xi}$ consists of independent Gaussian white noise processes scaled by $k_{rec}$, representing noise intrinsic to the RNN. The matrices $\bm{\mathrm W}^\alpha_{rec}$, $\bm{\mathrm W}^\beta_{rec}$, and $\bm{\mathrm W}_{rec}$ are $N\times N$ weight matrices of recurrent connections, while $\bm{\mathrm W}^\alpha_{in}$, $\bm{\mathrm W}^\beta_{in}$, and $\bm{\mathrm W}_{in}$ are $N\times N_{in}$ weight matrices of connections from input units to recurrent units. The vectors ${\bm{\mathrm b}}^\alpha$, ${\bm{\mathrm b}}^\beta$, and ${\bm{\mathrm b}}$ are biases.
The threshold-linear activation function $[x]^{+}$ guarantees that Equation~(\ref{equ:2}) is a nonlinear dynamical system. These leaky threshold-linear units in the GRUs are modulated by the time constant $\mathrm \tau$, with an update gate $\bm{\alpha}$ and a reset gate $\bm{\beta}$. Based on the GRU dynamics defined above, the following section provides a detailed description of the Actor-Critic model.
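To make these dynamics concrete, the following minimal NumPy sketch performs one forward-Euler step of Equation~(\ref{equ:2}). It is an illustration rather than our training code: the integration step \texttt{dt} and the dictionary layout of the weights are assumptions made for the sketch, while the defaults for $\mathrm\tau$ and $k_{rec}$ follow Table~\ref{tbl1}.
\begin{verbatim}
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, u, p, tau=50.0, dt=10.0, k_rec=0.1, rng=np.random):
    """One forward-Euler step of the modified GRU dynamics.

    x : (N,) hidden state; u : (N_in,) external input; p : dict holding
    the gate and state weights/biases (layout assumed for this sketch).
    """
    r = np.maximum(x, 0.0)                       # firing rates r = [x]^+
    alpha = sigmoid(p["W_rec_a"] @ r + p["W_in_a"] @ u + p["b_a"])  # update gate
    beta = sigmoid(p["W_rec_b"] @ r + p["W_in_b"] @ u + p["b_b"])   # reset gate
    noise = np.sqrt(2.0 * tau * k_rec**2) * rng.standard_normal(x.shape)
    dxdt = -alpha * x + alpha * (p["W_rec"] @ (beta * r)
                                 + p["W_in"] @ u + p["b"] + noise)
    return x + (dt / tau) * dxdt                 # tau dx/dt = RHS, Euler step
\end{verbatim}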
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{figs/fig1.pdf}
\caption{Actor-Critic framework equipped with episodic memory. \textbf{(a)} Anatomy of a model of reinforcement learning.
The model focuses on the \textbf{PFC} (robust active maintenance of task-relevant information), \textbf{BG} (dynamic gating of PFC active maintenance), \textbf{DA} (encoding a reward prediction error), and \textbf{Hippocampus} (storing episodic memory). Sensory inputs are processed by the PFC-BG circuit, and corresponding motor signals are sent out by the thalamus (not shown here). Working memory representations in the PFC are updated via dynamic gating by the BG. These gating functions are learned by the BG based on modulatory input from dopaminergic neurons (purple dotted line), \textit{i.e.}, dopamine drives reinforcement learning (slow RL) in BG regions. Moreover, dopamine modulates episodic memories in the hippocampus, supporting adaptive behaviors (fast RL). The synaptic weights in the PFC-BG network are adjusted by an RL procedure in which DA conveys an RPE signal. \textbf{(b)} The computational model of reinforcement learning. The PFC-BG circuits in the brain are mapped to the Actor-Critic framework (green box). At each time step, the actor receives an observation from the environment (corresponding to sensory input) and selects an action (corresponding to motor output) based on past experience (working memory stored in the RNN) and the current sensory input. A reward is given following the chosen action, and the environment moves to the next state. The critic evaluates the action by computing the state-value function. The TD RPE (purple), estimated through a temporal-difference algorithm and conveyed by DA, adjusts the weights of the actor and critic networks. Replay buffers (yellow) are used to store and replay episodic memories, similar to the function of the hippocampus. \textbf{(c)} A more detailed schematic of the actor network implementation used in our model: $\mathrm u$ represents sensory input, $\mathrm a$ represents action, and $t$ is the time step. Input units in the Actor model encode the current observation and connect all-to-all with the GRU units. The GRU layer is composed of a fully connected set of GRU units ($N$ units shown by orange circles), which connect all-to-all with the softmax layer encoding the probability of selecting each action. The critic network shown in \textbf{(d)} has the same GRU layer as the actor network and also receives observations as input from the environment. The output of the critic network is a linear unit encoding the estimated state value ${\mathrm V}$, which is combined with the reward ${\eta}$ to calculate the TD error.}
\label{fig1}
\end{figure*}
\textbf{Actor-Critic model}. Based on the model constructed by \cite{Amir2019Models}, our Actor model is composed of three layers: an input layer, an RNN (GRU) layer, and an output softmax layer. The RNN layer in our model consists of $N=256$ GRU units, and the output layer contains three nodes, since there are $N_a=3$ actions in both the RDM task and the value-based choice task (\textcolor{blue}{Fig.~\ref{fig1}}c). At each time step $t$, the input to the Actor model is the current observation provided by the environment, and the outputs are the probabilities of choosing each action under the agent's policy. Here, the policy $\mathrm \pi (a_t|u_t;\theta)$ (parameterized by $\theta$) is implemented as a linear readout followed by softmax normalization, determined by the activity $\mathrm r^{\pi}$ of the GRUs in the actor network:
\begin{align}
\bm {\mathrm z}_t &=\bm {\mathrm W}_{out}^{\pi}\bm {\mathrm r}_t^\pi+\bm {\mathrm b}_{out}^{\pi},\\
\mathrm \pi(a_t=j|u_t;\theta) &=\frac{e^{(z_t)_j}}{\sum_{l=1}^{N_a}e^{(z_t)_l}},\quad (j=1,...,N_a)
\label{equ:4}
\end{align}
where $\bm {\mathrm W}_{out}^{\pi} \in \mathbb{R}^{N_a\times N}$ is the matrix of connection weights from the GRU layer to the softmax layer, $\bm{\mathrm z}_{t}$ contains the $N_a$ linear readouts, and $\bm{\mathrm b}_{out}^{\pi}$ contains the $N_a$ biases. Action selection is carried out by random sampling from the probability distribution in Equation~(\ref{equ:4}). This sampling can be considered an abstract representation of action selection in the downstream circuitry of the basal ganglia, the process of selecting `what to do next' in dynamic and unpredictable environments in real time.
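As an illustration of this readout-and-sampling step, consider the following NumPy sketch; the function name and argument shapes are assumptions for the sketch, not part of the model specification.
\begin{verbatim}
def select_action(r_pi, W_out, b_out, rng=np.random):
    """Linear readout + softmax + sampling, cf. the policy above.

    r_pi : (N,) actor GRU firing rates; W_out : (N_a, N); b_out : (N_a,).
    """
    z = W_out @ r_pi + b_out              # N_a linear readouts
    z = z - z.max()                       # shift for numerical stability
    pi = np.exp(z) / np.exp(z).sum()      # softmax action probabilities
    action = rng.choice(len(pi), p=pi)    # sample 'what to do next'
    return action, pi
\end{verbatim}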
The Critic model contains an input layer and a GRU layer (\textcolor{blue}{Fig.~\ref{fig1}}d). In particular, the inputs to the Critic model include not only the observation provided by the environment but also the activity of the GRUs in the actor network. The output is the state value function $\mathrm V$ (parameterized by $\theta_\mathrm v$), which estimates the expected return from the sensory input $\bm {\mathrm u}$ and tells the actor how good its action was. The state value is predicted from the activity of the GRUs in the Critic network through a linear readout.
\begin{align}
\mathrm V(u_t;\theta_\mathrm v) &=\bm {\mathrm W}_{out}^{\mathrm v}\bm {\mathrm r}_t^\mathrm v+ {\mathrm b}_{out}^{\mathrm v},
\label{equ:5}
\end{align}
where $\bm {\mathrm W}_{\mathrm out}^{\mathrm v} \in \mathbb{R}^{1\times N}$ is the matrix of connection weights from the GRU layer to the single linear readout unit, and ${\mathrm b}_{out}^\mathrm v$ is the bias.
The Actor network and the Critic network have the same GRU structure. The GRU layer consists of a set of interconnected GRU units (the memory part of the GRU), represented by $x_t^i$ in \textcolor{blue}{Fig.~\ref{fig1}}c for the $i$th GRU unit at time $t$. The value of each unit is updated based on the current input and the previous values of all GRU units $(x_{t-1}^i,\ i=1,2,\dots,N)$.
In this way, the GRU layer keeps track of the history of past rewards and actions. In the Actor model, each GRU unit passes its updated value $x_t^i$ to the softmax layer through a set of all-to-all connections.
These connections determine the impact of each unit's output on the prediction of the next action. In the Critic model, each GRU unit transmits its output to a single unit (the output layer of the Critic model), where a scalar value evaluating the action is calculated. As a whole, the architecture learns to perform decision-making tasks by learning the optimal policy with the Actor model and evaluating actions with the Critic model.
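Putting the pieces together, one time step of the forward pass could look as follows; this sketch reuses \texttt{gru\_step} and \texttt{select\_action} from above, and the \texttt{params} layout is again an assumption made for illustration.
\begin{verbatim}
def agent_step(x_pi, x_v, u, params, rng=np.random):
    """One time step of the Actor-Critic forward pass (sketch).

    The critic GRU receives the observation concatenated with the
    actor firing rates, as described in the text.
    """
    x_pi = gru_step(x_pi, u, params["actor"])            # actor GRU state
    r_pi = np.maximum(x_pi, 0.0)
    action, pi = select_action(r_pi, params["W_pi"], params["b_pi"], rng)
    critic_in = np.concatenate([u, r_pi])                # obs + actor rates
    x_v = gru_step(x_v, critic_in, params["critic"])     # critic GRU state
    r_v = np.maximum(x_v, 0.0)
    V = (params["W_v"] @ r_v + params["b_v"]).item()     # scalar state value
    return action, pi, V, x_pi, x_v
\end{verbatim}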
\subsection{Behavior tasks }\smallskip
\label{sec:task}
\textbf{RDM direction discrimination task.} In the RDM discrimination task (`reaction-time' version), a monkey chooses between two visual targets; a general description is shown in~\textcolor{blue}{Fig.~\ref{rdm_task}}a. First, the monkey is required to fixate a central point until the random-dot motion appears on the screen. Then, the monkey indicates its decision about the direction of the dots by making a saccadic eye movement to the target of choice. In the standard RL setting, an agent learns by interacting with its surrounding environment and receiving rewards for performing actions. Accordingly, in the RDM task, the actual direction of the moving dots can be considered a state of the environment. This state is partially observable, since the monkey does not know the precise direction of the coherent motion; therefore, the monkey needs to integrate the noisy sensory stimuli to figure out the direction. The monkey is given a positive reward, such as fruit juice, for choosing the correct target after the fixation cue turns off, while a negative reward is given, in the form of timeouts, when either the fixation is broken too early or no choice is made during the stimulus period. During the simulation, an incorrect response is rewarded with zero reward. Given this reward schedule, the policy can be modeled and optimized using the policy gradient method.
\begin{figure*}
\centering
\includegraphics[width=1\textwidth]{figs/fig2.pdf}
\caption{\textbf{(a)} RDM direction discrimination task (`reaction-time' version). Monkeys are trained to discriminate the direction of motion in a random-dot stimulus that contains coherent horizontal motion. After fixation (screen 1), the two choice targets appear in the periphery (screen 2). After a variable delay period (randomly drawn from an exponential distribution with mean $700$ ms), dynamic random dots appear in a $5^{\circ}$ diameter aperture (screen 3). The monkey is allowed to make a saccadic eye movement to a choice target at any time after the onset of random-dot motion to indicate the direction of perceived motion (screen 4). Reaction time (RT) is defined as the elapsed time from motion onset to the initiation of the saccade, which is controlled by the monkey and can be measured. \textit{(Bottom)} Examples of random-dot motion stimuli of variable motion coherence. Stimulus strength is varied by changing the proportion of dots moving coherently in a single direction, which determines the difficulty of the task. The lower (higher) the coherence level, the more difficult (easier) the task. Coherently moving dots are the `signal', and randomly moving dots are the `noise'. \textbf{(b)} Behavioral comparison of the animal and the agent. During training on the RDM task, the behavior of the agent is reflected in psychometric functions \textit{(top)} and chronometric functions \textit{(bottom)}. \textit{Left}: animal behavioral data from one experiment (reproduced from \cite{roitman2002response}). \textit{Right}: our agent's behavioral data. \textit{Top}: psychometric functions from the reaction-time version of the RDM task. The probability of a correct direction judgment is plotted as a function of motion strength and fitted by sigmoid functions. \textit{Bottom}: effect of motion strength on reaction time (average reaction time of correct trials). The relationship between the log-scaled motion strength and the reaction time fits a linear function.}
\label{rdm_task}
\end{figure*}
\textbf{Value-based economic choice task.} In the economic choice task experiment, reported by \cite{padoa2006neurons}, the monkey chooses between two types of juice (labeled A and B, with A being preferred) offered in different amounts (\textcolor{blue}{Fig.~\ref{fig3}}). Each trial began with a fixation period of $1.5\,s$, and then the offer, which indicated the juice type and amount for the left and right choices, was presented for $1$--$2\,s$ before it disappeared. The network was required to indicate its decision during a decision period of $0.75\,s$.
Since one of the choices leads to a higher reward, there is, in this sense, a `correct' answer in each trial.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{figs/pado_task.pdf}
\caption{Value-based economic choice task. At the beginning of each trial, the monkey fixated a center point on the monitor. Then two offers appeared on the two sides of the center fixation. The offers were represented by two sets of squares, with the color linked to the juice type and the number of squares indicating the juice amount, which remained on the monitor for a randomly variable delay. The monkey continued fixating the center point until it was extinguished (`go' signal), at which point the monkey indicated its choice by making a saccade towards one of two targets.}
\label{fig3}
\end{figure}
\section{Experiment}\medskip
\label{sec:5}
In this section, we describe in detail how the Actor-Critic model learns a behavioral policy that maximizes the cumulative reward.
The interaction between a monkey and an experimentalist is regarded as the interaction between an agent $\mathcal A$ and an environment $\mathcal E$. At each time step $t$, the agent observes the input $u_t$ from the environment and then selects an action $a_t$ to perform. The probability of selecting action $a_t$ is given by the policy function $\pi$. After the action $a_t$ is performed, the environment provides the agent with a scalar reward $\eta_{t}$ (we use $\eta$ to distinguish it from ${\mathrm r}$, the firing rates of the GRUs). In summary, the actor network attempts to learn a policy $\pi$ by receiving feedback from the critic network, and the critic network learns a value function $\mathrm V$ (the expected return in rewards), which is used to determine how advantageous it is to be in a particular state.
\subsection{Experiment 1: Training our framework to perform RDM task} \smallskip
For the RDM task, the actual direction of the moving dots can be considered a state of the environment. For the monkey, this state is partially observable, so learning this behavioral task with an RL algorithm amounts to solving a partially observable Markov decision process (POMDP). At each time $t$, an observation is drawn from the set of environment states according to a probability distribution ${\mathrm P}(\mathrm u_t |\mathrm s_t)$. The sensory input, \textit{i.e.}, the observation received by the agent, is denoted as a tuple $\bm{\mathrm u}=(\mathrm {c_F}, \mathrm{c_L}, \mathrm{c_R})$, where $\mathrm{c_F}$ is the fixation cue, $\mathrm {c_L}$ is the percentage of dots moving in the left direction, and $\mathrm {c_R}$ is the percentage of dots moving in the right direction. These percentages represent the noisy evidence for the two choices (left and right). At each time, the agent performs one action from the set $\mathrm {A=\{{F, L, R}\}}$: fixate $(a_t=\mathrm F)$, select left $( a_t=\mathrm L)$, or select right $( a_t=\mathrm R)$. A trial ends as soon as the agent makes a decision (selects left or right): the agent is rewarded with $\eta=8$ for a correct decision and with $\eta=0$ for a wrong decision. Aborting the trial, i.e., breaking fixation before the `go' cue, results in a negative reward $\eta=-2$.
If the agent has not made a choice by the maximum time $t_{max}$, the reward is $\eta=0$. Here we use $e^{-t/\tau_{\eta}}$ to discount future rewards \citep{doya2000reinforcement}, where $\tau_{\eta}$ is a time constant; we still denote discounted rewards by $\eta$. Given the reward function $\eta=\eta(\mathrm u_t,\mathrm a_t)$, learning is implemented with the single-threaded Advantage Actor-Critic (A2C) algorithm described by \cite{mnih2016asynchronous}.
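For concreteness, the reward schedule above can be wrapped in a minimal environment sketch. The coherence level, noise scale, and fixation length below are illustrative assumptions (only $t_{max}$ follows Table~\ref{tbl1}), and the observation model is a deliberately crude stand-in for the random-dot stimulus.
\begin{verbatim}
F, L, R = 0, 1, 2   # actions: fixate, choose left, choose right

class RDMEnv:
    """Minimal sketch of the reaction-time RDM task with the reward
    schedule from the text (+8 correct, 0 wrong, -2 fixation break)."""

    def __init__(self, coherence=0.128, sigma=0.2, t_fix=30, t_max=275,
                 rng=np.random):
        self.coherence, self.sigma = coherence, sigma
        self.t_fix, self.t_max, self.rng = t_fix, t_max, rng

    def reset(self):
        self.t = 0
        self.direction = self.rng.choice([L, R])   # hidden true direction
        return self._obs()

    def _obs(self):
        if self.t < self.t_fix:                    # pre-stimulus: no dots yet
            return np.array([1.0, 0.0, 0.0])
        s = self.coherence if self.direction == L else -self.coherence
        c_l = 0.5 + 0.5 * (s + self.sigma * self.rng.standard_normal())
        return np.array([1.0, c_l, 1.0 - c_l])     # (c_F, c_L, c_R)

    def step(self, action):
        """Return (observation, reward, done)."""
        self.t += 1
        if action == F:                            # keep fixating
            return self._obs(), 0.0, self.t >= self.t_max
        if self.t <= self.t_fix:                   # broke fixation early
            return self._obs(), -2.0, True
        reward = 8.0 if action == self.direction else 0.0
        return self._obs(), reward, True
\end{verbatim}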
The goal of the agent is to learn a policy that maximizes the expected future reward, starting from $t=0$ until the terminal time $T$ ($\leq t_{max}$):
\begin{align}
J(\theta) &=\mathbb{E}[\sum_{t=0}^{T-1} \eta_{t+1}],
\label{equ:6}
\end{align}
For the policy network, \textit{i.e.}, the actor network, the loss function $\mathcal{L}^\pi (\theta)$ is defined as follows.
\begin{align}
\mathcal{L}^\pi (\theta) &=-J(\theta)-\beta_e H^{\pi}(\theta),
\label{equ:7}
\end{align}
We introduce the entropy term $H^{\pi}(\theta)$ into the policy loss to encourage exploration, preventing the agent from becoming too decisive and converging to local optima; the minus sign ensures that minimizing the loss increases the entropy. The hyperparameter $\beta_e$ controls the relative contribution of the entropy regularization term. The key gradient $\nabla_\theta J(\theta)$ is given for each trial by the A2C algorithm.
\begin{align}
\nabla_\theta J(\theta) &=\sum_{t=0}^{T}\nabla_\theta \log \pi(\mathrm a_t|\mathrm u_t;\theta)A(u_t, r^{\pi}_t),\label{equ:8}\\
A(\mathrm u_t, \mathrm r^{\pi}_t) &=\eta_t+\gamma \mathrm V(\mathrm u_{t+1},\mathrm r^{\pi}_{t+1};\theta_ {\mathrm v})-\mathrm V(\mathrm u_{t},\mathrm r^{\pi}_t;\theta_ {\mathrm v}),
\label{equ:9}
\end{align}
\noindent
where the parameters $\theta$ and $\theta_\mathrm v$ consist of the connection weights and biases of the actor and critic networks, respectively, \textit{i.e.}, $\theta=\{\bm {\mathrm W}_{in}^\pi,\bm {\mathrm W}_{rec}^\pi,\bm {\mathrm W}_{out}^\pi,$ $\bm {\mathrm b}_{in}^\pi,\bm {\mathrm b}_{rec}^\pi,$ $\bm {\mathrm b}_{out}^\pi\}$, $\theta_\mathrm v=\{\bm {\mathrm W}_{in}^{\mathrm v}, \bm {\mathrm W}_{rec}^{\mathrm v},\bm {\mathrm W}_{out}^{\mathrm v},\bm {\mathrm b}_{in}^{\mathrm v},\bm {\mathrm b}_{rec}^{\mathrm v},\bm {\mathrm b}_{out}^{\mathrm v}\}$. The actor learns a policy $\pi$ (the rule that the agent follows) by receiving feedback from the critic. The critic learns a state value function $\mathrm V(\mathrm u_t,\mathrm r^{\pi}_t;\theta_ {\mathrm v})$ (the expected return in rewards), which is used to determine how advantageous it is to be in a particular state by estimating the advantage function $A(\mathrm u_t,\mathrm r^{\pi}_t)$, \textit{i.e.}, the TD error. The parameter $\gamma$ is the discount factor.
For the value network, the loss function $\mathcal{L}^\mathrm v (\theta)$ is the mean squared error
\begin{align}
\mathcal{L}^\mathrm v (\theta) &= \sum_{t=0}^{T}[\eta_t+\gamma \mathrm V(\mathrm u_{t+1},\mathrm r^{\pi}_{t+1})-\mathrm V(\mathrm u_{t},\mathrm r^{\pi}_t)]^2,
\label{equ:10}
\end{align}
Combining the two loss functions, we obtain the overall loss for the model
\begin{align}
\mathcal{L} (\theta) &= \mathcal{L}^\pi (\theta)+\beta_\mathrm v \mathcal{L}^\mathrm v (\theta),
\label{equ:11}
\end{align}
Here, the hyperparameter $\beta_\mathrm v$ controls the relative contribution of the value-estimate loss.
After every trial, the policy network and the value network use the Adam stochastic gradient descent (SGD) optimizer to find the parameters $\theta$ that minimize the objective function $\mathcal{L} (\theta)$.
\begin{equation}
\begin{split}
&\nabla_\theta \mathcal{L}(\theta) = \nabla_\theta \mathcal{L}^\pi(\theta)+\beta_\mathrm v \nabla_\theta \mathcal{L}^\mathrm v(\theta)\\
&=-\sum_{t=0}^{T}\nabla_\theta \log \pi_\theta(\mathrm a_t|\mathrm u_t)A(\mathrm u_t, \mathrm r^\pi_t)-\beta_\mathrm e \nabla_\theta H^\pi (\theta)+\beta_\mathrm v \nabla_\theta \mathcal{L}^\mathrm v (\theta),
\label{equ:12}
\end{split}
\end{equation}
The gradients $\nabla_\theta \log \pi_\theta(\mathrm a_t|\mathrm u_t)$, $\nabla_\theta H^\pi (\theta)$, and $\nabla_\theta \mathcal{L}^\mathrm v (\theta)$ are computed using backpropagation through time (BPTT). Through this training, the actor network learns to compress past experience into its hidden state in the form of working memory (WM). This working memory is thought to be facilitated by the PFC and instructs the actor system to select rewarding actions. Meanwhile, the critic system learns a value function to train the actor network, which in turn furnishes a dynamic gating mechanism that controls the updating of the working memory.
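The per-trial loss in Equation~(\ref{equ:11}) can be sketched as follows. This is schematic bookkeeping only: in practice the gradients of Equation~(\ref{equ:12}) are obtained by automatic differentiation with BPTT, and the advantage is treated as a constant (detached) inside the policy term. The defaults follow Table~\ref{tbl1}; the convention that \texttt{values} carries one extra bootstrap entry for the final state is an assumption of the sketch.
\begin{verbatim}
def a2c_loss(log_probs, values, rewards, entropies,
             gamma=0.99, beta_v=0.5, beta_e=0.5):
    """Scalar A2C loss for one trial.

    log_probs[t]: log pi(a_t|u_t); rewards[t]: eta_t; entropies[t]:
    policy entropy; values has one extra bootstrap entry V(u_T)
    (zero for a terminal state).
    """
    loss_pi, loss_v = 0.0, 0.0
    for t in range(len(rewards)):
        advantage = rewards[t] + gamma * values[t + 1] - values[t]  # TD error
        loss_pi += -log_probs[t] * advantage - beta_e * entropies[t]
        loss_v += advantage ** 2          # squared TD error (value loss)
    return loss_pi + beta_v * loss_v      # combined loss
\end{verbatim}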
\subsection{Experiment 2: Training our framework to perform value-based economic task}\smallskip
We also trained the Actor-Critic model to perform the value-based economic choice task described in Section~\ref{sec:task}, with a training procedure similar to the one described above for the RDM task. In this task, there is no strictly correct or wrong choice for the monkey; however, there is a choice that allows the monkey to receive the highest reward, which can thus be considered the `correct' choice. Unlike in the RDM task, the information about whether an answer is correct is not contained in the inputs, but rather in the correlation between the inputs and the rewards.
\subsection{Test behavioral characteristics of our framework}\smallskip
\label{sec:test}
Next, we investigated whether the Actor-Critic framework captures the behavioral characteristics of animals in cognitive experiments. In the previous section, we trained the Actor-Critic framework to perform the RDM and value-based economic choice tasks. Here, we compare the behavioral characteristics exhibited by the trained model with those observed in animal experiments.\smallskip
\textbf{RDM task}. The results are consistent with the behavioral findings from the animal experiments, which are mainly reflected in the psychometric and chronometric functions, as shown in \textcolor{blue}{Fig.~\ref{rdm_task}}b.
The performance accuracy in the RDM task depends on the strength of the sensory input, and the psychometric function is a good tool for analyzing this relationship. The percentage of correct direction judgments is plotted as a function of the motion strength (measured by the proportion of coherently moving dots). \textcolor{blue}{Fig.~\ref{rdm_task}}b \textit{(top)} shows high accuracy for strong motion, with accuracy approaching chance level as the motion weakens, which suggests that the agent in our Actor-Critic framework captures this important behavioral feature. Moreover, the theory of chronometric functions puts a constraint on the relationship between response time and accuracy. A difficult task (weaker stimulus strength) requires the agent to take more time to make a decision (\textcolor{blue}{Fig.~\ref{rdm_task}}b \textit{(bottom)}), which means that the additional viewing time in difficult trials is devoted to integrating the sensory information. As a result, an appropriate trade-off between speed and accuracy is learned by this Actor-Critic framework. It is worth emphasizing that, unlike typical machine learning goals, our objective is not to achieve `perfect' performance, but rather to train the agents to match the smooth psychometric and chronometric characteristics observed in the behavior of the monkeys.\smallskip
\textbf{Value-based economic choice task}. The activity of the units in the critic network exhibits response types similar to those observed in the orbitofrontal cortex of monkeys \citep{padoa2006neurons}. First, roughly $20\%$, $60\%$, and $20\%$ of the active units are selective to the chosen value, the offered value, and the choice alone, respectively, as defined in the animal experiment. Second, there is a trade-off between the juice type and its quantity (upper panel of \textcolor{blue}{Fig.~\ref{fig4}}). Third, the patterns of neural activity are consistent with the findings from the animal experiment, with three main response patterns: (i) a similar U-shaped response pattern (\textcolor{blue}{Fig.~\ref{fig4}}a-c, deep blue circles); (ii) a response pattern associated with the `offer value' variable (\textcolor{blue}{Fig.~\ref{fig4}}d-e, purple circles); and (iii) a response pattern related to the juice `taste' variable (\textcolor{blue}{Fig.~\ref{fig4}}f). For this task, the network architecture was not changed; we only changed the initial values of the critic network's input weights.
\begin{figure*}
\centering
\includegraphics[width=1\textwidth]{figs/fig4.pdf}
\caption{The units in our model exhibit diverse selectivity for the task variables, as observed in the orbitofrontal cortex. The top panel shows the percentage of trials in which the agent chose `juice' B ($y$ axis) for various offer types ($x$ axis). The relative value is indicated at the top left. For example, in \textbf{a}, the relative value is $3.2$, which indicates that the reward contingencies are indifferent between $1$ `juice' of A and $3.2$ `juice' of B. Different relative values indicate different choice patterns. The bottom panel shows the mean activity ($y$ axis, computed over the $800$ ms before the decision) of example value network units for various offer types ($x$ axis) under different choice patterns: $1$A = $3.2$B (\textbf{a}, deep blue), $1$A = $2.5$B (\textbf{b}, deep blue), $1$A = $3.3$B (\textbf{c}, deep blue), $1$A = $2.5$B (\textbf{d}, purple), $1$A = $2.2$B (\textbf{e}, purple), and $1$A = $4.1$B (\textbf{f}, green and blue). For each case, the grey circles show the mean activity of value network units during the fixation period. \textbf{a-c}, The units in the value network exhibit selectivity for the `chosen value'. \textbf{d-e}, The units in the value network exhibit selectivity for the `offer value'. \textbf{f}, The trials are separated into choice A (green diamonds) and choice B (blue circles).
}
\label{fig4}
\end{figure*}
\begin{table}[width=.9\linewidth,cols=4,pos=h]
\caption{Parameter for Actor-Critic model training.}\label{tbl1}
\begin{tabular*}{\tblwidth}{@{} LLLL@{} }
\toprule
Parameter & Value & Parameter & Value\\
\midrule
Learning rate & 0.004 & $t_{max}$ & 275 \\
$\tau$ & 50ms & $k_{rec}^2$ & 0.01 \\
$\tau_{\eta}$ & 200ms & $\beta_\mathrm v$ & 0.5 \\
$\gamma$ & 0.99 & $\beta_\mathrm e$ & 0.5 \\
\bottomrule
\end{tabular*}
\end{table}
\section{Analysis}\medskip
In Section~\ref{sec:test}, we showed that our framework reproduces the behavioral characteristics of animals, which suggests that it can serve as a computational platform to study the impact of memory on cognitive function. A number of experimental studies have shown that memory is essential for making decisions, enabling organisms to predict possible future outcomes by drawing on past events. For instance, working memory, a temporary storage in the brain \citep{repovvs2006multi}, has been shown to guide choice by maintaining and manipulating task-relevant information. Episodic memory has also been shown to be involved in the decision-making process. Moreover, a recent study suggests that the hippocampus supports deliberation about value during the value-based economic choice task: the hippocampus contributes to the construction of internal samples of evidence that are related to decision-making \citep{bakkour2019hippocampus}. Based on this idea, in this section we combine our computational platform with the value-based economic choice task to explore the role of episodic memory in the decision-making process.
\subsection{Episodic memory contributes to decision-making}\smallskip
First, we need to verify that the Actor-Critic model equipped with episodic memory performs effectively. Psychologically, episodic memory refers to the capacity to consciously recollect an autobiographical memory of events that occurred at particular times and places. For example, a person can recall an episode from the past, such as a $20^{\rm th}$ birthday party, and remember who was there and where it happened. Computationally, we mainly emphasize the notion of one-time episodes (like one-trial learning in a task). A previous study suggested that episodic memory could be used to store specific rewarding sequences of state-action pairs and later mimic such sequences, a process called episodic control \citep{lengyel2008hippocampal}. In this work, we propose a slightly different computational principle, in which episodic memory is used to optimize the policy rather than to directly extract it.
In our computational model, an episodic memory is generated as follows: on each trial $i$ of the value-based economic choice task, the agent's experiences $e_t=(\mathrm u_t,\mathrm a_t,\eta_t,\mathrm s_{t+1})$ at each time step $t$ are stored as an episodic memory $E_i=(\mathrm u_0,\mathrm a_0,\eta_0,\mathrm s_1,…,\mathrm u_t,$ $\mathrm a_t,\eta_t,\mathrm s_{t+1},…,\mathrm u_{T_{i-1}},\mathrm a_{T_{i-1}},\eta_{T_{i-1}},$ $\mathrm s_{T_{i}})$, where $T_i$ is the length of the $i$th trial. According to the reward received at the end of the $i$th trial, we divide the memories into three types: trials with positive reward (denoted $E_i^{posi}$), trials with negative reward (denoted $E_i^{nega}$), and trials with zero reward (denoted $E_i^{zero}$). The agent then stores these episodic memories in a replay buffer $D=\{\{E_1^{posi},…,E_{N_1}^{posi}\},$ $\{E_1^{nega},…,E_{N_2}^{nega}\},$ $\{E_1^{zero},…,E_{N_3}^{zero}\}\}$, a pool of memories whose function is similar to that of the hippocampus.
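A minimal sketch of such a buffer is given below; the pool keys mirror the three memory types just defined, and uniform sampling within a pool is one simple design choice made for the sketch rather than a claim about the hippocampus.
\begin{verbatim}
import random
from collections import defaultdict

class EpisodicBuffer:
    """Replay buffer grouping whole-trial trajectories by the sign of
    the terminal reward ('posi' / 'nega' / 'zero')."""

    def __init__(self):
        self.pools = defaultdict(list)    # memory type -> trajectories

    def store(self, trajectory, final_reward):
        key = ('posi' if final_reward > 0
               else 'nega' if final_reward < 0 else 'zero')
        self.pools[key].append(trajectory)

    def sample(self, key=None):
        """Draw one trajectory from one pool, or from the whole buffer D."""
        pool = self.pools[key] if key else sum(self.pools.values(), [])
        return random.choice(pool) if pool else None
\end{verbatim}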
How does past experience stored in the replay buffer optimize the behavior policy? At the computational level, a method called importance sampling can be used to estimate the expected return $J(\theta)$ by sampling episodic memories from the replay buffer $D$. The behavior policy used to collect these samples is a known policy (predefined, like a hyperparameter), labeled $\mu(\mathrm a|\mathrm u)$. Suppose we retrieve a single experience $(\mathrm u_0,\mathrm a_0,\eta_0,\mu(.|\mathrm u_0),…,\mathrm u_t,\mathrm a_t,\eta_t,\mu(.|u_t)$ $,…,$ $\mathrm u_{T_{i-1}}$,
$\mathrm a_{T_{i-1}},\eta_{T_{i-1}},\mu(.|\mathrm u_{T_{i-1}}))$, where the actions were sampled according to the behavior policy $\mu(\mathrm a|\mathrm u)$. Given the training observations, the policy gradient can be rewritten as:
\begin{align}
\nabla_\theta J(\theta) &=\sum_{t=0}^{T}\frac{\pi ( \mathrm a_t|\mathrm u_t;\theta)}{\mu(\mathrm a_t|\mathrm u_t)}\nabla_\theta \log \pi(\mathrm a_t|\mathrm u_t;\theta)A(\mathrm u_t, \mathrm r^\pi_t),
\label{equ:13}
\end{align}
\noindent where $\frac{\pi(\mathrm a_t|\mathrm u_t;\theta)}{\mu(\mathrm a_t|\mathrm u_t)}$ is the importance weight, and $\mu$ is non-zero wherever $\pi(\mathrm a_t|\mathrm u_t;\theta)$ is.
We note that when $\frac{\pi (\mathrm a_t|\mathrm u_t;\theta)}{\mu(\mathrm a_t|\mathrm u_t)}=1$, Equation~(\ref{equ:13}) reduces to Equation~(\ref{equ:8}). To use episodic memory to optimize the policy, we define the learning process as follows: for trial $n=1$, the policy network is updated with Equation~(\ref{equ:12}), in which the gradient term $\nabla_\theta J(\theta)$ is given by Equation~(\ref{equ:8}). The agent then stores the full trajectory (an episodic memory) of this trial in the replay buffer. From trial $n=2$ onward, the agent randomly samples a trajectory as past experience to optimize the policy, and the gradient term $\nabla_\theta J(\theta)$ is given by Equation~(\ref{equ:13}). These steps are repeated until training terminates, at which point the agent has learned a policy for performing the value-based economic choice task.
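In an implementation, Equation~(\ref{equ:13}) is usually realized through a surrogate objective; the sketch below shows one per-step term, with the function name chosen purely for illustration. Its gradient with respect to $\theta$ reproduces the corresponding term of Equation~(\ref{equ:13}) when the importance weight and the advantage are treated as constants (detached) by the autodiff framework.
\begin{verbatim}
def is_weighted_surrogate(log_pi_t, pi_t, mu_t, advantage_t):
    """Per-step surrogate term for the importance-weighted gradient.

    mu_t : behavior-policy probability mu(a_t|u_t) stored with the
    episodic memory; pi_t : current-policy probability pi(a_t|u_t).
    mu must be non-zero wherever pi is, or the weight is undefined.
    """
    rho = pi_t / mu_t   # importance weight; rho == 1 recovers on-policy
    return rho * log_pi_t * advantage_t
\end{verbatim}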
\textcolor{blue}{Fig.~\ref{fig5}} \textit{(left)} shows the learning curves of agents with and without episodic memory (orange and blue lines, respectively) on the value-based economic choice task (the average return over $2000$ trial samples). The agent with episodic memory learns significantly faster on this task than the one without episodic memory, although both policies eventually reach the same performance. These results are consistent with recent studies showing that animal decisions can indeed be guided by samples of individual past experience~\citep{murty2016episodic}.
The percentage of correct trials, shown in \textcolor{blue}{Fig.~\ref{fig5}} \textit{(right)}, is calculated as $N_{right}/N_{choice}$, where $N_{choice}$ is the number of trials in which the monkey made a choice (correct or incorrect) out of $20000$ trials, and $N_{right}$ is the number of correct choices. At the beginning of training, the correct percentage of the agent that cannot extract episodic memories from the replay buffer stays around $50\%$ (blue line), and only after substantial training (about $30000$ trials) does this agent reach the baseline accuracy. This suggests that the agent equipped with episodic memory shows better execution efficiency.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{figs/fig5.pdf}
\caption{Learning curves of the agent with episodic memory (orange line) and without episodic memory (blue line) on the economic choice task. \textit{(Left)} Average reward per trial. \textit{(Right)} Percent correct, for trials on which the network made a decision.
}
\label{fig5}
\end{figure}
\subsection{Episodic memory for salient event}\smallskip
\label{investigate}
In the previous section, we verified that episodic memory indeed allows the agent to learn a task faster. Nevertheless, the question of which types of episodic memory samples should be selected to govern decisions remains unanswered in cognitive neuroscience. In this section, we examine this question.
The relationship between events is often clear only in retrospect. For example, when something positive happens, we want to know how to make it happen again. However, when an event occurs before the reward is given, how do we know what caused the reward? This is the `temporal credit assignment problem' mentioned earlier, which can be addressed by saving all the potential determinants of reward, such as behaviorally relevant events, into memory. How, then, does episodic memory balance the need to represent these potential determinants of reward outcomes to deal with credit assignment? One solution may be to enhance episodic memory for notable events, referred to as `salient memories', which are potential reward determinants. In fact, both violations and conformance of expectancy can be considered salient events to be stored in the memory buffer. Since such long-term memories are potentially predictive of reward outcomes, they provide a computationally feasible way to obtain future rewards.
In the value-based economic choice task, salient events include trials in which the correct choice was made (rewarded; expectancy conformance) or the fixation was broken (punished; expectancy violation). In a fixation-breaking trial, the agent's policy cannot be optimized because of insufficient interaction with the environment; as a result, we only use expectancy conformance as salient events. In the third type of trial, the monkey responds before the trial is over, but its choice is wrong; the incorrect response is neither rewarded with juice nor punished. Such a trial can be considered a common event, because it is not salient for the monkey. Accordingly, the episodic memories in the replay buffer $D$ are of three types: the set $D_{posi}=\{E_1^{posi},$ $…,E_{N_c}^{posi}\}$ of salient events, the set $D_{zero}=\{E_1^{zero},…,E_{N_z}^{zero}\}$ of common events, and the remaining events, denoted as the set $D_{nega}=\{E_1^{nega},…,$ $E_{N_e}^{nega}\}$.
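In terms of the buffer sketch given earlier, restricting sampling to a single pool implements the conditions compared below.
\begin{verbatim}
buffer = EpisodicBuffer()
# ... trajectories are stored during training ...
salient = buffer.sample('posi')   # expectancy conformance (salient event)
common = buffer.sample('zero')    # unrewarded, unpunished trial (common)
mixed = buffer.sample()           # uniform draw from the whole buffer D
\end{verbatim}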
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{figs/fig6.pdf}
\caption{Learning curves of an agent on the RDM task for different types of episodic memory, salient memory (green line), common episodic memory (blue line), all type of episodic memory (orange). \textit{(Left)} Average reward per trial. \textit{(Right)} Percent correct.
}
\label{fig6}
\end{figure}
To investigate whether salient events sampled from the memory buffer bias reward-guided choice more effectively than common events, we plot the learning curves of agents trained with different types of episodic memories. Comparing the green (salient events) and blue (common events) curves in \textcolor{blue}{Fig.~\ref{fig6}}, we can see that the agent using salient events achieves better performance than the agent using common events.
As shown in \textcolor{blue}{Fig.~\ref{fig6}} (\textit{left}), when the agent draws an event uniformly at random from the set $D_{posi}$ to optimize the policy, the return received by the agent reaches the baseline level more quickly (green line). However, when the agent extracts common events from the set $D_{zero}$ (blue line), it must go through a long period of learning to obtain higher returns. In this case, the percentage of correct choices is also maintained at around $50\%$ at the beginning of the experiment (\textcolor{blue}{Fig.~\ref{fig6}} (\textit{right})), which indicates that the monkey chooses the direction at random. As training proceeds, the monkey makes more and more correct choices. Notably, this learning curve is similar to that of an agent that does not use memory to optimize its strategy (blue line in \textcolor{blue}{Fig.~\ref{fig5}}).
This suggests that episodic memories of common events do not help the monkeys make choices. Moreover, when experiences are sampled from the whole set $D$, the reward value and final accuracy obtained by the agent are higher than when experiences are sampled from $D_{zero}$, but lower than when they are sampled from $D_{posi}$. Although the learning time varies significantly, the agent ends up with the same return value and accuracy in all cases. Our results suggest that memory encoding may be stronger for trials involving salient events; that is, salient episodic memories in the hippocampus are more likely to be sampled during the ensuing choice.
\section{Discussion}\medskip
The goal of the present work was twofold. First, we trained an Actor-Critic RL model to solve tasks analogous to the monkeys' tasks; the model reproduces the main features of the behavioral data, so that further behavioral experiments can be conducted within this framework. Specifically, we used RNNs to construct an Actor-Critic RL framework based on RL theories of the PFC-BG circuit. The model was evaluated on two classical decision-making tasks --- a simple perceptual decision-making task and a value-based economic choice task --- and successfully reproduced the behavioral features reported by \cite{shadlen2001neural} and the neural activity recorded from the animal brain reported by \cite{padoa2006neurons}. We thereby presented a computational platform in which circuit mechanisms can be studied by systematically analyzing a model network, and diverse cognitive functions can be explored by conducting corresponding behavioral experiments. Second, based on this modeling work, we investigated which experiences in the hippocampus are considered or ignored during deliberation to govern future choices.
Since 1995, numerous actor-critic models of reinforcement learning have been proposed in neuroscience, particularly for the rat's basal ganglia \citep{davis1995models,joel2002actor}. Evidence shows that neurons in the PFC \citep{fujii2005time} and striatum \citep{barnes2005activity} code action sequences, suggesting that the BG-PFC circuit may participate in abstract action representations. Therefore, at the level of biological analysis, our model supports the actor-critic picture of reward-based learning in the PFC-BG circuit: one circuit learns an action selection policy and implements it, while the second structure computes the expected return and offers immediate feedback on whether the current action is good or bad. Moreover, \cite{frank2006anatomy} demonstrated that the BG can implement an adaptive gating mechanism that allows task-relevant information to be maintained in working memory (a temporary storage in the brain, facilitated by the prefrontal cortex). Our model also supports this division of labor between PFC and BG: the actor network learns task-relevant information and saves it into the hidden state in the form of working memory, while the critic system learns a value function to train the actor network, which in turn furnishes a dynamic gating mechanism that controls the updating of the working memory.
Moreover, recent experimental work in humans has shown that during memory-based decision-making tasks, medial frontal cortical neurons phase-lock their activity to theta-band oscillations in the hippocampus, which suggests an oscillation-mediated coordination of activity between distant brain regions \citep{Minxha2020Flexible}. This functional interaction between the frontal cortex and hippocampus supports our computational framework: the Actor-Critic model uses working memory stored in the hidden state of the GRU to make a choice, and the selected action affects the storage of memories in the hippocampus, which is in turn used to optimize the policy and control working memory updates. Although we have used the GRU to model the decision and value networks, both its dynamic gating mechanism and its ability to store states as working memory give our model powerful computational learning performance. However, early work demonstrated that the capacity of working memory is limited, so decisions are often made with finite information. Due to the transient nature caused by this capacity limitation and the fast decay rate of working memory, it is not an ideal memory system to support decision-making on its own. Moreover, accumulating evidence indicates that dopamine can facilitate episodic memory encoding in the hippocampus to support adaptive behavior \citep{bethus2010dopamine}, which suggests that episodic sampling may be a powerful decision-making mechanism. Therefore, we investigated the link between episodic memory and reward-based choice by conducting the value-based economic choice task in our framework. The results suggest that retrieval of salient episodic memories can promote deliberation in the decision-making process, which is essential for future goal-directed behavior.
Our model has some limitations, which may be opportunities for future work. For instance, during the retrieval of samples from episodic memory, we have defined the priority of salient events only in an abstract way, without providing a mechanism that explains how the mammalian brain would compute it. A process-level model implementing this term therefore remains to be developed. Moreover, in the cerebral cortex of mammals, a neuron releases only a single type of transmitter (known as `Dale's Principle'), which generates the same effect (excitatory or inhibitory) at all of its synaptic connections to other cells. In our framework, due to the complex nature of the GRU, we omitted this biological constraint and instead used firing-rate units as a mixture of excitatory and inhibitory neurons. Future work should reintroduce these constraints; other physiologically relevant phenomena, such as bursting, adaptation, and oscillations, may also be incorporated to build a more biologically plausible model.
\noindent
\textbf{Acknowledgement} This work was supported by the National Natural Science Foundation of China (Grant Nos. 11572127 and 11172103).
\printcredits
\bibliographystyle{cas-model2-names}
\section{Introduction}\label{Sec1}
\thispagestyle{empty}
Person re-identification (RE-ID) is a challenging problem focusing on pedestrian
matching and ranking across non-overlapping camera views. Although it has received considerable
exploration recently, it remains an open problem, given its potential
significance in security applications, especially video surveillance.
It has not been solved yet principally because of the dramatic intra-class variation
and the high inter-class similarity.
Existing attempts mainly focus on learning to extract robust and discriminative
representations
\cite{2014_ECCV_SCNCD,2014_IVC_KBICOV, 2015_CVPR_LOMO},
and learning matching functions or metrics
\cite{2011_CVPR_PRDC,2012_CVPR_KISSME,2013_CVPR_LADF,2014_ICDSC_KCCA,2015_CVPR_LOMO,2015_ICCV_MLAPG, 2015_ICCV_CSL}
in a supervised manner. Recently, deep learning has been adopted by the RE-ID community
\cite{2015_CVPR_Ahmed,2016_CVPR_JSTL,2016_CVPR_Wang, 2016_ECCV_Gated}
and has yielded promising results.
However, supervised strategies are intrinsically limited by the requirement
of manually labeled cross-view training data, which is very expensive \cite{2015_TCSVT_xiaojuan}.
In the context of RE-ID,
the limitation is even more pronounced because \emph{(1)} manual labeling may not be reliable
when a huge number of images must be checked across multiple camera views, and more importantly \emph{(2)} the
cost in time and money of labeling the overwhelming amount of data across disjoint camera views is prohibitive.
Therefore, in practice supervised methods are restricted
when applied to a new scenario with a huge amount of unlabeled data.
\begin{figure}
\includegraphics[width=1\linewidth]{temp2.pdf}
\caption{Illustration of view-specific interference/bias and our idea.
Images from different cameras suffer from
view-specific interference, such as occlusions in Camera-1,
dull illumination in Camera-2, and the change of viewpoints between them.
These factors introduce bias in the original feature space, and therefore
unsupervised re-identification is extremely challenging. Our model
structures data by clustering and learns view-specific projections
jointly, and thus finds a shared space where view-specific bias is
alleviated and better performance can be achieved. (Best viewed in color)
}
\label{FigTitle}
\end{figure}
\ws{To directly make full use of the cheap and valuable unlabeled data,
some existing efforts exploring unsupervised strategies
\cite{2010_CVPR_SDALF,2013_CVPR_SALIENCE,2014_BMVC_GTS, 2015_BMVC_DIC,2015_PAMI_ISR,2016_CVPR_tDIC, 2016_ICIP_Wang, 2016_ECCV_Kodirov} have been reported,}
but they are still not very satisfactory.
One of the main reasons is that, without the help of labeled data,
it is rather difficult to model the dramatic variances
across camera views, such as variances in illumination and occlusion conditions.
Such variances lead to view-specific interference/bias, which can severely hamper the search for
discriminative cues when matching people across views (see Figure \ref{FigTitle}).
In particular, existing unsupervised models treat the samples from different views in the same manner,
and thus the effects of view-specific bias can be overlooked.
In order to better address the problems \ws{caused by camera view changes} in unsupervised RE-ID scenarios, we propose a novel
unsupervised RE-ID model named \emph{Clustering-based Asymmetric\footnote{\final{``Asymmetric'' means specific transformations for each camera view.}} MEtric Learning (CAMEL)}.
The ideas behind it are based on the two \ws{following} considerations. \ws{First, although}
conditions can vary among camera views, we assume that there should be some shared space
where the data representations are less affected by view-specific bias.
By projecting original data into the shared space, the distance between any pair of
samples $\mathbf{x}_i$ and $\mathbf{x}_j$ is computed as:
\begin{equation}\label{EqSym}
\small
d(\mathbf{x}_i,\mathbf{x}_j) = \lVert \bm{U}^{\mathrm{T}}\mathbf{x}_i - \bm{U}^{\mathrm{T}}\mathbf{x}_j \rVert_2
= \sqrt{(\mathbf{x}_i-\mathbf{x}_j)^{\mathrm{T}}\bm{M}(\mathbf{x}_i-\mathbf{x}_j)},
\end{equation}
where $\bm{U}$ is the transformation matrix and $\bm{M} = \bm{U}\bm{U}^{\mathrm{T}}$.
\Koven{However, it can be hard for a universal transformation to implicitly model the view-specific feature distortion from different camera views,
especially when we lack label information to guide it.
This motivates us to \emph{explicitly} model the view-specific bias.
Inspired by the supervised asymmetric distance model \cite{2015_TCSVT_ASM},
we propose to embed asymmetric metric learning into our unsupervised RE-ID modelling,
and thus modify the symmetric form in Eq. (\ref{EqSym}) to an asymmetric one:}
\begin{equation}\label{EqAsym}
\small
d(\mathbf{x}_i^p,\mathbf{x}_j^q) = \lVert \bm{U}^{p\mathrm{T}}\mathbf{x}_i^p - \bm{U}^{q\mathrm{T}}\mathbf{x}_j^q \rVert_2,
\end{equation}
where $p$ and $q$ are indices of camera views.
An asymmetric metric is more suitable for unsupervised RE-ID scenarios, as
it explicitly models the variance among views by treating each view differently.
By such explicit means, we are able to better alleviate the disturbance of view-specific
bias.
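To make the contrast concrete, the two distance forms can be sketched numerically as follows (illustrative \texttt{numpy} code; random matrices merely stand in for learned projections):
\begin{verbatim}
import numpy as np

def symmetric_dist(x_i, x_j, U):
    # one universal projection U shared by all views
    return np.linalg.norm(U.T @ x_i - U.T @ x_j)

def asymmetric_dist(x_i, x_j, U_p, U_q):
    # view-specific projections U^p and U^q
    return np.linalg.norm(U_p.T @ x_i - U_q.T @ x_j)

# toy usage: M-dimensional features, T-dimensional shared space
M, T = 64, 64
x1, x2 = np.random.rand(M), np.random.rand(M)
U_p, U_q = np.random.rand(M, T), np.random.rand(M, T)
print(symmetric_dist(x1, x2, U_p), asymmetric_dist(x1, x2, U_p, U_q))
\end{verbatim}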
The other consideration is that, since it is unclear how to separate similar persons
in the absence of labeled data, it is reasonable to pay more attention to
better separating dissimilar ones.
Such consideration \ws{motivates} us to structure our data by clustering.
Therefore, we develop \emph{asymmetric metric clustering} that clusters cross-view person images.
By clustering together with asymmetric modelling, the data can be better characterized in the shared space,
contributing to better matching performance (see Figure \ref{FigTitle}).
In summary, the proposed CAMEL aims to learn a view-specific projection for each camera view
by jointly learning the asymmetric metric and
seeking \ws{optimal} cluster separations.
In this way, the data from different views is projected into
a shared space where view-specific bias is alleviated to some extent, and thus better
cross-view matching performance can be achieved.
\ws{So far in the literature, unsupervised RE-ID models have only been evaluated on small datasets containing hundreds or
a few thousand images. However, more realistic scenarios require evaluating
unsupervised methods on much larger datasets, say of hundreds of thousands of samples,
to validate their scalability. In our experiments, we have conducted extensive comparisons on datasets
whose scales vary widely.
In particular, we combined two existing RE-ID datasets \cite{2015_ICCV_MARKET,MARS}
to obtain a larger one that contains over 230,000 samples.
Experiments on this dataset (see Sec. \ref{SecFurtherEval}) show empirically that our model scales better to larger problems,
which is more realistic and more meaningful for unsupervised RE-ID models, while some existing unsupervised RE-ID models do not scale due to expensive storage or computation costs.}
\section{Related Work}\label{Sec2}
At present, most existing RE-ID models are supervised. They are mainly
based on learning distance metrics or subspace
\cite{2011_CVPR_PRDC,2012_CVPR_KISSME,2013_CVPR_LADF,2014_ICDSC_KCCA,2015_CVPR_LOMO,2015_ICCV_MLAPG, 2015_ICCV_CSL},
learning view-invariant and discriminative features
\cite{2014_ECCV_SCNCD,2014_IVC_KBICOV, 2015_CVPR_LOMO},
and deep learning frameworks
\cite{2015_CVPR_Ahmed,2016_CVPR_JSTL,2016_CVPR_Wang, 2016_ECCV_Gated}.
However, all these models rely on substantial labeled training data, which is typically required
to be pair-wise for each pair of camera views. Their performance depends highly on
the quality and quantity of labeled training data.
In contrast, our model does not require any labeled data and thus is free from
the prohibitively high cost of manual labeling and the risk of incorrect labeling.
\ws{To directly utilize unlabeled data for RE-ID, several unsupervised RE-ID models \cite{2013_CVPR_SALIENCE,2014_BMVC_GTS,2015_PAMI_ISR,2015_BMVC_DIC,2016_CVPR_tDIC}
have been proposed}.
All these models differ from ours in two aspects.
On the one hand, these models do not explicitly exploit the information on
view-specific bias, i.e., they treat feature transformation/quantization in every distinct camera view in the same manner
when modelling. In contrast, our model tries to learn specific transformation
for each camera view, aiming to find a shared space where view-specific interference
can be alleviated and thus better performance can be achieved.
On the other hand, as for the means to learn a metric or a transformation,
existing unsupervised methods for RE-ID rarely consider clustering, while
we introduce an asymmetric metric clustering to characterize data in the learned space. \ws{While the methods proposed in \cite{2015_TCSVT_ASM, 2013_AVSS_RCCA,2015_TCSVT_RCCA} could
also learn view-specific mappings, they are supervised methods and more importantly cannot be generalized to handle unsupervised RE-ID.}
Apart from our model, there have been some clustering-based metric learning models
\cite{2007_CVPR_AML,2015_NC_uNCA}. However, to the best of our knowledge, there has been no such
attempt in the RE-ID community before.
This is potentially because clustering is susceptible to view-specific interference,
so data points from the same view tend to be clustered together
rather than those of a specific person across views.
Fortunately, \ws{by formulating asymmetric learning and further limiting the discrepancy between view-specific transforms}, this problem can be
alleviated in our model. Therefore, our model is essentially different from these models
not only in formulation but also
in that our model is able to better deal with the cross-view matching problem by treating
each view asymmetrically. We will discuss the differences between our model and the
existing ones in detail in Sec. \ref{SecFairCmp}.
\section{Methodology}
\subsection{Problem Formulation}
Under a conventional RE-ID setting, suppose we have a surveillance camera network that
consists of $V$ camera views, from each of which we have collected
$N_p\;(p = 1,\cdots,V)$ images and thus there are $N = N_1+\cdots+N_V$ images in total as training samples.
Let \modify{ $\bm{X} = [\mathbf{x}_1^1,\cdots,\mathbf{x}_{N_1}^1,\cdots,\mathbf{x}_{1}^V,\cdots,\mathbf{x}_{N_V}^V]\in \mathbb{R}^{M \times N}$}
denote the training set, with each column $\mathbf{x}_i^p$ $(i = 1,\cdots,N_p; p = 1,\cdots,V)$
corresponding to an $M$-dimensional representation of the $i$-th image from the $p$-th
camera view.
Our goal is to learn $V$ mappings i.e., $\bm{U}^1,\cdots,\bm{U}^V$,
where $\bm{U}^p \in \mathbb{R}^{M \times T} (p = 1,\cdots,V)$,
corresponding to each camera view,
and thus we can project the original representation $\mathbf{x}_i^p$
from the original space $\mathbb{R}^M$
into a shared space $\mathbb{R}^T$
in order to alleviate the view-specific interference.
\subsection{Modelling}\label{Sec3}
Now we are looking for some transformations to map our data
into a shared space where we can better separate the
images of one person from those of different persons.
Naturally, this goal can be achieved by narrowing intra-class discrepancy and meanwhile
pulling the centers of all classes away from each other.
In an unsupervised scenario, however, we have no labeled data to tell our model
how it can exactly distinguish one person from another who has a confusingly similar
appearance to him.
Therefore, it is acceptable to relax the original idea:
we focus on gathering similar person images together, and hence separating relatively dissimilar ones.
Such goal can be modelled by minimizing an objective function like that of $k$-means
clustering \cite{KMEANS}:
\begin{equation}\label{Eq0}
\small
\begin{aligned}
\mathop{\min}_{\bm{U}}\mathcal{F}_{intra}= \sum_{k=1}^K \sum_{i \in {\mathcal{C}_k}} \lVert \bm{U}^{\mathrm{T}}\mathbf{x}_i - \mathbf{c}_k \rVert^2,
\end{aligned}
\end{equation}
where $K$ is the number of clusters,
$\mathbf{c}_k$ denotes the centroid of the $k$-th cluster and
$\mathcal{C}_k = \{ i | \bm{U}^{\mathrm{T}}\mathbf{x}_i \in k$-th cluster$\}$.
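As a concrete illustration, the objective in Eq. (\ref{Eq0}) can be evaluated as follows (a minimal \texttt{numpy} sketch with illustrative names, where \texttt{Y} holds the projected samples as columns and \texttt{labels} holds the hard cluster assignments):
\begin{verbatim}
import numpy as np

def f_intra(Y, labels, K):
    """Sum of squared distances between each projected sample
    (a column of Y) and the centroid of its cluster."""
    val = 0.0
    for k in range(K):
        members = Y[:, labels == k]
        if members.size == 0:       # skip empty clusters
            continue
        c_k = members.mean(axis=1, keepdims=True)
        val += ((members - c_k) ** 2).sum()
    return val
\end{verbatim}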
However, clustering results can be severely affected
by view-specific bias in cross-view problems.
In the context of RE-ID, the feature distortion can be view-sensitive due to view-specific interference such as
different lighting conditions and occlusions \cite{2015_TCSVT_ASM}.
Such interference
may disturb or even dominate the search for similar person images across views during the
clustering procedure. To address this cross-view problem,
we learn a specific projection for each view rather than a universal one
to explicitly model the effect of view-specific interference and to alleviate it.
Therefore, the idea can be further formulated
by minimizing an objective function below:
\begin{equation}\label{Eq1}
\small
\begin{aligned}
\mathop{\min}_{\bm{U}^1,\cdots,\bm{U}^V}\mathcal{F}_{intra}= &\sum_{k=1}^K \sum_{i \in {\mathcal{C}_k}} \lVert \bm{U}^{p\mathrm{T}}\mathbf{x}_i^p - \mathbf{c}_k \rVert^2\\
s.t.\qquad \bm{U}^{p\mathrm{T}}&\bm{\Sigma}^p\bm{U}^p = \bm{I} \quad (p = 1,\cdots,V),
\end{aligned}
\end{equation}
where the notation is similar to that of Eq. (\ref{Eq0}), with
$p$ denoting the view index and
$\bm{\Sigma}^p = \bm{X}^p\bm{X}^{p\mathrm{T}}/ N_p + \alpha \bm{I}$, where $\bm{I}$ represents the identity matrix and the term $\alpha\bm{I}$
avoids singularity of the covariance matrix.
The transformation $\bm{U}^p$ that corresponds to each instance $\mathbf{x}_i^p$ is determined
by the camera view which $\mathbf{x}_i^p$ comes from.
The quasi-orthogonal constraints on $\bm{U}^p$ ensure that the model will
not simply give zero matrices. By combining the asymmetric metric learning, we actually realize an asymmetric metric clustering on RE-ID data across camera views.
Intuitively, if we minimize this objective function directly,
$\bm{U}^p$ will largely depend on the data distribution
of the $p$-th view. Since each view carries its own specific bias,
any $\bm{U}^p$ and $\bm{U}^q$ could be arbitrarily different.
This result is natural,
but large inconsistencies among the learned transformations are
not what we expect,
because the transformations act on person images from different views, which are inherently correlated and homogeneous.
More critically, largely different projection basis pairs would fail to
capture the discriminative nature of cross-view images, producing an even worse
matching result.
Hence, to strike a balance between capturing the discriminative nature of the data and
alleviating view-specific bias, we embed a cross-view consistency regularization term
into our objective function. Then, for better tractability,
we divide the intra-class term by its scale $N$, so that the regularization parameter
is not sensitive to the number of training samples.
Thus, our optimization task becomes
\modify{
\begin{equation}\label{Eq2}
\small
\begin{aligned}
\mathop{\min}_{\bm{U}^1,\cdots,\bm{U}^V} \mathcal{F}_{obj} = \frac{1}{N}&\mathcal{F}_{intra} + \lambda\mathcal{F}_{consistency} \\
= \frac{1}{N}\sum_{k=1}^K &\sum_{i \in {\mathcal{C}_k}} \lVert \bm{U}^{p\mathrm{T}}\mathbf{x}_i^p - \mathbf{c}_k \rVert^2
+\lambda \sum_{p\neq q} \lVert \bm{U}^p-\bm{U}^q\rVert_F^2 \\
s.t.\qquad &\bm{U}^{p\mathrm{T}}\bm{\Sigma}^p\bm{U}^p = \bm{I} \quad (p = 1,\cdots,V),
\end{aligned}
\end{equation}}
where $\lambda$ is the cross-view regularizer and $\lVert\cdot\rVert_F$ denotes the Frobenius norm
of a matrix. We call the above model the \emph{Clustering-based Asymmetric MEtric Learning (CAMEL)}.
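For concreteness, the objective above can be evaluated directly from per-view quantities as sketched below (illustrative \texttt{numpy} code, assuming the hypothetical helper \texttt{f\_intra} from the sketch after Eq. (\ref{Eq0}); \texttt{labels} must follow the same sample ordering as the stacked projections):
\begin{verbatim}
import numpy as np

def f_obj(X_views, U_views, labels, K, lam):
    """CAMEL objective: intra-cluster scatter in the shared
    space plus the cross-view consistency penalty."""
    # project each view with its own U^p, then stack all columns
    Y = np.hstack([U.T @ X for U, X in zip(U_views, X_views)])
    intra = f_intra(Y, labels, K) / Y.shape[1]
    consist = sum(np.linalg.norm(Up - Uq, 'fro') ** 2
                  for p, Up in enumerate(U_views)
                  for q, Uq in enumerate(U_views) if p != q)
    return intra + lam * consist
\end{verbatim}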
To illustrate the differences between symmetric and asymmetric metric clustering in structuring data
in the RE-ID problem,
we further show the data distributions in Figure \ref{FigP}.
We can observe from Figure \ref{FigP} that the view-specific
bias is obvious in the original space: triangles in the upper left and circles in the lower right.
In the common space
learned by symmetric metric clustering, the bias is still obvious.
In contrast, in the shared space learned by asymmetric metric clustering,
the bias is alleviated and thus the data is better characterized according to the identities
of the persons, i.e., samples of one person (one color) gather together into a cluster.
\begin{figure}[t]
\hspace{-1ex}
\subfigure[Original]{
\includegraphics[width=0.33\linewidth]{feature_1.pdf}
}
\hspace{-2.5ex}
\subfigure[Symmetric]{
\includegraphics[width=0.33\linewidth]{symDistribution_7color.pdf}
}
\hspace{-2.5ex}
\subfigure[Asymmetric]{
\includegraphics[width=0.33\linewidth]{metric_1.pdf}
}
\caption{\label{FigP}Illustration of how symmetric and asymmetric metric clustering structure data
using our method for the unsupervised RE-ID problem. The samples are from the SYSU dataset \cite{2015_TCSVT_ASM}.
We performed PCA for visualization. One shape (triangle or circle) stands for samples from one view, while one color indicates samples of one person.
(a) Original distribution (b) distribution in the common space learned by symmetric metric clustering
(c) distribution in the shared space learned by asymmetric metric clustering. (Best viewed in color)}
\end{figure}
\subsection{Optimization}
For convenience, we denote $\mathbf{y}_i=\bm{U}^{p\mathrm{T}}\mathbf{x}_i^p$. Then we have $\bm{Y} \in \mathbb{R}^{T \times N}$,
where each column $\mathbf{y}_i$
corresponds to the projected new representation of the corresponding column of $\bm{X}$. For optimization, we rewrite our objective function in a more compact form.
The first term can be rewritten as follows \cite{NMF}:
\begin{equation}\label{Eq3}
\small
\begin{aligned}
\frac{1}{N}\sum_{k=1}^K \sum_{i \in {\mathcal{C}_k}} \lVert \mathbf{y}_i - \mathbf{c}_k \rVert^2
=\frac{1}{N}[\mathrm{Tr}(\bm{Y}^{\mathrm{T}}\bm{Y})-\mathrm{Tr}(\bm{H}^{\mathrm{T}}\bm{Y}^{\mathrm{T}}\bm{YH})], \\
\end{aligned}
\end{equation}
where
\begin{equation}\label{EqH}
\small
\bm{H} =
\begin{bmatrix}
\mathbf{h}_1,...,\mathbf{h}_K
\end{bmatrix}
,\quad \mathbf{h}_k^{\mathrm{T}}\mathbf{h}_l =
\begin{cases}
0 & k\neq l \\
1 & k= l
\end{cases}
\end{equation}
\begin{equation}\label{EqColH}
\small
\mathbf{h}_k =
\begin{bmatrix}
0,\cdots,0,1,\cdots,1,0,\cdots,0,1,\cdots
\end{bmatrix}
^{\mathrm{T}}/\sqrt{n_k}
\end{equation}
is an indicator vector with the $i$-th entry corresponding to the instance $\mathbf{y}_i$,
indicating that $\mathbf{y}_i$ is in the $k$-th cluster if the corresponding entry does not equal zero.
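As a concrete illustration, $\bm{H}$ can be constructed from hard cluster assignments as follows (a minimal \texttt{numpy} sketch with illustrative names, where \texttt{labels[i]} is the cluster index of $\mathbf{y}_i$):
\begin{verbatim}
import numpy as np

def build_H(labels, K):
    """Indicator matrix H: entry (i, k) is 1/sqrt(n_k) if sample i
    belongs to cluster k and 0 otherwise, so that the columns h_k
    are mutually orthogonal unit vectors."""
    N = len(labels)
    H = np.zeros((N, K))
    for k in range(K):
        idx = np.where(labels == k)[0]
        if len(idx) > 0:
            H[idx, k] = 1.0 / np.sqrt(len(idx))
    return H
\end{verbatim}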
Then we construct
\modify{
\begin{equation}
\small
\widetilde {\bm{X}} =
\begin{bmatrix}
\mathbf{x}^1_1&\cdots&\mathbf{x}^1_{N_1}& \mathbf{0}& \cdots& \mathbf{0}& \cdots& \mathbf{0} \\
\mathbf{0}&\cdots&\mathbf{0}& \mathbf{x}^2_1&\cdots& \mathbf{x}^2_{N_2}& \cdots& \mathbf{0} \\
\vdots&\vdots&\vdots& \vdots&\vdots& \vdots& \vdots& \vdots \\
\mathbf{0}&\cdots&\mathbf{0}& \mathbf{0}&\cdots& \mathbf{0}& \cdots& \mathbf{x}^V_{N_V}
\end{bmatrix}
\end{equation}}
\begin{equation}
\small
\widetilde {\bm{U}} =
\begin{bmatrix}
\bm{U}^{1\mathrm{T}}, \cdots, \bm{U}^{V\mathrm{T}}
\end{bmatrix}
^{\mathrm{T}}
,
\end{equation}
so that
\begin{equation}\label{EqY}
\small
\bm{Y} = \widetilde{\bm{U}}^{\mathrm{T}}\widetilde{\bm{X}},
\end{equation}
and thus Eq. (\ref{Eq3}) becomes
\begin{equation}
\small
\begin{aligned}
&\frac{1}{N}\sum_{k=1}^K \sum_{i \in {\mathcal{C}_k}} \lVert \mathbf{y}_i - \mathbf{c}_k \rVert^2 \\
=&\frac{1}{N}\mathrm{Tr}(\widetilde {\bm{X}}^{\mathrm{T}}\widetilde {\bm{U}}\widetilde {\bm{U}}^{\mathrm{T}}\widetilde {\bm{X}})
-\frac{1}{N}\mathrm{Tr}({\bm{H}}^{\mathrm{T}}\widetilde {\bm{X}}^{\mathrm{T}}\widetilde {\bm{U}}\widetilde {\bm{U}}^{\mathrm{T}}\widetilde {\bm{X}}\bm{H}).
\end{aligned}
\end{equation}
As for the second term, we can also rewrite it as follows:
\begin{equation}
\small
\lambda \sum_{p\neq q} \lVert \bm{U}^p-\bm{U}^q\rVert_F^2 = \lambda\mathrm{Tr}(\widetilde{\bm{U}}^{\mathrm{T}}\bm{D\widetilde U}),
\end{equation}
where
\begin{equation}
\small
\bm{D} =
\begin{bmatrix}
(V-1)\bm{I}& -\bm{I}& -\bm{I}&\cdots &-\bm{I} \\
-\bm{I}& (V-1)\bm{I}& -\bm{I}&\cdots &-\bm{I} \\
\vdots&\vdots&\vdots&\vdots&\vdots \\
-\bm{I}& -\bm{I}& -\bm{I}& \cdots&(V-1)\bm{I}
\end{bmatrix}.
\end{equation}
Then, it is reasonable to relax the constraints
\begin{equation}
\small
\bm{U}^{p\mathrm{T}}\bm{\Sigma}^p\bm{U}^p = \bm{I} \quad (p = 1,\cdots,V)
\end{equation}
to
\begin{equation}
\small
\sum_{p=1}^V \bm{U}^{p\mathrm{T}}\bm{\Sigma}^p\bm{U}^p = \widetilde {\bm{U}}^{\mathrm{T}}\widetilde{\bm{\Sigma}}\widetilde {\bm{U}} = V\bm{I},
\end{equation}
where $\widetilde{\bm{\Sigma}} = \mathrm{diag}(\bm{\Sigma}^1, \cdots, \bm{\Sigma}^V)$
because what we expect is to prevent each $\bm{U}^p$ from shrinking to a zero matrix.
The relaxed version of the constraints satisfies our need, and it
bypasses tedious computations.
By now we can rewrite our optimization task as follows:
\begin{equation}\label{optFinal}
\small
\begin{aligned}
\mathop{\min}_{\widetilde{\bm{U}}}\mathcal{F}_{obj} &=
\frac{1}{N}\mathrm{Tr}(\widetilde {\bm{X}}^{\mathrm{T}}\widetilde {\bm{U}}\widetilde {\bm{U}}^{\mathrm{T}}\widetilde {\bm{X}})
+\lambda\mathrm{Tr}(\widetilde{\bm{U}}^{\mathrm{T}}\bm{D\widetilde U}) \\
&- \frac{1}{N}\mathrm{Tr}({\bm{H}}^{\mathrm{T}}\widetilde {\bm{X}}^{\mathrm{T}}\widetilde {\bm{U}}\widetilde {\bm{U}}^{\mathrm{T}}\widetilde {\bm{X}}\bm{H})
\\
&s.t.\qquad \widetilde {\bm{U}}^{\mathrm{T}}\widetilde{\bm{\Sigma}}\widetilde {\bm{U}} = V\bm{I}.
\end{aligned}
\end{equation}
It is easy to see from Eq. (\ref{Eq2}) that our objective function
is highly non-linear and non-convex. Fortunately, in the form of Eq. (\ref{optFinal})
we can find that once $\bm{H}$ is fixed, Lagrange's method can be applied to
our optimization task. And again from Eq. (\ref{Eq2}), the task
is exactly the objective of $k$-means clustering once $\widetilde{\bm{U}}$ is fixed \cite{KMEANS}.
Thus, we can adopt an alternating algorithm to solve the optimization problem.
\noindent \textbf{Fix $\bm{H}$ and optimize $\widetilde{\bm{U}}$.} Now we see how we optimize $\widetilde{\bm{U}}$.
After fixing $\bm{H}$ and applying the method
of Lagrange multiplier, our optimization task (\ref{optFinal})
is transformed into an eigen-decomposition problem as follows:
\begin{equation}\label{EqEigenDe}
\small
\bm{G}\mathbf{u} = \gamma \mathbf{u},
\end{equation}
where $\gamma$ is the Lagrange multiplier (and also is the eigenvalue here) and
\begin{equation}\label{EqG}
\small
\bm{G} = \widetilde{\bm{\Sigma}}^{-1}(\lambda \bm{D}+\frac{1}{N}\widetilde{\bm{X}}\widetilde{\bm{X}}^{\mathrm{T}}-\frac{1}{N}\widetilde{\bm{X}}\bm{HH}^{\mathrm{T}}\widetilde{\bm{X}}^{\mathrm{T}}).
\end{equation}
Then, $\widetilde{\bm{U}}$ can be obtained by solving this eigen-decomposition problem.
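In practice, this step amounts to a symmetric-definite generalized eigenproblem, as in the following sketch (illustrative \texttt{numpy}/\texttt{scipy} code; \texttt{V} denotes the number of views, and the rescaling by $\sqrt{V}$ matches the relaxed constraint $\widetilde{\bm{U}}^{\mathrm{T}}\widetilde{\bm{\Sigma}}\widetilde{\bm{U}} = V\bm{I}$):
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

def solve_U(X_tilde, H, Sigma_tilde, D, lam, T, V):
    """Fix H and update U~: solve A u = gamma * Sigma~ u, which is
    equivalent to the eigen-decomposition of G = Sigma~^{-1} A."""
    N = H.shape[0]
    A = lam * D + (X_tilde @ X_tilde.T) / N \
        - (X_tilde @ H) @ (X_tilde @ H).T / N
    vals, vecs = eigh(A, Sigma_tilde)   # eigenvalues in ascending order
    # minimizing directions correspond to the T smallest eigenvalues
    return np.sqrt(V) * vecs[:, :T]
\end{verbatim}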
\noindent \textbf{Fix $\widetilde{\bm{U}}$ and optimize $\bm{H}$.} As for the optimization of $\bm{H}$, we can simply fix $\widetilde{\bm{U}}$
and conduct $k$-means clustering in the learned space. Each column of $\bm{H}$,
$\mathbf{h}_k$, is thus constructed according to the clustering result.
Based on the analysis above, we can now propose the main algorithm
of CAMEL in Algorithm \ref{AlgCamel}. We set the maximum number of iterations to 100.
After obtaining $\widetilde{\bm{U}}$, we decompose it back into $\{\bm{U}^1,\cdots,\bm{U}^V\}$.
The algorithm is guaranteed to converge, as stated in the following proposition:
\final{
\begin{prop}
In Algorithm \ref{AlgCamel}, $\mathcal{F}_{obj}$ is guaranteed to converge.
\end{prop}
\begin{proof}
In each iteration, when $\widetilde{\bm{U}}$ is fixed,
if $\bm{H}$ is a local minimizer, $k$-means leaves $\bm{H}$ unchanged; otherwise it seeks a local minimizer.
When $\bm{H}$ is fixed, $\widetilde{\bm{U}}$ has a closed-form solution which is the global minimizer.
Therefore, $\mathcal{F}_{obj}$ decreases step by step.
As $\mathcal{F}_{obj}\geq 0$ is bounded below by $0$, it is guaranteed to converge.
\end{proof}
}
\begin{algorithm}[t]\label{AlgCamel}
\scriptsize
\caption{CAMEL}
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\Input{$\widetilde{\bm{X}},K,\epsilon=10^{-8}$}
\Output{$\widetilde{\bm{U}}$}
Conduct $k$-means clustering with respect to
each column of $\widetilde{\bm{X}}$ to initialize $\bm{H}$ according to Eq. (\ref{EqH}) and (\ref{EqColH}). \\
Fix $\bm{H}$ and solve the eigen-decomposition problem described by Eq. (\ref{EqEigenDe}) and (\ref{EqG})
to construct $\widetilde{\bm{U}}$. \\
\While{decrement of $\mathcal{F}_{obj} > \epsilon$ \& maximum iteration unreached}
{
\begin{itemize}[leftmargin=*]
\setlength{\topsep}{1ex}
\setlength{\itemsep}{-0.1ex}
\setlength{\parskip}{0.1\baselineskip}
\vspace{0.1cm}
\item Construct $\bm{Y}$ according to Eq. (\ref{EqY}). \\
\item Fix $\widetilde{\bm{U}}$ and conduct $k$-means clustering with respect to
each column \par of $\bm{Y}$ to update $\bm{H}$ according to Eq. (\ref{EqH}) and (\ref{EqColH}). \\
\item Fix $\bm{H}$ and solve the eigen-decomposition problem described by \par Eq. (\ref{EqEigenDe}) and (\ref{EqG})
to update $\widetilde{\bm{U}}$.
\end{itemize}
}
\end{algorithm}
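To summarize the alternating procedure, a compact end-to-end sketch is given below (illustrative \texttt{numpy}/\texttt{scikit-learn} code; it reuses the hypothetical helpers \texttt{build\_H}, \texttt{solve\_U}, and \texttt{f\_intra} sketched earlier and is not the authors' released implementation):
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

def camel(X_tilde, Sigma_tilde, D, V, K=500, lam=0.01, T=64,
          max_iter=100, eps=1e-8):
    """Alternating optimization of Algorithm 1 (a sketch)."""
    # initialization: k-means on the columns of the block-diagonal X~
    labels = KMeans(n_clusters=K, n_init=1).fit_predict(X_tilde.T)
    U = solve_U(X_tilde, build_H(labels, K), Sigma_tilde, D, lam, T, V)
    prev = np.inf
    for _ in range(max_iter):
        Y = U.T @ X_tilde                    # project into shared space
        labels = KMeans(n_clusters=K, n_init=1).fit_predict(Y.T)
        H = build_H(labels, K)               # update H
        U = solve_U(X_tilde, H, Sigma_tilde, D, lam, T, V)  # update U~
        obj = f_intra(U.T @ X_tilde, labels, K) / X_tilde.shape[1] \
              + lam * np.trace(U.T @ D @ U)  # objective F_obj
        if prev - obj <= eps:                # decrement below epsilon
            break
        prev = obj
    return U    # vertical stack of U^1, ..., U^V
\end{verbatim}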
\section{Experiments}
\subsection{Datasets}
\begin{figure}
\begin{center}
\subfigure[]{
\includegraphics[width=0.137\linewidth]{collage_VIPER.pdf}
}
\subfigure[\label{FigDatasetsCUHK01}]{
\includegraphics[width=0.137\linewidth]{collage_CUHK01.pdf}
}
\subfigure[]{
\includegraphics[width=0.137\linewidth]{collage_CUHK03.pdf}
}
\subfigure[\label{FigDatasetsSYSU}]{
\includegraphics[width=0.137\linewidth]{collage_SYSU.pdf}
}
\subfigure[]{
\includegraphics[width=0.137\linewidth]{collage_Market.pdf}
}
\subfigure[]{
\includegraphics[width=0.137\linewidth]{collage_ExMarket.pdf}
}
\caption{\label{FigDatasets}Samples of the datasets. Every two images in
a column are from one identity across two disjoint camera views.
(a) VIPeR (b) CUHK01 (c) CUHK03 (d) SYSU (e) Market (f) ExMarket. (Best viewed in color)}
\end{center}
\end{figure}
\begin{table}[t]
\begin{center}
\scriptsize
\begin{tabular}{
>{\centering\arraybackslash}p{1.2cm}
>{\centering\arraybackslash}p{0.5cm}
>{\centering\arraybackslash}p{0.7cm}
>{\centering\arraybackslash}p{0.8cm}
>{\centering\arraybackslash}p{0.7cm}
>{\centering\arraybackslash}p{0.7cm}
>{\centering\arraybackslash}p{0.8cm}}
\toprule
Dataset & VIPeR & CUHK01 & CUHK03 & SYSU & Market & ExMarket \\
\midrule
\# Samples & 1,264 & 3,884 & 13,164 & 24,448 & 32,668 & 236,696 \\
\# Views & 2 & 2 & 6 & 2 & 6 & 6 \\
\bottomrule
\end{tabular}%
\caption{\label{TableDatasets}Overview of dataset scales. ``\#'' means ``the number of''.}
\end{center}
\end{table}
Since unsupervised models are more meaningful when the problem scale
is larger, our experiments were conducted on relatively big datasets,
except VIPeR \cite{VIPER}, which is small but widely used.
Various degrees of view-specific bias can be observed in all these datasets (see Figure \ref{FigDatasets}).
\noindent \textbf{The VIPeR dataset} contains 632 identities,
with two images captured from two camera views of each identity.
\noindent \textbf{The CUHK01 dataset} \cite{CUHK01} contains 3,884 images of
971 identities captured from
two disjoint views. There are two images of every identity from each view.
\noindent \textbf{The CUHK03 dataset} \cite{2014_CVPR_CUHK03} contains 13,164 images
of 1,360 pedestrians captured from six surveillance camera views.
Besides hand-cropped images, samples detected
by a state-of-the-art pedestrian detector are provided.
\noindent \textbf{The SYSU dataset} \cite{2015_TCSVT_ASM} includes 24,448 RGB images of 502 persons under two surveillance cameras.
One camera view mainly
captured the frontal or back views of persons, while the other observed mostly
the side views.
\noindent \textbf{The Market-1501 dataset} \cite{2015_ICCV_MARKET} (Market) contains 32,668 images of 1,501 pedestrians, each of whom was
captured by at most six cameras. All of the images were cropped by a pedestrian
detector. There are also some poorly detected samples in this dataset serving
as distractors.
\noindent \textbf{The ExMarket dataset}\final{\footnote{Demo code for the model and the ExMarket dataset can be found on \url{https://github.com/KovenYu/CAMEL}.}}. In order to evaluate unsupervised RE-ID methods at an even larger scale,
which is more realistic, we further combined \textbf{the MARS dataset} \cite{MARS} with
Market. MARS is a video-based RE-ID dataset which contains
20,715 tracklets of 1,261 pedestrians. All the identities in MARS are a
subset of those in Market.
We then took 20\% of the frames (one in every five successive frames) from the tracklets
and combined them with Market to obtain an extended version of Market (\textbf{ExMarket}).
The imbalance between the numbers of samples of these 1,261 persons and of the other
240 persons makes this dataset more challenging and realistic. There are 236,696 images
in ExMarket in total, 112,351 of which constitute the training set.
A brief overview of the dataset scales can be found in Table \ref{TableDatasets}.
\subsection{Settings}
\noindent \textbf{Experimental protocols}:
A widely adopted protocol was followed on VIPeR in our
experiments \cite{2015_CVPR_LOMO}, i.e., randomly dividing the 632 pairs of images into
two halves, one of which was used as training set and the other as testing set. This
procedure was repeated 10 times to report average performance.
Only
single-shot experiments were conducted.
The experimental protocol for CUHK01 was the same as that in \cite{2015_CVPR_LOMO}.
We randomly selected 485 persons as training set and the other 486 ones as testing set.
The evaluating procedure was repeated 10 times. Both multi-shot and single-shot
settings were conducted.
The CUHK03 dataset was provided together with its recommended evaluating protocol \cite{2014_CVPR_CUHK03}.
We followed the provided protocol, where images of 1,160 persons were chosen as training set,
images of another 100 persons as
validation set and the remainders as testing set.
This procedure was repeated 20 times.
In our experiments, detected samples were adopted since they
are closer to real-world settings.
Both multi-shot and single-shot experiments were conducted.
As for the SYSU dataset, we randomly picked 251 pedestrians' images as training set
and the others as testing set.
In the testing stage, we basically followed the protocol as in \cite{2015_TCSVT_ASM}. That is,
we randomly chose one and three images of each pedestrian as gallery for single-shot and multi-shot experiments, respectively.
We repeated the testing procedure 10 times.
Market is somewhat different from the others. The evaluation protocol was also
provided along with the data \cite{2015_ICCV_MARKET}. Since the images of one person
came from at most six views, single-shot experiments were not suitable. Instead,
multi-shot experiments were conducted and both cumulative matching characteristic (CMC) and
mean average precision (MAP) were adopted for evaluation \cite{2015_ICCV_MARKET}.
The protocol of ExMarket was identical to that of Market since the identities were
completely the same as we mentioned above.
\noindent \textbf{Data representation}:
In our experiments we used the deep-learning-based JSTL feature proposed in \cite{2016_CVPR_JSTL}.
We implemented it using the 56-layer ResNet \cite{2016_CVPR_resnet}, which
produced $64$-D features.
The original JSTL was adopted in our implementation to extract features on SYSU, Market and ExMarket.
Note that the training set of the original JSTL contained VIPeR, CUHK01 and CUHK03,
violating the unsupervised setting.
So we trained a new JSTL model without VIPeR in its training set to extract
features on VIPeR. Similar procedures were followed for CUHK01 and CUHK03.
\noindent \textbf{Parameters}:
We set $\lambda$, the cross-view consistency regularizer, to $0.01$.
We also evaluated the situation when $\lambda$ goes to infinity, i.e.,
the symmetric version of our model in Sec. \ref{SecFurtherEval},
to show how important the asymmetric modelling is.
\Koven{Regarding the parameter $T$, which is the feature dimension after the transformation learned by CAMEL, we set $T$ equal to the original feature dimension, i.e., $64$, for simplicity. In our experiments, we found that CAMEL can align data distributions across camera views even without performing any further dimension reduction.
This may be due to the fact that, unlike conventional subspace learning models, the transformations learned by CAMEL are view-specific for different camera views and always non-orthogonal. Hence, the learned view-specific transformations can already reduce the discrepancy between the data distributions of different camera views.}
As for $K$, we found that
our model was not sensitive to $K$ when $N\gg K$ and $K$ was not too small
(see Sec. \ref{SecFurtherEval}),
so we set $K = 500$.
These parameters were fixed for all datasets.
\subsection{Comparison}\label{SecFairCmp}
Unsupervised models are more significant when applied to larger datasets.
In order to make comprehensive and fair comparisons, in this section
we compare CAMEL with the most comparable unsupervised models
on six datasets with their scale orders varying from hundreds to hundreds of thousands.
We show the comparative results measured by
the rank-1 accuracies of CMC and MAP (\%)
in Table \ref{TableJSTL}.
\noindent \textbf{Comparison to Related Unsupervised RE-ID Models}.
In this subsection we compare CAMEL with the sparse dictionary learning
model (denoted as Dic) \cite{2015_BMVC_DIC},
sparse representation learning model ISR \cite{2015_PAMI_ISR},
kernel subspace learning model RKSL \cite{2016_ICIP_Wang} and
sparse auto-encoder (SAE) \cite{SAE1,SAE2}.
We tried several sets of parameters for each of them and report the best results.
We also adopt the Euclidean distance which is adopted in the original JSTL paper \cite{2016_CVPR_JSTL} as a baseline (denoted as JSTL).
From Table \ref{TableJSTL}
we can observe that
CAMEL outperforms other models on all the datasets on both settings.
In addition, we can further see from Figure \ref{FigCMC} that CAMEL outperforms other models
at any rank.
One of the main reasons is that the view-specific
interference is noticeable in these datasets. For example, we can see in Figure \ref{FigDatasetsCUHK01} that
on CUHK01, the
changes in illumination are extremely severe, and even human beings may have difficulties in
recognizing the identities in those images across views.
This impedes the other symmetric models from achieving higher accuracies,
because they implicitly assume that
the invariant and discriminative information can be retained and exploited through a universal
transformation for all views.
CAMEL relaxes this assumption by
learning an asymmetric metric and can thus outperform the other models significantly.
In Sec. \ref{SecFurtherEval} we will see that the performance of CAMEL drops considerably
when it degrades to a symmetric model.
\begin{table}[t]
\scriptsize
\begin{center}
\setlength{\tabcolsep}{0.16cm}
\begin{tabular}{
>{\centering\arraybackslash}p{1.2cm}
>{\centering\arraybackslash}p{0.7cm}
>{\centering\arraybackslash}p{0.8cm}
>{\centering\arraybackslash}p{0.85cm}
>{\centering\arraybackslash}p{0.85cm}
>{\centering\arraybackslash}p{0.85cm}
>{\centering\arraybackslash}p{0.85cm}}
\toprule
Dataset & VIPeR & CUHK01 & CUHK03 & SYSU & Market & ExMarket \\
\midrule
Setting & SS & SS/MS & SS/MS & SS/MS & MS & MS \\
\midrule
Dic \begin{tiny}\cite{2015_BMVC_DIC}\end{tiny} &29.9&49.3/52.9&27.4/36.5&21.3/28.6&50.2(22.7)& 52.2(21.2) \\
ISR \begin{tiny}\cite{2015_PAMI_ISR}\end{tiny} &27.5 &53.2/55.7 &31.1/38.5& 23.2/33.8& 40.3(14.3)&- \\
RKSL \begin{tiny}\cite{2016_ICIP_Wang}\end{tiny} &25.8 & 45.4/50.1 &25.8/34.8 &17.6/23.0 &34.0(11.0) &- \\
SAE \begin{tiny}\cite{SAE1}\end{tiny} &20.7 &45.3/49.9 &21.2/30.5 &18.0/24.2 &42.4(16.2) &44.0(15.1) \\
JSTL \begin{tiny}\cite{2016_CVPR_JSTL}\end{tiny} &25.7 &46.3/50.6 &24.7/33.2 &19.9/25.6 &44.7(18.4) &46.4(16.7)\\
\midrule
AML \begin{tiny}\cite{2007_CVPR_AML}\end{tiny} &23.1 &46.8/51.1 &22.2/31.4 &20.9/26.4 &44.7(18.4) &46.2(16.2) \\
UsNCA \begin{tiny}\cite{2015_NC_uNCA}\end{tiny} &24.3 &47.0/51.7 &19.8/29.6 &21.1/27.2 &45.2(18.9) &- \\
\midrule
CAMEL & \textbf{30.9} & \textbf{57.3/61.9} & \textbf{31.9/39.4} & \textbf{30.8/36.8} & \textbf{54.5}(\textbf{26.3}) & \textbf{55.9}(\textbf{23.9}) \\
\bottomrule
\end{tabular}%
\caption{\label{TableJSTL}Comparative results of unsupervised models on the six datasets, measured by
rank-1 accuracies and MAP (\%).
``-'' means prohibitive time consumption due to time complexities of the models.
``SS'' represents single-shot setting and ``MS'' represents multi-shot setting.
For Market and ExMarket, MAP is also provided in the parentheses due to the
requirement in the protocol \cite{2015_ICCV_MARKET}.
Such a format is also applied in the following tables.}
\end{center}
\end{table}
\begin{figure}
\begin{center}
\subfigure[VIPeR]{
\includegraphics[width=0.47\linewidth]{VIPER_.pdf}
}
\subfigure[CUHK01]{
\includegraphics[width=0.47\linewidth]{CUHK01_.pdf}
}
\subfigure[CUHK03]{
\includegraphics[width=0.47\linewidth]{CUHK03_.pdf}
}
\subfigure[SYSU]{
\includegraphics[width=0.47\linewidth]{SYSU_.pdf}
}
\subfigure[Market]{
\includegraphics[width=0.47\linewidth]{Market_.pdf}
}
\subfigure[ExMarket]{
\includegraphics[width=0.47\linewidth]{ExMarket_.pdf}
}
\caption{\label{FigCMC}CMC curves.
\modify{For CUHK01, CUHK03 and SYSU, we take the results under the single-shot setting as examples.
Similar patterns can be observed under the multi-shot setting.}}
\end{center}
\end{figure}
\noindent \textbf{Comparison to Clustering-based Metric Learning Models}.
In this subsection we compare CAMEL with
a typical model AML \cite{2007_CVPR_AML} and a recently proposed model UsNCA \cite{2015_NC_uNCA}.
We can see from Fig. \ref{FigCMC} and Table \ref{TableJSTL} that compared to them, CAMEL achieves
noticeable improvements on all the six datasets.
One of the major reasons is that
they do not consider the view-specific bias, which can
be very disturbing in clustering, making them unsuitable for the RE-ID problem.
\ws{In comparison}, CAMEL alleviates such disturbances by asymmetric modelling.
This factor contributes to the much better performance of CAMEL.
\noindent \textbf{Comparison to the State-of-the-Art.}
In the last subsections, we compared with existing unsupervised RE-ID methods using the same features.
In this part, we also compare with the results reported in the literature.
Note that most existing unsupervised RE-ID methods have not been evaluated on large datasets like CUHK03, SYSU, or Market,
so Table \ref{TableSotA} only reports the comparative results
on VIPeR and CUHK01.
We additionally compared existing unsupervised RE-ID models, including the
hand-crafted-feature-based SDALF \cite{2010_CVPR_SDALF} and CPS \cite{CAVIAR},
the transfer-learning-based UDML \cite{2016_CVPR_tDIC},
the graph-learning-based model (denoted as GL) \cite{2016_ECCV_Kodirov},
and the local-salience-learning-based GTS \cite{2014_BMVC_GTS} and SDC \cite{2013_CVPR_SALIENCE}.
We can observe from Table \ref{TableSotA} that
our model CAMEL can outperform the state-of-the-art by large margins on CUHK01.
\begin{table}[t]
\begin{center}
\scriptsize
\begin{tabular}{cccccccc}
\toprule
Model & SDALF & CPS & UDML
& GL & GTS
& SDC & CAMEL \\
&\cite{2010_CVPR_SDALF} &\cite{CAVIAR} &\cite{2016_CVPR_tDIC} &\cite{2016_ECCV_Kodirov} &\cite{2014_BMVC_GTS} & \cite{2013_CVPR_SALIENCE} & \\
\midrule
VIPeR & 19.9 & 22.0 & 31.5 & \textbf{33.5} & 25.2 & 25.8 & 30.9 \\
CUHK01 & 9.9 & - & 27.1 & 41.0 & - & 26.6 & \textbf{57.3} \\
\bottomrule
\end{tabular}%
\caption{\label{TableSotA}Results compared to the state-of-the-art reported in literatures, measured by rank-1 accuracies (\%). ``-'' means no reported result.}
\end{center}
\end{table}
\noindent \textbf{Comparison to Supervised Models.}
Finally, in order to see how well CAMEL can approximate the performance of supervised RE-ID,
\Koven{we additionally compare CAMEL with its supervised version (denoted as CAMEL$_s$), which is easily derived by substituting true labels for the clustering results, and three standard supervised models,
including the widely used KISSME \cite{2012_CVPR_KISSME}, XQDA \cite{2015_CVPR_LOMO}, and the asymmetric distance model CVDCA \cite{2015_TCSVT_ASM}.
The results are shown in Table \ref{TableSupervised}.
We can see that CAMEL$_s$ outperforms CAMEL by various degrees,
indicating that label information can further improve CAMEL's performance.
Also from Table \ref{TableSupervised}, we notice that CAMEL can be comparable to other standard supervised models on some datasets like CUHK01,
and even outperforms some of them.}
This is probably because the JSTL model we used had not been fine-tuned on the target datasets; this was done for a fair comparison with unsupervised models, which work on completely unlabelled training data.
Nevertheless, this suggests that the performance of CAMEL may not be far below that of standard supervised RE-ID models.
\begin{table}[t]
\scriptsize
\setlength{\tabcolsep}{0.11cm}
\begin{tabular}{ccccccc}
\toprule
Dataset & VIPeR & CUHK01 & CUHK03 & SYSU & Market & ExMarket \\
\midrule
Setting & SS & SS/MS & SS/MS & SS/MS & MS & MS \\
\midrule
KISSME \begin{tiny}\cite{2012_CVPR_KISSME}\end{tiny} &28.4&53.0/57.1&37.8/45.4&24.7/31.8&51.1(24.5)& 48.0(18.3) \\
XQDA \begin{tiny}\cite{2015_CVPR_LOMO}\end{tiny} &28.9&54.3/58.2&36.7/43.7&25.2/31.7&50.8(24.4)& 47.4(18.1) \\
CVDCA \begin{tiny}\cite{2015_TCSVT_ASM}\end{tiny} &\textbf{37.6}&57.1/60.9&37.0/44.6&31.1/\textbf{38.9}&52.6(25.3)&51.5(22.6) \\
CAMEL$_s$ &33.7&\textbf{58.5/62.7}&\textbf{45.1/53.5}&\textbf{31.6}/37.6&\textbf{55.0}(\textbf{27.1})& \textbf{56.1}(\textbf{24.1}) \\
\midrule
CAMEL & 30.9 & 57.3/61.9 & 31.9/39.4 & 30.8/36.8 &54.5(26.3) & 55.9(23.9) \\
\bottomrule
\end{tabular}%
\caption{\label{TableSupervised}Results compared to supervised models using the same JSTL features.}
\end{table}
\subsection{Further Evaluations}\label{SecFurtherEval}
\noindent \textbf{The Role of Asymmetric Modeling}.
Table \ref{TableSym} shows what happens if CAMEL degrades to a common symmetric model.
Apparently, without asymmetrically modelling each camera view,
our model worsens considerably, indicating that asymmetric modelling for clustering
is rather important for addressing the cross-view matching problem in RE-ID, as well as in our model.
\noindent \textbf{Sensitivity to the Number of Clustering Centroids}. We take
CUHK01, Market and ExMarket datasets as examples of different scales (see Table \ref{TableDatasets}) for this evaluation.
Table \ref{TableK} shows how the performance varies with different numbers of clustering centroids, $K$.
It is obvious that the performance
only fluctuates mildly when $N \gg K$ and $K$ is not too small.
Therefore CAMEL is not very sensitive to $K$ especially when applied to large-scale problems.
\final{To further explore the reason behind this,
we show in Table \ref{table:rate} the rate of clusters that contain more than one person,
at the initial stage and the convergence stage of Algorithm \ref{AlgCamel}.
We can see that \emph{(1)} regardless of how $K$ varies,
there is always a number of clusters containing more than one person at both the initial stage and the convergence stage.
This indicates that our model works \emph{without} requiring perfect clustering results.
And \emph{(2)}, although the rate varies,
at the convergence stage it is consistently lower than at the initialization stage.
This shows that the clustering results improve consistently.
These two observations suggest that
the clustering should be a means to learn the asymmetric metric, rather than an ultimate objective.}
\modify{
\noindent \textbf{Adaptation Ability to Different Features}.
Finally, we show that CAMEL is effective not only with the deep-learning-based JSTL features.
We additionally adopted the hand-crafted LOMO feature proposed in \cite{2015_CVPR_LOMO}.
We performed PCA to produce $512$-D LOMO features, and the results are shown in Table \ref{TableLOMO}.
Among all the models, the results of Dic and ISR are the most comparable (Dic and ISR take all the second places). So for clarity, we only compare CAMEL with them and with the $L_2$ distance as a baseline.
From the table we can see that CAMEL outperforms them.
}
\begin{table}[t]
\begin{center}
\scriptsize
\setlength{\tabcolsep}{0.11cm}
\begin{tabular}{ccccccc}
\toprule
Dataset & VIPeR & CUHK01 & CUHK03 & SYSU & Market & ExMarket \\
\midrule
Setting & SS & SS/MS & SS/MS & SS/MS & MS & MS \\
\midrule
CMEL & 27.5 & 52.5/54.9 & 29.8/37.5 & 25.4/30.9 & 47.6(21.5) & 48.7(20.0) \\
CAMEL & \textbf{30.9} & \textbf{57.3/61.9} & \textbf{31.9/39.4} & \textbf{30.8/36.8} & \textbf{54.5}(\textbf{26.3}) & \textbf{55.9}(\textbf{23.9}) \\
\bottomrule
\end{tabular}%
\caption{\label{TableSym}Performances of CAMEL compared to its symmetric version, denoted as CMEL.}
\end{center}
\end{table}
\begin{table}[t]
\begin{center}
\scriptsize
\begin{tabular}{cccccc}
\toprule
K & 250 & 500 & 750 & 1000 & 1250 \\
\midrule
CUHK01 & 56.59 & 57.35 & 56.26 & 55.12 & 52.75 \\
Market & 54.48 & 54.45 & 54.54 & 54.48 & 54.48 \\
ExMarket & 55.49 & 55.87 & 56.17 & 55.93 & 55.67 \\
\bottomrule
\end{tabular}%
\caption{\label{TableK}Performances of CAMEL when the number of clusters, K, varies.
Measured by single-shot rank-1 accuracies (\%) for CUHK01 and multi-shot for Market and ExMarket.}
\end{center}
\end{table}
\begin{table}[t]
\begin{center}
\scriptsize
\begin{tabular}{cccccc}
\toprule
K & 250 & 500 & 750 & 1000 & 1250 \\
\midrule
Initial Stage & 77.6\% & 57.0\% & 26.3\% & 11.6\% & 6.0\% \\
Convergence Stage & 55.8\% & 34.3\% & 18.2\% & 7.2\% & 4.8\% \\
\bottomrule
\end{tabular}%
\caption{\label{table:rate}
Rate of clusters containing more than one person on CUHK01.
A similar trend can be observed on the other datasets.}
\end{center}
\end{table}
\begin{table}[t]
\begin{center}
\scriptsize
\setlength{\tabcolsep}{0.16cm}
\begin{tabular}{
>{\centering\arraybackslash}p{1.2cm}
>{\centering\arraybackslash}p{0.7cm}
>{\centering\arraybackslash}p{0.8cm}
>{\centering\arraybackslash}p{0.85cm}
>{\centering\arraybackslash}p{0.85cm}
>{\centering\arraybackslash}p{0.85cm}
>{\centering\arraybackslash}p{0.85cm}}
\toprule
Dataset & VIPeR & CUHK01 & CUHK03 & SYSU & Market & ExMarket \\
\midrule
Setting & SS & SS/MS & SS/MS & SS/MS & MS & MS \\
\midrule
Dic \begin{tiny}\cite{2015_BMVC_DIC}\end{tiny} & 15.8 & 19.6/23.6 & 8.6/13.4 & 14.2/24.4 & 32.8(12.2) & 33.8(12.2) \\
ISR \begin{tiny}\cite{2015_PAMI_ISR}\end{tiny} & 20.8 & 22.2/27.1 & 16.7/20.7 & 11.7/21.6 & 29.7(11.0) & - \\
$L_2$ & 11.6 & 14.0/18.6 & 7.6/11.6 & 10.8/18.9 & 27.4(8.3) & 27.7(8.0) \\
\midrule
CAMEL & \textbf{26.4} & \textbf{30.0/36.2} & \textbf{17.3/23.4} & \textbf{23.6/35.6} & \textbf{41.4(14.1)} & \textbf{42.2(13.7)} \\
\bottomrule
\end{tabular}%
\caption{\label{TableLOMO}Results using $512$-D LOMO features.}
\end{center}
\end{table}
\section{Conclusion}
In this work, we have shown that metric learning can be effective for unsupervised RE-ID by proposing
a clustering-based asymmetric metric learning method called CAMEL. \ws{CAMEL learns view-specific projections
to deal with view-specific interference, and this is based on existing clustering (e.g., the $k$-means model demonstrated in this work)
on unlabelled RE-ID data, resulting in an asymmetric metric clustering.
Extensive experiments show that our model can outperform
existing ones in general, especially on large-scale unlabelled RE-ID datasets.}
\section*{Acknowledgement}
This work was supported partially by the National Key Research and Development Program of China (2016YFB1001002), NSFC(61522115, 61472456, 61573387, 61661130157, U1611461), the Royal Society Newton Advanced Fellowship (NA150459), Guangdong Province Science and Technology Innovation Leading Talents (2016TX03X157).
{\small
\bibliographystyle{ieee}
\section{Introduction}
The physics of planar structures describes interesting properties \cite{Bais}, e. g., charge fractionalization \cite{Feldman,Cherman} and fractional statistics \cite{Arovas}. Furthermore, in analyzing planar systems, several interesting features arise due to the correspondence between particles and their duals. One of these correspondences is the particle-vortex duality \cite{Karch,Murugan,Metlitski}. In the planar world, vortices constitute an important class of structures. The importance of these structures is due to their relevant applications, as we can see in Refs. \cite{Lima1,Lima2,Lima3,Lima4}. A notably interesting application appears in condensed matter physics, where these structures appear in the description of superconductivity phenomena \cite{Abrikosov,Davis1,Davis2}.
In general, one can understand vortices as structures that arise in three-dimensional spacetime, i. e., $(2+1)$D \cite{Casana2,Casana3,Casana4,Edery1,Edery2}. In field theory, the pioneers in the study of vortex structures were Nielsen and Olesen \cite{Nielsen}. In the seminal paper {\it Vortex-line models for dual strings}, the authors show the vortex solutions of an action constructed with a complex scalar field minimally coupled to a gauge field with $U(1)$ symmetry \cite{Nielsen}. After Nielsen and Olesen's proposal, several papers emerged discussing topological \cite{Weinberg,Hong} and non-topological \cite{LeeP,Arthur,Kimm} structures.
Only in 1991 did Stern \cite{Stern} propose for the first time the study of a theory non-minimally coupled to the gauge field. Using a three-dimensional spacetime, Stern sought to describe point particles with no spin degree of freedom that carry an appropriate magnetic moment. Stern's work motivated several researchers who later proposed papers on non-minimal models, e. g., vortices non-minimally coupled to the gauge field \cite{Lima3,Torres,PKGhosh,SGhosh}. To be specific, in Ref. \cite{Cavalcante}, the authors investigate BPS vortex solutions for a specific interaction using an $O(3)$-sigma model non-minimally coupled to a Maxwell-Chern-Simons field. Besides, the BPS properties of sigma model vortices were also studied using a non-minimal coupling and a multi-field approach \cite{Lima3}. Motivated by these applications, a natural question arises: How are vortex structures modified in a non-minimal theory constituted by non-canonical multi-fields? Throughout this work, we will expose the answer to this question.
In this research article, we use the non-linear O(3)-sigma model. Briefly, the non-linear O(3)-sigma model consists of three real scalar fields \cite{Rajaraman}, i. e., $\Phi(\textbf{r},t)\equiv\{\phi_i(\textbf{r},t),\, i=1,2,3\}$ with the constraint
\begin{align}\label{vin}
\Phi\cdot\Phi=\sum_{i=1}^{3}\phi_i\phi^i=1.
\end{align}
Respecting this constraint, the dynamics of the O(3)-sigma field, i. e., of the field $\Phi$, are governed by the following Lagrangian
\begin{align}
\mathcal{L}=\frac{1}{2}\partial_\mu\Phi\cdot\partial^\mu\Phi.
\end{align}
Thus, one describes the sigma model as a vector of fields in its internal space, i. e., a three-dimensional field space \cite{Rajaraman,Ghosh1,Ghosh2,Schroers1,Schroers2}. In 1960, Gell-Mann and L\'{e}vy were the first to propose this model \cite{Gellmann}. At the time, the purpose was to describe the Goldberger-Treiman formula for the decay rate of the charged pion using a strong interaction proposed by Schwinger \cite{Schwinger} and a weak current formulated by Polkinghorne \cite{Polkinghorne}. After the work of Gell-Mann and L\'{e}vy, several papers considered the non-linear sigma model in their analysis. For example, using the O(3)-sigma model, the emergence of photons was investigated in Ref. \cite{Motrunich}. Furthermore, the stability of solitons and Lorentz violation were studied, respectively, in Refs. \cite{Leese} and \cite{Messias}.
Closely related to the non-linear sigma model, some authors have proposed so-called multi-field models \cite{Bazeia1,Bazeia2,Bazeia3}. These models play an important role in inflationary theories \cite{Langlois1}, because the theoretical results of multi-field theories agree with phenomenological measurements \cite{Langlois1,Langlois2,Bean, Trotta, Keskitalo}. This motivates us to study the topological structures derived from this kind of theory. Indeed, one can find some research articles in the literature discussing aspects of structures in multi-field theories, e. g., see Refs. \cite{Oles,Liu}. However, as far as we know, no study has been performed discussing vortex structures in a theory combining the O(3)-sigma model with other non-canonical fields.
In particular, in this work, in addition to the dynamical term of the sigma model, we will use a cuscuton-like non-canonical real scalar field. Afshordi, Chung, and Geshnizjani announced the cuscuton model in the paper {\it A causal field theory with an infinite speed of sound} \cite{Afshordi}. In this theory, the cuscuton dynamics arise from the description of a degenerate Hamiltonian symplectic structure in the cosmologically homogeneous limit \cite{Afshordi}. In this case, the cuscuton theory becomes homogeneous when the metric is locally Minkowski \cite{Afshordi,Afshordi2,Afshordi3}. An interesting feature of the cuscuton field is that it does not contribute to the equations of motion in the stationary limit. Thus, one can interpret it as a non-dynamical auxiliary field that follows the dynamics of the fields to which it couples.
Naturally, these applications and motivations raise some questions. For example, is it possible to obtain a vortex line in an O(3)-sigma model coupled to a non-canonical field? How do the non-canonical term and the multi-field structure influence the O(3)-sigma vortices? These are relevant questions that motivate our study. Thus, considering a sigma-cuscuton model, we hope to answer these questions throughout this research article.
This work is organized as follows: In Sec. II, the BPS vortices are analyzed. In Sec. III, we implement spherical symmetry in the target space of the O(3)-sigma model. Subsequently, in Sec. IV, topological vortex solutions are displayed. Finally, in Sec. V, our findings are summarized.
\section{Non-minimal BPS vortex}
As discussed in Ref. \cite{Lima3}, the vortex configurations generated by multi-field theories are interesting because their physical properties can be modified. Motivated by that, let us start our study by considering a three-dimensional model, i.e., a $(2+1)$-dimensional spacetime. In this scenario, the Lagrangian density of our theory is
\begin{align}\label{Lag}
\mathcal{L}=\frac{1}{2}\nabla_{\mu}\Phi\cdot\nabla^{\mu}\Phi+\eta\sqrt{\vert \partial_\mu\psi\partial^\mu\psi\vert}+\frac{1}{2}\partial_\mu\psi\partial^\mu\psi-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}-\mathcal{V}(\phi_3,\psi).
\end{align}
Here, $\Phi$ is a triplet of scalar fields subject to the constraint $\Phi\cdot\Phi=1$. Meanwhile, $\psi$ is a real scalar field, $F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu$ is the electromagnetic tensor, and $\mathcal{V}(\phi_3,\psi)$ is the interaction potential of the theory. Furthermore, the term $\eta\sqrt{\vert\partial_\mu\psi\,\partial^\mu\psi\vert}$ is known as the cuscuton term. This term describes non-canonical theories \cite{Afshordi,Afshordi2,Afshordi3}. Indeed, the cuscuton term first appeared as an alternative to describe dark matter, and its contribution to the action lacks a dynamical degree of freedom \cite{Lima2, Afshordi2}. In its etymology, the word \textit{cuscuton} originates from Latin and describes a parasitic plant, namely, the Cuscuta. Based on this, we call our theory the sigma-cuscuton model.
As discussed in Ref. \cite{Lima3}, one defines the usual covariant derivative as
\begin{align}
\label{CovariantD0}
D_\mu\Phi=\partial_\mu\Phi+eA_\mu\,(\hat{n}_3\times\Phi).
\end{align}
Meanwhile, the non-minimal covariant derivative is
\begin{align}\label{CovariantD}
\nabla_\mu\Phi=\partial_\mu\Phi+\bigg(eA_\mu+\frac{g}{2}\varepsilon_{\mu\nu\lambda}F^{\nu\lambda}\bigg)\,\hat{n}_3\times \Phi.
\end{align}
Let us study the non-minimal theory, i.e., vortex configurations with an anomalous contribution of the magnetic momentum. We introduce the anomalous magnetic momentum contribution through the coupling $\frac{g}{2}\varepsilon_{\mu\nu\lambda}F^{\nu\lambda}$ in the covariant derivative, i.e., a coupling between the gauge field and the matter field. One can find the non-minimal coupling applied in investigations of the properties of BPS solitons, e.g., see Refs. \cite{Torres,PKGhosh,CA,SGhosh}.
To carry out our study, allow us to consider a flat spacetime with metric signature $\eta_{\mu\nu}=\mathrm{diag}(-,+,+)$. Moreover, the equation of motion of the gauge field is
\begin{align}\label{GaugeEquation}
j^\nu=\partial_\lambda[g\varepsilon_{\mu\lambda\nu}(\Phi\times\nabla^\mu\Phi)\cdot\hat{n}_3-F^{\lambda\nu}],
\end{align}
where $j^\nu=e(\Phi\times\nabla^\nu\Phi)\cdot\hat{n}_3$ and $\textbf{J}^\nu=-j^\nu\cdot\hat{n}_3$.
By inspection of Gauss' law, i.e., the $\nu=0$ component of Eq. (\ref{GaugeEquation}), we can assume $A_0=0$. In this case, the structures that arise in this theory are purely magnetic.
Investigating the equation of motion, one obtains the matter field equation, namely,
\begin{align}
\nabla^\mu\nabla_\mu\Phi=-\mathcal{V}_\Phi,
\end{align}
with $\mathcal{V}_\Phi=\frac{\partial\mathcal{V}}{\partial \Phi}$.
Meanwhile, the real scalar field equation is
\begin{align}
\partial_\mu\bigg[\partial^\mu\psi+\eta\frac{\partial^\mu\psi}{\sqrt{\vert\partial_\nu\psi\,\partial^\nu\psi\vert}}\bigg]=-\mathcal{V}_\psi,
\end{align}
with $\mathcal{V}_\psi=\frac{\partial\mathcal{V}}{\partial \psi}$.
We are interested in the soliton-like solutions that describe topological vortices. Thus, it is necessary to investigate the energy of the system. To perform this analysis, we construct the energy-momentum tensor and examine its $T_{00}$ component, whose integration over the whole space gives the energy of the structures. Performing this analysis, the energy is
\begin{align}\label{energy0}
\mathrm{E}=\frac{1}{2}\int\, d^2x\, \bigg[\nabla_i\Phi\cdot\nabla^i\Phi+\partial_i\psi\partial^i\psi+2
\eta\sqrt{\vert\partial_i\psi\,\partial^i\psi\vert}+F_{ij}F^{ij}+2\mathcal{V}\bigg].
\end{align}
The energy can be organized as follows:
\begin{align}\label{energy1} \nonumber
\mathrm{E}=&\int\, d^2x\,\bigg[\frac{1}{2}(\nabla_i\Phi\mp\varepsilon_{ij}\Phi\times\nabla_j\Phi)^2+\frac{1}{2}\bigg(\partial_i \psi\mp\frac{W_\psi}{r}\bigg)^2+\frac{1}{2}(F_{ij}\pm\sqrt{2\mathcal{U}})^2+\eta\sqrt{\vert\partial_i\psi\,\partial^i\psi\vert}+\\
\mp&\varepsilon_{ij}\Phi\cdot(\nabla_i\Phi\times\nabla_j\Phi)\mp\frac{W_\psi\partial_i\psi}{r}-\frac{W_{\psi}^{2}}{2r^2}+\mathcal{V}\mp F_{ij}\sqrt{2\mathcal{U}}-\mathcal{U}\bigg].
\end{align}
Here, we introduced in the energy two functions, i.e., $W=W[\psi(x_i);\, x_i]$ and $\mathcal{U}=\mathcal{U}[\psi(x_i);\, x_i]$, with $W_\psi=\frac{\partial W}{\partial\psi}$. In general, one introduces the superpotential functions $W$ and $\mathcal{U}$ to obtain a first-order formalism of the theory. Indeed, these superpotentials play a relevant role, i.e., they are related to the potential $\mathcal{V}$ at the saturation limit of the energy \cite{Vachaspati}. Thus, at the energy saturation limit, one obtains first-order equations of motion \cite{Vachaspati}, which is quite suitable for our purpose.
Analyzing the energy (\ref{energy1}), one notes that the static field configurations have energy bounded from below. Therefore, at the energy saturation limit, one obtains
\begin{align}\label{BPS1}
\nabla_i\Phi=\pm\varepsilon_{ij}\Phi\times\nabla_j\Phi, \, \, \, \, \, \, F_{ij}=\mp\sqrt{2\mathcal{U}} \, \, \, \, \, \, \text{and} \, \, \, \, \, \, \partial_i\psi=\pm\frac{W_\psi}{r}.
\end{align}
Note that the first two equations of the expression (\ref{BPS1}) are known as the Bogomol'nyi equations (or BPS equations) that describe the vortices of the O(3)-sigma model. On the other hand, the expression $\partial_i\psi=\pm W_\psi/r$ is the BPS equation for the scalar field without the contribution of the non-canonical term (the cuscuton contribution). As a matter of fact, in the stationary case, the dynamics derived from the cuscuton term do not contribute to the equation of motion. That occurs because, when we consider a cuscuton-like scalar field $\psi=\psi(r_1)\equiv\psi(r)$, the contribution of the cuscuton-like term to the equation of motion is
\begin{align}
\partial_\mu\bigg[\frac{\partial \mathcal{L}_{cusc}}{\partial(\partial_\mu\psi)}\bigg]=\bigg(\frac{\partial\mathcal{L}_{cusc}}{\partial \psi'}\bigg)'=\eta\bigg(\frac{\partial\vert\psi'\vert}{\partial\psi'}\bigg)',
\end{align}
which vanishes, since $\partial\vert\psi'\vert/\partial\psi'=\mathrm{sgn}(\psi')$ is piecewise constant, except at the singular point $\psi'=0$. However, this singularity is removable. Therefore, one can assign the value zero to the contribution of the cuscuton-like term to the equation of motion. Thus, pure contributions from the cuscuton term yield only a trivial contribution to the equations of motion, regardless of the shape of the potential. Consequently, the first-order equation for the $\psi$ field is simply the BPS equation for the $\psi$ field without the cuscuton contributions.
Substituting Eqs. (\ref{BPS1}) into (\ref{energy1}), one obtains
\begin{align}\label{energy3}
\mathrm{E}_{BPS}=\mp\int\, d^2x\, \bigg[\varepsilon_{ij}\Phi\cdot(\nabla_i\Phi\times\nabla_j\Phi)-F_{ij}\sqrt{2\mathcal{U}}+ \frac{W_\psi\partial_i\psi}{r}\bigg].
\end{align}
The integrand of the above equation is the BPS energy density.
To obtain the BPS properties, we assume that the interaction is
\begin{align}
\mathcal{V}=\mathcal{U}+\frac{W_{\psi}^{2}}{2r^2}\mp\eta\frac{W_\psi}{r}.
\end{align}
Note that the last term in the potential is the contribution of the non-canonical term. Thus, in the BPS limit, the cuscuton-like term plays the role of what we call an impurity. This word characterizes terms in the action that do not change the equations of motion but can change the soliton profile \cite{Adam}. In truth, one can find theories with impurities in several works. For example, impurities appear in studies of the solubility of self-dual configurations \cite{Adam}, CP$(2)$ vortex solutions \cite{Casana}, and vortices in the presence of a neutral field \cite{Dionisio}.
Therefore, the absolute BPS energy (\ref{energy3}) is
\begin{align}
\mathrm{E}_{BPS}=\mathrm{E}_{BPS}^{(\sigma)}+\mathrm{E}_{BPS}^{(\psi)},
\end{align}
where
\begin{align}\label{Energy4}
\mathrm{E}_{BPS}^{(\sigma)}=\mp\int\, d^2x\, [\varepsilon_{ij}\Phi\cdot(\nabla_i\Phi\times\nabla_j\Phi)-F_{ij}\sqrt{2\mathcal{U}}] \, \, \, \, \, \, \text{and} \, \, \, \, \, \, \mathrm{E}_{BPS}^{(\psi)}=\mp\int\,d^2x\, \frac{W_\psi\partial_i\psi}{r}.
\end{align}
\section{The spherically symmetric and vacuumless structures}
To investigate the spherically symmetric vortex solutions, let us assume the ansatz proposed by Schroers in Ref. \cite{Schroers1}, i. e.,
\begin{align}\label{ansatz1}
\Phi(r, \theta)=\begin{pmatrix}
\sin f(r)\cos N\theta\\
\sin f(r)\sin N\theta\\
\cos f(r)
\end{pmatrix}.
\end{align}
This ansatz is necessary for the $\Phi$ field to respect the constraint of the O(3)-sigma model, i.e., $\Phi\cdot\Phi=1$. It is interesting to mention that this ansatz has been widely used in other works, e.g., see Refs. \cite{Lima5,Lima6}.
On the other hand, as suggested in Refs. \cite{Lima3, Casana5}, the real scalar field is
\begin{align}
\psi=\psi(r).
\end{align}
To study the vortex configurations, we use the ansatz proposed in Refs. \cite{Schroers1,PKGhosh}, i. e.,
\begin{align}\label{ansatz3}
\textbf{A}(r)=-\frac{Na(r)}{er}\hat{\textbf{e}}_{\theta},
\end{align}
where $N$ is the winding number. This behavior of $\textbf{A}(r)$ produces a magnetic field $\textbf{B}=\nabla\times\textbf{A}$. Thus, calculating the $\nabla\times\textbf{A}$, one obtains
\begin{align}\label{MagneticF}
\textbf{B}=-\frac{Na'(r)}{er}\hat{\textbf{e}}_z,
\end{align}
and therefore, being $F_{12}=-B$ with $B=\vert\vert\textbf{B}\vert\vert$, it follows that
\begin{align}
F_{12}=-\frac{Na'(r)}{er}.
\end{align}
The magnetic field (\ref{MagneticF}) is responsible for the magnetic flux that emerges from the vortex. In this case, the magnetic flux is
\begin{align}\label{flux}
\phi_{flux}=\oiint \textbf{B}\cdot d\textbf{S}.
\end{align}
Considering the planar nature of the vortex, we conclude that the magnetic flux (\ref{flux}) is
\begin{align}
\phi_{flux}=-\int_{0}^{2\pi}\int_{0}^\infty\frac{Na'(r)}{er}rdrd\theta,
\end{align}
which leads us to
\begin{align}\label{Mflux}
\phi_{flux}=\frac{2\pi N}{e}[a(0)-a(\infty)].
\end{align}
Furthermore, the vortex has the energy profile shown in Eq. (\ref{Energy4}). This energy reformulated in terms of the field variables $f(r)$ and $a(r)$ is
\begin{align}\label{EBPS}
\mathrm{E}_{BPS}=\mp\int\, d^2x\, \bigg[\frac{N[a(r)-1]}{r}f'(r)\sin f(r)+\frac{Na'(r)}{er}\sqrt{2\mathcal{U}}+\frac{W_\psi\partial_i\psi}{r}\bigg].
\end{align}
\section{Vortex solution in the vacuumless theory}
\subsection{The scalar field solutions}
The boundary conditions of the topological field configurations are
\begin{align}\label{top1}
\psi(r\to 0)=\mp1, \hspace{1cm} \psi(r\to\infty)=\pm1,
\end{align}
\begin{align}\label{top2}
&f(r\to 0)=0, \hspace{1cm} f(r\to \infty)=\pi,
\end{align}
and
\begin{align}
\label{top3}
&a(r\to 0)=0, \hspace{1cm} a(r\to \infty)=-\beta.
\end{align}
Here $\beta\in\mathds{R}_{+}$.
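As a direct consequence of these conditions (a one-line evaluation we add for clarity), the flux (\ref{Mflux}) is completely fixed by the winding number and $\beta$:
\begin{align*}
\phi_{flux}=\frac{2\pi N}{e}[a(0)-a(\infty)]=\frac{2\pi N}{e}[0-(-\beta)]=\frac{2\pi N\beta}{e}.
\end{align*}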
Furthermore, allow us to start our investigation of topological structures by assuming the superpotential
\begin{align}\label{SPW}
W[\psi(r)]=\alpha\psi\bigg(1-\frac{1}{3}\psi^2\bigg).
\end{align}
To avoid carrying too many constants in our theory, let us assume $\eta=\alpha$.
The superpotential (\ref{SPW}) describes a $\phi^4$-like interaction. Therefore, when considering this superpotential, we are ensuring that spontaneous symmetry breaking occurs. This spontaneous symmetry breaking will be responsible for the arising of structures in the topological sector of $\psi$ \cite{Vachaspati}.
Now, using the superpotential (\ref{SPW}) the first-order equation of $\psi(r)$ is
\begin{align}\label{PsiE}
\psi'(r)=\pm\frac{\alpha}{r}[1-\psi(r)^2].
\end{align}
Considering the topological conditions (\ref{top1}), one solves Eq. (\ref{PsiE}). The solutions of the equation (\ref{PsiE}) are
\begin{align}\label{solpsi}
\psi(r)=\pm\frac{r^{2\alpha}-r_{0}^{2\alpha}}{r^{2\alpha}+r_{0}^{2\alpha}}.
\end{align}
As previously discussed in reference \cite{Lima3}, $r_0$ is an integration constant that describes the initial setting of the $\psi$ field. Thus, one can assume $r_{0}=1$. In this case, the solutions (\ref{solpsi}) are
\begin{align}
\psi(r)=\pm\tanh[\text{ln}(r^\alpha)].
\end{align}
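One can check this claim by direct substitution (a short verification we add here): since $\mathrm{sech}^2=1-\tanh^2$, the profiles above obey
\begin{align*}
\psi'(r)=\pm\frac{\alpha}{r}\,\mathrm{sech}^{2}[\text{ln}(r^\alpha)]=\pm\frac{\alpha}{r}[1-\psi(r)^2],
\end{align*}
which is precisely the first-order equation (\ref{PsiE}).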
The solutions of the $\psi$ field are called kink-like (positive sign) and antikink-like (negative sign) solutions. In Fig. \ref{fig1}, we display the kink-like and antikink-like solutions that describe the field $\psi$.
\begin{figure}[!ht]
\centering
\includegraphics[height=6.5cm,width=8cm]{kink.pdf}
\includegraphics[height=6.5cm,width=8cm]{AKink.pdf}\\
\vspace{-1cm}
\begin{center}
(a) \hspace{7cm} (b)
\end{center}
\vspace{-1cm}
\caption{Solutions of $\psi(r)$. (a) kink-like configuration. (b) Antikink-like configuration.}
\label{fig1}
\end{figure}
\subsection{The vacuumless theory}
To study the vortex configurations of the non-minimal O(3)-sigma model, we particularize our analysis to the case of vacuumless theories. For example, some authors used vacuumless theories to study vortex-like solutions with Maxwell and Chern-Simons electrodynamics \cite{Matheus}. Furthermore, structures in curved spacetime \cite{Moreira} and topological solitons \cite{DBazeiaF} were studied. Therefore, let us now consider a vacuumless theory to investigate the vortex solutions of the non-minimal sigma-cuscuton model. Thus, to have a vacuumless theory, let us assume
\begin{align}\label{UP}
\mathcal{U}=-\frac{W_\psi^2}{2r^2}\pm\alpha\frac{W_\psi}{r}.
\end{align}
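Indeed, with this choice the interaction cancels identically (we spell out the one-line check): substituting Eq. (\ref{UP}) into $\mathcal{V}=\mathcal{U}+W_{\psi}^{2}/(2r^{2})\mp\eta W_{\psi}/r$ and using $\eta=\alpha$,
\begin{align*}
\mathcal{V}=\bigg(-\frac{W_{\psi}^{2}}{2r^2}\pm\alpha\frac{W_\psi}{r}\bigg)+\frac{W_{\psi}^{2}}{2r^2}\mp\alpha\frac{W_\psi}{r}=0.
\end{align*}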
The only way for equality (\ref{UP}) to hold is if $\mathcal{U}[\phi_i(x_i);x_i]=\mathcal{U}(x_i)$. In this case, the interaction of the theory [see the Lagrangian (\ref{Lag})] is null, i.e., $\mathcal{V}=0$, and we have a theory (\ref{Lag}) without a vacuum. Allow us, for the moment, to focus on this case. Thus, using the superpotential (\ref{SPW}), the function $\mathcal{U}$ takes the form
\begin{align}\label{SPSigma}
\mathcal{U}=-\frac{\alpha^2}{2r^2}[1-\tanh^2(\text{ln}(r^\alpha))]^2+\frac{\alpha^2}{r}[1-\tanh^2(\text{ln}(r^\alpha))].
\end{align}
\subsection{The vacuumless vortex solutions}
Considering the BPS equations (\ref{BPS1}), the ansätze (\ref{ansatz1}) and (\ref{ansatz3}), and the function (\ref{SPSigma}), one obtains the well-known vortex equations of the O(3)-sigma model, i.e.,
\begin{align}\label{B1}
f'(r)=\pm\frac{N}{r}[a(r)-1]\sin f(r),
\end{align}
and
\begin{align}\label{B2}
a'(r)=\pm\frac{\alpha}{N}\sqrt{2r[1-\tanh^2(\text{ln}(r^\alpha))]-[1-\tanh^2(\text{ln}(r^\alpha))]^2}.
\end{align}
To write Eqs. (\ref{B1}) and (\ref{B2}), we used natural units, i.e., $e=1$.
Considering the topological boundary conditions (\ref{top2}) and (\ref{top3}), let us investigate the vortex solutions produced by Eqs. (\ref{B1}) and (\ref{B2}). To study these solutions, we will use the numerical interpolation method. Thus, in Fig. \ref{fig2}, the numerical solutions are displayed. Fig. \ref{fig2}(a) corresponds to the matter field solutions of the topological sector for the $\Phi$ field. On the other hand, Fig. \ref{fig2}(b) corresponds to the topological solutions of the gauge field.
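To make the numerical procedure concrete, the snippet below is a minimal sketch (not the exact code used for the figures) of how Eqs. (\ref{B1}) and (\ref{B2}) can be integrated with an off-the-shelf solver; the lower signs are taken, the seed value $f(r_0)=0.01$ avoids the trivial fixed point $f=0$, and $N=1$, $\alpha=2$ are illustrative choices.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

N, alpha = 1, 2.0                 # winding number and alpha (illustrative)

def rhs(r, y):
    f, a = y
    T = np.tanh(alpha * np.log(r))            # tanh(ln r^alpha)
    s = 2.0 * r * (1.0 - T**2) - (1.0 - T**2)**2
    da = -(alpha / N) * np.sqrt(max(s, 0.0))  # Eq. (B2), lower sign
    df = -(N / r) * (a - 1.0) * np.sin(f)     # Eq. (B1), lower sign
    return [df, da]

r0, rmax = 1e-3, 50.0
sol = solve_ivp(rhs, (r0, rmax), [0.01, 0.0], rtol=1e-8, atol=1e-10)
print(sol.y[0, -1], sol.y[1, -1])  # f -> pi and a -> -beta, cf. the
                                   # boundary conditions above
\end{verbatim}
Of course, a shooting or collocation scheme for the full boundary-value problem is needed for quantitatively reliable profiles; the sketch only illustrates the qualitative behavior of the solutions.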
\begin{figure}[ht!]
\centering
\includegraphics[height=6cm,width=7.5cm]{sigma.pdf}
\includegraphics[height=6cm,width=7.5cm]{gauge.pdf}\vspace{-1cm}
\begin{center}
(a) \hspace{7cm} (b)
\end{center}
\vspace{-1cm}
\caption{(a) Solution of the field variable of the O(3)-sigma model. (b) Solution of the gauge field. In both plots, the dotted line is the curve when $\alpha=1$, while the other curves correspond to $\alpha=2,4,8,16$ and $32$.}
\label{fig2}
\end{figure}
Using the numerical solutions of the matter field (\ref{B1}) and the gauge field (\ref{B2}), one can analyze the magnetic field and the energy density (\ref{EBPS}) of the vortex. Let us start our analysis by investigating the vortex magnetic field. To perform this analysis, we recall that Eq. (\ref{MagneticF}) gives us the magnetic field. Thus, substituting the numerical solution of the gauge field in Eq. (\ref{MagneticF}), we obtain the vortex magnetic field. We expose the magnetic field in Fig. \ref{fig3}. This result shows us an interesting property of the vortex, i. e., the ring-like magnetic field. This feature is what we call a ring-like vortex. For more details, see Refs. \cite{Dionisio2,LA}. We discuss more physical implications of these results in the final remarks.
\begin{figure}[!p]
\centering
\includegraphics[height=4.5cm,width=6cm]{B1.pdf}
\includegraphics[height=4.5cm,width=5cm]{BP1.pdf}
\includegraphics[height=4.5cm,width=6cm]{B2.pdf}
\includegraphics[height=4.5cm,width=5cm]{BP2.pdf}
\includegraphics[height=4.5cm,width=6cm]{B3.pdf}
\includegraphics[height=4.5cm,width=5cm]{BP3.pdf}
\includegraphics[height=4.5cm,width=6cm]{B4.pdf}
\includegraphics[height=4.5cm,width=5cm]{BP4.pdf}
\includegraphics[height=4.5cm,width=6cm]{B5.pdf}
\includegraphics[height=4.5cm,width=5cm]{BP5.pdf}
\vspace{-0.7cm}
\caption{Magnetic field varying $\alpha$.}
\label{fig3}
\end{figure}
By Eq. (\ref{EBPS}), the BPS energy density in terms of the field variable is
\begin{align}\label{DenergyBPS}
\mathcal{E}(r)=\mp\frac{N[a(r)-1]}{r}f'(r)\sin f(r)\mp\frac{Na'(r)}{er}\sqrt{2\mathcal{U}}\mp\frac{W_\psi\partial_i\psi}{r}.
\end{align}
Thus, substituting the numerical solutions of Eqs. (\ref{B1}) and (\ref{B2}) into Eq. (\ref{DenergyBPS}), the BPS energy density of the structure is obtained. Fig. \ref{fig4} shows the numerical solution of the BPS energy density. Analyzing the BPS energy density (see Fig. \ref{fig4}), we highlight the interesting appearance of internal structures.
\begin{figure}[p]
\centering
\includegraphics[height=5cm,width=6cm]{E.pdf}
\includegraphics[height=5cm,width=5cm]{EP1.pdf}
\includegraphics[height=5cm,width=6cm]{E2.pdf}
\includegraphics[height=5cm,width=5cm]{EP2.pdf}
\includegraphics[height=5cm,width=6cm]{E3.pdf}
\includegraphics[height=5cm,width=5cm]{EP3.pdf}
\includegraphics[height=5cm,width=6cm]{E4.pdf}
\includegraphics[height=5cm,width=5cm]{EP4.pdf}
\includegraphics[height=5cm,width=6cm]{E5.pdf}
\includegraphics[height=5cm,width=5cm]{EP5.pdf}
\vspace{-0.7cm}
\caption{Vortex energy density varying $\alpha$.}
\label{fig4}
\end{figure}
\section{Final remarks}
In this work, we studied the vortex solutions of a multi-field theory. The model proposed has a canonical field, i. e., the field describing the O(3)-sigma model, and a non-canonical field, i. e., the field $\psi$. Furthermore, it is considered that $\Phi$ is non-minimally coupled with the gauge field. Thus, the vortices produced have an anomalous contribution from the magnetic dipole momentum.
We consider that the scalar field dynamics have canonical and non-canonical contributions. These contributions are, respectively, $\frac{1}{2}\partial_\mu\psi\partial^\mu\psi$ and $\eta\sqrt {\vert\partial_\mu\psi\partial^\mu\psi\vert}$. The non-canonical contribution is what is known as the cuscuton. The cuscuton term is interesting since its contribution in the stationary case is trivial. Thus, the equation of motion will only have contributions from the canonical terms in the stationary limit. However, in this case, the cuscuton term will have a non-trivial contribution to the energy density of the structures. Therefore, in the stationary BPS limit, the cuscuton acts as an impurity of the theory. It is worthwhile to mention that the cuscuton, in this scenario, is interpreted as an impurity only in the topological sector of the sigma field. Indeed, this is a consequence of dealing with a vacuumless theory, i.e., $\mathcal{V}=0$.
Furthermore, the proposed vacuumless multi-field model proved to support electrically neutral vortices that engender an interesting internal structure. Besides, the magnetic field of the vortices has a ring-like shape. Note that these ring structures become well defined as the contribution of the cuscuton increases, i.e., when the $\alpha$ parameter increases. Consequently, as $\eta$ increases, the flux of the magnetic field increases, and therefore the energy radiated by the vortex increases. In general, we can interpret this as a consequence of the behavior of the matter field and the gauge field in the topological sector of the sigma model. These fields have a very peculiar behavior, i.e., when the contribution of the cuscuton term (the impurity) increases, the matter field and the gauge field become more compact. That occurs due to the localization of the kink in the topological sector of $\psi$ around $r=1$.
Finally, allow us to mention that theories of supersymmetric vortices are a subject of growing interest. That is because these theories generalize particle-vortex dualities. Thus, one expects such dualities to have applications in condensed matter physics. Therefore, a future perspective of this work is the study of particle-vortex duality in our theory. Furthermore, one can build extensions of this theory by implementing these structures in dielectric media. We hope to carry out these studies soon.
\section{Acknowledgment}
The authors thank the Conselho Nacional de Desenvolvimento Cient\'{i}fico e Tecnol\'{o}gico (CNPq), grant n$\textsuperscript{\underline{\scriptsize o}}$ 309553/2021-0 (CASA) and the Coordena\c{c}\~{a}o de Aperfei\c{c}oamento de Pessoal de N\'{i}vel Superior (CAPES), grant n$\textsuperscript{\underline{\scriptsize o}}$ 88887.372425/2019-00 (FCEL), for financial support.
\section{Introduction}
With the increasing connectivity between devices, the interest in distributed solutions for estimation \cite{Olfati-Saber2007}, control \cite{Garin2010} and machine learning \cite{Forero2010} has been rapidly growing. In particular, the problem of parameter estimation over networks has been extensively studied, especially in the context of Wireless Sensor Networks (WSNs). The methods designed to solve this identification problem can be divided into three groups: incremental approaches \cite{Lopes2007}, diffusion approaches \cite{Cattivelli2008} and consensus-based distributed strategies \cite{Mateos2009}. Due to the low communication power of the nodes in WSNs, research has mainly been devoted to fully distributed approaches, i.e. methods that allow exchanges of information between neighbor nodes only. Even though such a choice reduces multi-hop transmissions and improves robustness to node failures, these strategies allow only neighbor nodes to communicate and thus to reach consensus. As a consequence, to attain consensus over the overall network, its topology has to be chosen to enable exchanges of information between the different groups of neighbor nodes.\\
At the same time, with recent advances in cloud computing \cite{Mell2011}, it has now become possible to acquire and release resources with minimum effort, so that each node can have on-demand access to shared resources, theoretically characterized by unlimited storage space and computational power. This motivates reconsidering the approach towards a more centralized strategy, where some computations are performed at the node level, while the most time- and memory-consuming ones are executed \textquotedblleft on the cloud\textquotedblright. This requires communication between the nodes and a fusion center, i.e. the \textquotedblleft cloud\textquotedblright, where the data gathered from the nodes are properly merged.\\
Cloud computing has been considered for automotive vehicle applications in \cite{Li2016}-\cite{Li2017} and \cite{Ozatay2014}. As a motivating example of another possible automotive application, consider a vehicle fleet with vehicles connected to the \textquotedblleft cloud\textquotedblright \ (see~\figurename{~\ref{Fig:toy_example}}).
\begin{figure}
\hspace{0cm}
\centering
\resizebox{8cm}{8cm}{
\begin{tikzpicture}[auto, node distance=2cm,>=latex']
\node [draw, cloud, fill=gray!3, cloud puffs=10, aspect=2, inner sep=0cm] (Main) {
\begin{tikzpicture}
\node [communication block] (A) {\begin{tabular}{c} \underline{\Large{Communication Layer}} \vspace{0.2cm}\\
\begin{tikzpicture}
\node [collect block] (A1) {\Large{Collect information}};
\node [broadcasting block, left of=A1, node distance=5cm] (D) {\begin{tabular}{c}\Large{Broadcast} \\
\Large{Global Updates}\end{tabular}};
\end{tikzpicture}\end{tabular}};
\node [operational block, above of=A, node distance=3cm] (B) {\Large{Global Updates}};
\node [aid node, left of=A, node distance=2.9cm] (Aidl1){};
\node [aid node, above of=Aidl1, node distance=0.3cm] (Aidl2){};
\draw [->,thick, black!70!green,bend right] (B) to[] node[name=connection2] {} (Aidl2);
\node [aid node, right of=A, node distance=2.9cm] (Aidr1){};
\node [aid node, above of=Aidr1, node distance=0.3cm] (Aidr2){};
\draw [->,thick, black!50!red,bend right] (Aidr2) to[] node[name=connection1] {} (B);
\node [cloud node, above right of=connection1, node distance=1.5cm] (C) {\textcolor{black}{\underline{\textbf{\Large{CLOUD}}}}};
\end{tikzpicture}};
\node [car block main, below of= A,node distance=9cm] (1) {
\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=2.8cm,
semithick]
\tikzstyle{every state}=[fill=red,draw=none,text=white]
\node[] (Ain) {\includegraphics[scale=0.5]{car-eps-converted-to}};
\node[operational block, below of=1, node distance=1.5cm] (Bin) [below of=Ain] {\Large{Local Updates}};
\node[aid node, left of=Bin, node distance=1.6cm] (Bin1) {};
\node[aid node, right of=Bin, node distance=1.65cm] (Bin2) {};
\path (Ain) edge[bend right,thick] node {} (Bin1)
(Bin2) edge [bend right,thick] node {} (Ain);
\end{tikzpicture}
};
\node [dot block, right of=1, node distance=4cm] (1r) {$\boldsymbol{\cdots}\boldsymbol{\cdots}$};
\node [car block, right of=1r, node distance=3cm] (2) {\includegraphics[scale=0.3]{car-eps-converted-to}};
\node [dot block, left of=1, node distance=4cm] (1l) {$\boldsymbol{\cdots}\boldsymbol{\cdots}$};
\node [car block, left of=1l, node distance=3cm] (3) {\includegraphics[scale=0.3]{car-eps-converted-to}};
\node [aid node, below of=A, node distance=2.35cm] (baseaid1){};
\node [aid node, right of=baseaid1, node distance=0.1cm] (aid11){};
\node [aid node, left of=baseaid1, node distance=0.1cm] (aid12){};
\node [aid node, right of=baseaid1, node distance=1cm] (baseaid1r){};
\node [aid node, right of=baseaid1r, node distance=0.1cm] (aid1r1){};
\node [aid node, left of=baseaid1r, node distance=0.1cm] (aid1r2){};
\node [aid node, left of=baseaid1, node distance=1cm] (baseaid1l){};
\node [aid node, right of=baseaid1l, node distance=0.1cm] (aid1l1){};
\node [aid node, left of=baseaid1l, node distance=0.1cm] (aid1l2){};
\node [aid node, above of=1, node distance=2.7cm] (baseaidcar1){};
\node [aid node, right of=baseaidcar1, node distance=0.1cm] (aidcar11){};
\node [aid node, left of=baseaidcar1, node distance=0.1cm] (aidcar12){};
\node [aid node, above of=2, node distance=1cm] (baseaidcar2){};
\node [aid node, right of=baseaidcar2, node distance=-0.9cm] (aidcar21){};
\node [aid node, left of=baseaidcar2, node distance=1.2cm] (aidcar22){};
\node [aid node, above of=3, node distance=1cm] (baseaidcar3){};
\node [aid node, right of=baseaidcar3, node distance=1.2cm] (aidcar31){};
\node [aid node, left of=baseaidcar3, node distance=-0.9cm] (aidcar32){};
\draw [->,dashed,very thick,black!50!red] (aidcar11) to[] node[auto] {} (aid11);
\draw [->,dashed,very thick,black!50!red] (aidcar21) to[] node[auto] {} (aid1r1);
\draw [->,dashed,very thick,black!50!red] (aidcar31) to[] node[auto] {} (aid1l1);
\draw [->,dashed,very thick, black!70!green] (aid1l2) to[] node[auto] {} (aidcar32);
\draw [->,dashed,very thick, black!70!green] (aid1r2) to[] node[auto] {} (aidcar22);
\draw [->,dashed,very thick, black!70!green] (aid12) to[] node[auto] {} (aidcar12);
\end{tikzpicture}}
\caption{Cloud-connected vehicles.}
\label{Fig:toy_example}
\vspace{-0.5cm}
\end{figure}
In such a setting, measurements taken on-board of the vehicles can be used for cloud-based diagnostics and prognostics purposes. In particular, the measurements can be used to estimate parameters that may be common to all vehicles, such as parameters in components wear models or fuel consumption models, and parameters that may be specific to individual vehicles. References \cite{Taheri2016} and \cite{howell2010brake} suggest potential applications of such approaches for prognostics of automotive fuel pumps and brake pads. Specifically, the component wear rate as a function of the workload (cumulative fuel flow or energy dissipated in the brakes) can be common to all vehicles or at least to all vehicles in the same class.\\
\noindent A related distributed diagnostic technique has been proposed in \cite{Boem2012}. However, it relies on a fully distributed scheme, introduced to reduce long-distance transmissions and to avoid the presence of a \textquotedblleft critical\textquotedblright \ node in the network, i.e. a node whose failure causes the entire diagnostic strategy to fail.\\
In this report, a centralized approach for recursive estimation of parameters in the least-squares sense is presented. The method has been designed under the hypotheses that ($i$) transmission is ideal, i.e. the information exchanged between the cloud and the nodes is not corrupted by noise, and that ($ii$) all the nodes are described by the same model, which is supposed to be known a priori. Differently from what is done in many distributed estimation methods (e.g. see~\cite{Mateos2009}), where the nodes estimate common unknown parameters, the strategy we propose accounts for more general consensus constraints. As a consequence, for example, the method can be applied to problems where only a subset of the unknowns is common to all the nodes, while the other parameters are purely local, i.e. they are different for each node.\\
Our estimation approach is based on defining a separable optimization problem which is then solved through the Alternating Direction Method of Multipliers (ADMM), similarly to what has been done in \cite{Mateos2009} but in a somewhat different setting. As shown in \cite{Mateos2009}, the use of ADMM leads to the introduction of two time scales based on which the computations have to be performed. In particular, the local time scale is determined by the nodes' clocks, while the cloud time scale depends on the characteristics of the resources available in the center of fusion and on the selected stopping criteria, used to terminate the ADMM iterations.\\
The estimation problem is thus solved through a two-step strategy. In particular: ($i$) local estimates are recursively retrieved by each node using the measurements acquired from the sensors available locally; ($ii$) global computations are performed to refine the local estimates, which are supposed to be transmitted to the cloud by each node. Note that, based on the aforementioned characteristics, back and forth transmissions to the cloud are required. A transmission scheme referred to as Node-to-Cloud-to-Node (N2C2N) is thus employed.\\
The main features of the proposed strategies are: ($i$) the use of recursive formulas to update the local estimates of the unknown parameters; ($ii$) the possibility to account for the presence of both purely local and global parameters, that can be estimated in parallel; ($iii$) the straightforward integration of the proposed techniques with pre-existing Recursive Least-Squares (RLS) estimators already running on board of the nodes.\\
The report is organized as follows. In Section~\ref{Sec:ADMM_base}, ADMM is introduced, while Section~\ref{Sec:prob_for} is devoted to the statement of the considered problem. The approach for collaborative estimation with full consensus is presented in Section~\ref{Sec:1}, along with the results of simulation examples that show the effectiveness of the approach and its performance in different scenarios. In Section~\ref{Sec:2} and Section~\ref{Sec:3}, the methods for collaborative estimation with partial consensus and for constrained collaborative estimation with partial consensus are described, respectively. Results of simulation examples are also reported. Concluding remarks and directions for future research are summarized in Section~\ref{Sec:Conclusions}.
\subsection{Notation}
Let $\mathbb{R}^{\mathsf{n}}$ be the set of real vectors of dimension $\mathsf{n}$ and $\mathbb{R}^{+}$ be the set of positive real numbers, excluding zero. Given a set $\mathcal{A}$, let $\breve{\mathcal{A}}$ be the complement of $\mathcal{A}$. Given a vector $a\in\mathbb{R}^{\mathsf{n}}$, $\|a\|_2$ is the Euclidean norm of $a$. Given a matrix $A\in\mathbb{R}^{\mathsf{n}\times \mathsf{p}}$, $A'$ denotes the transpose of $A$. Given a set $\mathcal{A}$, let $\mathcal{P}_{\mathcal{A}}$ denote the Euclidean projection onto $\mathcal{A}$. Let $I_\mathsf{n}$ be the identity matrix of size $\mathsf{n}$ and $0_\mathsf{n}$ be an $\mathsf{n}$-dimensional column vector of zeros.
\section{Alternating Direction Method of Multipliers} \label{Sec:ADMM_base}
The Alternating Direction Method of Multipliers (ADMM) \cite{ADMMBoyd} is an algorithm tailored for problems in the form
\begin{equation}\label{eq:ADMMprob}
\begin{aligned}
&\mbox{minimize } && f(\theta)+g(z)\\
&\mbox{subject to } && A\theta+Bz=c,
\end{aligned}
\end{equation}
where $\theta \in \mathbb{R}^{n_{\theta}}$, $z \in \mathbb{R}^{n_{z}}$, $f:\mathbb{R}^{n_{\theta}} \rightarrow \mathbb{R}\cup\{+\infty\}$ and $g: \mathbb{R}^{n_{z}} \rightarrow \mathbb{R}\cup\{+\infty\}$ are closed, proper, convex functions and $A \in \mathbb{R}^{p\times n_{\theta}}$, $B \in \mathbb{R}^{p \times n_{z}}$, $c \in \mathbb{R}^{p}$.\\
To solve Problem~\eqref{eq:ADMMprob}, the ADMM iterations to be performed are
\begin{align}
& \theta^{(k+1)}=\underset{\theta}{\argmin} \ \mathcal{L}(\theta,z^{(k)},\delta^{(k)}), \label{admmStep:1} \\
& z^{(k+1)}=\underset{z}{\argmin} \ \mathcal{L}(\theta^{(k+1)},z,\delta^{(k)}), \label{admmStep:2}\\
& \delta^{(k+1)}=\delta^{(k)}+\rho(A\theta^{(k+1)}+Bz^{(k+1)}-c), \label{admmStep:3}
\end{align}
where $k \in \mathbb{N}$ indicates the ADMM iteration, $\mathcal{L}$ is the augmented Lagrangian associated to \eqref{eq:ADMMprob}, i.e.
\begin{equation}\label{eq:aLag1}
\mathcal{L}(\theta,z,\delta)=f(\theta)+g(z)+\delta'\left(A\theta+Bz-c\right)+\frac{\rho}{2}\left\|A\theta+Bz-c\right\|_{2}^{2},
\end{equation}
$\delta \in \mathbb{R}^{p}$ is the Lagrange multiplier and $\rho \in \mathbb{R}^{+}$ is a tunable parameter (see~\cite{ADMMBoyd} for possible tuning strategies). Iterations \eqref{admmStep:1}-\eqref{admmStep:3} have to be run until a stopping criterion is satisfied, e.g. until the maximum number of iterations is attained.\\
It has to be remarked that the convergence of ADMM to high-accuracy results might be slow (see~\cite{ADMMBoyd} and references therein). However, the results obtained with a few tens of iterations are usually accurate enough for most applications. For further details, the reader is referred to \cite{ADMMBoyd}.
\subsection{ADMM for constrained convex optimization}
Suppose that the problem to be addressed is
\begin{equation}\label{eq:constr_probl}
\begin{aligned}
& \min_{\theta} &&\ f(\theta)\\
& \mbox{s.t. }&& \theta \in \mathcal{C},
\end{aligned}
\end{equation}
with $\theta \in \mathbb{R}^{n_\theta}$, $f: \mathbb{R}^{n_{\theta}}\rightarrow \mathbb{R}\cup\{+\infty\}$ being a closed, proper, convex function and $\mathcal{C}$ being a convex set, representing constraints on the parameter value.\\
As explained in \cite{ADMMBoyd}, \eqref{eq:constr_probl} can be recast in the same form as \eqref{eq:ADMMprob} through the introduction of the auxiliary variable $z \in \mathbb{R}^{n_{\theta}}$ and the indicator function of set $\mathcal{C}$, i.e.
\begin{equation}\label{eq:ind_func}
g(z)=\begin{cases} 0 &\mbox{ if } z \in \mathcal{C}\\
+\infty &\mbox{ otherwise}
\end{cases}.
\end{equation}
In particular, \eqref{eq:constr_probl} can be equivalently stated as
\begin{equation}\label{eq:constr_prob2}
\begin{aligned}
& \min_{\theta,z} && f(\theta)+g(z)\\
& \mbox{s.t.} && \theta-z=0.
\end{aligned}
\end{equation}
Then, the ADMM scheme to solve \eqref{eq:constr_prob2} is
\begin{align}
\theta^{(k+1)}&=\underset{\theta}{\argmin} \ \mathcal{L}(\theta,z^{(k)},\delta^{(k)}),\\
z^{(k+1)}&=\mathcal{P}_{\mathcal{C}}(\theta^{(k+1)}+\delta^{(k)}),\\
\delta^{(k+1)}&=\delta^{(k)}+\rho(\theta^{(k+1)}-z^{(k+1)})
\end{align}
with $\mathcal{L}$ equal to
\begin{equation*}
\mathcal{L}(\theta,z,\delta)=f(\theta)+g(z)+\delta'(\theta-z)+\frac{\rho}{2}\|\theta-z\|_{2}^{2}.
\end{equation*}
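As an illustration, the snippet below is a minimal sketch of these iterations for a least-squares cost and a box constraint set $\mathcal{C}$ (all data and the choice $\rho=1$ are illustrative; note that the projection is applied to $\theta+\delta/\rho$, i.e. the dual variable is rescaled by $\rho$):
\begin{verbatim}
import numpy as np

# min 0.5*||A@theta - b||^2  s.t.  lb <= theta <= ub
rng = np.random.default_rng(0)
A, b = rng.standard_normal((30, 4)), rng.standard_normal(30)
lb, ub, rho = -0.5 * np.ones(4), 0.5 * np.ones(4), 1.0

theta, z, delta = np.zeros(4), np.zeros(4), np.zeros(4)
H = np.linalg.inv(A.T @ A + rho * np.eye(4))    # cached theta-step matrix
for _ in range(100):
    theta = H @ (A.T @ b + rho * z - delta)     # theta-update
    z = np.clip(theta + delta / rho, lb, ub)    # projection onto the box C
    delta = delta + rho * (theta - z)           # dual update
print(theta)
\end{verbatim}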
\subsection{ADMM for consensus problems}
Consider the optimization problem given by
\begin{equation}\label{eq:gen_consensus}
\min_{\theta^{g}} \sum_{n=1}^{N} f_{n}(\theta^{g}),
\end{equation}
where $\theta^{g} \in \mathbb{R}^{n_{\theta}}$ and each term of the objective, i.e. $f_{n}: \mathbb{R}^{n_{\theta}} \rightarrow \mathbb{R} \cup \{+\infty \}$, is a proper, closed, convex function.\\
Suppose that $N$ processors are available to solve \eqref{eq:gen_consensus} and that, consequently, we are not interested in a centralized solution of the consensus problem. As explained in \cite{ADMMBoyd}, ADMM can be used to reformulate the problem so that each term of the cost function in \eqref{eq:gen_consensus} is handled by its own processor.\\
In particular, \eqref{eq:gen_consensus} can be reformulated as
\begin{equation}\label{eq:rewritten_consensus}
\begin{aligned}
& \mbox{minimize } && \sum_{n=1}^{N} f_{n}(\theta_{n})\\
& \mbox{subject to } &&\theta_{n}-\theta^{g}=0\hspace{1cm} n=1,\ldots,N.
\end{aligned}
\end{equation}
Note that, thanks to the introduction of the consensus constraint, the cost function in \eqref{eq:rewritten_consensus} is now separable.\\
The augmented Lagrangian correspondent to \eqref{eq:rewritten_consensus} is given by
\begin{equation}\label{eq:lag_cons}
\mathcal{L}(\{\theta_{n}\}_{n=1}^{N},\theta^{g},\{\delta_{n}\}_{n=1}^{N})=\sum_{n=1}^{N}\left(f_{n}(\theta_{n})+\delta_{n}'(\theta_{n}-\theta^{g})+\frac{\rho}{2} \left\|\theta_{n}-\theta^{g}\right\|_{2}^{2}\right),
\end{equation}
and the ADMM iterations are
\begin{align}
&\theta_{n}^{(k+1)}=\underset{\theta_{n}}{\argmin} \hspace{0.15cm} \mathcal{L}_{n}(\theta_{n},\delta_{n}^{(k)},\theta^{g,(k)}), \ n=1,\ldots,N \label{eq:admm_cons:setp1}\\
&\theta^{g,(k+1)}=\frac{1}{N}\sum_{n=1}^{N} \left(\theta_{n}^{(k+1)}+\frac{1}{\rho}\delta_{n}^{(k)}\right), \label{eq:admm_cons:setp2}\\
&\delta_{n}^{(k+1)}=\delta_{n}^{(k)}+\rho\left(\theta_{n}^{(k+1)}-\theta^{g,(k+1)}\right), \ n=1,\ldots,N \label{eq:admm_cons:setp3}
\end{align}
with
\begin{equation*}
\mathcal{L}_{n}=f_{n}(\theta_{n})+(\delta_{n})'(\theta_{n}-\theta^{g})+\frac{\rho}{2}\|\theta_{n}-\theta^{g}\|_{2}^{2}.
\end{equation*}
Note that, while \eqref{eq:admm_cons:setp1} and \eqref{eq:admm_cons:setp3} can be carried out independently by each agent $n \in \{1,\ldots,N\}$, \eqref{eq:admm_cons:setp2} depends on all the updated local estimates. The global estimate should thus be updated in a \textquotedblleft fusion center\textquotedblright, where all the local estimates are collected and merged.
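To fix ideas, a minimal sketch of iterations \eqref{eq:admm_cons:setp1}-\eqref{eq:admm_cons:setp3} for local least-squares costs is reported below (synthetic data; the values of $N$, $\rho$ and the noise level are illustrative):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N, n_theta, rho = 5, 3, 1.0
theta_true = np.array([1.0, -2.0, 0.5])
A = [rng.standard_normal((20, n_theta)) for _ in range(N)]
b = [An @ theta_true + 0.1 * rng.standard_normal(20) for An in A]

theta = [np.zeros(n_theta) for _ in range(N)]
delta = [np.zeros(n_theta) for _ in range(N)]
theta_g = np.zeros(n_theta)
for _ in range(50):
    # local updates, run in parallel by the N agents
    theta = [np.linalg.solve(A[n].T @ A[n] + rho * np.eye(n_theta),
                             A[n].T @ b[n] - delta[n] + rho * theta_g)
             for n in range(N)]
    # global update, run on the fusion center
    theta_g = np.mean([theta[n] + delta[n] / rho for n in range(N)], axis=0)
    # dual updates, again local
    delta = [delta[n] + rho * (theta[n] - theta_g) for n in range(N)]
print(theta_g)   # close to theta_true after a few tens of iterations
\end{verbatim}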
\section{Problem statement}\label{Sec:prob_for}
Assume that ($i$) measurements acquired by $N$ agents are available and that ($ii$) the behavior of the $N$ data-generating systems is described by the same known model. Suppose that some parameters of the model, $\theta_{n} \in \mathbb{R}^{n_{\theta}}$ with $n=1,\ldots,N$, are unknown and that their value has to be retrieved from data. As the agents share the same model, it is also legitimate to assume that ($iii$) there exists a set of parameters $\theta^{g} \in \mathbb{R}^{n_{g}}$, with $n_{g} \leq n_{\theta}$, common to all the agents.\\
We aim at ($i$) retrieving \emph{local} estimates of $\{\theta_{n}\}_{n=1}^{N}$, employing information available at the local level only, and ($ii$) identifying the \emph{global} parameter $\theta^{g}$ at the \textquotedblleft cloud\textquotedblright \ level, using the data collected from all the available sources. To accomplish these tasks, ($i$) $N$ local processors and ($ii$) a \textquotedblleft cloud\textquotedblright, where the data are merged, are needed.\\
The considered estimation problem can be cast into a separable optimization problem, given by
\begin{equation}\label{eq:problem}
\begin{aligned}
& \min_{\theta_{n}} && \sum_{n=1}^{N} f_{n}(\theta_{n})\\
& \mbox{s.t. } && F(\theta_{n})=\theta^{g},\\
&&& \theta_{n} \in \mathcal{C}_{n}, \ n=1,\ldots,N
\end{aligned}
\end{equation}
where $f_{n}: \mathbb{R}^{n_{\theta}} \rightarrow \mathbb{R}\cup\{+\infty\}$ is a closed, proper, convex function, $F: \mathbb{R}^{n_{\theta}} \rightarrow \mathbb{R}^{n_{g}}$ is a nonlinear operator and $\mathcal{C}_{n} \subset \mathbb{R}^{n_{\theta}}$ is a convex set representing constraints on the parameter values. Note that constraints on the value of the global parameter can be enforced if $\mathcal{C}_{n}=\mathcal{C}\cup\{\mathcal{C}_{n}\cap \breve{\mathcal{C}}\}$, with $\theta \in \mathcal{C}$.\\
Assume that the available data are the output/regressor pairs collected from each agent $n \in \{1,\ldots,N\}$ over an horizon of length $T \in \mathbb{N}$, i.e. $\{y_{n}(t),X_{n}(t)\}_{t=1}^{T}$. Relying on the hypothesis that the regressor/output relationship is well modelled as
\begin{equation}\label{eq:model_base}
y_{n}(t)=X_{n}(t)'\theta_{n}+e_{n}(t),
\end{equation}
with $e_{n}(t) \in \mathbb{R}^{n_{y}}$ being a zero-mean additive noise independent of the regressor $X_{n}(t) \in \mathbb{R}^{n_{\theta}\times n_{y}}$, we will focus on developing a recursive algorithm to solve \eqref{eq:problem} with the local cost functions given by
\begin{equation}\label{eq:least_sqCost}
f_{n}(\theta_{n})=\frac{1}{2}\sum_{t=1}^{T} \lambda_{n}^{T-t}\left\|y_{n}(t)-X_{n}(t)'\theta_{n}\right\|_{2}^{2}.
\end{equation}
The forgetting factor $\lambda_{n} \in (0,1]$ is introduced to be able to estimate time-varying parameters. Note that different forgetting factors can be chosen for different agents.
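Since the local updates used throughout the report build on standard RLS, a minimal single-output sketch of one RLS step with forgetting factor is reported below (the initialization $\phi_{n}(0)=100\,I$ and the data are illustrative; see \cite{ljung1999system} for the general formulation):
\begin{verbatim}
import numpy as np

def rls_step(theta, phi, x, y, lam):
    # one step of standard RLS with forgetting factor lam (scalar output)
    x = x.reshape(-1, 1)
    K = phi @ x / (lam + x.T @ phi @ x)           # gain K_n(t)
    theta = theta + (K * (y - x.T @ theta)).ravel()
    phi = (phi - K @ x.T @ phi) / lam             # matrix phi_n(t)
    return theta, phi

theta, phi = np.zeros(2), 100.0 * np.eye(2)       # illustrative initialization
for t in range(50):
    x = np.array([np.sin(0.1 * t), 1.0])
    y = x @ np.array([2.0, -1.0])                 # noiseless toy data
    theta, phi = rls_step(theta, phi, x, y, lam=0.99)
print(theta)                                      # approaches [2, -1]
\end{verbatim}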
\begin{remark}{\textbf{ARX models}}\\
Suppose that an AutoRegressive model with eXogenous inputs (ARX) has to be identified from data. The input/output relationship is thus given by
\begin{align}\label{eq:arx_io}
\nonumber y(t)&=\theta_{1}y(t-1)+\ldots+\theta_{n_{a}}y(t-n_{a})+\\
&\hspace{1cm}+\theta_{n_{a}+1}u(t-n_{k}-1)+\ldots+\theta_{n_{a}+n_{b}}u(t-n_{k}-n_{b})+e(t)
\end{align}
where $u$ is the deterministic input, $\{n_{a}, n_{b}\}$ indicate the order of the system, $n_{k}$ is the input/output delay.\\
Note that \eqref{eq:arx_io} can be recast as the output/regressor relationship with the regressor defined as
\begin{equation}\label{eq:reg}
X(t)=\begin{bmatrix}y(t-1)' & \ldots & y(t-n_{a})' & u(t-n_{k}-1)' & \ldots & u(t-n_{k}-n_{b})'\end{bmatrix}'
\end{equation}
It is worth pointing out that, in the considered framework, the parameters $n_{a}$, $n_{b}$ and $n_{k}$ are the same for all the $N$ agents, as they are supposed to be described by the same model. \hfill $\blacksquare$
\end{remark}
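For illustration, a hypothetical helper constructing the regressor \eqref{eq:reg} for a single-input single-output ARX model is sketched below (the orders and data are toy values):
\begin{verbatim}
import numpy as np

def arx_regressor(y, u, t, na, nb, nk):
    # regressor X(t) of the ARX model for scalar y and u
    past_y = [y[t - i] for i in range(1, na + 1)]
    past_u = [u[t - nk - i] for i in range(1, nb + 1)]
    return np.array(past_y + past_u)

y = np.arange(10.0)                  # toy output sequence
u = np.ones(10)                      # toy input sequence
X = arx_regressor(y, u, t=5, na=2, nb=2, nk=0)
print(X)                             # [y(4), y(3), u(4), u(3)]
\end{verbatim}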
\section{Collaborative estimation for full consensus}\label{Sec:1}
Suppose that the problem to be solved is \eqref{eq:gen_consensus}, i.e. we are aiming at achieving full consensus among the $N$ agents. Consequently, the consensus constraint in \eqref{eq:problem} has to be modified as
\begin{equation*}
F(\theta_{n})=\theta^{g} \rightarrow \theta_{n}=\theta^{g}
\end{equation*}
and $\mathcal{C}_{n}=\mathbb{R}^{n_{\theta}}$, so that $\theta_{n} \in \mathcal{C}_{n}$ can be neglected for $n=1,\ldots,N$. Moreover, as we are focusing on the problem of collaborative least-squares estimation, we are interested in the particular case in which the local cost functions in \eqref{eq:rewritten_consensus} are equal to \eqref{eq:least_sqCost}.\\
Even though the considered problem can be solved in a centralized fashion, our goal is to obtain estimates of the unknown parameters both ($i$) at a local level and ($ii$) on the \textquotedblleft cloud\textquotedblright. With the objective of distributing the computation among the local processors and the \textquotedblleft cloud\textquotedblright, we propose $5$ approaches to address \eqref{eq:rewritten_consensus}.
\subsection{Greedy approaches}
All the proposed \textquoteleft greedy\textquoteright \ approaches rely on the use, by each local processor, of the standard Recursive Least-Squares (RLS) method (see~\cite{ljung1999system}) to update the local estimates, $\{\hat{\theta}_{n}\}_{n=1}^{N}$. Depending on the approach, $\{\hat{\theta}_{n}\}_{n=1}^{N}$ are then combined on the \textquotedblleft cloud\textquotedblright \ to update the estimate of the global parameter.\\
The first two methods that are used to compute the estimates of the unknown parameters both ($i$) locally and ($ii$) on the \textquotedblleft cloud\textquotedblright \ are:
\begin{enumerate}
\item \textbf{Static RLS (S-RLS)} The estimate of the global parameter is computed as
\begin{equation}\label{eq:mean}
\hat{\theta}^{g}=\frac{1}{N} \sum_{n=1}^{N} \hat{\theta}_{n}(t).
\end{equation}
\item \textbf{Static Weighted RLS (SW-RLS)} Consider the matrices $\{\phi_{n}\}_{n=1}^{N}$, obtained applying standard RLS at each node (see~\cite{ljung1999system}), and assume that $\{\phi_{n}\}_{n=1}^{N}$ are always invertible. The estimate $\hat{\theta}^{g}$ is computed as the weighted average of the local estimates
\begin{equation}\label{eq:w_mean}
\hat{\theta}^{g}=\left(\sum_{n=1}^{N} \phi_{n}(t)^{-1}\right)^{-1}\left(\sum_{n=1}^{N} \phi_{n}(t)^{-1} \hat{\theta}_{n}(t)\right).
\end{equation}
Considering that $\phi_{n}$ is an indicator of the accuracy of the $n$th local estimate, \eqref{eq:w_mean} weights the \textquotedblleft accurate\textquotedblright \ estimates more than the \textquotedblleft inaccurate\textquotedblright \ ones, as also illustrated in the sketch reported after this list.
\end{enumerate}
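A minimal sketch of the two cloud-side combinations \eqref{eq:mean} and \eqref{eq:w_mean} is given below (two agents with illustrative accuracy matrices):
\begin{verbatim}
import numpy as np

def s_rls_global(theta_hats):
    # plain mean of the local estimates, S-RLS
    return np.mean(theta_hats, axis=0)

def sw_rls_global(theta_hats, phis):
    # information-weighted mean of the local estimates, SW-RLS
    infos = [np.linalg.inv(P) for P in phis]
    rhs = sum(I @ th for I, th in zip(infos, theta_hats))
    return np.linalg.solve(sum(infos), rhs)

theta_hats = [np.array([1.0, 2.0]), np.array([1.2, 1.8])]
phis = [0.1 * np.eye(2), 1.0 * np.eye(2)]   # first node more accurate
print(s_rls_global(theta_hats))
print(sw_rls_global(theta_hats, phis))      # closer to the first estimate
\end{verbatim}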
\begin{figure}[!tb]
\centering
\hspace{-1cm}
\begin{tabular}[t]{cc}
\subfigure[S-RLS and SW-RLS\label{Fig:SSchemes}]{
\resizebox{6cm}{5cm}{\begin{tikzpicture}[node distance=1.3cm,>=stealth',bend angle=45,auto,->]
\tikzstyle{agents}=[rectangle,rounded corners,thick,draw=blue!75,fill=blue!20,minimum size=1cm]
\tikzstyle{dots}=[rectangle,fill=white]
\tikzstyle{cloud}=[rectangle,rounded corners,thick,draw=red!75,fill=red!20,minimum size=1cm]
\tikzstyle{aid}=[coordinate]
\node[agents] (1) {$\# 1$};
\node[dots,right of=1, node distance=2cm] (dots1) {$\boldmath{\cdots}$};
\node[aid, above of=1, node distance=2cm] (aidIn1) {};
\node[agents, right of = dots1, node distance=2cm](N) {$\# N$};
\node[aid, above of=N, node distance=2cm] (aidInN) {};
\node[cloud, below of= dots1, node distance=3cm] (C) {CLOUD};
\node[aid, below of=C, node distance=2cm] (aidOut) {};
\path (aidIn1) edge[thick] node[swap] {$\{\hat{\theta}_{1}(0),\phi_{1}(0)\}$} (1)
(aidInN) edge[thick] node {$\{\hat{\theta}_{N}(0),\phi_{N}(0)\}$} (N)
(1) edge[thick] node[swap,yshift=0.5cm,xshift=0.5cm] {\begin{tabular}{c}$\hat{\theta}_{1}(t)$ \\
or\\
$\{\hat{\theta}_{1}(t),\phi_{1}(t)\}$\end{tabular}} (C)
(N) edge[thick] node[yshift=0.5cm,xshift=-0.5cm] {\begin{tabular}{c}$\hat{\theta}_{N}(t)$ \\
or\\
$\{\hat{\theta}_{N}(t),\phi_{N}(t)\}$\end{tabular}} (C)
(C) edge[thick] node {$\hat{\theta}^{g}$} (aidOut);
\end{tikzpicture}}
}
\subfigure[M-RLS and MW-RLS \label{Fig:MSchemes}]{
\resizebox{6cm}{5cm}{\begin{tikzpicture}[node distance=1.3cm,>=stealth',bend angle=45,auto,->]
\tikzstyle{agents}=[rectangle,rounded corners,thick,draw=blue!75,fill=blue!20,minimum size=1cm]
\tikzstyle{dots}=[rectangle,fill=white]
\tikzstyle{cloud}=[rectangle,rounded corners,thick,draw=red!75,fill=red!20,minimum size=1cm]
\tikzstyle{aid}=[coordinate]
\node[agents] (1) {$\# 1$};
\node[dots,right of=1, node distance=2cm] (dots1) {$\boldmath{\cdots}$};
\node[aid, above of=1, node distance=2cm] (aidIn1) {};
\node[agents, right of = dots1, node distance=2cm](N) {$\# N$};
\node[aid, above of=N, node distance=2cm] (aidInN) {};
\node[cloud, below of= dots1, node distance=3.5cm] (C) {CLOUD};
\node[aid, left of=C, node distance=2cm] (aidIn) {};
\node[aid, below of=C, node distance=1cm] (aidOut1) {};
\node[aid, right of=aidOut1, node distance=4cm] (aid1) {};
\node[aid, right of=N, node distance=2cm] (aid2) {};
\node[aid, left of=aidOut1, node distance=4cm] (aid3) {};
\node[aid, left of=1, node distance=2cm] (aid4) {};
\node[aid, below of=aidOut1, node distance=1cm] (aidOut2) {};
\path (aidIn1) edge[thick] node {$\phi_{1}(0)$} (1)
(aidIn) edge[thick,dashed,black!20!red] node[swap] {$\hat{\theta}_{\mathrm{o}}^{g}$} (C)
(aidInN) edge[thick] node {$\phi_{N}(0)$} (N)
(1) edge[thick] node [swap,yshift=0.5cm,xshift=0.5cm]{\begin{tabular}{c}$\hat{\theta}_{1}(t)$ \\
or\\
$\{\hat{\theta}_{1}(t),\phi_{1}(t)\}$\end{tabular}} (C)
(N) edge[thick] node[yshift=0.5cm,xshift=-0.5cm] {\begin{tabular}{c}$\hat{\theta}_{N}(t)$ \\
or\\
$\{\hat{\theta}_{N}(t),\phi_{N}(t)\}$\end{tabular}} (C)
(C) edge[-,thick] node {} (aidOut1)
(aidOut1) edge[-,thick] node {} (aid1)
(aid1) edge[-,thick] node[swap] {$\hat{\theta}^{g}$} (aid2)
(aid2) edge[,thick] node {} (N)
(aidOut1) edge[-,thick] node {} (aid3)
(aid3) edge[-,thick] node {$\hat{\theta}^{g}$} (aid4)
(aid4) edge[,thick] node {} (1)
(aidOut1) edge[thick] node {$\hat{\theta}^{g}$} (aidOut2);
\end{tikzpicture}}}
\end{tabular}
\caption{Greedy approaches. Schematic of the information exchanges between the agents and the \textquotedblleft cloud\textquotedblright.}
\label{Fig:SMschemes}
\end{figure}
S-RLS and SW-RLS allow us to achieve our goal, i.e. ($i$) obtaining a local estimate of the unknowns and ($ii$) computing $\hat{\theta}^{g}$ using all the available information. However, looking at the scheme in \figurename{~\ref{Fig:SSchemes}} and at Algorithm~\ref{algo1}, it can be noticed that the global estimate is not used at the local level.\\
\begin{algorithm}[!tb]
\caption{S-RLS and SW-RLS}
\label{algo1}
~~\textbf{Input}: Sequence of observations $\{X_{n}(t),y_{n}(t)\}_{t=1}^T$, initial matrices $\phi_{n}(0) \in \mathbb{R}^{n_{\theta}\times n_{\theta}}$, initial estimates $\hat{\theta}_{n}(0) \in \mathbb{R}^{n_{\theta}}$, $n=1,\ldots,N$
\vspace*{.1cm}\hrule\vspace*{.1cm}
\begin{enumerate}[label=\arabic*., ref=\theenumi{}]
\item \textbf{for} $t=1,\ldots,T$ \textbf{do}
\begin{itemize}
\item[] \hspace{-0.5cm} \textbf{\underline{Local}}
\begin{enumerate}[label=\theenumi{}.\arabic*., ref=\theenumi{}.\theenumii{}]
\item \textbf{for} $n=1,\ldots,N$ \textbf{do}
\begin{enumerate}[label=\theenumii{}.\arabic*., ref=\theenumi{}.\theenumii{}.\theenumiii{}]
\item \textbf{compute} $K_{n}(t)$, $\phi_{n}(t)$ and $\hat{\theta}_{n}(t)$ with standard RLS \cite{ljung1999system};
\end{enumerate}
\item \textbf{end for};
\end{enumerate}
\item[] \hspace{-0.5cm} \textbf{\underline{Global}}
\begin{enumerate}[label=\theenumi{}.\arabic*., ref=\theenumi{}.\theenumii{}]
\item \textbf{compute} $\hat{\theta}^{g}$;
\end{enumerate}
\end{itemize}
\item \textbf{end}.
\end{enumerate}
\vspace*{.1cm}\hrule\vspace*{.1cm}
~~\textbf{Output}: Local estimates $\{\hat{\theta}_{n}(t)\}_{t=1}^{T}$, $n=1,\ldots,N$, estimated global parameters $\{\hat{\theta}^{g}(t)\}_{t=1}^{T}$.
\end{algorithm}
Thanks to the dependence of $\hat{\theta}^{g}$ on all the available information, the local use of the global estimate might enhance the accuracy of $\{\hat{\theta}_{n}\}_{n=1}^{N}$. Motivated by this observation, we introduce two additional methods:
\begin{enumerate}
\setcounter{enumi}{2}
\item \textbf{Mixed RLS (M-RLS)}
\item \textbf{Mixed Weighted RLS (MW-RLS)}
\end{enumerate}
While M-RLS relies on \eqref{eq:mean}, in MW-RLS the local estimates are combined as in \eqref{eq:w_mean}. However, as shown in \figurename{~\ref{Fig:MSchemes}} and outlined in Algorithm~\ref{algo2}, the global estimate $\hat{\theta}^{g}$ is fed to each local processor and used to update the local estimates in place of their values at the previous step.
\begin{algorithm}[!tb]
\caption{M-RLS and MW-RLS}
\label{algo2}
~~\textbf{Input}: Sequence of observations $\{X_{n}(t),y_{n}(t)\}_{t=1}^T$, initial matrices $\phi_{n}(0) \in \mathbb{R}^{n_{\theta} \times n_{\theta}}$, $n=1,\ldots,N$, initial estimate $\hat{\theta}_{\mathrm{o}}^{g}$.
\vspace*{.1cm}\hrule\vspace*{.1cm}
\begin{enumerate}[label=\arabic*., ref=\theenumi{}]
\item \textbf{for} $t=1,\ldots,T$ \textbf{do}
\begin{itemize}
\item[] \hspace{-0.5cm} \textbf{\underline{Local}}
\begin{enumerate}[label=\theenumi{}.\arabic*., ref=\theenumi{}.\theenumii{}]
\item \textbf{for} $n=1,\ldots,N$ \textbf{do}
\begin{enumerate}[label=\theenumii{}.\arabic*., ref=\theenumi{}.\theenumii{}.\theenumiii{}]
\item \textbf{set} $\hat{\theta}_{n}(t-1)=\hat{\theta}^{g}(t-1)$;
\item \textbf{compute} $K_{n}(t)$, $\phi_{n}(t)$ and $\hat{\theta}_{n}(t)$ with standard RLS \cite{ljung1999system};
\end{enumerate}
\item \textbf{end for};
\end{enumerate}
\item[] \hspace{-0.5cm} \textbf{\underline{Global}}
\begin{enumerate}[label=\theenumi{}.\arabic*., ref=\theenumi{}.\theenumii{}]
\item \textbf{compute} $\hat{\theta}^{g}$;
\end{enumerate}
\end{itemize}
\item \textbf{end}.
\end{enumerate}
\vspace*{.1cm}\hrule\vspace*{.1cm}
~~\textbf{Output}: Local estimates $\{\hat{\theta}_{n}(t)\}_{t=1}^{T}$, $n=1,\ldots,N$, estimated global parameters $\{\hat{\theta}^{g}(t)\}_{t=1}^{T}$.
\end{algorithm}
Note that, especially at the beginning of the estimation horizon, the approximation made in M-RLS and MW-RLS might negatively affect some of the local estimates, e.g. the ones obtained by the agents characterized by a relatively low noise level.
\begin{remark}
While S-RLS and M-RLS require the local processors to transmit to the \textquotedblleft cloud\textquotedblright \ only $\{\hat{\theta}_{n}\}_{n=1}^{N}$, the pairs $\{\hat{\theta}_{n},\phi_{n}\}_{n=1}^{N}$ have to be communicated to the \textquotedblleft cloud\textquotedblright \ with both SW-RLS and MW-RLS (see~\eqref{eq:mean} and~\eqref{eq:w_mean}, respectively). Moreover, as shown in \figurename{~\ref{Fig:SMschemes}}, while S-RLS and SW-RLS require Node-to-Cloud-to-Node (N2C2N) transmissions, M-RLS and MW-RLS are based on a Node-to-Cloud (N2C) communication policy. \hfill $\blacksquare$
\end{remark}
\subsection{ADMM-based RLS (ADMM-RLS) for full consensus}\label{Subsec:ADMM-RLS_base}
Instead of resorting to greedy methods, we propose to solve \eqref{eq:gen_consensus} with ADMM.\\
Note that the same approach has been used to develop a fully distributed scheme for consensus-based estimation over Wireless Sensor Networks (WSNs) in \cite{Mateos2009}. However, our approach differs from the one introduced in \cite{Mateos2009} as we aim at exploiting the \textquotedblleft cloud\textquotedblright \ to attain consensus and, at the same time, we want local estimates to be computed by each node.\\
As the problem to be solved is equal to \eqref{eq:rewritten_consensus}, the ADMM iterations to be performed are \eqref{eq:admm_cons:setp1}-\eqref{eq:admm_cons:setp3}, i.e.
\begin{align*}
&\hat{\theta}_{n}(T)^{(k+1)}=\underset{\theta_{n}}{\argmin} \ \left\{f_{n}(\theta_{n})+(\delta_{n}^{(k)})'(\theta_{n}-\hat{\theta}^{g,(k)})+\frac{\rho}{2}\|\theta_{n}-\hat{\theta}^{g,(k)}\|_{2}^{2}\right\},\\
&\hat{\theta}^{g,(k+1)}=\frac{1}{N}\sum_{n=1}^{N} \left(\theta_{n}^{(k+1)}+\frac{1}{\rho}\delta_{n}^{(k)}\right),\\
&\delta_{n}^{(k+1)}=\delta_{n}^{(k)}+\rho\left(\hat{\theta}_{n}^{(k+1)}(T)-\hat{\theta}^{g,(k+1)}\right), \ n=1,\ldots,N
\end{align*}
with the cost functions $f_{n}$ defined as in \eqref{eq:least_sqCost} and where the dependence on $T$ of the local estimates is stressed to underline that only the updates of $\hat{\theta}_{n}$ are directly influenced by the current measurements. Note that the update of $\hat{\theta}^{g}$ is a combination of the mean of the local estimates, i.e.~\eqref{eq:mean}, and the mean of the Lagrange multipliers.\\
As \eqref{eq:admm_cons:setp2}-\eqref{eq:admm_cons:setp3} are independent of the specific choice of $f_{n}(\theta_{n})$, we focus on the update of the local estimates, i.e.~\eqref{eq:admm_cons:setp1}, with the ultimate goal of finding recursive updates for $\hat{\theta}_{n}$.\\
Thanks to the characteristics of the chosen local cost functions, the closed-form solution for the problem in \eqref{eq:admm_cons:setp1} is given by
\begin{align}\label{eq:exp_sol2}
\hat{\theta}_{n}^{(k+1)}(T)&=\phi_{n}(T)\left(\mathcal{Y}_{n}(T)-\delta_{n}^{(k)}+\rho\hat{\theta}^{g,(k)}\right),\\
\mathcal{Y}_{n}(t)&=\sum_{\tau=1}^{t}\lambda_{n}^{t-\tau}X_{n}(\tau)y_{n}(\tau), \ \ t=1,\ldots,T,\\
\phi_{n}(t)&=\left(\sum_{\tau=1}^{t}\lambda_{n}^{t-\tau}X_{n}(\tau)(X_{n}(\tau))'+\rho I_{n_{\theta}}\right)^{-1}, \ \ t=1,\ldots,T. \label{eq:phi_1}
\end{align}
With the aim of obtaining recursive formulas to update $\hat{\theta}_{n}$, consider the local estimate obtained at $T-1$, which is given by
\begin{equation}\label{eq:prev_est2}
\hat{\theta}_{n}(T-1)=\phi_{n}(T-1)\left(\mathcal{Y}_{n}(T-1)+\rho\hat{\theta}^{g}(T-1)-\delta_{n}(T-1)\right),
\end{equation}
with $\delta_{n}(T-1)$ and $\hat{\theta}^{g}(T-1)$ denoting the Lagrange multiplier and the global estimate computed at $T-1$, respectively. It then has to be proven that $\hat{\theta}_{n}^{(k+1)}(T)$ can be computed as a function of $\hat{\theta}_{n}(T-1)$, $y_{n}(T)$ and $X_{n}(T)$.\\
Consider the inverse matrix $\phi_{n}$ \eqref{eq:phi_1}, given by
\begin{align*}
&\phi_{n}(T)^{-1}=\mathcal{X}_{n}(T)+\rho I_{n_{\theta}},\\
&\mathcal{X}_{n}(t)=\sum_{\tau=1}^{t}\lambda_{n}^{t-\tau}X_{n}(\tau)(X_{n}(\tau))'.
\end{align*}
Based on \eqref{eq:phi_1}, it can be proven that $\phi_{n}(T)^{-1}$ can be computed as a function of $\phi_{n}(T-1)^{-1}$. In particular:
\begin{align}
\nonumber &\phi_{n}(T)^{-1}=\mathcal{X}_{n}(T)+\rho I_{n_{\theta}}=\\
\nonumber &=\lambda_{n}\mathcal{X}_{n}(T-1)+X_{n}(T)(X_{n}(T))'+\rho I_{n_{\theta}}=\\
\nonumber &=\lambda_{n}\left[\mathcal{X}_{n}(T-1)+\rho I_{n_{\theta}}\right]+X_{n}(T)(X_{n}(T))'+(1-\lambda_{n})\rho I_{n_{\theta}}=\\
&=\lambda_{n}\phi_{n}(T-1)^{-1}+X_{n}(T)(X_{n}(T))'+(1-\lambda_{n})\rho I_{n_{\theta}}. \label{eq:phi_presimpl}
\end{align}
Introducing the extended regressor vector $\tilde{X}_{n}(T)$
\begin{equation}\label{eq:tildeX_1}
\tilde{X}_{n}(T)=\begin{bmatrix}
X_{n}(T) & \sqrt{(1-\lambda_{n})\rho}I_{n_{\theta}}
\end{bmatrix} \in \mathbb{R}^{n_{\theta}\times(n_{y}+n_{\theta})},
\end{equation}
\eqref{eq:phi_presimpl} can then be further simplified as
\begin{equation*}
\phi_{n}(T)^{-1}=\lambda_{n}\phi_{n}(T-1)^{-1}+\tilde{X}_{n}(T)(\tilde{X}_{n}(T))'.
\end{equation*}
Applying the matrix inversion lemma, the resulting recursive formulas to update $\phi_{n}$ are
\begin{align}
\mathcal{R}_{n}(T)&=\lambda_{n} I_{(n_{y}+n_{\theta})}+(\tilde{X}_{n}(T))'\phi_{n}(T-1)\tilde{X}_{n}(T), \label{eq:toinvert_1}\\
K_{n}(T)&=\phi_{n}(T-1)\tilde{X}_{n}(T)(\mathcal{R}_{n}(T))^{-1}, \label{eq:gain_2}\\
\phi_{n}(T)&=\lambda_{n}^{-1}\left(I_{n_{\theta}}-K_{n}(T)(\tilde{X}_{n}(T))'\right)\phi_{n}(T-1). \label{eq:phi_rec2}
\end{align}
Note that the gain $K_{n}$ and matrix $\phi_{n}$ are updated as in standard RLS (see~\cite{ljung1999system}), with the exceptions of the increased dimension of the identity matrix in \eqref{eq:toinvert_1} and the substitution of the regressor with $\tilde{X}_{n}$. Only when $\lambda_{n}=1$ are the regressors $X_{n}$ and $\tilde{X}_{n}$ equal. Moreover, observe that \eqref{eq:toinvert_1}-\eqref{eq:phi_rec2} are independent of $k$ and, consequently, $\{\mathcal{R}_{n},K_{n},\phi_{n}\}_{n=1}^{N}$ can be updated once per step $t$.\\
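For illustration purposes, a minimal Python/NumPy sketch of the updates \eqref{eq:toinvert_1}-\eqref{eq:phi_rec2} is reported below; the function name and interface are our own illustrative choices, not part of the proposed scheme.
\begin{verbatim}
import numpy as np

def update_gain_and_phi(phi_prev, X_t, lam, rho):
    """Updates (eq:toinvert_1)-(eq:phi_rec2): gain K_n(t) and matrix
    phi_n(t), built on the extended regressor (eq:tildeX_1).
    X_t has shape (n_theta, n_y); phi_prev is phi_n(t-1)."""
    n_theta = phi_prev.shape[0]
    # Extended regressor: X_tilde = [X_t, sqrt((1-lam)*rho)*I]
    X_ext = np.hstack([X_t,
                       np.sqrt((1.0 - lam) * rho) * np.eye(n_theta)])
    m = X_ext.shape[1]                       # n_y + n_theta
    R = lam * np.eye(m) + X_ext.T @ phi_prev @ X_ext
    K = phi_prev @ X_ext @ np.linalg.inv(R)
    phi = (np.eye(n_theta) - K @ X_ext.T) @ phi_prev / lam
    return K, phi, X_ext
\end{verbatim}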
Consider again \eqref{eq:exp_sol2}. Adding and subtracting
\begin{equation*}
\lambda_{n}\phi_{n}(T)\left[ \rho\hat{\theta}^{g}(T-1)-\delta_{n}(T-1)\right]
\end{equation*}
to \eqref{eq:exp_sol2}, the solution of \eqref{eq:admm_cons:setp1} corresponds to
\begin{align}
\nonumber \hat{\theta}_{n}^{(k+1)}(T)&=\phi_{n}(T)\left[\lambda_{n}\left(\mathcal{Y}_{n}(T-1)-\delta_{n}(T-1)+\rho \hat{\theta}^{g}(T-1)\right)+\right.\\
\nonumber & \hspace{-0.6cm}\left.+X_{n}(T)y_{n}(T)-\left(\delta_{n}^{(k)}-\lambda_{n}\delta_{n}(T-1)\right)+\rho\left(\hat{\theta}^{g,(k)}-\lambda_{n}\hat{\theta}^{g}(T-1)\right)\right]=\\
&\hspace{-0.5cm} =\hat{\theta}_{n}^{RLS}(T)+\hat{\theta}_{n}^{ADMM,(k+1)}(T), \label{eq:est_dec1}
\end{align}
with
\begin{align}
\nonumber&\hat{\theta}_{n}^{RLS}(T)=\phi_{n}(T)\left\{\lambda_{n}\left(\mathcal{Y}_{n}(T-1)+\rho\hat{\theta}^{g}(T-1)-\delta_{n}(T-1)\right)+\right.\\
&\hspace{2cm} \left.+X_{n}(T)y_{n}(T)\right\}, \label{eq:est_rls2} \\
&\hat{\theta}_{n}^{ADMM,(k+1)}(T)=\phi_{n}(T)\left[\rho\Delta_{g,\lambda_{n}}^{(k+1)}(T)-\Delta_{\lambda_{n}}^{(k+1)}(T)\right], \label{eq:est_admm2}
\end{align}
and
\begin{align}
\Delta_{g,\lambda_{n}}^{(k+1)}(T)&=\hat{\theta}^{g,(k)}-\lambda_{n}\hat{\theta}^{g}(T-1), \label{eq:admm_k+1_g}\\ \Delta_{\lambda_{n}}^{(k+1)}(T)&=\delta_{n}^{(k)}-\lambda_{n}\delta_{n}(T-1). \label{eq:admm_k+1_dual}
\end{align}
Observe that \eqref{eq:est_admm2} is independent of the past data pairs $\{y_{n}(t),X_{n}(t)\}_{t=1}^{T}$, while \eqref{eq:est_rls2} depends on $\mathcal{Y}_{n}(T-1)$. Aiming at obtaining recursive formulas to update $\hat{\theta}_{n}$, the dependence of \eqref{eq:est_rls2} on $\mathcal{Y}_{n}(T-1)$ should be eliminated.\\
Consider \eqref{eq:est_rls2}. Exploiting \eqref{eq:phi_rec2} and \eqref{eq:prev_est2}, $\hat{\theta}_{n}^{RLS}(T)$ is given by
\begin{align}
\nonumber&\hat{\theta}_{n}^{RLS}(T)=\phi_{n}(T-1)\left\{ \left(\mathcal{Y}_{n}(T-1)+\rho\hat{\theta}^{g}(T-1)-\delta_{n}(T-1)\right)\right\}+\\
\nonumber &\hspace{0.5cm}-K_{n}(T)(\tilde{X}_{n}(T)')\phi_{n}(T-1)\left\{\left(\mathcal{Y}_{n}(T-1)+\rho\hat{\theta}^{g}(T-1)-\delta_{n}(T-1)\right)\right\}+\\
\nonumber &\hspace{0.5cm}+\phi_{n}(T)X_{n}(T)y_{n}(T)=\\ &\hspace{0.5cm}=\hat{\theta}_{n}(T-1)-K_{n}(T)(\tilde{X}_{n}(T))'\hat{\theta}_{n}(T-1)+\phi_{n}(T)X_{n}(T)y_{n}(T). \label{eq:rls_def1}
\end{align}
For \eqref{eq:rls_def1} to be dependent on the extended regressor only, we define the extended measurement vector
\begin{equation*}
\tilde{y}_{n}(T)=\begin{bmatrix}
(y_{n}(T))' & 0_{1\times n_{\theta}}
\end{bmatrix}'.
\end{equation*}
With the introduction of $\tilde{y}_{n}$, \eqref{eq:rls_def1} can be rewritten as
\begin{equation*}
\hat{\theta}_{n}^{RLS}(T)=\hat{\theta}_{n}(T-1)-K_{n}(T)(\tilde{X}_{n}(T))'\hat{\theta}_{n}(T-1)+\phi_{n}(T)\tilde{X}_{n}(T)\tilde{y}_{n}(T).
\end{equation*}
Notice that the equality $\phi_{n}(T)\tilde{X}_{n}(T)=K_{n}(T)$ holds and it can be proven as follows
\begin{align*}
&\phi_{n}(T)\tilde{X}_{n}(T)=\lambda_{n}^{-1}\left(I_{n_{\theta}}-K_{n}(T)(\tilde{X}_{n}(T))'\right)\phi_{n}(T-1)\tilde{X}_{n}(T)=\\
&=\lambda_{n}^{-1}\left(I_{n_{\theta}}-\phi_{n}(T-1)\tilde{X}_{n}(T)(\mathcal{R}_{n}(T))^{-1}(\tilde{X}_{n}(T))'\right)\phi_{n}(T-1)\tilde{X}_{n}(T)=\\
&=\phi_{n}(T-1)\tilde{X}_{n}(T)\left(\lambda_{n}^{-1}I_{(n_{y}+n_{\theta})}-\lambda_{n}^{-1}(\mathcal{R}_{n}(T))^{-1}(\tilde{X}_{n}(T))'\phi_{n}(T-1)\tilde{X}_{n}(T)\right)=\\
&=\phi_{n}(T-1)\tilde{X}_{n}(T)\left(\lambda_{n}^{-1}I_{(n_{y}+n_{\theta})}+\right.\\
&\hspace{0.1cm}\left.-\lambda_{n}^{-1}(\lambda_{n} I_{(n_{y}+n_{\theta})}+(\tilde{X}_{n}(T))'\phi_{n}(T-1)\tilde{X}_{n}(T))^{-1}(\tilde{X}_{n}(T))'\phi_{n}(T-1)\tilde{X}_{n}(T)\right)=\\
&=\phi_{n}(T-1)\tilde{X}_{n}(T)\left(\lambda_{n}^{-1}I_{(n_{y}+n_{\theta})}-\lambda_{n}^{-1}( I_{(n_{y}+n_{\theta})}+\lambda_{n}^{-1}(\tilde{X}_{n}(T))'\phi_{n}(T-1)\tilde{X}_{n}(T))^{-1}\hspace{-0.3cm}\cdot\right.\\
&\hspace{0.1cm} \left. \cdot (\tilde{X}_{n}(T))'\phi_{n}(T-1)\tilde{X}_{n}(T)\lambda_{n}^{-1}\right)=\\
&=\phi_{n}(T-1)\tilde{X}_{n}(T)\left(\lambda_{n}I_{(n_{y}+n_{\theta})}+(\tilde{X}_{n}(T))'\phi_{n}(T-1)\tilde{X}_{n}(T)\right)^{-1}=K_{n}(T),
\end{align*}
where the matrix inversion lemma and \eqref{eq:gain_2}-\eqref{eq:phi_rec2} are used.\\
It can thus be proven that $\hat{\theta}_{n}^{RLS}$ can be updated as
\begin{equation}\label{eq:rls_full}
\hat{\theta}_{n}^{RLS}(T)=\hat{\theta}_{n}(T-1)+K_{n}(T)(\tilde{y}_{n}(T)-\tilde{X}_{n}(T)'\hat{\theta}_{n}(T-1)).
\end{equation}
While the update for $\hat{\theta}_{n}^{ADMM}$ \eqref{eq:est_admm2} depends on both the values of the Lagrange multipliers and the global estimates, $\hat{\theta}_{n}^{RLS}$ \eqref{eq:rls_full} is computed on the basis of the previous local estimate and the current measurements. Consequently, $\hat{\theta}_{n}^{RLS}$ is updated recursively.\\
Under the hypothesis that both $\hat{\theta}^{g}$ and $\delta_{n}$ are stored on the \textquotedblleft cloud\textquotedblright, it seems legitimate to update $\hat{\theta}^{g}$ and $\delta_{n}$ on the \textquotedblleft cloud\textquotedblright, along with $\hat{\theta}_{n}^{ADMM}$. Instead, the partial estimates $\hat{\theta}_{n}^{RLS}$, $n=1,\ldots,N$, can be updated by the local processors. Thanks to this choice, the proposed method, summarized in Algorithm~\ref{algo3} and \figurename{~\ref{Fig:ADMMscheme}}, allows one to obtain estimates both at the ($i$) agent and ($ii$) \textquotedblleft cloud\textquotedblright \ level.\\
Observe that, thanks to the independence of \eqref{eq:rls_full} from $k$, $\hat{\theta}_{n}^{RLS}$ can be updated once per step $t$. The local updates are thus regulated by a local clock and not by the one controlling the ADMM iterations on the \textquotedblleft cloud\textquotedblright.\\
Looking at \eqref{eq:toinvert_1}-\eqref{eq:phi_rec2} and \eqref{eq:rls_full}, it can be noticed that $\hat{\theta}_{n}^{RLS}$ is updated through standard RLS, with the exceptions that, at step $t \in \{1,\ldots,T\}$, the update depends on the previous local estimate $\hat{\theta}_{n}(t-1)$ instead of depending on $\hat{\theta}_{n}^{RLS}(t-1)$ and that the output/regressor pair $\{y_{n}(t),X_{n}(t)\}$ is replaced with $\{\tilde{y}_{n}(t),\tilde{X}_{n}(t)\}$. As a consequence, the proposed method can be easily integrated with pre-existing RLS estimators already available locally.
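Under the same illustrative assumptions, the complete local step, i.e. \eqref{eq:rls_full} combined with \eqref{eq:gain_2}-\eqref{eq:phi_rec2}, can be sketched as follows, reusing the function \texttt{update\_gain\_and\_phi} introduced above.
\begin{verbatim}
def local_rls_step(theta_prev, phi_prev, X_t, y_t, lam, rho):
    """Local update (eq:rls_full): theta_n^{RLS}(t) from the previous
    local estimate theta_n(t-1) and the current data pair (y_t, X_t);
    y_t has shape (n_y,)."""
    n_theta = phi_prev.shape[0]
    K, phi, X_ext = update_gain_and_phi(phi_prev, X_t, lam, rho)
    # Extended measurement vector: y_tilde = [y_t', 0_{1 x n_theta}]'
    y_ext = np.concatenate([y_t, np.zeros(n_theta)])
    theta_rls = theta_prev + K @ (y_ext - X_ext.T @ theta_prev)
    return theta_rls, phi
\end{verbatim}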
\begin{figure}[!tb]
\centering
\hspace{-1cm}
\resizebox{6cm}{5cm}{\begin{tikzpicture}[node distance=1.3cm,>=stealth',bend angle=45,auto,->]
\tikzstyle{agents}=[rectangle,rounded corners,thick,draw=blue!75,fill=blue!20,minimum size=1cm]
\tikzstyle{dots}=[rectangle,fill=white]
\tikzstyle{cloud}=[rectangle,rounded corners,thick,draw=red!75,fill=red!20,minimum size=1cm]
\tikzstyle{aid}=[coordinate]
\node[agents] (1) {$\# 1$};
\node[dots,right of=1, node distance=2cm] (dots1) {$\boldmath{\cdots}$};
\node[aid, above of=1, node distance=2cm] (aidIn1) {};
\node[agents, right of = dots1, node distance=2cm](N) {$\# N$};
\node[aid, above of=N, node distance=2cm] (aidInN) {};
\node[cloud, below of= dots1, node distance=3.5cm] (C) {CLOUD};
\node[aid, right of=C, node distance=2cm] (aidIn) {};
\node[aid, below of=C, node distance=1cm] (aidOut1) {};
\node[aid, right of=aidOut1, node distance=4cm] (aid1) {};
\node[aid, right of=N, node distance=2cm] (aid2) {};
\node[aid, left of=aidOut1, node distance=4cm] (aid3) {};
\node[aid, left of=1, node distance=2cm] (aid4) {};
\node[aid, below of=aidOut1, node distance=1cm] (aidOut2) {};
\path (aidIn1) edge[thick] node {$\phi_{1}(0)$} (1)
(C) edge[thick,dashed,black!20!red] node[swap] {\hspace{1.5cm}$\{\hat{\theta}_{n}^{ADMM}\}_{n=1}^{N}$} (aidIn)
(aidInN) edge[thick] node {$\phi_{N}(0)$} (N)
(1) edge[thick] node [swap,yshift=0.2cm,xshift=0.1cm]{
$\{\hat{\theta}_{1}^{RLS}(t),\phi_{1}(t)\}$} (C)
(N) edge[thick] node[yshift=0.2cm,xshift=-0.1cm] {
$\{\hat{\theta}_{N}^{RLS}(t),\phi_{N}(t)\}$} (C)
(C) edge[-,thick] node {} (aidOut1)
(aidOut1) edge[-,thick] node {} (aid1)
(aid1) edge[-,thick] node[swap] {$\hat{\theta}_{N}(t)$} (aid2)
(aid2) edge[,thick] node {} (N)
(aidOut1) edge[-,thick] node {} (aid3)
(aid3) edge[-,thick] node {$\hat{\theta}_{1}(t)$} (aid4)
(aid4) edge[,thick] node {} (1)
(aidOut1) edge[thick] node {$\hat{\theta}^{g}$, $\delta_{n}$, $\{\hat{\theta}_{n}(t)\}_{n=1}^{N}$} (aidOut2);
\end{tikzpicture}}
\caption{ADMM-RLS. Schematic of the information exchanges between the agents and the \textquotedblleft cloud\textquotedblright \ when using a N2C2N communication scheme.}
\label{Fig:ADMMscheme}
\end{figure}
\begin{remark}
Algorithm~\ref{algo3} requires the initialization of the local and global estimates. If some data are available to be processed in batch mode, $\hat{\theta}_{n}(0)$ can be chosen as the best linear model, i.e.
\begin{equation*}
\hat{\theta}_{n}(0)=\underset{\theta_{n}}{\argmin} \sum_{t=1}^{\tau} \|y_{n}(t)-X_{n}(t)'\theta_{n}\|_{2}^{2}
\end{equation*}
and $\hat{\theta}^{g}(0)$ can be computed as the mean of $\{P\hat{\theta}_{n}(0)\}_{n=1}^{N}$. Moreover, the matrices $\phi_{n}$, $n=1,\ldots,N$, can be initialized as $\phi_{n}(0)=\gamma I_{n_{\theta}}$, with $\gamma>0$.
\hfill $\blacksquare$
\end{remark}
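A possible implementation of this initialization, under the stated assumptions, is sketched below; the batch data layout and the function name are illustrative choices.
\begin{verbatim}
def initialize_local(X_batch, y_batch, n_theta, gamma=1.0):
    """Batch least-squares initialization of theta_n(0) and
    phi_n(0) = gamma * I, gamma > 0 (see the remark above).
    X_batch is a list of (n_theta, n_y) regressors and y_batch a
    list of (n_y,) outputs, collected for t = 1, ..., tau."""
    A = np.vstack([X_t.T for X_t in X_batch])   # (tau*n_y, n_theta)
    b = np.concatenate(y_batch)                 # (tau*n_y,)
    theta0, *_ = np.linalg.lstsq(A, b, rcond=None)
    phi0 = gamma * np.eye(n_theta)
    return theta0, phi0
\end{verbatim}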
\begin{remark}
The chosen implementation requires $\hat{\theta}_{n}^{RLS}$ and $\phi_{n}$ to be transmitted from the local processors to the \textquotedblleft cloud\textquotedblright \ at each step, while the \textquotedblleft cloud\textquotedblright has to communicate $\hat{\theta}_{n}$ to all the agents. As a consequence, the proposed approach is based on N2C2N transmissions. \hfill $\blacksquare$
\end{remark}
\begin{algorithm}[!tb]
\caption{ADMM-RLS for full consensus (N2C2N)}
\label{algo3}
~~\textbf{Input}: Sequence of observations $\{X_{n}(t),y_{n}(t)\}_{t=1}^T$, initial matrices $\phi_{n}(0) \in \mathbb{R}^{n_{\theta} \times n_{\theta}}$, initial local estimates $\hat{\theta}_{n}(0)$, initial dual variables $\delta_{n,\mathrm{o}}$, $n=1,\ldots,N$, initial global estimate $\hat{\theta}_{\mathrm{o}}^{g}$, parameter $\rho \in \mathbb{R}^{+}$.
\vspace*{.1cm}\hrule\vspace*{.1cm}
\begin{enumerate}[label=\arabic*., ref=\theenumi{}]
\item \textbf{for} $t=1,\ldots,T$ \textbf{do}
\begin{itemize}
\item[] \hspace{-0.5cm} \textbf{\underline{Local}}
\begin{enumerate}[label=\theenumi{}.\arabic*., ref=\theenumi{}.\theenumii{}]
\item \textbf{for} $n=1,\ldots,N$ \textbf{do}
\begin{enumerate}[label=\theenumii{}.\arabic*., ref=\theenumi{}.\theenumii{}.\theenumiii{}]
\item \textbf{compute} $\tilde{X}_{n}(t)$ as in \eqref{eq:tildeX_1};
\item \textbf{compute} $K_{n}(t)$ and $\phi_{n}(t)$ with \eqref{eq:gain_2}~-~\eqref{eq:phi_rec2};
\item \textbf{compute} $\hat{\theta}_{n}^{RLS}(t)$ with \eqref{eq:rls_full};
\end{enumerate}
\item \textbf{end for};
\end{enumerate}
\item[] \hspace{-0.5cm} \textbf{\underline{Global}}
\begin{enumerate}[label=\theenumi{}.\arabic*., ref=\theenumi{}.\theenumii{}]
\item \textbf{do}
\begin{enumerate}[label=\theenumii{}.\arabic*., ref=\theenumi{}.\theenumii{}.\theenumiii{}]
\item \textbf{compute} $\hat{\theta}_{n}^{ADMM,(k+1)}(t)$ with \eqref{eq:est_admm2}, $n=1,\ldots,N$;
\item \textbf{compute} $\hat{\theta}^{g,(k+1)}(t)$ with \eqref{eq:admm_cons:setp2};
\item \textbf{compute} $\delta_{n}^{(k+1)}$ with \eqref{eq:admm_cons:setp3}, $n=1,\ldots,N$;
\end{enumerate}
\item \textbf{until} a stopping criterion is satisfied (e.g. the maximum number of iterations is attained);
\end{enumerate}
\end{itemize}
\item \textbf{end}.
\end{enumerate}
\vspace*{.1cm}\hrule\vspace*{.1cm}
~~\textbf{Output}: Estimated global parameters $\{\hat{\theta}^{g}(t)\}_{t=1}^{T}$, estimated local parameters $\{\hat{\theta}_{n}(t)\}_{t=1}^{T}$, $n=1,\ldots,N$.
\end{algorithm}
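A minimal sketch of the cloud-side iterations of Algorithm~\ref{algo3} is reported below; the fixed iteration count used as stopping criterion is an illustrative simplification, and all names are our own choices.
\begin{verbatim}
def cloud_admm_step(theta_rls, phi, theta_g_prev, delta_prev,
                    lam, rho, max_iter=50):
    """Cloud-side loop of Algorithm 3: iterate (eq:est_admm2),
    (eq:admm_cons:setp2) and (eq:admm_cons:setp3), given the pairs
    {theta_n^{RLS}(t), phi_n(t)} received from the N agents.
    theta_g_prev and delta_prev hold the values stored at t-1."""
    N = len(theta_rls)
    theta_g = theta_g_prev.copy()
    delta = [d.copy() for d in delta_prev]
    for _ in range(max_iter):
        # (eq:est_admm2): ADMM correction of each local estimate
        theta = [theta_rls[n] + phi[n] @
                 (rho * (theta_g - lam[n] * theta_g_prev)
                  - (delta[n] - lam[n] * delta_prev[n]))
                 for n in range(N)]
        # (eq:admm_cons:setp2): global average
        theta_g = sum(th + d / rho
                      for th, d in zip(theta, delta)) / N
        # (eq:admm_cons:setp3): dual ascent
        delta = [d + rho * (th - theta_g)
                 for d, th in zip(delta, theta)]
    return theta, theta_g, delta
\end{verbatim}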
\subsection{Example 1. Static parameters}
Suppose that $N$ data-generating systems are described by the following models
\begin{equation}\label{eq:syst1}
y_{n}(t)=0.9y_{n}(t-1)+0.4u_{n}(t-1)+e_{n}(t),
\end{equation}
where $y_{n}(t) \in \mathbb{R}$, $X_{n}(t)=\smallmat{y_{n}(t-1) & u_{n}(t-1)}'$, $u_{n}$ is known and is generated in this example as a sequence of i.i.d. elements uniformly distributed in the interval $\smallmat{2 & 3}$ and $e_{n} \sim \mathcal{N}(0,R_{n})$ is a white noise sequence, with $\{R_{n} \in \mathbb{N}\}_{n=1}^{N}$ randomly chosen in the interval $\smallmat{1 & 30}$. Evaluating the effect of the noise on the output $y_{n}$ through the Signal-to-Noise Ratio $SNR_{n}$, i.e.
\begin{equation}\label{eq:snr}
\mathrm{SNR}_{n}=10\log{\frac{\sum_{t=1}^T\left(y_{n}(t)-e_{n}(t)\right)^2}{\sum_{t=1}^T e_{n}(t)^2}}~dB
\end{equation}
the chosen covariance matrices yield $\mbox{SNR}_{n}\in [7.8 \ 20.8]$~dB, $n=1,\ldots,N$. Note that \eqref{eq:syst1} can be equivalently written as
\begin{equation*}
y_{n}(t)=(X_{n}(t))'\theta^{g}+e_{n}(t) \mbox{ with } \theta^{g}=\smallmat{0.9 & 0.4}'
\end{equation*}
and the regressor $X_{n}(t)$ is defined as in \eqref{eq:reg}, i.e. $X_{n}(t)=\smallmat{y_{n}(t-1) & u_{n}(t-1)}'$.\\
Observe that the deterministic input sequences $\{u_{n}(t)\}_{t=1}^{T}$ are all different. However, they are all generated according to the same distribution, as it seems reasonable to assume that systems described by the same model are characterized by similar inputs.\\
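A minimal sketch of the data generation for this example, together with the computation of \eqref{eq:snr}, is reported below; the random seed and the specific value of $R_{n}$ in the usage line are arbitrary choices within the ranges stated above.
\begin{verbatim}
def simulate_agent(T, R_n, rng):
    """One trajectory of (eq:syst1) and the corresponding SNR_n
    as in (eq:snr); R_n is the (scalar) noise variance."""
    y = np.zeros(T + 1)
    u = rng.uniform(2.0, 3.0, size=T + 1)
    e = rng.normal(0.0, np.sqrt(R_n), size=T + 1)
    for t in range(1, T + 1):
        y[t] = 0.9 * y[t - 1] + 0.4 * u[t - 1] + e[t]
    snr = 10 * np.log10(np.sum((y[1:] - e[1:]) ** 2)
                        / np.sum(e[1:] ** 2))
    return y, u, snr

rng = np.random.default_rng(0)
y, u, snr = simulate_agent(1000, R_n=15.0, rng=rng)
\end{verbatim}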
Initializing $\phi_{n}$ as $\phi_{n}(0)=0.1I_{n_{\theta}}$, sampling $\hat{\theta}_{n}(0)$ and $\hat{\theta}_{\mathrm{o}}^{g}$ from the distributions $\mathcal{N}(\hat{\theta}^{g},2I_{n_{\theta}})$ and $\mathcal{N}(\hat{\theta}^{g},I_{n_{\theta}})$, respectively, and setting $\{\lambda_{n}=\Lambda\}_{n=1}^{N}$, with $\Lambda=1$, we first evaluate the performance of the greedy approaches. The actual parameter $\theta^{g}$ and the estimates obtained with the different greedy approaches are reported in \figurename{~\ref{Fig:pramsGreedy1}}.\\
\begin{figure}[!tb]
\centerline{
\begin{tabular}[t]{cc}
\subfigure[$\theta_{1}^{g}$ vs $\hat{\theta}_{1}^{g}$]{\includegraphics[scale=0.7]{thetag1Greedy_ex1-eps-converted-to}}
\subfigure[$\theta_{2}^{g}$ vs $\hat{\theta}_{2}^{g}$]{\includegraphics[scale=0.7]{thetag2Greedy_ex1-eps-converted-to}}\\
\end{tabular}
}
\caption{Example 1. True vs estimated parameters. Black : true,
red : C-RLS, blue : S-RLS, cyan : SW-RLS, magenta : M-RLS, green : MW-RLS.}
\label{Fig:pramsGreedy1}
\end{figure}
Despite slightly different performances in the first $300$ steps, which seem legitimate, the estimates obtained with SW-RLS, M-RLS and MW-RLS are similar. Moreover, the estimates $\hat{\theta}^{g}$ obtained with the different methods are comparable with the estimate computed with C-RLS.\\
In particular, the similarities between the estimates obtained with M-RLS, MW-RLS and C-RLS prove that, in the considered case, the choice of the \textquotedblleft mixed\textquotedblright \ strategy enhances the accuracy of $\hat{\theta}^{g}$. Comparing the estimates obtained with S-RLS and SW-RLS, observe that the convergence of the estimate to the actual value of $\theta^{g}$ tends to be faster if $\hat{\theta}^{g}$ is computed as in \eqref{eq:w_mean}.\\
Setting $\rho=0.1$, the performance of ADMM-RLS is assessed for different values of $N$ and $T$. Moreover, the retrieved estimates are compared to the ones obtained with C-RLS and the greedy approaches.\\
The accuracy of the estimate $\hat{\theta}^{g}$ is assessed through the Root Mean Square Error (RMSE), i.e.
\begin{equation}\label{eq:rmse_glob}
\mathrm{RMSE}_{i}^{g}=\sqrt{\frac{\sum_{t=1}^T\left(\theta^{g}_{i}-\hat{\theta}_{i}^{g}(t)\right)^{2}}{T}}, \mbox{ } i=1,\ldots,n_{g}.
\end{equation}
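A one-line implementation of \eqref{eq:rmse_glob} is, for instance, the following sketch.
\begin{verbatim}
def rmse_global(theta_g_true, theta_g_hat):
    """(eq:rmse_glob): theta_g_hat is a (T, n_g) array of estimates
    over the horizon; returns one RMSE per component i."""
    return np.sqrt(np.mean((theta_g_hat - theta_g_true) ** 2, axis=0))

# ||RMSE^g||_2, as reported in the tables below:
# np.linalg.norm(rmse_global(theta_g_true, theta_g_hat))
\end{verbatim}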
\begin{table}[!tb]
\begin{center}
\caption{ADMM-RLS: $\|\mbox{RMSE}^{g}\|_{2}$} \label{Tab:RMSE1_1}
{\footnotesize
\begin{tabular}{|c|c|c|c|c|}
\hline
\backslashbox{\textbf{N}}{\textbf{T}} & \textbf{10} & $\mathbf{10^{2}}$ & $\mathbf{10^{3}}$ & $\mathbf{10^{4}}$ \\
\hline
\textbf{2} & 1.07 & 0.33 & 0.16 & 0.10\\
\hline
\textbf{10} & 0.55 & 0.22 & 0.09 & 0.03 \\
\hline
$\mathbf{10^{2}}$ & 0.39 & 0.11 & 0.03 & 0.01 \\
\hline
\end{tabular}}
\end{center}
\end{table}
As expected (see \tablename{~\ref{Tab:RMSE1_1}}), the accuracy of the estimates tends to increase as the number of local processors $N$ and the estimation horizon $T$ increase. In the case $N=100$ and $T=1000$, the estimates obtained with C-RLS, SW-RLS, MW-RLS and ADMM-RLS have comparable accuracy, see~\tablename{~\ref{Tab:RMSE_Comp1}}.\\
The estimates obtained with ADMM-RLS, C-RLS and MW-RLS are further compared in \figurename{~\ref{Fig:pramsAndErrors1}} and, as expected, the three estimates are barely distinguishable.
\begin{table}[!tb]
\begin{center}
\caption{$\|\mbox{RMSE}^{g}\|_{2}$: C-RLS and greedy methods vs ADMM-RLS} \label{Tab:RMSE_Comp1}
{\footnotesize
\begin{tabular}{|c|c|c|c|c|c|c|}
\cline{2-7}
\multicolumn{1}{c|}{} & \multicolumn{6}{c|}{Method} \\
\cline{2-7}
\multicolumn{1}{c|}{} & C-RLS & S-RLS & SW-RLS & M-RLS & MW-RLS & ADMM-RLS \\
\hline
$\|\mbox{RMSE}^{g}\|_{2}$ & 0.03 & 0.05 & 0.03 & 0.04 & 0.03 & 0.03 \\
\hline
\end{tabular}}
\end{center}
\end{table}
\begin{figure}[!tb]
\centerline{
\begin{tabular}[!tb]{cc}
\subfigure[$\theta_{1}^{g}$ vs $\hat{\theta}_{1}^{g}$]{\includegraphics[scale=0.7]{thetag1_ex1-eps-converted-to}}
\subfigure[$|\hat{\theta}_{1}^{g}-\theta_{1}^{g}|$]{\includegraphics[scale=0.7]{errg1_ex1-eps-converted-to}}\\
\subfigure[$\theta_{2}^{g}$ vs $\hat{\theta}_{2}^{g}$]{\includegraphics[scale=0.7]{thetag2_ex1-eps-converted-to}}
\subfigure[$|\hat{\theta}_{2}^{g}-\theta_{2}^{g}|$]{\includegraphics[scale=0.7]{errg2_ex1-eps-converted-to}}\\
\end{tabular}
}
\caption{Example 1. Model parameters. Black : true,
red : C-RLS, green : MW-RLS, blue : ADMM-RLS.}
\label{Fig:pramsAndErrors1}
\end{figure}
Thus the proposed ADMM-RLS algorithm, which uses the local estimates and the cloud, attains an accuracy comparable with that of the fully centralized approach. Moreover, ADMM-RLS retrieves estimates as accurate as the ones obtained with MW-RLS, i.e. the greedy approach associated with the smallest RMSE.\\
\subsubsection{Non-informative agents}
Using the previously introduced initial setting and parameters, let us assume that some of the available data sources are non-informative, i.e. some systems are not excited enough to allow an accurate estimate of all the unknown parameters to be retrieved locally \cite{ljung1999system}. Null input sequences and white noise sequences characterized by $R_{n}=10^{-8}$ are used to simulate the behavior of the $N_{ni}\leq N$ non-informative agents.\\
Consider the case $N=100$ and $T=5000$. The performance of ADMM-RLS is studied under the hypothesis that an increasing number $N_{ni}$ of systems is non-informative. Looking at the RMSEs in \tablename{~\ref{Tab:RMSE_Nni1}} and the estimates reported in \figurename{~\ref{Fig:pramsvsNni}}, it can be noticed that the quality of the estimate starts to decrease only when half of the available systems are non-informative.
\begin{table}[!tb]
\begin{center}
\caption{Example 1. $\|\mbox{RMSE}^{g}\|_{2}$ vs $N_{ni}$} \label{Tab:RMSE_Nni1}
{\footnotesize
\begin{tabular}{|c|c|c|c|c|}
\cline{2-5}
\multicolumn{1}{c|}{} & \multicolumn{4}{c|}{$N_{ni}$} \\
\cline{2-5}
\multicolumn{1}{c|}{} & 1 & 10 & 20 & 50 \\
\hline
$\|\mbox{RMSE}^{g}\|_{2}$ & 0.02 & 0.02 & 0.02 & 0.03 \\
\hline
\end{tabular}} \vspace{-0.5cm}
\end{center}
\end{table}
In the case $N_{ni}=20$, the estimates obtained with ADMM-RLS are then compared with the ones computed with C-RLS and the greedy approaches. As can be noticed from the RMSEs reported in \tablename{~\ref{Tab:RMSE_Noninf1}}, in the presence of non-informative agents SW-RLS tends to perform better than the other greedy approaches, and the accuracy of the estimates obtained with C-RLS, SW-RLS and ADMM-RLS is comparable.
\begin{table}[!tb]
\begin{center}
\caption{Example 1. $\|\mbox{RMSE}^{g}\|_{2}$: $20\%$ of non-informative agents} \label{Tab:RMSE_Noninf1}
{\footnotesize
\begin{tabular}{|c|c|c|c|c|c|c|}
\cline{2-7}
\multicolumn{1}{c|}{} & \multicolumn{6}{c|}{Method} \\
\cline{2-7}
\multicolumn{1}{c|}{} & C-RLS & S-RLS & SW-RLS & M-RLS & MW-RLS & ADMM-RLS \\
\hline
$\|\mbox{RMSE}^{g}\|_{2}$ & 0.02 & 0.03 & 0.02 & 0.07 & 0.03 & 0.02 \\
\hline
\end{tabular}} \vspace{-0.5cm}
\end{center}
\end{table}
\begin{figure}[!tb]
\centerline{
\begin{tabular}[t]{cc}
\subfigure[$\theta_{1}^{g}$ vs $\hat{\theta}_{1}^{g}$]{\includegraphics[scale=0.7]{thetag1_ex1NotExComp-eps-converted-to}}
\subfigure[$\theta_{2}^{g}$ vs $\hat{\theta}_{2}^{g}$]{\includegraphics[scale=0.7]{thetag2_ex1NotExComp-eps-converted-to}}
\end{tabular}
}
\caption{Example 1. Model parameters vs $N_{ni}$. Black : true,
red : $N_{ni}=1$, blue : $N_{ni}=10$, cyan : $N_{ni}=20$, magenta : $N_{ni}=50$.}
\label{Fig:pramsvsNni}
\end{figure}
\subsubsection{Agents failure}
Consider again $N=100$ and $T=5000$ and suppose that, due to a change in the behavior of $N_{f}$ local agents, the parameters of their models suddenly assume values different from $\smallmat{0.9 & 0.4}'$. We study the performance of ADMM-RLS under the hypothesis that the change in the value of the parameters happens at an unknown instant $t_{n}$, randomly chosen in the interval $[1875,3750]$~samples, simulating the change in the local parameters by sampling $\theta_{n,1}$ from the distribution $\mathcal{U}_{\smallmat{0.2 & 0.21}}$ and $\theta_{n,2}$ from $\mathcal{U}_{\smallmat{1.4 & 1.43}}$ after $t_{n}$.\\
Observe that, due to the change in the local parameters, it might be beneficial to use a non-unitary forgetting factor. Consequently, $\lambda_{n}$, $n=1,\ldots,N$, is set to $0.99$ for all the $N$ agents.\\
The performance of ADMM-RLS is initially assessed considering an increasing number of systems subject to failure, see \tablename{~\ref{Tab:RMSE2_Nf}} and \figurename{~\ref{Fig:pramsAndErrors2FailADMM}}.
\begin{table}[!tb]
\begin{center}
\caption{Example 1. ADMM-RLS: $\|\mbox{RMSE}^{g}\|_{2}$ vs $N_f$} \label{Tab:RMSE2_Nf}
{\footnotesize
\begin{tabular}{|c|c|c|c|c|}
\cline{2-5}
\multicolumn{1}{c|}{} & \multicolumn{4}{c|}{$N_{f}$} \\
\cline{2-5}
\multicolumn{1}{c|}{} & 1 & 10 & 20 & 50 \\
\hline
$\|\mbox{RMSE}^{g}\|_{2}$ & 0.03 & 0.03 & 0.03 & 0.04\\
\hline
\end{tabular}} \vspace{-0.5cm}
\end{center}
\end{table}
\begin{figure}[!tb]
\centerline{
\begin{tabular}[!tb]{cc}
\subfigure[$\theta_{1}^{g}$ vs $\hat{\theta}_{1}^{g}$]{\includegraphics[scale=0.7]{thetag1_ex2FailComp-eps-converted-to}}
\subfigure[$|\hat{\theta}_{1}^{g}-\theta_{1}^{g}|$]{\includegraphics[scale=0.7]{errg1_ex2FailComp-eps-converted-to}}\\
\subfigure[$\theta_{2}^{g}$ vs $\hat{\theta}_{2}^{g}$]{\includegraphics[scale=0.7]{thetag2_ex2FailComp-eps-converted-to}}
\subfigure[$|\hat{\theta}_{2}^{g}-\theta_{2}^{g}|$]{\includegraphics[scale=0.7]{errg2_ex2FailComp-eps-converted-to}}\\
\end{tabular}
}
\caption{Example 1. Model parameters vs $N_{f}$. Black : true,
red : $N_{f}=1$, blue : $N_{f}=10$, cyan : $N_{f}=20$, magenta : $N_{f}=50$.}
\label{Fig:pramsAndErrors2FailADMM}
\end{figure}
Observe that the failure of the agents seems not to influence the accuracy of the obtained estimates as long as $N_{f}< 50$. The use of ADMM-RLS thus allows accurate global estimates to be computed even when some of the agents experience a failure.
\subsection{Example 2. Time-varying parameters}
The presence of the forgetting factor in the cost functions $f_{n}$ (see~\eqref{eq:least_sqCost}) allows time-varying parameters to be estimated, as it enables past and currently collected data to be weighted differently.\\
Suppose that the behavior of $N$ systems is described by the ARX model
\begin{equation}\label{eq:syst2}
y_{n}(t+1)=\theta_{1}^{g}(t)y_{n}(t-1)+\theta_{2}^{g}(t)u_{n}(t-1)+e_{n}(t)
\end{equation}
where $\theta_{1}^{g}=0.9\sin{(x)}$ and $\theta_{2}^{g}=0.4\cos{(x)}$, with $x \in [0,2\pi]$, and $u_{n} \sim \mathcal{U}_{\smallmat{2 & 3}}$. The white noise sequences $e_{n}\sim \mathcal{N}(0,R_{n})$, $n=1,\ldots,N$, have covariances $R_{n}$ randomly selected in the interval $\smallmat{ 1 & 30 }$, yielding $SNR_{n} \in \smallmat{ 2.4 & 6.5 }~\mbox{dB}$.\\
Considering an estimation horizon $T=1000$, initializing $\phi_{n}$ as $\phi_{n}(0)=0.1I_{n_{\theta}}$, sampling $\hat{\theta}_{n}(0)$ and $\hat{\theta}_{\mathrm{o}}^{g}$ from the distributions $\mathcal{N}(\hat{\theta}^{g},2I_{n_{\theta}})$ and $\mathcal{N}(\hat{\theta}^{g},I_{n_{\theta}})$, respectively, and setting $\rho=0.1$ and $\{\lambda_{n}=\Lambda\}_{n=1}^{N}$, with $\Lambda=0.95$, the performance of ADMM-RLS is compared with that of C-RLS and the four greedy approaches, see \tablename{~\ref{Tab:RMSE_Comp2}}. As for the case where time-invariant parameters have to be estimated (see Example~1), SW-RLS and MW-RLS tend to perform slightly better than the other greedy approaches. Note that the accuracy of the estimates obtained with C-RLS, SW-RLS and MW-RLS is comparable.\\
\begin{table}[!tb]
\begin{center}
\caption{Example 2. $\|\mbox{RMSE}^{g}\|_{2}$ vs Method} \label{Tab:RMSE_Comp2}
{\footnotesize
\begin{tabular}{|c|c|c|c|c|c|c|}
\cline{2-7}
\multicolumn{1}{c|}{} & \multicolumn{6}{c|}{Method} \\
\cline{2-7}
\multicolumn{1}{c|}{} & C-RLS & S-RLS & SW-RLS & M-RLS & MW-RLS & ADMM-RLS \\
\hline
$\|\mbox{RMSE}^{g}\|_{2}$ & 0.08 & 0.10 & 0.08 & 0.09 & 0.08 & 0.08 \\
\hline
\end{tabular}} \vspace{-0.5cm}
\end{center}
\end{table}
\begin{figure}[!tb]
\centerline{
\begin{tabular}[t]{cc}
\subfigure[$\theta_{1}^{g}$ vs $\hat{\theta}_{1}^{g}$]{\includegraphics[scale=0.7]{thetag1_ex2-eps-converted-to}}
\subfigure[$|\hat{\theta}_{1}^{g}-\theta_{1}^{g}|$]{\includegraphics[scale=0.7]{errg1_ex2-eps-converted-to}}\\
\subfigure[$\theta_{2}^{g}$ vs $\hat{\theta}_{2}^{g}$]{\includegraphics[scale=0.7]{thetag2_ex2-eps-converted-to}}
\subfigure[$|\hat{\theta}_{2}^{g}-\theta_{2}^{g}|$]{\includegraphics[scale=0.7]{errg2_ex2-eps-converted-to}}\\
\end{tabular}
}
\caption{Example 2. True vs estimated model parameters. Black : true,
red : C-RLS, blue : ADMM-RLS.}
\label{Fig:pramsAndErrors2}
\end{figure}
\figurename{~\ref{Fig:pramsAndErrors2}} reports the actual global parameters and the estimates obtained with C-RLS and ADMM-RLS, along with the estimation errors. As already observed, the accuracy of the estimates computed with C-RLS and ADMM-RLS is comparable.
\section{Collaborative estimation for partial consensus}\label{Sec:2}
Consider the more general hypothesis that there exists a parameter vector $\theta^g \in \mathbb{R}^{n_{g}}$, with $n_{g}\leq n_{\theta}$, such that:
\begin{equation}\label{eq:partial_constr}
P\theta_{n}=\theta^{g} \ \ \forall n \in \{1,\ldots,N\},
\end{equation}
where $P \in \mathbb{R}^{n_{g}\times n_{\theta}}$ is a matrix assumed to be known a priori. The problem that we want to solve is then given by
\begin{equation}\label{eq:prob_locglob}
\begin{aligned}
&\min_{\{\theta_{n}\}_{n=1}^{N}}&& \sum_{n=1}^{N} f_{n}(\theta_{n})\\
& \mbox{s.t. } && P\theta_{n}=\theta^{g}, \ \ n=1,\ldots,N,
\end{aligned}
\end{equation}
with $f_{n}$ defined as in \eqref{eq:least_sqCost}. Note that \eqref{eq:prob_locglob} corresponds to \eqref{eq:problem} with the consensus constraint modified as
\begin{equation*}
F(\theta_{n})=\theta^{g} \rightarrow P\theta_{n}=\theta^{g}.
\end{equation*}
The considered consensus constraint allows consensus to be enforced over a linear combination of the components of $\theta_{n}$. Note that, through proper choices of $P$, different settings can be considered: e.g. if $P=I_{n_{\theta}}$ then $\theta_{n}=\theta^{g}$, and thus \eqref{eq:prob_locglob} is equal to \eqref{eq:gen_consensus}. We can also enforce consensus only over some components of $\theta_{n}$, so that some of the unknowns are assumed to be \emph{global} while others are supposed to assume a different value for each agent.\\
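As a concrete illustration of the latter setting, the following sketch builds a matrix $P$ enforcing consensus on the first and third components of $\theta_{n} \in \mathbb{R}^{3}$ only, as in Example~3 below; the numerical values are illustrative.
\begin{verbatim}
n_theta, n_g = 3, 2
# Consensus on components 1 and 3 of theta_n; component 2 stays local:
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])     # P in R^{n_g x n_theta}
# P @ theta_n = [theta_{n,1}, theta_{n,3}]' = theta^g
\end{verbatim}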
As we are interested in obtaining an estimate for both $\{\theta_{n}\}_{n=1}^{N}$ and $\theta^{g}$, note that \eqref{eq:prob_locglob} cannot be solved by resorting to a strategy similar to C-RLS (see Appendix~\ref{Appendix:A}). In particular, even if properly modified, a method such as C-RLS would provide an estimate of the global parameter only.\\
The ADMM iterations to solve problem \eqref{eq:prob_locglob} are given by
\begin{align}
&\hat{\theta}_{n}^{(k+1)}(T)=\underset{\theta_{n}}{\argmin} \ \mathcal{L}(\theta_{n},\hat{\theta}^{g,(k)},\delta_{n}^{(k)}),
\label{step:loc_est}\\
&\hat{\theta}^{g,(k+1)}=\underset{\theta^{g}}{\argmin} \ \mathcal{L}(\{\hat{\theta}_{n}^{(k+1)}(T)\}_{n=1}^{N},\theta^{g},\{\delta_{n}^{(k)}\}_{n=1}^{N}), \label{step:glob_est}\\
&\delta_{n}^{(k+1)}=\delta_{n}^{(k)}+\rho(P\hat{\theta}_{n}^{(k+1)}(T)-\hat{\theta}^{g,(k+1)}), \label{step:dual_est}
\end{align}
with $k \in \mathbb{N}$ indicating the ADMM iteration, $\rho \in \mathbb{R}^{+}$ being a tunable parameter, $\delta_{n} \in \mathbb{R}^{n_{g}}$ representing the Lagrange multiplier and the augmented Lagrangian $\mathcal{L}$ given by
\begin{equation}
\mathcal{L}=\sum_{n=1}^{N}\left\{f_{n}(\theta_{n})+\delta_{n}'(P\theta_{n}-\theta^{g})+\frac{\rho}{2}\left\|P\theta_{n}-\theta^{g}\right\|_{2}^{2}\right\}.
\end{equation}
Note that the dependence on $T$ is explicitly indicated only for the local estimates $\hat{\theta}_{n}$, as they are the only quantities directly affected by the measurement and the regressor at $T$.\\
Consider the update of the estimate $\hat{\theta}^{g}$. The closed form solution for \eqref{step:glob_est} is
\begin{equation}\label{eq:glob_est}
\hat{\theta}^{g,(k+1)}=\frac{1}{N}\sum_{n=1}^{N}\left(P\hat{\theta}_{n}^{(k+1)}(T)+\frac{1}{\rho}\delta_{n}^{(k)}\right).
\end{equation}
The estimate of the global parameter is thus updated through the combination of the mean of $\{\delta_{n}\}_{n=1}^{N}$ and the mean of $\{P\hat{\theta}_{n}^{(k+1)}(T)\}_{n=1}^{N}$. As expected, \eqref{eq:glob_est} resembles \eqref{eq:admm_cons:setp2}, where the local estimates are replaced by a linear combination of their components.\\
Consider the update for the estimate of the local parameters. The closed-form solution for \eqref{step:loc_est} is given by:
\begin{align}
\hat{\theta}_{n}^{(k+1)}(T)&=\phi_{n}(T)\left\{\mathcal{Y}_{n}(T)+P'(\rho \hat{\theta}^{g,(k)}-\delta_{n}^{(k)}) \right\}, \label{eq:est_P}\\
\mathcal{Y}_{n}(t)&=\sum_{\tau=1}^{t} \lambda_{n}^{t-\tau}X_{n}(\tau)y_{n}(\tau), \ \ t=1,\ldots,T,\\
\phi_{n}(t)&=\left(\left[\sum_{\tau=1}^{t}\lambda_{n}^{t-\tau}X_{n}(\tau)X_{n}(\tau)' \right] +\rho P'P\right)^{-1}, \ \ t=1,\ldots,T. \label{eq:phi_form1}
\end{align}
Since also in this case we are interested in obtaining recursive formulas for the local updates, consider $\hat{\theta}_{n}(T-1)$, defined as
\begin{equation}
\hat{\theta}_{n}(T-1)=\phi_{n}(T-1)\left(\mathcal{Y}_{n}(T-1)+P'(\rho\hat{\theta}^{g}(T-1)-\delta_{n}(T-1)) \right),
\end{equation}
where $\phi_{n}(T-1)$ is equal to \eqref{eq:phi_form1}, and $\hat{\theta}^{g}(T-1)$ and $\delta_{n}(T-1)$ are the global estimate and the Lagrange multiplier obtained at $T-1$, respectively.\\
Observe that the following equalities hold
\begin{align*}
\phi_{n}(T)&=\left(\mathcal{X}_{n}(T)+\rho P'P\right)^{-1}=\\
&=\left(\lambda_{n}\mathcal{X}_{n}(T-1)+X_{n}(T)X_{n}(T)'+\rho P'P\right)^{-1}=\\
&=\left(\lambda_{n}\left(\mathcal{X}_{n}(T-1)+\rho P'P\right)+X_{n}(T)X_{n}(T)'+\rho(1-\lambda_{n}) P'P\right)^{-1}=\\
&=\left(\lambda_{n}\phi_{n}(T-1)^{-1}+X_{n}(T)X_{n}(T)'+\rho(1-\lambda_{n}) P'P\right)^{-1},
\end{align*}
with
\begin{equation*}
\mathcal{X}_{n}(t)=\sum_{\tau=1}^{t}\lambda_{n}^{t-\tau}X_{n}(\tau)(X_{n}(\tau))', \ \ t=1,\ldots,T.
\end{equation*}
Introducing the extended regressor
\begin{equation}\label{eq:tildeX_2}
\tilde{X}_{n}(T)=\begin{bmatrix} X_{n}(T) & \sqrt{\rho(1-\lambda_{n})}P' \end{bmatrix} \in \mathbb{R}^{n_{\theta}\times(n_{y}+n_{g})}
\end{equation}
and applying the matrix inversion lemma, it can be proven that $\phi_{n}$ can be updated as
\begin{align}
\mathcal{R}_{n}(T)&=\lambda_{n} I_{(n_{y}+n_{g})}+(\tilde{X}_{n}(T))'\phi_{n}(T-1)\tilde{X}_{n}(T), \label{eq:toinvert_2}\\
K_{n}(T)&=\phi_{n}(T-1)\tilde{X}_{n}(T)\left(\mathcal{R}_{n}(T)\right)^{-1}, \label{eq:gain_3}\\
\phi_{n}(T)&=\lambda_{n}^{-1}(I_{n_{\theta}}-K_{n}(T)(\tilde{X}_{n}(T))')\phi_{n}(T-1). \label{eq:phi_rec3}
\end{align}
Note that \eqref{eq:toinvert_2}-\eqref{eq:phi_rec3} are similar to \eqref{eq:toinvert_1}-\eqref{eq:phi_rec2}, with differences due to the new definition of the extended regressor.\\
Consider again \eqref{eq:est_P}. Adding and subtracting
\begin{equation*}
\lambda_{n}\phi_{n}(T)P'\left(\rho \hat{\theta}^{g}(T-1)-\delta_{n}(T-1)\right)
\end{equation*}
to \eqref{eq:est_P}, $\hat{\theta}_{n}^{(k+1)}$ can be computed as
\begin{align}
\nonumber \hat{\theta}_{n}^{(k+1)}(T)&=\phi_{n}(T)\left[\lambda_{n}\left(\mathcal{Y}_{n}(T-1)+P'(\rho \hat{\theta}^{g}(T-1)-\delta_{n}(T-1))\right)+\right.\\
\nonumber & \hspace{-0.7cm}\left.+X_{n}(T)y_{n}(T)-P'\left(\delta_{n}^{(k)}-\lambda_{n}\delta_{n}(T-1)\right)+P'\rho\left(\hat{\theta}^{g,(k)}-\lambda_{n}\hat{\theta}^{g}(T-1)\right)\right]=\\
&\hspace{-0.7cm} =\hat{\theta}_{n}^{RLS}(T)+\hat{\theta}_{n}^{ADMM,(k+1)}(T). \label{eq:est_dec2}
\end{align}
In particular,
\begin{align}\label{eq:est_rls3}
\nonumber \hat{\theta}_{n}^{RLS}(T)&=\phi_{n}(T)\lambda_{n}\left\{\mathcal{Y}_{n}(T-1)+\rho P' \hat{\theta}^{g}(T-1)-P'\delta_{n}(T-1)\right\}+\\
& \hspace{1cm}+\phi_{n}(T)X_{n}(T)y_{n}(T),
\end{align}
and
\begin{equation}\label{eq:est_admm3}
\hat{\theta}_{n}^{ADMM,(k+1)}(T)=\phi_{n}(T)P'\left(\rho \Delta_{g,\lambda_{n}}^{(k+1)}(T)- \Delta_{\lambda_{n}}^{(k+1)}\right),
\end{equation}
with
\begin{align*}
\Delta_{g,\lambda_{n}}^{(k+1)}(T)&=\hat{\theta}^{g,(k)}-\lambda_{n}\hat{\theta}^{g}(T-1), \\ \Delta_{\lambda_{n}}^{(k+1)}(T)&=\delta_{n}^{(k)}-\lambda_{n}\delta_{n}(T-1).
\end{align*}
Observe that, as for \eqref{eq:admm_cons:setp2} and \eqref{eq:glob_est}, \eqref{eq:est_admm3} differs from \eqref{eq:est_admm2} because of the presence of $P$.\\
Note that, accounting for the definition of $\phi_{n}(T-1)$, exploiting the equality $K_{n}(T)=\phi_{n}(T)\tilde{X}_{n}(T)$ (see Section~\ref{Sec:1} for the proof) and introducing the extended measurement vector
\begin{equation*}
\tilde{y}_{n}(T)=\begin{bmatrix}y_{n}(T)' & 0_{1\times n_{g}}\end{bmatrix}',
\end{equation*}
the formula to update $\hat{\theta}_{n}^{RLS}$ in \eqref{eq:est_rls3} can be further simplified as
\begin{align}
\nonumber&\hat{\theta}_{n}^{RLS}(T)=\phi_{n}(T-1)\left\{ \left(\mathcal{Y}_{n}(T-1)+P'(\rho\hat{\theta}^{g}(T-1)-\delta_{n}(T-1))\right)\right\}+\\
\nonumber &\hspace{0.5cm}-K_{n}(T)(\tilde{X}_{n}(T)')\phi_{n}(T-1)\left\{\left(\mathcal{Y}_{n}(T-1)+P'(\rho\hat{\theta}^{g}(T-1)-\delta_{n}(T-1))\right)\right\}+\\
\nonumber &\hspace{0.5cm}+\phi_{n}(T)X_{n}(T)y_{n}(T)=\\
\nonumber &\hspace{0.5cm}=\hat{\theta}_{n}(T-1)-K_{n}(T)(\tilde{X}_{n}(T))'\hat{\theta}_{n}(T-1)+\phi_{n}(T)\tilde{X}_{n}(T)\tilde{y}_{n}(T)=\\
&\hspace{0.5cm}=\hat{\theta}_{n}(T-1)+K_{n}(T)(\tilde{y}_{n}(T)-(\tilde{X}_{n}(T))'\hat{\theta}_{n}(T-1)). \label{eq:rls_par}
\end{align}
As in the method tailored to attain full consensus (see Section~\ref{Sec:1}), note that both $\hat{\theta}^{g}$ and $\delta_{n}$ should be updated on the \textquotedblleft cloud\textquotedblright. As a consequence, also $\hat{\theta}_{n}^{ADMM}$ should be updated on the \textquotedblleft cloud\textquotedblright, due to its dependence on both $\hat{\theta}^{g}$ and $\delta_{n}$. On the other hand, $\hat{\theta}_{n}^{RLS}$ can be updated by the local processors. As for the case considered in Section~\ref{Sec:1}, note that \eqref{eq:rls_par} is independent of $k$ and, consequently, the synchronization between the local clock and the one on the \textquotedblleft cloud\textquotedblright \ is not required.\\
The approach is outlined in Algorithm~\ref{algo5}, and the transmission scheme characterizing each iteration is still the one reported in \figurename{~\ref{Fig:ADMMscheme}}. As a consequence, the observations made in Section~\ref{Sec:1} with respect to the information exchange between the nodes and the \textquotedblleft cloud\textquotedblright \ hold also in this case.
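A minimal sketch of the local side of Algorithm~\ref{algo5}, under the same illustrative assumptions used for the full-consensus case, is the following.
\begin{verbatim}
def local_rls_step_partial(theta_prev, phi_prev, X_t, y_t, P,
                           lam, rho):
    """Local side of Algorithm 5: extended regressor (eq:tildeX_2),
    updates (eq:gain_3)-(eq:phi_rec3) and recursion (eq:rls_par)."""
    n_theta, n_g = phi_prev.shape[0], P.shape[0]
    X_ext = np.hstack([X_t, np.sqrt(rho * (1.0 - lam)) * P.T])
    m = X_ext.shape[1]                       # n_y + n_g
    R = lam * np.eye(m) + X_ext.T @ phi_prev @ X_ext
    K = phi_prev @ X_ext @ np.linalg.inv(R)
    phi = (np.eye(n_theta) - K @ X_ext.T) @ phi_prev / lam
    y_ext = np.concatenate([y_t, np.zeros(n_g)])
    theta_rls = theta_prev + K @ (y_ext - X_ext.T @ theta_prev)
    return theta_rls, phi
\end{verbatim}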
\begin{algorithm}[!tb]
\caption{ADMM-RLS for partial consensus (N2C2N)}
\label{algo5}
~~\textbf{Input}: Sequence of observations $\{X_{n}(t),y_{n}(t)\}_{t=1}^T$, initial matrices $\phi_{n}(0) \in \mathbb{R}^{n_{\theta} \times n_{\theta}}$, initial local estimates $\hat{\theta}_{n}(0)$, initial dual variables $\delta_{n,\mathrm{o}}$, forgetting factors $\lambda_{n}$, $n=1,\ldots,N$, initial global estimate $\hat{\theta}_{\mathrm{o}}^{g}$, parameter $\rho \in \mathbb{R}^{+}$.
\vspace*{.1cm}\hrule\vspace*{.1cm}
\begin{enumerate}[label=\arabic*., ref=\theenumi{}]
\item \textbf{for} $t=1,\ldots,T$ \textbf{do}
\begin{itemize}
\item[] \hspace{-0.5cm} \textbf{\underline{Local}}
\begin{enumerate}[label=\theenumi{}.\arabic*., ref=\theenumi{}.\theenumii{}]
\item \textbf{for} $n=1,\ldots,N$ \textbf{do}
\begin{enumerate}[label=\theenumii{}.\arabic*., ref=\theenumi{}.\theenumii{}.\theenumiii{}]
\item \textbf{compute} $\tilde{X}_{n}(t)$ with \eqref{eq:tildeX_2};
\item \textbf{compute} $K_{n}(t)$ and $\phi_{n}(t)$ with \eqref{eq:gain_3}~-~\eqref{eq:phi_rec3};
\item \textbf{compute} $\hat{\theta}_{n}^{RLS}(t)$ with \eqref{eq:rls_par};
\end{enumerate}
\item \textbf{end for};
\end{enumerate}
\item[] \hspace{-0.5cm} \textbf{\underline{Global}}
\begin{enumerate}[label=\theenumi{}.\arabic*., ref=\theenumi{}.\theenumii{}]
\item \textbf{do}
\begin{enumerate}[label=\theenumii{}.\arabic*., ref=\theenumi{}.\theenumii{}.\theenumiii{}]
\item \textbf{compute} $\hat{\theta}_{n}^{ADMM,(k+1)}(t)$ with \eqref{eq:est_admm3}, $n=1,\ldots,N$;
\item \textbf{compute} $\hat{\theta}_{n}^{(k+1)}(t)$ with \eqref{eq:est_dec2}, $n=1,\ldots,N$;
\item \textbf{compute} $\hat{\theta}^{g,(k+1)}$ with \eqref{eq:glob_est};
\item \textbf{compute} $\delta_{n}^{(k+1)}$ with \eqref{step:dual_est}, $n=1,\ldots,N$;
\end{enumerate}
\item \textbf{until} a stopping criterion is satisfied (e.g. the maximum number of iterations is attained);
\end{enumerate}
\end{itemize}
\item \textbf{end}.
\end{enumerate}
\vspace*{.1cm}\hrule\vspace*{.1cm}
~~\textbf{Output}: Estimated global parameters $\{\hat{\theta}^{g}(t)\}_{t=1}^{T}$, estimated local parameters $\{\hat{\theta}_{n}(t)\}_{t=1}^{T}$, $n=1,\ldots,N$.
\end{algorithm}
\subsection{Example 3}
Assume that data are collected over an estimation horizon $T=1000$ from a set of $N=100$ dynamical systems modelled as
\begin{equation}\label{eq:syst3}
y_{n}(t)=\theta_{1}^{g}y_{n}(t-1)+\theta_{n,2}y_{n}(t-2)+\theta_{2}^{g}u_{n}(t-1)+e_{n}(t),
\end{equation}
where $\theta^{g}=\smallmat{0.2 & 0.8}'$ and $\theta_{n,2}$ is sampled from a normal distribution $\mathcal{N}(0.4,0.0025)$, so that it is different for the $N$ systems. The white noise sequences $e_{n}\sim \mathcal{N}(0,R_{n})$ are such that, for the \textquotedblleft informative\textquotedblright \ systems, $R_{n} \in [1 \ 20]$ yields $SNR_{n} \in [3.1,14.6]$~dB (see~\eqref{eq:snr}).\\
Initializing $\phi_{n}$ as $\phi_{n}(0)=0.1I_{n_{\theta}}$, sampling $\hat{\theta}_{n}(0)$ and $\hat{\theta}_{\mathrm{o}}^{g}$ from the distributions $\mathcal{N}(\hat{\theta}^{g},2I_{n_{\theta}})$ and $\mathcal{N}(\hat{\theta}^{g},I_{n_{\theta}})$, respectively, and setting $\{\lambda_{n}=\Lambda\}_{n=1}^{N}$, with $\Lambda=1$, and $\rho=0.1$, the performance of the proposed approach is evaluated. \figurename{~\ref{Fig:globParams3}} shows $\hat{\theta}^{g}$ obtained with ADMM-RLS, along with the estimation error. Observe that the estimates tend to converge to the actual values of the global parameters.
\begin{figure}[!tb]
\centerline{
\begin{tabular}[t]{cc}
\subfigure[$\theta_{1}^{g}$ vs $\hat{\theta}_{1}^{g}$]{\includegraphics[scale=0.7]{thetag1_ex3-eps-converted-to}}
\subfigure[$|\hat{\theta}_{1}^{g}-\theta_{1}^{g}|$]{\includegraphics[scale=0.7]{errg1_ex3-eps-converted-to}}\\
\subfigure[$\theta_{2}^{g}$ vs $\hat{\theta}_{2}^{g}$]{\includegraphics[scale=0.7]{thetag3_ex3-eps-converted-to}}
\subfigure[$|\hat{\theta}_{2}^{g}-\theta_{2}^{g}|$]{\includegraphics[scale=0.7]{errg2_ex3-eps-converted-to}}\\
\end{tabular}
}
\caption{Example 3. True vs estimated global parameters. Black : true, blue : ADMM-RLS.}
\label{Fig:globParams3}
\end{figure}
To further assess the performance of ADMM-RLS, $\theta_{n}$, $\hat{\theta}_{n}$ and $\hat{\theta}_{n}^{RLS}$ obtained for the $5$th system, i.e. $n=5$, are compared in \figurename{~\ref{Fig:localParams3}}. It can be seen that the difference between $\hat{\theta}_{n}^{RLS}$ and $\hat{\theta}_{n}$ is mainly noticeable at the beginning of the estimation horizon; afterwards, $\hat{\theta}_{n}^{RLS}$ and $\hat{\theta}_{n}$ are barely distinguishable. Note that $\mbox{SNR}_{5}=8.9$~dB.
\begin{figure}[!tb]
\begin{tabular}[t]{cc}
\subfigure[$\theta_{5,1}$ vs $\hat{\theta}_{5,1}$ and $\hat{\theta}_{5,1}^{RLS}$]{\includegraphics[scale=0.7]{theta1_5_rlsvsloc_ex3-eps-converted-to}}
\subfigure[$\theta_{5,2}$ vs $\hat{\theta}_{5,2}$ and $\hat{\theta}_{5,2}^{RLS}$]{\includegraphics[scale=0.7]{theta2_5_rlsvsloc_ex3-eps-converted-to}}\\
\multicolumn{1}{c}{\subfigure[$\theta_{5,3}$ vs $\hat{\theta}_{5,3}$ and $\hat{\theta}_{5,3}^{RLS}$]{\includegraphics[scale=0.7]{theta3_5_rlsvsloc_ex3-eps-converted-to}}}
\end{tabular}
\caption{Example 3. Local parameters $\theta_{5,i}$, $i=1,2,3$. Black : true,
blue : $\hat{\theta}_{5}$, red: $\hat{\theta}_{5}^{RLS}$.}
\label{Fig:localParams3}
\end{figure}
\subsubsection{Non-informative agents}
Suppose that among the $N=100$ systems described by the model in \eqref{eq:syst3}, $N_{ni}=20$ randomly chosen agents are non-informative, i.e. their input sequences $u_{n}$ are null and $R_{n}=10^{-8}$.\\
As can be observed from the estimates reported in \figurename{~\ref{Fig:pramsGreedyADMM3}}, $\{\hat{\theta}_{i}^{g}\}_{i=1}^{2}$ converge to the actual values of the global parameters even if $20$\% of the systems provide non-informative data.\\
\begin{figure}[!tb]
\centerline{
\begin{tabular}[!tb]{cc}
\subfigure[$\theta_{1}^{g}$ vs $\hat{\theta}_{1}^{g}$]{\includegraphics[scale=0.7]{thetag1Greedy_ex3NotExcnogreedy-eps-converted-to}}
\subfigure[$\theta_{2}^{g}$ vs $\hat{\theta}_{2}^{g}$]{\includegraphics[scale=0.7]{thetag2Greedy_ex3NotExcnogreedy-eps-converted-to}}\\
\end{tabular}
}
\caption{Example 3. True vs estimated global parameters. Black : true, blue : ADMM-RLS.}
\label{Fig:pramsGreedyADMM3}
\end{figure}
The local estimates $\hat{\theta}_{n,2}$ for the $8$th and $65$th system ($SNR_{65}\approx6$~dB) are reported in \figurename{~\ref{Fig:prams8_65_notexc}}. As the $8$th system is among the ones with a non-exciting input, $\hat{\theta}_{8,2}=\hat{\theta}_{8,2}(0)$ over the estimation horizon. Instead, $\hat{\theta}_{65,2}$ tends to converge to the actual value of $\theta_{65,2}$. Even if the purely local parameter is not retrieved from the data, using the proposed collaborative approach $\theta_{8,1}$ and $\theta_{8,3}$ are accurately estimated (see~\figurename{~\ref{Fig:prams8_notexc}}). We can thus conclude that the proposed estimation method \textquotedblleft forces\textquotedblright \ the estimates of the global components of $\theta_{n}$ to follow $\hat{\theta}^{g}$, which is estimated by automatically discarding the contributions of the systems that lack excitation.
\begin{figure}[!tb]
\centerline{
\begin{tabular}[!tb]{cc}
\subfigure[$\theta_{8,2}$ vs $\hat{\theta}_{8,2}$ \label{Fig:sub_loc8}]{\includegraphics[scale=0.7]{theta2_8_ex3-eps-converted-to}}
\subfigure[$\theta_{65,2}$ vs $\hat{\theta}_{65,2}$\label{Fig:sub_loc65}]{\includegraphics[scale=0.7]{theta2_65_ex3-eps-converted-to}}
\end{tabular}
}
\caption{Example 3. Local parameters $\theta_{n,2}$, $n=8,65$. Black : true, blue : ADMM-RLS.}
\label{Fig:prams8_65_notexc}
\end{figure}
\begin{figure}[!tb]
\centerline{
\begin{tabular}[!tb]{cc}
\subfigure[$\theta_{8,1}$ vs $\hat{\theta}_{8,1}$ and $\hat{\theta}_{8,1}^{RLS}$ ]{\includegraphics[scale=0.7]{theta1_8_locvsrlsex3-eps-converted-to}} &
\subfigure[$\theta_{8,3}$ vs $\hat{\theta}_{8,3}$ and $\hat{\theta}_{8,3}^{RLS}$]{\includegraphics[scale=0.7]{theta3_8_locvsrlsex3-eps-converted-to}}\\
\end{tabular}
}
\caption{Example 3. Local parameters $\theta_{8,i}$, $i=1,3$. Black : true, blue : $\hat{\theta}_{8,i}^{RLS}$, red: $\hat{\theta}_{8,i}$.}
\label{Fig:prams8_notexc}
\end{figure}
\section{Constrained Collaborative estimation for partial consensus}\label{Sec:3}
Suppose that the value of the local parameter $\theta_{n}$ is constrained to a set $\mathcal{C}_{n}$ and that this hypothesis holds for all the agents $n \in \{1,\ldots,N\}$. With the objective of reaching partial consensus among the agents, the problem to be solved can thus be formulated as
\begin{equation}\label{eq:prob_const}
\begin{aligned}
&\mbox{minimize } && \sum_{n=1}^{N} f_{n}(\theta_{n})\\
&\mbox{s.t. }&&P\theta_{n}=\theta^{g}, \ \ n=1,\ldots,N,\\
&&& \theta_{n} \in \mathcal{C}_{n}, \ \ n=1,\ldots,N.
\end{aligned}
\end{equation}
Observe that \eqref{eq:prob_const} corresponds to \eqref{eq:problem} if the nonlinear consensus constraint is replaced with \eqref{eq:partial_constr}.\\
To use ADMM to solve \eqref{eq:prob_const}, the problem has to be modified as
\begin{equation}\label{eq:prob_const2}
\begin{aligned}
&\mbox{minimize } &&\sum_{n=1}^{N} \left\{f_{n}(\theta_{n}) + g_{n}(z_{n})\right\}\\
& \mbox{s.t. } && P\theta_{n}=\theta^{g} \ \ n=1,\ldots,N\\
&&& \theta_{n}=z_{n}, \ \ n=1,\ldots,N
\end{aligned}
\end{equation}
where $\{g_{n}\}_{n=1}^{N}$ are the indicator functions of the sets $\{\mathcal{C}_{n}\}_{n=1}^{N}$ (defined as in \eqref{eq:ind_func}) and $\{z_{n} \in \mathbb{R}^{n_{\theta}}\}_{n=1}^{N}$ are auxiliary variables. Observe that \eqref{eq:prob_const2} can be solved with ADMM. Given the augmented Lagrangian associated with \eqref{eq:prob_const2}, i.e.
\begin{align}\label{eq:lag_const}
\nonumber \mathcal{L}&=\sum_{n=1}^{N}\{f_{n}(\theta_{n})+g_{n}(z_{n})+\delta_{n,1}'(\theta_{n}-z_{n})+\delta_{n,2}'(P\theta_{n}-\theta^{g})+\\
&\hspace{1cm}+\frac{\rho_{1}}{2}\|\theta_{n}-z_{n}\|_{2}^{2}+\frac{\rho_{2}}{2}\|P\theta_{n}-\theta^{g}\|_{2}^{2}\},
\end{align}
the iterations that have to be performed to solve the addressed problem with ADMM are
\begin{align}
&\hat{\theta}_{n}^{(k+1)}(T)=\underset{\theta_{n}}{\argmin} \ \mathcal{L}(\theta_{n},\hat{\theta}^{g,(k)},z_{n}^{(k)},\delta_{n}^{(k)}),
\label{step:loc_est2}\\
&z_{n}^{(k+1)}=\underset{z_{n}}{\argmin} \ \mathcal{L}(\hat{\theta}_{n,(k+1)}(T),\hat{\theta}^{g,(k)},z_{n},\delta_{n}^{(k)}),
\label{step:aux_est2}\\
&\hat{\theta}^{g,(k+1)}=\underset{\theta^{g}}{\argmin} \ \mathcal{L}(\{\hat{\theta}_{n}^{(k+1)}\}_{n=1}^{N},\theta^{g},\{z_{n}^{(k+1)},\delta_{n}^{(k)}\}_{n=1}^{N}), \label{step:glob_est2}\\
&\delta_{n,1}^{(k+1)}=\delta_{n,1}^{(k)}+\rho_{1}(\hat{\theta}_{n}^{(k+1)}(T)-z_{n}^{(k+1)}),\label{eq:dual1_update}\\
&\delta_{n,2}^{(k+1)}=\delta_{n,2}^{(k)}+\rho_{2}(P\hat{\theta}_{n}^{(k+1)}(T)-\hat{\theta}^{g,(k+1)}).\label{eq:dual2_update}
\end{align}
Note that two sets of Lagrange multipliers, $\{\delta_{n,1}\}_{n=1}^{N}$ and $\{\delta_{n,2}\}_{n=1}^{N}$, have been introduced. While $\delta_{n,1} \in \mathbb{R}^{n_{\theta}}$ is associated with the constraint $\theta_{n}=z_{n}$, i.e. $\theta_{n} \in \mathcal{C}_{n}$, $\delta_{n,2} \in \mathbb{R}^{n_{g}}$ is related to the partial consensus constraint, $n=1,\ldots,N$.\\
Solving \eqref{step:aux_est2}-\eqref{step:glob_est2}, the resulting updates for the auxiliary variables and the global estimates are
\begin{align}
z_{n}^{(k+1)}=&\mathcal{P}_{\mathcal{C}_{n}}\left(\hat{\theta}_{n}^{(k+1)}(T)+\frac{1}{\rho_{1}}\delta_{n,1}^{(k)}\right), \ \ n=1,\ldots,N, \label{eq:z_update}\\
\hat{\theta}^{g,(k+1)}&=\frac{1}{N}\sum_{n=1}^{N}\left(P\hat{\theta}_{n}^{(k+1)}(T)+\frac{1}{\rho_{2}}\delta_{n,2}^{(k)} \right). \label{eq:thetag_update}
\end{align}
Observe that the $z$-update is performed by projecting onto the set $\mathcal{C}_{n}$ a combination of the updated local estimate and $\delta_{n,1}^{(k)}$, while $\hat{\theta}^{g,(k+1)}$ is computed as in Section~\ref{Sec:2}, with $\delta_{n}$ replaced by $\delta_{n,2}$.\\
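For the box constraints used in Example~4 below, the projection in \eqref{eq:z_update} reduces to a componentwise clipping; a minimal sketch is the following.
\begin{verbatim}
def z_update_box(theta_n, delta_n1, rho1, lower, upper):
    """(eq:z_update) when C_n is a box [lower, upper] (componentwise):
    the projection P_{C_n} is a simple clipping operation."""
    return np.clip(theta_n + delta_n1 / rho1, lower, upper)
\end{verbatim}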
Consider the closed-form solution of \eqref{step:loc_est2}, which is given by
\begin{align}
\hat{\theta}_{n}^{(k+1)}(T)&=\phi_{n}(T)\left\{\mathcal{Y}_{n}(T)-\delta_{n,1}^{(k)}-P'\delta_{n,2}^{(k)}+\rho_{1}z_{n}^{(k)}+\rho_{2}P'\hat{\theta}^{g,(k)} \right\}, \label{eq:explicit1}\\
\mathcal{Y}_{n}(t)&=\sum_{\tau=1}^{t}\lambda_{n}^{t-\tau} X_{n}(\tau)y_{n}(\tau),\\
\phi_{n}(t)&=\left(\left[\sum_{\tau=1}^{t}\lambda_{n}^{t-\tau} X_{n}(\tau)X_{n}(\tau)'\right]+\rho_{1}I_{n_{\theta}}+\rho_{2}P'P\right)^{-1}. \label{eq:phi_3}
\end{align}
Aiming at finding recursive formulas to update the estimates of the local parameters, we introduce the $n$th local estimate obtained at $T-1$, i.e.
\begin{align}\label{eq:prev_1}
\nonumber \hat{\theta}_{n}(T-1)&=\phi_{n}(T-1)\left\{\mathcal{Y}_{n}(T-1)-\delta_{n,1}(T-1)-P'\delta_{n,2}(T-1)+\right.\\
&\hspace{1cm}\left.+\rho_{1}z_{n}(T-1)+\rho_{2}P'\hat{\theta}^{g}(T-1) \right\}
\end{align}
with $\delta_{n,1}(T-1)$, $\delta_{n,2}(T-1)$, $z_{n}(T-1)$ and $\hat{\theta}^{g}(T-1)$ being the Lagrange multipliers, the auxiliary variable and the global estimate obtained at $T-1$, respectively.\\
To obtain recursive formulas to compute $\hat{\theta}_{n}^{(k+1)}$, we start by proving that $\phi_{n}(T)$ can be computed as a function of $\phi_{n}(T-1)$. In particular, introducing
\begin{equation*}
\mathcal{X}_{n}(t)=\sum_{\tau=1}^{t}\lambda_{n}^{t-\tau} X_{n}(\tau)(X_{n}(\tau))',
\end{equation*}
note that
\begin{align*}
&\phi_{n}(T)^{-1}=\mathcal{X}_{n}(T)+\rho_{1}I_{n_{\theta}}+\rho_{2}P'P=\\
&=\lambda_{n}\mathcal{X}_{n}(T-1)+X_{n}(T)X_{n}(T)'+\rho_{1}I_{n_{\theta}}+\rho_{2}P'P=\\
&=\lambda_{n}\left[\mathcal{X}_{n}(T-1)+\rho_{1}I_{n_{\theta}}+\rho_{2}P'P\right]+X_{n}(T)X_{n}(T)'+(1-\lambda_{n})\rho_{1}I_{n_{\theta}}+(1-\lambda_{n})\rho_{2}P'P=\\
&=\lambda_{n}\phi_{n}(T-1)^{-1}+X_{n}(T)X_{n}(T)'+(1-\lambda_{n})\rho_{1}I_{n_{\theta}}+(1-\lambda_{n})\rho_{2}P'P.
\end{align*}
Defining the extended regressor as
\begin{equation}\label{eq:tildeX_3}
\tilde{X}_{n}(T)=\begin{bmatrix}
X_{n}(T) & \sqrt{(1-\lambda_{n})\rho_{1}}I_{n_{\theta}} & \sqrt{(1-\lambda_{n})\rho_{2}}P'
\end{bmatrix} \in \mathbb{R}^{n_{\theta}\times(n_{y}+n_{\theta}+n_{g})},
\end{equation}
and applying the matrix inversion lemma, it can be easily proven that $\phi_{n}(T)$ can then be computed as:
\begin{align}
\mathcal{R}_{n}(T)&=\lambda_{n}I_{(n_{y}+n_{\theta}+n_{g})}+(\tilde{X}_{n}(T))'\phi_{n}(T-1)\tilde{X}_{n}(T),\\
K_{n}(T)&=\phi_{n}(T-1)\tilde{X}_{n}(T)(\mathcal{R}_{n}(T))^{-1},\label{eq:gain_4}\\
\phi_{n}(T)&=\lambda_{n}^{-1}(I_{n_{\theta}}-K_{n}(T)\tilde{X}_{n}(T)')\phi_{n}(T-1).\label{eq:phi_rec4}
\end{align}
The same observations relative to the update of $\phi_{n}$ made in Section~\ref{Sec:2} hold also in the considered case.\\
Consider \eqref{eq:explicit1}. Adding and subtracting
\begin{equation*}
\lambda_{n}\left[-\delta_{n,1}(T-1)-P'\delta_{n,2}(T-1)+\rho_{1}z_{n}(T-1)+\rho_{2}P'\hat{\theta}^{g}(T-1)\right]
\end{equation*}
to \eqref{eq:explicit1} and considering the definition of $\phi_{n}(T-1)$ (see~\eqref{eq:phi_3}), the formula to update $\hat{\theta}_{n}$ can be further simplified as
\begin{align} \nonumber &\hat{\theta}_{n}^{(k+1)}(T)=\phi_{n}(T)\{\lambda_{n}\left(\mathcal{Y}_{n}(T-1)-\delta_{n,1}(T-1)-P'\delta_{n,2}(T-1)+\right.\\
\nonumber &\hspace{0.2cm}\left.+\rho_{1}z_{n}(T-1)+\rho_{2}P'\hat{\theta}^{g}(T-1)\right) +X_{n}(T)y_{n}(T)+\rho_{1}(z_{n}^{(k)}-\lambda_{n}z_{n}(T-1))+\\
\nonumber&\hspace{0.2cm}+\rho_{2}P'(\hat{\theta}^{g,(k)}-\lambda_{n}\hat{\theta}^{g}(T-1))-(\delta_{n,1}^{(k)}-\lambda_{n}\delta_{n,1}(T-1))+\\
\nonumber&\hspace{0.2cm}-P'(\delta_{n,2}^{(k)}-\lambda_{n}\delta_{n,2}(T-1))\}=\\
\nonumber &\hspace{0cm}=\hat{\theta}_{n}(T-1)-K_{n}(T)(\tilde{X}_{n}(T))'\hat{\theta}_{n}(T-1)+\phi_{n}(T)\{X_{n}(T)y_{n}(T)+\\
\nonumber &\hspace{0.2cm} +\rho_{1}(z_{n}^{(k)}-\lambda_{n}z_{n}(T-1))+\rho_{2}P'(\hat{\theta}^{g,(k)}-\lambda_{n}\hat{\theta}^{g}(T-1))+\\
\nonumber & \hspace{0.2cm} -(\delta_{n,1}^{(k)}-\lambda_{n}\delta_{n,1}(T-1))-P'(\delta_{n,2}^{(k)}-\lambda_{n}\delta_{n,2}(T-1))\}=\\
&= \hat{\theta}_{n}^{RLS}(T)+\hat{\theta}_{n}^{ADMM,(k+1)}(T). \label{eq:est_dec3}
\end{align}
In particular,
\begin{align}\label{eq:est_rls4} \nonumber\hat{\theta}_{n}^{RLS}(T)&=\phi_{n}(T)\lambda_{n}\left(\mathcal{Y}_{n}(T-1) -\delta_{n,1}(T-1)-P'\delta_{n,2}(T-1)+\rho_{1}z_{n}(T-1)+\right.\\
&\hspace{1cm}\left.+\rho_{2}P'\hat{\theta}^{g}(T-1)\right)+\phi_{n}(T)X_{n}(T)y_{n}(T),
\end{align}
while
\begin{equation}\label{eq:est_admm4}
\hat{\theta}_{n}^{ADMM,(k+1)}(T)=\phi_{n}(T)\left[\rho_{1}\Delta_{z,\lambda_{n}}^{(k+1)}(T)+\rho_{2}P' \Delta_{g,\lambda_{n}}^{(k+1)}(T)- \Delta_{1,\lambda_{n}}^{(k+1)}-P'\Delta_{2,\lambda_{n}}^{(k+1)}\right],
\end{equation}
with
\begin{align*} &\Delta_{z,\lambda_{n}}^{(k+1)}(T)=z_{n}^{(k)}-\lambda_{n}z_{n}(T-1),\\ &\Delta_{g,\lambda_{n}}^{(k+1)}(T)=\hat{\theta}^{g,(k)}-\lambda_{n}\hat{\theta}^{g}(T-1),\\ &\Delta_{1,\lambda_{n}}^{(k+1)}=\delta_{n,1}^{(k)}-\lambda_{n}\delta_{n,1}(T-1),\\
&\Delta_{2,\lambda_{n}}^{(k+1)}=\delta_{n,2}^{(k)}-\lambda_{n}\delta_{n,2}(T-1).
\end{align*}
Note that \eqref{eq:est_admm4} differs from \eqref{eq:est_admm3} because of the introduction of the additional terms $\Delta_{z,\lambda_{n}}$ and $\Delta_{1,\lambda_{n}}$.\\
Similarly to what is presented in Section~\ref{Sec:2}, thanks to \eqref{eq:phi_rec4} the formula to update $\hat{\theta}_{n}^{RLS}$ can be further simplified as
\begin{align*}
\hat{\theta}_{n}^{RLS}&=\hat{\theta}_{n}(T-1)-K_{n}(T)(\tilde{X}_{n}(T))'\hat{\theta}_{n}(T-1)+\phi_{n}(T)X_{n}(T)y_{n}(T)=\\
&=\hat{\theta}_{n}(T-1)-K_{n}(T)(\tilde{X}_{n}(T))'\hat{\theta}_{n}(T-1)+\phi_{n}(T)\tilde{X}_{n}(T)\tilde{y}_{n}(T),
\end{align*}
where the extended measurement vector $\tilde{y}_{n}(T)$ is defined as
\begin{equation*}
\tilde{y}_{n}(T)=\begin{bmatrix}y_{n}(T)' & 0_{1\times n_{\theta}} & 0_{1\times n_{g}}\end{bmatrix}'.
\end{equation*}
Exploiting the equality $K_{n}(T)=\phi_{n}(T)\tilde{X}_{n}(T)$ (the proof can be found in Section~\ref{Sec:1}), it can thus be proven that
\begin{equation}\label{eq:rls_parC}
\hat{\theta}_{n}^{RLS}=\hat{\theta}_{n}(T-1)+K_{n}(T)(\tilde{y}_{n}(T)-(\tilde{X}_{n}(T))'\hat{\theta}_{n}(T-1)).
\end{equation}
It is worth remarking that $\hat{\theta}_{n}^{RLS}$ can be updated ($i$) locally, ($ii$) recursively and ($iii$) once per step $t$.
\begin{algorithm}[!b]
\caption{ADMM-RLS algorithm for constrained consensus}
\label{algo6}
~~\textbf{Input}: Sequence of observations $\{X_{n}(t),y_{n}(t)\}_{t=1}^T$, initial matrices $\phi_{n}(0) \in \mathbb{R}^{n_{\theta} \times n_{\theta}}$, initial local estimates $\hat{\theta}_{n}(0)$, initial dual variables $\delta_{n,1}^{\mathrm{o}}$ and $\delta_{n,2}^{\mathrm{o}}$, initial auxiliary variables $\hat{z}_{n,\mathrm{o}}$, forgetting factors $\lambda_{n}$, $n=1,\ldots,N$, initial global estimate $\hat{\theta}_{\mathrm{o}}^{g}$, parameters $\rho_{1}, \rho_{2} \in \mathbb{R}^{+}$.
\vspace*{.1cm}\hrule\vspace*{.1cm}
\begin{enumerate}[label=\arabic*., ref=\theenumi{}]
\item \textbf{for} $t=1,\ldots,T$ \textbf{do}
\begin{itemize}
\item[] \hspace{-0.5cm} \textbf{\underline{Local}}
\begin{enumerate}[label=\theenumi{}.\arabic*., ref=\theenumi{}.\theenumii{}]
\item \textbf{for} $n=1,\ldots,N$ \textbf{do}
\begin{enumerate}[label=\theenumii{}.\arabic*., ref=\theenumi{}.\theenumii{}.\theenumiii{}]
\item \textbf{compute} $\tilde{X}_{n}(t)$ with \eqref{eq:tildeX_3};
\item \textbf{compute} $K_{n}(t)$ and $\phi_{n}(t)$ with \eqref{eq:gain_4}~-~\eqref{eq:phi_rec4};
\item \textbf{compute} $\hat{\theta}_{n}^{RLS}(t)$ with \eqref{eq:rls_parC};
\end{enumerate}
\item \textbf{end for};
\end{enumerate}
\item[] \hspace{-0.5cm} \textbf{\underline{Global}}
\begin{enumerate}[label=\theenumi{}.\arabic*., ref=\theenumi{}.\theenumii{}]
\item \textbf{do}
\begin{enumerate}[label=\theenumii{}.\arabic*., ref=\theenumi{}.\theenumii{}.\theenumiii{}]
\item \textbf{compute} $\hat{\theta}_{n}^{ADMM,(k+1)}(t)$ with \eqref{eq:est_admm4}, $n=1,\ldots,N$;
\item \textbf{compute} $\hat{\theta}_{n}^{(k+1)}(t)$ with \eqref{eq:est_dec3}, $n=1,\ldots,N$;
\item \textbf{compute} $z_{n}^{(k+1)}(t)$ with \eqref{eq:z_update}, $n=1,\ldots,N$;
\item \textbf{compute} $\hat{\theta}^{g,(k+1)}$ with \eqref{eq:thetag_update};
\item \textbf{compute} $\delta_{n,1}^{(k+1)}$ with \eqref{eq:dual1_update}, $n=1,\ldots,N$;
\item \textbf{compute} $\delta_{n,2}^{(k+1)}$ with \eqref{eq:dual2_update}, $n=1,\ldots,N$;
\end{enumerate}
\item \textbf{until} a stopping criterion is satisfied (e.g., maximum number of iterations attained);
\end{enumerate}
\end{itemize}
\item \textbf{end}.
\end{enumerate}
\vspace*{.1cm}\hrule\vspace*{.1cm}
~~\textbf{Output}: Estimated global parameters $\{\hat{\theta}^{g}(t)\}_{t=1}^{T}$, estimated local parameters $\{\hat{\theta}_{n}(t)\}_{t=1}^{T}$, $n=1,\ldots,N$.
\end{algorithm}
\begin{remark}
The proposed method, summarized in Algorithm~\ref{algo6} and in \figurename{~\ref{Fig:ADMMscheme}}, requires the agents to transmit $\{\hat{\theta}_{n}^{RLS},\phi_{n}\}$ to the \textquotedblleft cloud\textquotedblright, while the \textquotedblleft cloud\textquotedblright \ has to communicate $\hat{\theta}_{n}$ to each node once it has been computed. As a consequence, an N2C2N transmission scheme is required. \hfill $\blacksquare$
\end{remark}
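For concreteness, the following Python sketch illustrates one time step $t$ of Algorithm~\ref{algo6}. It is a minimal illustration rather than the exact implementation: the extended regressor is assumed to fold the two regularization terms into $\tilde{X}_{n}$ (cf.~\eqref{eq:tildeX_3}), $\mathcal{R}_{n}$ is taken as the standard RLS innovation covariance, and the $z$-, $\hat{\theta}^{g}$- and dual-variable updates (whose exact forms are given by \eqref{eq:z_update}--\eqref{eq:dual2_update}) are replaced here by standard ADMM steps, namely projection onto the box, averaging, and scaled dual ascent. All names and dimensions are illustrative.
\begin{verbatim}
import numpy as np

def admm_rls_step(Phi, theta_prev, X, y, P, lam, rho1, rho2,
                  z_prev, tg_prev, d1_prev, d2_prev, lb, ub, K_admm=50):
    """One time step t of an ADMM-RLS-type scheme (illustrative sketch)."""
    N = len(theta_prev)
    n_th = theta_prev[0].size
    theta_rls = []
    for n in range(N):
        # Local stage: extended regressor folding in the regularization
        # terms (assumed form, cf. eq. (tildeX_3)), gain and covariance
        Xt = np.hstack([X[n], np.sqrt(rho1)*np.eye(n_th), np.sqrt(rho2)*P.T])
        R = lam[n]*np.eye(Xt.shape[1]) + Xt.T @ Phi[n] @ Xt
        K = Phi[n] @ Xt @ np.linalg.inv(R)              # cf. eq. (gain_4)
        Phi[n] = (np.eye(n_th) - K @ Xt.T) @ Phi[n] / lam[n]
        yt = np.concatenate([y[n], np.zeros(Xt.shape[1] - y[n].size)])
        theta_rls.append(theta_prev[n] + K @ (yt - Xt.T @ theta_prev[n]))

    # Global stage on the cloud: ADMM corrections, cf. eqs. (est_admm4), (est_dec3)
    z = [zn.copy() for zn in z_prev]
    tg = tg_prev.copy()
    d1 = [d.copy() for d in d1_prev]
    d2 = [d.copy() for d in d2_prev]
    theta = list(theta_rls)
    for _ in range(K_admm):
        for n in range(N):
            corr = Phi[n] @ (rho1*(z[n] - lam[n]*z_prev[n])
                             + rho2*P.T @ (tg - lam[n]*tg_prev)
                             - (d1[n] - lam[n]*d1_prev[n])
                             - P.T @ (d2[n] - lam[n]*d2_prev[n]))
            theta[n] = theta_rls[n] + corr
            z[n] = np.clip(theta[n] + d1[n]/rho1, lb[n], ub[n])  # assumed z-update
            d1[n] = d1[n] + rho1*(theta[n] - z[n])               # assumed dual step
        tg = np.mean([P @ th for th in theta], axis=0)           # assumed averaging
        for n in range(N):
            d2[n] = d2[n] + rho2*(P @ theta[n] - tg)             # assumed dual step
    return theta, z, tg, d1, d2, Phi
\end{verbatim}
The local stage runs once per step $t$ and agent $n$, while the global stage iterates on the cloud until a stopping criterion is met, mirroring the structure of Algorithm~\ref{algo6}.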
\subsection{Example 4}
Suppose that the data are gathered from $N=100$ systems, described by \eqref{eq:syst3} and collected over an estimation horizon $T=5000$. Moreover, assume that the available a priori information constrains the parameter estimates to the following ranges:
\begin{equation}\label{eq:constraints}
\begin{aligned}
& \ell_{n,1}\leq \hat{\theta}_{n,1} \leq up_{n,1}, \qquad \ell_{n,2}\leq \hat{\theta}_{n,2} \leq up_{n,2},\\
& \ell_{n,3}\leq \hat{\theta}_{n,3} \leq up_{n,3}.
\end{aligned}
\end{equation}
Observe that the parameters $\rho_{1},\rho_{2} \in \mathbb{R}^{+}$ have to be tuned. To assess how the choice of these two parameters affects the satisfaction of \eqref{eq:constraints}, consider the number of steps at which the local estimates violate the constraints over the estimation horizon $T$, $\{{N}_{i}^{b}\}_{i=1}^{3}$. Assuming that \textquotedblleft negligible\textquotedblright \ violations of the constraints are allowed, \eqref{eq:constraints} are considered violated only if the estimated parameters fall outside the inflated interval $\mathcal{B}_{n}=\smallmat{\ell_{n}-10^{-4} & up_{n}+10^{-4}}$. Considering the set of constraints
\begin{equation*}
\mathcal{S}_{2}=\{\ell_{n}=\smallmat{0.19 & \theta_{n,2}-0.1 & 0.79}, up_{n}=\smallmat{0.21 & \theta_{n,2}+0.1 & 0.81}\},
\end{equation*}
\figurename{~\ref{Fig:rhostudy_ex1}} shows the average percentage of violations over the $N$ agents obtained by fixing $\rho_{2}=0.1$ and choosing
\begin{equation*}
\rho_{1}\in\{10^{-5},10^{-4},10^{-3},10^{-2},10^{-1},1,10,20\}.
\end{equation*}
Observe that if $\rho_{1}$ dominates over $\rho_{2}$ the number of violations tends to decrease, since in the augmented Lagrangian the constraints \eqref{eq:constraints} are weighted more heavily than the consensus constraint. However, if $\rho_{1}/\rho_{2}>100$, $\{\bar{N}_{i}^{b}\}_{i=1}^{3}$ tend to slightly increase. It is thus important to trade off the weights attributed to \eqref{eq:constraints} and to the consensus constraint.
\begin{figure}[!tb]
\centering
\includegraphics[scale=0.7]{rostudy_ex4-eps-converted-to}
\caption{Example 4. $\bar{N}^{b}$ vs $\rho_{1}/\rho_{2}$: black = $\bar{N}_{1}^{b}$, red = $\bar{N}_{2}^{b}$, blue = $\bar{N}_{3}^{b}$.}
\label{Fig:rhostudy_ex1}
\end{figure}
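For reference, the violation counters $\{N_{i}^{b}\}_{i=1}^{3}$ used above can be computed as in the following minimal sketch, assuming the estimates of one agent are stored as a $T\times n_{\theta}$ array (names illustrative):
\begin{verbatim}
import numpy as np

def violation_counts(theta_hat, lb, ub, tol=1e-4):
    """Count, per parameter, the steps at which the estimate leaves
    the inflated box B_n = [lb - tol, ub + tol].

    theta_hat : (T, n_theta) array of estimates for one agent
    lb, ub    : (n_theta,) lower/upper bounds
    """
    outside = (theta_hat < lb - tol) | (theta_hat > ub + tol)
    return outside.sum(axis=0)   # N_i^b, i = 1, ..., n_theta

# Average percentage of violations over the N agents:
# Nb_bar = 100 * np.mean([violation_counts(th, lb[n], ub[n]) / T
#                         for n, th in enumerate(all_estimates)], axis=0)
\end{verbatim}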
To evaluate how the tightness of the constraints affects the choice of the parameters, $\{N_{i}^{b}\}_{i=1}^{3}$ are computed considering three different sets of box constraints
\begin{align*}
\mathcal{S}_{1}&=\{\ell_{n}=\smallmat{0.195 & \theta_{n,2}-0.05 & 0.795}, up_{n}=\smallmat{0.205 & \theta_{n,2}+0.05 & 0.805}\},\\ \mathcal{S}_{2}&=\{\ell_{n}=\smallmat{0.19 & \theta_{n,2}-0.1 & 0.79}, up_{n}=\smallmat{0.21 & \theta_{n,2}+0.1 & 0.81}\},\\ \mathcal{S}_{3}&=\{\ell_{n}=\smallmat{0.15 & \theta_{n,2}-0.5 & 0.75}, up_{n}=\smallmat{0.25 & \theta_{n,2}+0.5 & 0.85}\}.
\end{align*}
The resulting $\{\bar{N}_{i}^{b}\}_{i=1}^{3}$ are reported in \figurename{~\ref{Fig:rhostudy2_ex1}}.
\begin{figure}[!tb]
\centerline{
\begin{tabular}[!tb]{cc}
\subfigure[$\bar{N}_{1}^{b}$]{\includegraphics[scale=0.7]{rostudy21_ex4-eps-converted-to}}
\subfigure[$\bar{N}_{2}^{b}$]{\includegraphics[scale=0.7]{rostudy22_ex4-eps-converted-to}}\\
\multicolumn{1}{c}{\subfigure[$\bar{N}_{3}^{b}$]{\includegraphics[scale=0.7]{rostudy23_ex4-eps-converted-to}}}\\
\end{tabular}
}
\caption{Example 4. Average percentage of constraint violations $\bar{N}_{i}^{b}$ \%, $i=1,2,3$, vs $\rho_{1}/\rho_{2}$. Black : $\mathcal{S}_{1}$, red : $\mathcal{S}_{2}$, blue : $\mathcal{S}_{3}$.}
\label{Fig:rhostudy2_ex1}
\end{figure}
Note that also in this case the higher the ratio $\rho_{1}/\rho_{2}$ is, the smaller $\{\bar{N}_{i}^{b}\}_{i=1}^{3}$ are, and that the constraint violations again tend to increase for $\rho_{1}/\rho_{2}>100$.\\
Focusing on the assessment of the ADMM-RLS performance when the set of constraints is $\mathcal{S}_{2}$, \figurename{~\ref{Fig:glob_ex4}} shows the global estimates obtained using the same initial conditions and forgetting factors as in Section~\ref{Sec:3}, with $\rho_{1}=10$ and $\rho_{2}=0.1$.
\begin{figure}[!tb]
\centerline{
\begin{tabular}[t]{cc}
\subfigure[$\theta_{1}^{g}$ vs $\hat{\theta}_{1}^{g}$]{\includegraphics[scale=0.7]{thetag1_ex4-eps-converted-to}}
\subfigure[$\theta_{2}^{g}$ vs $\hat{\theta}_{2}^{g}$]{\includegraphics[scale=0.7]{thetag2_ex4-eps-converted-to}}\\
\end{tabular}
}
\caption{Example 4. Global model parameters: black = true, blue = ADMM-RLS, red = upper and lower bounds.}
\label{Fig:glob_ex4}
\end{figure}
Note that the global estimates satisfy \eqref{eq:constraints}, showing that the constraints on the global estimate are automatically enforced by imposing $\theta_{n} \in \mathcal{C}_{n}$.
As for the RMSEs for $\hat{\theta}^{g}$ \eqref{eq:rmse_glob}, they are equal to:
\begin{align*}
RMSE_{1}^{g}=0.001 \mbox{ and } RMSE_{2}^{g}=0.006,
\end{align*}
and their relatively small values can be attributed to the introduction of the additional constraints, which help limit the resulting estimation error.\\
\figurename{~\ref{Fig:loc_ex4}} shows the estimate $\hat{\theta}_{n}$ for $n=11$, with $SNR_{11}=10.6$~dB. Note that the estimated parameters tend to satisfy the constraints.
\begin{figure}[!tb]
\centerline{
\begin{tabular}[t]{cc}
\subfigure[$\theta_{11,1}$ vs $\hat{\theta}_{11,1}$]{\includegraphics[scale=0.7]{theta111_ex4-eps-converted-to}}
\subfigure[$\theta_{11,2}$ vs $\hat{\theta}_{11,2}$]{\includegraphics[scale=0.7]{theta112_ex4-eps-converted-to}}\\
\multicolumn{1}{c}{\subfigure[$\theta_{11,3}$ vs $\hat{\theta}_{11,3}$]{\includegraphics[scale=0.7]{theta113_ex4-eps-converted-to}}}\\
\end{tabular}
}
\caption{Local parameter $\theta_{n}$, $n=11$. Black : true, blue : ADMM-RLS, red : upper and lower bounds}
\label{Fig:loc_ex4}
\end{figure}
In \figurename{~\ref{Fig:locvsrls_ex4}} $\hat{\theta}_{n}$ and $\hat{\theta}_{n}^{RLS}$, with $n=11$, are compared. As can be noticed, while $\hat{\theta}_{11}$ satisfies the imposed constraints, the effect of using $\hat{\theta}_{11}$ to update $\hat{\theta}_{11}^{RLS}$ (see~\eqref{eq:rls_parC}) is not strong enough to also enforce satisfaction of the constraints by the estimates computed locally.
\begin{figure}[!tb]
\centerline{
\begin{tabular}[t]{cc}
\subfigure[$\theta_{11,1}$ vs $\hat{\theta}_{11,1}$ and $\hat{\theta}_{11,1}^{RLS}$]{\includegraphics[scale=0.7]{theta111_locvsrlsex4-eps-converted-to}}
\subfigure[$\theta_{11,2}$ vs $\hat{\theta}_{11,2}$ and $\hat{\theta}_{11,2}^{RLS}$]{\includegraphics[scale=0.7]{theta112_locvsrlsex4-eps-converted-to}}\\
\multicolumn{1}{c}{\subfigure[$\theta_{11,3}$ vs $\hat{\theta}_{11,3}$ and $\hat{\theta}_{11,3}^{RLS}$]{\includegraphics[scale=0.7]{theta113_locvsrlsex4-eps-converted-to}}}\\
\end{tabular}
}
\caption{Local parameter $\theta_{n}$, $n=11$, for $t \in [1 \ 1000]$. Black : true, blue : $\hat{\theta}_{11}^{RLS}$, cyan : $\hat{\theta}_{11}$, red : upper and lower bounds}
\label{Fig:locvsrls_ex4}
\end{figure}
To further assess the performance of the proposed approach, the RMSE for the local estimates
\begin{equation}\label{eq:rmse_loc}
\mathrm{RMSE}_{n,i}=\sqrt{\frac{\sum_{t=1}^T\left(\theta_{n,i}-\hat{\theta}_{n,i}(t)\right)^2}{T}}
\end{equation}
is also considered. The value of $\mathrm{RMSE}_{n,2}$ obtained for each of the $N$ systems is reported in \figurename{~\ref{Fig:rmse_ex4}} and, as can be noticed, it is relatively small. As for the global parameters' estimates, this result can be attributed to the introduction of the additional constraints.
\begin{figure}[!tb]
\centering
\includegraphics[scale=0.7]{rmse_ex4-eps-converted-to}
\caption{$RMSE_{2}$ for each agent $n$, $n=1,\ldots,N$.}
\label{Fig:rmse_ex4}
\end{figure}
\section{Concluding Remarks and Future Work}\label{Sec:Conclusions}
In this report a method for collaborative least-squares parameter estimation is presented, based on output measurements from multiple systems which can perform local computations and are also connected to a centralized resource in the \textquotedblleft cloud\textquotedblright. The approach includes two stages: ($i$) a \emph{local} stage, where estimates of the unknown parameters are obtained using the locally available data, and ($ii$) a \emph{global} stage, performed on the cloud, where the local estimates are fused.\\
Future research will address extensions of the method to the nonlinear and multi-class consensus cases. Moreover, an alternative solution of the problem will be studied so as to replace the transmission policy required now, i.e. N2C2N, with a Node-to-Cloud (N2C) communication scheme. This change should help alleviate problems associated with the communication latency between the cloud and the nodes. Moreover, it should enable local estimators that run independently of the data transmitted by the cloud and do not require synchronous processing by the nodes and the \textquotedblleft cloud\textquotedblright. Other solutions to further reduce the transmission complexity and to obtain an asynchronous scheme with the same characteristics as the one presented in this report will also be investigated.
\begin{appendices}
\section{Centralized RLS}\label{Appendix:A}
Consider problem \eqref{eq:gen_consensus}, with the cost functions given by
\begin{equation*}
f_{n}(\theta_{n})=\frac{1}{2}\sum_{t=1}^{T}\|y_{n}(t)-(X_{n}(t))'\theta_{n}\|_{2}^{2}.
\end{equation*}
The addressed problem can be solved in a fully centralized fashion, if at each step $t$ all the agents transmit the collected data pairs $\{y_{n}(t),X_{n}(t)\}$, $n=1,\ldots,N$, to the \textquotedblleft cloud\textquotedblright. This allows the creation of the lumped measurement vector and regressor, given by
\begin{equation}
\begin{aligned}
\check{y}(t)&=\begin{bmatrix}y_{1}(t)' & \ldots & y_{N}(t)'\end{bmatrix}' \in \mathbb{R}^{N \cdot n_{y} \times 1 }, \\
\check{X}(t)&=\begin{bmatrix}X_{1}(t)' & \ldots & X_{N}(t)'\end{bmatrix}' \in \mathbb{R}^{ n_{\theta}\times n_{y} \cdot N}.
\end{aligned}
\end{equation}
Through the introduction of the lumped vectors, \eqref{eq:gen_consensus} with $f_{n}$ as in \eqref{eq:least_sqCost} is equivalent to
\begin{equation}\label{eq:c_RLS}
\min_{\theta^{g}} \frac{1}{2}\sum_{t=1}^{T} \left\|\check{y}(t)-(\check{X}(t))'\theta^{g}\right\|_{2}^{2}.
\end{equation}
The estimate for the unknown parameters $\hat{\theta}^{g}$ can thus be retrieved by applying standard RLS (see \cite{ljung1999system}), i.e. by performing at each step $t$ the following iterations
\begin{align}
\mathcal{K}(t)&=\phi(t-1)\check{X}(t)\left(I_{N\cdot n_{y}}+(\check{X}(t))'\phi(t-1)\check{X}(t)\right)^{-1},\\
\phi(t)&=\left(I_{n_{\theta}}-\mathcal{K}(t)(\check{X}(t))'\right)\phi(t-1),\\
\hat{\theta}^{g}(t)&=\hat{\theta}^{g}(t-1)+\mathcal{K}(t)\left(\check{y}(t)-(\check{X}(t))'\hat{\theta}^{g}(t-1)\right),
\end{align}
with $I_{N\cdot n_{y}}$ denoting the identity matrix of dimension $N\cdot n_{y}$.
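A compact numerical transcription of these recursions reads as follows (a sketch with hypothetical data containers; the initial covariance is an assumption):
\begin{verbatim}
import numpy as np

def centralized_rls(X_seq, y_seq, n_theta, phi0=1e3):
    """Centralized RLS on the lumped data (illustrative sketch).

    X_seq[t]: lumped regressor, shape (n_theta, N*n_y)
    y_seq[t]: lumped measurement vector, shape (N*n_y,)
    """
    theta = np.zeros(n_theta)
    Phi = phi0 * np.eye(n_theta)   # large initial covariance (assumed)
    for Xc, yc in zip(X_seq, y_seq):
        D = Xc.shape[1]            # D = N * n_y
        K = Phi @ Xc @ np.linalg.inv(np.eye(D) + Xc.T @ Phi @ Xc)
        Phi = (np.eye(n_theta) - K @ Xc.T) @ Phi
        theta = theta + K @ (yc - Xc.T @ theta)
    return theta
\end{verbatim}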
\end{appendices}
\bibliographystyle{plain}
\section{Introduction}\label{sec:intro}
Modern optical superresolution (OSR) imaging has drawn much interest over the past fifty years, starting with the pioneering modern work of Rushforth and Harris \cite{Rushforth68} on the role of noise in classical image restoration from spatially filtered images. Novel optical designs utilizing super-oscillating point-spread functions (PSFs) \cite{Berry06, Yuan16, Gbur19}, new metamaterial based super-lenses \cite{Durant06, Jacob06, Salandrino06, Liu07}, structured-illumination microscopy (SIM) \cite{Gustaffson00}, superresolution optical fluctuation imaging (SOFI) \cite{SOFI09}, superresolution imaging with quantum emitters employing quantum correlations in second \cite{Schwartz12} and higher orders \cite{Schwartz13, Monticone14, Israel17}, and SIM enhanced by quantum correlations of single emitters \cite{Classen17}, pushed at the theoretical limits of super-resolution in different ways. They all have practical limitations of one form or another, however, and achieve only moderate improvements by factors of order 2-3 even at very high signal-to-noise ratios.
It was not until more recently that single-molecule localization imaging using uncorrelated photons from randomly photoactivated, well separated individual molecules \cite{Rust06} led to a qualitatively major advance in super-resolution, reaching ten- to hundred-fold improvement when compared to the classic Rayleigh-Abbe resolution limits. But these methods are limited to the biological domain where photoactivations and observations of only a subset of well separated fluorescent molecules are enabled, which only requires localization microscopy for each such sub-image, entailing a photon budget that follows an inverse quadratic dependence on the sought localization precision \cite{Thompson02,Ober04}. The final superresolved image only emerges when a large number of such source-localization-based subimages are carefully registered with respect to (w.r.t.) a fixed high-resolution grid and then superposed.
The use of coherent detection techniques \cite{Roberts16,Yang16} has promised to enable qualitatively superior super-resolution of closely spaced point sources via quantum-correlated, optical centroid measuring states \cite{Tsang09, Schwartz13, Unternahrer18} and wavefront projections \cite{Tsang16, Nair16, Paur16, Rehacek17,Tham17,Chrostowski17,Zhou18,Tsang18,Tsang20}. These latter papers, led principally by the work of Tsang and collaborators \cite{Tsang16}, have provided the most fundamental, quantum mechanical estimation-theoretic limits of superresolution possible by {\it any} method and their realization for point-source imaging in domains including and, notably, beyond microscopy. In the photon counting limit, the variance for estimating the separation between a closely spaced, symmetrical pair of point sources using wavefront projections can, in principle, approach this quantum limit, with the corresponding photon cost scaling according to an inverse-square law w.r.t.~separation, rather than the inverse quartic law for intensity-based images \cite{Ram06,Prasad14}.
Three recent papers by the present author \cite{YuPrasad18, PrasadYu19, Prasad20} have derived quantum estimation-theoretic limits on full three-dimensional (3D) localization and separation of a pair of incoherent point sources when the sources emit at a single wavelength. Coherent wavefront projections have been proposed and demonstrated as a way of realizing the lowest possible, quantum-mechanical bound on the variance of an unbiased estimation of the pair separation, as determined by the inverse of the quantum Fisher information (QFI) \cite{Helstrom76,Braunstein94,Paris09}.
The projective wavefront coding approach to spatial point-source-pair OSR can be readily generalized to the time-frequency domain as well, as shown in Ref.~\cite{Donohue18} for a pair of Gaussian pulse forms with slightly different center frequencies for their spectra using Hermite-Gaussian time-frequency modes. But the calculation and possible realization of the quantum bounds on the spatial OSR problem when the source emission has a finite optical bandwidth, a problem that combines experimentally relevant spatial and temporal characteristics in a single setting, has not been treated before. The fundamental quantum bound on the variance of estimation of both the location and separation of the source pair by an imager is expected to degrade with increasing bandwidth of incoherent emission, since the imager's PSF, being optical-frequency dependent, broadens.
In this paper, we calculate the quantum estimation-theoretic fidelity for two problems of interest for finite-bandwidth emission in two dimensions: the transverse localization of a single point source w.r.t.~the optical axis and the separation of a pair of equally bright point sources that are symmetrically located w.r.t.~the optical axis. Assuming uniform incoherent emission over a finite bandwidth, with no emission outside it, we utilize the basis of one-dimensional (1D) prolate spheroidal wave functions (PSWFs) to calculate QFI for these two problems when the imaging pupil is a clear circular disk with perfect transmission, for which the PSF is of the Airy form \cite{Goodman96}. Since, as previously noted \cite{Paur16,YuPrasad18}, in the photon counting limit the symmetrical pair OSR problem with a fixed midpoint of the pair separation vector and the single-source localization problem entail the same minimum estimation error, we expect to obtain similar results for the two problems.
The use of PSWFs largely eliminates errors that would accrue from a direct numerical integration of the relevant integral eigenvalue equation based on a singular kernel function \cite{Bertero98}, while yielding important insights into the notion of an effective dimensionality \cite{Landau62,diFrancia69} of the continuous-state problem. The PSWF based approach, as we show in Ref.~\cite{Prasad20c}, also furnishes an excellent method for computing the quantum limits on superresolution imaging of spatially extended 1D and two dimensional (2D) sources.
\section{Quantum Bound on Source Localization with Single Photons}
Let a point source, which is located at position $\br$ in the plane of best focus, emit a photon into a uniformly mixed state of finite bandwidth $B\omega_0$ centered at frequency $\omega_0$ and let the photon be subsequently captured by an imaging system with aperture function $P(\bu)$. The state of such a photon may be described by the following single-photon density operator (SPDO):
\be
\label{rho}
\hrho = {1\over B}\int_\cB df\, |K_f\ra\la K_f|,
\ee
in which $f=(\omega-\omega_0)/\omega_0$ is the normalized frequency detuning, obtained by dividing the difference of the actual frequency, $\omega$, from the center frequency, $\omega_0$, by the latter. Correspondingly, the fractional detuning range, $\cB$, denotes the symmetrical interval, $-B/2<f<B/2$. Typical values of $B$ are expected to be small compared to 1. The wavefunction for the photon emitted into the pure state, $|K_f\ra$, of normalized frequency detuning $f$ and then captured by the imaging system has the following form in the system's exit pupil \cite{YuPrasad18}:
\be
\label{wavefunction}
\la \bu| K_f\ra = {1\over\sqrt{\pi}}P(\bu)\, \exp[-i2\pi (1+f)\bl\cdot\bu],
\ee
where the pupil position vector $\bu$ is the true position vector normalized by dividing the latter by the characteristic spatial scale $R$ of the exit pupil. For a circular aperture, we will take $R$ to be the radius of the exit pupil. The symbol $\bl$ denotes the normalized transverse location vector of the point source, $\bl=\br/\delta$, obtained by dividing its physical position vector $\br$ by the characteristic Airy diffraction parameter, $\delta\defeq \lambda_0 z_I/R$, corresponding to the center optical wavelength, $\lambda_0=2\pi c/\omega_0$, and the distance $z_I$ of the image plane from the exit pupil. The parameter $\delta$ sets the Rayleigh resolution scale. In this section, we consider the minimum quantum limited variance of estimation of the distance, $l=|\bl|$, of the source from a known origin using a circular imaging pupil, assuming that the angle $\phi$ that $\bl$ makes with the $x$ axis may be estimated accurately in advance.
The matter of how well we can localize a point source in the photon-counting limit can be treated by calculating the single-photon QFI w.r.t.~the source distance, $l$, from a fixed origin and then simply scaled up by multiplying the result with the observed number of photons. Such scaling is well justified for most thermal and other incoherent sources in nature because of their low mean photon emission number per coherence interval \cite{Goodman15}, $\delta_c <<1$, which thus may be regarded as emitting photons independently. The quantum state of $N$ independently emitted photons may be described by a tensor product of the density operators for the individual photons, so the overall QFI when $N$ independent photons are observed is simply $N$ times \cite{Tsang16,Liu20} that for a single photon. The same scaling holds for the classical Fisher information as well.
\subsection{General Expression for QFI}
The QFI per photon w.r.t.~a set of parameters, $\{\theta_1,\cdots,\theta_P\}$, on which SPDO has a differentiable dependence, is defined as the matrix \cite{Helstrom76} with elements that are the real part, denoted by the symbol, Re, of the following trace, denoted by the symbol, Tr:
\be
\label{QFIdef}
H_{\mu\nu} = \Re\, \Tr \left(\hrho\hat L_\mu\hat L_\nu\right),
\ee
where $\hat L_\mu$ is the symmetric logarithmic derivative of SPDO, $\hrho$, defined in terms of the partial derivative $\pmu\hrho$ w.r.t.~parameter $\theta_\mu$ by the relation,
\be
\label{SLD}
\partial_\mu \hrho = {1\over 2}\left(\hat L_\mu\hrho+\hrho\hat L_\mu\right).
\ee
By evaluating the trace in Eq.~(\ref{QFIdef}) in the basis of orthonormal eigenstates, $\{\lambda_i,\ |\lambda_i\ra\,|\,i=1,2,\ldots\}$ and calculating the partial trace over the null space of SPDO in terms of the partial trace over its range space, we may express $H_{\mu\nu}$ as \cite{YuPrasad18},
\begin{align}
\label{Hmn1}
H_{\mu\nu}=&4\sum_{i\in \cR}{1\over \lambda_i}\Re \langle \lambda_i|\partial_\mu \hat\rho\,\partial_\nu \hat\rho|\lambda_i\rangle+2\sum_{i\in \cR}\sum_{j\in \cR}\Bigg[{1\over {(\lambda_i+\lambda_j)}}\nn
&-{1\over \lambda_i}-{1\over\lambda_j}\Bigg]\Re\langle \lambda_i|\partial_\mu \hat\rho|\lambda_j\rangle\langle \lambda_j|\partial_\nu \hat\rho|\lambda_i\rangle,
\end{align}
where $\cR$ denotes the space of values of the index of the eigenstates of SPDO associated with non-zero eigenvalues and the symbol $\partial_\mu$ denotes first-order partial derivative with respect to the parameter $\theta_\mu$.
For the present problem of estimating a single parameter, $l$, we may drop the parameter labels as well as the operator $\Re$ everywhere. By incorporating the $i=j$ terms from the double sum in Eq.~(\ref{Hmn1}) into the first sum, we arrive at the following expression for QFI:
\ba
\label{Hmn3}
H=&\sum_{i\in \cR}{1\over \lambda_i}\left[4\langle \lambda_i|(\partial \hat\rho)^2|\lambda_i\rangle-3\la\lambda_i|\partial\hrho|\lambda_i\ra^2\right]\nn
+&2\sum_{i\neq j\in \cR}\left[{1\over (\lambda_i+\lambda_j)}-{1\over \lambda_i}-{1\over\lambda_j}\right]|\langle \lambda_i|\partial\hat\rho|\lambda_j\rangle|^2.
\end{align}
As we see from Eq.~(\ref{Hmn3}), evaluating QFI requires accurately computing the eigenstates and eigenvalues of SPDO given by Eq.~(\ref{rho}).
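Once the eigenvalues and the required matrix elements are available numerically, Eq.~(\ref{Hmn3}) reduces to a short double sum; a Python sketch (array names illustrative):
\begin{verbatim}
import numpy as np

def qfi_single_param(lams, drho, drho2_diag):
    """Evaluate Eq. (Hmn3) for a single parameter.

    lams       : (R,) nonzero eigenvalues of rho
    drho       : (R, R) matrix elements <lam_i| d_rho |lam_j>
    drho2_diag : (R,) diagonal elements <lam_i| (d_rho)^2 |lam_i>
    """
    d = np.real(np.diag(drho))          # diagonal elements are real
    H = np.sum((4.0*np.real(drho2_diag) - 3.0*d**2) / lams)
    R = len(lams)
    for i in range(R):
        for j in range(R):
            if i != j:
                H += 2.0*(1.0/(lams[i]+lams[j]) - 1.0/lams[i]
                          - 1.0/lams[j]) * np.abs(drho[i, j])**2
    return H
\end{verbatim}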
\subsection{Eigenstates and Eigenvalues of SPDO }
In view of expression (\ref{wavefunction}), the overlap function of two single-photon states at two different frequency detunings $f,f'$ is given by the following pupil-plane integral over the normalized position vector, $\bu=\brho/R$:
\ba
\label{overlap}
O(f-f') &\defeq \la K_f|K_{f'}\ra\nn
&= \int d^2 u |P(\bu)|^2\exp [i2\pi(f-f')\bl\cdot\bu].
\end{align}
For a circular clear pupil, for which $P(\bu)$ is simply $1/\sqrt{\pi}$ times the indicator function over the unit-radius pupil, the above integral may be evaluated in terms of Bessel function $J_1$ as \cite{Goodman96}
\be
\label{overlap1}
O(f-f') = {J_1(2\pi|f-f'|l)\over \pi|f-f'|l},
\ee
which reduces to 1 when $f\to f'$, as required by normalization of the single-photon states. The set of states, $\{|K_f\ra\}$, is clearly non-orthogonal.
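Numerically, the removable singularity of (\ref{overlap1}) at zero argument is best handled explicitly, e.g., as in the following sketch:
\begin{verbatim}
import numpy as np
from scipy.special import j1

def overlap_O(df, l):
    """O(df) = J_1(2*pi*|df|*l) / (pi*|df|*l), with the limit O(0) = 1."""
    x = np.pi * np.abs(np.asarray(df, dtype=float)) * l
    return np.where(x < 1e-12, 1.0, j1(2.0*x) / np.maximum(x, 1e-300))
\end{verbatim}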
Let $|\lambda\ra$ be an eigenstate of $\hrho$ of non-zero eigenvalue $\lambda$. Since $\hrho$ is supported over the subspace $\cH_B$ spanned by the basis $\{|K_f\ra,\ f\in \cB\}$, all its eigenstates with non-zero eigenvalues must also be fully contained in $\cH_B$. Consider therefore an expansion of $|\lambda\ra$ in this basis of form,
\be
\label{expansion}
|\lambda\ra = {1\over B} \int_\cB df' \, \dl(f') |K_{f'}\ra.
\ee
On substituting expressions (\ref{rho}) and (\ref{expansion}) for $\hrho$ and $|\lambda\ra$ into the eigenstate relation,
\be
\label{eigenrelation}
\hrho|\lambda \ra =\lambda |\lambda\ra,
\ee
and then equating the coefficients of each $|K_f\ra$ term on the two sides of the resulting equation, which is permitted due to the linear independence of these monochromatic single-photon states, we obtain the following integral equation for the coefficient function $\dl(f)$:
\be
\label{eigenrelation1}
{1\over B} \int_\cB O(f-f')\, \dl (f')\, df' = \lambda \dl(f).
\ee
By defining the Fourier transform of $\dl(f)$ as
\be
\label{FTcoeff}
\Dl (x)=\int_{-B/2}^{B/2} \dl(f)\exp(i2\pi f lx)\, df,\ \ x\in \cR,
\ee
we may transform Eq.~(\ref{eigenrelation1}) to the Fourier domain, as we show in Appendix A, re-expressing it as
\ba
\label{FTcoeff4}
\int_{-1}^1 \sqrt{1-x^{'2}}\,\sinc Bl(x-x')\,\Dl(x')\,dx'&={\pi \lambda\over 2} \Dl(x),\nn
&\quad x\in \cR.
\end{align}
Note that without the square root inside the integrand Eq.~(\ref{FTcoeff4}) would be identical to the integral equation obeyed by the prolate spheroidal wave functions (PSWFs) first introduced by Slepian and Pollak \cite{Slepian61}.
Let us expand $\Dl(x)$ in the complete orthogonal PSWF basis over the interval $(-1,1)$,
\be
\label{PSWFexpansion}
\Dl(x) = \sum_n d_n^{(\lambda)}\Psi_n(x;C),
\ee
where $C\defeq\pi Bl$ is the space-bandwidth parameter (SBP) of the associated PSWF problem. Substituting expansion (\ref{PSWFexpansion}) into Eq.~(\ref{FTcoeff4}), we can convert the original SPDO eigenvalue problem into a matrix eigenvalue problem of form,
\be
\label{FTcoeffM1}
\bM \ud^{(\lambda)}=\lambda \ud^{(\lambda)},
\ee
in which $\ud^{(\lambda)}$ denotes the column vector of coefficients,
\be
\label{columnvector}
\ud^{(\lambda)} = (d_0,d_1,\ldots)^T,
\ee
with the superscript $T$ on a matrix denoting its simple transpose and the elements of the matrix $\bM$ are defined as the integral,
\be
\label{M}
M_{mn} = {2\over C}\int_{-1}^1 \sqrt{1-x^2}\, \Psi_m(x)\, \Psi_n(x)\, dx.
\ee
We relegate the details of this evaluation to Appendix A.
The PSWFs alternate in parity,
\be
\label{PSWFparity}
\Psi_n(-x;C)=(-1)^n\Psi_n(x;C),
\ee
and their associated eigenvalues $\lambda_n(C)$ are all positive and arranged in descending order, and obey the sum rule,
\be
\label{PSWFsum}
\sum_{n=0}^\infty \lambda_n(C) = {2C\over \pi},
\ee
with approximately $S\defeq\lceil 2C/\pi\rceil $ of these eigenvalues being close to $\min(2C/\pi,1)$ and the rest decaying rapidly toward 0 with increasing index value. Here $\lceil x\rceil$ denotes the smallest integer greater than or equal to $x$. The number $S$ is called the Shannon number, which was first introduced and discussed in the context of imaging by Toraldo di Francia \cite{diFrancia55, diFrancia69} as a measure of the effective number of degrees of freedom when a finite object is imaged with a finite-aperture imager.
Since the PSWF $\Psi_n$ is either even or odd under inversion according to whether the index $n$ is even or odd, it follows that $M_{mn}$ is non-zero only if $m$ and $n$ are either both even or both odd. It then follows from Eq.~(\ref{FTcoeffM1}) that the set of coefficients $\{d_n^{(\lambda)}|n=0,1,\ldots\}$ separates into two subsets of coefficients, namely $\cD_e=\{d_n^{(\lambda)}|n=0,2,\ldots\}$ and $\cD_o=\{ d_n^{(\lambda)}| n=1,3,\ldots\}, $ that are only coupled within each subset. Correspondingly, in view of expansion (\ref{PSWFexpansion}) and parity-alternation property (\ref{PSWFparity}), the associated eigenfunctions $\Dl(x)$ are either even or odd under inversion, a fact that also follows directly from the form of the kernel of the integral equation (\ref{FTcoeff4}). For the two sets of even-order and odd-order coefficients, the matrix eigenvalue equation (\ref{FTcoeffM1}) may be solved quite efficiently for its eigenvalues and eigenvectors by truncating the size of the matrix at some finite but sufficiently high value $N$, {\it i.e.}, $0\leq m,n\leq N-1$. We evaluated integral (\ref{M}) by approximating the integral by a discretized Riemann sum and then using the Matlab routine {\it dpss} \cite{Percival98} that computes discrete sequences of the PSWFs for different values of SBP and sequence length on the interval $(-1,1)$.
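A possible realization of this computation with SciPy's discrete prolate spheroidal sequences (the analogue of Matlab's {\it dpss}) is sketched below; the time half-bandwidth product is taken as $NW\approx C/\pi$, which makes the discrete sequences approximate $\Psi_n(x;C)$ on a uniform grid over $(-1,1)$, and the grid size and truncation order are illustrative:
\begin{verbatim}
import numpy as np
from scipy.signal.windows import dpss

def spdo_eigs(C, n_modes=15, n_grid=4001):
    """Approximate the eigenvalues of Eq. (FTcoeffM1) for SBP C."""
    x = np.linspace(-1.0, 1.0, n_grid)
    dx = x[1] - x[0]
    # Discrete PSWFs; renormalize to unit L2 norm on (-1, 1)
    psi = dpss(n_grid, C / np.pi, Kmax=n_modes) / np.sqrt(dx)
    w = np.sqrt(1.0 - x**2)
    # M_mn = (2/C) * integral of sqrt(1-x^2) psi_m psi_n dx, eq. (M)
    M = (2.0 / C) * (psi * w) @ psi.T * dx
    lams = np.linalg.eigvalsh(0.5 * (M + M.T))
    return np.sort(lams)[::-1]

# Sanity check: the eigenvalues should be nonnegative and sum to ~1,
# per eq. (eig_sumrule), e.g. for B = 0.1 and l = 0.5:
# print(spdo_eigs(np.pi * 0.1 * 0.5).sum())
\end{verbatim}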
Due to the closeness of Eq.~(\ref{FTcoeff4}) to the integral equation obeyed by the PSWF, we expect there to be only a number of order $S$ of significantly large non-negative eigenvalues, $\lambda_p$, with the largest one being of order 1 and the successively smaller eigenvalues decreasing rapidly by many orders from one to the next. In other words, the nominal rank and the dimension of the range space of SPDO $\hrho$ are both expected to be of order $S$. This observation renders the problem numerically highly efficient, particularly when $C\sim 1$, for which the truncation index value, $N$, need not be greater than 10-20. These properties and the sum rule,
\be
\label{eig_sumrule}
\sum_{p=0}^\infty \lambda_p= 1,
\ee
obeyed by the eigenvalues of $\hrho$, since ${\rm Tr}\,\hrho = 1$, were verified numerically.
\subsection{Evaluation of QFI for 2D Source Localization}
By differentiaing expression (\ref{rho}) w.r.t.~$l$, we obtain
\be
\label{drho}
\partial\hrho={1\over B}\int df\, [\partial|K_f\ra\la K_f|+|K_f\ra\partial\la K_f|],
\ee
which, upon squaring and noting relation (\ref{overlap}), further yields
\ba
\label{drho2}
(\partial\hrho)^2=&{1\over B^2}\int df\int df'\, [\partial|K_f\ra\la K_f|\partial|K_{f'}\ra\la K_{f'}|\nn
+\partial&|K_f\ra O(f-f')\, \partial\la K_{f'}|+|K_f\ra\partial\la K_f|\partial|K_{f'}\ra\la K_{f'}|\nn
+&|K_f\ra\partial\la K_f|K_{f'}\ra\partial\la K_{f'}|].
\end{align}
For notational brevity, we henceforth use the convention that $\partial$ only operates on the quantity immediately following it and have dropped explicit reference to the range, $(-B/2,B/2)$, of the frequency integrals.
Next, taking the scalar product of the state vector $|K_{f'}\ra$ with expression (\ref{expansion}) for the eigenstate $|\lambda\ra$ and subsequently using the integral equation (\ref{eigenrelation1}) that the coefficients $d_\lambda (f)$ satisfies, we may show readily that
\be
\label{Kf_lambda_matrixelement}
\la K_{f'}|\lambda_i\ra= \lambda_i d_i(f').
\ee
Use of expression (\ref{wavefunction}) for the wave function permits evaluation of the matrix element $\la K_{f'}|\partial|K_f\ra$ for a clear circular pupil for which $P(\bu)$ is simply $1/\sqrt{\pi}$ times its indicator function as
\ba
\label{KpK}
\la K_{f'}|\partial|K_f\ra=&-2i(1+f)\int_{u<1}\!\! \!\!d^2 u\, u\cos\Phi_u\nn
&\qquad\times \exp[-i2\pi (f-f')ul\cos\Phi_u]\nn
=&-4\pi (1+f)\int_0^1 du\, u^2 J_1\big(2\pi (f-f')ul\big) \nn
=&-{2(1+f)\over (f-f')l} J_2\big(2\pi (f-f')l\big)\nn
=&(1+f)P(f-f'),\quad P(x)\defeq -2 {J_2(2\pi xl)\over xl}
\end{align}
in which $\Phi_u=\phi_u-\phi$ and we made successive use of the following identities for integrating first over the azimuthal angle, $\phi_u$, and then over the radial variable, $u$, of the pupil plane:
\ba
\label{BesselIdentities}
\oint d\Phi\cos n\Phi \exp[\pm iz\cos(\Phi-\psi)] = &(\pm i)^n 2\pi \cos n\psi J_n(z);\nn
z^n J_{n-1} (z)=&{d\over dz}\left[z^n J_n(z)\right].
\end{align}
We can now evaluate the matrix element $\la\lambda_i|\partial\hrho|\lambda_j\ra$, with $\partial\hrho$ given by (\ref{drho}), by using relations (\ref{Kf_lambda_matrixelement}), (\ref{KpK}), and (\ref{expansion}) as,
\ba
\label{lambda_drho_lambda}
\la \lambda_i|\partial\hrho|\lambda_j\ra ={1\over B^2}&\int\int df \, df' (1+f)\big[\lambda_id_i(f')\, d_j(f) \nn
&+\lambda_j d_j(f')\, d_i(f)\big]\, P(f-f').
\end{align}
To evaluate the matrix elements $\la \lambda_i|(\partial\hrho)^2|\lambda_i\ra$, we first note from Eq.~(\ref{drho2}) that we need one more matrix element involving single-frequency emission states, namely $\partial \la K_f|\partial |K_{f'}\ra$, which we may evaluate as
\ba
\label{pKpK}
\partial \la K_f|\partial &|K_{f'}\ra =4\pi (1+f)(1+f')\nn
\times&\int_{u<1}d^2u\, u^2\cos^2\Phi_u \exp[-i2\pi(f-f')ul\cos\Phi_u]\nn
= (2\pi)^2&(1+f)(1+f')\int_0^1du\, u^3\Big[J_0\big(2\pi (f-f')lu\big)\nn
&\qquad\qquad+i^2J_2\big(2\pi (f-f')lu\big)\Big],
\end{align}
in which we used the identity, $2\cos^2\Phi_u=(1+\cos 2\Phi_u)$, and then used the first of the identities (\ref{BesselIdentities}) twice to reach the final equality. The indefinite integral of the first term in the integrand is known to be \cite{besint19}
\be
\label{BesselIdentity3}
\int dz\, z^3 J_0(z) = 2z^2 J_0(z) +z(z^2-4) J_1(z),
\ee
while the second term in the integrand may be evaluated immediately using the second of the identities (\ref{BesselIdentities}) for $n=3$. We obtain in this way the result,
\be
\label{pKpK2}
\partial \la K_f|\partial |K_{f'}\ra =(1+f)(1+f')\, Q(f-f'),
\ee
where the function $Q$ is defined by the relation
\ba
\label{Q}
Q(x)=&\Bigg[{2\over x^2l^2}\left(J_0\big(2\pi xl\big) -2{J_1\big(2\pi xl\big)\over 2\pi xl}\right)\nn
&+{2\pi\over xl}\left(J_1\big(2\pi xl\big)-J_3\big(2\pi xl\big)\right)\Bigg].
\end{align}
In terms of the functions $O,\ P,$ and $Q$, we may express Eq.~(\ref{drho2}) as
\ba
\label{drho2a}
(\partial\hrho)^2=&{1\over B^2}\int df\int df'\, \big[\partial|K_f\ra (1+f')P(f'-f)\la K_{f'}|\nn
&+\partial|K_f\ra O(f-f')\, \partial\la K_{f'}|\nn
&+(1+f)(1+f')|K_f\ra Q(f-f')\la K_{f'}|\nn
&+(1+f)|K_f\ra P(f-f')\partial\la K_{f'}|\big].
\end{align}
The matrix element $\la \lambda_i|(\partial\hrho)^2|\lambda_i\ra$ now follows from a repeated use of identity (\ref{Kf_lambda_matrixelement}) and expansion (\ref{expansion}), the latter yielding the relation,
\ba
\label{lambda_dKf}
\partial\la K_f|\lambda_i\ra^*=&\la \lambda_i|\partial|K_f\ra ={1\over B}\int df^{''}d_i(f^{''}) \la K_f^{''}|\partial|K_f\ra\nn
=&{1\over B}(1+f)\int df^{''} P(f-f^{''})d_i(f^{''}),
\end{align}
which can be evaluated efficiently by discretizing the integral as a Riemann sum that can be expressed as a matrix-vector product.
In Fig.~\ref{2DLocQFI_vs_B}, we display numerically evaluated QFI for estimating the source location for a number of different values of its distance $l$ away from the {\it a priori} well determined axial point in the plane of Gaussian focus. The source distance, $l$, expressed in image-plane units of the Airy diffraction width parameter, $\lambda_0 z_I/R$, is allowed to vary in the sub-diffractive regime from 0.2 to 1.0. As expected and seen from the figure, QFI decreases from the maximum theoretical zero-bandwidth value \cite{YuPrasad18} of $4 \pi^2$ as the fractional bandwidth, $B$, increases but this decrease is rather gradual. Even for $l=1$, the maximum reduction of QFI at 20\% fractional bandwidth is no larger than about 10\%. The drop in QFI as $B$ varies between 0.02 and 0.04 for $l=0.2$ is presumably a numerical artifact, as we expect the localization QFI in this range to be quite close to the maximum value. Since the minimum variance of unbiased estimation is the reciprocal of QFI \cite{Helstrom76}, the minimum quantum-limited error for estimating $l$ correspondingly increases with increasing $B$.
To see heuristically how bandwidth increase causes source-localization QFI to decrease, note that for monochromatic imaging ($B=0$) QFI is given by the expression \cite{YuPrasad18},
\be
\label{QFImono}
H=4\left[(\partial_l \la K_f|)\partial_l|K_f\ra-|\la K_f|\partial_l|K_f\ra|^2\right],
\ee
which reduces, for an inversion symmetric aperture like the clear circular aperture, to four times the aperture average of the squared gradient w.r.t.~$l$ of the wavefront phase $\Psi$ in the aperture plane,
\be
\label{MeanSqPhaseGrad}
H ={4\over \pi}\int d^2u P(\bu) (\partial_l \Psi)^2.
\ee
Since $\Psi=2\pi\,l\,u\,(1+f)\cos\Phi$ according to Eq.~(\ref{wavefunction}), Eq.~(\ref{MeanSqPhaseGrad}) evaluates for the clear circular aperture and monochromatic imaging ($f=0$) to the value $4\pi^2$. When the wavefunction $\la\bu|K_f\ra$, given by Eq.~(\ref{wavefunction}), is distributed over a finite bandwidth, the overall phase of any superposition of such wavefunctions gets scrambled, the more so, the larger the bandwidth, which increasingly reduces the mean-squared phase gradient over the aperture and thus $H$ with increasing bandwidth. This heuristic picture is supported by our quantitatively rigorous calculation of QFI based on the SPDO eigenvalues and eigenfunctions computed using PSWFs and displayed in Fig.~\ref{2DLocQFI_vs_B}.
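As a quick numerical check of this heuristic, a Monte Carlo average of $4(\partial_l\Psi)^2$ over the unit disk for $f=0$ indeed reproduces $4\pi^2$:
\begin{verbatim}
import numpy as np

# Monte Carlo estimate of H = 4 <(dPsi/dl)^2> over the unit disk,
# with Psi = 2*pi*l*u*(1+f)*cos(Phi) and f = 0.
rng = np.random.default_rng(0)
u = np.sqrt(rng.uniform(size=10**6))    # uniform sampling of the unit disk
phi = rng.uniform(0.0, 2.0*np.pi, size=10**6)
H = np.mean(4.0 * (2.0*np.pi*u*np.cos(phi))**2)
print(H, 4.0*np.pi**2)                  # both are ~ 39.48
\end{verbatim}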
\begin{figure}[htb]
\centerline{\includegraphics[width=0.9\columnwidth]{2DLocQFI_vs_B_bw.eps}}
\vspace{-0.2cm}
\caption{Plot of QFI for estimating the distance, $l$, of a point source in the plane of Gaussian focus from the point of intersection of that plane with the optical axis vs. the fractional bandwidth, $B$, for different values of source distance $l$.}
\label{2DLocQFI_vs_B}
\end{figure}
\section{Quantum Bound on 2D Source-Pair Superresolution with Single Photons}
We will now evaluate QFI (\ref{Hmn3}) for estimating the separation of a symmetrical pair of closely spaced sources in a plane transverse to the optical axis of a circular-aperture imager. This calculation is closely related to the single-source localization QFI that we have considered so far.
Consider a pair of equally bright incoherent point sources that are located at positions $\pm \bl$ with respect to the mid-point of their separation vector, which we choose to be fixed {\it a priori} at the origin. The SPDO for light emitted by the pair and transmitted through the imager to its pupil may be written as the integral
\be
\label{rho2}
\hrho={1\over 2B}\int_\cB\left[|K_{+f}\ra\la K_{+f}|+|K_{-f}\ra\la K_{-f}|\right]\, df
\ee
in which, as for the localization problem, we take the detuning power spectrum of the imaging photon to be a top-hat function of fractional bandwidth $B$. The state $|K_{\pm f}\ra$ is the pure monochromatic-photon state vector of fractional frequency $f$ emitted by the source located at $\pm \bl$, with its pupil-plane wave function of form,
\be
\label{wavefunction2}
\la \bu| K_{\pm f}\ra = {1\over\sqrt{\pi}}P(\bu)\, \exp[\mp i2\pi (1+f)\bl\cdot\bu].
\ee
Because of the unit normalization of each of these states, $\la K_{\pm f}|K_{\pm f}\ra=1,$ expression (\ref{rho2}) has unit trace, $\trace(\hrho)=1$, as required. Also, the various pure-state overlap functions, for the case of a circular clear imaging aperture we are considering here, are real and equal in pairs,
\ba
\label{overlap2D}
\la K_{\pm f}|K_{\pm f'}\ra &= O(f-f');\nn
\la K_{\pm f}|K_{\mp f'}\ra = &O(2+f+f'),
\end{align}
in terms of the function $O$ defined by relation (\ref{overlap1}). We now calculate the eigenvalues and eigenstates of SPDO (\ref{rho2}) in terms of which QFI (\ref{Hmn3}) is defined.
\subsection{Eigenvalues and Eigenstates of SPDO (\ref{rho2})}
Let an eigenstate of SPDO (\ref{rho2}) obeying the relation,
\be
\label{eigen2}
\hrho |\lambda\ra = \lambda|\lambda\ra,
\ee
have the expansion
\be
\label{expansion2}
|\lambda\ra = {1\over B}\int_\cB \left[d_+(f)|K_{+f}\ra+d_-(f)|K_{-f}\ra\right] df.
\ee
Substitution of expression (\ref{rho2}) and expansion (\ref{expansion2}) into eigenrelation (\ref{eigen2}) and use of relations (\ref{overlap2D}) for the various state overlap functions, followed by equating coefficients of the different monochromatic source states on the two sides of the resulting equation, yield the following pair of coupled integral equations for the coefficient functions, $d_\pm(f)$:
\ba
\label{IntEq2}
{1\over 2B}\int_\cB df'\, \big[&d_+(f')O(f-f')\nn
&+d_-(f') O(2+f+f')\big] = \lambda d_+(f);\nn
{1\over 2B}\int_\cB df'\, \big[&d_+(f')O(2+f+f')\nn
&\qquad +d_-(f') O(f-f')\big] = \lambda d_-(f).
\end{align}
The two coupled equations in Eq.~(\ref{IntEq2}) may be decoupled by either adding them or subtracting one from the other as
\ba
\label{IntEq2Decoupled}
{1\over 2B}\int_\cB df'\, \left[O(f-f')+ O(2+f+f')\right]S_+(f') = &\lambda S_+(f);\nn
{1\over 2B}\int_\cB df'\, \left[O(f-f')-O(2+f+f')\right]S_-(f') = &\lambda S_-(f),
\end{align}
where $S_+$ and $S_-$ are the sum and difference functions,
\be
\label{SA}
S_+(f)=d_+(f)+d_-(f);\ \ S_-(f)=d_+(f)-d_-(f).
\ee
The two uncoupled equations (\ref{IntEq2Decoupled}) can be satisfied simultaneously by choosing either $S_+\neq 0,\ S_-=0$ or $S_+=0,\ S_-\neq 0$, corresponding, per Eq.~(\ref{SA}), to the choices $d_+(f)=\pm d_-(f)$. The nontrivial equation in each case may then be solved independently by using the same approach as for the 2D localization problem. Since the kernel functions, $[O(f-f')\pm O(2+f+f')]$, are not invariant under inversion, $f\to -f, \ f'\to -f'$, both even and odd PSWFs will be present in each such expansion, however.
We first transform the problem to the Fourier domain,
\be
\label{FT2}
\tilde S_\pm(x) = \int_{-B/2}^{B/2} df \, \exp(i2\pi lxf)\, S_\pm(f),
\ee
and use the same $\delta$-function trick we used in going from Eq.~(\ref{AFTcoeff1}) to (\ref{AFTcoeff3}). Using the Fourier shift theorem, which implies that the FT of the function $O(2+f)$ is simply $\exp(i4\pi lx)$ times the FT of the unshifted function, $O(f)$, we see that Eqs.~(\ref{IntEq2Decoupled}) transform to a pair of more convenient equations, which we can write more compactly as a single equation whose lower and upper signs correspond to the two separate equations,
\ba
\label{IntEq2FTscaled}
\int_{-1}^1& dx'\sqrt{1-x^{'2}}[\sinc Bl(x-x') \pm \exp(4\pi i lx')\nn
&\times\sinc Bl(x+x')] \tS_\pm(x')=\pi\lambda\tS_\pm(x),\ \ x\in \cR.
\end{align}
We may now substitute the spectral expansion (\ref{Aspectral_expansion}) of the sinc function and the expansion of the eigenfunctions $\tS_\pm (x)$ in terms of the PSWFs, namely
\be
\label{eigen2_expansion}
\tS_\pm(x)=\sum_{n=0}^\infty s^{(\pm)}_n \Psi_n(x;C),
\ee
into Eqs.~(\ref{IntEq2FTscaled}), then use the second of the orthogonality relations (\ref{APSWFnorm}), and finally equate the coefficients of the individual PSWFs on both sides to convert those two integral equations into the following pair of matrix equations:
\be
\label{eigen2_matrix_eq}
\sum_{n=0}^\infty \left[F_{mn}\pm (-1)^m G_{mn}\right] s^{(\pm)}_n=\lambda s^{(\pm)}_m,
\ee
in which the matrix elements $F_{mn}$ and $G_{mn}$ are defined as the integrals,
\ba
\label{FGmn}
F_{mn} = &{1\over C}\int_{-1}^1 \!\!dx'\sqrt{1-x^{'2}}\Psi_m(x';C)\Psi_n(x';C);\nn
G_{mn} = &{1\over C}\int_{-1}^1 \!\!dx'\sqrt{1-x^{'2}}\exp(4\pi i lx')\,\Psi_m(x';C)\Psi_n(x';C).
\end{align}
To reach Eq.~(\ref{eigen2_matrix_eq}), we also used the parity-alternation property (\ref{PSWFparity}) of the PSWFs.
We now make use of the reality condition on the coefficient functions $d_\pm(f)$, or equivalently on their sum and difference functions, $S_\pm(f)$, in the frequency domain. This condition requires that in the Fourier domain ($x$), the functions $\tS_\pm(x)$ obey the condition,
\be
\label{reality2}
\tS_\pm^*(x)=\tS_\pm(-x),
\ee
which upon substitution of expansion (\ref{eigen2_expansion}) and use of parity property (\ref{PSWFparity}) yields the equivalent condition,
\be
\label{reality2coeff}
s^{(\pm )*}_n=(-1)^n s^{(\pm)}_n.
\ee
In other words, the coefficients $s^{(\pm)}_n$ are alternately either purely real or purely imaginary, as the index $n$ ranges over all non-negative integer values. As such, we may express them in terms of real coefficients $t^{(\pm)}_n$ by the relation,
\be
\label{real_coeff}
s^{(\pm)}_n=i^n t^{(\pm)}_n.
\ee
A substitution of this relation into the eigenrelation (\ref{eigen2_matrix_eq}) yields the equivalent eigenrelation,
\be
\label{eigen2_matrix_eq_real}
\sum_{n=0}^\infty \left(\tF_{mn}\pm \tG_{mn}\right) t^{(\pm)}_n=\lambda t^{(\pm)}_m,
\ee
in which the matrix elements $\tF_{mn}$ and $\tG_{mn}$ are defined by the relation
\be
\label{real_matrix}
\tF_{mn}=i^{n-m} F_{mn},\ \tG_{mn}=i^{n+m}G_{mn}.
\ee
In view of the alternating parity of the PSWFs with changing order, the parity-even property of $\sqrt{1-x^{'2}}$ and of the integration range, the definitions (\ref{FGmn}) of the matrix elements, and since $\exp(4\pi i lx')$ is the sum of a real parity-even and an imaginary parity-odd part, we can see that ${\bf F}$ and ${\bf G}$ are symmetric matrices, $F_{mn}=0$ when the index difference $m-n$ is odd, and $G_{mn}$ is purely real when $m+n$ is even and purely imaginary when $m+n$ is odd. It then follows that $\tF_{mn}$ and $\tG_{mn}$ defined by Eq.~(\ref{real_matrix}) are both real and symmetric. The eigenrelations (\ref{eigen2_matrix_eq_real}) are thus purely real equations involving symmetric system matrices, and are thus guaranteed to have real eigenvalues and orthogonal eigenvectors for non-degenerate eigenvalues.
We have numerically evaluated the eigenvalues and eigenvectors of the two matrices $(\tilde{\bf F}\pm\tilde{\bf G})$ by first calculating their matrix elements in terms of the discrete prolate spheroidal sequences discussed earlier, taking the latter to have a sufficiently large length and truncating the matrices at some high but finite order of the PSWFs to ensure good accuracy. It helps, as with the localization problem, to know that only the largest $\mathcal{O}\lceil 2 C/\pi\rceil$ eigenvalues are sufficiently different from 0 to contribute significantly to QFI. In fact, for $C<<1$, which is the case of interest here, we ensure more than sufficient accuracy by truncating the matrix at order no larger than $15\times 15$, for which the smallest reproducibly computable eigenvalue has already dropped to a value more than fifteen orders of magnitude smaller than the largest one.
The orthogonality condition for the eigenvectors, $\la\lambda|\lambda'\ra=\delta_{\lambda \lambda'}$, can be shown, analogously to that for the localization problem, to be the same as Eq.~(\ref{Acolumn_orthogonality}), which for the column vector of real coefficients $t^{(\lambda)}_n$ is also the same,
\be
\label{column_orthogonality2}
\ut^{(\lambda)\dagger}\ut^{(\lambda')}= {B\over l \lambda}\delta_{\lambda\lambda'},
\ee
where the superscript $(\lambda)$ labels the column vector corresponding to the eigenstate $|\lambda\ra$. Since the Hermitian transpose of a real column vector such as $\ut^{(\lambda)}$ amounts to its simple matrix transpose, we may renormalize each numerically obtained orthonormal eigenvector by an extra factor of $\sqrt{B/(l\lambda)}$.
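The following sketch mirrors the localization computation for the pair problem, building ${\bf F}$ and ${\bf G}$ by quadrature from discrete prolate spheroidal sequences and forming the real matrices $(\tilde{\bf F}\pm\tilde{\bf G})$ of Eq.~(\ref{eigen2_matrix_eq_real}); grid and truncation parameters are illustrative:
\begin{verbatim}
import numpy as np
from scipy.signal.windows import dpss

def pair_spdo_eigs(B, l, n_modes=15, n_grid=4001):
    """Eigenvalues of (F_tilde +/- G_tilde), Eq. (eigen2_matrix_eq_real)."""
    C = np.pi * B * l                     # space-bandwidth parameter
    x = np.linspace(-1.0, 1.0, n_grid)
    dx = x[1] - x[0]
    psi = dpss(n_grid, C / np.pi, Kmax=n_modes) / np.sqrt(dx)
    w = np.sqrt(1.0 - x**2)
    F = (1.0 / C) * (psi * w) @ psi.T * dx                         # eq. (FGmn)
    G = (1.0 / C) * (psi * (w * np.exp(4j*np.pi*l*x))) @ psi.T * dx
    n = np.arange(n_modes)
    Ft = (1j**(n[None, :] - n[:, None]) * F).real                  # eq. (real_matrix)
    Gt = (1j**(n[None, :] + n[:, None]) * G).real
    lam_p = np.linalg.eigvalsh(0.5*((Ft + Gt) + (Ft + Gt).T))
    lam_m = np.linalg.eigvalsh(0.5*((Ft - Gt) + (Ft - Gt).T))
    return np.sort(lam_p)[::-1], np.sort(lam_m)[::-1]
\end{verbatim}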
\subsection{QFI Calculation}
We use expression (\ref{Hmn3}) for the evaluation of QFI for the single parameter of interest, the semi-separation parameter $l$. Unlike the localization problem, expression (\ref{rho2}) for SPDO is now more involved as it entails emission from two sources, rather than one. However, since we can work in the symmetric and anti-symmetric invariant subspaces of SPDO, the two problems are rather analogous. In particular, we see that an eigenstate of SPDO in either of its $\pm$ range subspaces, which we denote as $\cH_B^{(\pm)}$, may be expressed as
\be
\label{eigen2_pm}
|\lambda^{(\pm)}\ra = {1\over B}\int d_+(f')\,\left(|K_{+f'}\ra\pm |K_{-f'}\ra\right) df',
\ee
with the notation $|\lambda^{(\pm)}\ra$ referring to an eigenstate belonging to the $\cH_B^{(\pm)}$ subspace. In view of this form, we can derive the relation,
\ba
\label{Kpf_lambda_pm}
\la K_{+f}|\lambda^{(\pm)}\ra =& {1\over B}\int d_+(f')\,\left[O(f-f')\pm O(2+f+f')\right] df'\nn
=&2\lambda^{(\pm)}d_+(f),
\end{align}
with the first relation following from a use of the overlap functions (\ref{overlap2D}) and the second from the eigenfunction relation (\ref{IntEq2}) in which we also used the fact that $d_-(f)=\pm d_+(f)$ in the two subspaces. We may similarly show that
\be
\label{Kmf_lambda_pm}
\la K_{-f}|\lambda^{(\pm)}\ra =2\lambda^{(\pm)} d_{-}(f)=\pm 2\lambda^{(\pm)}d_+(f).
\ee
The evaluation of the matrix elements $\la \lambda^{(\pm)}_i|\drho|\lambda^{(\pm)}_j\ra$ and $\la \lambda^{(\pm)}_i|(\drho)^2|\lambda^{(\pm)}_i\ra$ within each of the subspaces separately can now be carried out by differentiating expression (\ref{rho2}) with respect to $l$ first. The latter operation generates four terms, a pair of terms for each of the bilinear products, $|K_{+f}\ra\la K_{+f}|$ and $|K_{-f}\ra\la K_{-f}|$, inside the $f$ integral. Squaring $\drho$ then generates 16 terms inside a double frequency integral, for each of which terms one must evaluate the diagonal matrix element in an eigenstate $|\lambda^{(\pm)}_i\ra$. These calculations, although tedious, may be performed straightforwardly. Expressions (\ref{Kpf_lambda_pm}) and (\ref{Kmf_lambda_pm}) for the overlap functions greatly simplify these calculations, as we show in Appendix B, with the following results:
\ba
\label{MatElements2}
&\la\lambda_j^{(\pm)}|\drho|\lambda_i^{(\pm)}\ra\! =\! {2\over B^2}\!\iint\! df\, df' [P(f-f')\pm P(2+f+f')]\nn
&\times(1+f)\left[\lambda_i^{(\pm)}d_+^{(i)}(f)d_+^{(j)}(f')+\lambda_j^{(\pm)}d_+^{(j)}(f)d_+^{(i)}(f')\right];\nn
&\la\lambda_i^{(\pm)}|(\drho)^2|\lambda_i^{(\pm)}\ra={1\over 2B^2}\iint df\, df'\Big\{[O(f-f')\nn
&\qquad\pm O(2+f+f')]\la \lambda_i^{(\pm)}|\partial|K_{+f}\ra \la \lambda_i^{(\pm)}|\partial|K_{+f'}\ra \nn
&\qquad+4\lambda_i^{(\pm)}(1+f)[P(f-f')\pm P(2+f+f')]\nn
&\qquad\times d_+^{(i)}(f)\la\lambda_i^{(\pm)}|\partial|K_{+f'}\ra \nn
&\qquad+4\lambda_i^{(\pm)2}(1+f)(1+f')[Q(f-f')\nn
&\qquad\pm Q(2+f+f')]d_+^{(i)}(f)d_+^{(i)}(f')\Big\},
\end{align}
where the functions $P$ and $Q$ have been defined earlier by Eqs.~(\ref{KpK}) and (\ref{Q}).
The upper and lower signs in these expressions refer to the eigenstates drawn from the two subspaces, $\cH_B^{(\pm)}$, respectively. What we also show in Appendix B is that any matrix element of the form, $\la\lambda_j^{(\mp)}|\drho|\lambda_i^{(\pm)}\ra$, between two states belonging to different subspaces vanishes identically,
\be
\label{MatElements3}
\la\lambda_j^{(\pm)}|\drho|\lambda_i^{(\mp)}\ra =0.
\ee
This allows both sums in expression (\ref{Hmn3}) to be evaluated separately over eigenstates belonging to the two different subspaces before adding their contributions to compute the total QFI.
\begin{figure}[htb]
\centerline{\includegraphics[width=0.9\columnwidth]{2DOSR_QFI_vs_B_bw.eps}}
\vspace{-0.2cm}
\caption{Plot of QFI for estimating the semi-separation distance, $l$, of each point source from the pair centroid that has been fixed along the optical axis in the plane of Gaussian focus vs. fractional bandwidth, $B$, for different values of $l$.}
\label{2D_OSR_QFI_vs_B}
\end{figure}
\subsection{Numerical Results for QFI for 2D Pair OSR}
In Fig.~\ref{2D_OSR_QFI_vs_B}, we plot the value of QFI for estimating the separation of a symmetric source pair that is located in the transverse plane of Gaussian focus, with the origin in that plane fixed at the axial point that we take to be the pair's centroid. As the fractional bandwidth increases, QFI decreases much as it did for 2D localization of a single source that we treated in the previous section. However, even for 10\% fractional emission bandwidth and pair separations that are twice as large as the Airy parameter, QFI decreases to a value that is no more than 5\% below the maximum theoretical value of $4\pi^2$ for estimating the 2D pair separation distance for purely monochromatic emission. In other words, the maximum information that can be extracted about the pair separation remains rather robust with increasing emission bandwidth.
\section{Realization of QFI via Low-order Zernike Projections}
We have noted previously \cite{YuPrasad18, PrasadYu19} that low-order Zernike wavefront projections furnish an entirely classical measurement protocol that can realize pair-superresolution QFI in the extreme limit of vanishing pair separation. The Zernike modes, being real and orthogonal over the unit disk, might also meet the optimality criteria laid out by Rehacek {\it et al.} \cite{Rehacek17}, when extended to two dimensions with respect to a clear circular imaging aperture. We now show that the same protocol using the lowest four orders of Zernike polynomials, namely $Z_1, Z_2, Z_3,Z_4$ in Noll's notation \cite{Noll76}, works well even when the emission bandwidth of the sources is not particularly narrow and the source separation not too large. Since, due to the realness of the Zernike modes, the squared moduli of their normalized projections, which determine their probabilities, are the same for both the symmetric source-pair separation and single-source localization problems, identical results for Zernike-based classical FI (CFI) are obtained for both, provided the semi-separation distance in the former problem is identified, as we have already done, with the source distance in the latter.
The first four Zernikes are defined as the following functions of polar coordinates over the unit disk in the pupil plane:
\ba
\label{Z1234}
Z_1(\bu)=&{1\over \sqrt{\pi}};\ \
Z_2(\bu)={2\over\sqrt{\pi}}u\,\cos\phi_u;\nn
Z_3(\bu)=&{2\over\sqrt{\pi}}u\,\sin\phi_u;\ \
Z_4(\bu)=\sqrt{3\over \pi}(1-2u^2).
\end{align}
The choice of the specific coefficients for these functions ensures that they have unit norm over the unit disk, {\it i.e.,} $\la Z_n|Z_n\ra=1$. The probability of observing an imaging photon in the $n$th Zernike mode, $P_n=\la Z_n|\hrho|Z_n\ra$, is the same whether $\hrho$ is given by Eq.~(\ref{rho}) or Eq.~(\ref{rho2}) for the two different problems considered in this paper. In view of form (\ref{wavefunction}) for the wavefunction, $\la \bu|K_f\ra$, we may express $P_n$ as
\be
\label{ProbZ}
P_n\!=\!{1\over \pi B}\int_{-B/2}^{B/2}\!\!\!\!\!\! df \left\vert\int\!\! P(\bu)\exp[-i2\pi(1+f)\bl\cdot\bu]\, Z_n(\bu) d^2u\right\vert^2.
\ee
For the four Zernikes of interest here, we may calculate the corresponding probabilities as
\ba
\label{ProbZ1234}
P_1(l)=&{2\over B\pi l}\int_{x_-}^{x_+}dx{J_1^2(x)\over x^2};\nn
P_2(l)=&{8\over B\pi l}\cos^2\phi_l\int_{x_-}^{x_+}dx{J_2^2(x)\over x^2};\nn
P_3(l)=&{8\over B\pi l}\sin^2\phi_l\int_{x_-}^{x_+}dx{J_2^2(x)\over x^2};\nn
P_4(l)=&{96\over B\pi l}\int_{x_-}^{x_+}dx\Bigg[{J_0^2(x)\over x^4}+J_1^2(x)\Big({4\over x^6}-{1\over x^4}\nn &+{1\over 16 x^2}\Big)+J_0(x)J_1(x)\left({1\over2 x^3}-{4\over x^5}\right)\Bigg],
\end{align}
where $x_\pm$ are defined as
\be
\label{xpm}
x_\pm = 2\pi l\,(1\pm B/2).
\ee
We derived expressions (\ref{ProbZ1234}) by individually substituting the four Zernike polynomials (\ref{Z1234}) into Eq.~(\ref{ProbZ}), using the first of the Bessel identities in Eq.~(\ref{BesselIdentities}) to integrate over the angular coordinate $\phi_u$ in the unit disk, and then using the second of these identities and a third Bessel identity (\ref{BesselIdentity3}) to integrate over the radial coordinate $u$. The final step involved a simple scaling of the integration variable $f$ via the substitution $x=2\pi(1+f)l$.
All of the integrals in Eq.~(\ref{ProbZ1234}) may in fact be evaluated in closed form. The values of the corresponding indefinite integrals, listed in the tables of Bessel integrals in Ref.~\cite{besint19} on pages 244 and 263, were used to express the requisite probabilities, $P_n(l), \ n=1,\ldots,4,$ in closed form. Their derivatives, $dP_n/dl$, on the other hand, are more simply calculated by noting that expressions (\ref{ProbZ1234}) depend on $l$ only through its presence in the denominator of the overall coefficient and in the integration limits, which renders this calculation quite simple when we use the identity,
\ba
\label{integral_identity}
{d\over dl}\left[{1\over l}\int_{b(l)}^{a(l)} f(x)\, dx\right]&=-{1\over l^2}\int_{b(l)}^{a(l)} f(x)\, dx\nn
&+{1\over l}\left[f(a) {da\over dl}-f(b){db\over dl}\right].
\end{align}
Based on the {\em observed} mode-projection probabilities and their derivatives, we can now calculate the classical FI for estimating the distance $l$. Since an imaging photon has the probability $\bar P=1-\sum_{n=1}^N P_n$ of being found in the {\em unobserved} modes, we can write down the full CFI \cite{VT68} per photon for estimating $l$ from projective measurements in the $N$ Zernike modes as
\be
\label{CFI}
F_N(l)=\sum_{n=1}^N {1\over P_n}\left({dP_n\over dl}\right)^2+{1\over \bar P}\left({d\bar P\over dl}\right)^2.
\ee
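As an aid to the reader, the following minimal numerical sketch (not part of the original analysis; it assumes NumPy and SciPy are available) evaluates the combination $P_2+P_3$ from Eq.~(\ref{ProbZ1234}), whose $\phi_l$ dependence cancels in the Fisher sum, differentiates it via the identity (\ref{integral_identity}), and assembles the tip-tilt CFI of Eq.~(\ref{CFI}):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def tip_tilt_cfi(l, B):
    # CFI of Eq. (CFI) when only Z_2, Z_3 are observed (requires l, B > 0)
    f = lambda x: jv(2, x)**2 / x**2
    xm, xp = 2*np.pi*l*(1 - B/2), 2*np.pi*l*(1 + B/2)
    I, _ = quad(f, xm, xp)
    Q = 8*I/(B*np.pi*l)            # Q = P_2 + P_3; the phi_l factors drop out
    # dQ/dl: l enters only through 1/l and the integration limits x_-, x_+
    dQ = -Q/l + (16/(B*l))*((1 + B/2)*f(xp) - (1 - B/2)*f(xm))
    return dQ**2/Q + dQ**2/(1 - Q)  # observed modes plus the unobserved rest

print(tip_tilt_cfi(0.01, 0.10))  # small l: should be close to 4*pi^2 = 39.48
\end{verbatim}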
In Fig.~\ref{CFI_TipTilt}, we plot the numerically evaluated CFI for estimating $l$ when only projections into the tip and tilt modes, $Z_2,Z_3$, are observed and the remaining mode projections are not, for values of $l$ varying between 0 and 2 for five different values of the fractional bandwidth, $B$, namely 0, 0.05, 0.10, 0.15, and 0.20. As expected, the fidelity of estimation, represented by CFI, degrades with increasing bandwidth, since the diffraction-induced image, whose width in the image domain is proportional to the wavelength, gets fuzzier with an increasing range of emission wavelengths. Note that the shorter the distance $l$, the less impact the bandwidth increase has on the value of tip-tilt CFI, which approaches the quantum FI in the limit of $l\to 0$, regardless of the value of $B$, even with observations in the tip and tilt modes alone. This behavior was noted earlier in Refs.~\cite{YuPrasad18,PrasadYu19} as arising from the fact that these tip and tilt modes are perfect matched filters for the $x$ and $y$ coordinates, respectively, of vector $\bl$ in this limit. The oscillatory behavior of the CFI curves with increasing $l$, with alternating local maxima and minima, on the other hand, has to do with the fact that at certain values of $l$, $dP_2/dl=dP_3/dl=0$, and consequently the first-order information provided by the tip and tilt modes alone about $l$ vanishes for those values.
The values of CFI increase with the inclusion of further Zernike modes, as Fig.~\ref{CFI_TipTiltPistonDefocus} demonstrates. In this figure, we plot the relative contributions of the various Zernike modes, starting with the tip and tilt modes for two different values of $B$, namely 0 and 0.2, which correspond to the same values of $B$ as for the outside pair of curves in Fig.~\ref{CFI_TipTilt}. The lowest pair of curves that are bunched together represent the tip-tilt contribution to CFI for the two values of $B$. The next higher closely paired curves display CFI for the same two values of $B$ when the contribution of the piston Zernike, $Z_1$, is added, while the second highest pair of curves exhibit CFI when the final Zernike mode, $Z_4$, often called the defocus mode, is also included. The very highest pair of curves represent the overall CFI when the contributions from these four Zernikes and all other unobserved modes are added together. In each curve pair, the higher, solid one corresponds to $B=0$ and the lower, dashed one to $B=0.20$. To avoid confusion, we have not displayed the dependence of CFI for the remaining three, intermediate values of $B$ also covered by Fig.~\ref{CFI_TipTilt}, but those dependences fall, as expected, between each pair of solid and dashed curves shown in Fig.~\ref{CFI_TipTiltPistonDefocus}. As we readily see, even adding the piston mode to tip-tilt mode projections greatly enhances CFI over a much larger range of separations than tip-tilt projections alone.
\begin{figure}[htb]
\centerline{\includegraphics[width=0.9\columnwidth]{CFI_TipTiltRest_vs_separation_varyingB_bw.eps}}
\vspace{-0.2cm}
\caption{Plot of CFI for estimating $l$ from wavefront projections into the tip-tilt modes, $Z_2$ and $Z_3$, vs. $l$ for a variety of values of the fractional bandwidth, $B=\Delta f/f_0$.}
\label{CFI_TipTilt}
\end{figure}
\begin{figure}[htb]
\centerline{\includegraphics[width=0.9\columnwidth]{CFI_TipTiltPistonDefocusRest_vs_separation_B0_pt2_bw.eps}}
\vspace{-0.2cm}
\caption{Plot of CFI for estimating $l$ from wavefront projections into the tip-tilt, piston, and defocus modes, namely $Z_1,Z_2,Z_3,Z_4$, vs. $l$ for values 0 (solid lines) and 0.20 (dashed lines) of the fractional bandwidth, $B=\Delta f/f_0$. The bottom three pairs of closely-bunched curves capture the increase of CFI from partial contributions of the tip-tilt, piston, and defocus modes, while the top pair represent the total CFI from inclusion of the unobserved modes as well.}
\label{CFI_TipTiltPistonDefocus}
\end{figure}
\paragraph*{Discussion}
To gain some quantitative sense of the scale of resolution and the number of photons that might be needed to achieve minimum acceptable estimation variances, let us consider the following example. A symmetrical pair of point sources is separated by $2l=0.4$ in Airy diffraction parameter units, emits at the center wavelength $\lambda_0=500$ nm, and produces geometrical images a distance $z_I=0.2$ m away from a thin spherical mirror of aperture radius $R=0.1$ m. In physical units, the pair separation has the value $2l\,\delta=400$ nm. If the pair emission is observed in a 10\% fractional bandwidth ($B=0.1$), the values, per photon, of CFI calculated from observing projections into the tip-tilt Zernikes alone and tip-tilt-piston-defocus Zernikes alone are equal to 22.85 and 39.29, respectively, while QFI for $l=0.2$ and $B=0.1$ has the value 39.41, just a bit lower than the zero-bandwidth value of 4$\pi^2=39.48$ per photon. In other words, observing tip-tilt ($Z_2,Z_3$) projections alone can in principle achieve 58\% of the quantum upper bound on Fisher information (FI), while including piston and defocus mode ($Z_1,Z_4$) projections as well raises CFI to about 99.5\% of the quantum limit, making the latter classical limit essentially indistinguishable from the quantum upper bound.
As for minimum standard deviations (SDs), $\sigma_l^{({\rm min})}$, for estimating $l$, assuming {\em unbiased} estimation, their quantum and classical lower limits are given by the square root of the reciprocals of QFI and CFI, respectively. For our specific example, we calculate these SDs on estimating $l$ to be 0.1593 and 0.1595 units per photon. For $N$ photons, since CFI and QFI both scale up by a factor of $N$ in the photon counting regime, the SDs are smaller by the factor $\sqrt{N}$. For $N=100$, the minimum fractional error for estimating $l$ from the four lowest Zernike-mode projections is equal to $\sigma_l^{({\rm min})}/l=0.01593/0.2$, which is less than 8\%, making such estimations quite accurate even with just 100 photons. If finite detection errors, such as dark counts or additive noise, are present, as is true for even the best photon counting detectors \cite{Hadfield09,Slussarenko19}, the minimum photon numbers needed for resolving such a source pair would need to be higher.
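The quoted SDs follow directly from the Cram\'er--Rao relation $\sigma_l^{({\rm min})}=1/\sqrt{N F}$; a two-line check (again an illustrative sketch rather than the paper's own code):
\begin{verbatim}
import math
for F in (39.41, 39.29):  # QFI and four-Zernike CFI per photon, from the text
    print(1/math.sqrt(F), 1/math.sqrt(100*F))   # N = 1 and N = 100 photons
\end{verbatim}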
The mode projections, be they in the Zernike or another basis, can be acquired in the laboratory using a digital holographic mask \cite{Paur16} that encodes the modes in a Fourier optics set-up \cite{Goodman96, YuPrasad18}. A maximum-likelihood algorithm, as we also discussed in Ref.~\cite{YuPrasad18}, can then be used to recover the parameters of interest from the projection count data acquired using a photon-counting camera.
\section{Concluding Remarks}
This paper has presented a theoretical analysis of the problems of quantum-limited source localization and symmetrical point-source-pair separation in a single 2D object plane as the fractional bandwidth, $B$, of incoherent emission from the sources increases from zero and detection is limited only by photon counting noise. For both problems, the most important parameter that determines how the quantum estimation-theoretic bound degrades with increasing fractional bandwidth, $B$, is the effective space-bandwidth parameter, $\pi B\ell$, where $\ell$, in units of the Airy diffraction parameter, is either the source distance from a fixed point when localizing a point source or the distance of either source from the {\it a priori} known midpoint of the line joining the pair of point sources when resolving the pair. In both cases, the fixed point was chosen without loss of generality to be the point at which the optical axis intersects the object plane, taken to be the plane of Gaussian focus.
The number of eigenstates of the imaging-photon density operator with eigenvalues significantly different from 0 and which thus significantly control the fundamental quantum limit on the minimum variance of any possible unbiased estimation of $l$ is of order $S\defeq \lceil 2Bl\rceil$, with that limiting minimum error variance increasing significantly only when $S$ greatly exceeds 1. We may regard $S$ as the effective dimensionality of the continuous-state eigenvalue problem for the single-photon density operator for a point source emitting incoherently in a finite bandwidth. We have used the machinery of prolate spheroidal wave functions to formulate and obtain a solution of the eigenvalue problem, with which we then calculated the quantum bound numerically for a clear circular imaging pupil, exhibiting the detailed manner in which the quantum error bound increases with increasing value of $S$ for the two problems.
We have also shown that wavefront projections in the basis of Zernike modes can yield estimation fidelities approaching the quantum upper bound, even with a few low-order mode projections when the localization and pair separation distances are comparable to or smaller than the characteristic Rayleigh resolution scale. Including higher-order Zernike modes will surely reduce the gap between CFI and QFI for all values of such distances, but our recent work on quantum bounds for extended-source imaging has shown that this gap may not close fully even when {\em all} Zernike projections are included \cite{Prasad20c}.
While this paper has considered in detail the simplest form of a uniformly distributed emission power spectrum over a finite bandwidth outside which it vanishes identically, any general integrable power spectrum may be treated by an adaptation of the present calculation, as we show without detailed numerical evaluations in Appendix C. For unimodal power spectra, such as Lorentzian and Gaussian power spectra, we can always identify an effective SBP of form $\pi Bl$, in which $B$ is of order full width at half maximum (FWHM) of the emission spectrum when expressed as a fraction of the center frequency of that spectrum. We expect the detailed calculations presented in this paper and conclusions drawn from them to hold qualitatively even for such general power spectra.
Extensions of the finite-bandwidth QFI calculation to the axial dimension and pair brightness asymmetry for full 3D pair localization and separation will accord wavefront-projection-based superresolution techniques further value. Ultimately, however, these considerations will need to be generalized to finite sources with spatially non-uniform brightness distributions for a variety of low-light imaging applications.
\acknowledgments
The author is grateful for the research facilities provided by the School of Physics and Astronomy at the U. of Minnesota where he has held the position of Visiting Professor for the last two years. This work was partially supported under a consulting agreement with the Boeing Company and by Hennepin Healthcare Research Institute under a research investigator appointment.
| {'timestamp': '2020-09-01T02:37:13', 'yymm': '2006', 'arxiv_id': '2006.00982', 'language': 'en', 'url': 'https://arxiv.org/abs/2006.00982'} |
\section{Introduction}
The Standard Model (SM) of strong and electroweak interactions, spectrally
completed by the discovery of its Higgs boson at the LHC \cite{higgs-mass},
seems to be the model of the physics at the Fermi energies, as various experiments
have so far revealed no new particles beyond the SM spectrum. There is, however, at least the dark matter (DM), which does require new particles beyond the SM. We must therefore use every
opportunity to understand where such new particles, if any, can hide.
In the present work we study a massive spin-3/2 field hidden in the SM spectrum. This higher-spin field, described by the Rarita-Schwinger equations \cite{Rarita:1941mf,pilling}, has to obey certain constraints to have correct degrees of freedom when it is on the physical shell. At the renormalizable level, it can couple to the SM matter via only the neutrino portal (the composite SM singlet formed by the lepton doublet and the Higgs field). This interaction is such that it vanishes when the spin-3/2 field is on shell. In Sec. 2 below we give the model and basic constraints on the spin-3/2 field.
In Sec. 3 we study collider signatures of the spin-3/2 field. We study there $\nu_L h \rightarrow \nu_{L} h$ and $e^{-}e^{+}\rightarrow W^{+}W^{-}$ scatterings in detail. We give analytical computations and numerical predictions. We propose there a neutrino-Higgs collider and emphasize the importance of the linear collider in probing the spin-3/2 field.
In Sec. 4 we turn to loop effects of the spin-3/2 field. We find that the spin-3/2 field adds logarithmic and quartic UV-sensitivities atop the logarithmic and quadratic ones in the SM. We convert the power-law UV-dependent terms into curvature terms as a result of the incorporation of gravity into the SM. Here we use the results of \cite{gravity,gravity2}, which show that gravity can be incorporated into the SM properly and naturally {\it (i)} if the requisite curved geometry is structured by interpreting the UV cutoff as a constant value assigned to the spacetime curvature, and {\it (ii)} if the SM is extended by a secluded new physics (NP) sector that does not have to interact with the SM. This mechanism eliminates the big hierarchy problem by metamorphosing the quadratic UV part of the Higgs boson mass into a Higgs-curvature coupling.
In Sec. 5 we discuss possibility of Higgs inflation via the large Higgs non-minimal coupling induced by the spin-3/2 field. We find that Higgs inflation is possible in a wide range of parameters provided that the secluded NP sector is crowded enough.
In Sec. 6 we discuss the DM. We show therein that the spin-3/2 field is a viable DM candidate. We also show that the singlet fields in the NP can form a non-interacting DM component.
In Sec. 7 we conclude. There, we give a brief list of problems that can be studied to further the material presented in this work.
\section{A Light Spin-3/2 Field}
Introduced for the first time by
Rarita and Schwinger \cite{Rarita:1941mf}, $\psi_{\mu}$ propagates with
\begin{eqnarray}
S^{\alpha\beta}(p) = \frac{i}{{\slashed{p}} - M} \Pi^{\alpha\beta}(p),
\end{eqnarray}
to carry one spin-3/2 and two spin-1/2 components through the
projector \cite{pilling}
\begin{eqnarray}
\label{project}
\Pi^{\alpha\beta} = -\eta^{\alpha\beta} +
\frac{\gamma^{\alpha}\gamma^{\beta}}{3}+
\frac{\left(\gamma^{\alpha}p^{\beta} -
\gamma^{\beta}p^{\alpha}\right)}{3M}+\frac{2
p^{\alpha}p^{\beta}}{3 M^2},
\end{eqnarray}
that exhibits both spinor and vector characteristics.
It is necessary to impose \cite{pilling}
\begin{eqnarray}
\label{eqn4}
p^{\mu}\psi_{\mu}(p)\rfloor_{p^2=M^2}=0,
\end{eqnarray}
and
\begin{eqnarray}
\label{eqn4p}
\gamma^{\mu}\psi_{\mu}(p)\rfloor_{p^2=M^2}=0,
\end{eqnarray}
to eliminate the two spin-1/2 components to make $\psi_{\mu}$
satisfy the Dirac equation
\begin{eqnarray}\label{eqn5}
\left(\slashed{p} - M\right)\psi_{\mu}=0
\end{eqnarray}
as expected of an on-shell fermion. The constraints (\ref{eqn4}) and (\ref{eqn4p}) imply that $p^{\mu}\psi_{\mu}(p)$ and $\gamma^{\mu}\psi_{\mu}(p)$ both vanish on the physical shell $p^2=M^2$. The latter is illustrated in Fig. \ref{fig:Px} taking $\psi_{\mu}$ on-shell.
Characteristic of singlet fermions, the $\psi_{\mu}$, at the renormalizable level, makes contact with the SM via
\begin{eqnarray}
\label{int1}
{\mathcal{L}}^{(int)}_{3/2} = c^{i}_{{3/2}} \overline{L^{i}} H \gamma^{\mu}\psi_{\mu} + {\text{h.c.}}
\end{eqnarray}
in which
\begin{eqnarray}
L^i = \left(\begin{array}{c}\nu_{\ell L}\\ \ell_L\end{array}\right)_{i}
\end{eqnarray}
is the lepton doublet ($i=1,2,3$), and
\begin{eqnarray}
H = \frac{1}{\sqrt{2}}\left(\begin{array}{c}v + h + i \varphi^0\\ \sqrt{2} \varphi^{-}\end{array}\right)
\end{eqnarray}
is the Higgs doublet with vacuum expectation value $v\approx 246\ {\rm GeV}$, Higgs boson $h$, and Goldstone bosons $\varphi^{-}$, $\varphi^0$ and $\varphi^+$ (forming the longitudinal components of $W^{-}$, $Z$ and $W^{+}$ bosons, respectively).
In general, neutrinos are sensitive probes of singlet fermions. They can get masses through, for instance, the Yukawa interaction (\ref{int1}), which leads to the Majorana mass matrix
\begin{eqnarray}
(m_{\nu})^{i j}_{3/2} \propto c^i_{{3/2}} \frac{v^2}{M} c^{\star j}_{{3/2}}
\end{eqnarray}
after integrating out $\psi_{\mu}$. This mass matrix cannot, however, reproduce the experimentally known neutrino mixings \cite{neutrino-mass}. This means that the flavor structure necessitates additional singlet fermions. Among such fermions are the right-handed neutrinos $\nu_R^k$ of mass $M_k$ ($k=1,2,3,\dots$), which interact with the SM through
\begin{eqnarray}
\label{int2}
{\mathcal{L}}^{(int)}_{R} = c_{{R}}^{i k} \bar{L}^i H \nu_R^k + {\text{h.c.}}
\end{eqnarray}
to generate the neutrino Majorana masses
\begin{eqnarray}
(m_{\nu})^{i j}_{R} \propto c_{{R}}^{i k} \frac{v^2}{M_k} c_{{R}}^{\star k j}
\end{eqnarray}
of more general flavor structure. This mass matrix must have enough degrees of freedom to fit to the data \cite{neutrino-mass}.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.5]{onshell.pdf}
\end{center}
\caption{$\psi_{\mu}-h-\nu_L$ coupling with vertex factor $i c_{3/2} \gamma^{\mu}$. Scatterings in which $\psi_{\mu}$ is on shell must all be forbidden since $c_{3/2} \gamma^{\mu} \psi_{\mu}$ vanishes on the mass shell by the constraint (\ref{eqn4p}). This ensures stability of $\psi_{\mu}$ against decays and all sorts of co-annihilations.} \label{fig:Px}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.45]{vhvhZmed-cropped.pdf}
\end{center}
\caption{The $\nu-Z$ box mediating the $\nu_L h \rightarrow \nu_L h$ scattering in the SM. The $e-W$ box is not shown. } \label{nhnh-SM}
\end{figure}
\begin{figure}[ht!]
\begin{center}
\includegraphics[scale=0.40]{DM2-cropped.pdf}
\end{center}
\caption{$\nu_L h \rightarrow \nu_L h$ scattering with $\psi_{\mu}$ mediation. No resonance can occur at $\sqrt{s}=M$ because $\psi_{\mu}$ cannot come to the mass shell.} \label{nhnh-3/2}
\end{figure}
Here we make a pivotal assumption. We assume that $\psi_{\mu}$ and $\nu_R^k$ can weigh as low as a TeV, and that $c^i_{{3/2}}$ and some of $c_{{R}}^{i k}$ can be ${\mathcal{O}}(1)$. We, however, require that contributions to neutrino masses from
$\psi_{\mu}$ and $\nu_R$ add up to reproduce the experimental result
\begin{eqnarray}
\label{numass}
(m_{\nu})^{i j}_{3/2} + (m_{\nu})^{i j}_{R} \approx (m_{\nu})^{i j}_{exp}
\end{eqnarray}
via cancellations among different terms. We therefore take
\begin{eqnarray}
c_{{3/2}} \lesssim {\mathcal{O}}(1)\,,\; M\gtrsim {\rm TeV}
\end{eqnarray}
and investigate the physics of $\psi_{\mu}$. This cancellation requirement does not have to cause any excessive fine-tuning simply because $\psi_{\mu}$ and $\nu_R^k$ can have appropriate symmetries that correlate their couplings. One possible symmetry would be a rotation of $\gamma^{\mu}\psi_{\mu}$ and $\nu_R^k$ into each other. We defer the study of possible symmetries to another work in progress \cite{Ozan}. The right-handed sector, which can involve many $\nu_R^k$ fields, is interesting in its own right, but from here on we focus on $\psi_{\mu}$ and take, for simplicity, $c^i_{{3/2}}$ real and family-universal ($c^i_{{3/2}}=c_{{3/2}}$ for all $i$).
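To put the required cancellation in perspective (a simple arithmetic aside): with $c_{3/2}\sim 1$ and $M\sim {\rm TeV}$, the seesaw-like factor $v^2/M\approx 60\ {\rm GeV}$ exceeds the sub-eV neutrino mass scale by some twelve orders of magnitude, so the two contributions in (\ref{numass}) must cancel to high precision, as such a symmetry would enforce.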
\section{Spin-3/2 Field at Colliders}
It is only when it is off-shell that $\psi_{\mu}$ can reveal itself through the interaction (\ref{int1}). This means that its effects are restricted to modifications in scattering rates of the SM particles. To this end, as follows from (\ref{int1}), it participates in
\begin{enumerate}
\item $\nu_L h \rightarrow \nu_{L} h$ (and also $\nu_{L}\nu_{L} \rightarrow h h$)
\item $e^+ e^- \rightarrow W^+_L W^-_L$ (and also $\nu_{L}\nu_{L} \rightarrow Z_L Z_L$)
\end{enumerate}
at the tree level. They are analyzed below in detail.
\subsection{$\nu_L h \rightarrow \nu_{L} h$ Scattering}
Shown in Fig. \ref{nhnh-SM} are the two box diagrams which enable $\nu_L h \rightarrow \nu_L h$ scattering in the SM. Added to this loop-suppressed SM piece is the $\psi_{\mu}$ piece depicted in Fig. \ref{nhnh-3/2}. The two contributions add up to give the cross section
\begin{eqnarray}
\frac{d\sigma(\nu_L h \rightarrow \nu_L h)}{dt}= \frac{1}{16\pi}\frac{{\mathcal{T}_{\nu h}}({{s}},{{t}})}{(s-m_{h}^2)^2}
\end{eqnarray}
in which the squared matrix element
\begin{widetext}
\begin{eqnarray}
\label{mat-el-nuhnuh}
{\mathcal{T}_{\nu h}}({{s}},{{t}}) &=& 9\! \left(\frac{c_{3/2}}{3 M}\right)^4\!\! \left(\!
\left({{s}}-m_h^2\right)^2 + {{s}}{{t}}\right) \!-\! 16\! \left(\frac{c_{3/2}}{3 M}\right)^2\!\! \left(\!
2\left({{s}}-m_h^2\right)^2 \!+\! \left(2{{s}} -m_h^2\right){{t}}\right) {\mathbb{L}} \!+\! 2\left(
{{s}}-m_h^2\right)\left({{s}} + {{t}}-m_h^2\right) {\mathbb{L}}^2
\end{eqnarray}
\end{widetext}
\noindent involves the loop factor
\begin{eqnarray}
{\mathbb{L}}=\! \frac{(g_W^2\!+\!g_Y^2)^2 M_Z^2 m_h^2 I(M_Z)}{192 \pi^2}\! + \!\frac{g_W^4 M_W^2 m_h^2 I(M_W)}{96 \pi^2}
\end{eqnarray}
in which $g_W$ ($g_Y$) is the isospin (hypercharge) gauge coupling, and
\begin{widetext}
\begin{eqnarray}
I(\mu)=\int_{0}^{1}dx\int_{0}^{1-x}dy\int_{0}^{1-x-y}dz \left((s-m_h^2)(x+y+z-1) y - txz + m_h^2 y (y-1) + \mu^2 (x + y + z)\right)^{-2}
\end{eqnarray}
\end{widetext}
\noindent is the box function. In Fig. \ref{fig:Pxx}, we plot the total cross section $\sigma(\nu_L h \rightarrow \nu_L h)$ as a function of the neutrino-Higgs center-of-mass energy for different $M$ values. The first important thing about the plot is that there is no resonance formation around $\sqrt{s}=M$. This confirms the fact that $\psi_{\mu}$, under the constraint (\ref{eqn4p}), cannot come to the physical shell with the couplings in (\ref{int1}). In consequence, the main search strategy for $\psi_{\mu}$ is to look for deviations from the SM rates rather than resonance shapes. The second important thing about the plot is that, in general, as revealed by (\ref{mat-el-nuhnuh}), the larger the $M$, the smaller the $\psi_{\mu}$ contribution. The cross section starts around $10^{-7}\ {\rm pb}$, and falls rapidly with $\sqrt{s}$. (The SM piece, as a loop effect, is too tiny to be observable: $\sigma(\nu_L h \rightarrow \nu_L h)\lesssim 10^{-17}\ {\rm pb}$.) It is necessary to have some $10^{4}/fb$ integrated luminosity (100 times the target luminosity at the LHC) to observe a few events in a year. This means that $\nu_L h \rightarrow \nu_L h$ scattering can probe $\psi_{\mu}$ only at high luminosity, but with a completely new scattering scheme.
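As an aside, the box function can be evaluated by direct quadrature over the Feynman parameters. The short sketch below (an illustration assuming SciPy, not code from the original work) does this at a spacelike test point where the denominator stays nonzero; near physical thresholds the denominator can vanish, the integral develops an imaginary part, and this naive quadrature no longer applies.
\begin{verbatim}
from scipy.integrate import tplquad

def box_I(mu, s, t, mh=125.0):
    # I(mu): integral over the Feynman-parameter simplex x+y+z <= 1 (GeV units)
    def integrand(z, y, x):
        D = ((s - mh**2)*(x + y + z - 1.0)*y - t*x*z
             + mh**2*y*(y - 1.0) + mu**2*(x + y + z))
        return 1.0/D**2
    val, _ = tplquad(integrand, 0.0, 1.0,
                     lambda x: 0.0, lambda x: 1.0 - x,
                     lambda x, y: 0.0, lambda x, y: 1.0 - x - y)
    return val

# spacelike test point s = t = -(100 GeV)^2 with an internal Z boson
print(box_I(91.19, s=-1.0e4, t=-1.0e4))
\end{verbatim}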
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=1.5]{vhvh-total-xsection-cropped.pdf}
\end{center}
\caption{The total cross section for $\nu_L h \rightarrow \nu_L h$ scattering as a function of the neutrino-Higgs center-of-mass energy $\sqrt{s}$ for $M=1, 2$ and $3\ {\rm TeV}$ at $c_{3/2}= 1$. Cases with $c_{3/2}\neq 1$ can be reached via the rescaling $M\rightarrow M/c_{3/2}$.} \label{fig:Pxx}
\end{figure}
\begin{figure}[ht!]
\begin{center}
\includegraphics[scale=0.50]{skecth-cropped.pdf}
\end{center}
\caption{Possible neutrino-Higgs collider to probe $\psi_{\mu}$.} \label{fig:P10}
\end{figure}
Fig. \ref{fig:Pxx} shows that neutrino-Higgs scattering can be a promising channel to probe $\psi_{\mu}$ (at high-luminosity, high-energy machines). The requisite experimental setup would involve crossing Higgs factories with accelerator neutrinos. The setup, schematically depicted in Fig. \ref{fig:P10}, can be viewed as incorporating future Higgs (CEPC \cite{Ruan:2014xxa}, FCC-ee \cite{Gomez-Ceballos:2013zzn} and ILC \cite{Baer:2013cma}) and neutrino \cite{Choubey:2011zzq} factories. If ever realized, it could be a rather clean experiment with negligible SM background. This hypothetical ``neutrino-Higgs collider'', depicted in Fig. \ref{fig:P10}, must have, as suggested by Fig. \ref{fig:Pxx}, some $10^4/fb$ integrated luminosity to be able to probe a TeV-scale $\psi_{\mu}$. In general, the need for high luminosities is a disadvantage of this channel. (Feasibility study, technical design and possible realization of a ``neutrino-Higgs collider'' fall outside the scope of the present work.)
\begin{figure}[ht!]
\begin{center}
\includegraphics[scale=0.45]{elpoWW-cropped.pdf}
\end{center}
\caption{The Feynman diagram for $e^+ e^- \rightarrow W_L^+ W_L^-$ scattering. The $\nu_L \nu_L \rightarrow Z_L Z_L$ scattering has the same topology.} \label{fig:w6}
\end{figure}
\subsection{$e^+ e^- \rightarrow W_L^+ W_L^-$ Scattering}
It is clear that $\psi_{\mu}$ directly couples to the Goldstone bosons $\varphi^{+,-,0}$ via (\ref{int1}). The Goldstones, though eaten up by the $W$ and $Z$ bosons in acquiring their masses, reveal themselves at high energies. In fact, the Goldstone equivalence theorem \cite{equivalence} states that scatterings at energy $E$ involving longitudinal $W^{\pm}_L$ bosons are equal to scatterings that involve $\varphi^{\pm}$ up to terms ${\mathcal{O}}(M_W^2/E^2)$. This theorem, with a similar equivalence for the longitudinal $Z$ boson, provides a different way of probing $\psi_{\mu}$. In this regard, depicted in Fig. \ref{fig:w6} is the $\psi_{\mu}$ contribution to
$e^+ e^- \rightarrow W_L^+ W_L^-$ scattering in light of the Goldstone equivalence. The SM amplitude is given in \cite{equivalence}. The total differential cross section
\begin{eqnarray}
\frac{d\sigma(e^+ e^- \rightarrow W^+_L W^-_L)}{dt}= \frac{1}{16\pi s^2} {{\mathcal{T}_{W_L W_L}}({{s}},{{t}})}
\end{eqnarray}
involves the squared matrix element
\begin{widetext}
\begin{eqnarray}
\label{mat-el-WW}
{{\mathcal{T}_{W_L W_L}}({{s}},{{t}})}\! &=&\! \left(\! \frac{g_W^2}{s-M_Z^2}\left(\!-1+\! \frac{M_Z^2}{4 M_W^2}\! +\! \frac{M_Z^2-M_W^2}{s}\right)\! +\! \frac{g_W^2}{s-4 M_Z^2}\left(\!1+\! \frac{M_W^2}{t}\right)\! +\! \frac{c^{2}_{3/2}}{3 M^2}\right)^{2}\!\!\! \left(-2 s M_W^2 -2 (t-M_W^2)^2\right) \nonumber\\
&+&\frac{c^4_{3/2} s}{18 M^2} \left(4 + \frac{t}{t-M^2}\right)^2
\end{eqnarray}
\end{widetext}
\noindent Plotted in Fig. \ref{fig:Wxx} is $\sigma(e^+ e^- \rightarrow W^+_L W^-_L)$ as a function of the $e^+ e^-$ center-of-mass energy for different values of $M$. The cross section, which falls with $\sqrt{s}$ without exhibiting a resonance shape, is seen to be large enough to be measurable at the ILC \cite{Baer:2013cma}. In general, the larger the $M$, the smaller the cross section, but even $1/fb$ of integrated luminosity is sufficient for probing $\psi_{\mu}$ over a wide range of mass values.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=1.45]{elpoWW-xsection-cropped.pdf}
\end{center}
\caption{ The total cross section for $e^{-}e^{+}\rightarrow W^{+}W^{-}$ scattering as a function of the electron-positron center-of-mass energy $\sqrt{s}$ for $M=1, 2$ and $3\ {\rm TeV}$ at $c_{3/2}= 1$. Cases with $c_{3/2}\neq 1$ can be reached via the rescaling $M\rightarrow M/c_{3/2}$.} \label{fig:Wxx}
\end{figure}
Collider searches for $\psi_{\mu}$, as illustrated by $\nu_L h \rightarrow \nu_{L} h$ and $e^{-}e^{+}\rightarrow W^{+}W^{-}$ scatterings, can access spin-3/2 fields of several TeV mass. For instance, the ILC, depending on its precision, can confirm or exclude a $\psi_{\mu}$ of even 5 TeV mass with an integrated luminosity around $1/fb$. Depending on the possibility and feasibility of a neutrino-neutrino collider (based mainly on accelerator neutrinos), it may also be possible to study $\nu_L \nu_L \rightarrow h h$ and $\nu_L \nu_L \rightarrow Z_L Z_L$ scatterings, which are expected to have similar sensitivities to $M$.
\section{Spin-3/2 Field in Loops}
As an inherently off-shell field, $\psi_{\mu}$ is expected to reveal itself mainly in loops. One possible loop effect would be the generation of neutrino masses, but chirality forbids it. Despite the couplings in (\ref{int1}), therefore, neutrino masses receive no contribution from the $\psi_{\mu}-h$ loop.
One other loop effect of $\psi_{\mu}$ would be radiative corrections to the Higgs boson mass. This is not forbidden by any symmetry. The relevant Feynman diagram is depicted in Fig. \ref{fig:P7}. It adds to the Higgs boson squared-mass a logarithmic piece
\begin{eqnarray}
\label{log-corr}
\left(\delta m_h^2\right)_{log} = \frac{c_{3/2}^2}{12\pi^2}M^2\log G_F M^2
\end{eqnarray}
relative to the logarithmic piece $\log G_F \Lambda^2$ in the SM, and a quartic piece
\begin{eqnarray}\label{eqn88}
\left(\delta m_h^2\right)_{4} = \frac{c_{3/2}^2}{ 48 \pi^2} \frac{ \Lambda^4}{M^2}
\end{eqnarray}
which have the potential to override the experimental result \cite{higgs-mass} depending on how large the UV cutoff $\Lambda$ is compared to the Fermi scale $G_F^{-1/2} = 293\ {\rm GeV}$.
The logarithmic contribution in (\ref{log-corr}), which originates from the $\eta^{\alpha\beta}$ part of (\ref{project}), gives rise to the little hierarchy problem in that the larger the $M$, the stronger the destabilization of the SM Higgs sector. Leaving aside the possibility of cancellations with similar contributions from the right-handed neutrinos $\nu_R^k$ in (\ref{int2}), the little hierarchy problem can be prevented if $M$ (more precisely $M/c_{3/2}$) lies in the TeV domain.
The quartic contribution in (\ref{eqn88}), which originates from the longitudinal $p^{\alpha} p^{\beta}$ term in (\ref{project}), gives rise to the notorious big hierarchy problem in that the larger the $\Lambda$, the larger the destabilization of the SM Higgs sector. This power-law UV sensitivity already exists in the SM
\begin{eqnarray}\label{eqn8}
\left(\delta m_h^2\right)_{2}&=&\frac{3 \Lambda^2}{16 \pi^2 {\left|\langle H \rangle\right|^2}}\left( m_h^2 + 2 M_W^2 + M_Z^2 - 4 m_t^2\right)
\end{eqnarray}
at the quadratic level \cite{Veltman:1980mj} and
violates the LHC bounds unless $\Lambda \lesssim 550\
{\rm GeV}$. This bound obviously contradicts the
LHC experiments since the latter continue to confirm the
SM at multi-{\rm TeV} energies. This experimental fact makes
it obligatory to find a natural UV completion of the SM.
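The $550\ {\rm GeV}$ figure can be checked with a few lines of arithmetic (an illustrative sketch; the masses are the standard ones, and the criterion $|\delta m_h^2|\lesssim m_h^2/2$ is one common fine-tuning convention that reproduces the quoted bound):
\begin{verbatim}
import math
mh, MW, MZ, mt, v = 125.0, 80.4, 91.2, 173.0, 246.0   # GeV
coeff = 3*(mh**2 + 2*MW**2 + MZ**2 - 4*mt**2)/(16*math.pi**2*v**2)
print(math.sqrt(mh**2/(2*abs(coeff))))  # ~ 548 GeV, i.e. Lambda <~ 550 GeV
\end{verbatim}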
\begin{figure}[ht!]
\begin{center}
\includegraphics[scale=0.45]{naturalness.pdf}
\end{center}
\caption{The $\psi_{\mu}-\nu_L$ loop that generates the logarithmic correction in (\ref{log-corr}) and the quartic correction in (\ref{eqn88}).} \label{fig:P7}
\end{figure}
One possibility is to require $\left(\delta m_h^2\right)_{4}$ to cancel out $\left(\delta m_h^2\right)_{2}$. This requirement involves a severe fine-tuning (as with a scalar field
\cite{fine-tune-scalar}, Stueckelberg vector \cite{besteyle} and
spacetime curvature \cite{curvature-ft}) and cannot form a viable
stabilization mechanism.
Another possibility would be to switch, for instance, to dimensional
regularization scheme, wherein the quartic and quadratic
UV-dependencies are known to disappear. This, however, is not a
solution. The reason is that the SM, as a quantum field theory of
the strong and electroweak interactions, needs gravity to be
incorporated as the forth known force. And the fundamental scale of
gravity, $M_{Pl}$, inevitably sets an ineliminable physical UV
cutoff (rendering $\Lambda$ physical). This cutoff forces quantum field theories to exist in between
physical UV and IR scales. The SM plus $\psi_{\mu}$ (plus right-handed neutrinos), for instance,
ranges from $G_{F}^{-1/2}$ at the IR up to $\Lambda$ at the UV such
that both scales are physical (not to be confused with the formal
momentum cutoffs employed in the cutoff regularization).
To stabilize the SM, it is necessary to metamorphose the
destabilizing UV effects. This necessitates a physical agent. The
most obvious candidate is gravity. That is to say, the
UV-naturalness problems can be a clue to how quantized matter must
gravitate. Indeed, quantized matter in classical curved geometry
suffers from inconsistencies. The situation can be improved by
considering long-wavelength matter, obtained by integrating out high-frequency
modes. This means that the theory to be carried into curved geometry
for incorporating gravity is not the full action but the effective
action (see the discussions in \cite{gravity} and \cite{gravity2}). Thus, starting with
the SM effective action in flat spacetime with well-known
logarithmic, quartic and quadratic UV-sensitivities, gravity can be
incorporated in a way ensuring UV-naturalness. More precisely,
gravity gets incorporated properly and naturally {\it (i)} if the
requisite curved geometry is structured by interpreting $\Lambda^2$
as a constant value assigned to the spacetime curvature, and {\it
(ii)} if the SM is extended by new physics (NP) that does not have
to interact with the SM. The $\psi_{\mu}$ can well be an NP field.
Incorporating gravity by identifying $\Lambda^2 g_{\mu\nu}$ with
the Ricci curvature $R_{\mu\nu}(g)$, fundamental scale of
gravity gets generated as
\begin{eqnarray}
\label{MPl}
M_{Pl}^2 \approx \frac{\left(n_b-n_f\right)}{2(8 \pi)^2} \Lambda^2
\end{eqnarray}
where $n_b$ ($n_f$) are the total number of bosons (fermions) in the
SM plus the NP. The $\psi_{\mu}$ increases $n_f$ by 4, right-handed neutrinos by 2. There are
various other fields in the NP, which contribute to $n_b$ and $n_f$
to ensure $\Lambda \lesssim M_{Pl}$. Excepting $\psi_{\mu}$, they
do not need to interact with the SM fields. Induction of $M_{Pl}$
ensures that the quadratic UV-contributions to vacuum energy are
canalized not to the cosmological constant but to the gravitational
constant (see \cite{demir-ccp} arriving at this result in a
different context). This suppresses the cosmological constant down
to the neutrino mass scale.
The quartic UV contributions in (\ref{eqn88}) and the quadrat\-ic
contributions in (\ref{eqn8}) (suppressing contributions from the right-handed
neutrinos $\nu_R^k$) change their roles with the inclusion
of gravity. Indeed, the correction to the Higgs mass term $\left[\left(\delta m_h^2\right)_{4}\!+\!\left(\delta m_h^2\right)_{2} \right]\!\! H^{\dagger}\! H$ turns
into
\begin{equation}
\label{exp}
\left[\!\frac{3\!\left(\!m_h^2\! +\! 2 M_W^2\! +\! M_Z^2\! -\! 4 m_t^2\!\right)}{(8\pi)^2\left|\langle H \rangle\right|^2}
\!+\! \frac{c_{3/2}^2}{12(n_b\!-\!n_f)}\! \frac{M_{Pl}^2}{M^2}\! \right]\!\! R H^{\dagger}\! H
\end{equation}
which is nothing but the direct coupling of the Higgs field to the
scalar curvature $R$. This Higgs-curvature coupling is perfectly
natural; it has no potential to de-stabilize the Higgs sector.
Incorporation of gravity as in \cite{gravity,gravity2} leads, therefore, to
UV-naturalization of the SM with a nontrivial NP sector
containing $\psi_{\mu}$ as its interacting member.
\section{Spin-3/2 Field as Enabler of Higgs Inflation}
The non-minimal Higgs-curvature coupling in (\ref{exp}) reminds one at once of the possibility of Higgs inflation. Indeed, the Higgs field has been shown in \cite{higgs-inf,higgs-inf-2} to lead to correct inflationary expansion provided that
\begin{eqnarray}
\frac{c_{3/2}^2}{12(n_b-n_f)} \frac{M_{Pl}^2}{M^2} \approx 1.7\times 10^{4}
\end{eqnarray}
after dropping the small SM contribution in (\ref{exp}). This relation puts constraints on $M$ and $\Lambda$ depending on how crowded the NP is.
For a Planckian UV cutoff $\Lambda \approx M_{Pl}$, the Planck scale in (\ref{MPl}) requires $n_b - n_f\approx 1300$, and this leads to $M/c_{3/2}\approx 6.3\times 10^{13}\ {\rm GeV}$. This heavy $\psi_{\mu}$, weighing not far from the see-saw and axion scales, acts as an enabler of Higgs inflation. (Of course, all this makes sense if the $\psi_{\mu}$ contribution in (\ref{log-corr}) is neutralized by similar contributions from the right-handed neutrinos $\nu_R^k$ to alleviate the little
hierarchy problem.)
For an intermediate UV cutoff $\Lambda\ll M_{Pl}$, $n_b-n_f$ can be large enough to bring $M$ down to lower scales. In fact, $M$ gets lowered to $M\sim {\rm TeV}$ for $n_b-n_f\simeq 10^{24}$, and this sets the UV cutoff $\Lambda \sim 3\ {\rm TeV}$. This highly crowded NP illustrates how small $M$
and $\Lambda$ can be. Less crowded NP sectors lead to intermediate-scale $M$ and $\Lambda$.
It follows therefore that it is possible to realize Higgs inflation through the Higgs-curvature coupling (corresponding to the quartic UV-dependence that $\psi_{\mu}$ induces on the Higgs mass). It turns out that Higgs inflation is decided by how heavy $\psi_{\mu}$ is and how crowded the NP is. It is interesting that the $\psi_{\mu}$ hidden in the SM spectrum enables successful Higgs inflation if gravity is incorporated into the SM as in \cite{gravity,gravity2}.
\section{Spin-3/2 Field as Dark Matter}
Dark matter (DM), forming one-fourth of the matter in the Universe, must be electrically-neutral and long-lived. The negative searches \cite{plehn,leszek} so far have added one more feature: The DM must have exceedingly suppressed interactions with the SM matter. It is not hard to see that the spin-3/2 fermion $\psi_{\mu}$ possesses all these properties. Indeed, the constraint (\ref{eqn4p}) ensures that scattering processes in which $\psi_{\mu}$ is on its mass shell must all be forbidden simply because its interactions in (\ref{int1}) involve the vertex factor $c_{3/2} \gamma^{\mu}$. This means that decays of $\psi_{\mu}$ as in Fig.~\ref{fig:Px}, as well as its co-annihilations with itself and with the SM fields, are all forbidden. Its density therefore does not change with time, and the observed DM relic density \cite{planck} must be its primordial density, which is determined by the short-distance physics the $\psi_{\mu}$ descends from. It is not possible to calculate the relic density without knowing the short-distance physics. Its mass and couplings, on the other hand, can be probed via the known SM-scatterings studied in Sec. 3 above. In consequence, the $\psi_{\mu}$, as an inherently off-shell fermion hidden in the SM spectrum, possesses all the features required of a DM candidate.
Of course, the $\psi_{\mu}$ is not the only DM candidate in the setup. The crowded NP sector, needed to incorporate gravity in a way that solves the hierarchy problem (see Sec. 4 above), involves various fields which do not interact with the SM matter. They are viable candidates for non-interacting DM as well as dark energy (see the detailed analysis in \cite{gravity2}). The non-interacting NP fields can therefore contribute to the total DM distribution in the Universe. It is, of course, not possible to search for them directly or indirectly. In fact, they do not have to come to equilibrium with the SM matter.
Interestingly, both $\psi_{\mu}$ and the secluded fields in the NP act as extra fields hidden in the SM spectrum. Unlike $\psi_{\mu}$, which reveals itself virtually, the NP singlets remain completely intact. The main implication is that, in DM phenomenology, one must keep in mind that there can exist an unobservable, undetectable component of the DM \cite{gravity2}.
\section{Conclusion and Outlook}
In this work we have studied a massive spin-3/2 particle $\psi_{\mu}$ obeying the constraint (\ref{eqn4p}) and interacting with the SM via (\ref{int1}). It hides in the SM spectrum as an
inherently off-shell field. We first discussed its collider signatures by studying $\nu_L h \rightarrow \nu_{L} h$ and $e^{-}e^{+}\rightarrow W^{+}W^{-}$ scatterings in detail in Sec. 3. Following this, we turned to its loop effects and determined how it contributes to the big and little hierarchy problems in the SM. Resolving the former by appropriately incorporating gravity, we showed that the Higgs field can inflate the Universe. Finally, we showed that $\psi_{\mu}$ is a viable
DM candidate, which can be indirectly probed via the scattering processes we have analyzed.
The material presented in this work can be extended in various ways. A partial list would include:
\begin{itemize}
\item Determining under what conditions right-handed neutrinos can lift the constraints on $\psi_{\mu}$ from the neutrino masses,
\item Improving the analyses of $\nu_L h \rightarrow \nu_{L} h$ and $e^{-}e^{+}\rightarrow W^{+}W^{-}$ scatterings by including loop contributions,
\item Simulating $e^{-}e^{+}\rightarrow W^{+}W^{-}$ at the ILC by taking into account planned detector acceptances and collider energies,
\item Performing a feasibility study of the proposed neutrino-Higgs collider associated with $\nu_L h \rightarrow \nu_{L} h$ scattering,
\item Exploring UV-naturalness by including right-handed neutrinos, and determining under what conditions the little hierarchy problem is softened,
\item Including effects of the right-handed neutrinos in Higgs inflation, and determining the appropriate parameter space,
\item Giving an in-depth analysis of the dark matter and dark energy by taking into account the spin-3/2 field, right-handed neutrinos and the secluded NP fields,
\item Studying constraints on the masses of NP fields from nucleosynthesis and other processes in the early Universe.
\end{itemize}
We will continue to study the spin-3/2 hidden field starting with some of these points.
{\bf Acknowledgements.}
This work is supported in part by the TUBITAK grant 115F212. We thank the conscientious referee for enlightening comments and suggestions.
| {'timestamp': '2017-08-29T02:04:02', 'yymm': '1708', 'arxiv_id': '1708.07956', 'language': 'en', 'url': 'https://arxiv.org/abs/1708.07956'} |
\section{Introduction}\label{sec: introduction} The weight
part of generalisations of Serre's conjecture has seen significant
progress in recent years, particularly for (forms of) $\operatorname{GL}_2$. Conjectural
descriptions of the set of Serre weights were made in increasing
generality by \cite{bdj}, \cite{MR2430440} and \cite{GHS}, and cases
of these conjectures were proved in \cite{geebdj} and
\cite{geesavitttotallyramified}. Most recently, significant progress
was made towards completely establishing the conjecture for rank two
unitary groups in \cite{blggU2}. We briefly recall this result. Let
$p>2$ be prime, let $F$ be a CM field, and let
$\bar{r}:G_F\to\operatorname{GL}_2({\overline{\F}_p})$ be a modular representation (see
\cite{blggU2} for the precise definition of ``modular'', which is in
terms of automorphic forms on compact unitary groups). There is a
conjectural set $W^?(\bar{r})$ of Serre weights in which $\bar{r}$ is
predicted to be modular, which is defined in Section \ref{sec: serre
weight definitions} below, following \cite{GHS}. Then the main
result of \cite{blggU2} is that under mild technical hypotheses,
$\bar{r}$ is modular of every weight in $W^?(\bar{r})$.
It remains to show that if $\bar{r}$ is modular of some weight, then
this weight is contained in $W^?(\bar{r})$. It had been previously
supposed that this was the easier direction; indeed, just as in the
classical case, the results of
\cite{blggU2} reduce the weight part of Serre's conjecture for these
unitary groups to a purely local problem in $p$-adic Hodge
theory. However, this problem has proved to be difficult,
and so far only fragmentary results are
known. In the present paper we resolve the problem in the totally
ramified case, so that in combination with \cite{blggU2} we resolve
the weight part of Serre's conjecture in this case, proving the
following Theorem (see Theorem \ref{thm: the main result, modular if
and only if predicted}).
\begin{ithm}
\label{thm: intro: the main result, modular if and only if predicted}Let
$F$ be an imaginary CM field with maximal totally real subfield
~$F^+$, and suppose that $F/F^+$ is unramified at all finite places,
that every place of $F^+$ dividing $p$ splits completely in $F$,
that $\zeta_p\notin F$, and that $[F^+:{\mathbb{Q}}]$ is even. Suppose that
$p>2$, and that $\bar{r}:G_F\to\operatorname{GL}_2({\overline{\F}_p})$ is an irreducible
modular representation with split ramification such that
$\bar{r}(G_{F(\zeta_p)})$ is adequate. Assume that for each place $w|p$
of $F$, $F_w/\Qp$ is totally ramified.
Let $a\in({\mathbb{Z}}^2_+)_0^S$ be a Serre weight. Then
$a_w\in W^?(\bar{r}|_{G_{F_w}})$ if and only if $\bar{r}$ is modular of
weight $a$.
\end{ithm}(See the body of the paper, especially Section~\ref{ss:global}, for any unfamiliar notation and
terminology.) While \cite{blggU2} reduced this result to a purely
local problem, our methods are not purely local; in fact we use the
main result of \cite{blggU2}, together with potential automorphy
theorems, as part of our proof.
In the case that $\bar{r}|_{G_{F_w}}$ is semisimple for each place
$w|p$, the result was established (in a slightly different setting) in
\cite{geesavitttotallyramified}. The method of proof was in part
global, making use of certain potentially Barsotti-Tate lifts to
obtain conditions on $\bar{r}|_{G_{F_w}}$. We extend this analysis in
the present paper to the case that $\bar{r}|_{G_{F_w}}$ is reducible but
non-split,
obtaining conditions on the extension classes that can occur; we show
that (other than in one exceptional case) they lie in a certain set $L_{\operatorname{flat}}$, defined in terms of finite
flat models.
In the case that $\bar{r}|_{G_{F_w}}$ is reducible the definition of
$W^?$ also depends on the extension class; it is required to lie in
a set $L_{\operatorname{crys}}$, defined in terms of reducible crystalline lifts with
specified Hodge-Tate weights. To complete the proof, one must show
that $L_{\operatorname{crys}}=L_{\operatorname{flat}}$. An analogous result was proved in generic
unramified cases in section 3.4 of \cite{geebdj} by means of explicit
calculations with Breuil modules; our approach here is less direct,
but has the advantage of working in non-generic cases, and requires
far less calculation.
We use a global argument to show that
$L_{\operatorname{crys}}\subset L_{\operatorname{flat}}$. Given a class in $L_{\operatorname{crys}}$, we use potential
automorphy theorems to realise the corresponding local representation
as part of a global modular representation, and then apply the main
result of \cite{blggU2} to show that this representation is modular of
the expected weight. Standard congruences between automorphic forms
then show that this class is also contained in $L_{\operatorname{flat}}$.
To prove the converse inclusion, we make a study of different finite
flat models to show that $L_{\operatorname{flat}}$ is contained in a vector space of
some dimension $d$. A standard calculation shows that $L_{\operatorname{crys}}$
contains a space of dimension $d$, so equality follows. As a
byproduct, we show that both $L_{\operatorname{flat}}$ and $L_{\operatorname{crys}}$ are vector
spaces. We also show that various spaces defined in terms of
crystalline lifts are independent of the choice of lift (see Corollary
\ref{cor: independence of lift for H^1_f}). The analogous property was
conjectured in the unramified case in \cite{bdj}.
It is natural to ask whether our methods could be extended to handle
the general case, where $F_w/\Qp$ is an arbitrary
extension. Unfortunately, this does not seem to be the case, because
in general the connection between being modular of some weight and
having a potentially Barsotti-Tate lift of some type is less direct. We expect that our methods could be used to reprove the results of
section 3.4 of \cite{geebdj}, but we do not see how to extend them to
cover the unramified case completely.
We now explain the structure of the paper. In Section \ref{sec: serre
weight definitions} we recall the definition of~$W^?$, and the
global results from \cite{blggU2} that we will need. In Section \ref{sec:local to
global} we recall a potential automorphy result from \cite{geekisin}, allowing us to
realise a local mod $p$ representation globally. Section \ref{sec:
congruences to weight 0} contains the definitions of the spaces
$L_{\operatorname{crys}}$ and $L_{\operatorname{flat}}$ and the proof that $L_{\operatorname{crys}}\subset L_{\operatorname{flat}}$, and in
Section \ref{sec: finite flat
models} we carry out the necessary calculations with Breuil modules
to prove our main local results. Finally, in section \ref{sec: global
consequences} we combine our local results with the techniques of
\cite{geesavitttotallyramified} and the main result of \cite{blggU2}
to prove Theorem \ref{thm: intro: the main result, modular if and only if predicted}.
\subsection{Notation}If $M$ is a field, we let $G_M$ denote its
absolute Galois group. Let~$\epsilon$ denote the $p$-adic cyclotomic
character, and $\bar{\epsilon}$ the mod $p$ cyclotomic character.
If~$M$ is a global field and $v$ is a place of $M$, let $M_v$ denote
the completion of $M$ at $v$. If
~$M$ is a finite extension of $\mathbb{Q}_l$ for some $l$, we write $I_M$
for the inertia subgroup of~$G_M$. If $R$ is a local ring we write
$\mathfrak{m}_{R}$ for the maximal ideal of $R$.
Let $K$ be a finite extension of $\Qp$, with ring of integers $\mathcal{O}_K$
and residue field~$k$. We write ${\operatorname{Art}}_K:K^\times\to W_K^{{\operatorname{ab}}}$ for
the isomorphism of local class field theory, normalised so that
uniformisers correspond to geometric Frobenius elements. For each $\sigma\in {\operatorname{Hom}}(k,{\overline{\F}_p})$ we
define the fundamental character $\omega_{\sigma}$ corresponding
to~$\sigma$ to be the composite $$\xymatrix{I_K \ar[r] & W_K^{{\operatorname{ab}}} \ar[r]^{{\operatorname{Art}}_K^{-1}} &
\mathcal{O}_{K}^{\times}\ar[r] & k^{\times}\ar[r]^{\sigma} &
{\overline{\F}_p}^{\times}.}$$
In the case that $k\cong{\F_p}$, we will sometimes write $\omega$ for
$\omega_\sigma$. Note that in this case we have $\omega^{[K:\Qp]}=\epsilonbar$.
We fix an algebraic closure $\overline{{K}}$ of $K$. If $W$ is a de Rham representation of $G_K$ over
$\overline{{\Q}}_p$ and $\tau$ is an embedding $K \hookrightarrow \overline{{\Q}}_p$ then the multiset
$\operatorname{HT}_\tau(W)$ of Hodge-Tate weights of $W$ with respect to $\tau$ is
defined to contain the integer $i$ with multiplicity $$\dim_{\overline{{\Q}}_p} (W
\otimes_{\tau,K} \widehat{\overline{{K}}}(-i))^{G_K},$$ with the usual notation
for Tate twists. Thus for example
$\operatorname{HT}_\tau(\epsilon)=\{ 1\}$.
\section{Serre weight conjectures: definitions}\label{sec: serre
weight definitions}\subsection{Local definitions}We begin by recalling some
generalisations of the weight part of Serre's conjecture. We begin
with some purely local definitions. Let $K$ be a finite totally ramified extension of
$\Qp$ with absolute ramification index $e$, and let $\rhobar:G_K\to\operatorname{GL}_2({\overline{\F}_p})$ be a continuous
representation.
\begin{defn}
A \emph{Serre weight} is an irreducible ${\overline{\F}_p}$-representation of
$\operatorname{GL}_2({\F_p})$. Up to isomorphism, any such representation is of the
form \[F_a:=\det{}^{a_{2}}\otimes\operatorname{Sym}^{a_{1}-a_{2}}{\mathbb{F}}_p^2\otimes_{{\mathbb{F}}_p}{\overline{\F}_p}\]
where $0\le a_{1}-a_{2}\le p-1$. We also use the term Serre weight
to refer to the pair $a = (a_1,a_2)$.
\end{defn}
We say that two Serre weights $a$ and $b$ are \emph{equivalent} if and only if
$F_a\cong F_b$ as representations of $\operatorname{GL}_2({\F_p})$. This is equivalent
to demanding that we
have $a_{1}-a_{2}=b_{1}-b_{2}$ and $a_2\equiv b_2\pmod{p-1}$.
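For example, if $p=5$ then $a=(3,1)$ and $b=(7,5)$ are equivalent: $a_1-a_2=b_1-b_2=2$ and $a_2\equiv b_2\pmod{4}$, and indeed $F_b=\det^4\otimes F_a\cong F_a$ since $\det^{p-1}$ is trivial on $\operatorname{GL}_2({\F_p})$.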
We write ${\mathbb{Z}}^2_+$ for the set of pairs of integers $(n_1,n_2)$ with
$n_1\ge n_2$, so that a Serre weight $a$ is by definition an element
of ${\mathbb{Z}}^2_+$. We say that an element
$\lambda\in({\mathbb{Z}}^2_+)^{{\operatorname{Hom}}_{\Qp}(K,{\overline{\Q}_p})}$ is a \emph{lift} of a weight
$a\in{\mathbb{Z}}^2_+$ if there is an element $\tau\in{\operatorname{Hom}}_{\Qp}(K,{\overline{\Q}_p})$ such that
$\lambda_{\tau}=a$, and for all other $\tau'\in{\operatorname{Hom}}_{\Qp}(K,{\overline{\Q}_p})$ we have
$\lambda_{\tau'}=(0,0)$.
\begin{defn}
\label{defn: Galois representation of Hodge type some weight}Let
$K/\Qp$ be a finite extension, let
$\lambda\in({\mathbb{Z}}^2_+)^{{\operatorname{Hom}}_{\Qp}(K,{\overline{\Q}_p})}$, and let
$\rho:G_K\to\operatorname{GL}_2({\overline{\Q}_p})$ be a de Rham representation. Then we say
that $\rho$ has \emph{Hodge type} $\lambda$ if for each
$\tau\in{\operatorname{Hom}}_{\Qp}(K,{\overline{\Q}_p})$ we have $\operatorname{HT}_\tau(\rho)=\{\lambda_{\tau,1}+1,\lambda_{\tau,2}\}$.
\end{defn}
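For example, if $\rho$ is the representation of $G_K$ on the $p$-adic Tate module of an elliptic curve over $K$ with good reduction, then $\operatorname{HT}_\tau(\rho)=\{1,0\}$ for every $\tau$, so $\rho$ has Hodge type $0$, i.e.\ $\lambda_\tau=(0,0)$ for all $\tau$.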
Following \cite{GHS} (which in turn follows \cite{bdj} and \cite{MR2430440}), we define an explicit
set of Serre weights $W^?(\rhobar)$.
\begin{defn}
\label{defn: W? niveau 1}If $\rhobar$ is reducible, then a Serre
weight $a\in{\mathbb{Z}}^2_+$ is in $W^?(\rhobar)$ if
and only if $\rhobar$ has a crystalline lift of the
form \[ \begin{pmatrix}\psi_1&*\\ 0& \psi_2
\end{pmatrix}\] which has Hodge type $\lambda$ for some lift
$\lambda\in({\mathbb{Z}}^2_+)^{{\operatorname{Hom}}_{\Qp}(K,{\overline{\Q}_p})}$ of $a$. In particular, if $a\in W^?(\rhobar)$ then by Lemma 6.2 of \cite{geesavitttotallyramified} it is necessarily the case that there is a decomposition
${\operatorname{Hom}}({\F_p},{\overline{\F}_p})=J\coprod J^c$ and an integer
$0\le \delta\le e-1$ such that \[\rhobar|_{I_K}\cong
\begin{pmatrix} \omega^{\delta}
\prod_{ \sigma\in
J}\omega_{\sigma}^{a_{1}+1}\prod_{\sigma\in
J^c}\omega_\sigma^{a_{2}}&*\\ 0& \omega^{e-1-\delta} \prod_{\sigma\in
J^c}\omega_\sigma^{a_{1}+1}\prod_{\sigma\in
J}\omega_\sigma^{a_{2}}\end{pmatrix}.\]
\end{defn}
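For instance, in the case $e=1$ the only choice is $\delta=0$, and the displayed necessary condition says that $\rhobar|_{I_K}$ is an extension of $\omega^{a_{2}}$ by $\omega^{a_{1}+1}$ or of $\omega^{a_{1}+1}$ by $\omega^{a_{2}}$, according to whether $J$ is all of ${\operatorname{Hom}}({\F_p},{\overline{\F}_p})$ or empty.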
We remark that while it may seem strange to consider the single
element set ${\operatorname{Hom}}({\F_p},{\overline{\F}_p})$, this notation will be convenient for us.
\begin{defn}
\label{defn: W? niveau 2}
Let $K'$ denote the quadratic unramified
extension of $K$ inside~$\overline{{K}}$, with residue field
$k'$ of order $p^2$.
If $\rhobar$ is irreducible, then a Serre
weight $a\in{\mathbb{Z}}^2_+$ is in $W^?(\rhobar)$ if
and only if there is a subset $J\subset{\operatorname{Hom}}(k',{\overline{\F}_p})$ of size $1$,
and an integer $0\le
\delta\le e-1$ such that if we write
${\operatorname{Hom}}(k',{\overline{\F}_p})=J\coprod J^c$, then \[\rhobar|_{I_K}\cong
\begin{pmatrix}\prod_{\sigma\in
J}\omega_{\sigma}^{a_{1}+1+\delta}\prod_{\sigma\in
J^c}\omega_\sigma^{a_{2}+e-1-\delta}&0\\ 0& \prod_{\sigma\in
J^c}\omega_\sigma^{a_{1}+1+\delta}\prod_{\sigma\in
J}\omega_\sigma^{a_{2}+e-1-\delta}
\end{pmatrix}.\]
\end{defn}
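For instance, when $e=1$ (so $\delta=0$), writing $\omega_2=\omega_\sigma$ for $\sigma$ the embedding in $J$, so that the character attached to the other embedding is $\omega_2^p$, the condition becomes $\rhobar|_{I_K}\cong\omega_2^{(a_{1}+1)+pa_{2}}\oplus\omega_2^{p(a_{1}+1)+a_{2}}$, the familiar shape from the weight part of Serre's original conjecture.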
We remark that by Lemma 4.1.19 of \cite{blggU2}, if
$a\in W^?(\rhobar)$ and $\rhobar$ is irreducible then
$\rhobar$ necessarily has a crystalline lift of Hodge type $\lambda$ for any lift
$\lambda\in({\mathbb{Z}}^2_+)^{{\operatorname{Hom}}_{\Qp}(K,{\overline{\Q}_p})}$ of $a$. Note also that if $a$
and $b$ are equivalent and $a\in W^?(\rhobar)$ then $b\in W^?(\rhobar)$.
\begin{remark}\label{rem: conjectured weights independent of
unramified twist}
Note that if $\thetabar: G_K\to{\overline{\F}_p}^\times$ is an unramified character, then
$W^?(\bar{r})=W^?(\bar{r}\otimes\thetabar)$. \end{remark}
\subsection{Global conjectures}\label{ss:global} The point of the local definitions
above is to allow us to formulate global Serre weight
conjectures. Following \cite{blggU2}, we work with rank two unitary
groups which are compact at infinity. As we will not need to make
any arguments that depend on the particular definitions made in
\cite{blggU2}, and our main results are purely local, we simply
recall some notation and basic properties of the definitions,
referring the reader to \cite{blggU2} for precise formulations.
We emphasise that our conventions for Hodge-Tate weights are the
opposite of those of \cite{blggU2}; for this reason, we must introduce
a dual into the definitions.
Fix an imaginary CM field $F$, and let $F^+$ be its maximal totally
real subfield. We assume that each prime of $F^+$ over $p$ has residue
field ${\mathbb{F}}_p$ and splits in $F$. We define a global notion of Serre weight by taking a
product of local weights in the following way.
\begin{defn}
\label{defn:global-serre-wts}
Let $S$ denote the set
of places of $F$ above $p$. If $w \in S$ lies over a place $v$ of
$F^+$, write $v = w w^c$. Let
$({\mathbb{Z}}^2_+)_0^{S}$ denote the subset of
$({\mathbb{Z}}^2_+)^{S}$ consisting of elements $a = (a_w)_{w \in S}$ such
that $a_{w,1}+a_{w^c,2}=0$ for all $w\in S$. We say that an
element $a\in({\mathbb{Z}}^2_+)_0^{S}$ is a \emph{Serre
weight} if for each $w|p$ we
have \[p-1\ge a_{w,1}-a_{w,2}.\]
\end{defn}
Let $\bar{r}:G_F\to\operatorname{GL}_2({\overline{\F}_p})$ be a continuous irreducible
representation. Definition 2.1.9 of \cite{blggU2} states what it
means for $\bar{r}$ to be modular, and more precisely for $\bar{r}$ to be
modular of some Serre weight $a$; roughly speaking, $\bar{r}$ is modular
of weight $a$ if there is a cohomology class on some unitary group
with coefficients in the local system corresponding to $a$ whose
Hecke eigenvalues are determined by the characteristic polynomials of
$\bar{r}$ at Frobenius elements. Since our conventions for Hodge-Tate
weights are the opposite of those of \cite{blggU2}, we make the
following definition.
\begin{defn}
Suppose that $\bar{r}:G_F\to\operatorname{GL}_2({\overline{\F}_p})$ is a continuous
irreducible modular representation. Then we say that $\bar{r}$ \emph{is modular
of weight} $a\in({\mathbb{Z}}^2_+)_0^S$ if
$\bar{r}^\vee$ is modular of weight $a$ in the sense of Definition 2.1.9
of \cite{blggU2}.
\end{defn} We globalise the definition of the set
$W^?(\rhobar)$ in the following natural fashion.
\begin{defn}
If $\bar{r}:G_F\to\operatorname{GL}_2({\overline{\F}_p})$ is a continuous representation, then
we define $W^?(\bar{r})$ to be the set of Serre weights
$a\in({\mathbb{Z}}^2_+)_0^S$ such that for each
place $w|p$ the corresponding Serre weight
$a_w\in{\mathbb{Z}}^2_+$ is an element of
$W^?(\bar{r}|_{G_{F_w}})$.
\end{defn}
One then has the following conjecture.
\begin{conj}\label{conj: global Serre weight explicit conjecture}
Suppose that $\bar{r}:G_F\to\operatorname{GL}_2({\overline{\F}_p})$ is a continuous irreducible
modular representation, and that
$a\in({\mathbb{Z}}^2_+)_0^S$ is a Serre
weight. Then $\bar{r}$ is modular of weight $a$ if and only if
$a\in W^?(\bar{r})$.
\end{conj}
If $\bar{r}:G_F\to\operatorname{GL}_2({\overline{\F}_p})$ is a continuous representation,
then we say that $\bar{r}$ has \emph{split ramification} if any finite
place of $F$ at which $\bar{r}$ is ramified is split over $F^+$. The
following result is
Theorem 5.1.3 of \cite{blggU2}, one of the
main theorems of that paper, in the special case where $F_w/\Qp$ is
totally ramified for all $w|p$. (Note that in \cite{blggU2}, the set
of weights $W^?(\bar{r})$ is referred to as
$W^{\operatorname{explicit}}(\bar{r})$.)
\begin{thm}
\label{thm: explicit local lifts implies Serre
weight}Let $F$ be an imaginary CM field with maximal totally real subfield~$F^+$. Assume that $\zeta_p\notin F$, that $F/F^+$ is unramified at all finite places,
that every place of $F^+$ dividing $p$ has residue field ${\mathbb{F}}_p$ and splits completely in $F$,
and that $[F^+:{\mathbb{Q}}]$ is even. Suppose that $p>2$, and that
$\bar{r}:G_F\to\operatorname{GL}_2({\overline{\F}_p})$ is an irreducible modular
representation with split ramification. Assume that $\bar{r}(G_{F(\zeta_p)})$ is adequate.
Let $a\in({\mathbb{Z}}^2_+)_0^S$ be a
Serre weight. Assume that $a\in W^?(\bar{r})$. Then $\bar{r}$ is
modular of weight $a$.
\end{thm}
Here \emph{adequacy} is a group-theoretic condition, introduced in
\cite{jack}, that for subgroups of $\operatorname{GL}_2({\overline{\F}_p})$ with $p > 5$ is
equivalent to the usual condition that $\bar{r}|_{G_{F(\zeta_p)}}$ is irreducible. For a precise
definition we refer the reader to Definition A.1.1 of \cite{blggU2}.
We also remark that the hypotheses that $F/F^+$ is unramified at all finite places,
that every place of $F^+$ dividing $p$ splits completely in $F$,
and that $[F^+:{\mathbb{Q}}]$ is even, are in fact part of the definition of
``modular'' made in \cite{blggU2}.
Theorem~\ref{thm: explicit local lifts implies Serre
weight} establishes one direction of Conjecture \ref{conj: global Serre
weight explicit conjecture}, and we are left with the problem of
``elimination,'' i.e., the problem of proving that if $\bar{r}$ is
modular of weight $a$, then $a\in W^?(\bar{r})$.
We believe that this problem should have a purely local resolution,
as we now explain.
The key point is the relationship between being
modular of weight $a$, and the existence of certain de Rham lifts of
the local Galois representations $\bar{r}|_{G_{F_w}}$, $w|p$. The link
between these properties is provided by local-global compatibility
for the Galois representations associated to the automorphic
representations under consideration; rather than give a detailed
development of this connection, for which see \cite{blggU2}, we
simply summarise the key results from \cite{blggU2} that we will
use. The
following is Corollary 4.1.8 of \cite{blggU2}.
\begin{prop}
\label{prop: modular of some weight implies crystalline lifts
exist}Let $F$ be an imaginary CM field with maximal totally real
subfield $F^+$, and suppose that $F/F^+$ is unramified at all finite
places, that every place of $F^+$ dividing $p$ has residue field
${\mathbb{F}}_p$ and splits completely in
$F$, and that $[F^+:{\mathbb{Q}}]$ is even. Suppose that $p>2$, and that
$\bar{r}:G_F\to\operatorname{GL}_2({\overline{\F}_p})$ is an irreducible modular representation
with split ramification. Let
$a\in({\mathbb{Z}}^2_+)_0^S$ be a Serre
weight. If $\bar{r}$ is modular of weight $a$, then for each place
$w|p$ of $F$, there is a crystalline representation
$\rho_w:G_{F_w}\to\operatorname{GL}_2({\overline{\Q}_p})$ lifting $\bar{r}|_{G_{F_w}}$, such
that $\rho_w$ has Hodge type $\lambda_w$ for some lift
$\lambda_w\in({\mathbb{Z}}^2_+)^{{\operatorname{Hom}}_{\Qp}(F_w,{\overline{\Q}_p})}$ of $a$.
\end{prop}
We stress that Proposition~\ref{prop: modular of some weight implies crystalline lifts
exist} does not already complete the proof of Conjecture \ref{conj: global Serre
weight explicit conjecture}, because the representation $\rho_w$
may be irreducible (compare with Definition~\ref{defn: W? niveau 1}).
However, in light of this result, it is natural to make the following
purely local conjecture, which together with Theorem \ref{thm:
explicit local lifts implies Serre weight} would essentially resolve
Conjecture \ref{conj: global Serre weight explicit conjecture}.
\begin{conj}
\label{conj: crystalline lift implies explicit crystalline lift}
Let $K/\Qp$ be a finite totally ramified extension, and let
$\rhobar:G_K\to\operatorname{GL}_2({\overline{\F}_p})$ be a continuous representation. Let
$a\in{\mathbb{Z}}^2_+$ be a Serre weight, and suppose
that for some lift $\lambda\in({\mathbb{Z}}^2_+)^{{\operatorname{Hom}}_{\Qp}(K,{\overline{\Q}_p})}$, there is
a continuous crystalline representation
$\rho:G_{K}\to\operatorname{GL}_2({\overline{\Q}_p})$ lifting $\rhobar$, such
that $\rho$ has Hodge type $\lambda$.
Then $a\in W^?(\rhobar)$.
\end{conj}
We do not know how to prove this conjecture, and we do not directly
address the conjecture in the rest of this paper. Instead, we
proceed more indirectly. Proposition \ref{prop: modular of some
weight implies crystalline lifts exist} is a simple consequence of
lifting automorphic forms of weight $a$ to forms of weight
$\lambda$; we may also obtain non-trivial information by lifting to
forms of weight $0$ and non-trivial type. In this paper, we will
always consider principal series types. Recall that if $K/\Qp$ is a finite extension the
\emph{inertial type} of a potentially semistable Galois
representation $\rho:G_K\to\operatorname{GL}_n({\overline{\Q}_p})$ is the restriction to
$I_K$ of the corresponding Weil-Deligne representation. In this
paper we normalise this definition as in the appendix to
\cite{MR1639612}, so that for example the inertial type of a finite
order character is just the restriction to inertia of that
character.
\begin{prop}
\label{prop: modular of some weight implies potentially BT lifts
exist}Let $F$ be an imaginary CM field with maximal totally real
subfield $F^+$, and suppose that $F/F^+$ is unramified at all finite
places, that every place of $F^+$ dividing $p$ has residue field
${\F_p}$ and splits completely in
$F$, and that $[F^+:{\mathbb{Q}}]$ is even. Suppose that $p>2$, and that
$\bar{r}:G_F\to\operatorname{GL}_2({\overline{\F}_p})$ is an irreducible modular representation
with split ramification. Let $a\in({\mathbb{Z}}^2_+)_0^S$ be a
Serre weight. If $\bar{r}$ is modular of weight $a$, then for each
place $w|p$ of $F$, there is a continuous potentially semistable
representation $\rho_w:G_{F_w}\to\operatorname{GL}_2({\overline{\Q}_p})$ lifting
$\bar{r}|_{G_{F_w}}$, such that $\rho_w$ has Hodge type $0$ and
inertial type $\omegat^{a_1}\oplus\omegat^{a_2}$. (Here $\omegat$ is
the Teichm\"uller lift of $\omega$.) Furthermore, $\rho_w$ is
potentially crystalline unless $a_{1}-a_{2}=p-1$ and $\bar{r}|_{G_{F_w}}\cong
\begin{pmatrix}
\chibar\epsilonbar&*\\0&\chibar
\end{pmatrix}
$ for some character $\chibar$.
\end{prop}
\begin{proof}
This may be proved in exactly the same way as Lemma 3.4 of
\cite{geesavitttotallyramified}, working in the setting of
\cite{blggU2} (cf. the proof of Lemma 3.1.1 of \cite{blggU2}). Note
that if $\rho_w$ is not potentially crystalline, then it is
necessarily a twist of an extension of the trivial character by the
cyclotomic character.
\end{proof}
\section{Realising local representations globally}\label{sec:local to
global}\subsection{}We now recall a result from the forthcoming paper \cite{geekisin}
which allows us to realise local representations globally, in order to
apply the results of Section~\ref{ss:global} in a purely local
setting.
\begin{thm}
\label{thm: the final local-to-global result} Suppose that $p>2$,
that $K/\Qp$ is a finite extension, and let
$\bar{r}_K:G_K\to\operatorname{GL}_2({\overline{\F}_p})$ be a continuous representation. Then
there is an imaginary CM field $F$ and a continuous irreducible
representation $\bar{r}:G_F\to\operatorname{GL}_2({\overline{\F}_p})$ such that, if $F^+$ denotes the maximal totally real subfield of $F$,
\begin{itemize}
\item each place $v|p$ of $F^+$ splits in $F$ and has $F^+_v\cong
K$,
\item for each place $v|p$ of $F^+$, there is a place ${\widetilde{{v}}}$ of $F$
lying over $v$ with $\bar{r}|_{G_{F_{\widetilde{{v}}}}}$ isomorphic to an
unramified twist of $\bar{r}_K$,
\item $\zeta_p\notin F$,
\item $\bar{r}$ is unramified outside of $p$,
\item $\bar{r}$ is modular in the sense of \cite{blggU2}, and
\item $\bar{r}(G_{F(\zeta_p)})$ is adequate.
\end{itemize}
\end{thm}
\begin{proof}We sketch the proof; the full details will appear in
\cite{geekisin}. The argument is a straightforward application of
potential modularity techniques. First, an application of
Proposition 3.2 of \cite{frankII} supplies a totally real field $L^+$ and a continuous irreducible
representation $\bar{r}:G_{L^+}\to\operatorname{GL}_2({\overline{\F}_p})$ such that
\begin{itemize}
\item for each place $v|p$ of $L^+$, $L^+_v\cong K$ and
$\bar{r}|_{G_{L^+_v}}\cong\bar{r}_K$,
\item for each place $v|\infty$ of $L^+$, $\det\bar{r}(c_v)=-1$, where
$c_v$ is a complex conjugation at $v$, and
\item there is a non-trivial finite extension ${\mathbb{F}}/{\mathbb{F}}_p$ such that
$\bar{r}(G_{L^+})=\operatorname{GL}_2({\mathbb{F}})$.
\end{itemize}
By a further base change one can also arrange that $\bar{r}|_{G_{L^+_v}}$ is unramified
at each finite place $v\nmid p$ of $L^+$.
By Lemma 6.1.6 of \cite{blggord} and the proof of
Proposition 7.8.1 of \cite{0905.4266}, $\bar{r}_K$ admits a potentially
Barsotti-Tate lift, and one may then apply Proposition 8.2.1 of
\cite{0905.4266} to deduce that there is a finite totally real Galois
extension $F^+/L^+$ in which all primes of $L^+$ above $p$ split
completely, such that $\bar{r}|_{G_{F^+}}$ is modular in the sense
that it is congruent to the Galois representation associated to some
Hilbert modular form of parallel weight $2$.
By the theory of base change between $\operatorname{GL}_2$ and unitary groups
(\textit{cf.} section 2 of \cite{blggU2}), it now suffices to show that
there is a totally imaginary quadratic extension $F/F^+$ and a
character $\thetabar:G_F\to{\overline{\F}_p}^\times$ such that
$\bar{r}|_{G_F}\otimes\thetabar$ has multiplier~$\epsilonbar^{-1}$ and
such that for each place $v|p$ of $F^+$, there is a place ${\widetilde{{v}}}$ of $F$
lying over $v$ with $\thetabar|_{G_{F_{{\widetilde{{v}}}}}}$ unramified. The
existence of such a character is a straightforward exercise in class
field theory, and follows for example from Lemma 4.1.5 of \cite{cht}.
\end{proof}
\section{Congruences}\label{sec: congruences to weight 0}\subsection{} Having realised a local mod $p$
representation globally, we can now use the results explained in
Section \ref{sec: serre
weight definitions} to deduce non-trivial local consequences.
\begin{thm}
\label{thm: explicit weight implies pot BT lift}Let $p>2$ be prime,
let $K/\Qp$ be a finite totally ramified extension, and let
$\rhobar:G_K\to\operatorname{GL}_2({\overline{\F}_p})$ be a continuous representation. Let
$a\in W^?(\rhobar)$ be a Serre weight. Then there is a continuous
potentially semistable representation $\rho:G_K\to\operatorname{GL}_2({\overline{\Q}_p})$
lifting $\rhobar$, such that $\rho$ has Hodge type $0$ and inertial
type $\omegat^{a_1}\oplus\omegat^{a_2}$. Furthermore, $\rho$ is
potentially crystalline unless $a_{1}-a_{2}=p-1$ and $\rhobar\cong
\begin{pmatrix}
\chibar\epsilonbar&*\\0&\chibar
\end{pmatrix}$ for some character $\chibar$.
\end{thm}
\begin{proof} By Theorem \ref{thm: the final local-to-global result}, there is
an imaginary CM field $F$ and a modular representation
$\bar{r}:G_F\to\operatorname{GL}_2({\overline{\F}_p})$ such that
\begin{itemize}
\item for each place $v|p$ of $F^+$, $v$ splits in $F$ as
${\widetilde{{v}}}\tv^c$, and we have $F_{\widetilde{{v}}}\cong K$, and $\bar{r}|_{G_{F_{\widetilde{{v}}}}}$ is
isomorphic to an unramified twist of $\rhobar$,
\item $\bar{r}$ is unramified outside of $p$,
\item $\zeta_p\notin F$, and
\item $\bar{r}(G_{F(\zeta_p)})$ is adequate.
\end{itemize}Now, since the truth of the result to be proved is
obviously unaffected by making an unramified twist (if $\rhobar$ is
replaced by a twist by an unramified character $\overline{\theta}$, one may
replace $\rho$ by a twist by an unramified
lift of $\overline{\theta}$), we may without loss of
generality suppose that $\bar{r}|_{G_{F_{\widetilde{{v}}}}}\cong\rhobar$ for each place $v|p$ of $F^+$. Let
$b\in({\mathbb{Z}}^2_+)_0^{S}$ be the Serre weight such that
$b_{\widetilde{{v}}}=a$ for each place $v|p$ of $F^+$, where $S$ denotes the set of
places of $F$ above $p$. By Remark \ref{rem: conjectured weights independent of
unramified twist}, $b\in W^?(\bar{r})$. Then by Theorem \ref{thm: explicit local lifts implies Serre
weight}, $\bar{r}$ is modular of weight $b$. The result now follows
from Proposition \ref{prop: modular of some weight implies potentially BT lifts
exist}.
\end{proof}
\subsection{Spaces of crystalline extensions}\label{subsec: H^1_f}We
now specialise to the setting of Definition \ref{defn: W? niveau
1}. As usual, we let $K/\Qp$ be a finite totally ramified extension with residue
field $k={\F_p}$, ramification index $e$, and uniformiser $\pi$. We fix a Serre weight $a\in{\mathbb{Z}}^2_+$. We fix a
continuous representation $\rhobar:G_K\to\operatorname{GL}_2({\overline{\F}_p})$, and we assume
that there is:
\begin{itemize}
\item a decomposition
${\operatorname{Hom}}({\F_p},{\overline{\F}_p})=J\coprod J^c$, and
\item an integer
$0\le \delta\le e-1$ such that \[\rhobar|_{I_K}\cong
\begin{pmatrix}
\omega^\delta\prod_{\sigma\in
J}\omega_{\sigma}^{a_{1}+1}\prod_{\sigma\in
J^c}\omega_\sigma^{a_{2}}&*\\ 0& \omega^{e-1-\delta}\prod_{\sigma\in
J^c}\omega_\sigma^{a_{1}+1}\prod_{\sigma\in
J}\omega_\sigma^{a_{2}} \end{pmatrix}.\]
\end{itemize}
Note that in general there might be several choices
of $J$, $\delta$. Fix such a choice for the
moment. Consider pairs of characters $\chi_1$,
$\chi_2:G_K\to{\overline{\Q}_p}^\times$ with the properties that:
\begin{enumerate}
\item $\rhobar\cong
\begin{pmatrix}
\chibar_1&*\\0&\chibar_2
\end{pmatrix}$,
\item $\chi_1$ and $\chi_2$ are crystalline, and
\item if we let $S$ denote the set
${\operatorname{Hom}}_{\Qp}(K,{\overline{\Q}_p})$, then either
\begin{enumerate}[(i)]
\item $J$ is non-empty, and there is one embedding $\tau\in
S$ with $\operatorname{HT}_\tau(\chi_1)=a_{1}+1$ and
$\operatorname{HT}_\tau(\chi_2)=a_{2}$, there are $\delta$ embeddings
$\tau\in S$ with $\operatorname{HT}_\tau(\chi_1)=1$ and
$\operatorname{HT}_\tau(\chi_2)=0$, and for the remaining $e-1-\delta$
embeddings $\tau\in S$ we have $\operatorname{HT}_\tau(\chi_1)=0$ and
$\operatorname{HT}_\tau(\chi_2)=1$, or
\item $J=\emptyset$, and there is one embedding $\tau\in
S$ with $\operatorname{HT}_\tau(\chi_1)=a_{2}$ and
$\operatorname{HT}_\tau(\chi_2)=a_{1}+1$, there are $\delta$ embeddings
$\tau\in S$ with $\operatorname{HT}_\tau(\chi_1)=1$ and
$\operatorname{HT}_\tau(\chi_2)=0$, and for the remaining $e-1-\delta$
embeddings $\tau\in S$ we have $\operatorname{HT}_\tau(\chi_1)=0$ and
$\operatorname{HT}_\tau(\chi_2)=1$.
\end{enumerate}
\end{enumerate}
Note that these properties do not specify the characters $\chi_1$ and
$\chi_2$ uniquely, even in the unramified case, as one is always free
to twist either character by an unramified character which is trivial
mod $p$. We point out that the Hodge type of any de Rham extension of
$\chi_2$ by $\chi_1$
will be a lift of $a$. Conversely, by Lemma~6.2 of \cite{geesavitttotallyramified} any $\chi_1,\chi_2$ satisfying
(1) and (2) such that the Hodge type of $\chi_1 \oplus \chi_2$ is a
lift of $a$ will satisfy (3) for a valid choice of $J$ and $\delta$
(unique unless $a=0$).
Suppose now that we have fixed two such characters $\chi_1$ and
$\chi_2$, and we now allow the (line corresponding to the) extension
class of $\rhobar$ in ${\operatorname{Ext}}_{G_K}(\chibar_2,\chibar_1)$ to vary. We
naturally identify ${\operatorname{Ext}}_{G_K}(\chibar_2,\chibar_1)$ with
$H^1(G_K,\chibar_1 \chibar_2^{-1})$ from now on.
\begin{defn}
Let $L_{\chi_1,\chi_2}$ be the subset of
$H^1(G_K,\chibar_1\chibar_2^{-1})$ such that the corresponding
representation $\rhobar$ has a crystalline lift $\rho$ of the
form \[
\begin{pmatrix}
\chi_1&*\\0&\chi_2
\end{pmatrix}.\]
\end{defn}
We have the following variant of Lemma 3.12 of \cite{bdj}.
\begin{lem}
\label{lem: dimension of H^1_f spaces} $L_{\chi_1,\chi_2}$ is an
${\overline{\F}_p}$-vector subspace of $ H^1(G_K,\chibar_1\chibar_2^{-1})$ of
dimension $|J|+\delta$, unless
$\chibar_1=\chibar_2$, in which case it has dimension
$|J|+\delta+1$.
\end{lem}
\begin{proof} Let $\chi=\chi_1\chi_2^{-1}$.
Recall that
$H^1_f(G_K,\overline{\Z}_p(\chi))$ is the preimage of
$H^1_f(G_K,{\overline{\Q}_p}(\chi))$ under the natural map
$\eta : H^1(G_K,\overline{\Z}_p(\chi))\to H^1(G_K,{\overline{\Q}_p}(\chi))$, so that
$L_{\chi_1,\chi_2}$ is the image of $H^1_f(G_K,\overline{\Z}_p(\chi))$ in
$H^1(G_K,\chibar)$. The kernel of $\eta$ is precisely the torsion
part of $H^1_f(G_K,\overline{\Z}_p(\chi))$, which (since $\chi\neq 1$,
e.g. by examining Hodge-Tate weights) is non-zero if and only if
$\chibar=1$, in which case it has the form $\kappa^{-1} \overline{\Z}_p/\overline{\Z}_p$
for some $\kappa \in \mathfrak{m}_{\overline{\Z}_p}$.
By
Proposition 1.24(2) of \cite{nekovar} we see that $\dim_{\overline{\Q}_p}
H^1_f(G_K,{\overline{\Q}_p}(\chi))=|J|+\delta$, again using $\chi \neq 1$. Since
$H^1(G_K,\overline{\Z}_p(\chi))$ is a finitely generated $\overline{\Z}_p$-module,
the result follows.
\end{proof}
\begin{defn}
\label{defn: union of H^1_f subspaces}If $\chibar_1$ and
$\chibar_2$ are fixed, we define $L_{\operatorname{crys}}$ to be the subset of
$H^1(G_K,\chibar_1 \chibar_2^{-1})$ given by the union of the $L_{\chi_1,\chi_2}$
over all $\chi_1$ and $\chi_2$ as above.
\end{defn}
Note that $L_{\operatorname{crys}}$ is a union of subspaces of possibly varying
dimensions, and as such it is not clear that $L_{\operatorname{crys}}$ is itself a
subspace. Note also that the representations $\rhobar$ corresponding
to elements of $L_{\operatorname{crys}}$ are by definition precisely those for which
$a\in W^?(\rhobar)$.
\begin{defn}
\label{defn: H^1_flat subspace}Let $L_{\operatorname{flat}}$ be the subset
of $H^1(G_K,\chibar_1\chibar_2^{-1})$ consisting of classes with the property that
if $\rhobar\cong
\begin{pmatrix}
\chibar_1&*\\0&\chibar_2
\end{pmatrix}$ is the corresponding representation, then there is a
finite field $k_E \subset {\overline{\F}_p}$ and a
finite flat $k_E$-vector space scheme over $\mathcal{O}_{K(\pi^{1/(p-1)})}$ with
generic fibre
descent data to $K$ of the
form $ \omega^{a_{1}}\oplus\omega^{a_{2}}$
(see Definition~\ref{defn:dd-of-the-form}) whose generic fibre is $\rhobar$.
\end{defn}
\begin{thm}
\label{thm: crystalline extension implies flat}Provided that
$a_{1}-a_{2}\ne p-1$ or that $\chibar_1\chibar_2^{-1}\ne \epsilonbar$,
$L_{\operatorname{crys}}\subset L_{\operatorname{flat}}$.
\end{thm}
\begin{proof}
Take a class in $L_{\operatorname{crys}}$, and consider the corresponding
representation $\rhobar\cong
\begin{pmatrix}
\chibar_1&*\\0&\chibar_2
\end{pmatrix}$. As remarked above, $a\in W^?(\rhobar)$, so by
Theorem \ref{thm: explicit weight implies pot BT lift}, $\rhobar$
has a crystalline lift of Hodge type $0$ and inertial
type \[\omegat^{a_{1}}\oplus\omegat^{a_{2}},\] and this
representation can be taken to have coefficients in the ring of
integers $\mathcal{O}_E$ of a finite
extension $E/\Qp$. Let $\varpi$ be a uniformiser of $\mathcal{O}_E$, and $k_E$
the residue field. Such a representation
corresponds to a $p$-divisible $\mathcal{O}_E$-module with generic fibre descent data, and
taking the $\varpi$-torsion
gives a finite flat $k_E$-vector space scheme with generic fibre descent
data whose generic fibre is $\rhobar$. By Corollary 5.2 of \cite{geesavittquaternionalgebras} this
descent data has the form $\omega^{a_1} \oplus \omega^{a_2}$.
\end{proof}
In the next section we will make calculations with finite flat group
schemes in order to relate $L_{\operatorname{flat}}$ and $L_{\operatorname{crys}}$.
\section{Finite flat models}\label{sec: finite flat
models}\subsection{}We work throughout this section in the following setting:
\begin{itemize}
\item $K/\Qp$ is a finite extension with ramification index $e$,
inertial degree $1$, ring
of integers $\mathcal{O}_K$, uniformiser $\pi$ and residue field ${\F_p}$.
\item $\chibar_1$, $\chibar_2$ are characters
$G_K\to{\overline{\F}_p}^\times$.
\item $a\in{\mathbb{Z}}^2_+$ is a Serre weight.
\item There is a decomposition ${\operatorname{Hom}}({\F_p},{\overline{\F}_p})=J\coprod J^c$, and an integer $0\le
\delta\le e-1$ such that \[\chibar_1|_{I_K}=\omega^\delta\prod_{\sigma\in
J}\omega^{a_{1}+1}\prod_{\sigma\in
J^c}\omega^{a_{2}},\] \[\chibar_2|_{I_K}=\omega^{e-1-\delta}\prod_{\sigma\in
J^c}\omega^{a_{1}+1}\prod_{\sigma\in
J}\omega^{a_{2}}.\]
\end{itemize}
Note in particular that $(\chibar_1\chibar_2)|_{I_K}=\omega^{a_1+a_2+e}$; indeed, since ${\operatorname{Hom}}({\F_p},{\overline{\F}_p})$ is a singleton, the product of the two displayed characters restricted to $I_K$ is $\omega^{\delta+(e-1-\delta)}\cdot\omega^{a_1+1}\cdot\omega^{a_2}$.
Let $K_1:=K(\pi^{1/(p-1)})$. Let $k_E$ be a finite extension of ${\F_p}$
such that $\chibar_1,\chibar_2$ are defined over $k_E$; for the moment
$k_E$ will be fixed, but eventually it will be allowed to vary.
We wish to consider the representations $\rhobar\cong
\begin{pmatrix}
\chibar_1&*\\0&\chibar_2
\end{pmatrix}$ such that there is a finite flat $k_E$-vector space
scheme $\mathcal{G}$ over $\mathcal{O}_{K_1}$ with generic fibre descent data to $K$ of the form
$\omega^{a_1}\oplus\omega^{a_2}$ (see Definition~\ref{defn:dd-of-the-form}), whose generic fibre is
$\rhobar$.
In order to do so, we will work with Breuil modules with descent data
from $K_1$ to~$K$. We
recall the necessary definitions from
\cite{geesavittquaternionalgebras}.
Fix $\pi_1$, a $(p-1)$-st root of $\pi$ in $K_1$. Write
$e'=e(p-1)$. The category $\operatorname{BrMod}_{\operatorname{dd}}$
consists of quadruples $(\mathcal{M},{\operatorname{Fil}}^1
\mathcal{M},\phi_{1},\{\widehat{g}\})$ where:
\begin{itemize}\item $\mathcal{M}$ is a finitely generated free
$k_E[u]/u^{e'p}$-module,
\item ${\operatorname{Fil}}^1 {\mathcal{M}}$ is a $k_E[u]/u^{e'p}$-submodule of ${\mathcal{M}}$ containing $u^{e'}{\mathcal{M}}$,
\item $\phi_{1}:{\operatorname{Fil}}^1{\mathcal{M}}\to{\mathcal{M}}$ is $k_E$-linear and $\phi$-semilinear
(where $\phi:{\F_p}[u]/u^{e'p}\to {\F_p}[u]/u^{e'p}$ is the $p$-th power map)
with image generating ${\mathcal{M}}$ as a $k_E[u]/u^{e'p}$-module, and
\item $\widehat{g}:{\mathcal{M}}\to{\mathcal{M}}$ for each $g\in{\operatorname{Gal}}(K_1/K)$ are additive
bijections that preserve ${\operatorname{Fil}}^1 {\mathcal{M}}$, commute with the $\phi_1$-,
and $k_E$-actions, and satisfy $\widehat{g_1}\circ
\widehat{g_2}=\widehat{g_1\circ g_2}$ for all
$g_1,g_2\in{\operatorname{Gal}}(K_1/K)$; furthermore $\widehat{1}$ is the identity,
and if $a\in k_E$, $m\in{\mathcal{M}}$ then
$\widehat{g}(au^{i}m)=a((g(\pi_1)/\pi_1)^{i})u^{i}\widehat{g}(m)$.\end{itemize}
The category $\operatorname{BrMod}_{\operatorname{dd}}$ is equivalent to the category of finite
flat $k_E$-vector space schemes over $\mathcal{O}_{K_1}$ together with
descent data on the generic fibre from $K_1$ to~$K$
(this equivalence depends on $\pi_1$); see \cite{sav06}, for instance. We obtain the associated
$G_{K}$-representation (which we will refer to as the generic fibre)
of an object of $\operatorname{BrMod}_{\operatorname{dd}}$ via the covariant functor
$T_{{\operatorname{st}},2}^{K}$ (which is defined immediately before Lemma 4.9 of
\cite{MR2137952}).
\begin{defn}
\label{defn:dd-of-the-form}
Let ${\mathcal{M}}$ be an object of $\operatorname{BrMod}_{\operatorname{dd}}$ such that the underlying
$k_E$-module has rank two. We say that the finite flat $k_E$-vector
space scheme corresponding to ${\mathcal{M}}$ \emph{has descent data
of the form} $\omega^{a_1} \oplus \omega^{a_2}$ if ${\mathcal{M}}$ has a basis
$e_1,e_2$ such that $\widehat{g}(e_i) = \omega^{a_i}(g) e_i$. (Here
we abuse notation by identifying an element of $G_K$ with its image
in ${\operatorname{Gal}}(K_1/K)$.)
\end{defn}
We now consider a finite flat group scheme $\mathcal{G}$ with generic fibre descent data as above. By a standard scheme-theoretic
closure argument, $\chibar_1$ corresponds to a finite flat subgroup
scheme $\mathcal{H}$ of $\mathcal{G}$ with generic fibre descent data, so we begin by analysing the possible
finite flat group schemes corresponding to characters.
Suppose now that ${\mathcal{M}}$ is an object of $\operatorname{BrMod}_{\operatorname{dd}}$. The rank
one objects of $\operatorname{BrMod}_{\operatorname{dd}}$ are classified as follows.
\begin{prop} \label{prop:rank one breuil modules} With our fixed choice of uniformiser
$\pi$, every rank one object of $\operatorname{BrMod}_{\operatorname{dd}}$ has the form:
\begin{itemize}
\item ${\mathcal{M}} = (k_E[u]/u^{e'p}) \cdot v $,
\item ${\operatorname{Fil}}^1 {\mathcal{M}} = u^{x(p-1)} {\mathcal{M}}$,
\item $\phi_1( u^{x(p-1)} v) = cv$ for some $c \in k_E^{\times}$, and
\item $\widehat{g}(v) = \omega(g)^kv$ for all $g \in {\operatorname{Gal}}(K_1/K)$,
\end{itemize}
where $0 \le x \le e$ and $0 \le k< p-1$ are
integers.
Then $T_{{\operatorname{st}},2}^{K}({\mathcal{M}}) =
\omega^{k + x} \cdot \mathrm{ur}_{c^{-1}}$, where $\mathrm{ur}_{c^{-1}}$ is the
unramified character taking an arithmetic Frobenius element to
$c^{-1}$.\end{prop}
\begin{proof}
This is a special case of Proposition 4.2 and Corollary 4.3 of
\cite{geesavittquaternionalgebras}.
\end{proof}
Let ${\mathcal{M}}$ (or ${\mathcal{M}}(x)$) be the rank one Breuil
module with $k_E$-coefficients and
descent data from $K_1$ to $K$ corresponding to $\mathcal{H}$, and
write ${\mathcal{M}}$ in the form given by Proposition \ref{prop:rank one breuil
modules}. Since $\mathcal{G}$ has descent data of the form
$\omega^{a_1}\oplus\omega^{a_2}$,
we must have $\omega^k \in \{\omega^{a_1},\omega^{a_2}\}$.
\subsection{Extensions} Having determined the rank one characters, we
now go further and compute the possible extension
classes. By a scheme-theoretic closure argument, the Breuil module
$\mathcal{P}$ corresponding to $\mathcal{G}$ is an extension of $\mathcal{N}$ by
$\mathcal{M}$, where $\mathcal{M}$ is as in the previous section, and $\mathcal{N}$ (or
$\mathcal{N}(y)$) is defined
by \begin{itemize}
\item ${\mathcal{N}} = (k_E[u]/u^{e'p}) \cdot w $,
\item ${\operatorname{Fil}}^1 {\mathcal{N}} = u^{y(p-1)} {\mathcal{N}}$,
\item $\phi_1( u^{y(p-1)} w) = dw$ for some $d \in k_E^{\times}$, and
\item $\widehat{g}(w) = \omega(g)^lw$ for all $g \in {\operatorname{Gal}}(K_1/K)$,
\end{itemize}
where $0 \le y \le e$ and $0 \le l< p-1$ are
integers. Now, as noted above, the descent data for $\mathcal{G}$ is of the form
$\omega^{a_1}\oplus\omega^{a_2}$, so we must have that either $\omega^k=\omega^{a_1}$
and $\omega^l=\omega^{a_2}$, or $\omega^{k}=\omega^{a_2}$ and $\omega^l=\omega^{a_1}$. Since by definition we have
$(\chibar_1\chibar_2)|_{I_K}=\omega^{a_1+a_2+e}$, and since
Proposition \ref{prop:rank one breuil modules} gives $\chibar_1|_{I_K}=\omega^{k+x}$ and $\chibar_2|_{I_K}=\omega^{l+y}$, we see that \[x+y\equiv e\pmod{p-1}.\]
We have the following classification of extensions of $\mathcal{N}$ by $\mathcal{M}$.
\begin{prop}\label{prop: possible extensions of Breuil modules} Every extension of $\mathcal{N}$ by
$\mathcal{M}$ is isomorphic to exactly one of the form
\begin{itemize}
\item $\mathcal{P} = (k_E[u]/u^{e'p}) \cdot v + (k_E[u]/u^{e'p}) \cdot w $,
\item ${\operatorname{Fil}}^1 \mathcal{P} =(k_E[u]/u^{e'p}) \cdot u^{x(p-1)} v +
(k_E[u]/u^{e'p}) \cdot (u^{y(p-1)}w+\lambda v) $,
\item $\phi_1(u^{x(p-1)} v) = cv$, $\phi_1(u^{y(p-1)}w+\lambda v)=dw$,
\item $\widehat{g}(v) =\omega^k(g)v$ and $\widehat{g}(w) =\omega^l(g)w$ for all $g \in {\operatorname{Gal}}(K_1/K)$,
\end{itemize}where $\lambda\in u^{\max\{0,(x+y-e)(p-1)\}}k_E[u]/u^{e'p}$
has all nonzero terms of degree congruent to $l-k$ modulo $p-1$, and has all terms
of degree less than $x(p-1)$, unless $\chibar_1=\chibar_2$ and $x\ge y$,
in which case it may additionally have a term of degree $px-y$.
\end{prop}
\begin{proof}
This is a special case of Theorem 7.5 of \cite{MR2004122}, with the
addition of $k_E$-coefficients in place of ${\F_p}$-coefficients. When
$K$ (in the notation of \emph{loc.~cit.}) is totally ramified over $\Qp$, the
proof of \emph{loc.~cit.} is argued in precisely the same manner when
coefficients are added, taking care to note the following changes:
\begin{itemize}
\item Replace Lemma 7.1 of \emph{loc.~cit.} (i.e., Lemma 5.2.2 of
\cite{MR1839918}) with Lemma 5.2.4 of \cite{MR1839918} (with
$k'=k_E$ and $k={\F_p}$ in the notation of that Lemma). In particular
replace $t^l$ with $\phi(t)$ wherever it appears in the proof, where~$\phi$
is the $k_E$-linear endomorphism of $k_E[u]/u^{e'p}$ sending
$u^i$ to $u^{pi}$.
\item Instead of applying Lemma 4.1 of \cite{MR2004122}, note that the
cohomology group
$H^1({\operatorname{Gal}}(K_1/K),k_E[u]/u^{e'p})$ vanishes because ${\operatorname{Gal}}(K_1/K)$ has prime-to-$p$
order while $k_E[u]/u^{e'p}$ has $p$-power order.
\item Every occurrence of $T_{i}^l$ in the proof (for any subscript $i$) should be replaced with
$T_{i}$. In the notation of \cite{MR2004122} the element $\eta$ is
defined when the map $\alpha \mapsto (1-b/a)\alpha$ on $k_E$ is not
surjective, i.e., when $a=b$; we may then take $\eta=1$.
\item The coefficients of $h,t$ are permitted to lie in $k_E$
(i.e., they are not constrained to lie in any particular proper subfield).
\end{itemize}
\end{proof}
Note that the recipe for $\mathcal{P}$ in the statement of
Proposition~\ref{prop: possible extensions of Breuil modules} defines
an extension of $\mathcal{N}$ by $\mathcal{M}$ provided that $\lambda$ lies in $u^{\max\{0,(x+y-e)(p-1)\}}k_E[u]/u^{e'p}$
and has all nonzero terms of degree congruent to $l-k$ modulo $p-1$
(\emph{cf.} the discussion in Section 7 of \cite{MR2004122}). Denote
this Breuil module by $\mathcal{P}(x,y,\lambda)$. Note that $c$ is fixed
while $x$ determines
$k$, since we require $\omega^{k+x} \cdot \mathrm{ur}_{c^{-1}} =
\chibar_1$; similarly $d$ is fixed and $y$ determines $l$. So this notation
is reasonable.
We would like to compare the generic fibres of extensions of different
choices of $\mathcal{M}$ and $\mathcal{N}$. To this end, we have the following
result. Write
$\chibar_1|_{I_K}=\omega^\alpha$, $\chibar_2|_{I_K}=\omega^\beta$.
\begin{prop}
\label{prop: comparing extensions}The Breuil module $\mathcal{P}(x,y,\lambda)$ has the same generic fibre as the Breuil module $\mathcal{P}'$,
where \begin{itemize}
\item $\mathcal{P}' = (k_E[u]/u^{e'p}) \cdot v' + (k_E[u]/u^{e'p}) \cdot w' $,
\item ${\operatorname{Fil}}^1 \mathcal{P}' =(k_E[u]/u^{e'p}) \cdot u^{e(p-1)} v' +
(k_E[u]/u^{e'p}) \cdot (w'+u^{p(e-x)+y}\lambda v') $,
\item $\phi_1(u^{e(p-1)} v') = cv'$, $\phi_1(w'+u^{p(e-x)+y}\lambda v')=dw'$,
\item $\widehat{g}(v') =\omega^{\alpha-e}(g)v'$ and $\widehat{g}(w') =\omega^{\beta}(g)w'$ for all $g \in {\operatorname{Gal}}(K_1/K)$.
\end{itemize}
\end{prop}
\begin{proof}
Consider the Breuil module $\mathcal{P}''$ defined by \begin{itemize}
\item $\mathcal{P}'' = (k_E[u]/u^{e'p}) \cdot v'' + (k_E[u]/u^{e'p}) \cdot w'' $,
\item ${\operatorname{Fil}}^1 \mathcal{P}'' =(k_E[u]/u^{e'p}) \cdot u^{e(p-1)} v'' +
(k_E[u]/u^{e'p}) \cdot (u^{y(p-1)}w''+u^{p(e-x)}\lambda v'') $,
\item $\phi_1(u^{e(p-1)} v'') = cv''$, $\phi_1(u^{y(p-1)}w''+u^{p(e-x)}\lambda v'')=dw''$,
\item $\widehat{g}(v'') =\omega^{k+x-e}(g)v''$ and $\widehat{g}(w'') =\omega^{l}(g)w''$ for all $g \in {\operatorname{Gal}}(K_1/K)$.
\end{itemize}
(One checks without difficulty that this \emph{is} a Breuil module. For instance the condition
on the minimum degree of terms appearing in $\lambda$ guarantees that
${\operatorname{Fil}}^1 \mathcal{P}''$ contains $u ^{e'}\mathcal{P}''$.) Note that $k+x\equiv \alpha\pmod{p-1}$,
$l+y\equiv\beta\pmod{p-1}$. We claim that $\mathcal{P}$, $\mathcal{P}'$ and $\mathcal{P}''$ all have the
same generic fibre. To see this, one can check directly that there is a morphism
$\mathcal{P}\to\mathcal{P}''$ given by \[v\mapsto u^{p(e-x)}v'',\ w\mapsto w'',\]and a
morphism $\mathcal{P}'\to\mathcal{P}''$ given by \[v'\mapsto v'',\ w'\mapsto
u^{py}w''.\] By Proposition 8.3 of \cite{MR2004122}, it is enough to
check that the kernels of these maps do not contain any free
$k_E[u]/(u^{e'p})$-submodules, which is an immediate consequence of
the inequalities $p(e-x),py<e'p$.
\end{proof}
\begin{rem}
\label{rem:extension-classes}
We note for future reference that while the classes in
$H^1(G_K,\chibar_1 \chibar_2^{-1})$ realised by $\mathcal{P}(x,y,\lambda)$ and
$\mathcal{P}'$ may not coincide, they differ at most by multiplication
by a $k_E$-scalar. To see this, observe that the maps $\mathcal{P} \to
\mathcal{P}''$ and $\mathcal{P}' \to \mathcal{P}''$ induce $k_E$-isomorphisms on the
rank one sub- and quotient Breuil modules.
\end{rem}
We review the constraints on the integers $x,y$: they must lie
between $0$ and~$e$, and if we let $k,l$ be the residues of
$\alpha-x,\beta-y \pmod{p-1}$ in the interval $[0,p-1)$ then we must
have $\{\omega^k,\omega^l\} = \{\omega^{a_1},\omega^{a_2}\}$. Call such a pair $x,y$ \emph{valid}.
Note that $l-k \equiv \beta-\alpha + x - y \pmod{p-1}$ for any valid pair.
\begin{cor}
\label{cor:comparison-of-good-models}
Let $x',y'$ be another valid pair.
Suppose that $x' + y' \le e$ and $p(x'-x)+(y -y') \ge 0$. Then $\mathcal{P}(x,y,\lambda)$ has
the same generic fibre as
$\mathcal{P}(x',y',\lambda')$, where $\lambda' = u^{p(x'-x)+(y-y')} \lambda$.
\end{cor}
\begin{proof}
The Breuil module $\mathcal{P}(x',y',\lambda')$ is well-defined: one checks
from the definition that the congruence
condition on the degrees of the nonzero terms in $\lambda'$ is
satisfied, while since $x'+y' \le e$
there is no condition on the lowest degrees appearing in $\lambda'$.
Now the result is immediate from Proposition~\ref{prop: comparing extensions},
since $u^{p(e-x)+y}\lambda = u^{p(e-x')+y'}\lambda'$.
\end{proof}
Recall that $x+y \equiv e\pmod{p-1}$, so that $x$ and $e-y$ have the
same residue modulo $p-1$. It follows that if $x,y$ is a valid pair
of parameters, then so is $e-y,y$. Let $X$ be the largest
value of $x$ over all valid pairs $x,y$, and similarly $Y$ the smallest value of $y$;
then $Y=e-X$, since if we had $Y > e-X$ then $e-Y$ would be a
smaller possible value for $x$.
\begin{cor}
\label{cor:generic-fibres-all-occur-extremally}
The module $\mathcal{P}(x,y,\lambda)$ has the same generic fibre as
$\mathcal{P}(X,Y,\mu)$ where $\mu \in k_E[u]/u^{e'p}$ has all nonzero terms of degree congruent to $\beta-\alpha+X-Y$ modulo $p-1$, and has all terms
of degree less than $X(p-1)$, unless $\chibar_1=\chibar_2$,
in which case it may additionally have a term of degree
$pX-Y$.
\end{cor}
\begin{proof}
Since $X+Y=e$ and $p(X-x)+(y-Y) \ge 0$ from the choice of $X,Y$, the
previous Corollary shows that $\mathcal{P}(x,y,\lambda)$ has the same generic
fibre as some $\mathcal{P}(X,Y,\lambda')$; by Proposition~\ref{prop:
possible extensions of Breuil modules} this has the same generic
fibre as $\mathcal{P}(X,Y,\mu)$ for $\mu$ as in the statement. (Note that
if $\chibar_1=\chibar_2$ then automatically $X \ge Y$, because in this
case if $x,y$ is a valid pair then so is $y,x$.)
\end{proof}
\begin{prop}
\label{prop:computation of the dimension of Lflat}Let $X$ be as
above, i.e., $X$ is the maximal integer
such
that
\begin{itemize}
\item $0\le X\le e$, and
\item either $\chibar_1|_{I_K}=\omega^{a_1+X}$ or
$\chibar_1|_{I_K}=\omega^{a_2+X}$.
\end{itemize}
Then $L_{\operatorname{flat}}$ is an ${\overline{\F}_p}$-vector space of dimension at most
$X$, unless $\chibar_1=\chibar_2$, in which case it has
dimension at most $X+1$.
\end{prop}
\begin{proof}
Let $L_{\mathrm{flat},k_E} \subset L_{\operatorname{flat}}$ consist of the classes $\eta$ such that the containment $\eta \in L_{\operatorname{flat}}$ is
witnessed by a $k_E$-vector space scheme with generic fibre descent
data. By
Corollary~\ref{cor:generic-fibres-all-occur-extremally} and Remark~\ref{rem:extension-classes} these are exactly
the classes arising from the Breuil modules $\mathcal{P}(X,Y,\mu)$ with
$k_E$-coefficients as in
Corollary~\ref{cor:generic-fibres-all-occur-extremally}. These
classes form a $k_E$-vector space (since they are \emph{all} the
extension classes arising from extensions of $\mathcal{N}(Y)$ by $\mathcal{M}(X)$),
and by counting the (finite) number of possibilities for $\mu$ we see
that $\dim_{k_E} L_{\mathrm{flat},k_E}$ is at most $X$ (resp.
$X+1$ when $\chibar_1=\chibar_2$).
Since $L_{\mathrm{flat},k_E} \subset L_{\mathrm{flat},k'_E}$ if
$k_E \subset k'_E$ it follows easily that $L_{\operatorname{flat}} = \cup_{k_E}
L_{\mathrm{flat},k_E}$ is an ${\overline{\F}_p}$-vector space of dimension at
most $X$ (resp. $X+1$).
\end{proof}
We can now prove our main local result, the promised relation between $L_{\operatorname{flat}}$ and
$L_{\operatorname{crys}}$. \begin{thm}
\label{thm: crystalline equals flat}Provided that either $a_1-a_2\ne
p-1$ or $\chibar_1\chibar_2^{-1}\ne\epsilonbar$, we
have $L_{\operatorname{flat}}=L_{\operatorname{crys}}$.
\end{thm}
\begin{proof}By Theorem \ref{thm: crystalline extension implies flat},
we know that $L_{\operatorname{crys}}\subset L_{\operatorname{flat}}$, so by Proposition~\ref{prop:computation of the dimension of Lflat} it suffices to show that
$L_{\operatorname{crys}}$ contains an ${\overline{\F}_p}$-subspace of dimension
$X$ (respectively $X+1$ if $\chibar_1 = \chibar_2$). Since $L_{\operatorname{crys}}$ is the union of the spaces
$L_{\chi_1,\chi_2}$, it suffices to show that one of these spaces
has the required dimension. Let $X$ be as in the statement of
Proposition \ref{prop:computation of the dimension of Lflat}, so
that $X$ is maximal in $[0,e]$ with the property that either $\chibar_1|_{I_K}=\omega^{a_1+X}$ or
$\chibar_1|_{I_K}=\omega^{a_2+X}$. Note that by the assumption
that there is a decomposition
${\operatorname{Hom}}({\F_p},{\overline{\F}_p})=J\coprod J^c$, and an integer
$0\le \delta\le e-1$ such that \[\rhobar|_{I_K}\cong
\begin{pmatrix}
\omega^\delta \prod_{\sigma\in
J}\omega_{\sigma}^{a_{1}+1}\prod_{\sigma\in
J^c}\omega_\sigma^{a_{2}}&*\\ 0& \omega^{e-1-\delta}\prod_{\sigma\in
J^c}\omega_\sigma^{a_{1}+1}\prod_{\sigma\in
J}\omega_\sigma^{a_{2}} \end{pmatrix},\]we see that
if $X=0$ then $\chibar_1|_{I_K}=\omega^{a_2}$ (and $J$ must be empty).
If $\chibar_1|_{I_K}=\omega^{a_2+X}$ then we take $J$ to be empty
and we take $\delta=X$; otherwise $X > 0$ and $\chibar_1|_{I_K} =
\omega^{a_1+X}$, and we can take $J^c$ to be empty and
$\delta=X-1$. In either case, we may define characters $\chi_1$ and
$\chi_2$ as in Section \ref{subsec: H^1_f}, and we see from Lemma
\ref{lem: dimension of H^1_f spaces} that
$\dim_{{\overline{\F}_p}}L_{\chi_1,\chi_2}=X$ unless $\chibar_1=\chibar_2$, in
which case it is $X+1$. The result follows.\end{proof}
As a consequence of this result, we can also address the question of
the relationship between the different spaces $L_{\chi_1,\chi_2}$ for
a fixed Serre weight $a\in W^?(\rhobar)$. If $e$ is large, then
these spaces do not necessarily have the same dimension, so they
cannot always be equal. However, it is usually the case that the
spaces of maximal dimension coincide, as we can now see.
\begin{cor}
\label{cor: independence of lift for H^1_f}If either $a_1-a_2\ne
p-1$ or $\chibar_1\chibar_2^{-1}\ne\epsilonbar$, then
the spaces $L_{\chi_1,\chi_2}$ of maximal dimension are all equal.
\end{cor}
\begin{proof}
In this case $\dim_{{\overline{\F}_p}} L_{\chi_1,\chi_2}=\dim_{{\overline{\F}_p}}L_{\operatorname{crys}}$
by the proof of Theorem \ref{thm: crystalline equals flat}, so we
must have $L_{\chi_1,\chi_2}=L_{\operatorname{crys}}$.
\end{proof}
Finally, we determine $L_{\operatorname{crys}}$ in the one remaining case, where the
spaces $L_{\chi_1,\chi_2}$ of maximal dimension no longer coincide.
\begin{prop}
\label{prop: Lcrys in the exceptional case}Suppose that
$a_1-a_2=p-1$ and that $\chibar_1\chibar_2^{-1}=\epsilonbar$. Then $L_{\operatorname{crys}}=H^1(G_K,\epsilonbar)$.
\end{prop}
\begin{proof}We prove this in a similar fashion to the proof of Lemma
6.1.6 of \cite{blggord}. By twisting we can reduce to the case
$(a_1,a_2)=(p-1,0)$. Let $L$ be a given line in
$H^1(G_K,\epsilonbar)$, and choose an unramified character $\psi$
with trivial reduction. Let
$\chi$ be some fixed crystalline character of $G_K$ with Hodge-Tate weights
$p,1,\dots,1$ such that $\chibar=\epsilonbar$. Let $E/\Qp$ be a finite extension with ring
of integers $\mathcal{O}$, uniformiser $\varpi$ and residue field ${\mathbb{F}}$, such
that $\psi$ and $\chi$ are defined over $E$ and $L$ is defined over ${\mathbb{F}}$. Since any extension of $1$ by $\chi\psi$ is
automatically crystalline, it suffices to show that we can choose
$\psi$ so that $L$ lifts to $H^1(G_K,\mathcal{O}(\psi\chi))$.
Let $H$ be the
hyperplane in $H^1(G_K,\mathbb{F})$ which annihilates $L$ under the Tate
pairing. Let $\delta_1 : H^1(G_K,\mathbb F(\overline{\epsilon})) \to
H^2(G_K,\mathcal{O}(\psi\chi))$ be the map coming from
the exact sequence $0\to \mathcal{O}(\psi\chi)\stackrel{\varpi}{\to}\mathcal
O(\psi\chi)\to \mathbb F(\overline{\epsilon})\to 0$ of
$G_K$-modules. We need to show that $\delta_1(L)=0$ for some choice
of $\psi$.
Let $\delta_0$ be the map
$H^0(G_K,(E/\mathcal{O})(\psi^{-1}\chi^{-1}\epsilon)) \to
H^{1}(G_K,\mathbb{F})$ coming from the exact sequence $0 \to \mathbb{F} \to
(E/\mathcal{O})(\psi^{-1}\chi^{-1}\epsilon) \stackrel{\varpi}{\to}
(E/\mathcal{O})(\psi^{-1}\chi^{-1}\epsilon) \to 0$ of $G_K$-modules. By
Tate local duality, the condition that $L$ vanishes under the map
$\delta_1$ is equivalent to the condition that the image of the map
$\delta_0$ is contained in $H$. Let $n \geq 1$ be the largest
integer with the property that $\psi^{-1}\chi^{-1}\epsilon \equiv 1
\pmod{\varpi^n}$. Then we can write $\psi^{-1}\chi^{-1}\epsilon(x)=
1+\varpi^n \alpha(x)$ for some function $\alpha : G_K \to
\mathcal{O}$. Let $\overline{\alpha}$ denote $\alpha \pmod{\varpi} : G_K
\to \mathbb{F}$. Then $\overline{\alpha}$ is additive and the choice of
$n$ ensures that it is non-trivial. It is straightforward to check
that the image of the map $\delta_0$ is the line spanned by
$\overline{\alpha}$. If $\overline{\alpha}$ is in $H$, we are
done. Suppose this is not the case. We break the rest of the proof
into two cases.
\medskip{\sl Case 1: $L$ is
tr\`es ramifi\'e:} To begin, we observe that it is
possible to have chosen
$\psi$ so that
$\overline{\alpha}$ is ramified. To see this, let $m$ be the largest integer with the property that
$(\psi^{-1} \chi^{-1} \epsilon)|_{I_K} \equiv 1 \pmod{\varpi^m}$. Note that $m$ exists since the
Hodge-Tate weights of $\psi^{-1}\chi^{-1}\epsilon$ are not all $0$.
If $m = n$ then we are done, so assume instead that $m >n$. Let $g\in
G_K$ be a lift of ${\operatorname{Frob}}_K$. We claim that
$\psi^{-1}\chi^{-1}\epsilon(g)= 1 +\varpi^{n} \alpha(g)$ such that
$\alpha (g) \not \equiv 0 \pmod{\varpi}$. In fact, if $\alpha
(g)\equiv 0 \pmod{\varpi}$ then $\psi^{-1}\chi^{-1}\epsilon(g) \in 1
+ \varpi^{n+1} \mathcal{O}$. Since $m > n$ we see that
$\psi^{-1}\chi^{-1}\epsilon(G_K) \subset 1 + \varpi^{n+1} \mathcal{O}$
and this contradicts the selection of $n$. Now define an unramified
character $\psi'$ with trivial reduction by setting $\psi' (g) =
1 - \varpi^n \alpha (g)$. After replacing $\psi$ by $\psi \psi'$ we
see that $n$ has increased but $m$ has not changed. After finitely
many iterations of this procedure we have $m=n$, completing the
claim.
Suppose, then, that $\overline{\alpha}$ is ramified. The fact that $L$ is tr\`es
ramifi\'e implies that $H$ does not contain the unramified line in
$H^1(G_K,\mathbb{F})$. Thus there is a unique $\overline{x} \in
\mathbb{F}^\times$ such that $\overline{\alpha}+u_{\overline{x}} \in H$
where $u_{\overline{x}}: G_K\to \mathbb{F}$ is the unramified
homomorphism sending ${\operatorname{Frob}}_K$ to $\overline{x}$. Replacing $\psi$ with $\psi$ times
the unramified character sending ${\operatorname{Frob}}_K$ to $(1+\varpi^n x)^{-1}$,
for $x$ a lift of $\overline{x}$, we are done.
\medskip{\sl Case 2: $L$ is peu ramifi\'e:} Making a ramified
extension of $\mathcal{O}$ if necessary, we can and do assume that $n\geq
2$. The fact that $L$ is peu ramifi\'e implies that $H$ contains the
unramified line. It follows that if we replace $\psi$ with $\psi$
times the unramified character sending ${\operatorname{Frob}}_K$ to $1+\varpi$, then
we are done (as the new $\overline{\alpha}$ will be unramified).
\end{proof}
\section{Global consequences}\label{sec: global
consequences}\subsection{}We now deduce our main global results,
using the main theorems of \cite{blggU2} together with our local
results to precisely determine the set of Serre weights for a global
representation in the totally ramified case.
\begin{prop}
\label{prop: semisimple elimination if totally ramified}Let $F$ be an imaginary CM field with maximal totally real
subfield $F^+$, and suppose that $F/F^+$ is unramified at all finite
places, that every place of $F^+$ dividing $p$ splits completely in
$F$, and that $[F^+:{\mathbb{Q}}]$ is even. Suppose that $p>2$, and that
$\bar{r}:G_F\to\operatorname{GL}_2({\overline{\F}_p})$ is an irreducible modular representation
with split ramification. Let
$a\in({\mathbb{Z}}^2_+)_0^S$ be a Serre
weight such that $\bar{r}$ is modular of weight $a$. Let $w$ be a
place of $F$ such that $F_w/\Qp$ is totally ramified of degree $e$. Write
$a_w=(a_1,a_2)$, and write $\omega$ for the unique fundamental
character of $I_{F_w}$ of niveau one.
Then $a_w\in W^?(\bar{r}|_{G_{F_w}})$.
\end{prop}
\begin{proof}
Suppose first that $\bar{r}|_{G_{F_w}}$ is irreducible. Then the
proof of Lemma 5.5 of \cite{geesavitttotallyramified} goes through
unchanged, and gives the required result. So we may suppose that
$\bar{r}|_{G_{F_w}}$ is reducible. In this case the proof of Lemma 5.4 of
\cite{geesavitttotallyramified} goes through unchanged, and shows
that we have \[\bar{r}|_{G_{F_w}}\cong
\begin{pmatrix}
\chibar_1&*\\0&\chibar_2
\end{pmatrix}\]where
$(\chibar_1\chibar_2)|_{I_K}=\omega^{a_1+a_2+e}$, and either
$\chibar_1|_{I_K}=\omega^{a_1+z}$ or
$\chibar_1|_{I_K}=\omega^{a_2+e-z}$ for some $1\le z\le e$, so we
are in the situation of Section \ref{subsec: H^1_f}. Consider the
extension class in $H^1(G_{F_w},\chibar_1\chibar_2^{-1})$
corresponding to $\bar{r}|_{G_{F_w}}$. By Proposition \ref{prop:
modular of some weight implies potentially BT lifts exist}, either
$a_1-a_2=p-1$ and $\chibar_1\chibar_2^{-1}=\epsilonbar$, or this extension class is in $L_{\operatorname{flat}}$. In either case,
by Theorem \ref{thm: crystalline equals flat} and Proposition
\ref{prop: Lcrys in the exceptional case}, the extension class is in
$L_{\operatorname{crys}}$, so that $a_w\in W^?(\bar{r}|_{G_{F_w}})$, as required.
\end{proof}
Combining this with Theorem 5.1.3 of \cite{blggU2}, we obtain our
final result.
\begin{thm}
\label{thm: the main result, modular if and only if predicted}Let
$F$ be an imaginary CM field with maximal totally real subfield
$F^+$, and suppose that $F/F^+$ is unramified at all finite places,
that every place of $F^+$ dividing $p$ splits completely in $F$,
that $\zeta_p\notin F$, and that $[F^+:{\mathbb{Q}}]$ is even. Suppose that
$p>2$, and that $\bar{r}:G_F\to\operatorname{GL}_2({\overline{\F}_p})$ is an irreducible
modular representation with split ramification such that
$\bar{r}(G_{F(\zeta_p)})$ is adequate. Assume that for each place $w|p$
of $F$, $F_w/\Qp$ is totally ramified.
Let $a\in({\mathbb{Z}}^2_+)_0^S$ be a Serre weight. Then
$a_w\in W^?(\bar{r}|_{G_{F_w}})$ for all $w$ if and only if $\bar{r}$ is modular of
weight $a$.
\end{thm}
\bibliographystyle{amsalpha}
\section{Overview}
ALICE (A Large Ion Collider Experiment)\cite{ALICEref} at the LHC\cite{LHCref} is a general-purpose experiment designed to study the phase transition between ordinary nuclear matter and the quark-gluon plasma, which occurs in high energy nucleus-nucleus collisions.
To enhance its capabilities for measuring jet properties, the ALICE detector was upgraded in 2010 with a large acceptance ($\Delta \eta \times \Delta \phi = 1.4 \times 1.86$ (107\textdegree)) ElectroMagnetic Calorimeter (EMCal)\cite{EMCALTDR} providing a measurement of the neutral fraction of the jet energy and an unbiased jet trigger, thanks to a centrality-dependent energy threshold.
The sampling calorimeter consists of 12288 towers of layered Pb-scintillator arranged in modules of $2 \times 2$ towers, with each tower containing 77 layers for a total of 20.1 radiation lengths.
A tower is read out with an avalanche photodiode (APD) which collects, via a bundle of optical fibers, the light created by particle interactions.
A charge sensitive preamplifier (CSP) is used to instrument each APD.
A supermodule (SM) is made of 24 strips of 12 modules (1152 towers). In 2010, the EMCal comprised four SMs; in 2011 there were ten SMs; and since 2012 the full EMCal consists of ten complete SMs plus two thirds of a SM.
\begin{figure}
\begin{center}
\includegraphics[angle=0,width=0.5\textwidth]{./figs/trigger_overview}
\caption{Flat view of the EMCal detector with its surrounding trigger electronics.}
\label{elec_overview}
\end{center}
\end{figure}
A schematic view of the EMCal detector with its front-end and trigger electronics is sketched in fig.~\ref{elec_overview}.
Each SM is divided into three regions, and each region is instrumented by 12 FEE cards, so each SM has 36 FEE cards\cite{FEEpaper}.
Each FEE takes 32 analog inputs and generates eight fastOR signals. These are fast shaped (100\,ns) analog sums over one module, i.e. four tower signals.
The individual tower signals are used for energy measurement while the 3072 module analog sums (fastORs) are used to build the trigger.
The Trigger Region Units (TRUs) \cite{TRUpaper} are used to digitize, at the machine bunch crossing rate (40.08\,MHz), the fastOR signals provided by the FEEs and to compute and generate the local Level 0 (L0) triggers. Finally, the Summary Trigger Unit (STU) computes the global L0 trigger by ORing the local L0 triggers.
The STU also collects and aggregates the TRU data used to compute the two Level 1 (L1) triggers: the photon trigger and the jet trigger.
The L1 thresholds are computed event-by-event using the ALICE beam-beam counter detector\cite{V0paper} (V0) according to a 2\textsuperscript{nd} order fit function $A \cdot V0_{count}^2 + B \cdot V0_{count}+C$, where $V0_{count}$ is the total charge information provided by the V0 and $A$, $B$, $C$ are the threshold parameters; a sketch of this computation is given below.
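As an illustration, the following minimal C sketch (not the actual STU firmware, whose fixed-point arithmetic and register interface are not described here) evaluates this polynomial for one event; the parameter values $A$, $B$, $C$ are placeholders loaded from the run configuration.
\begin{verbatim}
/* Illustrative sketch of the event-by-event L1 threshold computation.
 * The parameters A, B, C are placeholders; the real values are loaded
 * through the DCS run configuration. */
#include <stdint.h>

typedef struct {
    float a, b, c;                 /* threshold parameters A, B, C */
} l1_params;

static uint32_t l1_threshold(l1_params p, uint32_t v0_count)
{
    float v = (float)v0_count;
    float thr = p.a * v * v + p.b * v + p.c;   /* A*V0^2 + B*V0 + C */
    return thr > 0.0f ? (uint32_t)thr : 0u;    /* clamp negative values */
}
\end{verbatim}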
The communication between the TRUs and the STU is performed through 12\,m point-to-point Cat-7 Ethernet cables.
Additionally, the STU is included in the EMCal readout via a Detector Data Link\cite{DDLpaper} (DDL) to the ALICE DAQ\cite{ALICEDAQ}. The readout, which is primarily used to return the triggering indexes and thresholds used on an event-by-event basis, can also be used to provide the primitive triggering data in order to recheck off-line the on-line trigger quality.
Finally, an Ethernet interface to the Detector Control System (DCS) is used for fast FPGA firmware upload and run configuration (threshold parameters, trigger delays, etc.).
\section{Trigger algorithms}
\subsection{TRU L0 algorithm}
After digitization, each fastOR is digitally integrated over a sliding time window of four samples. Then, the results of these operations are continuously fed to $2 \times 2$ spatial sum processors that compute the energy deposit in patches of $4\times4$ towers (or $2 \times 2$ fastORs) for the managed region. Each patch energy is constantly compared to a minimum bias threshold; whenever it is crossed and the maximum of the peak has been found, a local L0 trigger is fired. In preparation for the L1 algorithm, the time-integrated sums are also stored in a circular buffer for later retrieval and transmission to the STU.
Note that the L0 trigger suffers from some spatial trigger inefficiencies due to the fact that the TRUs cannot compute spatial sums for patches sitting on region boundaries. A software sketch of the L0 algorithm is given below.
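The following C sketch mirrors the L0 algorithm just described: a four-sample running sum per fastOR, followed by sliding $2\times2$ fastOR patch sums compared to the minimum bias threshold once the peak maximum is found. The $8\times12$ region shape and the peak test are simplifying assumptions (a TRU region contains 96 fastORs); this is not the TRU firmware.
\begin{verbatim}
#include <stdint.h>
#include <stdbool.h>

#define ROWS 8            /* fastOR rows in one TRU region (assumed)    */
#define COLS 12           /* fastOR columns in one TRU region (assumed) */

/* 4-sample sliding time integration for one fastOR channel */
typedef struct { uint16_t s[4]; uint8_t i; uint32_t sum; } timesum;

static uint32_t timesum_update(timesum *t, uint16_t sample)
{
    t->sum += sample - t->s[t->i];    /* slide the 4-sample window */
    t->s[t->i] = sample;
    t->i = (t->i + 1) & 3;
    return t->sum;
}

/* one bunch crossing: true if any 2x2-fastOR patch fires */
static bool l0_decision(const uint32_t cur[ROWS][COLS],
                        const uint32_t prev[ROWS][COLS], uint32_t thr)
{
    for (int r = 0; r + 1 < ROWS; r++)
        for (int c = 0; c + 1 < COLS; c++) {
            uint32_t now = cur[r][c] + cur[r][c+1]
                         + cur[r+1][c] + cur[r+1][c+1];
            uint32_t before = prev[r][c] + prev[r][c+1]
                            + prev[r+1][c] + prev[r+1][c+1];
            /* threshold crossed and peak maximum just passed */
            if (before > thr && now < before)
                return true;
        }
    return false;
}
\end{verbatim}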
\subsection{Global EMCal triggers computed in STU}
The STU is the access point to the Central Trigger Processor (CTP)\cite{CTPpaper} for the EMCal.
Consequently, it is used to provide the global L0, which is an OR of the 32 L0s locally calculated by the TRUs, and two L1 triggers: the L1-gamma trigger and the L1-jet trigger.
The L1-gamma trigger uses the same patch size as L0, but without the inefficiencies displayed by the local L0 (i.e. $2\times2$ fastOR patches spanning several TRU regions can be computed).
The L1-jet trigger is built by summing energy over a sliding window of $4\times4$ subregions, where a subregion is defined as a $4 \times 4$ fastOR (or $8 \times 8$ towers) area, see fig.~\ref{SM_map}.
\begin{figure}[b]
\begin{center}
\includegraphics[angle=0,width=0.8\textwidth]{./figs/SM_map3}
\caption{Cartoon of different possible L0, L1-gamma and L1-jet trigger patches.}
\label{SM_map}
\end{center}
\end{figure}
With the given EMCal geometry and due to the various trigger patch sizes, there are a total of 2208 L0, 2961 L1-gamma and 117 L1-jet trigger patches that can be fired.
\subsection{L1 trigger processing}
A block diagram of the L1 trigger processing is shown in fig.~\ref{L1_trig_proc}.
The L1-processing is not continuously running, i.e. pipelined, it is instead initiated on the confirmed L0 reception provided by the CTP via the TTC\cite{TTCpaper} links (TRUs and STU).
At this moment, 1.2\,\textmu s after interaction, the TRUs send to the STU the appropriate time integrated data from their circular buffers to the STU via the custom serial links.
The serialization, propagation delay and deserialization take 3075\,ns.
Meanwhile, the V0 detector transfers its charge information to the STU via a direct optical link. The thresholds for photon and jet patches are immediately processed and made available before the actual trigger processing starts.
Once the TRU data reception is complete, the L1-photon trigger processing and the subregion energy calculation are done in parallel for each TRU region.
When this processing is over, the L1-jet trigger processing starts, using the previously generated subregion sums. Finally, both triggers are adequately delayed to accommodate the L1-trigger latency expected by the CTP.
More technical details about the trigger implementation may be found in \cite{STU_twepp2010}.
\begin{figure}
\begin{center}
\includegraphics[angle=-90,width=0.9\textwidth]{./figs/L1_trig_proc}
\caption{Block diagram of the L1 trigger processing annotated with the time required to go through each step. }
\label{L1_trig_proc}
\end{center}
\end{figure}
\section{Custom serial protocol}
\subsection{Original solution}
The main motivation for the development of this custom serial link was the desire to reuse the TRU design made for the \textbf{PHO}ton \textbf{S}pectrometer (PHOS), which was equipped with a spare RJ45 connector directly linked to its FPGA.
The original trigger timing constraints drove the design in the same direction.
This solution minimizes transmission latency and meets some functional requirements, allowing the STU to be used as a low jitter reference clock distributor for TRUs.
Additionally, the fact that the local L0s had to be forwarded to STU for feeding its global OR required a custom solution.
Thus, the choice was made to use a four-pair LVDS link transported over CAT7 Ethernet cables because they have the appropriate impedance and feature low signal attenuation and low skew between pairs (see fig.~\ref{original_serial_link}).
Pair usage is as follows: one pair is dedicated to the LHC reference clock transfer to the TRUs, another is used by the TRUs to forward their local L0 candidates, and the two remaining pairs are used for synchronous serial data transfer without any encoding.
Each data pair ran at 400\,Mb/s and the clock used for the transfer was the LHC clock multiplied by 10.
With this very light protocol, the latency is only the sum of the cable delay and bit transmission time.
Each TRU simultaneously sends its 96 12-bit time-integrated fastOR values to the STU at 800\,Mb/s; the serialization latency is thus 1.44\,\textmu s. The communication protocol was simple: outside of the data payload transmission, a known inter-packet word was continuously transferred. At transmission time, right after the confirmed L0 reception, a header packet was sent, followed by the time-integrated data.
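For reference, the serialization figure follows directly from the payload size: $96 \times 12 = 1152$\,bits transferred at $2 \times 400$\,Mb/s take $1152 / (800 \times 10^{6})\,\mathrm{s} = 1.44$\,\textmu s.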
The link synchronization is done before each start of run by a Finite State Machine (FSM) implemented in the FPGA.
This is done in two steps. In the first step, the data phase alignment takes place; it relies on the fine-granularity delaying feature available for each individual data path in the Virtex~5 FPGA (up to 64 steps of 78\,ps).
A scan of all delay values is made in order to find the zone where data reception is stable, and then the central value is applied.
In the second step, character framing is performed to associate each incoming bit with the correct deserialized word.
This whole process is performed with the inter-packet data word used as the synchronization/training pattern.
For the link quality monitoring, error counter monitors were implemented.
These were incremented for each bad inter-packet word received outside of the expected payload transmission.
The counters were checked every minute via DCS, and an alarm was raised in case of transmission errors.
\begin{figure}
\begin{center}
\includegraphics[angle=-90,width=0.85\textwidth]{./figs/original_serial_link}
\caption{Sketch of the original custom serial link.}
\label{original_serial_link}
\end{center}
\end{figure}
\subsection{Problem encountered and diagnosis tool}
The original custom serial link solution was successfully validated in the laboratory and also in 2010 with four installed SM by regularly performing TRU/STU data correlation checks. Unfortunately, in 2011, when the EMCal was fully installed, several random TRU-STU links displayed communication errors during some runs, even though the start-of-run synchronization completed correctly for all links.
As expected from the missing links, off-line validation showed missing L1-photon triggers for the corresponding regions.
However, from time to time, the on-line L1-jet trigger rate (relative to accepted L0 triggers) jumped from a nominal value of 2\% to 100\% (no rejection).
In order to understand where the problem lay, a frame reception monitor was inserted in the deployed firmware.
It is able to check, for each TRU-STU link and for each event, the good or bad reception of the packet header.
The resulting reception bit mask is inserted in the data stream along with the corresponding event.
A run diagnosis example is shown in fig.~\ref{frame_errors_vs_rate} for run 163532. While the error counter monitoring tool indicates that TRUs 1, 21 and 30 are communicating badly with the STU, the frame reception monitor shows that TRU~1 is in fact not communicating with the STU at all and that TRU~21 is transmitting data most of the time. Remarkably, TRU~30 had only three successful data transfers toward the STU, and the first one actually caused the trigger rate increase.
This observation not only confirmed the suspected communication problem, but also revealed that there was a flaw in the L1-jet trigger algorithm implementation.
\begin{figure}
\begin{center}
\includegraphics[angle=-90,width=0.95\textwidth]{./figs/frame_errors_vs_rate}
\caption{Communication failure and trigger rate diagnosis of run 163532.
The top plot shows the L1 trigger rate relative to accepted L0 triggers (L1-jet in red and L1-photon in blue). The bottom left plot shows the error count recorded every minute for each TRU; it was obtained by dividing the error count (maximum of 65535) by 100000 and adding the corresponding TRU number. The bottom right plot shows the frame bit received for each accepted trigger and for each TRU; it was obtained by dividing the frame error (maximum of 1) by 2 and adding the corresponding TRU number.
The correlation between the first successful communication of TRU-STU link 30 and the L1-jet trigger rate increase is visible.}
\label{frame_errors_vs_rate}
\end{center}
\end{figure}
To fix the communication problem, the first attempted cure was to decrease the transmission rate to $2 \times 240$\,Mb/s, thus relaxing the serial link timing constraints. This became possible in 2011 thanks to the increased timing budget for providing the candidate L1 trigger at the CTP input (from 6.2 to 7.3\,\textmu s).
Unfortunately, this did not solve the problem.
By recording data at a fixed latency after the confirmed L0 reception, instead of recording the payload after a packet header reception, the issue was found to be due to the serialization/deserialization.
While the synchronization seemed good, a one-cycle delay between the LSB part and the MSB part of the transmitted data word sometimes appeared. Obviously, this problem could not be observed at synchronization time with a single-word training pattern.
Therefore, the second, and successful, cure was to use a three-word training pattern, in conjunction with the possibility of delaying the MSB or the LSB part of a word during the synchronization phase.
\section{Correcting fake and missing triggers: from simulation to on-line debugging}
From the early development stage of the hardware and firmware, gateway tools were developed to exchange data between ``physics'' and ``firmware'' simulations, as shown schematically in fig.~\ref{vhdl_aliroot}.
This allowed for the validation of the core STU algorithms (jet and photon) before deployment and, as a side benefit, for the quick adaptation of gateway tools --- such as the trigger index decoding routine --- to the off-line software.
While these tools were useful in the early stage of development, they were limited.
For instance, the firmware simulation is slow. Moreover, it is not easy to validate all the possible external effects which could cause fake and/or missing triggers.
Examples of such possible effects include communication breakdown, clock jitter, and other, not necessarily predictable, issues.
Consequently, an ``event player'' feature was added in the STU firmware.
As shown in fig.~\ref{L1_trig_proc_pattern}, this on-line tool allows the data used by the trigger processors to be selected between the TRU received data and DCS-preloaded data.
The ``event player'' can play up to eight different patterns.
It offers the possibility of validating the entire L1 algorithms in situ and of checking the compliance with the ALICE DAQ after each data packet modification.
Additionally, it may be used to accumulate statistics to check for possible timing issues or other effects, such as radiation-induced errors.
Thanks to this debugging tool, the L1-jet trigger rate issue was pinpointed to the missing ``sub-region'' buffer clearing when serial communication links failed.
Indeed, for flaky links, the ``sub-region'' energies computed from the last correctly received data were constantly reused by the L1-jet processor.
Hence, when the last received information contained a high-energy event, the subsequent confirmed L0 events were mistakenly accepted at L1.
After correcting this problem, both L1 triggers performed as expected, as detailed in the next section.
\begin{figure}
\begin{center}
\includegraphics[angle=-90,width=0.6\textwidth]{./figs/vhdl_aliroot}
\caption{Overview of the ``physics'' and ``firmware'' co-simulation.}
\label{vhdl_aliroot}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[angle=-90,width=0.8\textwidth]{./figs/L1_trig_proc_pattern}
\caption{Modified STU firmware featuring the ``event player'', which allows the data source used by the trigger processor to be selected between the TRU received data and DCS-preloaded data. The buffer causing fake L1-jet triggers when not reset between events under flaky communication is colored in orange.}
\label{L1_trig_proc_pattern}
\end{center}
\end{figure}
\section{Trigger performance}
As an illustration of the trigger performance during the 2011 lead beam period (Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76\,TeV), the event selection as a function of the centrality of the collision for Minimum Bias (MB) and EMCal L1-jet trigger classes is shown in fig.~\ref{plot_efficacite}.
The upper plot shows the minimum bias\footnote{The minimum bias trigger class is composed of the coincidence of the V0 detector L0 trigger signal and the Zero Degree Calorimeter (ZDC) L1 trigger signal.} and L1-jet samples\footnote{The L1-jet sample is a subsample of the MB sample, obtained by requiring the coincidence with the EMCal L1-jet trigger signal.} for a linear energy threshold with two threshold parameter sets.
The lower plot shows the MB to L1-jet ratios for the different threshold parameters.
The set of parameters giving the magenta distribution rejects too many central events, while the set of parameters giving the red one is more uniform for V0A + V0C signals above 5000 ADC counts.
The L1 trigger could provide a uniform background rejection in a large centrality region, while disfavoring the most peripheral events.
This behavior, inherent to the order of the threshold computation, will be improved by using a second-order centrality-dependent energy threshold.
\begin{figure}
\begin{center}
\includegraphics[angle=0,width=0.4\textwidth]{./figs_ext/plot_efficacite}
\caption{Event selection for Pb-Pb collisions in 2011. The upper plot shows the event yields versus centrality for the minimum bias and EMCal L1-jet trigger classes.
The lower plot shows the MB to L1-jet ratio distributions for the different threshold parameters. The horizontal scale is the total amount of V0 charge expressed in ADC counts.}
\label{plot_efficacite}
\end{center}
\end{figure}
As shown on the left of fig.~\ref{spatial_uniformity}, a spatial non-uniformity in jet triggers was observed.
While the APD inter-calibration was done in the laboratory using cosmics, an in-situ calibration was performed using $\pi^0$ data at the end of 2011.
The calibration constants obtained roughly reproduce the trigger non-uniformity.
This modified APD inter-calibration correction was used in 2012; further detailed analyses are required to assess the benefit of the correction.
\begin{figure}
\begin{center}
\includegraphics[angle=0,width=0.48\textwidth]{./figs_ext/L1PatchPosition}
\includegraphics[angle=0,width=0.45\textwidth]{./figs_ext/coefCalib}
\caption{Left plot shows the occurrence of jet trigger patches; a factor of six can be observed between the most active and the least active patch. Right plot shows the in-situ calibration constants obtained using $\pi^0$ for each jet trigger patch.}
\label{spatial_uniformity}
\end{center}
\end{figure}
\section{Perspectives}
Thanks to the STU flexibility (available FPGA resources and spare trigger outputs), a 2\textsuperscript{nd} set of threshold parameters has been implemented to improve the overlap of the L1 data sample with the MB data sample. The resource usage increased from 69\% to 92\%.
In mid-2013, ALICE is foreseen to be upgraded with the Di-jet CALorimeter (DCAL), which will increase the coverage by $\Delta \eta \times \Delta \phi = 1.4 \times 100$\textdegree (PHOS included).
It is composed of six shorter SMs that will be installed on either side of the PHOS. For this operation, one or two STUs will be used, depending on whether PHOS is included in the DCAL trigger or not (one STU for DCAL, one STU for PHOS).
\section{Summary}
The STU has been installed for two years, and all the system interfaces have been validated.
The custom serial protocol, after modification, has been demonstrated to operate in realistic conditions with intensive readout.
The fast FPGA remote configuration proved to be an asset for regular upgrades and problem solving. Moreover, implementing monitoring tools and developing diagnosis tools from early on turned out to be an advantage. For instance, the ``event player'' demonstrated to be a good tool for the in-situ validation of the trigger algorithms without beam.
\section{Introduction\label{sec-introduction}}
All the assessment reports (AR) published by the Intergovernmental Panel on Climate Change (IPCC) show that there is overwhelming scientific evidence of the existence of global warming (GW). It is also well known that climate change (CC) is a non-uniform phenomenon. What is not so clear is the degree of heterogeneity across all the regions of our planet. In fact, an important part of the Sixth Assessment Report (AR6), published by the IPCC in 2021-2022, is dedicated to this issue: climate (warming) heterogeneity. This is reflected in the chapters studying regional climate change. Our paper introduces a new quantitative methodology that builds on that described in Gadea and Gonzalo 2020 (GG2020) to characterize, measure and test the existence of such climate change heterogeneity (CCH). This is done in three steps. First, we introduce a warming typology (\textit{W1}, \textit{W2} and \textit{W3}) based on the trending behavior of the quantiles of the temperature distribution of a given geographical location. Second, we define in a testable format the concepts of warming acceleration and warming amplification. These concepts help to characterize (more ordinally than cardinally) the warming process of different regions. And third, we propose the new concept of warming dominance (WD) to establish when region \textit{A} suffers a worse warming process than region \textit{B}.
We have chosen Spain as a benchmark geographical location because, as the AR6 report states, ``\ldots Spain is fully included in the Mediterranean (MED) Reference Region, but is one of the most climatically diverse countries in the world\ldots''. This fact opens up the possibility of studying warming heterogeneity (WH) from Spain to the Globe (outer heterogeneity, OWH) and also from Spain to some of its regions, represented by Madrid and Barcelona (inner heterogeneity, IWH).
The three steps rely on the results reported in GG2020, where the different distributional characteristics (moments, quantiles, inter quantile range, etc.) of the temperature distribution of a given geographical location are converted into time series objects. By doing this, we can easily implement and test all the concepts involved in the three steps.
A summary of the results is as follows. Spain and the Globe present a clear warming process, but it evolves differently. Spain goes from a warming process where lower and upper temperatures share the same trend behavior (\textit{IQR} is maintained constant over time, warming type \textit{W1}) to one characterized by a larger increase in the upper temperatures (\textit{IQR} increases over time, warming type \textit{W3}). In contrast, the Globe as a whole maintains a stable warming process characterized by lower temperatures that increase more than the upper ones (\textit{IQR} decreases over time).\footnote{Similar results for Central England are found in GG2020 and for the US in Diebold and Rudebusch, 2022.} In our typology, this constitutes a case of warming type \textit{W2}. Climate heterogeneity can go further. For instance, within Spain we find that Madrid is of type \textit{W3} while the warming process of Barcelona is of type \textit{W1}. This is in concordance with the Madrid climate being considered a Continental Mediterranean one, while Barcelona's is closer to a pure Mediterranean one.
The proposed warming typology (\textit{W1}, \textit{W2} and \textit{W3}), although dynamic, is more ordinal than cardinal. In this paper, the strength of a warming process is captured in the second step by analyzing its acceleration and its amplification with respect to a central tendency measure of the temperature distribution. Acceleration and amplification contribute to the analysis of warming heterogeneity. The acceleration in the Globe is present in all the quantiles above \textit{q30}, while in Spain it already becomes significant above the 10$^{th}$ quantile. We find an asymmetric behavior of warming amplification: in Spain (in comparison with the Globe mean temperature) it is present in the upper temperatures (above the 80$^{th}$ and 90$^{th}$ quantiles), while in the Globe the opposite occurs (below the 20$^{th}$ and 30$^{th}$ quantiles). Within Spain, Madrid and Barcelona also behave differently in terms of acceleration and amplification. Overall, warming in Spain dominates that of the Globe in all the quantiles except for the lower quantile \textit{q05}, and between Madrid and Barcelona there is only partial WD: Madrid warming-dominates Barcelona in the upper part of the distribution and Barcelona warming-dominates Madrid in the lower one.
The existence of a clear heterogeneous warming process opens the door to the need for new non-uniform causal (effect) research, going beyond the standard causality-in-mean analysis (see Tol, 2021). CCH also suggests that, in order for mitigation-adaptation policies to be as efficient as possible, they should be designed following a common factor structure: a common global component plus an idiosyncratic local element. This is in line with the results found in Brock and Xepapadeas (2017), D'Autume et al. (2016) and Peng et al. (2021). Future climate agreements should clearly take this CCH into account. An important by-product of our warming heterogeneity results is the increase that this heterogeneity can generate in the public awareness of the GW process. A possible explanation for this can be found in the behavioral economics work by Malmendier (2021), in the results of the European Social Survey analyzed in Nowakowski and Oswald (2020) or in the psychology survey by Maiella et al. (2020).
The rest of the paper is organized as follows. Section 2 describes our basic climate econometrics methodology. Section 3 presents a brief description of the temperature data from Spain and the Globe. Section 4 addresses the application of our quantitative methodology in the cross-sectional version (temperatures measured monthly by stations in an annual interval) to Spain and (versus) the Globe. It also reports the results of applying the methodology using a purely temporal dimension (local daily temperature on an annual basis) for two representative stations in Spain (Madrid and Barcelona, empirical details in the Appendix). Section 5 offers a comparison and interpretation of the results. Finally, Section 6 concludes the paper.
\section{Climate Econometrics Methodology\label{sec-method}}
In this section, we briefly summarize the novel econometric methodology introduced in GG2020 to analyze Global and Local Warming processes.
Following GG2020, warming is defined as an increasing trend in certain characteristics of the temperature distribution. More precisely:
\begin{defn} \label{def1} \textit{(\underline{Warming})}:
\textit{ Warming is defined as the existence of an increasing trend in some of the characteristics measuring the central tendency or position (quantiles) of the temperature distribution.}
\end{defn}
An example is a deterministic polynomial trend $C_{t}=\beta _{0}+\beta _{1}t+\beta _{2}t^{2}+...+\beta _{k}t^{k}$ for certain values of the $\beta$ parameters. \\
In GG2020 temperature is viewed as a functional stochastic process $X=(X_{t}(\omega), t \in T)$, where $T$ is an interval in $\mathbb{R}$, defined in a probability space $(\Omega, \Im, P)$. A convenient example of an infinite-dimensional discrete-time process consists of associating $\xi=(\xi_n, n \in \mathbb{R}_{+})$ with a sequence of random variables whose values are in an appropriate function space. This may be obtained by setting
\begin{equation}
X_{t}(n)=\xi_{tN+n}, \text{ } 0\leq n \leq N, \text{ } t=0,1,2, ..., T \label{example}
\end{equation}
so $X=(X_{t}, t=0,1,2,...,T)$. If the sample paths of $\xi$ are continuous, then we have a sequence $X_{0}, X_{1}, ....$ of random variables in the space $C[0, N]$. The choice of the period or segment $t$ will depend on the situation in hand. In our case, $t$ will be the period of a year, and $N$ represents cross-sectional units or higher-frequency time series.
We may be interested in modeling the whole sequence of $\mathbf{G}$ functions, for instance the sequence of state densities ($f_{1}(\omega), f_{2}(\omega), ..., f_{T}(\omega) $ ) as in Chang et al. (2015, 2016) or only certain characteristics ($C_{t}(w)$) of these $\mathbf{G}$ functions, for instance, the state mean, the state variance, the state quantile, etc. These characteristics can be considered time series objects and, therefore, all the econometric tools already developed in the time series literature can be applied to $C_{t}(w)$. With this characteristic approach we go from $\Omega$ to $\mathbb{R}^{T}$, as in a standard stochastic process, passing through a $\mathbf{G}$ functional space:
\begin{center}
$\underset{(w)}{\Omega} \xrightarrow{X} \underset{X_{t}(w)}{\mathbf{G}} \xrightarrow{C} \underset{C_{t}(w)}{\mathbb{R}}$ \\
\end{center}
Going back to the convenient example and abusing notation, the stochastic structure can be summarized in the following array:
\begin{equation}
\begin{array}{|c|c|c|c||c|}
\hline
X_{10}(w)=\xi _{0}(w) & X_{11}(w)=\xi _{1}(w) & \cdots & X_{1N}(w)=\xi _{N}(w) & C_{1}(w) \\ \hline
X_{20}(w)=\xi _{N+1}(w) & X_{21}(w)=\xi _{N+2}(w) & \cdots & X_{2N}(w)=\xi _{2N}(w) & C_{2}(w) \\ \hline
\vdots & \vdots & \ddots & \vdots & \vdots \\ \hline
X_{T0}(w)=\xi _{(T-1)N+1}(w) & X_{T1}(w)=\xi _{(T-1)N+2}(w) & \cdots & X_{TN}(w)=\xi _{TN}(w) & C_{T}(w) \\ \hline
\end{array}
\label{eq-scheme}
\end{equation}
The objective of this section is to provide a simple test to detect the existence of a general unknown trend component in a given characteristic $C_t$ of the temperature process $X_t$. To do this, we need to convert Definition \ref{def1} into a more practical definition.
\begin{defn} \label{def2} \textit{(\underline{Trend test})}: \textit{Let $h(t)$ be an increasing function of $t$. A characteristic $C_{t}$ of a functional stochastic process $X_{t}$ contains a trend if $\beta \neq 0$ in the regression}
\begin{equation}
C_{t}=\alpha +\beta h(t)+u_{t}, \text{ } t=1,...,T. \label{tbeta}
\end{equation}
\end{defn}
The main problem of this definition is that the trend component in $C_t$ as well as the function $h(t)$ are unknown. Therefore, this definition cannot be easily implemented. If we assume that $C_t$ does not have a trend component (it is $I(0)$)\footnote{Our definition of an I(0) process follows Johansen (1995). A stochastic process $Y_{t}$ that satisfies $Y_{t}-E(Y_{t})=\sum_{i=1}^{\infty }\Psi _{i}\varepsilon _{t-i}$ is called I(0) if $\sum_{i=1}^{\infty }\Psi _{i}z^{i}$ converges for $\left\vert z\right\vert <1+\delta $, for some $\delta >0$, and $\sum_{i=1}^{\infty }\Psi _{i}\neq 0$, where the condition $\varepsilon _{t}\thicksim iid(0,\sigma ^{2})$ with $\sigma ^{2}>0$ is understood.} and $h(t)$ is linear, then we have the following well-known result.
\begin{prop}\label{prop1}
Let $C_{t}=I(0)$. In the regression
\begin{equation}
C_{t}=\alpha +\beta t + u_{t}
\label{eq-reg}
\end{equation}
the OLS estimator
\begin{equation}
\widehat{\beta}=\frac{\sum \limits_{t=1}^{T}(C_{t}-\overline{C})(t-\overline{t})}{\sum \limits_{t=1}^{T}(t-\overline{t})^{2}}
\end{equation}
satisfies
\begin{equation}
T^{3/2}\widehat{\beta }=O_{p}(1)
\end{equation}
and asymptotically ($T \rightarrow \infty$)
\begin{equation*}
t_{\beta =0} \text{ is } N(0,1).
\end{equation*}
\end{prop}
In order to analyze the behavior of the t-statistic $t_{\beta =0}$ for a general trend component in $C_t$, it is very convenient to use the concept of \textit{Summability} (Berenguer-Rico and Gonzalo, 2014).
\begin{defn} \label{def3} \textit{(\underline{Order of Summability})}: \textit{ A trend $h(t)$ is said to be summable of order ``$\delta$'' $(S(\delta ))$ if there exists a slowly varying function $L(T)$,\footnote{A positive Lebesgue measurable function, L, on $(0,\infty)$ is slowly varying (in Karamata's sense) at $\infty$ if
\begin{equation}
\frac{L(\lambda n)}{L(n)}\rightarrow 1\text{ }(n\rightarrow \infty )\text{ }%
\forall \lambda >0.
\end{equation}
(See Embrechts et al., 1999, p. 564).} such that}
\begin{equation}
S_{T}=\frac{1}{T^{1+\delta }}L(T)\sum_{t=1}^{T}h(t) \label{eq_sum}
\end{equation}
\textit{is $O(1)$, but not $o(1)$.}
\end{defn}
\begin{prop}\label{prop2}
Let $C_{t}=h(t)+I(0)$ such that $h(t)$ is $ S(\delta )$ with $\delta \geq 0$, and such that the function $g(t)=h(t)t $ is $ S(\delta +1)$.
In the regression
\begin{equation}
C_{t}=\alpha +\beta t + u_{t} \label{tbeta2}
\end{equation}
the OLS $\widehat{\beta}$ estimator satisfies
\begin{equation}
T^{(1-\delta )}\widehat{\beta }=O_{p}(1).
\end{equation}
Assuming that the function $h(t)^{2}$ is $ S(1+2 \delta-\gamma)$ with $0\leq \gamma \leq1+\delta $, then
\begin{equation}
t_{\beta =0} = \left\{
\begin{array}{ll}
O_{p}(T^{\gamma /2}) & \text{for } 0\leq \gamma \leq 1 \\
O_{p}(T^{1/2}) & \text{for } 1\leq \gamma \leq 1+\delta
\end{array}%
\right.
\end{equation}
\end{prop}
Examples of how this proposition applies to different particular Data Generating Processes (DGP) can be found in GG2020.\\
A question of great empirical importance is how our trend test ($TT$) of Proposition \ref{prop2} behaves when $C_t=I(1)$ (accumulation of an I(0) process). Following Durlauf and Phillips (1988), $T^{1/2}\widehat{\beta}=O_{p}(1)$; however, $t_{\beta =0}$ diverges as $ T {\rightarrow } \infty$. Therefore, our $TT$ can detect the stochastic trend generated by an I(1) process. In fact, our test will detect trends generated by any of the three standard persistent processes considered in the literature (see Muller and Watson, 2008): (i) fractional or long-memory models; (ii) near-unit-root AR models; and (iii) local-level models. Let
\begin{equation}
C_{t}=\mu+z_{t},\text{ } t=1,...,T. \label{eq-sto_trend}
\end{equation}
In the first model, $z_{t}$ is a fractional process with $1/2<d<3/2$. In the second model, $z_{t}$ follows an AR, with its largest root close to unity, $\rho _{T}=1-c/T$. In the third model, $z_{t}$ is decomposed into an I(1) and an I(0) component. Its simplest format is $z_{t}$ = $\upsilon _{t}$ + $\epsilon _{t}$ with $\upsilon _{t}$ = $\upsilon _{t-1}$ +$\eta _{t}$, where $\epsilon _{t}$ is $ID(0,q\ast \sigma ^{2}$), $\eta _{t}$ is $ID(0,\sigma ^{2})$, $\sigma^{2} >0$ and both disturbances are serially and mutually independent. Note that the pure unit-root process is nested in all three models: $d=1$, $c=0$, and $q=0$.
The long-run properties implied by each of these models can be characterized using the stochastic properties of the partial sum process for $z_{t}$. The standard assumptions considered in the macroeconomics or finance literature assume the existence of a ``$\delta$,'' such that $T^{-1/2+\delta }\sum_{t=1}^{T}z_{t}\longrightarrow \sigma $ $H(.)$, where ``$\delta$'' is a model-specific constant and $H$ is a model-specific zero-mean Gaussian process with a given covariance kernel $k(r,s).$ Then, it is clear that the process $C_{t}=\mu+z_{t}$ is summable (see Berenguer-Rico and Gonzalo, 2014). This is the main reason why Proposition \ref{prop3} holds for these three persistent processes.
\begin{prop}\label{prop3}
Let $C_{t}=\mu+z_{t},t=1,...,T$, with $z_{t}$ any of the following three processes: (i) a fractional or long-memory model, with $1/2<d<3/2$; (ii) a near-unit-root AR model; or (iii) a local-level model. Furthermore, $T^{-1/2+\delta }\sum_{t=1}^{T}z_{t}\longrightarrow \sigma $ $H(.)$,
where ``$\delta$'' is a model-specific constant and $H$ is a model-specific zero-mean Gaussian process with a given covariance kernel $k(r,s).$
Then, in the LS regression
\begin{equation*}
C_{t}=\alpha+\beta t+u_{t},
\end{equation*}
the t-statistic diverges,
\begin{equation*}
t_{\beta =0}=O_{p}(T^{1/2}).
\end{equation*}
\end{prop}
Having developed the theoretical core, we are now in a position to design the tools used in the empirical strategy. The following subsection describes each of them.
\subsection{Empirical tools: definitions and tests}
From Propositions \ref{prop2} and \ref{prop3}, Definition \ref{def2} can be simplified into the following testable and practical definition.
\begin{defn} \label{def4} \textit{(\underline{Practical definition 2})}: \textit{ A characteristic $C_{t}$ of a functional stochastic process $X_{t}$ contains a trend if in the LS regression,}
\begin{equation}
C_{t}=\alpha +\beta t+u_{t}, \text{ } t=1,...,T, \label{tbeta3}
\end{equation}
\textit{$\beta=0$ is rejected.}
\end{defn}
Several remarks are relevant with respect to this definition: (i) regression (\ref{tbeta3}) has to be understood as the linear LS approximation of an unknown trend function $h(t)$ (see White, 1980); (ii) the parameter $\beta$ is the plim of $\widehat{\beta}_{ols}$; (iii) if the regression (\ref{tbeta3}) is the true data-generating process, with $u_t\sim I(0)$, then the OLS $\widehat{\beta }$ estimator is asymptotically equivalent to the GLS estimator (see Grenander and Rosenblatt, 1957); (iv) in practice, in order to test $\beta=0$, it is recommended to use a robust HAC version of $t_{\beta =0}$ (see Busetti and Harvey, 2008); and (v) this test only detects the existence of a trend but not the type of trend.
For all these reasons, in the empirical applications we implement Definition \ref{def4} by estimating regression (\ref{tbeta3}) using OLS and constructing a HAC version of $t_{\beta =0}$ (Newey and West, 1987).
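For illustration, a minimal Python sketch of this trend test using statsmodels is given below; the variable names and the HAC lag choice are our own and not part of the methodology itself:
\begin{verbatim}
import numpy as np
import statsmodels.api as sm

def trend_test(c, maxlags=4):
    """HAC (Newey-West) t-test of beta = 0 in C_t = alpha + beta*t + u_t."""
    t = np.arange(1, len(c) + 1)
    res = sm.OLS(np.asarray(c), sm.add_constant(t)).fit(
        cov_type="HAC", cov_kwds={"maxlags": maxlags})
    return res.params[1], res.tvalues[1], res.pvalues[1]
\end{verbatim}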
These linear trends can be common across characteristics, indicating similar patterns in the time evolution of these characteristics.
\begin{defn} \label{def5} \textit{(\underline{Co-trending})}: \textit{A set of $m$ distributional characteristics ($C_{1t}$, $C_{2t}$, ..., $C_{mt}$) linearly co-trend if in the multivariate regression}
\begin{equation}
\begin{pmatrix} C_{1t} \\ \vdots \\ C_{mt} \end{pmatrix} =
\begin{pmatrix} \alpha _{1} \\ \vdots \\ \alpha _{m} \end{pmatrix} +
\begin{pmatrix} \beta _{1} \\ \vdots \\ \beta _{m} \end{pmatrix} t +
\begin{pmatrix} u_{1t} \\ \vdots \\ u_{mt} \end{pmatrix}
\label{cotrend}
\end{equation}
\textit{ all the slopes are equal, $\beta _{1}=\beta _{2}=...=\beta _{m}.$} \footnote{This definition is slightly different from the one in Carrion-i-Silvestre and Kim (2019).}
\end{defn}
This co-trending hypothesis can be tested by a standard Wald test.
When $m=2$, an alternative linear co-trending test can be obtained from the regression
\begin{equation*}
C_{it}-C_{jt}=\alpha +\beta t+u_{t}, \quad i\neq j, \quad i,j=1,...,m,
\end{equation*}
by testing the null hypothesis of $\beta =0$ vs $\beta \neq 0$ using a simple $t_{\beta =0}$ test.
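In terms of the trend-test sketch above, this pairwise co-trending check is then immediate (a sketch; \texttt{q\_i} and \texttt{q\_j} denote two annual quantile series):
\begin{verbatim}
# Pairwise linear co-trending between two quantile series q_i and q_j:
# not rejecting beta = 0 for their difference means the quantiles co-trend.
beta, tstat, pval = trend_test(q_i - q_j)
co_trend = pval > 0.05  # at the 5% significance level
\end{verbatim}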
Climate classification is a tool used to recognize, clarify and simplify the existing climate heterogeneity in the Globe. It also helps us to better understand the Globe's climate and therefore to design more efficient global warming mitigation policies. The prevalent climate typology is that proposed by K\"oppen (1900) and later on modified in K\"oppen and Geiger (1930). It is an empirical classification that divides climate into five major types, which are represented by the capital letters A (tropical zone), B (dry zone), C (temperate zone), D (continental zone) and E (polar zone). Each of these climate types except for B is defined by temperature criteria. More recent classifications can be found in the AR6 of the IPCC (2021, 2022), but all of them share the spirit of the original one of K\"oppen (1900).
The climate classification we propose in this section is also based on temperature data and it has three simple distinctive characteristics:
\begin{itemize}
\item It considers the whole temperature distribution and not only the average.
\item It has a dynamic nature: it is based on the evolution of the trends of the temperature quantiles (lower and upper).
\item It can be easily tested.
\end{itemize}
\begin{defn} \label{def6} \textit{(\underline{Warming Typology})}:
\textit{We define four types of warming processes:}
\begin{itemize}
\item \textbf{W0}: \textit{There is no trend in any of the quantiles (No warming).}
\item \textbf{W1}: \textit{All the location distributional characteristics have the same positive trend (dispersion does not contain a trend).}
\item \textbf{W2}: \textit{The lower quantiles have a larger positive trend than the upper quantiles (dispersion has a negative trend).}
\item \textbf{W3}: \textit{The upper quantiles have a larger positive trend than the lower quantiles (dispersion has a positive trend).}
\end{itemize}
\end{defn}
Unlike weather, climate is understood as a medium- and long-term phenomenon, and it is therefore crucial to take trends into account. Notice that this typology can be used to describe macroclimate as well as microclimate locations.
Most of the literature on Global or Local warming only considers the trend behavior of the central part of the distribution (mean or median). By doing this, we lose very useful information that can be used to describe the whole warming process. This information is captured by the other elements of the typology, \textit{W1}, \textit{W2} and \textit{W3}. This typology does not say anything about the intensity of the warming process and its dynamics. Part of this intensity is captured in the following definitions of warming acceleration and warming amplification.
\begin{defn} \label{def7} \textit{(\underline{Warming Acceleration})}:
\textit{We say that there is warming acceleration in a distributional temperature characteristic $C_{t}$ between the time periods $t_1=(1,..., s)$ and $t_2=(s+1,..., T)$ if in the following two regressions:
}
\begin{equation}
C_{t}=\alpha_{1} +\beta_{1} t+u_{t}, \text{ } t=1, ...,s ,..., T,
\end{equation}
\begin{equation}
C_{t}=\alpha_{2} +\beta_{2} t+u_{t}, \text{ } t=s+1, ..., T, \label{acc}
\end{equation}
\textit{the second trend slope is larger than the first one: $\beta_{2} > \beta_{1}$.}\\
\end{defn}
In practice, we implement this definition by testing, in the previous system, the null hypothesis $\beta_{2}=\beta_{1}$ against the alternative $\beta_{2}>\beta_{1}$. An alternative warming acceleration test can be formed by testing for a structural break at $t=s$. Nevertheless, we prefer the approach of Definition \ref{def7} because it matches closely the existing narrative on warming acceleration in the climate literature.
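A naive sketch of this one-sided test is given below; it estimates the two regressions separately and neglects the covariance induced by the overlapping samples, which a joint system estimation would take into account:
\begin{verbatim}
import numpy as np
import statsmodels.api as sm

def acceleration_test(c, s, maxlags=4):
    """Compare the full-sample trend (t = 1..T) with the post-s trend."""
    def fit(y):
        t = np.arange(1, len(y) + 1)
        return sm.OLS(np.asarray(y), sm.add_constant(t)).fit(
            cov_type="HAC", cov_kwds={"maxlags": maxlags})
    r1, r2 = fit(c), fit(c[s:])
    se = np.sqrt(r1.bse[1] ** 2 + r2.bse[1] ** 2)  # neglects covariance
    return (r2.params[1] - r1.params[1]) / se  # one-sided t-statistic
\end{verbatim}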
\begin{defn} \label{def8} \textit{(\underline{Warming Amplification with respect to the mean})}:
\textit{ We say that there is warming amplification in a distributional characteristic $C_{t}$ with respect to the $mean$ if in the following regression:}
\begin{equation}
C_{t}=\beta _{0}+\beta _{1} mean_{t}+\epsilon_{t} \label{ampl}
\end{equation}
\textit{the mean slope is greater than one: $\beta_{1} >1$. }
\end{defn}
When the mean, $mean_{t}$, and $C_{t}$ come from the same distribution, we name this ``inner'' warming amplification. Otherwise, the mean may come from an external environment and, in that case, we call it ``outer'' warming amplification.
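A sketch of the amplification test, under the same conventions as the previous snippets:
\begin{verbatim}
import numpy as np
import statsmodels.api as sm

def amplification_test(c, mean_t, maxlags=4):
    """One-sided HAC test of H0: beta1 = 1 vs Ha: beta1 > 1 in
    C_t = beta0 + beta1*mean_t + eps_t."""
    res = sm.OLS(np.asarray(c), sm.add_constant(np.asarray(mean_t))).fit(
        cov_type="HAC", cov_kwds={"maxlags": maxlags})
    return res.params[1], (res.params[1] - 1.0) / res.bse[1]
\end{verbatim}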
Both concepts, acceleration and amplification, introduce a quantitative dimension to the ordinally defined typology. For example, acceleration, which has a dynamic character, allows us to observe the transition from one type of climate to another. Amplification, on the other hand, makes it possible to compare the magnitude of the trends that define each type of climate. It should be noted that, although static in nature, it can be computed recursively at different points in time.
The previous definitions classify the warming process of different regions, which is crucial in the design of local mitigation and adaptation policies. But we also need to compare the climate change processes of two regions in order to characterize climate heterogeneity independently of the type of warming they are experiencing. For this purpose, we propose the following definition, which shares the spirit of the stochastic dominance concept used in the economics-finance literature.
\begin{defn} \label{def9} \textit{(\underline{Warming Dominance (WD)})}:
\textit{We say that the temperature distributions of \textbf{Region $A$} warming dominates (\textbf{$WD$}) the temperature distributions of \textbf{Region $B$} if in the following regression
}
\begin{equation}
q_{\tau t}(A)- q_{\tau t}(B)=\alpha_{\tau} +\beta_{\tau} t +u_{\tau t} \label{wd},
\end{equation}
\textit{$\beta_{\tau}\geq 0$ for all $0<\tau<1$ and there is at least one value $\tau^{*}$ for which a strict inequality holds.}
\end{defn}
It is also possible to have only \emph{partial} WD, for instance, in the lower or upper quantiles.
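Operationally, warming dominance can be checked by running the trend test of the earlier sketch on the quantile differences over a grid of $\tau$ values; the dictionary-based interface below is our own convention:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def warming_dominance(qA, qB, taus, alpha=0.05):
    """qA, qB: dicts mapping tau -> annual quantile series of regions A, B.
    Reuses trend_test from the sketch above.  A sample-based check: all
    estimated slopes non-negative and at least one significantly positive."""
    results = {tau: trend_test(np.asarray(qA[tau]) - np.asarray(qB[tau]))
               for tau in taus}
    crit = norm.ppf(1 - alpha)
    dominates = (all(b >= 0 for b, _, _ in results.values())
                 and any(t > crit for _, t, _ in results.values()))
    return results, dominates
\end{verbatim}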
\section{The data\label{sec-data}}
\subsection{Spain}
The measurement of meteorological information in Spain started in the eighteenth century. However, it was not until the mid-nineteenth century that reliable and regular data became available. In Spain, there are four main sources of meteorological information: the Resumen Anual, Bolet\'{\i}n Diario, Bolet\'{\i}n Mensual de Climatolog\'{\i}a and Calendario Meteorol\'ogico. These were first published in 1866, 1893, 1940 and 1943, respectively. A detailed explanation of the different sources can be found in Carreras and Tafunell (2006).
Currently, AEMET (Agencia Estatal de Meterolog\'{\i}a) is the agency responsible for storing, managing and providing meteorological data to the public. Some of the historical publications, such as the Bolet\'{\i}n Diario and Calendario Meteorol\'ogico can be found in digital format in their respective archives for whose use it is necessary to use some kind of Optical Character Recognition (OCR) software.\footnote{$http://www.aemet.es/es/conocermas/recursos_en_linea/calendarios?n=todos$ and $https://repositorio.aemet.es/handle/20.500.11765/6290$.}
In 2015, AEMET developed AEMET OpenData, an Application Programming Interface (API REST) that allows the dissemination and reuse of Spanish meteorological and climatological information. To use it, the user needs to obtain an API key to allow access to the application. Then, either through the GUI or through a programming language such as Java or Python, the user can request data. More information about the use of the API can be found on their webpage.\footnote{$https://opendata.aemet.es/centrodedescargas/inicio$. The use of AEMET data is regulated in the following resolution $https://www.boe.es/boe/dias/2016/01/05/pdfs/BOE-A-2016-111.pdf$.}
In this paper, we are concerned with Spanish daily station data, specifically temperature data. Each station records the minimum, maximum and average temperature as well as the amount of precipitation, measured as liters per square meter. The data period ranges from 1920 to 2019. However, in 1920 only 13 provinces (out of 52) had stations available. It was not until 1965 that all 52 provinces had at least one working station. Moreover, it is important to keep in mind that the number of stations has increased substantially, from only 14 stations in 1920 to more than 250 in 2019.
With this information in mind, we select the longest span of time that guarantees a wide sample of stations, so that all the geographical areas of peninsular Spain are represented. For this reason, we decided to work with station data from 1950 to 2019. There are 30 stations, whose geographical distribution is displayed in the map in Figure \ref{fig-data}. The original daily data are converted into monthly data, so that we finally work with a total of $30 \times 12$ station-month units corresponding to peninsular Spain and, consequently, we have 360 observations each year with which to construct the annual distributional characteristics.
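As an illustration, a minimal pandas sketch of how the annual distributional characteristics can be built from such station-month units; the column names are our own assumptions about the data layout:
\begin{verbatim}
import pandas as pd

def annual_characteristics(df):
    """df: one row per station-month unit, with (hypothetical) columns
    'year' and 'temp'; returns one time-series object per characteristic."""
    g = df.groupby("year")["temp"]
    chars = pd.DataFrame({"mean": g.mean(), "std": g.std(),
                          "iqr": g.quantile(0.75) - g.quantile(0.25)})
    for tau in (0.05, 0.10, 0.50, 0.90, 0.95):
        chars["q%02d" % int(tau * 100)] = g.quantile(tau)
    return chars
\end{verbatim}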
\subsection{The Globe}
In the case of the Globe, we use the database of the Climate Research Unit (CRU) that offers monthly and yearly data of land and sea temperatures in both hemispheres from 1850 to the present, collected from different stations around the world.\footnote{We use CRUTEM version 5.0.1.0, which can be downloaded from (https://crudata.uea.ac.uk/cru/data/temperature/). A recent revision of the methodology can be found in Jones et al. (2012).} Each station temperature is converted to an anomaly, taking 1961-1990 as the base period, and each grid-box value, on a five-degree grid, is the mean of all the station anomalies within that grid box. This database (in particular, the annual temperature of the Northern Hemisphere) has become one of the most widely used to illustrate GW from records of thermometer readings. These records form the blade of the well-known ``hockey stick'' graph, frequently used by academics and other institutions, such as the IPCC. In this paper, we prefer to base our analysis on raw station data, as in GG2020.
The database provides data from 1850 to the present, although due to the high variability at the beginning of the period it is customary in the literature to begin in 1880. In this work, we have selected the stations that are permanently present in the period 1950-2019 according to the concept of the station-month unit. In this way, the results are comparable with those obtained for Spain. Although there are 10,633 stations on record, the effective number fluctuates each year and there are only 2,192 stations with data for all the years in the sample period, which yields 19,284 station-month units each year (see the geographical distribution in the map in Figure \ref{fig-data}).\footnote{In the CRU data there are 115 Spanish stations. However, after removing stations not present for the whole 1880 to 2019 period, only Madrid-Retiro, Valladolid and Soria remain. Since 1950, applying the same criteria, only 30 remain.} In summary, we analyze raw global data (stations instead of grids) for the period 1950 to 2019, compute the station-month units that remain present throughout, and with these build the annual distributional characteristics.
\begin{figure}[h!]
\begin{center}
\caption{Geographical distribution of stations}
\label{fig-data}
\subfloat[{\small Spain. Selected stations, AEMET data 1950-2019}]{
\includegraphics[scale=0.9]{Figures/stations_1950e}}\\
\subfloat[{\small The Globe. Selected stations, CRU data 1950-2019}]{
\includegraphics[scale=0.7]{Figures/Map_Globe2}}
\end{center}
\end{figure}
\section{Empirical strategy\label{sec-emp}}
In this section we apply our three-step quantitative methodology to show the existing climate heterogeneity between Spain and the Globe, as well as within Spain, between Madrid and Barcelona. Because all our definitions are written in a testing format, it is straightforward to apply them empirically. First, we test for the existence of warming by testing for the existence of a trend in a given distributional characteristic. How common the trends of the different characteristics are (revealed by a co-trending test) determines the warming typology. Second, the strength of the warming process is assessed by testing the hypotheses of warming acceleration and warming amplification. And third, independently of the warming typology, we determine how the warming process of Spain compares with that of the Globe as a whole (we do the same for Madrid and Barcelona). This is done by testing for warming dominance.
The results are presented according to the following steps: first, we apply our trend test (see Definition \ref{def4}) to determine the existence of local or global warming and test for any possible warming acceleration; second, we test different co-trending hypotheses to determine the type of warming of each area; third, we test the warming amplification hypothesis for different quantiles with respect to the mean (of Spain as well as of the Globe): $H_{0}: \beta_{1}=1$ versus $H_{a}: \beta_{1}>1$ in (\ref{ampl}); and finally, we compare the \textit{CC} of different regions, for Spain and the Globe, and within Spain, between Madrid and Barcelona, with our warming dominance test (see (\ref{wd})).\footnote{Before testing for the presence of trends in the distributional characteristics of the data, we test for the existence of unit roots. To do so, we use the well-known Augmented Dickey-Fuller test (ADF; Dickey and Fuller, 1979), where the number of lags is selected in accordance with the SBIC criterion. The results, available from the authors on request, show that the null hypothesis of a unit root is rejected for all the characteristics considered.}
\subsection{Local warming: Spain \label{sec-cross-Spain}}
The cross-sectional analysis rests on two choices. First, we choose a period, 1950-2019, that is sufficiently long and representative of the geographical diversity of the Spanish Iberian Peninsula. Second, we work with month-station units built from daily observations to construct the annual observations of the time series object from the data supplied by the stations, following a methodology similar to that carried out for the whole planet in GG2020.\footnote{The results with daily averages are very similar. The decision to work with monthly instead of daily data in the cross-sectional approach was based on its compatibility with the data available for the Globe.} The study comprises the steps described in the previous section. The density of the data and the evolution of the characteristics are displayed, respectively, in Figures \ref{fig-density-Spain} and \ref{fig-char-1950-monthly}.
We find positive and significant trends in the \textit{mean}, \textit{max}, \textit{min} and all the quantiles. Therefore, from Definition \ref{def1}, we conclude that there exists clear local warming (see Table \ref{tab-1950-Spain-monthly-rec-acc}).
The recursive evolution for the periods 1950-2019 and 1970-2019 shows a clear increase in the trends of the \textit{mean}, some dispersion measures and higher quantiles (see the last column of Table \ref{tab-1950-Spain-monthly-rec-acc}). More precisely, there is a significant trend acceleration in most of the distributional characteristics except the lower quantiles (below \textit{q20}). These quantiles, \textit{q05} and \textit{q10}, remain stable.
The co-trending tests for the full sample 1950-2019 show a similar evolution of the trend for all the quantiles with a constant \textit{iqr} (see Table \ref{Tab-cotrend-since1950-monthly-1950}). This indicates that in this period the warming process of Spain can be considered of type \textit{W1}. More recently, 1970-2019, the co-trending tests (see Table \ref{Tab-cotrend-since1950-monthly-1970}) indicate that the upper quantiles grow faster than the lower ones. This, together with a positive trend in the dispersion measured by the \textit{iqr}, shows that Spain has evolved from a \textit{W1} to a \textit{W3} warming type process.
Finally, no evidence of ``inner'' amplification during the period 1950-2019 is found in the lower quantiles. Regarding the upper quantiles, we found both ``inner'' and ``outer'' amplification in the second period, which supports the previous finding of a transition from type \textit{W1} to type \textit{W3} (see Table \ref{tab-amplif-Spain}).
Summing up, with our proposed tests for the evolution of the trend of the whole temperature distribution, we conclude that Spain has evolved from a \textit{W1} type to a much more dangerous \textit{W3} type. The results of acceleration and dynamic amplification reinforce the finding of this transition to type \textit{W3}.
\begin{figure}[h!]
\begin{center}
\caption{Spain annual temperature density calculated with monthly data across stations} \label{fig-density-Spain}
\includegraphics[scale=0.5]{Figures/Figure_density_Spain_1950}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\caption{Characteristics of temperature data in Spain with stations selected since 1950 (monthly data across stations, AEMET, 1950-2019)} \label{fig-char-1950-monthly}
\includegraphics[scale=0.5]{Figures/fig_quantiles_monthly_selected_stations_since_1950.png}
\end{center}
\end{figure}
\begin{table}[h!]\caption{Trend acceleration hypothesis (Spain monthly data across stations, AEMET, 1950-2019)}\label{tab-1950-Spain-monthly-rec-acc}\begin{center}\scalebox{0.5}{\begin{tabular}{l*{5}{c}} \hline \hline
& \multicolumn{2}{c}{Trend test by periods}& \multicolumn{1}{c}{Acceleration test}\\
names/periods&1950-2019& 1970-2019& 1950-2019, 1970-2019\\ \hline
mean &0.0242&0.0389&3.0294\\
& (0.0000)& (0.0000)& (0.0015) \\
max &0.0312&0.0526&2.7871\\
& (0.0000)& (0.0000)& (0.0030) \\
min &0.0289&0.0251&-0.2557\\
& (0.0000)& (0.0654)& (0.6007) \\
std &0.0036&0.0098&1.7952\\
& (0.0518)& (0.0021)& (0.0374) \\
iqr &0.0051&0.0158&1.8197\\
& (0.1793)& (0.0028)& (0.0355) \\
rank &0.0023&0.0276&1.2705\\
& (0.8249)& (0.1127)& (0.1030) \\
kur &-0.0010&-0.0018&-0.9191\\
& (0.0203)& (0.0198)& (0.8202) \\
skw &0.0011&-0.0002&-1.5989\\
& (0.0271)& (0.7423)& (0.9439) \\
q5 &0.0227&0.0206&-0.2559\\
& (0.0000)& (0.0059)& (0.6008) \\
q10 &0.0200&0.0203&0.0406\\
& (0.0000)& (0.0077)& (0.4838) \\
q20 &0.0209&0.0300&1.4158\\
& (0.0000)& (0.0000)& (0.0796) \\
q30 &0.0221&0.0333&2.0100\\
& (0.0000)& (0.0000)& (0.0232) \\
q40 &0.0213&0.0366&2.4867\\
& (0.0000)& (0.0000)& (0.0071) \\
q50 &0.0211&0.0404&3.2496\\
& (0.0000)& (0.0000)& (0.0007) \\
q60 &0.0246&0.0446&3.1147\\
& (0.0000)& (0.0000)& (0.0011) \\
q70 &0.0273&0.0478&3.3143\\
& (0.0000)& (0.0000)& (0.0006) \\
q80 &0.0275&0.0471&2.6949\\
& (0.0000)& (0.0000)& (0.0040) \\
q90 &0.0321&0.0548&3.2441\\
& (0.0000)& (0.0000)& (0.0007) \\
q95 &0.0335&0.0526&3.3568\\
& (0.0000)& (0.0000)& (0.0005) \\
\hline \hline \end{tabular}}\end{center}
\begin{tablenotes}
\tiny{ \textit{Note}: OLS estimates and HAC p-values in parenthesis of the $t_{\beta=0}$ test from regression: $C_{t}=\alpha+\beta t+u_{t}$, for two different time periods. For the acceleration hypothesis we run the system: $C_{t}=\alpha_{1} +\beta_{1} t+u_{t}, \text{ } t=1, ...,s ,..., T, C_{t}=\alpha_{2} +\beta_{2} t+u_{t}, \text{ } t=s+1, ..., T, \text{and test the null hypothesis } \beta_{2}=\beta_{1} \text{ against the alternative} \beta_{2}>\beta_{1}$. We show the value of the t-statistic and its HAC p-value.}
\end{tablenotes}\end{table}
\begin{table}[h!]\caption{Co-trending analysis (Spain monthly data across stations, AEMET, 1950-2019)}\label{Tab-cotrend-since1950-monthly-1950}\begin{center}\scalebox{0.7}{\begin{tabular}{l*{3}{c}} \hline \hline
Joint hypothesis tests&Wald test&p-value\\ \hline
All quantiles (q05, q10,...,q90, q95)&13.235&0.211 \\
Lower quantiles (q05, q10, q20, q30) &0.310&0.958 \\
Medium quantiles (q40, q50, q60) &0.438&0.803 \\
Upper quantiles (q70, q80, q90, q95) &1.515&0.679 \\
Lower-Medium quantiles (q05, q10, q20, q30, q40, q50, q60) &0.771&0.993 \\
Medium-Upper quantiles (q40, q50, q60, q70, q80, q90, q95) &8.331&0.215 \\
Lower-Upper quantiles (q05, q10, q20,q30, q70, q80, q90, q95 ) &11.705&0.111 \\
\hline
Spacing hypothesis&Trend-coeff.&p-value\\ \hline
q50-q05 &-0.002&0.786 \\
q95-q50&0.012&0.000 \\
q95-q05 &0.011&0.096 \\
q75-q25 (iqr) &0.005&0.179 \\
\hline \hline \end{tabular}}\end{center}
\begin{tablenotes}
\textit{Note}: Annual distributional characteristics (quantiles) of temperature. The top panel shows the Wald test of the null hypothesis of equality
of trend coefficients for a given set of characteristics. In the bottom panel, the TT is applied to the difference between two
representative quantiles.
\end{tablenotes}\end{table}
\begin{table}[h!]\caption{Co-trending analysis (Spain monthly data across stations, AEMET, 1970-2019)}\label{Tab-cotrend-since1950-monthly-1970}\begin{center}\scalebox{0.7}{\begin{tabular}{l*{3}{c}} \hline \hline
Joint hypothesis tests&Wald test&p-value\\ \hline
All quantiles (q05, q10,...,q90, q95)&38.879&0.000 \\
Lower quantiles (q05, q10, q20, q30) &3.121&0.373 \\
Medium quantiles (q40, q50, q60) &1.314&0.518 \\
Upper quantiles (q70, q80, q90, q95) &1.719&0.633 \\
Lower-Medium quantiles (q05, q10, q20, q30, q40, q50, q60) &12.771&0.047 \\
Medium-Upper quantiles (q40, q50, q60, q70, q80, q90, q95) &10.675&0.099 \\
Lower-Upper quantiles (q05, q10, q20,q30, q70, q80, q90, q95 ) &37.892&0.000 \\
\hline
Spacing hypothesis&Trend-coeff.&p-value\\ \hline
q50-q05 &0.020&0.029 \\
q95-q50&0.012&0.050 \\
q95-q05 &0.032&0.002 \\
q75-q25 (iqr) &0.016&0.003 \\
\hline \hline \end{tabular}}\end{center}
\begin{tablenotes}
\textit{Note}: Annual distributional characteristics (quantiles) of temperature. The top panel shows the Wald test of the null hypothesis of equality
of trend coefficients for a given set of characteristics. In the bottom panel, the TT is applied to the difference between two
representative quantiles.
\end{tablenotes}\end{table}
\begin{table}[h!]\caption{Amplification hypothesis (Spain monthly data, AEMET, 1950-2019)}\label{tab-amplif-Spain}\begin{center}\scalebox{0.8}{\begin{tabular}{l*{5}{c}} \hline \hline
periods/variables&1950-2019&1970-2019&1950-2019&1970-2019\\ \hline
& \multicolumn{2}{c}{Inner}& \multicolumn{2}{c}{Outer}\\ \hline
q05&0.80&0.56&0.55&0.39\\
& (0.866)& (0.998)& (0.990)& (0.996) \\
q10&0.83&0.65&0.62&0.52\\
& (0.899)& (0.994)& (0.992)& (0.986) \\
q20&0.94&0.90&0.76&0.81\\
& (0.816)& (0.890)& (0.993)& (0.899) \\
q30&0.93&0.91&0.77&0.87\\
& (0.935)& (0.929)& (0.997)& (0.834) \\
q40&0.97&1.03&0.80&0.97\\
& (0.744)& (0.318)& (0.978)& (0.566) \\
q50&0.98&1.10&0.83&1.12\\
& (0.612)& (0.067)& (0.944)& (0.212) \\
q60&1.09&1.15&0.96&1.23\\
& (0.103)& (0.051)& (0.619)& (0.056) \\
q70&1.11&1.16&1.05&1.30\\
& (0.040)& (0.006)& (0.350)& (0.028) \\
q80&1.11&1.14&1.06&1.29\\
& (0.083)& (0.071)& (0.325)& (0.060) \\
q90&1.14&1.16&1.19&1.45\\
& (0.101)& (0.118)& (0.078)& (0.007) \\
q95&1.10&1.09&1.18&1.36\\
& (0.089)& (0.191)& (0.051)& (0.008) \\
\hline \hline \end{tabular}}\end{center}
\begin{tablenotes}
\textit{Note}: OLS estimates and HAC p-values of the t-statistic of testing $H_{0}: \beta_{i}=1$ versus $H_{a}: \beta_{i}>1$ in the regression: $C_{it}=\beta _{i0}+\beta _{i1} mean_{t}+\epsilon_{it}$. $mean$ refers to the average of the Spanish/Global temperature distribution for the ``inner'' and ``outer'' cases, respectively.
\end{tablenotes}\end{table}
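The note above fully specifies the amplification regression, so it can be checked mechanically. The following illustrative Python sketch (not the authors' code; the variable names and the Newey--West lag length are assumptions) estimates $C_{it}=\beta _{i0}+\beta _{i1} mean_{t}+\epsilon_{it}$ for one characteristic and returns a one-sided HAC p-value for $H_{0}: \beta_{i1}=1$ against $H_{a}: \beta_{i1}>1$:
\begin{verbatim}
# Illustrative sketch (assumed names): amplification test of one annual
# characteristic c_t on the regional mean temperature mean_t.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

def amplification_test(c, mean, lags=4):   # lag length is an assumption
    X = sm.add_constant(np.asarray(mean, dtype=float))
    res = sm.OLS(np.asarray(c, dtype=float), X).fit(
        cov_type="HAC", cov_kwds={"maxlags": lags})
    beta = res.params[1]
    tstat = (beta - 1.0) / res.bse[1]      # t-statistic for H0: beta = 1
    return beta, 1.0 - norm.cdf(tstat)     # one-sided p-value (normal approx.)
\end{verbatim}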
\clearpage
\subsection{Global warming: the Globe}
In this section, we carry out a similar analysis to that described in the previous subsection for Spain. Figures \ref{fig-density-Globe} and \ref{fig-quantiles-Globe-monthly} show the time evolution of the Global temperature densities and their different distributional characteristics from 1950 to 2019. The data in both figures are obtained from stations that report data throughout the sample period.
Table \ref{Tab-1950-Globe-monthly-acc} shows a positive trend in the mean as well as in all the quantiles. This indicates the clear existence of Global warming, more pronounced (larger trend) in the lower part of the distribution, which translates into a negative trend in the dispersion measures. The warming process undergoes an acceleration in all the quantiles above \textit{q30}.
From the co-trending analysis (see Tables \ref{Tab-cotrend-Globe-monthly-1950-2019} and \ref{Tab-cotrend-Globe-monthly-1970-2019}) we can determine the type of warming process characterizing the whole Globe. Table \ref{Tab-cotrend-Globe-monthly-1950-2019} indicates that in the period 1950-2019 the Globe experienced a \textit{W2} warming type (the lower part of the temperature distribution grows faster than the middle and upper parts, implying that \textit{iqr} and \textit{std} have a negative trend). Similar results hold for the period 1970-2019 (in this case only the dispersion measure \textit{std} has a negative trend).
The asymmetric amplification results shown in Table \ref{tab-amplif-Globe} reinforce the \textit{W2} typology for the whole Globe: an increase of one degree in the global mean temperature increases the lower quantiles by more than one degree. This does not occur with the upper part of the distribution. Notice that this amplification goes beyond the standard Arctic amplification (\textit{q05}), also affecting \textit{q10}, \textit{q20} and \textit{q30}.
Summing up, the results from our different proposed tests for the evolution of the trend of the whole temperature distribution indicate that the Globe can be cataloged as undergoing a type-\textit{W2} warming process. This warming type may have more serious consequences for ice melting, sea level increases, permafrost thaw, $CO_{2}$ migration, etc.\ than the other types.
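To make the trend and acceleration tests reproducible in spirit, the sketch below is an illustrative Python implementation (not the authors' code). The trend test follows the table notes exactly; for the acceleration hypothesis, the paper estimates a two-equation system, and the interaction-dummy regression used here is a common stand-in that tests whether the post-break slope exceeds the pre-break one. Variable names and the HAC lag length are assumptions.
\begin{verbatim}
# Illustrative sketch: trend test and a Chow-style acceleration check for
# an annual distributional characteristic C_t.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

def trend_test(c, lags=4):
    # HAC t-test of beta = 0 in C_t = alpha + beta*t + u_t.
    t = np.arange(1.0, len(c) + 1.0)
    res = sm.OLS(np.asarray(c, dtype=float), sm.add_constant(t)).fit(
        cov_type="HAC", cov_kwds={"maxlags": lags})
    return res.params[1], res.pvalues[1]

def acceleration_test(c, split, lags=4):
    # One-sided test of a larger trend slope after observation `split`.
    n = len(c)
    t = np.arange(1.0, n + 1.0)
    post = (t > split).astype(float)
    X = np.column_stack([np.ones(n), t, post, post * t])
    res = sm.OLS(np.asarray(c, dtype=float), X).fit(
        cov_type="HAC", cov_kwds={"maxlags": lags})
    tstat = res.tvalues[3]                 # slope change, post minus pre
    return tstat, 1.0 - norm.cdf(tstat)    # one-sided p-value
\end{verbatim}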
\begin{figure}[h!]
\begin{center}
\caption{Global annual temperature density calculated with monthly data across stations} \label{fig-density-Globe}
\includegraphics[scale=0.5]{Figures/Figure_density_Globe_1950}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\caption{Characteristics of temperature data in the Globe (monthly data across stations, CRU, 1950-2019)} \label{fig-quantiles-Globe-monthly}
\includegraphics[scale=0.5]{Figures/Fig_quantiles_Globe_1950_2019}
\end{center}
\end{figure}
\begin{table}[h!]\caption{Trend acceleration hypothesis (CRU monthly data across stations, 1950-2019)}\label{Tab-1950-Globe-monthly-acc}\begin{center}\scalebox{0.5}{\begin{tabular}{l*{5}{c}} \hline \hline
& \multicolumn{2}{c}{Trend test by periods}& \multicolumn{1}{c}{Acceleration test}\\
names/periods&1950-2019& 1970-2019& 1950-2019, 1970-2019\\ \hline
mean &0.0213&0.0300&2.2023\\
& (0.0000)& (0.0000)& (0.0147) \\
max &0.0361&0.0523&1.1217\\
& (0.0000)& (0.0001)& (0.1320) \\
min &0.0423&-0.0109&0.5016\\
& (0.0000)& (0.5867)& (0.3084) \\
std &-0.0070&-0.0057&0.1776\\
& (0.0000)& (0.0570)& (0.4296) \\
iqr &-0.0067&-0.0043&0.2454\\
& (0.0435)& (0.4183)& (0.4033) \\
rank &-0.0062&0.0632&0.2181\\
& (0.5876)& (0.0005)& (0.4138) \\
kur &-0.0010&0.0001&0.0445\\
& (0.5205)& (0.9566)& (0.4823) \\
skw &0.0006&0.0003&0.0301\\
& (0.0577)& (0.5726)& (0.4880) \\
q5 &0.0404&0.0468&0.7035\\
& (0.0000)& (0.0000)& (0.2415) \\
q10 &0.0305&0.0406&0.9273\\
& (0.0000)& (0.0001)& (0.1777) \\
q20 &0.0253&0.0342&1.0156\\
& (0.0000)& (0.0000)& (0.1558) \\
q30 &0.0215&0.0280&1.2056\\
& (0.0000)& (0.0000)& (0.1150) \\
q40 &0.0192&0.0293&1.9873\\
& (0.0000)& (0.0000)& (0.0245) \\
q50 &0.0179&0.0268&1.8614\\
& (0.0000)& (0.0000)& (0.0324) \\
q60 &0.0185&0.0291&2.1971\\
& (0.0000)& (0.0000)& (0.0149) \\
q70 &0.0185&0.0288&2.5770\\
& (0.0000)& (0.0000)& (0.0055) \\
q80 &0.0160&0.0257&2.2460\\
& (0.0000)& (0.0000)& (0.0132) \\
q90 &0.0146&0.0243&2.0848\\
& (0.0005)& (0.0000)& (0.0195) \\
q95&0.0143&0.0239&1.7520\\
& (0.0001)& (0.0000)& (0.0410) \\
\hline \hline \end{tabular}}\end{center}
\begin{tablenotes}
\tiny{ \textit{Note}: OLS estimates and HAC p-values in parentheses of the $t_{\beta=0}$ test from the regression: $C_{t}=\alpha+\beta t+u_{t}$, for two different time periods. For the acceleration hypothesis we run the system: $C_{t}=\alpha_{1} +\beta_{1} t+u_{t}, \text{ } t=1, ...,s ,..., T; \ C_{t}=\alpha_{2} +\beta_{2} t+u_{t}, \text{ } t=s+1, ..., T, \text{ and test the null hypothesis } \beta_{2}=\beta_{1} \text{ against the alternative } \beta_{2}>\beta_{1}$. We show the value of the t-statistic and its HAC p-value.}
\end{tablenotes}\end{table}
\begin{table}[h!]\caption{Co-trending analysis (CRU monthly data, 1950-2019)}\label{Tab-cotrend-Globe-monthly-1950-2019}\begin{center}\scalebox{0.7}{\begin{tabular}{l*{3}{c}} \hline \hline
Joint hypothesis tests&Wald test&p-value\\ \hline
All quantiles (q05, q10,...,q90, q95)&25.143&0.005 \\
Lower quantiles (q05, q10, q20, q30) &9.545&0.023 \\
Medium quantiles (q40, q50, q60) &0.078&0.962 \\
Upper quantiles (q70, q80, q90, q95) &1.099&0.777 \\
Lower-Medium quantiles (q05, q10, q20, q30, q40, q50, q60) &17.691&0.007 \\
Medium-Upper quantiles (q40, q50, q60, q70, q80, q90, q95) &2.041&0.916 \\
Lower-Upper quantiles (q05, q10, q20,q30, q70, q80, q90, q95 ) &24.683&0.001 \\
\hline
Spacing hypothesis&Trend-coeff.&p-value\\ \hline
q50-q05 &-0.022&0.000 \\
q95-q50&-0.004&0.193 \\
q95-q05 &-0.026&0.000 \\
q75-q25 (iqr) &-0.007&0.043 \\
\hline \hline \end{tabular}}\end{center}
\begin{tablenotes}
\textit{Note}: Annual distributional characteristics (quantiles) of temperature. The top panel shows the Wald test of the null hypothesis of equality
of trend coefficients for a given set of characteristics. In the bottom panel, the TT is applied to the difference between two
representative quantiles.
\end{tablenotes}\end{table}
\begin{table}[h!]\caption{Co-trending analysis (CRU monthly data, 1970-2019)}\label{Tab-cotrend-Globe-monthly-1970-2019}\begin{center}\scalebox{0.7}{\begin{tabular}{l*{3}{c}} \hline \hline
Joint hypothesis tests&Wald test&p-value\\ \hline
All quantiles (q05, q10,...,q90, q95)&18.478&0.047 \\
Lower quantiles (q05, q10, q20, q30) &5.523&0.137 \\
Medium quantiles (q40, q50, q60) &0.569&0.752 \\
Upper quantiles (q70, q80, q90, q95) &2.667&0.446 \\
Lower-Medium quantiles (q05, q10, q20, q30, q40, q50, q60) &7.606&0.268 \\
Medium-Upper quantiles (q40, q50, q60, q70, q80, q90, q95) &6.714&0.348 \\
Lower-Upper quantiles (q05, q10, q20,q30, q70, q80, q90, q95 ) &14.520&0.043 \\
\hline
Spacing hypothesis&Trend-coeff.&p-value\\ \hline
q50-q05 &-0.020&0.047 \\
q95-q50&-0.003&0.462 \\
q95-q05 &-0.023&0.048 \\
q75-q25 (iqr) &-0.004&0.418 \\
\hline \hline \end{tabular}}\end{center}
\begin{tablenotes}
\textit{Note}: Annual distributional characteristics (quantiles) of temperature. The top panel shows the Wald test of the null hypothesis of equality
of trend coefficients for a given set of characteristics. In the bottom panel, the TT is applied to the difference between two
representative quantiles.
\end{tablenotes}\end{table}
\begin{table}[h!]\caption{Amplification hypotheses (CRU monthly data across stations, 1950-2019)}\label{tab-amplif-Globe}\begin{center}\scalebox{0.8}{\begin{tabular}{l*{3}{c}} \hline \hline
periods/variables&1950-2019&1970-2019\\ \hline
q05&2.00&1.83\\
& (0.000)& (0.000) \\
q10&1.79&1.73\\
& (0.000)& (0.001) \\
q20&1.41&1.37\\
& (0.000)& (0.000) \\
q30&1.07&1.00\\
& (0.089)& (0.502) \\
q40&0.88&0.91\\
& (0.999)& (0.973) \\
q50&0.74&0.81\\
& (1.000)& (0.997) \\
q60&0.74&0.85\\
& (0.999)& (0.973) \\
q70&0.77&0.85\\
& (1.000)& (0.988) \\
q80&0.72&0.78\\
& (1.000)& (1.000) \\
q90&0.69&0.70\\
& (1.000)& (1.000) \\
q95&0.60&0.64\\
& (1.000)& (1.000) \\
\hline \hline \end{tabular}}\end{center}
\begin{tablenotes}
\textit{Note}: OLS estimates and HAC p-values of the t-statistic of testing $H_{0}: \beta_{i}=1$ versus $H_{a}: \beta_{i}>1$ in the regression: $C_{it}=\beta _{i0}+\beta _{i1} mean_{t}+\epsilon_{it}$. $mean$ refers to the average of the Global temperature distribution.
\end{tablenotes}\end{table}
\clearpage
\subsection{Micro-local warming: Madrid and Barcelona}
The existence of warming heterogeneity implies that, in order to design more efficient mitigation policies, they have to be developed at different levels: global, country, region, etc. How local we need to go will depend on the existing degree of micro-warming heterogeneity. In this subsection, we go to the smallest level, the climate-station level. We analyze, within Spain, the warming process in two weather stations corresponding to two cities: Madrid (Retiro station) and Barcelona (Fabra station).\footnote{For Madrid and Barcelona, data are available since the 1920s; nevertheless, we begin the study in 1950 for consistency with the previous analysis of Spain and the Globe.} Obviously, the data provided by these stations are not cross-sectional but pure time series data. Our methodology can be easily applied to higher-frequency time series, in this case daily data, to compute the distributional characteristics (see Figures \ref{fig-char-daily-Madrid-1950} and \ref{fig-char-daily-Barcelona-1950}).\footnote{See the applications to Central England in GG2020 and to Madrid, Zaragoza and Oxford in Gadea and Gonzalo (2022).}
The results are shown in the Appendix. These two stations, Madrid-Retiro and Barcelona-Fabra, clearly experience two different types of warming. First, there is evidence of micro-local warming, understood as the presence of significant and positive trends, in all the important temperature distributional characteristics of both stations. The acceleration phenomenon is also clearly detected; in other words, the warming increases as time passes (see Tables \ref{Tab-1950-Madrid-daily-rec-acc} and \ref{Tab-1950-Barcelona-daily-rec-acc}). Secondly, from the co-trending tests (Tables \ref{Tab-cotrend-Madrid-daily-1950}-\ref{Tab-cotrend-Madrid-daily-1970} and \ref{Tab-cotrend-Barcelona-daily-1950}-\ref{Tab-cotrend-Barcelona-daily-1970}), it can be concluded that the warming process of Madrid-Retiro is type \textit{W3}, while that of Barcelona-Fabra is type \textit{W1}. In both cases the warming typology is stable across both sample periods (1950-2019 and 1970-2019). Thirdly, as expected, Madrid-Retiro presents ``inner'' and ``outer'' amplification for the upper quantiles, while Barcelona-Fabra does so only for the center part of its temperature distribution (see Tables \ref{Tab-amplif-Madrid-1950} and \ref{Tab-amplif-Barcelona-1950}).
Summing up, even within Spain we find evidence of warming heterogeneity. While Madrid (continental Mediterranean climate) has a pattern similar to that of peninsular Spain (1970-2019), \textit{W3}, Barcelona (Mediterranean coastline climate) maintains a \textit{W1} typology. Thus, there are two different warming processes, which require mitigation policies at the country level as well as at the very local level.
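As an illustration of the first step at the station level, the following Python sketch (illustrative only; it assumes the daily series is a pandas Series indexed by date) computes the annual distributional characteristics used throughout the paper:
\begin{verbatim}
# Illustrative sketch: annual distributional characteristics (mean, std,
# extremes, iqr and quantiles) from a daily station temperature series.
import pandas as pd

def annual_characteristics(daily):
    g = daily.groupby(daily.index.year)
    out = pd.DataFrame({"mean": g.mean(), "std": g.std(),
                        "min": g.min(), "max": g.max(),
                        "iqr": g.quantile(0.75) - g.quantile(0.25)})
    for q in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95):
        out["q%02d" % q] = g.quantile(q / 100.0)
    return out
\end{verbatim}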
\section{Comparing results}
The goal of this section is to show the existence of climate heterogeneity by comparing the results obtained from applying our three-step methodology to different regions. These results are summarized in Table \ref{Tab-summary}. It is clear that there is distributional warming in all the analyzed areas; but this warming follows different patterns, and sometimes the warming type is not even stable. In the case of Spain, it depends on the period under consideration. Figure \ref{fig-comp-Globe-Spain-Madrid-Barcelona} captures graphically the different trend behavior and intensity of the distributional characteristics by region (Spain, the Globe, Madrid and Barcelona).\footnote{The analysis of other characteristics, such as the third- and fourth-order moments, can contribute to the characterization of the temperature distributions. In the case of Spain, the kurtosis is always negative, with a mean value of -0.8 and a significant negative trend, which means that we are dealing with a platykurtic distribution with tails thinner than the Normal, a shape feature that is accentuating over time. However, it is not possible to draw conclusions about symmetry given its high variability over time. Conversely, the temperature distribution of the Globe is clearly leptokurtic, with an average kurtosis of 0.9 and a negative but not significant trend. The global temperature observations are therefore more concentrated around the mean and their tails are thicker than in a Normal distribution. The skewness is clearly negative, although a decreasing and significant trend points to a reduction of the negative skewness. } The graphical results in this figure coincide with the results of the warming typology tests shown in Table \ref{Tab-summary}.
The middle of Table \ref{Tab-summary} shows that warming acceleration is detected in all the locations. This acceleration is more general in Spain than in the Globe (see also the heatmap in Figure \ref{fig-comp-Globe-Spain-heatmap}) and in Barcelona than in Madrid. Apart from these differences, the acceleration shares certain similarities across regions. This is not the case for the warming amplification, which is clearly asymmetric. Spain suffers an amplification in the upper quantiles, while the Globe does so in the lower ones. Notice that the latter amplification goes beyond the standard results found in the literature for the Arctic region (\textit{q05}): we also detect amplification for the quantiles \textit{q10}-\textit{q30}. Between Madrid and Barcelona, Madrid suffers a wider warming amplification.
The results of the first two steps of our methodology are obtained region by region (Spain, the Globe, Madrid and Barcelona). It is in the last step, via the warming dominance test (see the numerical results in Table \ref{tab-WD}), that we compare one region directly with another. Warming in Spain dominates that of the Globe in all the quantiles except the lowest one, \textit{q05}.\footnote{A more detailed analysis of the warming process suffered in the Arctic region can be found in Gadea and Gonzalo (2021).} This supports the view, held by European institutions and reflected in international reports, that climate change is more intense in the Iberian Peninsula. Warming in Madrid dominates that of Barcelona in the upper quantiles, while the reverse is the case in the lower quantiles. This latter result coincides with the idea that regions close to the sea have milder upper temperatures.
Further research (beyond the scope of this paper) will go in the direction of finding the possible causes behind the warming types \textit{W1}, \textit{W2} and \textit{W3}. Following the literature on diurnal temperature asymmetry (Diurnal Temperature Range, $DTR= T_{max}-T_{min}$), we can suggest cloud coverage (Karl et al.\ 1993) and the planetary boundary layer (see Davy et al.\ 2017) as possible causes for \textit{W2}, and the process of desertification (see Karl et al.\ 1993) for \textit{W3}.
Summarizing, in this section we describe, measure and test the existence of warming heterogeneity in different regions of the planet. It is important to note that these extensive results cannot be obtained from the standard analysis of the average temperature.
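The warming dominance step of Table \ref{tab-WD} amounts to one trend regression per quantile difference; an illustrative Python sketch (assumed names and lag length; not the authors' code) follows, where a significantly positive slope at a quantile means that region $A$ dominates region $B$ there:
\begin{verbatim}
# Illustrative sketch: warming dominance of region A over region B.
# qa, qb: DataFrames of annual quantiles (columns 'q05', ..., 'q95')
# observed over a common set of years.
import numpy as np
import statsmodels.api as sm

def warming_dominance(qa, qb, lags=4):
    X = sm.add_constant(np.arange(1.0, len(qa) + 1.0))
    out = {}
    for col in qa.columns:
        res = sm.OLS((qa[col] - qb[col]).to_numpy(dtype=float), X).fit(
            cov_type="HAC", cov_kwds={"maxlags": lags})
        out[col] = (res.params[1], res.tvalues[1])   # slope, t-ratio
    return out
\end{verbatim}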
\begin{table}[h!]\caption{Warming dominance}\label{tab-WD}\begin{center}\scalebox{1}{\begin{tabular}{lcccc} \hline \hline
& \multicolumn{2}{c}{Spain-Globe}& \multicolumn{2}{c}{Madrid-Barcelona}\\
Quantile&$\beta$&t-ratio&$\beta$&t-ratio\\ \hline
q05 &-0.018&(-2.770)&-0.013&(-3.730)\\
q10 &-0.010&(-1.504)&-0.013&(-4.215)\\
q20 &-0.004&(-0.950)&-0.012&(-2.988)\\
q30 &0.001&(0.180)&-0.013&(-4.164)\\
q40 &0.002&(0.788)&-0.009&(-2.909)\\
q50 &0.003&(1.025)&-0.003&(-0.701)\\
q60 &0.006&(1.933)&-0.001&(-0.219)\\
q70 &0.009&(3.266)&0.006&(1.252)\\
q80 &0.012&(3.203)&0.016&(3.331)\\
q90 &0.017&(3.862)&0.010&(1.869)\\
q95 &0.019&(4.930)&0.014&(1.993)\\
\hline \hline
\end{tabular}
}\end{center}
\begin{tablenotes}
\textit{Note}: Slopes (with t-statistics in parentheses) from the regression
\begin{equation*}
q_{\tau t}(A)- q_{\tau t}(B)=\alpha_{\tau} +\beta_{\tau} t +u_{\tau t}.
\end{equation*}
In the first pair of columns \textit{A}=Spain, \textit{B}=Globe; in the second pair, \textit{A}=Madrid, \textit{B}=Barcelona.
\end{tablenotes}
\end{table}
\begin{table}[h!]
\caption{Summary of results}\label{Tab-summary}\begin{center}\scalebox{0.65}{
\begin{tabular}{c|c|c|c|c|c|c} \\ \hline \hline
\multicolumn{7}{c}{Cross analysis} \\ \hline
Sample & Period & Type & Acceleration & \multicolumn{2}{c}{Amplification} & Dominance \\ \hline
& & & & Inner & Outer & \\
Spain & & & & & & \\
& 1950-2019 & \textit{W1} & [\textit{mean, std, iqr, rank, } & [\textit{q70, q80, q95]} & [\textit{q90, q95]} & [q60,..., q95] \\
& & & \textit{q20,..., q95]} & & \\
& 1970-2019 & \textit{W3} & & [\textit{q50,..., q80]} & [\textit{q60,..., q95]} & \\
The Globe & & & & & & \\
& 1950-2019 & \textit{W2} & [\textit{mean} & [\textit{q05,..., q30]} & & [\textit{q05]} \\
& & & \textit{q40,..., q95]} & & & \\
& 1970-2019 & \textit{W2} & & [\textit{q05,..., q20]} & & \\
& & & & & & \\ \hline
\multicolumn{7}{c}{Time analysis} \\ \hline
Sample & Period & Type & Acceleration & \multicolumn{2}{c}{Amplification} & Dominance \\ \hline
Madrid, Retiro Station & & & & & & \\
& 1950-2019 & \textit{W3} & [\textit{mean, std, rank, } & [\textit{q50,..., q95]} & [\textit{ q40,..., q95]} & [q80,..., q95] \\
& & & \textit{q40, ..., q95]} & & & \\
& 1970-2019 & \textit{W3} & & [\textit{q50,..., q95]} & [\textit{q40,..., q95]} & \\
Barcelona, Fabra Station & & & & & & \\
& 1950-2019 & \textit{W1} & [\textit{mean, } & \textit{-} & [\textit{q30,..., q90]} & \textit{[q05,..., q40]} \\
& & & \textit{q20,..., q95]} & & & \\
& 1970-2019 & \textit{W1} & & [\textit{q60, q70]} & [\textit{q30,..., q70]} & \\
& & & & & & \\ \hline \hline
\end{tabular}
}\end{center}
\begin{tablenotes}
\tiny{ \textit{Note}: For Spain and the Globe we build characteristics from station-months units. For Madrid and Barcelona we use daily frequency time series. A significance level of 10\% is considered for all tests and characteristics.}
\end{tablenotes}
\end{table}
\begin{figure}[h!]
\begin{center}
\caption{Trend evolution of different temperature distributional characteristics} \label{fig-comp-Globe-Spain-Madrid-Barcelona}
\includegraphics[scale=0.5]{Figures/Figure_comp_Globe_Spain_Madrid_Barcelona}
\end{center}
\begin{figurenotes}
\textit{Note}: The bars represent the intensity of the trends found in each characteristic measured through the value of the $\beta$-coefficient estimated in the regression $C_{t}=\alpha+\beta t+u_{t}$.
\end{figurenotes}
\end{figure}
\begin{figure}[h!]
\begin{center}
\caption{Comparing heatmaps}
\label{fig-comp-Globe-Spain-heatmap}
\subfloat[{\small Globe}]{
\includegraphics[scale=0.4]{Figures/Heatmap_comp_Globe_1950}}\\
\subfloat[{\small Spain}]{
\includegraphics[scale=0.4]{Figures/Heatmap_comp_Spain_1950}}\\
\end{center}
\begin{figurenotes}
\textit{Note}: The color scale on the right side of the figure shows the intensity of the trend, based on the value of the $\beta$-coefficient estimated in the regression $C_{t}=\alpha+\beta t+u_{t}$.
\end{figurenotes}
\end{figure}
\clearpage
\section{Conclusions}
The existence of Global Warming is very well documented in all the scientific reports published by the IPCC. In the latest one, the AR6 report (2022), special attention is dedicated to climate change heterogeneity (regional climate). Our paper presents a new quantitative methodology, based on the evolution of the trend of the whole temperature distribution and not only of the average, to characterize, measure and test the existence of such warming heterogeneity.
It is found that the local warming experienced by Spain (one of the most climatically diverse areas) is very different from that of the Globe as a whole. In Spain, the upper temperature quantiles tend to increase more than the lower ones, while in the Globe just the opposite occurs. In both cases the warming process is accelerating over time. Both regions suffer an amplification effect of an asymmetric nature: there is warming amplification in the lower quantiles of the Globe temperature (beyond the standard well-known results for the Arctic zone) and in the upper ones of Spain. Overall, warming in Spain dominates that of the Globe in all the quantiles except the lowest one, \textit{q05}. This places Spain in a more difficult warming situation than the Globe, a situation that requires stronger mitigation-adaptation policies. For this reason, future climate agreements should take into consideration the whole temperature distribution and not only the average.
Whenever a novel methodology is proposed, new research issues emerge for future investigation. Among those left out of this paper (some are part of our current research agenda), three points stand out as important:
\begin{itemize}
\item There is a clear need for a new non-uniform causal-effect climate change analysis beyond the standard causality in mean.
\item In order to improve efficiency, mitigation-adaptation policies should be designed containing a common global component and an idiosyncratic regional element.
\item The relation between warming heterogeneity and public awareness of climate change deserves to be analyzed.
\end{itemize}
\section{Introduction}
Throughout this paper, we consider simple and connected graphs. A simple connected graph $G=(V,E)$ consists of the vertex set $V(G)=\{v_{1},v_{2},\ldots,v_{n}\}$ and the edge set $E(G)$. The \textit{order} and \textit{size} of $G$ are $|V(G)|=n$ and $|E(G)|=m$, respectively. The \textit{degree} of a vertex $v$, denoted by $d_{G}(v)$ (we simply write $d_v$), is the number of edges incident on the vertex $v$. Further, $N_G (v)$ denotes the set of all vertices that are adjacent to $v$ in $G$, and $\overline{G}$ denotes the complement of the graph $G$. A vertex $u\in V(G)$ is called a pendant vertex if $d_{G}(u)=1$. For other standard definitions, we refer to \cite{5R8,5R9}.\\
\indent If $A(G)$ is the adjacency matrix and $Deg(G)=diag(d_1 ,d_2 ,\dots,d_n)$ is the diagonal matrix of vertex degrees of $G$, the \textit{Laplacian matrix} of $G$ is defined as $L(G)=Deg(G)-A(G)$ (we reserve the notation $D(G)$ for the distance matrix introduced below). By the spectrum of $G$, we mean the spectrum of its adjacency matrix, and it consists of the eigenvalues $\lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_n$. The Laplacian spectrum of $G$ is the spectrum of its Laplacian matrix, and is denoted by $\mu_1 (G) \geq \mu_2 (G) \geq \dots \geq \mu_n (G) =0$. For any interval $I$, let $m_{L(G)}I$ be the number of Laplacian eigenvalues of $G$ that lie in the interval $I$. Also, let $m_{L(G)}(\mu_i (G) )$ denote the multiplicity of the Laplacian eigenvalue $\mu_i (G)$.\\
\indent In $G$, the \textit{distance} between the two vertices $u,v\in V(G),$ denoted by $d_{uv}$, is defined as the length of a shortest path between $u$ and $v$. The \textit{diameter} of $G$, denoted by $d$, is the maximum distance between any two vertices of $G.$ The \textit{distance matrix} of $G$, denoted by $D(G)$, is defined as $D(G)=(d_{uv})_{u,v\in V(G)}$.
The \textit{transmission} $Tr_{G}(v)$
(we will write $Tr(v)$ if the graph $G$ is understood) of a vertex $v$ is defined as the sum of the distances from $v$ to all other vertices in $G$, that is, $Tr_{G}(v)=\sum\limits_{u\in V(G)}d_{uv}.$\\
\indent Let $Tr(G)=diag (Tr(v_1),Tr(v_2),\ldots,Tr(v_n)) $ be the diagonal matrix of vertex transmissions of $G$. Aouchiche and Hansen \cite{5R1} defined the \textit{distance Laplacian matrix} of a connected graph as $D^L(G)=Tr(G)-D(G)$ (briefly written as $D^{L}$). The eigenvalues of $D^{L}(G)$ are called the distance Laplacian eigenvalues of $G$. Since $ D^L(G) $ is a real symmetric positive semi-definite matrix, we denote its eigenvalues by $\partial_{i}^{L}(G)$, $i=1,2,\dots,n$, and order them as $0=\partial_{n}^{L}(G)\leq \partial_{n-1}^{L}(G)\leq \dots\leq \partial_{1}^{L}(G)$. The distance Laplacian eigenvalues are referred to as the $D^L$-eigenvalues of $G$ whenever the graph $G$ is understood. Some recent work can be seen in \cite{pk1,pk2}. For any interval $I$, $m_{D^L (G)}I$ represents the number of distance Laplacian eigenvalues of $G$ that lie in the interval $I$. Also, $m_{D^L (G)}(\partial_{i}^{L}(G) )$ denotes the multiplicity of the distance Laplacian eigenvalue $ \partial_{i}^{L}(G) $. The multiset of eigenvalues of $ D^L(G)$ is called the \textit{distance Laplacian spectrum} of $G$. If there are only $k$ distinct distance Laplacian eigenvalues of $G$, say, $\partial_{1}^{L}(G),\partial_{2}^{L}(G),\dots,\partial_{k}^{L}(G)$ with corresponding multiplicities $n_1 ,n_2 ,\dots, n_k$, then we convey this information in the matrix form as\\
$$\begin{pmatrix}
\partial_{1}^{L}(G) & \partial_{2}^{L}(G) & \dots & \partial_{k}^{L}(G)\\
n_1 & n_2 & \dots & n_k\\
\end{pmatrix}.$$
\indent We denote by $K_n$ the complete graph of order $n$ and by $K_{t_1 ,\dots, t_k}$ the complete multipartite graph with order of parts $t_1 ,\dots, t_k$. The star graph of order $n$ is denoted by $S_n$. Further, $SK_{n,\alpha}$ denotes the complete split graph, that is, the complement of the disjoint union of a clique $K_\alpha$ and $n-\alpha$ isolated vertices.
For two disjoint graphs $G$ and $H$ of order $n_1$ and $n_2$, respectively, the \textit{corona graph} $GoH$ is the graph obtained by taking one copy of $G$ and $n_1$ copies of $H$, and then joining the \textit{i}th vertex of $G$ to every vertex in the \textit{i}th copy of $H$, for all $ 1\leq i\leq n_1$.\\
\indent In a graph $G$, a subset $M\subseteq V(G)$ is called an \textit{independent set} if no two vertices of $M$ are adjacent. The \textit{independence number} of $G$ is the cardinality of a largest independent set of $G$ and is denoted by $\alpha(G)$. A set $M\subseteq V(G)$ is \textit{dominating} if every $v\in V(G) \setminus M$ is adjacent to some member of $M$. The \textit{domination number} $\gamma(G)$ is the minimum size of a dominating set.\\
\indent The \textit{chromatic number} of a graph $G$ is the minimum number of colors required to color the vertices of $G$ such that no two adjacent vertices get the same color. It is denoted by $\chi(G)$. The set of all vertices with the same color is called a \textit{color class}. \\
\indent The distribution of Laplacian eigenvalues of a graph $G$ in relation to various graph parameters of $G$ has been studied extensively. Grone and Merris \cite{5R10} and Merris \cite{5R11} obtained bounds for $m_{L(G)}[0,1) $ and $m_{L(G)}[0,2) $. Guo and Wang \cite{5R12} showed that if $G$ is a connected graph with matching number $\nu(G)$, then $m_{L(G)}(2,n]>\nu(G)$, where $n>2\nu(G)$. Some work in this direction can be seen in \cite{cjt}. Recently, Ahanjideh et al.\ \cite{5R0} obtained bounds for $m_{L(G)}I $ in terms of structural parameters of $G$. In particular, they showed that $m_{L(G)}(n -\alpha(G), n] \leq n -\alpha(G)$ and $m_{L(G)}(n-d(G)+3, n]\leq n -d(G) -1$, where $\alpha(G)$ and $d(G)$ denote the independence number and the diameter of $G$, respectively. The distribution of the distance Laplacian eigenvalues of a graph $G$ with respect to its structural parameters has not received due attention, and our investigation in this manuscript is an attempt in that direction. \\
\indent The rest of the paper is organized as follows. In Section 2, we study the distribution of the distance Laplacian eigenvalues of $G$ in relation to the chromatic number $\chi$ and the number of pendant vertices. We show that $m_{D^{L}(G) }[n,n+2)\leq \chi-1$ and that the inequality is sharp. We also prove that $m_{D^{L} (G )}\bigg( n,n+\left\lceil\frac{n}{\chi}\right\rceil\bigg)\leq n- \left\lceil\frac{n}{\chi}\right\rceil-C_{\overline{G}}+1 $, where $C_{\overline{G}}$ is the number of components of $\overline{G}$, and discuss some cases where the bound is best possible. In addition, we prove that $m_{D^{L} (G )}[n,n+p)\leq n-p$, where $p\geq 1$ is the number of pendant vertices. In Section 3, we determine the distribution of distance Laplacian eigenvalues of $G$ in terms of the independence number $\alpha(G)$ and the diameter $d$. In particular, we show that $m_{D^{L} (G)}[n,n+\alpha(G))\leq n-\alpha(G)$ and that the inequality is sharp. We show that $m_{D^{L}(G)}[0,dn]\geq d+1$. We characterize the graphs having diameter $d\leq 2$ satisfying $m_{D^{L}(G) } (2n-1,2n )= \alpha(G)-1=\frac{n}{2}-1$. In Section 4, we propose some research problems.
\section{Distribution of distance Laplacian eigenvalues, chromatic number and pendant vertices }
For a graph $G$ with $n$ vertices, let $Tr_{max}(G)=\max\{Tr(v):v\in V(G)\}$. Whenever the graph $G$ is understood, we will write $Tr_{max}$ in place of $Tr_{max}(G)$. We have the following important result from matrix theory.
\begin{lemma}\label{L2}\emph {\cite{5R3}} Let $M=(m_{ij})$ be a $n\times n$ complex matrix having $l_1 ,l_2 ,\dots,l_p$ as its distinct eigenvalues. Then
$$\{l_1 ,l_2 ,\dots,l_p\}\subset \bigcup\limits_{i=1}^{n}\Big \{z:|z-m_{ii}|\leq \sum\limits_{j\neq i}|m_{ij}|\Big\}.$$
\end{lemma}
\indent Applying Lemma \ref{L2} to the distance Laplacian matrix of a graph $G$ with $n$ vertices (each Gershgorin disc is centered at $Tr(v)$ with radius $\sum_{u\neq v}d_{uv}=Tr(v)$), we get
\begin{equation}
\partial^L_{1}(G)\leq 2Tr_{max}.
\end{equation}
The following fact about distance Laplacian eigenvalues will be used in the sequel.\\
\textbf{Fact 1.} Let $G$ be a connected graph of order $n$ and having distance Laplacian eigenvalues in the order $\partial^L_{1}(G)\geq \partial^L_{2}(G)\geq \dots \geq \partial^L_{n}(G)$. Then,\\
\hspace*{25mm} $\partial^L_{n}(G)=0$ and $\partial^L_{i}(G)\geq n$ for all $i=1,2,\dots,n-1.$\\\\
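For readers who wish to check these facts numerically, the following illustrative Python sketch (not part of the paper) builds $D^L(G)$ from the distance matrix of a small graph and verifies Fact 1 together with Inequality (2.1):
\begin{verbatim}
# Illustrative check of Fact 1 and inequality (2.1) on a small graph.
import networkx as nx
import numpy as np

G = nx.path_graph(5)                          # any connected graph
D = np.asarray(nx.floyd_warshall_numpy(G))    # distance matrix
Tr = D.sum(axis=1)                            # vertex transmissions
eig = np.sort(np.linalg.eigvalsh(np.diag(Tr) - D))
n = G.number_of_nodes()
assert abs(eig[0]) < 1e-9                     # smallest eigenvalue is 0
assert np.all(eig[1:] >= n - 1e-9)            # all others are at least n
assert eig[-1] <= 2 * Tr.max() + 1e-9         # inequality (2.1)
\end{verbatim}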
We recall the following important results.
\begin{theorem}\label{T7}
(Cauchy Interlacing Theorem). Let $M$ be a real symmetric matrix of order $n$, and let $A$ be a principal submatrix of $M$ with order $s\leq n$. Then $$\lambda_i (M)\geq \lambda_i (A) \geq \lambda_{i+n-s} (M)\hspace{1cm}(1\leq i\leq s).$$
\end{theorem}
\begin{lemma} \label{L1}\emph {\cite{5R1}} Let $G$ be a connected graph with $n$ vertices and $m$ edges, where $m\geq n$. Let $G^*$ be the connected graph obtained from $G$ by deleting an edge. Let $\partial^L_1 \geq \partial^L_2 \geq ...\geq \partial^L_n$ and ${\partial^*_1}^L \geq {\partial^*_2}^L \geq ...\geq {\partial^*_n}^L$ be the distance Laplacian spectra of $G$ and $G^*$, respectively. Then ${\partial^*_i}^L \geq \partial^L_i $ for all $i=1,\dots,n$.
\end{lemma}
\begin{lemma}\label{L8} \emph{\cite{5R7} } Let $t_{1},t_{2},\dots,t_{k}$ and $n$ be integers such that $t_{1}+t_{2}+\dots+t_{k}=n$ and $t_{i}\geq 1$ for $i=1,2,\dots,k$. Let $p=|\{i:t_{i}\geq 2\}|$. The distance Laplacian spectrum of the complete $k$-partite graph $K_{t_{1},t_{2},\dots,t_{k}}$ is $\Big((n+t_{1})^{(t_{1}-1)},\dots,(n+t_{p})^{(t_{p}-1)},n^{(k-1)},0\Big)$.
\end{lemma}
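For instance, for $K_{3,2,1}$ ($n=6$, $t_1=3$, $t_2=2$, $t_3=1$), the lemma gives the spectrum $\big(9^{(2)},8^{(1)},6^{(2)},0\big)$, which is easy to confirm numerically with the following illustrative sketch (not part of the paper):
\begin{verbatim}
# Illustrative check of the lemma for K_{3,2,1}.
import networkx as nx
import numpy as np

G = nx.complete_multipartite_graph(3, 2, 1)
D = np.asarray(nx.floyd_warshall_numpy(G))
DL = np.diag(D.sum(axis=1)) - D
print(np.round(np.sort(np.linalg.eigvalsh(DL)), 6))
# expected: [0. 6. 6. 8. 9. 9.], i.e. 0, n^(k-1), (n+t_2)^(t_2-1), (n+t_1)^(t_1-1)
\end{verbatim}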
\begin{lemma}\label{L3} \emph {\cite{5R1}} Let $G$ be a connected graph with $n$ vertices. Then $\partial^L_{n-1}\geq n$ with equality if and only if $\overline{G}$ is disconnected. Furthermore, the multiplicity of $n$ as an eigenvalue of $D^L (G)$ is one less than the number of components of $\overline{G}$.
\end{lemma}
First we obtain an upper bound for $m_{D^{L} (G)} I$, where $I$ is the interval $[n,n+2)$, in terms of the chromatic number $\chi$ of $G$.
\begin{theorem} \label{T8} Let $G$ be a connected graph of order $n$ having chromatic number $\chi$. Then $$m_{D^{L} (G)} [n,n+2 ) \leq \chi-1.$$ The inequality is sharp, as shown by all complete multipartite graphs.
\end{theorem}
\noindent {\bf Proof.} Let $t_1 ,t_2 ,\dots,t_\chi $ be $\chi$ positive integers such that $t_1 +t_2 +\dots+t_{\chi} =n$ and let these numbers be the cardinalities of the $\chi$ partite classes of $G$, ordered as $t_1 \geq t_2 \geq \dots\geq t_{\chi} $. Thus $G$ can be considered as a spanning subgraph of the complete multipartite graph $H=K_{t_1 ,t_2 ,\dots,t_{\chi}}$ with $t_1 \geq t_2 \geq \dots\geq t_{\chi} $ as the cardinalities of its partite classes. Using Lemma \ref{L8}, we see that $m_{D^{L} (H )} [n,n+2 ) = \chi-1$. By Lemma \ref{L1} and Fact 1, we have $ m_{D^{L} (G )} [n,n+2 ) \leq m_{D^{L} (H )} [n,n+2 ) = \chi-1$, proving the inequality. Using Lemma \ref{L8}, we see that equality holds for all complete multipartite graphs. \nolinebreak\hfill\rule{.2cm}{.2cm}\par\addvspace{.5cm}
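As a numerical sanity check (illustrative, not part of the paper): for the cycle $C_5$ we have $n=5$ and $\chi=3$, and the distance Laplacian spectrum is $\{0,\,6.382^{(2)},\,8.618^{(2)}\}$ (rounded), so $m_{D^{L}(C_5)}[5,7)=2=\chi-1$ and the bound is attained even outside the complete multipartite family:
\begin{verbatim}
# Illustrative check of the bound m[n, n+2) <= chi - 1 on C_5.
import networkx as nx
import numpy as np

G = nx.cycle_graph(5)
n, chi = 5, 3                                 # chromatic number of C_5 is 3
D = np.asarray(nx.floyd_warshall_numpy(G))
eig = np.sort(np.linalg.eigvalsh(np.diag(D.sum(axis=1)) - D))
count = int(np.sum((eig >= n - 1e-9) & (eig < n + 2 - 1e-9)))
assert count <= chi - 1                       # here count = 2 = chi - 1
\end{verbatim}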
As a consequence of Theorem \ref{T8}, we have the following observation.
\begin{corollary} \label{C2} Let $G$ be a connected graph of order $n$ having chromatic number $\chi$. Then $$ m_{D^{L} (G )} [n+2,2Tr_{max} ]\geq n- \chi.$$ The inequality is sharp, as shown by all complete multipartite graphs.
\end{corollary}
\noindent {\bf Proof.} Using Fact 1 together with Theorem \ref{T8}, we get
\begin{align*}
& m_{D^{L} (G )} [n,n+2 )+ m_{D^{L} (G )}[n+2,2Tr_{max} ] =n-1, \\
\text{so that}\quad & \chi-1+ m_{D^{L} (G )}[n+2,2Tr_{max} ] \geq n-1, \\
\text{that is,}\quad & m_{D^{L} (G )}[n+2,2Tr_{max} ] \geq n- \chi.
\end{align*}
Therefore, the inequality is established. The remaining part of the proof follows from Theorem \ref{T8}. \nolinebreak\hfill\rule{.2cm}{.2cm}\par\addvspace{.5cm}
In the following theorem, we characterize the unique graph with chromatic classes of the same cardinality having $n-1$ eigenvalues in the interval $\big[n,n+\frac{n}{\chi}\big]$.
\begin{theorem} \label{T9} Let $G$ be a connected graph of order $n$ and having the chromatic number $\chi$. If the chromatic classes are of the same cardinality, then
$$ m_{D^{L} (G )} \big[n,n+\frac{n}{\chi}\big]\leq n-1$$ with equality if and only if $G\cong K_{\frac{n}{\chi},\dots,\frac{n}{\chi}}$.
\end{theorem}
\noindent {\bf Proof.} Using Fact 1, we get the required inequality. Now, we will show that the equality holds for the graph $H= K_{\frac{n}{\chi},\dots,\frac{n}{\chi}}$. Using Lemma \ref{L8}, we have the distance Laplacian spectrum of $H$ as
$$\begin{pmatrix}
0 & n & n+\frac{n}{\chi} \\
1 & \chi-1 & n-\chi \\
\end{pmatrix},$$
which clearly shows that the equality holds for the graph $H$. To complete the proof, we will show that if $G\ncong H$, then $ m_{D^{L} (G )} \big[n,n+\frac{n}{\chi}\big]< n-1$. Since the chromatic classes are of the same cardinality, we see that $G$ has to be a spanning subgraph of $H$ and $n=s\chi$ for some integer $s$, so that $s=\frac{n}{\chi}$. In $H$, let $e=\{u,v\}$ be an edge between the vertices $u $ and $v$. Using Lemma \ref{L1}, it is sufficient to take $G=H-e$. In $G$, we see that $Tr(u)=Tr(v)=n+s-1$. Let $A$ be the principal submatrix of $D^L (G)$ corresponding to the vertices $u$ and $v$. Then $A$ is given by
\begin{equation*}
A=
\begin{bmatrix}
n+s-1 & -2 \\
-2 & n+s-1
\end{bmatrix}.
\end{equation*}
Let $c(x)$ be the characteristic polynomial of $A$. Then $c(x)=x^2 -2(n+s-1)x+{(n+s-1)}^2-4={\big(x-(n+s-1)\big)}^2-4$, whose roots are $n+s-1\pm 2$; in particular, $x_1=n+s+1$. Using Theorem \ref{T7}, we have $\partial^L _1 (G)\geq x_1 =n+s+1>n+s=n+\frac{n}{\chi}$. Thus, $ m_{D^{L} (G )} \big[n,n+\frac{n}{\chi}\big]< n-1$ and the proof is complete. \nolinebreak\hfill\rule{.2cm}{.2cm}\par\addvspace{.5cm}
Now, we obtain an upper bound for the number of distance Laplacian eigenvalues which fall in the interval $\bigg( n,n+\left\lceil\frac{n}{\chi}\right\rceil\bigg)$.
\begin{theorem}\label{TN1}Let $G\ncong K_n$ be a connected graph on $n$ vertices with chromatic number $\chi$. Then,
\begin{equation}
m_{D^{L} (G )}\bigg( n,n+\left\lceil\frac{n}{\chi}\right\rceil\bigg)\leq n- \left\lceil\frac{n}{\chi}\right\rceil-C_{\overline{G}}+1,
\end{equation}
where $C_{\overline{G}}$ is the number of components in $\overline{G}$. The bound is best possible for $\chi=2$ (when $n$ is odd) and $\chi=n-1$ as shown by $K_{m+1, m}$, where $n=2m+1$, and $K_{2,\underbrace{1,1,\dots,1}_{n-2}} $, respectively.
\end{theorem}
\noindent {\bf Proof.} Let $n_1 \geq n_2 \geq \dots\geq n_{\chi} $ be $\chi$ positive integers in that order such that $n_1 +n_2 +\dots+n_{\chi} =n$ and let these numbers be the cardinalities of $\chi$ partite classes of $G$. Clearly, $G$ can be considered as a spanning subgraph of the complete multipartite graph $H=K_{n_1 ,n_2 ,\dots,n_{\chi}}$. Using Lemmas \ref{L1} and \ref{L8}, we get
$$\partial^L _i (G)\geq \partial^L _i (H)=n+n_1, ~~~~~~ \text{for all} ~ 1\leq i\leq n_1 -1.$$
As $n_1$ is the largest among the cardinalities of the chromatic classes, it is at least equal to the average, that is,
$n_1 \geq \frac{n}{\chi}$. Also, $n_1$ is an integer, so $n_1 \geq \left\lceil\frac{n}{\chi}\right\rceil$. Using this fact in the above inequality, we get
$$
\partial^L _i (G)\geq n+\left\lceil\frac{n}{\chi}\right\rceil ~~~~~~ \text{ for all} ~ 1\leq i\leq n_1 -1.
$$
Thus, there are at least $n_1 -1$ distance Laplacian eigenvalues of $G$ which are greater than or equal to $n+\left\lceil\frac{n}{\chi}\right\rceil$.
Also, from Lemma \ref{L3}, we see that $n$ is a distance Laplacian eigenvalue of $G$ with multiplicity exactly $C_{\overline{G}}-1$. Using these observations together with Fact 1, we get
\begin{align*}
m_{D^{L} (G )}\bigg( n,n+\left\lceil\frac{n}{\chi}\right\rceil\bigg)& \leq n- (n_1 -1)-(C_{\overline{G}}-1)-1\\
& = n-n_1 -C_{\overline{G}}+1\\
& \leq n-\left\lceil\frac{n}{\chi}\right\rceil-C_{\overline{G}}+1,
\end{align*}
proving the required inequality. \\
Let $G^*=K_{2,\underbrace{1,1,\dots,1}_{n-2}} $. It is easy to see that $\left\lceil\frac{n}{n-1}\right\rceil=2$. Also, the complement of $G^*$ has exactly $n-1$ components. By Lemma \ref{L8}, the distance Laplacian spectrum of $G^*$ is given as follows
$$\begin{pmatrix}
0 & n & n+2 \\
1 & n-2 & 1 \\
\end{pmatrix}.$$
Putting all these observations in Inequality (2.2), we see that the equality holds for $G^*$ which shows that the bound is best possible when $\chi=n-1$.
Let $G^{**}=K_{m+1, m}$, where $n=2m+1$. In this case, we see that $\left\lceil\frac{n}{2}\right\rceil=m+1=\frac{n+1}{2}$ and the complement of $G^{**}$ has exactly $2$ components. By Lemma \ref{L8}, we observe that the distance Laplacian spectrum of $G^{**}$ is given as follows
$$\begin{pmatrix}
0 & n & \frac{3n+1}{2} & \frac{3n-1}{2} \\
1 & 1 & \frac{n-1}{2} & \frac{n-3}{2}\\
\end{pmatrix}.$$
Using all the above observations in Inequality (2.2), we see that the equality holds for $G^{**}=K_{m+1, m}$ which shows that the bound is best possible when $\chi=2$ and $n$ is odd. \nolinebreak\hfill\rule{.2cm}{.2cm}\par\addvspace{.5cm}
The following are some immediate consequences of Theorem \ref{TN1}.
\begin{corollary}\label{CN1} Let $G\ncong K_n$ be a connected graph on $n$ vertices with chromatic number $\chi$. Then,
$$
m_{D^{L} (G )}\bigg[ n+\left\lceil\frac{n}{\chi}\right\rceil,\partial^L _1 (G)\bigg]\geq \left\lceil\frac{n}{\chi}\right\rceil-1.
$$
The bound is best possible for $\chi=2$ (when $n$ is odd) and $\chi=n-1$ as shown by $K_{m+1, m}$, where $n=2m+1$, and $K_{2,\underbrace{1,1,\dots,1}_{n-2}} $, respectively.
\end{corollary}
\begin{corollary}\label{CN2}Let $G\ncong K_n$ be a connected graph on $n$ vertices with chromatic number $\chi$. If $\overline{G}$ is connected, then
$$m_{D^{L} (G )}\bigg( n,n+\left\lceil\frac{n}{\chi}\right\rceil\bigg)\leq n- \left\lceil\frac{n}{\chi}\right\rceil.$$
\end{corollary}
\noindent{\bf Proof.} Since $\overline{G}$ is connected, $C_{\overline{G}}=1$. Substituting $C_{\overline{G}}=1$ in Inequality (2.2) proves the desired result. \nolinebreak\hfill\rule{.2cm}{.2cm}\par\addvspace{.5cm}
The next theorem shows that there are at most $n-p$ distance Laplacian eigenvalues of $G$ in the interval $[n,n+p)$, where $p\geq 1$ is the number of pendant vertices in $G$.
\begin{theorem}\label{TN2} Let $G\ncong K_n$ be a connected graph on $n$
vertices having $p\geq 1$ pendant vertices. Then
$$m_{D^{L} (G )}[n,n+p)\leq n-p.$$
For $p=n-1$, equality holds if and only if $G\cong S_n$.
\end{theorem}
\noindent{\bf Proof.} Let $S$ be the set of pendant vertices, so that $|S|=p$. Clearly, $S$ is an independent set of $G$. Obviously, the induced subgraph, say $H$, on the vertex set $M=V(G)\setminus S$ is connected. Let the chromatic number of $H$ be $q$ and let $n_1 \geq n_2 \geq \dots \geq n_q$ be the cardinalities of its chromatic classes in that order, where $1\leq q \leq n-p$ and $n_1 +n_2 +\dots+n_q =n-p$. Let $n_k \geq p \geq n_{k+1}$, where $0\leq k \leq q$, with $n_0 =p$ if $k=0$ and $n_{q+1}=p$ if $k=q$. With this partition of the vertex set $V(G)$ into $q+1$ independent sets, we easily see that $G$ can be considered as a spanning subgraph of the complete $(q+1)$-partite graph $L=K_{n_1 ,n_2,\dots, n_k ,p,n_{k+1} ,\dots,n_q} $. Consider the following two cases.\\
\noindent{\bf Case 1.} Let $1\leq k \leq q$ so that $n_1 \geq p$. Then, from Lemmas \ref{L1} and \ref{L8}, we get
$$\partial^L _i (G)\geq \partial^L _i (L)=n+n_1\geq n+p, ~~~ \text{ for all} ~ 1\leq i \leq n_1 -1. $$
\noindent{\bf Case 2.} Let $k=0$ so that $p\geq n_1$. Again, using Lemmas \ref{L1} and \ref{L8}, we get
$$\partial^L _i (G)\geq \partial^L _i (L)=n+p, ~~~ \text{ for all} ~ 1\leq i \leq p -1.$$
Thus, in both cases, we see that there are at least $p-1$ distance Laplacian eigenvalues of $G$ which are greater than or equal to $n+p$. As $p\geq 1$, the complement $\overline{G}$ has at most two components (a pendant vertex is non-adjacent in $G$ to all vertices except its unique neighbour), which, after using Lemma \ref{L3}, shows that $n$ is a distance Laplacian eigenvalue of $G$ of multiplicity at most one. From the above observations and Fact 1, we get
$$m_{D^{L} (G )}[n,n+p)\leq n-p,$$
which proves the required inequality.
For the second part of the theorem, we see that $S_n$ is the only connected graph having $n-1$ pendant vertices. The distance Laplacian spectrum of $S_n$ by Lemma \ref{L8} is given as
$$\begin{pmatrix}
0 & n & 2n-1 \\
1 & 1 & n-2\\
\end{pmatrix}$$
and the proof is complete. \nolinebreak\hfill\rule{.2cm}{.2cm}\par\addvspace{.5cm}
An immediate consequence is as follows.
\begin{corollary}\label{CN3}
Let $G\ncong K_n$ be a connected graph on $n$
vertices having $p\geq 1$ pendant vertices. Then
$$m_{D^{L} (G )}[n+p,\partial^L _1 (G)]\geq p-1.$$
For $p=n-1$, equality holds if and only if $G\cong S_n$.
\end{corollary}
The following lemma will be used in the proof of Theorem \ref{T11}.
\begin{lemma}\label{L9} \emph{\cite{5R2}} Let $G$ be a graph with $n$ vertices. If $K=\{v_1 ,v_2 ,\dots,v_p\}$ is an independent set of $G$ such that $N(v_i)=N(v_j)$ for all $i,j\in \{1,2,\dots,p\}$, then $\partial=Tr(v_i)=Tr(v_j)$ for all $i,j\in \{1,2,\dots,p\}$ and $\partial +2$ is an eigenvalue of $D^L (G)$ with multiplicity at least $p-1$.
\end{lemma}
\begin{theorem} \label{T11} Let $G$ be a connected graph of order $n\geq 4$ having chromatic number $\chi$. If $S=\{v_1 ,v_2 ,\dots,v_p\} \subseteq V(G)$, where $|S|=p\geq \frac{n}{2}$, is the set of pendant vertices such that every vertex in $S$ has the same neighbour in $V(G)\setminus S$, then
$$ m_{D^{L} (G )} [n,2n-1)\leq n-\chi.$$
\end{theorem}
\noindent {\bf Proof.} Clearly, all the vertices in $S$ form an independent set. Since all the vertices in $S$ are adjacent to the same vertex, all the vertices of $S$ have the same transmission. Now, for any $v_i$ $(i=1,2,\dots,p)$ of $S$, the other $p-1$ pendant vertices are at distance 2, the common neighbour is at distance 1, and the remaining $n-p-1$ vertices are at distance at least 2, so
\begin{align*}
T=Tr(v_i ) \geq 2(p-1)+1+2(n-p-1) =2n-3.
\end{align*}
From Lemma \ref{L9}, there are at least $p-1$ distance Laplacian eigenvalues of $G$ which are greater than or equal to $T+2$. From above, we have $T+2\geq 2n-3+2=2n-1$. Thus, there are at least $p-1$ distance Laplacian eigenvalues of $G$ which are greater than or equal to $2n-1$, that is, $ m_{D^{L} (G )} [2n-1,2Tr_{max}]\geq p-1$. Using Fact 1, we have
\begin{equation}
m_{D^{L} (G )} [n,2n-1)\leq n-p.
\end{equation}
We claim that $\chi(G)\leq \frac{n}{2}$. If possible, let $\chi(G)> \frac{n}{2}$. We have following two cases to consider.\\
$\bf {Case ~ 1.}$ Let $p=n-1$. Clearly, the star is the only connected graph having $n-1$ pendant vertices. Thus, $G\cong S_n$. Also, $\chi(S_n)=2$, a contradiction, as $\chi(S_n)=2\leq\frac{n}{2}$, for $n\geq 4$.\\
$\bf {Case ~ 2.}$ $\frac{n}{2}\leq p \leq n-2$. Since $p\leq n-2$, there is at least one vertex, say $u$, which is not adjacent to any vertex in $S$. Thus, in a minimal coloring of $G$, at least $p+1$ vertices, namely $u,v_1 ,\dots,v_p$, can be colored using only one color. The remaining $n-p-1$ vertices can be colored with at most $n-p-1$ colors. Thus, $\chi\leq 1+n-p-1=n-p\leq n-\frac{n}{2}=\frac{n}{2}$, a contradiction. Therefore, $\chi \leq \frac{n}{2}\leq p$. Using this in Inequality (2.3), we get
$$ m_{D^{L} (G )} [n,2n-1)\leq n-\chi,$$
completing the proof. \nolinebreak\hfill\rule{.2cm}{.2cm}\par\addvspace{.5cm}
To have a bound only in terms of order $n$ and the number of pendant vertices $p$, we can relax the conditions $p\geq \frac{n}{2}$ and $n\geq 4$ in Theorem \ref{T11}. This is given in the following corollary.
\begin{corollary} \label{C3} Let $G$ be a connected graph of order $n$ . If $S=\{v_1 ,v_2 ,\dots,v_p\} \subseteq V(G)$ is the set of pendant vertices such that every vertex in $S$ has the same neighbour in $V(G)\setminus S$, then
$$ m_{D^{L} (G )} [n,2n-1)\leq n-p.$$
\end{corollary}
\section{Distribution of distance Laplacian eigenvalues, independence number and diameter}
Now, we obtain an upper bound for $m_{D^{L} (G)}I$, where $I$ is the interval $[n,n+\alpha(G))$, in terms of the order $n$ and the independence number $\alpha(G)$.
\begin{theorem} \label{T1} Let $G$ be a connected graph of order $n$ having independence number $\alpha (G)$. Then $m_{D^{L} (G)} [n,n+\alpha(G))\leq n-\alpha(G)$. For $\alpha(G)=1$ or $\alpha(G)=n-1$, equality holds if and only if $G\cong K_n$ or $G\cong S_n$, respectively. Moreover, for every pair of integers $n$ and $\alpha(G)$ with $2\leq \alpha(G)\leq n-2$, the bound is sharp, as $SK_{n,\alpha}$ attains it.
\end{theorem}
\noindent {\bf Proof.} We have the following three cases to consider.\\
{\bf Case 1.} $\alpha(G)=1$. Clearly, in this case $G\cong K_n$ and the distance Laplacian spectrum of a complete graph is
$$\begin{pmatrix}
0 & n \\
1 & n-1 \\
\end{pmatrix}.$$
Therefore, we have $m_{D^{L} (K_n)} [n,n+1)= n-1$ which proves the result in this case. \\
{\bf Case 2.} $\alpha(G)= n-1$. Since the star $S_n$ is the only connected graph having independence number $n-1$, therefore, $G\cong S_n$ in this case. Now, $n-\alpha(S_n)=n-n+1=1$. From Lemma \ref{L8}, the distance Laplacian spectrum of $S_n $ is given as \\
$$\begin{pmatrix}
0 & n & 2n-1 \\
1 & 1 & n-2 \\
\end{pmatrix}.$$
Therefore, $m_{D^{L} (S_n)} [n,2n-1)= 1$, proving the result in this case.\\
{\bf Case 3.} $2\leq \alpha(G)\leq n-2$. Without loss of generality, assume that $N=\{v_1 ,v_2 ,\dots ,v_{\alpha(G)}\} \subseteq V(G)$ is an independent set with maximum cardinality. Let $H$ be the new graph obtained by adding edges between all non-adjacent vertices in $V(G)\setminus N$ and joining each vertex of $N$ to every vertex of $V(G)\setminus N$. With this construction, we see that $H\cong SK_{n,\alpha}$. Using Fact 1 and Lemma \ref{L1}, we see that $m_{D^{L} (G)} [n,n+\alpha(G))\leq m_{D^{L} (H)} [n,n+\alpha(G))$. So, to complete the proof in this case, it is sufficient to prove that $ m_{D^{L} (H)} [n,n+\alpha(G))\leq n-\alpha(G)$. By Corollary 2.4 in \cite{5R2}, the distance Laplacian spectrum of $H$ is given by
$$\begin{pmatrix}
0 & n & n+\alpha(G) \\
1 & n-\alpha(G) & \alpha(G)-1 \\
\end{pmatrix}.$$
This shows that $ m_{D^{L} (H)} [n,n+\alpha(G))= n-\alpha(G)$. Thus the bound is established. Also, it is clear that $SK_{n,\alpha}$ attains the bound for $2\leq \alpha(G)\leq n-2$. \nolinebreak\hfill\rule{.2cm}{.2cm}\par\addvspace{.5cm}
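The spectrum used in Case 3 can also be confirmed numerically; e.g., for $SK_{7,3}$ the distance Laplacian spectrum should be $\big(0,\,7^{(4)},\,10^{(2)}\big)$. An illustrative sketch (not part of the paper):
\begin{verbatim}
# Illustrative check: distance Laplacian spectrum of the complete split
# graph SK_{n,alpha} for n = 7, alpha = 3 (a clique on n - alpha = 4
# vertices joined completely to alpha = 3 independent vertices).
import networkx as nx
import numpy as np

n, alpha = 7, 3
G = nx.complete_graph(n - alpha)
G.add_edges_from((u, v) for u in range(n - alpha)
                 for v in range(n - alpha, n))
D = np.asarray(nx.floyd_warshall_numpy(G))
eig = np.sort(np.linalg.eigvalsh(np.diag(D.sum(axis=1)) - D))
print(np.round(eig, 6))   # expected: [0. 7. 7. 7. 7. 10. 10.]
\end{verbatim}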
From Theorem \ref{T1}, we have the following observation.
\begin{corollary} \label{c1} If $G$ is a connected graph of order $n$ having independence number $\alpha (G)$, then $\alpha(G) \leq 1+m_{D^{L} (G)} [n+\alpha(G),2Tr_{max}]$. For $\alpha(G)=1$ or $\alpha(G)=n-1$, equality holds if and only if $G\cong K_n$ or $G\cong S_n$, respectively. Moreover, for every pair of integers $n$ and $\alpha(G)$ with $2\leq \alpha(G)\leq n-2$, the bound is sharp, as $SK_{n,\alpha}$ attains it.
\end{corollary}
\noindent {\bf Proof.} Using Inequality (2.1), Fact 1 and Theorem \ref{T1}, we have
\begin{align*}
& m_{D^{L} (G)} [n,n+\alpha(G))+ m_{D^{L} (G)} [n+\alpha(G),2Tr_{max}]=n-1,\\
\text{so that}\quad & n-\alpha(G)+ m_{D^{L} (G)} [n+\alpha(G),2Tr_{max}]\geq n-1,\\
\text{that is,}\quad & \alpha(G) \leq 1+m_{D^{L} (G)} [n+\alpha(G),2Tr_{max}],
\end{align*}
which proves the inequality. The proof of the remaining part is similar to the proof of Theorem \ref{T1}. \nolinebreak\hfill\rule{.2cm}{.2cm}\par\addvspace{.5cm}
The next result is an upper bound for $ m_{D^{L} (G)} (n,n+\alpha(G))$ in terms of the independence number $\alpha(G)$, order $n$ and number of components of the complement $\overline{G}$ of $G$.
\begin{theorem} \label{T2} Let $G$ be a connected graph with $n$ vertices having independence number $\alpha(G)$. Then
$$ m_{D^{L} (G)} (n,n+\alpha(G))\leq n-\alpha(G) +1-k,$$
where $k$ is the number of components of $\overline{G}$. For $\alpha(G)=1$ or $\alpha(G)=n-1$, equality holds if and only if $G\cong K_n$ or $G\cong S_n$, respectively. Furthermore, for every pair of integers $n$ and $\alpha(G)$ with $2\leq \alpha(G)\leq n-2$, the bound is sharp, as $SK_{n,\alpha}$ attains it.
\end{theorem}
\noindent {\bf Proof.} Since $\overline{G}$ has $k$ components, by Lemma \ref{L3}, $n$ is a distance Laplacian eigenvalue of multiplicity exactly $k-1$. Using Theorem \ref{T1}, we have
\begin{align*}
m_{D^{L} (G)} (n,n+\alpha(G)) & =m_{D^{L} (G)} [n,n+\alpha(G))-m_{D^{L} (G)} (n)\\
& =m_{D^{L} (G)} [n,n+\alpha(G))-k+1\\
& \leq n-\alpha(G) +1-k.
\end{align*}
Thus the inequality is established. The remaining part of the proof follows by observing the distance Laplacian spectrum of the graphs $ K_n$, $ S_n$ and $SK_{n,\alpha}$ given in Theorem \ref{T1}. \nolinebreak\hfill\rule{.2cm}{.2cm}\par\addvspace{.5cm}
We will use the following lemmas in the proof of Theorem \ref{T3}.
\begin{lemma} \label{L4} \emph {\cite{5R4}} If $G$ is a graph with domination number $\gamma (G)$, then $ m_{L(G)} [0,1)\leq \gamma (G) $.
\end{lemma}
\begin{lemma}\label{L5}\emph{\cite{5R1}} Let $G$ be a connected graph with $n$ vertices and diameter $d(G)\leq 2$. Let $\mu_1 (G) \geq \mu_2 (G)\geq \dots \geq \mu_n (G)=0$ be the Laplacian spectrum of $G$. Then the distance Laplacian spectrum of $G$ is $2n-\mu_{n-1} (G) \geq 2n- \mu_{n-2} (G)\geq \dots \geq 2n-\mu_1 (G)>\partial^L_n (G)=0$. Moreover, for every $i\in \{1,2,\dots,n-1\}$, the eigenspaces corresponding to $\mu_i (G)$ and $2n-\mu_i (G)$ are the same.
\end{lemma}
Now, we obtain an upper bound for $m_{D^{L}(G)}I$, where $I$ is the interval $(2n-1,2n)$, in terms of the independence number $\alpha(G)$. This upper bound is for graphs with diameter $d(G)\leq 2$.
\begin{theorem} \label{T3} Let $G$ be a connected graph with $n$ vertices having independence number $\alpha(G)$ and diameter $d(G)\leq 2$. Then
$$m_{D^{L} (G)} (2n-1,2n )\leq \alpha(G) -1,$$ and the inequality is sharp, as shown by $K_n$.
\end{theorem}
\noindent {\bf Proof.} We know that every maximal independent set of a graph $G$ is a dominating set of $G$. Therefore, $\gamma(G)\leq \alpha (G)$. Using Lemma \ref{L4}, we get $\alpha(G)\geq m_{L(G)} [0,1)$. As $G$ is connected, the multiplicity of 0 as a Laplacian eigenvalue of $G$ is one. Thus, $\alpha(G)-1\geq m_{L(G)} (0,1)$, that is, there are at most $\alpha(G)-1$ Laplacian eigenvalues of $G$ which are greater than zero and less than one. Using this fact in Lemma \ref{L5}, we observe that there are at most $\alpha(G)-1$ distance Laplacian eigenvalues of $G$ which are greater than $2n-1$ and less than $2n$. Thus,
$$m_{D^{L} (G)} (2n-1,2n )\leq \alpha(G) -1.$$
Clearly, $ m_{D^{L} (K_n)} (2n-1,2n )=0$ and $\alpha(K_n)=1$, which shows that equality holds for $K_n$. \nolinebreak\hfill\rule{.2cm}{.2cm}\par\addvspace{.5cm}
Our next result shows that the upper bound in Theorem \ref{T3} can be improved for the graphs having independence number greater than $\frac{n}{2}$.
\begin{theorem} \label{L7} Let $G$ be a connected graph with $n$ vertices having independence number $\alpha(G)>\frac{n}{2}$ and diameter $d(G)\leq 2$. Then $m_{D^{L} (G)} (2n-1,2n )\leq \alpha(G) -2.$
\end{theorem}
\noindent {\bf Proof.} If possible, let $m_{D^{L} (G)} (2n-1,2n )\geq \alpha(G) -1$. Using Lemma \ref{L5}, we see that there are at least $\alpha(G) -1$ Laplacian eigenvalues of $G$ which are greater than zero and less than one. As $G$ is connected, 0 is a Laplacian eigenvalue of multiplicity one. Using these facts and Lemma \ref{L4}, we have $\alpha(G) \leq m_{L(G)} [0,1)\leq \gamma(G) \leq \alpha(G).$ Thus, $ \gamma(G) =\alpha(G) >\frac{n}{2}$. This contradicts the well-known fact that $\gamma(G) \leq \frac{n}{2}$. Thus the result is established.\nolinebreak\hfill\rule{.2cm}{.2cm}\par\addvspace{.5cm}
We also use the following lemma in our next result.
\begin{lemma} \label{L6} \emph{\cite{5R5}} Let $G$ and $G^*$ be graphs with $n_1$ and $n_2$ vertices, respectively. Assume that $\mu_1 \leq \dots \leq \mu_{n_1 }$ and $\lambda_1 \leq \dots \leq \lambda_{n_2 }$ are the Laplacian eigenvalues of $G$ and $G^*$ , respectively. Then the Laplacian spectrum of $GoG^*$ is given as follows.\\
(i) The eigenvalue $\lambda_j +1$ with multiplicity $n_1$ for every eigenvalue $\lambda_j (j=2,\dots,n_2)$ of $G^*$;\\
(ii) Two multiplicity-one eigenvalues $\frac{\mu_i +n_2 +1\pm \sqrt{{(\mu_i +n_2 +1)}^2-4\mu_i}}{2}$, for each eigenvalue $\mu_i (i=1,\dots,n_1)$ of $G$.
\end{lemma}
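Lemma \ref{L6} is easy to verify numerically for the corona $GoK_1$, where $n_2=1$: case (i) is empty and the spectrum consists exactly of the pairs in (ii). An illustrative sketch with $G=C_4$ (not part of the paper):
\begin{verbatim}
# Illustrative check of the corona Laplacian spectrum for C_4 o K_1.
import networkx as nx
import numpy as np

G = nx.cycle_graph(4)
corona = G.copy()
for v in list(G.nodes):
    corona.add_edge(v, ("pendant", v))        # one pendant per vertex
mu = np.linalg.eigvalsh(nx.laplacian_matrix(G).toarray().astype(float))
predicted = []
for m in mu:                                   # case (ii) with n2 = 1
    s = np.sqrt((m + 2.0) ** 2 - 4.0 * m)
    predicted += [(m + 2.0 - s) / 2.0, (m + 2.0 + s) / 2.0]
actual = np.linalg.eigvalsh(
    nx.laplacian_matrix(corona).toarray().astype(float))
assert np.allclose(np.sort(predicted), np.sort(actual))
\end{verbatim}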
The following result characterizes the graphs with diameter $d(G)\leq 2$ and independence number $\alpha(G)$ which satisfy $m_{D^{L} } (2n-1,2n )= \alpha(G)-1=\frac{n}{2}-1$.
\begin{theorem} \label{T5} Let $G$ be a connected graph with $n$ vertices having independence number $\alpha(G)$ and diameter $d(G)\leq 2$. Then $m_{D^{L} (G)} (2n-1,2n )= \alpha(G)-1=\frac{n}{2}-1$ if and only if $G=HoK_1$ for some connected graph $H$.
\end{theorem}
\noindent {\bf Proof.} Assume that $G=HoK_1$ for some connected graph $H$. Then $|H|=\frac{n}{2}$. Let the Laplacian eigenvalues of $H$ be $\mu_1 \geq \dots \geq \mu_{\frac{n}{2}}$. By Lemma \ref{L6}, the Laplacian eigenvalues of $G$ are equal to $\frac{\mu_i +2\pm \sqrt{{\mu_i}^2 +4}}{2} $, $i=1,\dots,\frac{n}{2}$. We observe that half of these eigenvalues are greater than 1 and the other half are less than 1. As $G$ is connected, 0 is a Laplacian eigenvalue of multiplicity one. So $m_{{L} (G)} (0,1 )=\frac{n}{2}-1$. Using Lemma \ref{L5}, we see that there are $\frac{n}{2}-1$ distance Laplacian eigenvalues which are greater than $2n-1$ and less than $2n$. Thus, $m_{D^{L} (G)} (2n-1,2n )= \frac{n}{2}-1$. Now, we will show that $\alpha(G)=\frac{n}{2} $. Assume that $V(G)=\{v_1, \dots,v_{\frac{n}{2}}, v'_1 ,\dots,v'_{\frac{n}{2}}\},$ where $V(H)=\{v_1, \dots,v_{\frac{n}{2}}\}$ and $N_G (v'_i)=\{v_i \}$. If $A$ is a maximal independent set, then $|A|\leq \frac{n}{2}$. For if $|A|> \frac{n}{2}$, then from the structure of $G$, we have at least one pair of vertices in $A$, say $v_i ,v'_i$, which are adjacent, a contradiction. As $\{ v'_1 ,\dots,v'_{\frac{n}{2}}\}$ is an independent set, therefore $\alpha(G)=\frac{n}{2}$. Thus, we have $m_{D^{L} (G)} (2n-1,2n )= \alpha(G)-1=\frac{n}{2}-1$.\\
\indent Conversely, assume that $m_{D^{L} (G)} (2n-1,2n )= \alpha(G)-1=\frac{n}{2}-1$. Using Lemmas \ref{L4} and \ref{L5}, we see that $ \alpha(G)=m_{L (G)} [0,1)\leq \gamma(G) \leq \alpha(G)$ which shows that $\gamma(G)=\alpha(G)=\frac{n}{2}$. Therefore, by Theorem 3 of {\cite{5R6}}, $G=HoK_1$ for some connected graph $H$. \nolinebreak\hfill\rule{.2cm}{.2cm}\par\addvspace{.5cm}
In the following theorem, we show that we can relax the condition $\alpha(G)=\frac{n}{2}$ in Theorem \ref{T5} for the class of bipartite graphs.
\begin{theorem} \label{T6} Let $G$ be a connected bipartite graph with $n$ vertices having independence number $\alpha(G)$ and diameter $d(G)\leq 2$. Then, $m_{D^{L} (G)} (2n-1,2n )= \alpha(G)-1$ if and only if $G=HoK_1$ for some connected graph $H$.
\end{theorem}
\noindent {\bf Proof.} Assume that $G=HoK_1$, for some connected graph $H$. Then the proof follows by Theorem \ref{T5}. So let $m_{D^{L} (G)} (2n-1,2n )= \alpha(G)-1$. Using Theorem \ref{T5}, it is sufficient to show that $\alpha(G)=\frac{n}{2}$. If possible, let the two parts of $G$ have different orders. Then, using Lemmas \ref{L4} and \ref{L5}, we have
$$ \gamma(G)<\frac{n}{2}<\alpha(G)=m_{D^{L} (G)} (2n-1,2n )+1= m_{L (G)} [0,1)\leq \gamma(G),$$
which is a contradiction. Therefore, the two parts of $G$ have the same order. Now, if $ \alpha(G)> \frac{n}{2}$, then by Lemma \ref{L7}, $m_{D^{L} (G)} (2n-1,2n )\leq \alpha(G)-2$, a contradiction. Hence $\alpha(G)\leq \frac{n}{2}$. Since the partite sets have the same order, we get $\alpha(G)=\frac{n}{2}$.\nolinebreak\hfill\rule{.2cm}{.2cm}\par\addvspace{.5cm}
\noindent {\bf {Remark.}} From the above theorem, we see that if $G$ is a connected bipartite graph with $n$ vertices, having independence number $\alpha(G)$ and diameter $d\leq 2$ satisfying either of the conditions (i) $G=HoK_1$ for some connected graph $H$, or (ii) $m_{D^{L} (G)} (2n-1,2n )= \alpha(G)-1$, then $\alpha(G)=\frac{n}{2}$ and $n$ is even.\\
The following theorem shows that the number of distance Laplacian eigenvalues of the graph $G$ in the interval $[0,dn]$ is at least $d+1$.
\begin{theorem} \label{T10} If $G$ is a connected graph of order $n$ having diameter $d$, then $$ m_{D^{L} (G )} \big[0,dn]\geq d+1.$$
\end{theorem}
\noindent {\bf Proof.} Consider the principal submatrix $M$ of the distance Laplacian matrix of $G$ corresponding to the vertices $v_1 ,v_2 ,\dots, v_{d+1}$ of an induced path $P_{d+1}$, which exists since $G$ has diameter $d$. Clearly, the transmission of any vertex in the path $P_{d+1}$ is at most $\frac{d(2n-d-1)}{2}$, that is, $Tr(v_i )\leq \frac{d(2n-d-1)}{2}$, for all $i=1,2,\dots,d+1$. Also, the sum of the off-diagonal elements of any row of $M$ is less than or equal to $\frac{d(d+1)}{2}$. Using Lemma \ref{L2}, we conclude that the maximum eigenvalue of $M$ is at most $dn$. Using Fact 1 and Theorem \ref{T7}, there are at least $d+1$ distance Laplacian eigenvalues of $G$ which are greater than or equal to $0$ and less than or equal to $dn$, that is, $ m_{D^{L} (G )} \big[0,dn]\geq d+1.$ \nolinebreak\hfill\rule{.2cm}{.2cm}\par\addvspace{.5cm}
From Theorem \ref{T10}, we get the following observation after using Inequality (2.1).
\begin{corollary} \label{C4} Let $G$ be a connected graph of order $n$ having diameter $d$. If $dn<2Tr_{max}$, then $$ m_{D^{L} (G )} \big(dn,2Tr_{max}]\leq n- d-1.$$
\end{corollary}
\section{Concluding Remarks}
In full generality, we believe it is hard to characterize all the graphs satisfying the bounds given in Theorems \ref{T1} and \ref{T8}. Also, in Theorem \ref{T5} we characterized the graphs with diameter $d\leq 2$ satisfying $m_{D^{L} (G)} (2n-1,2n )= \alpha(G)-1=\frac{n}{2}-1$, leaving open the case $d\geq 3$. The following problems will therefore be interesting for future research.\\
{\bf Problem 1.} {\it Determine the classes of graphs $\vartheta$ for which $m_{D^{L} (G)} [n,n+\alpha(G))= n-\alpha(G)$, for any $G\in \vartheta$. } \\
{\bf Problem 2.} {\it Determine the classes of graphs $\vartheta$ for which $m_{D^{L} (G)} [n,n+2)= \chi-1$, for any $G\in \vartheta$. }\\
{\bf Problem 3.} {\it Determine the classes of graphs $\vartheta$ for which $m_{D^{L} (G)} (2n-1,2n )= \alpha(G)-1=\frac{n}{2}-1$, for any $G\in \vartheta$ with $d\geq 3$. }\\
\noindent{\bf Data availability} Data sharing is not applicable to this article as no data sets were generated or analyzed during the current study.
\section{Introduction}
In what follows, $k\ge1$ and $a\ge2$ are fixed integers, and we refer to elements in the set $\mathbb{V}:=\{0,\ldots,a-1\}^k$ as $\hbox{$k$-mers}$, which we represent either as strings or row vectors depending on the context.
The Hamming distance between two $\hbox{$k$-mers}$ $u$ and $v$, from now on denoted as $d(u,v)$, is the number of coordinates where the $\hbox{$k$-mers}$ differ, and is a valid metric. The Hamming graph $\mathbb{H}_{k,a}$ has $\mathbb{V}$ as its vertex set, and two $\hbox{$k$-mers}$ $u$ and $v$ are adjacent (i.e. connected by an undirected edge) if and only if $d(u,v)=1$, i.e. $u$ and $v$ differ at exactly one coordinate. As a result, the (geodesic) distance between two vertices in $\mathbb{H}_{k,a}$ is precisely their Hamming distance (see Figure~\ref{fig:HamResEx}). The literature refers to the Hamming graph with $a=2$ as the ($k$-dimensional) hypercube.
A non-empty set $R\subseteq\mathbb{V}$ is called resolving when for all $u,v\in\mathbb{V}$, with $u\ne v$, there exists $r\in R$ such that $d(u,r)\ne d(v,r)$. In other words, $R$ multilaterates $\mathbb{V}$. For instance, $\mathbb{V}$ resolves $\mathbb{H}_{k,a}$ because $d(u,v)=0$ if and only if $u=v$. Equivalently, $R\subseteq\mathbb{V}$ is resolving if and only if the transformation $\Phi:\mathbb{V}\to\mathbb{R}^{|R|}$ defined as $\Phi(v):=(d(v,r))_{r\in R}$ is one-to-one. In particular, the smaller a resolving set of $\mathbb{H}_{k,a}$, the lower the dimension needed to represent $\hbox{$k$-mers}$ as points in a Euclidean space, which may be handy e.g. to represent symbolic data numerically for machine learning tasks~\cite{TilLla19}.
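When $a^k$ is small, resolvability can be checked directly from this definition. The following Python sketch (our own illustration, not code from the repository cited below) tests whether a candidate set $R$ resolves $\mathbb{H}_{k,a}$ by verifying that $\Phi$ is one-to-one:
\begin{verbatim}
# Brute-force resolvability check; only feasible when a**k is small.
from itertools import product

def hamming(u, v):
    """Hamming distance between two k-mers given as tuples."""
    return sum(ui != vi for ui, vi in zip(u, v))

def is_resolving(R, k, a):
    """True iff Phi(v) = (d(v,r))_{r in R} is one-to-one on {0,...,a-1}^k."""
    seen = set()
    for v in product(range(a), repeat=k):
        phi = tuple(hamming(v, r) for r in R)
        if phi in seen:   # two vertices share all distances to R
            return False
        seen.add(phi)
    return True

print(is_resolving([(0, 2), (1, 1)], k=2, a=3))          # False
print(is_resolving([(0, 2), (1, 1), (2, 2)], k=2, a=3))  # True
\end{verbatim}
The two candidate sets used here reappear in the illustrative example of \cref{sec:main}.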
\begin{figure}[h!]
\centering
\includegraphics[width = 0.33\textwidth]{H13.pdf}\includegraphics[width = 0.33\textwidth]{H23.pdf}\includegraphics[width = 0.33\textwidth]{H33.pdf}
\caption{Visual representation of $\mathbb{H}_{1,3}$, $\mathbb{H}_{2,3}$, and $\mathbb{H}_{3,3}$. Blue-colored vertices form minimal resolving sets in their corresponding Hamming graph.}
\label{fig:HamResEx}
\end{figure}
The metric dimension of $\mathbb{H}_{k,a}$, which we denote $\beta(\mathbb{H}_{k,a})$, is defined as the size of a minimal resolving set in this graph~\cite{HarMel76,Sla75}. For instance, $\beta(\mathbb{H}_{1,a})=(a-1)$ because $\mathbb{H}_{1,a}$ is isomorphic to $K_a$, the complete graph on $a$ vertices~\cite{chartrand2000resolvability}. Unfortunately, computing the metric dimension of an arbitrary graph is a well-known NP-complete problem~\cite{Coo71,GarJoh79,KhuRagRos96}, and it remains unknown if this complexity persists when restricted to Hamming graphs. In fact, the metric dimension of hypercubes is only known up to dimension $k=10$~\cite{HarMel76}, and values have been conjectured only up to dimension $k=17$~\cite{MlaKraKovEtAl12}---see OEIS sequence A303735 for further details~\cite{OEIS19}.
\newpage
Integer linear programming (ILP) formulations have been used to search for minimal resolving sets~\cite{chartrand2000resolvability,currie2001metric}. In the context of Hamming graphs, a potential resolving set $R$ is encoded by a binary vector $y$ of dimension $a^k$ such that $y_j=1$ if $j\in R$ and $y_j=0$ if $j\in\mathbb{V}\setminus R$. One can then search for a minimal resolving set for $\mathbb{H}_{k,a}$ by solving the ILP~\cite{chartrand2000resolvability}:
\begin{alignat}{2}
&\min\limits_y \; & &\sum_{j\in\mathbb{V}} y_j \notag \\
&\text{subject to} \; & &\sum_{j\in\mathbb{V}} |d(u,j)-d(v,j)| \cdot y_j \ge 1, \; \forall u\ne v\in\mathbb{V} \\
& & & y_j \in \{0,1\}, \; \forall j\in\mathbb{V}. \notag
\label{eq:oldILP}
\end{alignat}
The first constraint ensures that for all pairs of different vertices $u$ and $v$, there is some $j\in R$ such that $|d(u,j)-d(v,j)| > 0$, hence $R$ resolves $\mathbb{H}_{k,a}$. The objective penalizes the size of the resolving set. (A variant due to~\cite{currie2001metric} is similar but stores $a^k$ copies of a binary version of the distance matrix of the graph.) One downside of this formulation is that forming the distance matrix of $\mathbb{H}_{k,a}$ requires $\mathcal{O}(a^{2k})$ storage, as well as significant computation. Moreover, standard approaches to reduce the computation below $\mathcal{O}(a^{2k})$, such as fast multipole methods~\cite{greengard1987fast} and kd-trees~\cite{bentley1975multidimensional}, do not obviously apply. Even if one could compute all pairwise distances between nodes, simply storing the distance matrix is impractical. To fix ideas, the graph $\mathbb{H}_{8,20}$---which is associated with octapeptides (see \cref{sec:protein_representation})---has $20^8$ nodes, so storing the distance matrix with $\log_2(8)=3$ bits per entry and taking advantage of symmetry would require $3{20^8\choose 2}$ bits, or approximately a prohibitive 123 exabytes.
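The 123-exabyte figure is easy to reproduce (a quick arithmetic check of the estimate above, not code from the references):
\begin{verbatim}
# 3 bits per entry times C(20**8, 2) off-diagonal entries (by symmetry).
from math import comb

bits = 3 * comb(20**8, 2)
print(bits / 8 / 1e18)   # ~122.9 exabytes
\end{verbatim}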
Due to the above difficulties, other efforts have focused on finding small resolving sets rather than minimal ones. When $a^k$ is small, resolving sets for $\mathbb{H}_{k,a}$ may be determined using the so-called Information Content Heuristic (ICH) algorithm~\cite{HauSchVie12}, or a variable neighborhood search algorithm~\cite{MlaKraKovEtAl12}. Both approaches quickly become intractable with increasing $k$. However, the highly symmetric nature of Hamming graphs can be taken advantage of to overcome this problem. Indeed, recent work~\cite{TilLla19} has shown that $\beta(\mathbb{H}_{k,a})\le\beta(\mathbb{H}_{k-1,a})+\lfloor a/2\rfloor$; in particular, $\beta(\mathbb{H}_{k,a})\le(k-1)\lfloor a/2\rfloor+(a-1)$ i.e., just $\mathcal{O}(k)$ nodes are enough to resolve all the $a^k$ nodes in $\mathbb{H}_{k,a}$. Moreover, one can find a resolving set of size $\mathcal{O}(k)$ in only $\mathcal{O}(ak^2)$ time~\cite{TilLla19}.
This manuscript is based on the recent Bachelor's thesis~\cite{Lai19}, and has two overarching goals. First, it aims to develop practical methods for certifying the resolvability, or lack thereof, of subsets of nodes in arbitrary Hamming graphs. So far, this has been addressed for hypercubes in the literature~\cite{Beardon:2013} but remains unexamined for arbitrary values of the parameter $a$. While our work does not directly address the problem of searching for minimal resolving sets, verifying resolvability is a key component of any such search and may shed new light on the precise metric dimension of $\mathbb{H}_{k,a}$ in future investigations. Second, this paper aims also to exploit said characterization to remove unnecessary nodes---if any---in known resolving sets. This problem, which is infeasible by brute force when $a^k$ is large, has not received any attention in the literature despite being crucial for the embedding of $\hbox{$k$-mers}$ into the Euclidean space of the lowest possible dimension.
The paper is organized as follows. Our main theoretical results are presented first in~\cref{sec:main}. \cref{thm:Az=0} provides the foundation from which we address the problem of verifying resolvability in Hamming graphs and implies a new characterization of resolvability of hypercubes (\cref{cor:symp_system}). An illustrative example shows the utility of \cref{thm:Az=0} but raises several practical challenges in its implementation on large Hamming graphs. \Cref{sec:grobner} describes a computationally demanding verification method based on Gr\"obner bases that is nevertheless more efficient than the brute force approach and determines with certainty whether or not a set of nodes in $\mathbb{H}_{k,a}$ is resolving. Computational issues are addressed in~\cref{sec:ILP} with a novel ILP formulation of the problem. This approach is fast but stochastic and hence has the potential to produce false positives or false negatives. \Cref{sec:complexity_experiments} compares the run time of these methods against a brute force approach across small Hamming graphs. Combining the techniques from sections~\ref{sec:grobner} and~\ref{sec:ILP}, \cref{sec:protein_representation} presents a simple approach to discovering and removing redundant nodes in a given resolving set. This approach allows us to improve on previous bounds on the metric dimension of the Hamming graph $\mathbb{H}_{8,20}$. Finally, two appendices provide background information about Gr\"obner bases and linear programming.
All code used in this manuscript is available on GitHub (\url{https://github.com/hamming-graph-resolvability/Hamming_Resolvability}).
\section{Main results}
\label{sec:main}
In what follows ${\hbox{Tr}}(A)$ denotes the trace of a square matrix $A$, $B'$ the transpose of a matrix or vector $B$, and ${\hbox{vec}}(C)$ the column-major ordering of a matrix $C$ i.e. the row vector obtained by appending from left to right the entries in each column of $C$. For instance:
\[{\hbox{vec}}\left(\left[\begin{array}{cc} a & b \\ c & d\end{array}\right]\right)=(a,c,b,d).\]
In addition, $\bar D$ denotes the flip of the entries in a binary matrix (or vector) $D$, that is 0 is mapped to 1, and vice versa.
The one-hot encoding of a $\hbox{$k$-mer}$ $v$ is defined as the binary matrix $V$ of dimension $(a\times k)$ such that $V[i,j]=1$ if and only if $(i-1)=v[j]$ (the offset in $i$ is needed since the reference alphabet is $\{0,...,a-1\}$ instead of $\{1,\ldots,a\}$). Here, $V[i,j]$ denotes the entry in row-$i$ and column-$j$ of the matrix $V$, and similarly $v[j]$ denotes the $j$-th coordinate of the vector $v$. We also follow the convention of capitalizing $\hbox{$k$-mer}$ names to denote their one-hot encodings.
Our first result links one-hot encodings of $\hbox{$k$-mer}$s with their Hamming distance. Note this result applies to any alphabet size, not just binary.
\begin{lemma}\label{lem:UtV}
If $u,v$ are $\hbox{$k$-mers}$ with one-hot encodings $U,V$, respectively, then $d(u,v)=k-{\hbox{Tr}}(U'V)$; in particular, $d(u,v)={\hbox{Tr}}(U'\bar V)$.
\end{lemma}
\begin{proof}
Let $U_i$ and $V_i$ be the $i$-th column of $U$ and $V$, respectively. Clearly, if $u[i]=v[i]$ then $\langle U_i,V_i\rangle = 1$, and if $u[i]\ne v[i]$ then $\langle U_i,V_i\rangle = 0$, because all but one of the entries in $U_i$ and $V_i$ vanish and the non-vanishing entries are equal to 1. As a result, ${\hbox{Tr}}(U'V)=\sum_{i=1}^k\langle U_i,V_i\rangle$ counts the number of positions where $u$ and $v$ are equal; in particular, $d(u,v)=k-{\hbox{Tr}}(U'V)$. Finally, observe that if $1^{a\times k}$ denotes the $(a\times k)$ matrix with all entries equal to 1 then ${\hbox{Tr}}(U'1^{a\times k})=k$ because every row of $U'$ has exactly one 1 and all other entries vanish. As a result, $d(u,v)={\hbox{Tr}}(U'(1^{a\times k}-V))={\hbox{Tr}}(U'\bar V)$, as claimed.
\end{proof}
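\cref{lem:UtV} is straightforward to sanity-check numerically. The following sketch assumes NumPy, with a \texttt{one\_hot} helper of our own naming:
\begin{verbatim}
import numpy as np

def one_hot(v, a):
    """(a x k) one-hot encoding V with V[i, j] = 1 iff i == v[j]."""
    V = np.zeros((a, len(v)), dtype=int)
    V[v, np.arange(len(v))] = 1
    return V

a, k = 4, 6
rng = np.random.default_rng(0)
u, v = rng.integers(a, size=k), rng.integers(a, size=k)
U, V = one_hot(u, a), one_hot(v, a)

d = int((u != v).sum())                # Hamming distance
assert d == k - np.trace(U.T @ V)      # d(u,v) = k - Tr(U'V)
assert d == np.trace(U.T @ (1 - V))    # d(u,v) = Tr(U' Vbar)
\end{verbatim}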
We can now give a necessary and sufficient condition for a subset of nodes in an arbitrary Hamming graph to be resolving.
\begin{theorem}\label{thm:Az=0}
Let $v_1,\ldots,v_n$ be $n\ge1$ $\hbox{$k$-mers}$ and $V_1,\ldots,V_n$ their one-hot encodings, respectively, and define the $(n\times ak)$ matrix with rows
\begin{equation}
A := \left(\begin{array}{c}
{\hbox{vec}}(V_1) \\
\vdots\\
{\hbox{vec}}(V_n)
\end{array}\right).
\label{def:A}
\end{equation}
Then $R:=\{v_1,\ldots,v_n\}$ resolves $\mathbb{H}_{k,a}$ if and only if $0$ is the only solution to the linear system $Az=0$, with $z$ a column vector of dimension $ak$, satisfying the following constraints: if $z$ is parsed into $k$ consecutive but non-overlapping subvectors of dimension $a$, namely $z=((z_1,\ldots, z_a), (z_{a+1},\ldots, z_{2a}), ... , (z_{(k-1)a+1},\ldots,z_{ka}))'$, then each subvector is the difference of two canonical vectors.
\end{theorem}
\begin{proof}
Before showing the theorem observe that, for any pair of matrices $A$ and $B$ of the same dimension, ${\hbox{Tr}}(A'B)=\langle{\hbox{vec}}(A),{\hbox{vec}}(B)\rangle$, where $\langle\cdot,\cdot\rangle$ is the usual inner product of real vectors.
Consider $\hbox{$k$-mers}$ $x$ and $y$, and let $X$ and $Y$ be their one-hot encodings, respectively. Due to \cref{lem:UtV}, $d(v_i,x)=d(v_i,y)$ if and only if ${\hbox{Tr}}(V_i'(X-Y))=0$ i.e. $\langle{\hbox{vec}}(V_i),{\hbox{vec}}(X-Y)\rangle=0$. As a result, the set $R$ does not resolve $\mathbb{H}_{k,a}$ if and only if there are distinct $\hbox{$k$-mers}$ $x$ and $y$ such that $Az=0$, where $z:={\hbox{vec}}(X)-{\hbox{vec}}(Y)\ne0$. Note however that each column of $X$ and $Y$ equals a canonical vector in $\mathbb{R}^a$; in particular, if we parse ${\hbox{vec}}(X)$ and ${\hbox{vec}}(Y)$ into $k$ subvectors of dimension $a$ as follows: ${\hbox{vec}}(X)=(x_1,\ldots,x_k)$ and ${\hbox{vec}}(Y)=(y_1,\ldots,y_k)$, then $z=(x_1-y_1,\ldots,x_k-y_k)$ with $x_i'$ and $y_i'$ canonical vectors in $\mathbb{R}^a$. This shows the theorem.
\end{proof}
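In code, the matrix $A$ of \cref{thm:Az=0} is just a stack of column-major (Fortran-order) vectorizations. A sketch reusing the \texttt{one\_hot} helper above:
\begin{verbatim}
import numpy as np

def build_A(R, a):
    """Rows are vec(V_i), the column-major orderings of the encodings."""
    return np.array([one_hot(np.asarray(r), a).ravel(order="F")
                     for r in R])

print(build_A([(0, 2), (1, 1)], a=3))
# [[1 0 0 0 0 1]
#  [0 1 0 0 1 0]]   (the matrix A_0 of the example below)
\end{verbatim}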
\subsection{Illustrative Example}\label{subsec:Illustrative} In $H_{2,3}$ consider the set of nodes $R_0=\{02,11\}$. From~\cref{thm:Az=0}, $R_0$ resolves $H_{2,3}$ if and only if $A_0z=0$, with
\begin{equation}
A_0 = \begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 1 \\
0 & 1 & 0 & 0 & 1 & 0
\end{bmatrix},
\label{def:A0}
\end{equation}
has no non-trivial solution $z$ which satisfies the other constraints in the theorem when writing $z = \big((z_1,z_2,z_3),(z_4,z_5,z_6)\big)'$. Something useful to note about this decomposition is that if a subvector of $z$ has two identical entries then, since here $a=3$, all the entries in that subvector must vanish.
Note that $A_0$ is already in its reduced row echelon form~\cite{Olver:2018}, and has two pivots: $z_1 = -z_6$ and $z_2=-z_5$. Seeking non-trivial solutions to the constrained linear system, we examine permissible values for $z_5$ and $z_6$:
\begin{itemize}
\item[(a)] If $z_5=-1$ then we must have $(z_4,z_6)\in\{(0,1),(1,0)\}$. Furthermore, if $z_6=1$ then $(z_1,z_2,z_3)=(-1,1,0)$, but if $z_6=0$ then $(z_1,z_2,z_3)=(0,1,-1)$. Consequently, $z=(-1,1,0,0,-1,1)$ and $z=(0,1,-1,1,-1,0)$ solve the constrained system.
\item[(b)] Similarly, we find that $z=(-1,0,1,-1,0,1)$ and $z=(1,0,-1,1,0,-1)$ solve the constrained system when we assume that $z_5=0$.
\item[(c)] Finally, $z=(1,-1,0,0,1,-1)$ and $z=(0,-1,1,-1,1,0)$ are also found to solve the constrained system when we impose that $z_5=1$.
\end{itemize}
Having found at least one non-trivial solution to the constrained linear system, we conclude that $R_0$ does not resolve $H_{2,3}$. (The found $z$'s are in fact the only non-trivial solutions.)
From the proof of \cref{thm:Az=0}, we can also determine pairs of vertices in $H_{2,3}$ which are not resolved by $R_0$. Indeed, using the non-trivial solutions found above we find that
$12$ and $01$, $21$ and $10$, and $00$ and $22$ are the only pairs of nodes in $H_{2,3}$ which are unresolved by $R_0$. In particular, because the two nodes in each pair are at different distances from $22$, $R_1:=R_0\cup\{22\}$ resolves $H_{2,3}$.
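Because each subvector of $z$ ranges over the seven differences of two canonical vectors in $\mathbb{R}^3$, the constrained system can also be enumerated exhaustively; a sketch assuming NumPy:
\begin{verbatim}
import numpy as np
from itertools import product

A0 = np.array([[1, 0, 0, 0, 0, 1],
               [0, 1, 0, 0, 1, 0]])
eye = np.eye(3, dtype=int)
diffs = {tuple(e1 - e2) for e1 in eye for e2 in eye}  # incl. zero vector

sols = [s1 + s2 for s1, s2 in product(diffs, diffs)
        if not np.any(A0 @ np.array(s1 + s2))]
print([z for z in sols if any(z)])   # the six non-trivial solutions above
\end{verbatim}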
We can double-check this last assertion noticing that the reduced echelon form of the matrix $A_1$ associated with $R_1$ is
\begin{equation}
\hbox{rref}(A_1) = \begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 1 \\
0 & 1 & 0 & 0 & 1 & 0 \\
0 & 0 & 1 & 0 & 0 & 1
\end{bmatrix}.
\label{def:A1}
\end{equation}
In particular, $z_1 = -z_6$, $z_2 = -z_5$, and $z_3 = -z_6$. The first and third identity imply that $z_1=z_3$, hence $(z_1,z_2,z_3)=(0,0,0)$. This together with the first and second identity now implies that $z_5=z_6=0$, and the block constraint $z_4+z_5+z_6=0$ then forces $(z_4,z_5,z_6)=(0,0,0)$. So, as anticipated, $z=0$ is the only solution to the constrained linear system $A_1z=0$.
In general, if the reduced row echelon form of the matrix given by \cref{thm:Az=0} has $j$ free variables, then there could be up to $3^j$ possible solutions to the associated linear system, each of which would have to be checked for the additional constraints. This exhaustive search could be very time consuming if not impossible. Handling the linear system constraints more systematically and efficiently is the motivation for sections~\ref{sec:grobner} and~\ref{sec:ILP}.
\subsection{Specializations to Hypercubes} In~\cite{Beardon:2013} a necessary and sufficient condition for the resolvability of hypercubes is provided exploiting that $d(u,v)=\|u-v\|_2^2$ when $u$ and $v$ are binary $\hbox{$k$-mers}$. Next, we reproduce this result using our framework of one-hot encodings instead.
\begin{corollary} \cite[Theorem 2.2]{Beardon:2013}
Let $R=\{v_1,\ldots,v_n\}$ be a set of $n\ge1$ binary $\hbox{$k$-mers}$, and define the $(n\times k)$ matrix with rows
\[B := \left[\begin{array}{c}
v_1-\bar{v_1} \\
\vdots \\
v_n-\bar{v_n}
\end{array}\right].\]
Then, $R$ resolves $H_{k,2}$ if and only if $\hbox{ker}(B)\cap\{0,\pm1\}^k=\{0\}$.
\label{cor:mat_system}
\end{corollary}
\begin{proof}
Let
\[A= \left[\begin{array}{ccccc}
\vline & \vline & & \vline & \vline \\
A_1 & A_2 & \ldots & A_{2k-1} & A_{2k} \\
\vline & \vline & & \vline & \vline
\end{array}\right]\]
be the $(n\times 2k)$ matrix with columns $A_1,\ldots,A_{2k}$ given by \cref{thm:Az=0} for $R$. It follows that $R$ resolves $H_{k,2}$ if and only if $Az=0$, with $z=((x_1,y_1),\ldots,(x_k,y_k))'\in\{0,\pm1\}^{2k}$ and $(x_i+y_i)=0$ for each $i=1,\ldots,k$, has only a trivial solution. Note however that $Az=By$, where
\begin{eqnarray*}
B &:=&
\left[\begin{array}{ccc}
\vline & & \vline\\
(A_2-A_1) & \ldots & (A_{2k}-A_{2k-1}) \\
\vline & & \vline
\end{array}\right];\\
y &:=& (y_1,\ldots,y_k)'.
\end{eqnarray*}
Therefore $R$ is resolving if and only if $By=0$, with $y\in\{0,\pm1\}^k$, has only a trivial solution. But recall from~\cref{thm:Az=0} that the rows of $A$ are the column-major orderings of the one-hot encodings of the binary $k$-mers in $R$. In particular, using $\llbracket\cdot\rrbracket$ to denote Iverson brackets, we find that the row in $B$ associated with $v\in R$ is:
\[\Big(\llbracket v[1]=1\rrbracket-\llbracket v[1]=0\rrbracket,\ldots,\llbracket v[k]=1\rrbracket-\llbracket v[k]=0\rrbracket\Big)=v-\bar v,\]
from which the corollary follows.
\end{proof}
We can provide an even simpler characterization of sets of $\hbox{$k$-mers}$ that resolve the hypercube, provided that $1^k:=(1,\ldots,1)$ is one of them. This seemingly major assumption is only superficial. Indeed, hypercubes are transitive; that is, given any two binary $\hbox{$k$-mers}$ there is an automorphism (i.e., a distance preserving bijection $\sigma:\{0,1\}^k\to\{0,1\}^k$) that maps one into the other~\cite[\S3.1]{TilLla19}. Hence, given any set $R$ of binary $\hbox{$k$-mers}$ there is an automorphism $\sigma$ such that $1^k\in\sigma(R)$. In particular, because $R$ is resolving if and only if $\sigma(R)$ is resolving, one can assume without any loss of generality that $1^k$ is an element of $R$.
\begin{corollary}
Let $R=\{v_1,\ldots,v_n\}$ be a set of $n$ binary $\hbox{$k$-mers}$ such that $1^k\in R$, and define the $(n\times k)$ matrix with rows
\[C := \left[\begin{array}{c}
v_1 \\
\vdots \\
v_n
\end{array}\right].\]
Then, $R$ resolves $H_{k,2}$ if and only if $\hbox{ker}(C)\cap\{0,\pm1\}^k=\{0\}$.
\label{cor:symp_system}
\end{corollary}
\begin{proof}
Note that for all binary $\hbox{$k$-mer}$s $v$: $(v+\bar v)=1^k$; in particular, $(v-\bar v)=(2v-1^k)$. Hence, if $B$ is as given in~\cref{cor:mat_system} and $C$ as defined above then
\[Bz=0\hbox{ if and only if }Cz=\langle 1^k,z\rangle\left[\begin{array}{c}1/2\\\vdots\\1/2\end{array}\right].\]
But, because $1^k\in R$ and $2\cdot1^k-1^k=1^k$, one of the entries in $Bz$ equals $\langle 1^k,z\rangle$. Since the entries in $Cz$ are proportional to $\langle 1^k,z\rangle$, $Bz=0$ if and only if $Cz=0$, from which the corollary follows.
\end{proof}
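\cref{cor:symp_system} yields a particularly compact computational test for hypercubes. A brute-force sketch over $\{0,\pm1\}^k$, practical only for moderate $k$:
\begin{verbatim}
import numpy as np
from itertools import product

def resolves_hypercube(R):
    """Assumes 1^k is in R; rows of C are the k-mers themselves."""
    C = np.array(R)
    k = C.shape[1]
    return not any(np.all(C @ np.array(z) == 0)
                   for z in product((-1, 0, 1), repeat=k)
                   if any(z))

print(resolves_hypercube([(1, 1, 1), (1, 1, 0), (1, 0, 1)]))  # True
\end{verbatim}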
\section{Polynomial Roots Formulation}
\label{sec:grobner}
In this section, we express the constraints of the linear system in \cref{thm:Az=0} as roots of a multi-variable polynomial system, and we reveal various properties of this system which can drastically reduce the complexity of determining whether or not a subset of nodes resolves $\mathbb{H}_{k,a}$.
In what follows, for any given non-empty set $P$ of polynomials in a possibly multi-variable $z$, $\{P=0\}$ denotes the set of $z$'s such that $p(z)=0$, for each $p\in P$. Unless otherwise stated, we assume that $z$ has dimension $ka$, i.e. $z=(z_1,\ldots,z_{ka})$.
Consider the polynomial sets
\begin{eqnarray}
P_1 &:=& \Big\{z_i^3-z_i,\hbox{ for $i=1,\ldots,ka$}\Big\};\notag \\
P_2 &:=& \left\{\sum_{j=(i-1)a+1}^{ia}z_j,\hbox{ for $i=1,\ldots,k$}\right\};\\
P_3 &:=& \left\{\Big(2-\sum_{j=(i-1)a+1}^{ia}z_j^2\Big)\cdot\sum_{j=(i-1)a+1}^{ia}z_j^2,\hbox{ for $i=1,\ldots,k$}\right\}. \notag
\label{eq:P}
\end{eqnarray}
Our first result characterizes the constraints of the linear system in \cref{thm:Az=0} in terms of the roots of the above polynomials. Ahead, unless otherwise stated:
\begin{equation}
P := (P_1\cup P_2\cup P_3).
\label{def:P}
\end{equation}
\begin{lemma}
$z\in\{P=0\}$ if and only if when parsing $z$ into $k$ consecutive but non-overlapping subvectors of dimension $a$, each subvector is the difference of two canonical vectors.
\label{lem:polsys}
\end{lemma}
\begin{proof}
The polynomials in $P_1$ enforce that each entry in $z$ must be a ${-1}$, $0$, or $1$, while the polynomials in $P_2$ enforce that there is a $(-1)$ for every $1$ in each subvector of $z$. Finally, the polynomials in $P_3$ enforce that each subvector of $z$ has exactly two non-zero entries or no non-zero entries. Altogether, $z\in\{P=0\}$ if and only if each subvector is identically zero, or it has exactly one 1 and one $(-1)$ entry and all other entries vanish, i.e. each subvector of $z$ is the difference of two canonical vectors in $\mathbb{R}^a$.
\end{proof}
The following is now an immediate consequence of this lemma and~\cref{thm:Az=0}.
\begin{corollary}
Let $R$ be a set of nodes in $\mathbb{H}_{k,a}$ and $A$ the matrix given by equation~\cref{def:A}. Then, $R$ resolves $\mathbb{H}_{k,a}$ if and only if $\hbox{ker}(A)\cap\{P=0\}=\{0\}$.
\label{cor:KerCapP=0}
\end{corollary}
Our focus in what remains of this section is to better characterize the non-trivial roots of the polynomial system $\{P=0\}$. To do so, we rely on the concepts of polynomial ideals and (reduced) Gr\"obner bases, and the following fundamental result from algebraic geometry. For a primer to these and other concepts on which our results rely see~\cref{app:1}.
\begin{theorem} (Hilbert's Weak Nullstellensatz~\cite[\S4.1]{Cox_Little_OShea:2015}.)
\label{thm:weak_null}
For any non-empty finite set of polynomials $P$, $\{P=0\} = \emptyset$ if and only if $\{1\}$ is the reduced Gr{\"o}bner basis of $I(P)$, the ideal generated by $P$.
\end{theorem}
Define for each $i=1,\ldots,k$:
\begin{eqnarray}
\label{def:Bi}
B_i &:=& \Big\{z_j^3-z_j,\hbox{ for }j=(i-1)a+1,\ldots,ia\Big\}\\
&&\qquad\bigcup\left\{\sum_{j=(i-1)a+1}^{ia}z_j,\Big(2-\sum_{j=(i-1)a+1}^{ia}z_j^2\Big)\cdot\sum_{j=(i-1)a+1}^{ia}z_j^2\right\}.
\notag
\end{eqnarray}
Observe that $B_i$ is a set of polynomials in $(z_{(i-1)a+1},\ldots,z_{ia})$, i.e. the $i$-th subvector of $z$; in particular, each of these polynomials may be regarded as a function of $z$, and $B_1,\ldots,B_k$ partition $P$, i.e. $P=\sqcup_{i=1}^kB_i$. Accordingly, we call $B_i$ the $i$-th \underline{b}lock of $P$, and denote the reduced Gr\"obner basis of $B_i$ as $G_i$. The computational advantage of these observations is revealed by the following results.
\begin{lemma}
$G=\cup_{i=1}^kG_i$ is the reduced Gr\"obner basis of $P$ in equation~\cref{def:P}. Furthermore, $G_i$ may be obtained from $G_1$ using the change of variables:
\begin{equation}
(z_1,\ldots,z_a)\longrightarrow(z_{(i-1)a+1},\ldots,z_{ia}).
\label{ide:varchange}
\end{equation}
\label{lem:groeb_block}
\end{lemma}
\begin{proof}
The case with $k=2$ follows from~\cite[Proposition 2]{Cox_Little_OShea:2015} due to the fact that no variable and hence no polynomial is shared between the blocks of $P$. A straightforward inductive argument in $k\ge2$ then shows that $\cup_{i=1}^kG_i$ is the reduced Gr\"obner basis of $P$. Finally, note that $B_1$ is, up to the change of variables in equation~(\ref{ide:varchange}), identical to $B_i$; in particular, since Buchberger's algorithm (\cref{algo:1}) and the Gr\"obner basis reduction algorithm (\cref{algo:2}) build upon polynomial division, the reduced Gr\"obner basis of $B_i$ may be obtained from that of $B_1$ using the same change of variables.
\end{proof}
\begin{lemma}
The reduced Gr\"obner bases of $B_1$ under the lexicographic ordering is
\begin{equation}
G_1=\left\{\sum\limits_{i=1}^a z_i\right\}\bigcup_{2\le i\le a}\{z_i^3-z_i\}\bigcup_{2\le i<j\le a}\{z_i^2z_j+z_iz_j^2\}\bigcup_{2\le i<j<\ell\le a}\{z_iz_jz_\ell\}.
\end{equation}
\label{lem:G1explicit}
\end{lemma}
\begin{proof}
Let $G$ be the set of polynomials on the right-hand side above. Since $G$ depends on $a$ but not on the parameter $k$ of $\mathbb{H}_{k,a}$, and the identity for $a\in\{2,3,4\}$ can be checked using algorithms~\ref{algo:1} and~\ref{algo:2}, without loss of generality we assume in what follows that $a\geq5$.
Since reduced Gr\"obner basis are unique, it suffices to show that (i) $I(G)=I(B_1)$; and that for all $f,g\in G$: (ii) the reduction of $\hbox{Spoly}(f,g)$ by $G$ is 0; (iii) $LC(f)=1$; and (iv) if $f\in G\setminus\{g\}$ then no monomial of $f$ is divisible by $LM(g)$. We omit the tedious but otherwise straightforward verification of properties (ii) and (iv). Since property (iii) is trivially satisfied, it only remains to verify property (i).
To prove $I(G) = I(B_1)$, it suffices to show that $\{G=0\}=\{B_1=0\}$. Indeed, the polynomials of the form $z_iz_jz_\ell$ imply that if $z\in\{G=0\}$ then $(z_2,\ldots,z_{a})$ has at most two non-zero coordinates. In the case two of these coordinates are non-zero, say $z_i$ and $z_j$, the polynomials $z_i^3-z_i=z_i(z_i-1)(z_i+1)$, $z_j^3-z_j=z_j(z_j-1)(z_j+1)$, and $z_i^2z_j+z_iz_j^2=z_iz_j(z_i+z_j)$ imply that $(z_i,z_j)=(1,-1)$ or $(z_i,z_j)=(-1,1)$; in particular, because we must have $\sum_{\ell=1}^az_\ell=0$, $z_1=0$. Instead, if exactly one of the coordinates in $(z_2,\ldots,z_{a})$ is non-zero, say $z_j$, then the polynomial $\sum_{\ell=1}^az_\ell$ together with $z_j^3-z_j$ imply that $(z_1,z_j)=(1,-1)$ or $(z_1,z_j)=(-1,1)$. Finally, if $(z_2,\ldots,z_{a})=0$ then the polynomial $\sum_{\ell=1}^az_\ell$ implies that $z_1=0$. In all of these three exhaustive cases, it follows that $(z_1,\ldots,z_a)$ is identically zero, or it has exactly one 1 and one (-1) coordinate and all other coordinates vanish; in other words, $(z_1,\ldots,z_a)$ is a difference of two canonical vectors in $\mathbb{R}^a$. Since this is precisely the constraint imposed on this subvector of $z$ by the polynomials in $B_1$, we obtain that $\{G=0\}=\{B_1=0\}$ i.e. $I(G)=I(B_1)$.
\end{proof}
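For instance, for $a=3$ the statement of \cref{lem:G1explicit} can be confirmed directly with SymPy (a sketch with variable names of our choosing):
\begin{verbatim}
from sympy import symbols, groebner

z1, z2, z3 = symbols("z1 z2 z3")
q = z1**2 + z2**2 + z3**2
B1 = [z1**3 - z1, z2**3 - z2, z3**3 - z3,
      z1 + z2 + z3, (2 - q) * q]
G = groebner(B1, z1, z2, z3, order="lex")
print(G.exprs)
# Per the lemma (up to ordering):
# [z1 + z2 + z3, z2**3 - z2, z2**2*z3 + z2*z3**2, z3**3 - z3]
\end{verbatim}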
A minor issue for using the Weak Nullstellensatz in our setting is that the polynomials in $P$ have no constant terms; in particular, $0\in\{P=0\}$. To exclude this trivial root, observe that if $z\in\{P=0\}$ then $\sum_{j=(i-1)a+1}^{ia}z_j^2\in\{0,2\}$, for each $i=1,\ldots,k$. As a result, if $z$ is a non-trivial root of $\{P=0\}$ then $\sum_{j=1}^{ka}z_j^2=2i$ for some $i$. This motivates introducing the auxiliary polynomial:
\begin{equation}
f(z):=\Big(\sum_{j=1}^{ka}z_j^2\Big),
\label{def:f(z)}
\end{equation}
so that $R$ resolves $\mathbb{H}_{k,a}$ if and only if $\hbox{ker}(A)\cap\{P=0\}\cap\{f-2i=0\}=\emptyset$ for all $i=1,\ldots,k$.
\begin{lemma}
Consider a (finite) reduced Gr\"obner basis $G \neq \{1\}$ and a polynomial $f$. If $f \xrightarrow{G} r$ then, for each $c\in\mathbb{R}$, $(f+c) \xrightarrow{G} (r+c)$.
\end{lemma}
\begin{proof}
Let $G=\{g_1,\ldots,g_n\}$. Without loss of generality fix a constant $c\ne0$. Note that $G$ contains no constant polynomial (except for $0$) because $G \neq \{1\}$ hence $1\notin G$. As a result, the leading monomial of each $g_i$ does not divide $c$, hence $c\xrightarrow{G}c$. Since $f \xrightarrow{G} r$, and reductions by a Gr\"obner basis are unique, $(f+c) \xrightarrow{G} (r+c)$ as claimed.
\end{proof}
The following is now a direct consequence of the lemma.
\begin{corollary}
Let $G$ be the reduced Gr\"obner basis of $P$ in equation~(\ref{def:P}). If $f$ is as defined in~\cref{def:f(z)} and $f \xrightarrow{G} r$ then, for each $i = 1,2,\ldots,k$, $(f-2i)\xrightarrow{G} (r-2i)$.
\label{cor:rem}
\end{corollary}
The results from this section allow for a computational method for checking resolvability on $\mathbb{H}_{k,a}$. Lemmas~\ref{lem:groeb_block} and~\ref{lem:G1explicit} are used to construct the reduced Gr\"obner basis $G$ directly, and~\cref{cor:rem} efficiently removes the trivial solution from consideration in the criteria provided by~\cref{thm:weak_null}. Altogether these results significantly reduce the number of polynomial reductions required to assess the resolvability of a set of nodes on $\mathbb{H}_{k,a}$.
\subsection{Illustrative Example (Continuation)} We saw in~\Cref{subsec:Illustrative} that $R_0=\{02,11\}$ does not resolve $H_{2,3}$ whereas $R_1=R_0\cup\{22\}=\{02,11,22\}$ does. We can double-check these assertions using~\cref{cor:KerCapP=0} as follows.
First, recall that for $H_{2,3}$ the variable $z$ is 6-dimensional and should be decomposed in the form $z=\big((z_1,z_2,z_3),(z_4,z_5,z_6)\big)$. Next, the kernel of the matrix given by the corollary for $R_0$ (denoted as $A_0$, see Eq.~\cref{def:A0}) is described by the linear system:
\[\left\{\begin{array}{rcl}
z_1+z_6 &=& 0;\\
z_2+z_5 &=& 0.
\end{array}\right.\]
On the other hand, the roots in $\{P=0\}$ given by~\cref{cor:KerCapP=0} correspond to the polynomial system:
\[\left\{\begin{array}{ccl}
0 &=& z_1^3-z_1;\\
0 &=& z_2^3-z_2;\\
0 &=& z_3^3-z_3;\\
0 &=& z_1 + z_2 + z_3;\\
0 &=& (2-z_1^2-z_2^2-z_3^2)\cdot(z_1^2+z_2^2+z_3^2);\\
\hline
0 &=& z_4^3-z_4;\\
0 &=& z_5^3-z_5;\\
0 &=& z_6^3-z_6;\\
0 &=& z_4 + z_5 + z_6;\\
0 &=& (2-z_4^2-z_5^2-z_6^2)\cdot(z_4^2+z_5^2+z_6^2);
\end{array}\right.\]
where the horizontal line distinguishes between the first and second block of $P$ (see Eq.~(\ref{def:Bi})). Finally, recall the auxiliary polynomial given by equation~\cref{def:f(z)}:
\[f(z)=z_1^2+z_2^2+z_3^2+z_4^2+z_5^2+z_6^2.\]
Assuming the lexicographic order over the monomials, one can determine that the reduced Gr\"obner basis of $\{A_0z\}\cup P\cup\{f-2\}$ is $\{1\}$; in particular, $\hbox{ker}(A_0)\cap\{P=0\}\cap\{f=2\}=\emptyset$. On the other hand, because the reduced Gr\"obner basis of $\{A_0z\}\cup P\cup\{f-4\}$ is $\{z_1+z_6, z_2+z_5, z_3-z_5-z_6, z_4+z_5+z_6, z_5^2+z_5z_6+z_6^2-1, z_6^3 - z_6\}$, it follows that $\hbox{ker}(A_0)\cap\{P=0\}\cap\{f=4\}\ne\emptyset$ i.e. $\hbox{ker}(A_0)\cap\{P=0\}$ has a non-trivial solution. Consequently, $R_0$ does not resolve $H_{2,3}$.
To confirm that $R_1=R_0\cup\{22\}$ does resolve $H_{2,3}$, note that we only need to add the equation $z_3+z_6=0$ to the previous linear system (the full linear system is now described by the matrix $A_1$, see Eq.~\cref{def:A1}). Using our code, we find that $\hbox{ker}(A_1)\cap\{P=0\}\cap\{f=2\}=\emptyset$ and also that $\hbox{ker}(A_1)\cap\{P=0\}\cap\{f=4\}=\emptyset$ because the associated reduced Gr\"obner bases are both equal to $\{1\}$. As a result, $\hbox{ker}(A_1)\cap\{P=0\}$ has no non-trivial solution, i.e. $R_1$ resolves $H_{2,3}$.
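The Gr\"obner computations in this example take only a few lines with SymPy; the following is a hedged sketch (the repository linked in the introduction contains the full implementation):
\begin{verbatim}
from sympy import symbols, groebner

z = symbols("z1:7")

def block(zs):
    s, q = sum(zs), sum(t**2 for t in zs)
    return [t**3 - t for t in zs] + [s, (2 - q) * q]

P = block(z[0:3]) + block(z[3:6])
f = sum(t**2 for t in z)
lin0 = [z[0] + z[5], z[1] + z[4]]       # rows of A_0 z
lin1 = lin0 + [z[2] + z[5]]             # rows of A_1 z

for lin, name in [(lin0, "R0"), (lin1, "R1")]:
    trivial = all(
        groebner(lin + P + [f - 2 * i], *z, order="lex").exprs == [1]
        for i in (1, 2))
    print(name, "resolves" if trivial else "does not resolve")
# Prints: R0 does not resolve / R1 resolves
\end{verbatim}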
\section{Novel Integer Linear Programming Formulation}
\label{sec:ILP}
For some background about Integer Linear Programming (ILP), see~\cref{app:2}.
In contrast to the ILP approaches of \cite{chartrand2000resolvability,currie2001metric}, our ILP formulation checks the resolvability of a given set rather than searching for minimal resolving sets. Furthermore, it does not pre-compute the distance matrix of a Hamming graph. As before, fix $\mathbb{H}_{k,a}$ and a subset of vertices $R$. Letting $z=(z_1,\ldots,z_{ka})$ and using the polynomial set $P$ from equation~(\ref{def:P}), we leverage~\cref{lem:polsys} (with $A$ as in equation~(\ref{def:A}), each row corresponding to a vertex in $R$) to reformulate~\cref{thm:Az=0} as follows:
\begin{equation}
R \text{ does \underline{not} resolve } \mathbb{H}_{k,a} \quad \iff \quad \exists z \neq 0 \;\text{such that}\; z\in\hbox{ker}(A)\cap\{P=0\}.
\label{eq:resolveIff}
\end{equation}
To formulate this as an ILP, we use the following result.
\begin{lemma}
Define
\[\mathcal{I}:= \bigcap_{i=1}^k\left\{z\in\mathbb{Z}^{ak}\hbox{ such that }\sum\limits_{j=(i-1)a+1}^{ia} z_j = 0\hbox{ and } \sum\limits_{j=(i-1)a+1}^{ia} |z_j| \le 2\right\}.\]
Then $\mathcal{I}$ is the intersection of a closed convex polyhedron with the integer lattice $\mathbb{Z}^{ak}$, and $z\in\{P=0\}$ if and only if $z\in\mathcal{I}$.
\label{lemma:ILP}
\end{lemma}
\begin{proof}
Since the intersection of convex sets is convex, and the intersection of a finite number of polyhedra is a polyhedron, it follows from standard arguments that
\begin{eqnarray*}
\mathcal{J}_1 &:=& \bigcap_{i=1}^k\left\{z\in\mathbb{R}^{ak}\hbox{ such that }\sum\limits_{j=(i-1)a+1}^{ia} z_j = 0\right\};\\
\mathcal{J}_2 &:=& \bigcap_{i=1}^k\left\{z\in\mathbb{R}^{ak}\hbox{ such that } \sum\limits_{j=(i-1)a+1}^{ia} |z_j| \le 2\right\};
\end{eqnarray*}
are convex subsets of $\mathbb{R}^{ak}$, and $\mathcal{J}_1$ is a polyhedron. We claim that $\mathcal{J}_2$ is also a polyhedron, for which it suffices to check that each set in the intersection that defines it is a polyhedron. Without loss of generality, we do so only for the case with $i=1$. Indeed, because $\{z\in\mathbb{R}^{ak}\hbox{ such that } \sum_{j=1}^a |z_j| \le 2\}$ is invariant under arbitrary coordinate sign flips, we have that
\[\left\{z\in\mathbb{R}^{ak}\hbox{ such that } \sum_{j=1}^a |z_j| \le 2\right\}=\bigcap_{w\in\{-1,1\}^{ak}}\left\{z\in\mathbb{R}^{ak}\hbox{ such that } \sum_{j=1}^a w_jz_j \le 2\right\},\]
which implies that $\mathcal{J}_2$ is also a polyhedron. Since $\mathcal{I}=(\mathcal{J}_1\cap\mathcal{J}_2\cap\mathbb{Z}^{ak})$, the first part of the lemma follows.
From the proof of~\cref{lem:polsys} it is immediate that $\{P=0\}\subset\mathcal{I}$. To show the converse inclusion, observe that $\{P=0\}=\cap_{i=1}^k\{B_i=0\}$ where the $B_i$'s are as defined in equation~(\ref{def:Bi}). To complete the proof, it suffices therefore to show that $\mathcal{I}_i\subset\{B_i=0\}$, where
\[\mathcal{I}_i:=\left\{z\in\mathbb{Z}^{ak}\hbox{ such that }\sum\limits_{j=(i-1)a+1}^{ia} z_j = 0\hbox{ and } \sum\limits_{j=(i-1)a+1}^{ia} |z_j| \le 2\right\}.\]
Indeed, if $z\in\mathcal{I}_1$ then, because the coordinates of $z$ are integers, the condition $\sum_{j=1}^a |z_j| \le 2$ implies that $|z_j|\in\{0,1,2\}$ for $j=1,\ldots,a$. If $|z_j|=2$ for some $j$ then $\sum_{j=1}^az_j=\pm2$, which is not possible. Thus $z_j\in\{0,\pm1\}$ for $j=1,\ldots,a$; in particular, $z_j^3-z_j=0$. On the other hand, the condition $\sum_{j=1}^az_j=0$ implies that the number of 1's and (-1)'s in $(z_1,\ldots,z_a)$ balance out; in particular, since $\sum_{j=1}^a |z_j| \le 2$, either $(z_1,\ldots,z_a)$ vanishes, or it has exactly one 1 and one (-1) entry and all other entries vanish; in particular, $(2-\sum_{j=1}^az_j^2)\cdot\sum_{j=1}^az_j^2=0$. Thus, $z\in\{B_1=0\}$. The case for $i>1$ is of course the same.
\end{proof}
\begin{remark}
With current ILP solvers, one can impose that $z\in\{0,\pm1\}^{ak}$ simply as $|z_i|\le1$ for $i=1,\ldots,ak$. On the other hand, while a constraint like $\sum_{j=1}^a |z_j| \le 2$ is clearly polyhedral, it is not in the form of an affine equality or inequality suitable for ILP solvers. Nevertheless, standard reformulation techniques can convert this into a set of affine equalities and inequalities in a higher dimensional space. For example, in the product space with variables $(\tilde{z},w)$, we can write the constraint as $\sum_{j=1}^a w_j \le 2$ and $|\tilde{z}_j| \le w_j$ (i.e., $\tilde{z}_j \le w_j$ and $-\tilde{z}_j \le w_j$), which leads to an equivalent formulation of the original ILP. One may handle such reformulations automatically using the Matlab package \texttt{CVX}~\cite{cvx}.
\end{remark}
It only remains to encode the fact that we look for a \emph{nonzero} root in $\{P=0\}$, which we do via the ILP in the following theorem:
\begin{theorem}
A subset of vertices $R$ is \underline{not} resolving on $\mathbb{H}_{k,a}$ if and only if the solution to the following ILP is less than zero:
\begin{alignat}{2}
\label{eq:newILP}
&\min_{z\in\mathbb{R}^{ak}} \; & &\sum_{j=1}^{ak} 2^j z_j \notag \\
&\textnormal{subject to} \; & & Az=0 \;\textnormal{and}\; z\in\mathcal{I},
\end{alignat}
where $A$ is defined in equation~\cref{def:A}.
\end{theorem}
\begin{proof}
Using equation~\cref{eq:resolveIff} and \cref{lemma:ILP}, it remains to show that the objective function is less than zero if and only if there is a non-zero feasible $z$. Suppose there is no non-zero feasible $z$. Clearly $z=0$ is feasible, hence it is the only feasible point for the ILP, and the objective value is zero. Now suppose there is some non-zero feasible $z$. Let $j'$ be the largest index with $z_{j'}\neq0$. Then because $\sum_{j=1}^{j'-1} 2^j < 2^{j'}$, and because each entry is bounded $|z_j|\le 1$, the objective value at this $z$ is non-zero. If the objective value is negative, this proves the value of the ILP is negative; if the objective value is positive, then observe that $(-z)$ is also feasible and has a negative objective value, and hence the value of the ILP is negative.
\end{proof}
\begin{remark}
If the solution to the ILP is less than zero and hence $R$ is not a resolving set, then each optimal vector $z$ is the difference of the column-major ordering of the one-hot encodings of two $\hbox{$k$-mers}$ which are not resolved by $R$; in particular, a vector that resolves these $\hbox{$k$-mers}$ needs to be added to $R$ to resolve $\mathbb{H}_{k,a}$.
\end{remark}
\subsection{Practical formulations and roundoff error}
\label{sec:ILP_practical}
When $ak$ is small, it is feasible to directly solve the ILP in equation \cref{eq:newILP}. One issue with larger values of $ak$, besides an obvious increase in run-time, is that the values of $2^j$ in the objective function quickly lead to numerical overflow. A simple fix is to replace each coefficient $c_j = 2^j$ with an independently drawn realization of a standard normal random variable $\mathcal{N}(0,1)$. Since these new coefficients are independent of the feasible set, if the latter is truly larger than $\{0\}$, the probability that the entire feasible set is in the null-space of the linear function $\sum_{j=1}^{ak}c_jz_j$ is zero. Of course, again due to finite machine precision, this otherwise almost surely exact method may only be approximate. Admittedly, when running the ILP with the random coefficients $c_j$'s, finding an undoubtedly negative solution to the ILP would certify that the set $R$ is not resolving. However, if the solution is just slightly negative or vanishes within machine precision, the assessment about $R$ should be taken with a grain of salt. In this case, one should draw a new set of random coefficients and re-run the ILP to reassess the resolvability of $R$.
Another consideration is that the ILP solver wastes time finding a feasible point with the smallest possible objective, when we only care whether there is a feasible point with objective smaller than $0$. Thus we could solve the feasibility problem
\begin{alignat*}{2} \label{eq:feas1}
&\textnormal{Find} \; & &z\in\mathbb{R}^{ak} \notag \\
&\textnormal{subject to} \; & & Az=0 \;\textnormal{and}\; z\in\mathcal{I} \;\textnormal{and}\; \langle c, z \rangle < 0
\end{alignat*}
where $c_j = 2^j$ or $c_j \sim \mathcal{N}(0,1)$ as discussed above. (Feasibility problems can be encoded in software by minimizing the $0$ function.) Unfortunately this is not an ILP because $\{ z \mid \langle c, z \rangle <0 \}$ is not a closed set. We can partially ameliorate this by solving
\begin{alignat}{2} \label{eq:feas2}
&\textnormal{Find} \; & &z\in\mathbb{R}^{ak} \notag \\
&\textnormal{subject to} \; & & Az=0 \;\textnormal{and}\; z\in\mathcal{I} \;\textnormal{and}\; \langle c, z \rangle \le -\delta
\end{alignat}
where $\delta>0$ is a small number (our code uses $\delta=10^{-3}$). Finding a feasible point $z$ is then proof that the set $R$ does not resolve $\mathbb{H}_{k,a}$. If the solver says the above problem is infeasible, it could be that $\delta$ was too large and hence the computation was inconclusive. In this case, one could run the slower program \cref{eq:newILP}.
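For concreteness, here is a hedged \texttt{gurobipy} sketch of program~\cref{eq:feas2}, including the $(\tilde z,w)$ reformulation of the $\ell_1$ block constraints discussed in the remark above (model, function, and variable names are ours; requires a Gurobi license):
\begin{verbatim}
import numpy as np
import gurobipy as gp
from gurobipy import GRB

def non_resolving_certificate(A, k, a, delta=1e-3, seed=0):
    """Return a nonzero z with Az = 0, z in I, <c,z> <= -delta,
    or None if the solver reports infeasibility (inconclusive)."""
    n = a * k
    c = np.random.default_rng(seed).standard_normal(n)
    m = gp.Model("resolvability")
    m.Params.OutputFlag = 0
    z = m.addVars(n, lb=-1, ub=1, vtype=GRB.INTEGER, name="z")
    w = m.addVars(n, lb=0, ub=1, name="w")   # w_j >= |z_j|
    for row in A:                            # A z = 0
        m.addConstr(gp.quicksum(row[j] * z[j] for j in range(n)) == 0)
    for i in range(k):                       # block constraints (the set I)
        idx = range(i * a, (i + 1) * a)
        m.addConstr(gp.quicksum(z[j] for j in idx) == 0)
        m.addConstr(gp.quicksum(w[j] for j in idx) <= 2)
        for j in idx:
            m.addConstr(z[j] <= w[j])
            m.addConstr(-z[j] <= w[j])
    m.addConstr(gp.quicksum(c[j] * z[j] for j in range(n)) <= -delta)
    m.optimize()
    if m.Status == GRB.OPTIMAL:
        return np.array([round(z[j].X) for j in range(n)])
    return None
\end{verbatim}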
\section{Computational Complexity Experiments}
\label{sec:complexity_experiments}
The theoretical framework and algorithms proposed in this paper provide a novel way of approaching resolvability on Hamming graphs. To show the computational feasibility and practicality of our methods, we compare the average run-time of both the ILP and Gr\"obner basis algorithms against the brute force approach for checking resolvability. Our experiments use Python 3.7.3 and SymPy version 1.1.1~\cite{SymPy}, and the commercial ILP solver \texttt{gurobi} ver.~7.5.2~\cite{gurobi}.
In~\cref{tab:k_a_pairs}, we present the average run-time and standard deviation of the algorithms on reference test sets for Hamming graphs of increasing sizes. \cref{fig:runtime} displays the mean run-times as a function of the graph size, and the best linear fit for each method. As seen in the table and figure, the brute force approach is faster on only the smallest Hamming graphs (with fewer than $1000$ nodes), whereas the ILP solution is exceptionally fast even as the Hamming graph grows to more than $6000$ nodes. For small problems, the time taken to solve the ILP is likely dominated by the overhead cost of using \texttt{CVX} to recast the ILP into standard form. The run-time results show a promising improvement in computational time over the brute force approach, which will only become more pronounced on massive Hamming graphs. Additionally, the brute force approach is infeasible on these larger graphs due to significant memory costs.
The ILP algorithm is exceptionally quick, beating all other methods for Hamming graphs with more than 1000 nodes, but it cannot guarantee that a set is resolving. The Gr\"obner basis algorithm by contrast is much slower on average but is a deterministic method of showing resolvability. ILP can be used to quickly determine possible resolving sets which are then verified by the Gr\"obner basis algorithm. In this way, the two methods are symbiotic and cover each other's weaknesses. We illustrate this in the next section.
\begin{table}
\centering
\tiny
\begin{tabular}{cS[table-format=4]S[table-format=1.2e-1]S[table-format=1.2e-1]S[table-format=1.2e-1]S[table-format=1.2e-1]S[table-format=1.2e-1]S[table-format=1.2e-1]}
\toprule
& & \multicolumn{2}{c}{Brute Force} & \multicolumn{2}{c}{Gr\"obner Basis} & \multicolumn{2}{c}{ILP} \\
\cmidrule(lr){3-4} \cmidrule(lr){5-6} \cmidrule(lr){7-8}
($k,a$) & {$a^k$} & {Mean} & {SD} & {Mean} & {SD} & {Mean} & {SD} \\
\midrule
(2,2) & 4 & 3.88e-05 & 1.51e-06 & 6.79e-03 & 1.06e-03 & 1.28e-01 & 3.53e-03 \\
(2,4) & 16 & 2.47e-04 & 6.83e-05 & 2.25e-02 & 2.59e-03 & 1.16e-01 & 4.84e-03 \\
(3,3) & 27 & 5.02e-04 & 2.45e-04 & 2.83e-02 & 7.92e-03 & 1.21e-01 & 8.12e-03 \\
(5,2) & 32 & 6.61e-04 & 3.29e-04 & 3.14e-02 & 5.27e-03 & 1.28e-01 & 3.91e-03 \\
(3,5) & 125 & 8.98e-03 & 5.38e-03 & 1.12e-01 & 2.91e-02 & 1.37e-01 & 7.02e-03 \\
(5,3) & 243 & 2.78e-02 & 1.96e-02 & 1.22e-01 & 7.88e-02 & 1.20e-01 & 8.12e-03 \\
(8,2) & 256 & 2.85e-02 & 2.21e-02 & 9.87e-02 & 1.96e-02 & 1.17e-01 & 1.59e-03 \\
(4,4) & 256 & 3.13e-02 & 1.97e-02 & 1.27e-01 & 3.90e-02 & 1.37e-01 & 9.58e-03 \\
(5,5) & 3125 & 5.19e+00 & 3.17e+00 & 4.00e+00 & 3.54e+00 & 1.35e-01 & 1.09e-02 \\
(12,2) & 4096 & 6.28e+00 & 5.34e+00 & 2.93e-01 & 7.24e-02 & 1.24e-01 & 2.39e-03 \\
(6,4) & 4096 & 7.78e+00 & 4.65e+00 & 7.73e-01 & 3.67e-01 & 1.52e-01 & 8.99e-03 \\
(8,3) & 6561 & 2.02e+01 & 1.40e+01 & 1.12e+01 & 1.47e+01 & 1.62e-01 & 1.41e-02 \\
\bottomrule
\end{tabular}
\caption{Time in seconds required to determine resolvability for each technique. Fifty resolving and fifty non-resolving sets, selected uniformly at random, were considered for each Hamming graph $\mathbb{H}_{k,a}$. Means and standard deviations consider five replicates per set.}
\label{tab:k_a_pairs}
\end{table}
\begin{figure}
\centering
\includegraphics[width=2.7in]{Runtime.pdf}
\caption{Data from~\cref{tab:k_a_pairs} with lines of best fit (on log-transformed data) for each method.}
\label{fig:runtime}
\end{figure}
\section{Low-dimensional Protein Representations}
\label{sec:protein_representation}
Symbolic information pervades modern data science. With the advent and popularization of high-throughput sequencing assays, this is particularly true in the field of computational biology, where large volumes of biological sequence data have become critical for studying and understanding the behavior of cells. Analysis of these sequences, however, presents significant challenges. One major issue is that many powerful analysis techniques deal with numeric vectors, not arbitrary symbols. As a result, biological sequence data is typically mapped to a real space before such methods are applied. Two of the most common mappings use K-mer count vectors~\cite{leslie2002spectrum} and one-hot encodings (also called binary vectors)~\cite{cai2003support}. K-mer count vectors represent symbolic sequences by their counts of each possible K-mer.
Resolving sets can be used to define low-dimensional mappings as well. To fix ideas we focus on octapeptides, that is, proteins composed of 8 amino acids. With a total of 20 possible amino acids (which we represent as {\ttfamily {\footnotesize a,r,n,d,c,q,e,g,h,i,l,k,m,f, p,s,t,w,y,v}}) and imposing the Hamming distance across these sequences, we have the Hamming graph $\mathbb{H}_{8,20}$. This graph is massive: it has $25.6$ billion vertices and roughly $1.9$ trillion edges, rendering most methods of discovering small resolving sets, including the ICH algorithm, useless. A constructive algorithm in~\cite{TilLla19} produced a resolving set of size 82 for $\mathbb{H}_{8,20}$, which we call $R$. However, it is not known whether $R$ contains a proper subset that is still resolving. Here, we address this problem applying the results of sections~\ref{sec:grobner} and \ref{sec:ILP}.
Starting with lower and upper bounds $L=1$ and $U=82$ respectively, we implement a binary search for $\beta(\mathbb{H}_{8,20})$. With $s=\lfloor\frac{L+U}{2}\rfloor$ as the current subset size to check, up to 1000 subsets of $R$ of size $s$ are selected at random. The ILP approach (\cref{sec:ILP}) then provides an efficient method for testing the feasibility problem outlined in \cref{thm:Az=0} for these subsets. If any subset passes this test, the upper bound is set to $s$. Otherwise, $s$ becomes the lower bound. This process is repeated until $L=(U-1)$. Following this procedure, we found the following set of size $77$:
\[
r:=\left\{
{
\scriptsize
\begin{tabular}{lllllll}
aaaraaaa, & arwaaaaa, & ccchhhhh, & ccchhhhi, & ccchhhia, & ccchhiaa, & ccchiaaa,\\
ccciaaaa, & cnsaaaaa, & dddeeeee, & dddeeeeg, & dddeeega, & dddeegaa, & dddegaaa,\\
dddgaaaa, & dhfaaaaa, & eagaaaaa, & eeefaaaa, & eeemfaaa, & eeemmfaa, & eeemmmfa,\\
eeemmmmf, & eeemmmmm, & fffaaaaa, & gggppppp, & gggpppps, & gggpppsa, & gggppsaa,\\
gggpsaaa, & gggsaaaa, & hhhttttt, & hhhttttw, & hhhtttwa, & hhhttwaa, & hhhtwaaa,\\
hhhwaaaa, & hpvaaaaa, & iiivaaaa, & iiiyvaaa, & iiiyyvaa, & iiiyyyva, & iiiyyyyv,\\
iiiyyyyy, & kkkaaaaa, & klqaaaaa, & lllaaaaa, & mkyaaaaa, & mmmaaaaa, & nnnccccc,\\
nnnccccq, & nnncccqa, & nnnccqaa, & nnncqaaa, & nnnqaaaa, & nstaaaaa, & pppaaaaa,\\
qpkaaaaa, & qqqkaaaa, & qqqlkaaa, & qqqllkaa, & qqqlllka, & qqqllllk, & qqqlllll,\\
qyeaaaaa, & rrrdaaaa, & rrrndaaa, & rrrnndaa, & rrrnnnda, & rrrnnnnd, & rrrnnnnn,\\
sisaaaaa, & svtaaaaa, & ttcaaaaa, & vfraaaaa, & wmpaaaaa, & wwdaaaaa, & yglaaaaa
\end{tabular}
}
\right\}.
\]
Since the ILP formulation does not guarantee that this set is resolving, we verified the result using a parallelized version of the Polynomial Roots Formulation (\cref{sec:grobner}) so that the Gr\"obner bases of multiple auxiliary polynomials (Eq.~\cref{def:f(z)}) may be determined simultaneously. Thus, we have found a set $r\subset R$ of size 77 that resolves $\mathbb{H}_{8,20}$; in particular, $\beta(\mathbb{H}_{8,20})\le77$, which improves the bound of~\cite{TilLla19}, and all $25.6$ billion octapeptides may be uniquely represented with only 77 dimensions. In contrast, a $2$-mer count vector representation would require 400 dimensions and a one-hot encoding 160 dimensions.
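For reference, a sketch of the binary search described above (illustrative only; \texttt{is\_resolving\_ilp} stands in for the ILP feasibility test of \cref{sec:ILP}):
\begin{verbatim}
import random

def shrink_resolving_set(R, is_resolving_ilp, trials=1000, seed=0):
    """Binary search for a small resolving subset of R."""
    rng = random.Random(seed)
    best, L, U = list(R), 1, len(R)
    while L < U - 1:
        s = (L + U) // 2
        hit = None
        for _ in range(trials):          # sample subsets of size s
            S = rng.sample(best, s)
            if is_resolving_ilp(S):
                hit = S
                break
        if hit is not None:
            best, U = hit, s             # some subset of size s resolves
        else:
            L = s                        # none of the sampled subsets did
    return best
\end{verbatim}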
\begin{remark}
We replicated the verification of $r$ as a resolving set of $H_{8,20}$ using our Polynomial Roots Formulation 10 times across 32 computer cores. Overall, a maximum of approximately 380 megabytes of memory per core (SD $\sim 0.5$ MB) and 6 hours and 20 minutes (SD $\sim142$ s) were required to demonstrate the resolvability of $r$. Memory usage was determined using the Slurm workload manager \verb|sacct| command and \verb|maxRSS| field, while time was measured using Python's \verb|time| module.
\end{remark}
\newpage
\chapter{Comments on the Measurement of Vermeer}
\label{chap:rant_on_energies}
The measurement of Ref.~\cite{vermeer_1988} has potential issues that make it difficult to properly assess its conclusions. Though it was ultimately included in our energy compilation, the following observations should be noted.
The study of Ref.~\cite{vermeer_1988} performed a measurement of $^{12}$C$(^{16}$O$, \alpha)^{24}$Mg at several different beam energies, using a silicon surface barrier detector to detect the heavy recoils and an MDM-2 spectrograph to detect the $\alpha$ particles. The energies reported in that study come from the focal plane detector of the MDM-2 spectrograph. However, the states used to calibrate this detector are not mentioned. The only mention of the energy calibration procedure is:
\begin{quote}
The centroids of clearly-resolved peaks were used for energy
calibration. Typically about 10 or 15 peaks were used to derive a fourth order polynomial for excitation energy as a function of channel number. The excitation energies used for calibration
purposes are those of ref. 13.
\end{quote}
Their \say{ref 13} is the compilation of Endt (Ref.~\cite{ENDT_1978}). We are left with the impression that a large number of calibration states were used, with different states being selected for each beam energy. What is unclear is whether these states were included in the reported energies. As was mentioned in Section \ref{sec:energy_level_update}, calibration states should never be considered independent measurements. In the absence of information on which states were used for calibration, it was assumed that the calibration states were properly excluded from the reported values, and as a consequence the measurement was included in the compilation of the present study. However, until the calibration states used in the study are clearly determined, the reported values should be viewed with some skepticism.
\chapter{INTRODUCTION}
\label{chap:astro}
The pioneering work of Burbidge, Burbidge, Fowler, and Hoyle \cite{b2fh} and independently Cameron \cite{cameron} established the field of nuclear astrophysics. These works came to the realization that the chemical elements we find in the Solar System are the remnants of nuclear burning happening inside of stars. While understanding the origin of the elements is still one of the primary questions of the field, this thesis will focus on another implication of stellar nucleosynthesis. Specifically: observed elemental abundances in the cosmos are the unique signatures of nuclear burning processes. In this way, nuclear physics becomes an additional quantitative tool to gain insight into astronomical observations. By studying nuclear reactions in the laboratory, we can answer questions about stellar phenomena that would otherwise be inaccessible to us. However, at the energies found in stellar plasmas the Coulomb barrier dominates, making direct study of these reactions exceptionally difficult. Thus, in order to fully understand stellar nucleosynthesis, a variety of experimental approaches must be used to construct a composite picture of these nuclear reactions.
The work presented in this thesis examines the use of nuclear transfer reactions to constrain the nuclear reactions responsible for destroying sodium and potassium in globular clusters. The present chapter will give an overview of the observational evidence for elemental abundance anomalies in globular clusters and discuss their importance for our theories of stellar evolution. Chapter~\ref{chap:reactions} will provide the necessary details for how nuclear reactions occur in stars and how we can better understand them using transfer reactions, Chapter~\ref{chap:nuclear_unc} will show how the uncertainties that naturally arise from nuclear physics experiments impact the astrophysical predictions, Chapter~\ref{chap:tunl} details the experimental methods used in this work, Chapter~\ref{chap:bay_dwba} presents novel Bayesian methods that were developed to quantify uncertainties from transfer reactions, and Chapter~\ref{chap:sodium} summarizes the analysis and results of the $^{23}$Na$(^3 \textnormal{He}, d)$ transfer reaction.
\section{Globular Cluster Abundance Anomalies}
\label{sec:abund_anomalies}
Of all the observed astronomical objects, globular clusters are perhaps the closest we can get to a laboratory-like environment in the galaxy. They are some of the brightest and oldest objects in our sky, which consist of hundreds of thousands of stars that are gravitationally bound within $\sim 100$ parsecs. These facts mean that we have a relatively easy to observe object that has a large, isolated population of stars. If our theories of stellar evolution and intra-cluster dynamics were complete, then any observed property of a cluster could be explained by just a few parameters \cite{gratton_2010}. At this time we have no such theory, and many properties of globular clusters remain uncertain.
Since the 1980s, the simple perspective on globular clusters mentioned above has shifted significantly. Prior to detailed spectroscopy of the stars within these clusters, it was thought that such objects constituted a \textit{single stellar population}. Under this assumption, each star in a globular cluster would form at the same time from gas of similar chemical composition \cite{kraft_1979}.
Operating under the assumption of a single stellar population, all of the observed properties of a globular cluster would arise from the starting composition and age of the cluster. If we look at the \textit{Hertzsprung-Russell diagram} (HR diagram) in Figure~\ref{fig:hr_diagram} and assume a single stellar population, we could deduce the current evolutionary stage of any star in the cluster as a function of its initial mass. Specifically, more massive stars burn through their nuclear fuel more quickly, and are therefore more advanced in their evolution at any particular moment in time. Stars formed at the same time, but with different initial masses, can be compared to a calculated isochrone, i.e., the curve through the HR diagram that shows stars of the same age but different initial mass, as seen in Figure~\ref{fig:hr_diagram}. The relatively good agreement between the isochrone and observations demonstrates that the concept of a single stellar population does hold some merit for globular clusters. Further conclusions can be drawn using similar methods, most notably that the age of a cluster can be used as a lower bound on the age of the universe \cite{Jimenez_1998}.
\begin{figure}
\centering
\includegraphics[width=.7\textwidth]{Chapter-1/figs/aa32843-18-fig19.pdf}
\caption{HR diagram for the globular cluster 47 Tuc. Plot is taken from Ref.~\cite{gaia_2018}. The red line is a calculated isochrone. The green and blue dots represent the inner and outer regions of the cluster.}
\label{fig:hr_diagram}
\end{figure}
Despite the success of the single population hypothesis,
advances in high resolution spectroscopy have proven it to be an inadequate theory. Evidence against this hypothesis started to accumulate with the observation of a significant enhancement of sodium in some members of the red giant branch (RGB) of the cluster M13 \cite{perterson_1980}. Continued observational work eventually confirmed that globular clusters, to varying degrees, show specific star-to-star correlations and anticorrelations between light elements \cite{kraft_1994}. In particular, the anticorrelation between sodium and oxygen has been observed in every globular cluster in which it has been sought \cite{gratton_2004}. An example of this anticorrelation is shown in Fig.~\ref{fig:na_o_tuc}. These Na-O anticorrelations were originally dubbed \textit{abundance anomalies}, since they could not be easily explained in the framework of a single stellar population. The enriched material could only come from stellar burning happening \textit{in-situ} or from some initial inhomogeneity in the cluster material.
\begin{figure}
\centering
\includegraphics[width=.7\textwidth]{Chapter-1/figs/47_Tuc_Na_O.pdf}
\caption{The observed Na-O anticorrelation for 13 stars on the RGB of 47 Tuc from Ref.~\cite{2014_Thygesen}. The red points are the observed abundances, while the blue-green contour shows the probability density inferred from these points using a bootstrap method.}
\label{fig:na_o_tuc}
\end{figure}
Whatever the astrophysical source of the observed anticorrelations, they are clear signatures of nuclear burning processes. They have been uniquely identified as being produced by hydrogen burning at elevated temperatures \cite{d_and_d_1989, langer_1993, kudryashov_1988}. Since stars on the RGB cannot produce these temperatures, the enriched material must have been produced at another site prior to their observation.
Continued advances in high resolution photometry led to the first unambiguous evidence of multiple stellar populations existing within globular clusters \cite{gratton_2012}. These measurements made it possible to resolve different tracks among the stars on the main sequence, with each track corresponding to a variation in helium content \cite{piotto_2007, villanova_2007}. Figure~\ref{fig:multi_ms}, taken from Ref.~\cite{milone_2012}, again shows 47 Tuc; using these high resolution photometric data, two distinct stellar populations can be identified. These observations unambiguously demonstrate that globular clusters are not simple stellar populations; instead, they possess multiple generations of stars, with the newest generation enriched in light elements by some unknown polluter from the previous generation \cite{gratton_2012, gratton_2019}.
These sets of observations have overturned the traditional view of globular clusters and opened up new avenues of research. By positively identifying the source of the polluted material, determining how this material is ejected back into the cluster, and characterizing the dynamics of the second generation of stars, we have the opportunity to significantly advance our understanding of the formation of some of the oldest objects in our galaxy.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{Chapter-1/figs/apj407585f35_hr.jpg}
\caption{Figure is taken from Ref.~\cite{milone_2012}. HR diagrams of 47 Tuc. The x-axis shows the color index in terms of the filters used on the Hubble space telescope \cite{filters_2012}, while the y-axis shows the apparent magnitude. The two colors distinguish the multiple populations present within the cluster.}
\label{fig:multi_ms}
\end{figure}
\section{Polluter Candidates and Nucleosynthesis}
As established in Section~\ref{sec:abund_anomalies}, the enrichment of sodium in second generation stars is undoubtedly from hydrogen burning at elevated temperatures. Unfortunately, the astrophysical environment that can provide these temperatures and eject the processed material back into the cluster is unknown. Proposed environments include intermediate-mass and massive asymptotic giant branch (AGB) stars ($5 \textnormal{-} 9 \textnormal{M}_{\odot}$) \cite{Ventura_2001, dercole_2010}, fast rotating massive stars (FRMS, $ \geq 25 \textnormal{M}_{\odot}$) \cite{decressin_2007}, and very massive stars (VMS, $ \geq 10^4 \textnormal{M}_{\odot}$) \cite{denissenkov_2014, denissenkov_2015}.
While all of these environments can replicate the Na-O anti-correlation, they share a common problem: they overproduce He relative to Na \cite{problems_with_hbb, renzini_2015}.
By examining different burning conditions, i.e., temperature and time, while holding density and metallicity constant, Ref.~\cite{prantzos_2017} found that all of the observed correlations and anti-correlations in NGC 2808, including the Na-O anti-correlation, were only reproducible at temperatures of $\textnormal{T} \sim 70-80$ MK. These results are notably independent of the complications present in more advanced models, such as convection and mixing \cite{Ventura_2005}. As such, their results provide an excellent starting point for nuclear physicists looking to identify key reactions to study. In this narrow temperature regime, any oxygen destroyed via $^{17}$O$(p, \alpha)^{14}$N or $^{18}$O$(p, \alpha)^{15}$N will become trapped in the main CNO cycle, thereby causing the overall oxygen abundance to drop. Meanwhile, proton captures on the stable isotopes of Ne lead to a series of reactions called the Ne-Na cycle. This series of reactions and decays is given by:
\begin{equation}
^{20}\textnormal{Ne}(p, \gamma)^{21}\textnormal{Na}(\beta^+)^{21}\textnormal{Ne}(p, \gamma)^{22}\textnormal{Na}(\beta^+)^{22}\textnormal{Ne}(p, \gamma)^{23}\textnormal{Na}(p, \alpha)^{20} \textnormal{Ne},
\end{equation}
and is presented in Figure~\ref{fig:ne_na_cycle}. The amount of material trapped in this cycle depends sensitively on the strength of $^{23}\textnormal{Na}(p, \alpha)^{20} \textnormal{Ne}$ relative to that of $^{23}\textnormal{Na}(p, \gamma)^{24} \textnormal{Mg}$. H burning at these temperatures makes it impossible for material that has been converted into $^{24}$\textnormal{Mg} to reenter the Ne-Na cycle. These two processes, the destruction of oxygen and the conversion of Ne into Na, happen concurrently, causing the observed anticorrelation; note that material does not flow between these two mass regions because the $^{19}$F$(p, \alpha)^{16}$O reaction traps material in the CNO region.
\begin{figure}
\centering
\includegraphics[width=.7\textwidth]{Chapter-1/figs/23Na_network1.pdf}
\caption{Illustration of the Ne-Na cycle. The $(p,\gamma)$, $\beta^{+}$, and $(p, \alpha)$ reactions and decays are shown in green, purple, and orange, respectively. Stable isotopes are shaded in grey.}
\label{fig:ne_na_cycle}
\end{figure}
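To make the flow of material around this cycle concrete, the short Python sketch below integrates a toy version of the network. The rate constants are placeholders chosen only for illustration (they are not evaluated thermonuclear rates), the proton abundance is folded into the rates, and the $\beta^+$ decays are treated as instantaneous, so $^{21}$Na and $^{22}$Na are lumped into their daughters:
\begin{verbatim}
# Toy Ne-Na cycle network; rate constants are illustrative placeholders.
import numpy as np
from scipy.integrate import solve_ivp

# Effective rates (arbitrary units): 20Ne(p,g), 21Ne(p,g), 22Ne(p,g),
# 23Na(p,a), 23Na(p,g); beta decays are lumped into the (p,g) steps.
r20, r21, r22, rpa, rpg = 1.0, 5.0, 3.0, 10.0, 0.5

def rhs(t, y):
    ne20, ne21, ne22, na23, mg24 = y
    return [-r20*ne20 + rpa*na23,          # 20Ne: destroyed, fed by (p,a)
             r20*ne20 - r21*ne21,          # 21Ne
             r21*ne21 - r22*ne22,          # 22Ne
             r22*ne22 - (rpa + rpg)*na23,  # 23Na
             rpg*na23]                     # 24Mg: permanent leak

sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0, 0.0, 0.0, 0.0])
print(sol.y[:, -1])
\end{verbatim}
Even in this toy form, the qualitative behavior described above is visible: the abundance ratio inside the cycle is set by the competition between the $(p, \alpha)$ and $(p, \gamma)$ channels on $^{23}$Na, and material slowly leaks into $^{24}$Mg at a rate controlled by the $(p, \gamma)$ strength.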
\section{NGC 2419 and K Enrichment}
The Na-O anti-correlation, as discussed above, is a common feature of every observed globular cluster, but there is a rich variety of phenomena specific to individual or a limited number of clusters. One of the most recently observed, and most puzzling, is the evidence of K enrichment in NGC 2419 \cite{cohen_2011, cohen_2012}. The full K-Mg anti-correlation was established in Ref.~\cite{mucciarelli_2012}, and Figure~\ref{fig:mg_k_abund} shows the potassium versus magnesium abundances from that study.
\begin{figure}
\centering
\hspace*{-1.5cm}
\includegraphics[width=.7\textwidth]{Chapter-1/figs/Abundances_plot.png}
\caption{Plot of the potassium versus magnesium abundances observed in NGC 2419 \cite{mucciarelli_2012}. Approximately $40 \%$ of the stars within the cluster are shown to have a significant enrichment of potassium correlated with a depletion of magnesium.}
\label{fig:mg_k_abund}
\end{figure}
So far, observations point towards NGC 2419 being a unique case. A small portion of the stars observed in NGC 2808 show a depletion of Mg correlated with an enhancement of K \cite{mucciarelli_2015}, but to date there have been no other clear determinations of a K-Mg anti-correlation in any cluster \cite{mucciarelli_2017}. Interestingly, while the degree to which K and Mg are anti-correlated in NGC 2419 is unprecedented, similar trends have been reported in many field stars \cite{kemp_2018}. These observations raise important questions with regard to galactic chemical evolution, which our current stellar models are not able to match \cite{timmes_1995, romano_2010}, and could point to some common mechanism that is not isolated to clusters. However, even if the K-Mg anticorrelation is a cluster-specific phenomenon, its study shares the same promise as the study of other abundance anomalies: a greater understanding of both nucleosynthesis in globular clusters and their multiple stellar populations.
Regardless of how widespread this chemical signature is, its origin is unknown. Without appealing to any specific astrophysical environment, it is clear that in order for K to be synthesized, H-burning at temperatures above those required for Na enrichment must be present. At these elevated temperatures a series of proton captures leads to the following chain of reactions, shown in Fig.~\ref{fig:39K_net}:
\begin{equation}
\label{eq:k_reacs}
^{36}\textnormal{Ar}(p, \gamma)^{37}\textnormal{K}(\beta^+)^{37}\textnormal{Ar}(p, \gamma)^{38}\textnormal{K}(\beta^+)^{38}\textnormal{Ar}(p, \gamma)^{39}\textnormal{K}.
\end{equation}
Unlike the sodium production case discussed before, this chain of reactions does not form a cycle, due to the negligible contribution of $^{39} \textnormal{K} (p, \alpha) ^{36} \textnormal{Ar}$. It is also the case that $^{37} \textnormal{Ar}$ is unstable to electron capture, with a half-life of $35$ days. This half-life, compared with the relative strength of $^{37}\textnormal{Ar}(p, \gamma)^{38}\textnormal{K}$, means that the decay has little effect on the synthesis of $^{39} \textnormal{K}$.
\begin{figure}
\centering
\includegraphics[width=.7\textwidth]{Chapter-1/figs/39K_network_new.pdf}
\caption{A pictorial representation of the reactions responsible for the synthesis of $^{39}$K due to elevated H-burning. $(p, \gamma)$, $\beta^+$, and electron capture $(e^-)$ are shown in green, purple, and pink, respectively. Although electron capture on $^{37}$Ar is possible, its rate is sufficiently low that its contribution to the final abundance of $^{39}$K is negligible.}
\label{fig:39K_net}
\end{figure}
So far, the temperatures required for the sequence of reactions in Eq.~\ref{eq:k_reacs} have produced only a handful of potential polluter candidates. The scenario of hot bottom burning in AGB and super-AGB stars ($5 \textnormal{-} 9 \textnormal{M}_{\odot}$) was proposed in Ref.~\cite{ventura_2012}. The authors noted that if the reaction rate of $^{38}\textnormal{Ar}(p, \gamma)^{39}\textnormal{K}$ were a factor of 100 larger, then the observed levels of K could be explained by this scenario. An interesting idea was put forth in Ref.~\cite{carretta_2013}. The authors of that study hypothesize that, because of the unique nature of NGC 2419, a similarly rare event could be a good candidate. They suggest a pair-instability supernova ($140 \textnormal{-} 260 \textnormal{M}_{\odot}$) as a sufficiently rare phenomenon capable of producing K. However, this scenario is expected to show an odd-even effect \cite{woosley_2002}, i.e., nuclei with even $Z$ are expected in much greater abundance than those with odd $Z$. This is not the case in NGC 2419, which presents a major complication for this scenario.
Instead of approaching this problem by selecting a potential polluter, developing and running a stellar model, and then comparing to observation, Iliadis et al. \cite{iliadis_2016} remained agnostic towards the source of the pollution. Their study used a Monte Carlo method to sample temperature, density, the amount of hydrogen burned, and each of the thermonuclear reaction rates (taken from Ref.~\cite{sallaska_2013}). For each of these samples, a nuclear reaction network calculation was carried out. The output from each network was then mixed with some fractional amount, $f$, of unprocessed material according to:
\begin{equation}
X_{\textnormal{mix}} = \frac{X_{\textnormal{proc}} + f X_{\textnormal{pris}}}{1+f},
\label{eq:mixing}
\end{equation}
where $X$ is the mass fraction of the material, \say{proc} refers to the processed material, and \say{pris} to the pristine cluster material. Finally, after the material was mixed using fixed values of $f$, it was compared to the observations from Refs.~\cite{mucciarelli_2012, cohen_2012}. Samples that were able to reproduce all the observed correlations of NGC 2419 were kept. Figure~\ref{fig:trajectory} shows the accepted solutions plotted as a function of their temperature and density, while the lines show the conditions present in some of the proposed polluters.
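As a brief illustration of this mixing step, the sketch below applies Eq.~\ref{eq:mixing} to a single hypothetical network output; the mass fractions are invented for the example and are not values from Ref.~\cite{iliadis_2016}:
\begin{verbatim}
# Sketch of the mixing step, Eq. (mixing); all numbers are illustrative.
import numpy as np

def mix(x_proc, x_pris, f):
    """Mass fraction after diluting processed with pristine material."""
    return (x_proc + f * x_pris) / (1.0 + f)

x_proc = np.array([1.0e-5, 4.0e-6])  # e.g., [K, Mg] from one network sample
x_pris = np.array([3.0e-6, 6.0e-4])  # pristine cluster composition
for f in (0.1, 1.0, 10.0):
    print(f, mix(x_proc, x_pris, f))
\end{verbatim}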
\begin{figure}
\centering
\includegraphics{Chapter-1/figs/iliadis_trajectory.jpg}
\caption{Monte-Carlo samples capable of reproducing the elemental abundances observed in NGC 2419 plotted as a function of their temperature and density. Temperature and density conditions of some of the proposed polluter environments are plotted and labelled. Plot is taken from \cite{iliadis_2016}. }
\label{fig:trajectory}
\end{figure}
A further analysis in \cite{dermigny_2017} more closely examined the effect individual reaction rates had on the range of acceptable temperatures and densities. This study clearly defined the role nuclear physicists could play in resolving the anomalous chemical signature of NGC 2419. The authors concluded that only a handful of rates played a significant role in increasing the uncertainties in the astrophysical parameters. They are $^{30} \textnormal{Si} (p, \gamma)^{31} \textnormal{P}$, $^{37} \textnormal{Ar} (p, \gamma)^{38} \textnormal{K}$, $^{38} \textnormal{Ar} (p, \gamma)^{39} \textnormal{K}$, and $^{39} \textnormal{K} (p, \gamma)^{40} \textnormal{Ca}$.
\section{Summary}
The above discussion highlights the interesting questions posed by the observation of multiple, chemically distinct stellar populations in globular clusters. Without precise knowledge of the nuclear reactions responsible for the enriched material, it is impossible to pin down the stellar environment that produced it. This work is concerned with what appear to be two distinct processes. In the case of the Na-O anticorrelation, an unknown pollution mechanism appears to be common to nearly every globular cluster; it is thus a fundamental element of our understanding of these objects. The K-Mg anti-correlation is associated with a higher temperature burning environment, but is still capable of informing our knowledge of nucleosynthesis within these unique objects. Critical to both of these scenarios is a better understanding of how quickly these elements are destroyed through the $^{23} \textnormal{Na} (p, \gamma)^{24} \textnormal{Mg}$ and $^{39} \textnormal{K} (p, \gamma)^{40} \textnormal{Ca}$ reactions, respectively.
\chapter{Nuclear Reactions and Astrophysics}
\label{chap:reactions}
\section{Introduction}
This chapter draws the link between the thermonuclear reactions occurring in stellar interiors and the nuclear properties we can measure in the lab. Special emphasis is placed on the theory underpinning single-nucleon transfer reactions. It is shown that these reactions provide single-particle information essential to constraining the stellar reaction rate.
\section{Thermonuclear Reaction Rates}
For the majority of a star's life, it will steadily burn its nuclear fuel in order to balance the gravitational attraction of the stellar gas. This state of hydrostatic equilibrium is reached as the infalling gas increases in temperature and density to the point where nuclear fusion reactions begin to occur \cite{cauldrons_in_the_cosmos}. The probability of fusion is determined by the nuclear cross section, which is itself a function of energy. Thus, in a thermal environment where particles have a distribution of energies, the rate of fusion, called the \textit{thermonuclear reaction rate}, requires folding together two quantities: the energy distribution of the reacting particles and the nuclear cross section. Assuming the stellar interior is in thermal equilibrium, the particles will have kinetic energies dictated by the Maxwell-Boltzmann distribution, and the stellar reaction rate between nuclei $a$ and $A$, denoted $\langle \sigma v \rangle_{aA}$, is given by \cite{iliadis_book}:
\begin{equation}
\label{eq:reaction_rate}
\langle \sigma v \rangle_{aA} = \bigg(\frac{8}{\pi \mu_{aA}}\bigg)^{1/2} \frac{1}{(kT)^{3/2}} \int_0^{\infty} E \sigma(E) e^{-E/kT} dE.
\end{equation}
This equation is expressed in the center of mass frame where $\mu_{aA}$ is the reduced mass of particles $a$ and $A$, $E$ is their center of mass energy, $k$ is Boltzmann's constant, $T$ is the temperature of the stellar plasma, and $\sigma(E)$ is the nuclear cross section.
Because the integral in Eq.~\ref{eq:reaction_rate} runs from $E = 0 \rightarrow \infty$, it is not immediately clear which energy range is relevant for a specific temperature. The peak of the Maxwell-Boltzmann distribution occurs at $E = kT$. For the temperatures of $70 \text{-} 80$ MK relevant to globular cluster nucleosynthesis, this peak occurs at $E \approx 6 \text{-} 7$ keV. However, at these low energies the Coulomb repulsion between the positively charged ions severely inhibits the reaction rate at the peak of the Maxwell-Boltzmann distribution. Classically, the reaction could only proceed once a particle has enough energy to overcome the Coulomb barrier, but quantum mechanics gives a finite probability for the particle to tunnel through this barrier. This probability can be approximated for zero orbital angular momentum, i.e., $\ell = 0$, tunneling as:
\begin{equation}
\label{eq:tunnel_prob}
P = e^{-2 \pi \eta} ,
\end{equation}
%
where $P$ is called the Gamow factor and $\eta$ is the Sommerfeld parameter, which is defined as:
\begin{equation}
\label{eq:sommerfeld}
\eta = \frac{1}{\hbar}\sqrt{\frac{\mu_{aA}}{2 E}} Z_a Z_A e^2.
\end{equation}
$Z_a$ and $Z_A$ are the atomic numbers of $a$ and $A$, respectively, while $e$ is the elementary charge. Multiplying the Gamow and the Boltzmann factors together gives a peaked probability distribution that approximates the energy range in which the majority of the nuclear reactions will occur. This peak is called the Gamow peak, and an example for $^{23}$Na + $p$ at $75$ MK is shown in Fig.~\ref{fig:gam_peak}. The center of this peak is around $\sim 100$ keV, showing that the effect of the Coulomb barrier is to sample from the high energy tail of the Maxwell-Boltzmann distribution.
\begin{figure}
\centering
\includegraphics[width=.8\textwidth]{Chapter-2/figures/Na_pg_gamow.pdf}
\caption{The Gamow peak for the $^{23}$Na + $p$ reaction at $75$ MK.}
\label{fig:gam_peak}
\end{figure}
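The Gamow peak in Fig.~\ref{fig:gam_peak} is straightforward to reproduce numerically. The sketch below multiplies the Maxwell-Boltzmann factor of Eq.~\ref{eq:reaction_rate} by the Gamow factor of Eqs.~\ref{eq:tunnel_prob} and \ref{eq:sommerfeld} for $^{23}$Na + $p$ at $75$ MK (constants rounded; the peak location, not the absolute normalization, is the point of the exercise):
\begin{verbatim}
# Locate the Gamow peak for 23Na + p at T = 75 MK.
import numpy as np

ALPHA = 1.0 / 137.036               # fine-structure constant
KB = 8.617333e-2                    # Boltzmann constant in keV/MK
kT = KB * 75.0                      # ~6.5 keV
Z1, Z2 = 11, 1                      # 23Na + p
mu_c2 = (23.0 / 24.0) * 931494.0    # reduced mass energy in keV

E = np.linspace(1.0, 400.0, 4000)   # center-of-mass energy in keV
two_pi_eta = 2.0 * np.pi * ALPHA * Z1 * Z2 * np.sqrt(mu_c2 / (2.0 * E))
integrand = np.exp(-E / kT - two_pi_eta)   # Boltzmann x Gamow factor

print(E[np.argmax(integrand)])      # ~106 keV, cf. Fig. (gam_peak)
\end{verbatim}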
At the low stellar burning temperatures under consideration in this work, the behavior of $\sigma(E)$ within this narrow energy window is the dominant factor for calculating the thermonuclear reaction rate.
\section{Resonant Reactions}
\label{sec:resonances}
At the relatively low center-of-mass energies where nuclear reactions in stellar interiors occur, the primary contributions to $\sigma(E)$ will come from direct radiative capture and resonant capture. The former plays a minor role in the $^{23}$Na$(p, \gamma)$ rate, but is not studied in this work. The latter is the primary mechanism by which both $^{23}$Na$(p, \gamma)$ and $^{39}$K$(p, \gamma)$ proceed.
Resonant reactions involve the formation of a compound nucleus. When the energy of the reacting particles is such that the center-of-mass energy matches an excited state of the compound system, there is an enhancement of the reaction cross section. This resonance energy is related to the excited state energy via:
\begin{equation}
\label{eq:resonance_energy}
E_r = E_x - Q,
\end{equation}
where $E_r$ is the resonance energy, $E_x$ is the excited state energy, and $Q$ is the reaction Q-value for the formation of the compound nucleus. Once the compound nucleus is formed, there is a sufficient period of time for the nucleons to rearrange themselves and effectively lose any information about how the compound nucleus was formed \cite{blatt_1991}. This means that the de-excitation of the compound system will be independent of the manner in which it was formed, a theory first proposed by Bohr \cite{bohr_1936}. Under these assumptions the nuclear cross section for a single isolated resonance, called the Breit-Wigner formula \cite{breit_1936}, can be written as:
\begin{equation}
\label{eq:resonance}
\sigma(E)_{\textnormal{BW}} = \frac{\lambda^2}{4 \pi}\frac{(2J+1)}{(2j_a+1)(2j_A+1)}\frac{\Gamma_a \Gamma_b}{(E-E_r)^2 + \Gamma^2/4},
\end{equation}
where $\lambda$ is the de Broglie wavelength of the system at the center-of-mass energy $E$, $J$ is the spin of the resonance, $j_a$ is the spin of the incident particle, $j_A$ is the spin of the target, and $E_r$ is the energy of the resonance.
This formula, consistent with the assumptions that have just been laid out, depends only on the resonance energy ($E_r$), the level widths, and the spins of the particles involved. Of particular importance are the partial widths, $\Gamma_a$ and $\Gamma_b$, and the total width $\Gamma$. These widths are expressed in units of energy and, using the uncertainty principle, can be related to the lifetime, $\tau$, of the resonance:
\begin{equation}
\label{eq:unc_principle}
\Gamma = \frac{\hbar}{\tau}.
\end{equation}
Additionally, $\Gamma$ is equal to the sum of all the energetically allowed decay channels' partial widths:
\begin{equation}
\Gamma = \sum_c \Gamma_c,
\end{equation}
where the index $c$ runs over the open channels. These widths are in general energy-dependent quantities. This complication gives two distinct cases: \textit{narrow} resonances, where the widths do not vary appreciably over the total width of the resonance, and \textit{broad} resonances, where this does not hold.
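For orientation, the sketch below evaluates Eq.~\ref{eq:resonance} for a hypothetical narrow resonance; the spins, widths, and energies are invented for the example and do not correspond to a real level:
\begin{verbatim}
# Breit-Wigner cross section, Eq. (resonance); inputs are hypothetical.
import numpy as np

HBARC = 197.327  # MeV fm

def sigma_bw(E, Er, J, ja, jA, Ga, Gb, Gtot, mu_c2):
    """Cross section in fm^2; all energies and widths in MeV (c.m.)."""
    lam2 = (2.0 * np.pi * HBARC)**2 / (2.0 * mu_c2 * E)  # wavelength^2
    g = (2*J + 1) / ((2*ja + 1) * (2*jA + 1))            # spin factor
    return lam2/(4.0*np.pi) * g * Ga*Gb / ((E - Er)**2 + Gtot**2/4.0)

E = np.linspace(0.4999, 0.5001, 2001)
sig = sigma_bw(E, Er=0.5, J=1.0, ja=0.5, jA=1.5,
               Ga=1.0e-6, Gb=1.0e-7, Gtot=1.1e-6, mu_c2=892.7)
print(sig.max())   # sharply peaked at E = Er
\end{verbatim}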
It is now possible to calculate the astrophysical rate in the case of an isolated narrow resonance. Plugging the expression for the cross section, Eq.~\ref{eq:resonance}, into that of the reaction rate, Eq.~\ref{eq:reaction_rate}, yields:
\begin{equation}
\label{eq:rate_and_bw}
\langle \sigma v \rangle_{aA} = \bigg(\frac{8}{\pi \mu_{aA}}\bigg)^{1/2} \frac{1}{(kT)^{3/2}} \int_0^{\infty} \frac{\lambda^2}{4 \pi}\frac{(2J+1)}{(2j_a+1)(2j_A+1)}\frac{\Gamma_a \Gamma_b}{(E-E_r)^2 + \Gamma^2/4} E e^{-E/kT} dE.
\end{equation}
The simplification takes the partial widths and Maxwell-Boltzmann distribution to be constant over the width of the resonance. Pulling these out of the integral leaves the expression:
\begin{equation}
\label{eq:narrow_integral}
\int_0^{\infty} \frac{dE}{(E-E_r)^2 + \Gamma^2/4}.
\end{equation}
%
To explicitly show one more assumption, this integral can be evaluated with a trigonometric substitution and reduces to:
%
\begin{equation}
\label{eq:arctan_nonsense}
\frac{2}{\Gamma} \arctan \bigg(\frac{E-E_r}{\Gamma/2}\bigg) \bigg|^{\infty}_{0} ,
\end{equation}
%
the upper limit can readily be evaluated and comes out to $\frac{\pi}{2}$. The lower one invokes another property of narrow resonances, namely $E_r \gg \Gamma$, which effectively makes the lower bound $-\infty$, giving $-\frac{\pi}{2}$. Thus, the integral yields $\frac{2 \pi}{\Gamma}$. Combining these factors gives:
%
\begin{equation}
\label{eq:narrow_rate}
\langle \sigma v\rangle_{aA} = \bigg(\frac{2 \pi}{\mu_{aA} kT} \bigg)^{3/2} \hbar^2 e^{-E_r/kT} \frac{(2J+1)}{(2j_a+1)(2j_A+1)} \frac{\Gamma_a \Gamma_b}{\Gamma}.
\end{equation}
%
The product of the spin multiplicities and widths is frequently denoted $\omega \gamma$ and is referred to as the resonance strength because it is proportional to the area under the resonance cross section \cite{iliadis_book}. Using this definition, the above becomes:
%
\begin{equation}
\label{eq:narrow_rate_with_resonance_strength}
\langle \sigma v \rangle_{aA} = \bigg(\frac{2 \pi}{\mu_{aA} kT} \bigg)^{3/2} \hbar^2 e^{-E_r/kT} \omega \gamma.
\end{equation}
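In practical units, Eq.~\ref{eq:narrow_rate_with_resonance_strength} is often quoted as $N_A \langle \sigma v \rangle = 1.5399 \times 10^{11} \, (\mu T_9)^{-3/2} \, \omega\gamma \, e^{-11.605 E_r / T_9}$ cm$^3$\,mol$^{-1}$\,s$^{-1}$, with $E_r$ and $\omega\gamma$ in MeV (see, e.g., Ref.~\cite{iliadis_book}). The sketch below evaluates this form for an invented resonance; the numbers are placeholders, not evaluated parameters:
\begin{verbatim}
# Narrow-resonance rate in practical units; resonance is hypothetical.
import numpy as np

def narrow_rate(T9, mu, Er, wg):
    """N_A <sigma v> in cm^3 mol^-1 s^-1; Er and wg in MeV."""
    return 1.5399e11 / (mu * T9)**1.5 * wg * np.exp(-11.605 * Er / T9)

mu = 23.0 / 24.0   # reduced mass number for 23Na + p
for T9 in (0.070, 0.075, 0.080):
    print(T9, narrow_rate(T9, mu, Er=0.140, wg=1.0e-9))
\end{verbatim}
The exponential sensitivity to $E_r/T_9$ is apparent over even this narrow temperature range, which is why precise resonance energies matter so much.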
Although the experimental methods in this thesis will not focus on the direct determination of the resonance strength, the results presented can be compared to such measurements. It is therefore enlightening to discuss how such measurements are performed. From Eq.~\ref{eq:narrow_integral}, we can see that any process that integrates over a narrow resonance will be proportional to $\omega \gamma$. In the lab this energy variation comes not from the random thermal motion of particles, but from the energy loss of a particle beam traveling through a target material with stopping power $\epsilon(E)$. In such a case, the number of nuclear reactions that occur per beam particle, i.e., the yield, $Y$, of the reaction, will be:
%
\begin{equation}
\label{eq:resonance_yield}
Y = \frac{\lambda_r^2}{2} \frac{\omega \gamma}{\epsilon_r},
\end{equation}
%
where the subscript $r$ means the quantities are evaluated at the resonance energy. This formula only holds if the target can be considered infinitely thick, meaning that the target thickness in energy units is much greater than the total width, i.e., $\Delta E \gg \Gamma$ \cite{iliadis_book}.
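Inverting Eq.~\ref{eq:resonance_yield} shows how a measured thick-target yield translates into a resonance strength; the sketch below does this for invented inputs (the yield, stopping power, and resonance energy are placeholders):
\begin{verbatim}
# omega*gamma from a thick-target yield, inverted Eq. (resonance_yield).
from math import pi, sqrt

HBARC = 197.327  # MeV fm

def omega_gamma(Y, eps_r, Er, mu_c2):
    """Strength in eV; Y in reactions per incident particle, eps_r in
    eV cm^2/atom, Er and mu_c2 in MeV (c.m.)."""
    lam_r = 2.0 * pi * HBARC / sqrt(2.0 * mu_c2 * Er)  # wavelength in fm
    return 2.0 * eps_r * Y / (lam_r * 1.0e-13)**2      # fm -> cm

print(omega_gamma(Y=1.0e-10, eps_r=1.0e-14, Er=0.5, mu_c2=892.7))
\end{verbatim}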
\section{Connection to Nuclear Structure}
Resonances as discussed above make no assumptions about the detailed structure of the compound system. In cases where the resonance strength can be determined directly, this is a very satisfactory state of affairs, where almost no knowledge of the nuclear force beyond its short-ranged and strong nature has to be assumed. However, if the resonance strength cannot be determined directly, as is often true for the low energy resonances of interest to astrophysics, indirect determinations of the resonance parameters have to be made. Because of the relative simplicity of Eq.~\ref{eq:resonance}, it can be guessed that any information about the underlying nuclear structure must be contained within the partial widths. As will be shown, these particle partial widths can be expressed in terms of single-particle states within the shell model. Historically, this connection between the resonance partial widths and single-particle quantities related to the shell model is quite remarkable, since the proposal and success of the Breit-Wigner formula \cite{breit_1936} predated the evidence for the nuclear shell model \cite{mayer_1948} by more than a decade.
As an example, let us turn to the proton partial width, $\Gamma_p$, which, despite being expressed in energy units, can be thought of as being proportional to the probability of a proton being emitted from the compound nucleus. In order for this process to happen, three things must occur: the proton must overcome the Coulomb barrier to arrive at the nuclear surface; at the surface the proton must occupy a single-particle state that has the same quantum numbers as the corresponding resonance; and this occupancy must be weighted according to how much the mean-field effect of the other nuclear interactions fragments the total single-particle strength of a proton shell among its constituent states. Thus, $\Gamma_p$ can be written:
\begin{equation}
\label{eq:proton_partial_width}
\Gamma_{p} = 2\frac{\hbar^2}{\mu_{pA}R^2} C^2S_{\ell}P_{\ell} \theta_{\textnormal{sp}}^2,
\end{equation}
where $C^2$ is the isospin Clebsch-Gordan coefficient for the $p + A$ system,
$S$ is the spectroscopic factor of the single particle state, $P_{\ell}$ is the penetration factor, and $\theta_{\textnormal{sp}}$ is the single particle reduced width \cite{ILIADIS_1997}. Additionally, there is the parameter $R$, which is the channel radius. Matching these terms with the above probabilities: $P_{\ell}$ is the probability of the proton tunneling through the coulomb barrier, $\theta_{sp}^2$ is the probability of the proton being found at the nuclear surface, and $C^2S$ is the weight for the single-particle state.
\subsection{Calculation of Partial Widths}
\label{sec:calc_partial_widths}
Of the terms in Eq.~\ref{eq:proton_partial_width}, $P_{\ell}$ and $\theta_{sp}$ stand separately from $C^2S$ because they can be calculated accurately theoretically. $P_{\ell}$ is calculated from:
\begin{equation}
\label{eq:penetrability}
P_{\ell} = R\bigg( \frac{k}{F_{\ell}^2 + G_{\ell}^2} \bigg),
\end{equation}
where $F_{\ell}$ and $G_{\ell}$ are the regular and irregular Coulomb functions, respectively \cite{Abramowitz_1974}, $k$ is the wave number of the particle, and $R$ is the channel radius \cite{iliadis_book}. $\theta_{sp}^2$ is the dimensionless single-particle reduced width, defined as:
\begin{equation}
\label{eq:dimensionless_reduced_width}
\theta_{sp}^2 = \frac{R}{2} \big| u_{\textnormal{sp}}(R) \big|^2.
\end{equation}
Again $R$ is the channel radius, while $u_{sp}$ is the radial wave function for a single particle in a nuclear potential. The channel radius is defined as:
\begin{equation}
\label{eq:channel_radius}
R = r_0 (A_t^{1/3} + A_p^{1/3}),
\end{equation}
where $A_t$ and $A_p$ are the atomic mass numbers for the target and projectile, respectively. For this work $r_0 = 1.25$ fm.
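Eq.~\ref{eq:penetrability} can be evaluated directly with a library that provides Coulomb wave functions; the sketch below uses \texttt{mpmath} for $^{23}$Na + $p$ at an illustrative $E_{c.m.} = 0.5$ MeV and $\ell = 0$:
\begin{verbatim}
# Penetration factor of Eq. (penetrability) via mpmath Coulomb functions.
import numpy as np
from mpmath import coulombf, coulombg

HBARC = 197.327        # MeV fm
ALPHA = 1.0 / 137.036  # fine-structure constant

def penetrability(Ecm, mu_c2, Z1, Z2, l, R):
    """P_l = R*k / (F_l^2 + G_l^2) at the channel radius R (fm)."""
    k = np.sqrt(2.0 * mu_c2 * Ecm) / HBARC                 # fm^-1
    eta = ALPHA * Z1 * Z2 * np.sqrt(mu_c2 / (2.0 * Ecm))   # Sommerfeld
    rho = k * R
    F = float(coulombf(l, eta, rho))
    G = float(coulombg(l, eta, rho))
    return rho / (F**2 + G**2)

R = 1.25 * (23.0**(1/3) + 1.0)   # Eq. (channel_radius) with r0 = 1.25 fm
print(penetrability(0.5, 892.7, 11, 1, 0, R))
\end{verbatim}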
While it is possible to calculate the above factors individually, they, as well as the constant factors, can be absorbed into a single term, $\Gamma_{\textnormal{sp}}$. Eq.~\ref{eq:proton_partial_width} is now written as:
\begin{equation}
\Gamma_p = C^2S \Gamma_{\textnormal{sp}}.
\end{equation}
The benefit of such an equation is that it can be calculated numerically by varying the parameters of a single-particle potential to produce a resonance at the observed energy. The width of this single-particle resonance corresponds to $\Gamma_{\textnormal{sp}}$. One such method is utilized by the code \texttt{BIND}, which is described in detail in Ref.~\cite{ILIADIS_1997}. \texttt{BIND} finds $\Gamma_{\textnormal{sp}}$ by solving:
\begin{equation}
\label{eq:bind}
\frac{2}{\Gamma_{\textnormal{sp}}} \approx \bigg( \frac{d \delta}{d E} \bigg)_{\pi/2} = \frac{2 \mu_{aA}}{\hbar^2 k} \bigg( \int_0^{R_{max}} |u(r)|^2 dr + \frac{G_{\ell}(R_{max})}{2k} \frac{d}{dk} \bigg[ \frac{d G_{\ell}}{dr} \bigg | _{r=R_{max}} / G_{\ell}(R_{max}) \bigg] \bigg),
\end{equation}
where $\delta$ is the scattering phase shift, $R_{max}$ is a cutoff radius, and the other symbols have been defined previously. One shortcoming of the above method is that as the resonance energy approaches the proton threshold, the numerical method becomes extremely unstable. This is expected, since partial widths drop off exponentially as $E_{c.m.}$ approaches zero. These problems at lower energies can be worked around by using the methods presented in Refs.~\cite{ILIADIS_1997} and \cite{BARKER_1998}; these studies use the high energy values of $\Gamma_{\textnormal{sp}}$ and Eq.~\ref{eq:penetrability} to produce fits for the energy and mass dependence of $\theta_{\textnormal{sp}}^2$. These fits also allow the calculation of the \textit{reduced particle width} for sub-threshold resonances:
\begin{equation}
\label{eq:formal_reduced_width}
\theta^2 = C^2S \theta_{sp}^2.
\end{equation}
The only remaining unknown quantity at this point is $C^2S$, which scales the single-particle quantities to the physical resonance parameters. $C^2S$ can in principle be calculated using the shell model, but it is highly sensitive to the model chosen for the nucleon-nucleon interaction. The situation is further complicated for resonances, since the single particle states are unbound, which requires treatment of states in the continuum. Due to the high uncertainty involved in their theoretical determination, it is advantageous to determine $C^2S$ experimentally. Single-nucleon transfer reactions have long been used to test the shell model experimentally due to their sensitivity to the single-particle structure of excited states \cite{satchler}.
In particular, if a transfer reaction is performed at a high enough energy, then the dominant contribution to the cross section will be a direct reaction process. Direct reactions, as opposed to compound reactions, do not form a compound nucleus; therefore, the particles in the exit channel carry information about the reaction mechanism itself, not just the decay of the excited nucleus \cite{satchler}.
We can now look at how this information can be related to $C^2S$.
\section{Transfer Reactions}
Transfer reactions are a broad class of nuclear reactions in which either a single nucleon or a cluster of nucleons is moved between the target and projectile systems. In normal kinematics, assuming a cluster of nucleons, $c$, we have either a pickup reaction:
\begin{equation}
a + (A+c) \rightarrow (a+c) + A,
\end{equation}
or a stripping reaction:
\begin{equation}
(a+c) + A \rightarrow a + (A+c).
\end{equation}
Describing this system theoretically requires a quantum mechanical treatment of each of these subsystems. For the case of a stripping reaction, this divides into the scattering of $(a+c)$ off of $A$ and $a$ off of $(A+c)$, while accounting for the interaction of $(a+c)$ and $(A+c)$.
Since the measurement presented in this thesis is a proton stripping reaction, i.e., $A(^3 \textnormal{He}, d)B$, I will tailor the following discussion to its description.
\subsection{Optical Model}
A full description of the scattering processes $A + ^{3} \textnormal{He}$ and $d + B$ would require specifying the Hamiltonian for the $N$-nucleon scattering problem. This would involve accounting for all excited states in both the projectile and target systems and coupling all available reaction channels. The nuclear optical model simplifies this multi-nucleon scattering problem by considering a single particle interacting with a complex potential, $\mathcal{U}(r)$. The complex potential removes flux from the elastic channel, which approximates the effect of the other open reaction channels on the cross section. This assumption is only valid when the number of open reaction channels is large enough such that their influence can be averaged over \cite{hodgson1971}.
The theoretical basis for the optical model was first established in Ref.~\cite{feshbach1958}, but fell short of actually prescribing the form of the complex potential. Through detailed analysis of elastic scattering from a range of targets and energies, Becchetti and Greenlees \cite{b_g_p} showed that a phenomenological optical model could successfully describe a wide variety of elastic scattering data. Further theoretical and historical details can be found in Ref.~\cite{hodgson1971}.
Any reference to or listing of optical model parameters in this thesis assumes the following form of the optical potential:
\begin{multline}
\label{eq:optical_model}
\mathcal{U}(r) = V_c(r; r_c)-Vf(r; r_0, a_0)
-i(W-4a_iW_s\frac{d}{dr})f(r; r_i, a_i) \\
+ \bigg( \frac{\hbar}{m_{\pi}c} \bigg)^2V_{so} \frac{1}{r} \frac{d}{dr} f(r; r_{so}, a_{so}) \boldsymbol{\sigma} \cdot \boldsymbol{\ell},
\end{multline}
where $f(r)$ is given by the Woods-Saxon form factor:
\begin{equation}
\label{eq:ws_pot}
f(r; r_0, a_0) = \frac{1}{1 + \textnormal{exp}\bigg({\frac{r-r_0A_t^{1/3}}{a_0}}\bigg)}.
\end{equation}
Each term in $\mathcal{U}$, with the exception of the Coulomb potential, is parameterized by a well depth ($V, W, W_s$), radius ($r_0, r_i, r_{so}$), and diffuseness ($a_0, a_i, a_{so}$). $A_t$ denotes the atomic mass number of the target nucleus.
The spin-orbit interaction is between the projectile orbital and spin angular momentum, $\boldsymbol{\ell}$ and $\mathbf{s}$, respectively. Additionally, the spin-orbit term makes use of the common parameterization where $\boldsymbol{\sigma} = 2 \mathbf{s}$, and $(\frac{\hbar}{m_{\pi}c})^2$ is a constant with a value of approximately $2$ fm$^2$.
The Coulomb term, $V_c$, comes from the potential of a uniformly charged sphere with radius $R_c = r_c A_t^{1/3}$.
These conventions follow those of the reaction code FRESCO \cite{fresco}, which is used for all calculations in this thesis unless otherwise noted. A word of caution is in order due to the many different conventions that exist for the parameter values in the literature. Especially bothersome are the forms of the imaginary surface and spin-orbit terms. For the imaginary surface term, the prefactor can be either $W_s$, $4W_s$, or, as adopted here, $4a_iW_s$. Both the subscripts $D$ and $s$ are in use for the surface imaginary term, standing for either \textit{derivative} or \textit{surface}. For the spin-orbit term, the prefactor may omit the constant, $\mathbf{s}$ may be substituted for $\boldsymbol{\sigma}$, or the so-called Thomas form may be used, which replaces $V_{so}$ and any constant by $\lambda$. These conventions change from author to author, study to study, and reaction code to reaction code. Beware!
The selection of the parameter values for these potentials is critical to the successful theoretical description of transfer reactions. For the phenomenological optical model, they are determined from experimental data, typically differential elastic scattering cross sections and analyzing powers. These data can be local, like those listed in Ref.~\cite{perey_perey}, where only the elastic scattering from a single target nucleus at a single energy is considered. On the other hand, global studies, such as Refs.~\cite{varner, b_g_p, b_g_3he, pang_global, daehnick_global}, use a variety of targets and beam energies to derive relations between potential parameters and target mass, beam energy, and other nuclear properties. Often these two approaches are mixed, with global values used as a starting point for a local data set. These starting global values are then fit to best describe the local data set.
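As an illustration of these conventions, the sketch below implements the central (real volume plus imaginary volume and surface) pieces of Eqs.~\ref{eq:optical_model} and \ref{eq:ws_pot}; the spin-orbit and Coulomb terms are omitted for brevity, and the parameter values are placeholders rather than a recommended set:
\begin{verbatim}
# Central part of the optical potential; parameters are illustrative.
import numpy as np

def ws(r, r0, a0, At):
    """Woods-Saxon form factor, Eq. (ws_pot)."""
    return 1.0 / (1.0 + np.exp((r - r0 * At**(1.0/3.0)) / a0))

def d_ws_dr(r, r0, a0, At):
    """Radial derivative of the Woods-Saxon form factor."""
    f = ws(r, r0, a0, At)
    return -f * (1.0 - f) / a0

def u_central(r, At, V=110.0, r0=1.2, a0=0.7,
              W=10.0, Ws=8.0, ri=1.3, ai=0.65):
    """Complex central potential in MeV (surface term: 4*ai*Ws*d/dr)."""
    real = -V * ws(r, r0, a0, At)
    imag = -(W * ws(r, ri, ai, At) - 4.0*ai*Ws*d_ws_dr(r, ri, ai, At))
    return real + 1j * imag

r = np.linspace(0.1, 12.0, 120)
print(u_central(r, At=23)[:3])
\end{verbatim}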
\subsection{Distorted Wave Born Approximation}
The optical model potentials described in the previous section allow a computationally tractable approach to the $N$-nucleon scattering problem. The Distorted Wave Born Approximation (DWBA) is a perturbative method that treats the transfer of a nucleon between nuclei as a perturbation of the elastic scattering process described using optical potentials \cite{satchler, thompson_nunes_2009, hodgson1971}. In order for this model to be valid, the transfer cross section must be small compared to the elastic scattering cross section and the transfer reaction must be consistent with a direct reaction process. A direct reaction is described to first order as a transition from an initial to final state via an interaction potential, $V$. The T-matrix element for such a process is:
\begin{equation}
T = \braket{\mathbf{k}_f | V | \mathbf{k}_i},
\end{equation}
where $\mathbf{k}_i$ and $\mathbf{k}_f$ are the wave numbers for the incoming and outgoing wave functions, respectively. DWBA constructs these scattering wave functions using the entrance and exit optical model potentials according to:
\begin{equation}
\label{eq:no_time_se}
\bigg( -\frac{\hbar^2}{2 \mu} \nabla^2 + \mathcal{U} \bigg) \ket{\chi} = E \ket{\chi}.
\end{equation}
The T-matrix can then be written down for the transfer reaction using the outgoing and incoming distorted waves for the final and initial states, $\chi_f^{*(-)}(\mathbf{k}_f)$ and $\chi_i^{(+)}(\mathbf{k}_i)$:
\begin{equation}
\label{eq:simple_t}
T = \braket{\chi_f^{*(-)}(\mathbf{k}_f)|V_{transfer}|\chi_i^{(+)}(\mathbf{k}_i)}.
\end{equation}
This equation, while compact, masks much of the complexity of the problem. In particular, the transfer process needs to transform the incoming coordinates for the $A + ^{3}\!\textnormal{He}$ system to those of the outgoing $B + d$ system. A diagram of the coordinates considered in DWBA is shown in Fig.~\ref{fig:three_body}. Explicitly writing these out, we get:
\begin{equation}
\label{eq:complex_t}
T = J \int d \mathbf{r}_d \int d \mathbf{r}_{^3\!\textnormal{He}} \chi_f^{*(-)}(\mathbf{r}_d, \mathbf{k}_d) \braket{B, d|V_{transfer}|A, ^3\!\textnormal{He}} \chi_i^{(+)}(\mathbf{r}_{^3\textnormal{He}}, \mathbf{k}_{^3\textnormal{He}}).
\end{equation}
Eq.~\ref{eq:complex_t} reduces down to Eq.~\ref{eq:simple_t} once the integrals are evaluated. The perturbing potential $V_{transfer}$ is now acting on the states for the internal wave functions of the systems.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{Chapter-2/figures/3body_3he_d.pdf}
\caption{Diagram of the coordinates used in DWBA for a stripping reaction $A(^3\textnormal{He}, d)B$. $\mathbf{r}_{^3\textnormal{He}}$ and $\mathbf{r}_{d}$ are the coordinates for the optical model potentials shown explicitly in Eq.~\ref{eq:complex_t}, while the other coordinates are implicitly contained in the matrix element $\braket{B, d|V_{transfer}|A, ^3\!\textnormal{He}}$. }
\label{fig:three_body}
\end{figure}
By construction, the distorted waves in Eq.~\ref{eq:complex_t} do not contain detailed nuclear structure information. Any angular momentum selection rules, nuclear structure information, or other reaction information is thus contained in $\braket{B, d|V_{transfer}|A, ^3\!\textnormal{He}}$ \cite{Satchler_1964}. It is again the case that the bra-ket notation hides the complex nature of this term, which requires evaluation of all the spins and internal coordinates of the particles involved in the transfer. Specifying the form of the perturbing potential is the last step before it becomes possible to fully describe the DWBA T-matrix.
Building the potential $V_{transfer}$ can be done in either the entrance or exit channel. This choice is referred to as either the \textit{prior} or \textit{post} representation, depending on whether the interactions in the entrance or the exit channel are chosen. First order DWBA is equivalent whichever of these two representations is used to construct the perturbing potential \cite{thompson_nunes_2009}.
An example to help clarify this concept is the prior form of the $A(^3\textnormal{He}, d)B$ reaction. The valence particle is the proton, which in the entrance channel is bound to the deuteron. Since the transfer is dependent on the potential acting to remove the proton from the deuteron, it is necessary to find the energy difference between the individual and composite systems. The composite interaction is between $^3\textnormal{He}$ and $A$, while the individual interactions are for the $p+A$ and $d+A$ systems. Solving these interactions exactly would require solving the $N$-nucleon systems, so it is again necessary to use simplifications to make the problem tractable. Optical potentials can be used for $d+A$ and $^{3}\textnormal{He} + A$ systems, and a single particle model can be used for $p+A$. This gives:
\begin{equation}
\label{eq:prior}
\mathcal{V}_{prior} = V_{p+A} + \mathcal{U}_{d+A} - \mathcal{U}_{^3\textnormal{He}+A},
\end{equation}
where $\mathcal{U}_{d+A}$ and $\mathcal{U}_{^3\textnormal{He}+A}$ are optical model potentials and $V_{p+A}$ is a real single particle potential. $\mathcal{U}_{d+A}$ is typically called the core-core potential because it describes the interaction between the two inert cores in the reaction. These terms can be readily written down for either stripping or pickup reactions in either the post or prior form if it is noted that the core nucleus in the selected channel must appear in every term. It is then a matter of writing down all of the interactions with that core, where the binding and core-core interactions are added together and the remaining term is subtracted off. As an example, the post form for the above reaction will depend on all of the interactions with the core in the exit channel, which is $d$. This gives:
\begin{equation}
\label{eq:post}
\mathcal{V}_{post} = V_{p+d} + \mathcal{U}_{A+d} - \mathcal{U}_{B+d},
\end{equation}
where again it should be noted that $d$ appears in every term.
Plugging either of the above expressions into Eq.~\ref{eq:complex_t} will yield the DWBA cross section for a chosen excited state. This is known as a finite range transfer with full remnant \cite{thompson_nunes_2009}.
\subsection{Overlap Functions and Spectroscopic Factors}
\label{sec:overlaps}
It can be seen that any nuclear structure information that needs to be included in DWBA has to be contained in the matrix element $\braket{B, d|V_{transfer}|A, ^3\!\textnormal{He}}$. A full exploration of this term is beyond the scope of this thesis, but there are some details that merit discussion in order to draw the clear line from transfer reactions to nuclear astrophysics. If we return to the proton stripping example, it is easier to examine this matrix element in the post form, with the assumption that the remnant term is negligible. This gives:
\begin{equation}
\label{eq:transfer_matrix_element}
\braket{B, d|\mathcal{V}_{post}|A, ^3\!\textnormal{He}} = \braket{B|A} \braket{d|V_{d+p}|^3\!\textnormal{He}}.
\end{equation}
The term $\braket{B|A}$ is called the overlap function, and describes a state of the composite nucleus, $B$, in terms of the core nucleus and a proton, $A+p$. It is more accurate to express this quantity with its dependence on the relative coordinate between the proton and the core, $\mathbf{r}_{pA}$, as $\Phi_{I_A:I_B}(\mathbf{r}_{pA})$. In general, this overlap can be viewed as the coefficient for a given state in the expansion:
\begin{equation}
\label{eq:overlap_expansion}
\psi^B_{I_B}(\zeta_A, \mathbf{r}_{pA}) = \sum_{I_A} \Phi_{I_A:I_B}(\mathbf{r}_{pA}) \psi^A_{I_A}(\zeta_A),
\end{equation}
where the labels $I_A$ and $I_B$ can stand for any involved quantum number such as spin or isospin, while $\zeta_A$ is the symbol given to the internal coordinates of $A$. The spectroscopic factor, which when combined with the isospin Clebsch-Gordan coefficient is written $C^2S_{\ell s j}^{j I_A I_B}$, is defined as the integral of the square of the norm of the overlap function:
\begin{equation}
\label{eq:overlap_integral}
C^2S = \int_0^{\infty} \big|\Phi_{I_A:I_B}(\mathbf{r}_{pA})\big|^2 \, d \mathbf{r}_{pA}.
\end{equation}
The issue at this point, as has been the theme with this entire section, is that we are again confronted with a multi-nucleon problem when trying to determine $\Phi_{I_A:I_B}$. The problem is simplified by introducing a single-particle wave function, typically generated by a Woods-Saxon potential, that is normalized to unity. Thus, the overlap becomes:
\begin{equation}
\label{eq:coeff_frac_parentage}
\Phi_{I_A:I_B}(\mathbf{r}_{pA}) \approx A_{\ell s j}^{j I_A I_B} \phi_{\ell s j}^{j I_A I_B}(\mathbf{r}_{pA}).
\end{equation}
The coefficient $ A_{\ell s j}^{j I_A I_B}$ is called the \textit{coefficient of fractional parentage} (CFP). By Eq.~\ref{eq:overlap_integral}, we see that:
\begin{equation}
\label{eq:spec_factor_fraction_coeff}
C^2S_{\ell s j} = \big | A_{\ell s j}^{j I_A I_B} \big |^2.
\end{equation}
Stepping back from the math for a second, Eq.~\ref{eq:spec_factor_fraction_coeff} is the result of a fairly simple logic:
\begin{enumerate}
\item System $B$ is made of an inert core $A$ plus a proton.
\item System $B$ can thus be expanded into states of the system $A+p$.
\item The coefficients of this expansion, called the overlap, can be modeled as single particle states with quantum numbers $n \ell s j$ coming from the shell model.
\item The square of the norm of an overlap function is not equal to unity, but rather equal to the spectroscopic factor for that state.
\item Since the single particle states from item $3$ are unit normalized, then they must be weighted by a factor $A_{\ell s j}^{j I_A I_B}$.
\end{enumerate}
The spectroscopic factor can be viewed as the fraction of the total strength of a single particle shell contained in a single state.
Putting all of this information together, it can be seen that the matrix element in Eq.~\ref{eq:transfer_matrix_element} is proportional to the CFP for the $A+p$ system. The wave function for $^3$He can also be expanded via the overlap, yielding a CFP for the $d+p$ system as well. Once this is done, the DWBA formalism is complete and a cross section can be computed. The differential cross section will be proportional to the square of the T-matrix, and this cross section can be compared to the experimental cross section $\frac{d \sigma}{d \Omega}_{Exp}$. Pulling the spectroscopic factors out front gives:
\begin{equation}
\label{eq:exp_to_theory}
\frac{d \sigma}{d \Omega}_{Exp} = C^2S_{A+p}C^2S_{d+p} \frac{d \sigma}{d \Omega}_{\textnormal{DWBA}}.
\end{equation}
The $d+p$ system spectroscopic factor is frequently omitted from this equation in the literature, reflecting the fact that it has been computed either by \textit{ab initio} methods \cite{brida_ab_initio} or simply from the relation for nuclei with mass number $\leq 4$, $C^2S = A/2$, e.g., $C^2S_{p+d} = 3 \times \frac{1}{2}$ \cite{satchler}.
The upshot of DWBA is now clear: a transfer cross section measured in the lab can be used to extract $C^2S_{A+p}$. Using the spectroscopic factor, theoretical single-particle widths are scaled to give $\Gamma_p$, which, in turn, can be plugged into Eq.~\ref{eq:narrow_rate_with_resonance_strength}. Transfer reaction experiments thereby provide nuclear input essential to thermonuclear reaction rates.
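In practice, the comparison in Eq.~\ref{eq:exp_to_theory} amounts to fitting a single scale factor between the measured angular distribution and the DWBA prediction. The sketch below performs a weighted least-squares fit with invented data points; in an actual analysis the DWBA curve would come from a reaction code such as FRESCO:
\begin{verbatim}
# Extract C2S(A+p) by scaling a DWBA curve to data; numbers invented.
import numpy as np

theta    = np.array([10.0, 15.0, 20.0, 25.0, 30.0])  # angles (deg)
dsdw_exp = np.array([2.10, 1.45, 0.92, 0.55, 0.33])  # mb/sr, "measured"
err      = 0.10 * dsdw_exp                           # 10% uncertainties
dsdw_th  = np.array([4.40, 3.05, 1.90, 1.16, 0.70])  # DWBA, C2S = 1
c2s_dp   = 1.5                                       # d+p factor, 3 x 1/2

# Weighted least squares for S = C2S(A+p) * C2S(d+p):
w = 1.0 / err**2
scale = np.sum(w * dsdw_exp * dsdw_th) / np.sum(w * dsdw_th**2)
print("C2S(A+p) ~", scale / c2s_dp)
\end{verbatim}
The Bayesian methods of Chapter~\ref{chap:bay_dwba} address the uncertainties that such a point fit ignores, but the underlying scaling relation is the same.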
\section{Summary}
This chapter has laid out the foundation of how nuclear physics can be used to inform stellar burning. Resonant reactions were defined, and their contribution to thermonuclear reaction rates were made explicit in the case of narrow isolated resonances. The connection between the parameters in the Breit-Wigner cross section and nuclear structure led to an outline of how transfer reactions could constrain these reactions. Finally, the formalism for DWBA was presented, and the theory was examined in order to show how spectroscopic factors arise from the nuclear overlap functions and how they are determined from experiment.
\chapter{Nuclear Physics Uncertainties}
\label{chap:nuclear_unc}
\section{Introduction}
This chapter introduces the concept of reaction rate uncertainties, which motivates much of the original research presented in this thesis. It will set the stage for the work on Bayesian DWBA presented in Chapter~\ref{chap:bay_dwba}, and discuss the evaluation of the $^{39}$K$(p, \gamma)$ reaction rate in which I was involved. Within this framework, nuclear physics uncertainties associated with quantities measured in the lab can be propagated all the way through a nuclear reaction network calculation. Thus, the results of nucleosynthesis calculations can be compared with astronomical observations while taking into account the underlying nuclear physics uncertainties.
\section{Reaction Rate Uncertainties}
As nuclear inputs such as partial widths, resonance energies, and resonance strengths are measured in the lab, it is essential to quantify their impact on the resulting astrophysical reaction rate. At first glance this appears to be a simple calculation, with the lab values plugged into the reaction rate formulas from Chapter~\ref{chap:reactions}. This simple calculation, however, would ignore a fundamental property of these experimentally determined quantities: they are subject to uncertainties.
The danger in ignoring uncertainties is the same for nuclear astrophysics as for any other field of science: overconfidence in predictions and a loss of direction in how to improve them. In other words, it is critical to estimate uncertainties both to understand what is known and to develop plans for how to know it better.
The method described here for propagating nuclear uncertainties through reaction rate calculations was first described and utilized in a series of papers by Longland, Iliadis, and collaborators \cite{LONGLAND_2010_1, ILIADIS_2010_2, ILIADIS_2010_3, ILIADIS_2010_4}. While these methods will surely evolve in the future, they are of critical importance to the future of the field, and their foundation merits discussion.
Propagation of uncertainties through calculations can be done in several ways, perhaps the most familiar being:
\begin{equation}
\label{eq:error_formula}
\sigma_f = \sqrt{\sum_i \bigg( \frac{\partial f}{\partial x_i} \bigg)^2 \sigma_i^2},
\end{equation}
where $f$ is a function of the random variables $\mathbf{x}$. This equation relates the standard deviation of $f$, $\sigma_f$, to the standard deviations, $\sigma_i$, of the dependent variables, $x_i$. The issue with the above formula is that it is an approximation, which might not hold if there are strong correlations between the variables or if the uncertainties are not normally distributed \cite{taylor_error_analysis}. Before moving on, an unfortunate notational clash in nuclear physics should be mentioned: $\sigma$ denotes both a cross section and a standard deviation. The context in this chapter should make it obvious which is meant. To be more explicit, cross sections will always be expressed as functions of another variable, i.e., $\sigma(E)$ or $\frac{d \sigma}{d \Omega}$.
As will be shown in the next section, the uncertainties for many nuclear physics quantities, such as the cross section, are not normally distributed. Furthermore, the derivatives in Eq.~\ref{eq:error_formula} might require evaluation via numerical methods, eliminating much of the computational simplicity that makes this formula attractive in the first place. Due to these reasons, it is beneficial to use an alternative method when confronted with the specific issues that arise in reaction rate calculations.
A simple and flexible solution to uncertainty propagation is using a Monte Carlo approach. This method assigns explicit probability distributions for each $x_i$:
\begin{equation}
\label{eq:prob_dist}
x_i \sim X_i,
\end{equation}
where the symbol $\sim$ means \say{distributed according to.} If samples can be drawn from $X_i$, then these samples, in turn, can be used to calculate $f$. Thus, the Monte Carlo approach draws many samples from the distribution of all variables that are subject to random effects. These samples are fed one at a time through the formula for $f$, yielding samples for $f$. Finally, the samples of $f$ are collected and analyzed. This approach, while simple, has some definite drawbacks. If $f$ is computationally expensive, then it can be infeasible to perform the necessary number of evaluations, which can be on the order of $10^6$. Additionally, there is no guarantee that the randomly drawn samples will be representative of the underlying distribution. This issue becomes especially problematic when the tails of the distribution are important.
As a quick example, let us take $f = x + y$, where $x$ and $y$ are distributed according to:
\begin{align}
\label{eq:example_dist}
x \sim \mathcal{N}\big(4.0, 1.0 \big) \\
y \sim \mathcal{N}\big(3.0, 1.0 \big),
\end{align}
where $\mathcal{N}(\mu, \sigma^2)$ stands for the normal distribution, which is parameterized by the mean, $\mu$, and the variance, $\sigma^2$. Thus, the above equations denote normal distributions with means $4$ and $3$, both with variance $1$.
Using Eq.~\ref{eq:error_formula} we can easily find that $f$ has $\mu = 7.0$ and $\sigma = \sqrt{1^2 + 1^2} = \sqrt{2}$. If instead we use a Monte Carlo method and draw $10^5$ samples from each of $x$ and $y$, we get $10^5$ realizations of $f$. These samples are plotted in Fig.~\ref{fig:monte_carlo_example}. Once we have the samples of $f$, the other issue with Monte Carlo procedures, or really any sampling-based method, arises: how do we best summarize these samples? This is a trivial problem for this example, since $f$ is well described by a normal distribution, with the consequence that the mean, median, and mode are all equal and that one standard deviation contains $68 \%$ of the samples. In general, however, this problem will arise again and again, and there is no universal answer, so a deeper discussion is best left until we have a problem that requires more thought.
Blissfully unconcerned with nuances of summary statistics, we take the mean and the standard deviation of the samples, which gives $\mu=7.00$ and $\sigma=1.41$.
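This procedure is simple enough to sketch in a few lines. The following minimal example (in Python with NumPy; all names are illustrative) reproduces the calculation above:
\begin{verbatim}
# Minimal sketch of the Monte Carlo propagation described in the text.
import numpy as np

rng = np.random.default_rng(seed=0)
n = 100_000  # matches the 10^5 samples quoted above

x = rng.normal(loc=4.0, scale=1.0, size=n)  # x ~ N(4, 1)
y = rng.normal(loc=3.0, scale=1.0, size=n)  # y ~ N(3, 1)

f = x + y  # one realization of f per (x, y) pair

# Summary statistics; for this normal case they recover
# mu = 7 and sigma = sqrt(2) ~ 1.41 to within sampling noise.
print(f"mean = {f.mean():.2f}, std = {f.std():.2f}")
\end{verbatim}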
\begin{figure}
\centering
\includegraphics[width=.6\textwidth]{Chapter-3/figs/Monte_Carlo_example.pdf}
\caption{Histogram showing the comparison between the probability density of a normal distribution with parameters found from Eq.~\ref{eq:error_formula} and the samples obtained from the Monte Carlo method.}
\label{fig:monte_carlo_example}
\end{figure}
It is clear from the above discussion that before a Monte Carlo procedure can be applied to the problem of calculating thermonuclear reaction rates, the inputs must be assigned probability distributions that can be sampled. Furthermore, once the samples are drawn for $\langle \sigma v \rangle$, a convenient way to summarize them will be necessary before they can be integrated into a reaction network.
\section{Probability Distributions For Nuclear Inputs}
The form of the probability distributions for $E_r$, $\Gamma$, $\Gamma_x$ (partial width for particle $x$), and $\omega \gamma$ requires examination of how these quantities are derived from experiments. The central limit theorem states that the sum of independent random variables, no matter how those variables are distributed, will approach a normal distribution as more terms are added to the sum. This theorem, and some of its corollaries, provides a reasonable form for all of the above experimentally determined quantities.
\subsection{Resonance Energies}
Energies in the lab are typically subject to uncertainties arising from the sum of many terms. Take for example proton elastic scattering off of a thin target of a heavy material. The protons are detected by a surface barrier silicon detector positioned close to $0^{\circ}$. In this case, the measured proton energy will be impacted by statistical uncertainties that come from a variety of sources. The incoming beam will have energy fluctuations due to the instability of the accelerator, $\delta E_{beam}$. Once this beam impinges on the thin target, it will lose some amount of energy, $\delta E_{target}$. The energy deposited within the detector by the scattered proton will be converted to an electric charge, which is subject to thermal effects of the electrons moving through the semiconductor, $\delta E_{thermal}$. Finally, this charge will be integrated by a charge-sensitive preamplifier, which is itself subject to electronic noise, $\delta E_{electronic}$. We can think of all of these terms as additive effects on the energy of the particle, $E_p$:
\begin{equation}
\label{eq:energy_example}
E_{detected} = E_p + \delta E_{beam} + \delta E_{target} + \delta E_{thermal} + \delta E_{electronic},
\end{equation}
where it should be understood that $E_p$ is a number, but all of the $\delta$ terms are randomly distributed quantities drawn from unknown distributions.
The above example excludes many potential factors, but the point is that a measured energy is most often the sum of many terms, each with its own uncertainty. Because of this, the central limit theorem can be invoked: as these independent random effects are added up, the resulting quantity, i.e., the energy, will tend toward a normal distribution. Thus, resonance energies can reasonably be assumed to be normally distributed.
\subsection{Quantities Derived From Cross Sections}
Although cross sections were discussed theoretically in Chapter \ref{chap:reactions}, we need to look at how this quantity is measured to understand the expected distribution for a nuclear cross section. In its simplest form:
\begin{equation}
\label{eq:cross_section_example}
\sigma(E) = \frac{N_R}{(N_B/A) N_T},
\end{equation}
where $N_R$ is the number of reactions, $N_B$ is the number of particles in the beam, $N_T$ is the number of particles in the target, and $A$ is the area of the incident beam. Since all of these quantities are positive, we can look at the logarithm of $\sigma(E)$:
\begin{equation}
\label{eq:cross_section_log}
\ln{\sigma(E)} = \ln{N_R} +\ln{A} - \ln{N_B} - \ln{N_T}.
\end{equation}
Random fluctuations in any of these quantities, each entering as a multiplicative factor, will therefore lead to a measured cross section $\sigma_{measured}(E)$:
\begin{equation}
\label{eq:cross_section_log_error}
\ln{\sigma_{measured}(E)} = \ln{\sigma(E)} + \ln{\delta N_R} + \ln{\delta A} - \ln{\delta N_B} - \ln{\delta N_T}.
\end{equation}
It can be seen that Eq.~\ref{eq:cross_section_log_error} is analogous to Eq.~\ref{eq:energy_example}. Thus, the \textit{logarithm} of $\sigma_{measured}(E)$ tends to be normally distributed according to the central limit theorem. The quantity $\sigma_{measured}(E)$ itself is then the exponential of a normally distributed variable, and therefore follows the \textit{log-normal} distribution, which is given by:
\begin{equation}
\label{eq:lognormal_dist}
f(x) = \frac{1}{\sigma \sqrt{2 \pi}} \frac{1}{x} e^{-(\ln{x} - \mu)^2/(2 \sigma^2)},
\end{equation}
where the parameters $\mu$ and $\sigma$ are the mean and standard deviation of the corresponding normal distribution, respectively. Unlike the normal distribution, the log-normal distribution's mean does not correspond to $\mu$, nor is its variance given by $\sigma^2$. Instead, the mean, $E[x]$, is given by:
\begin{equation}
\label{eq:lognormal_mean}
E[x] = e^{(2 \mu + \sigma^2)/2},
\end{equation}
and the variance, $V[x]$, by:
\begin{equation}
\label{eq:lognormal_variance}
V[x] = e^{(2 \mu + \sigma^2)} \big[ e^{\sigma^2} - 1 \big].
\end{equation}
It is also useful to find an easy way to express $68 \%$ coverage. One possible way is to give the median ($med.$):
\begin{equation}
\label{eq:lognormal_median}
med.~[x] = e^{\mu},
\end{equation}
which for the log-normal distribution is also the \textit{geometric} mean, while Eq.~\ref{eq:lognormal_mean} gives the \textit{arithmetic} mean. The geometric standard deviation is given by what is called the factor uncertainty ($f.u.$):
\begin{equation}
\label{eq:lognormal_fu}
f.u.~[x] = e^{\sigma}.
\end{equation}
These definitions are given mostly for later reference, but they do have intuitive meaning. Specifically, cross sections tend to come with a percentage uncertainty. A $20 \%$ uncertainty, for example, can be cast as a factor uncertainty of $f.u. = 1.20$. In this case, $68 \%$ coverage is given by the interval from $(med. / f.u.)$ to $(med. \times f.u.)$. All of these quantities provide useful ways to summarize the shape and spread of a log-normal distribution.
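As a concrete illustration, the following short sketch (with purely illustrative values of $\mu$ and $\sigma$) evaluates these summary quantities:
\begin{verbatim}
# Sketch relating the log-normal parameters (mu, sigma) to the
# summary quantities defined in the text; values are illustrative.
import numpy as np

mu, sigma = 0.5, 0.2  # parameters of the underlying normal

mean   = np.exp((2 * mu + sigma**2) / 2)   # arithmetic mean
var    = np.exp(2 * mu + sigma**2) * (np.exp(sigma**2) - 1)
median = np.exp(mu)                        # geometric mean
fu     = np.exp(sigma)                     # factor uncertainty

# 68% coverage interval: (med./f.u.) to (med. x f.u.)
low, high = median / fu, median * fu
print(mean, var, median, fu, low, high)
\end{verbatim}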
It is also interesting to note what happens to a log-normally distributed variable as $\sigma$ becomes small. Take $Y$ to be log-normal, then there is a corresponding normal variable $X$ related by:
\begin{equation}
\label{eq:log_normal_to_normal}
Y = e^{X} = e^{\mu + \sigma Z},
\end{equation}
where $Z \sim \mathcal{N}(0, 1)$. If $\sigma$ is small, then $e^{\sigma Z} \approx (1+ \sigma Z)$, which yields $Y \approx e^{\mu}(1+\sigma Z)$. Defining new parameters $\mu' = e^{\mu}$ and $\sigma' = \mu' \sigma$, it can be seen that $Y \sim \mathcal{N}(\mu', \sigma'^2)$. Thus, in the limit of small $\sigma$, a log-normal distribution begins to resemble a normal distribution with mean $e^{\mu}$ and standard deviation $e^{\mu} \sigma$.
The arguments presented in this section were made for a cross section measurement, but any nuclear quantity that is the product of many positive, randomly distributed quantities will be well described by a log-normal distribution due to the power of the central limit theorem. From the last chapter, it can indeed be seen that resonance strengths, Eq.~\ref{eq:resonance_yield}, and partial widths, Eq.~\ref{eq:proton_partial_width}, meet this requirement.
\subsection{Monte Carlo Reaction Rates}
\label{sec:mc_rates}
Once appropriate probability distributions have been assigned to all the relevant quantities, samples can be drawn for each of them to yield samples for $N_A \langle \sigma v \rangle$ (where $N_A$ is Avogadro's number) as a function of temperature. The program \texttt{RatesMC} was used to perform this Monte-Carlo sampling. Details of this program can be found in Ref.~\cite{LONGLAND_2010_1}.
An instructive example is to examine a simplified reaction rate that is dominated by narrow resonances. One of the closest available examples to a narrow resonance dominated rate is the $^{21}$Ne$(p, \gamma)^{22}$Na rate. If, for now, the upper limits of the two lowest energy resonances are ignored, then this rate is determined completely by a direct capture component and $44$ narrow resonances. The input data for \texttt{RatesMC} is taken from Ref.~\cite{ILIADIS_2010_3}. At higher temperatures ($T \approx 30$ MK), the direct capture component becomes negligible and the rate is found from the sum of the resonant contributions:
\begin{equation}
\label{eq:narrow_rate_sum}
N_A \langle \sigma v \rangle_{p \gamma} = N_A \bigg(\frac{2 \pi}{m_{p} kT} \bigg)^{3/2} \hbar^2 \sum_{n} e^{-E_n/kT} (\omega \gamma)_n.
\end{equation}
It might be expected from this equation that the sum will cause the central limit theorem to take effect, resulting in a normally distributed reaction rate. However, at lower temperatures the Gamow peak effectively truncates this sum, meaning that the rate is dominated by individual resonances. In this case the rate is best viewed as a product of several variables, i.e., it will be log-normally distributed. This is indeed the case, as seen in Fig.~\ref{fig:lognormal_na}, which shows the $10000$ Monte Carlo samples drawn at $T = 70$ MK.
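The mechanism can be sketched as follows. This is not the \texttt{RatesMC} implementation; the resonance energies, strengths, and uncertainties below are invented placeholders rather than the evaluated $^{21}$Ne$(p, \gamma)^{22}$Na inputs, and the physical prefactor is folded into a single constant:
\begin{verbatim}
# Schematic Monte Carlo sampler for a narrow-resonance rate sum.
import numpy as np

rng = np.random.default_rng(1)
kT = 6.0  # kT in keV at ~70 MK (kT[keV] ~ 86.2 * T[GK])

E_r   = np.array([120.0, 180.0, 250.0])  # hypothetical energies, keV
dE_r  = np.array([1.0, 2.0, 1.5])        # normal 1-sigma uncertainties
wg    = np.array([1e-3, 5e-2, 2e-1])     # hypothetical strengths, eV
fu_wg = np.array([1.2, 1.3, 1.15])       # factor uncertainties

n = 10_000
E_samp  = rng.normal(E_r, dE_r, size=(n, 3))     # energies: normal
wg_samp = wg * fu_wg ** rng.normal(size=(n, 3))  # strengths: log-normal

prefac = 1.0  # absorbs N_A (2 pi / (m kT))^(3/2) hbar^2 in this sketch
rate = prefac * np.sum(np.exp(-E_samp / kT) * wg_samp, axis=1)

# When one resonance dominates, the sampled rate is close to log-normal.
print(np.median(rate), np.exp(np.std(np.log(rate))))  # med. and f.u.
\end{verbatim}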
\begin{figure}
\centering
\includegraphics[width=.6\textwidth]{Chapter-3/figs/lognormal_rate.pdf}
\caption{10000 samples for the simplified $^{21}$Ne$(p, \gamma)^{22}$Na reaction rate at $70$ MK. The log-normal approximation is shown as the dark blue, dashed line.}
\label{fig:lognormal_na}
\end{figure}
To further examine this effect, the contribution of individual resonances can be plotted as a function of temperature using:
\begin{equation}
\label{eq:rate_contribution}
C_n(T) = \frac{\langle \sigma v \rangle_{n}(T)}{\sum_m \langle \sigma v \rangle_{m}(T)},
\end{equation}
where $C_n(T)$ denotes the relative contribution of the $n^{\textnormal{th}}$ resonance at temperature $T$.
The results of this calculation for $^{21}$Ne$(p, \gamma)^{22}$Na are shown in Fig.~\ref{fig:contribution_plot_ne}. This plot emphasizes the impact of the Gamow peak, which causes the rate to be dominated only by resonances within this narrow energy window. Furthermore, as the temperature increases, the Gamow peak widens, the level density of the compound nucleus grows, and the sum of Eq.~\ref{eq:narrow_rate_sum} starts to become important. This effect can be seen in detail in Fig.~\ref{fig:ridge_ne}, which shows the samples from \texttt{RatesMC} at various temperatures, divided by their median value. The distribution of the rate becomes more normal as the effects of individual resonances become less important. However, as Fig.~\ref{fig:lognorm_norm_rate} shows, both the log-normal and normal distributions describe the reaction rate well at $10$ GK. This might seem odd, but it was shown above that a normal distribution arises from a log-normal one when $\sigma$ becomes small. $\sigma$ becomes small in this case, even though the $f.u.$ values of the individual resonances might be quite large, because the sum does not preserve the percentage uncertainty.
The log-normal distribution of reaction rates was first mentioned by Hix et al. \cite{HIX_2003}, but little justification was given beyond the observation that the rates are manifestly positive. With the development of the above Monte Carlo method in Ref.~\cite{LONGLAND_2010_1}, however, it became possible to see why these distributions arise naturally in reaction rates and, more importantly, to connect them with the uncertainties measured in the lab. The log-normal distribution also has the benefit of being defined by only two parameters, making it easy to tabulate reaction rates, which, as will be shown, is necessary for reaction rate libraries.
\begin{figure}
\centering
\includegraphics[width=.6\textwidth]{Chapter-3/figs/GraphContribution.pdf}
\caption{The individual contribution to the $^{21}$Ne$(p, \gamma)^{22}$Na reaction rate without the inclusion of upper limits. \textit{A-Rate $1$} refers to the non-resonant direct capture contribution, while the short-dashed line is the sum of all other resonances. The width of the lines shows the uncertainty in the contribution calculation. Only resonances that contribute more than $15 \%$ to the total rate are plotted. Furthermore, this rate ignores upper limits on two resonances for simplicity.}
\label{fig:contribution_plot_ne}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.8\textwidth]{Chapter-3/figs/rate_ridge_21Ne.pdf}
\caption{Plot of the Monte Carlo samples for the $^{21}$Ne$(p, \gamma)^{22}$Na reaction rate without upper limits normalized by the median of their corresponding log-normal distribution at various temperatures. As the temperature increases, no single resonance dominates, and the rate becomes more normally distributed.}
\label{fig:ridge_ne}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.6\textwidth]{Chapter-3/figs/lognormal_normal_rate.pdf}
\caption{Monte-Carlo samples for the $^{21}$Ne$(p, \gamma)^{22}$Na rate at $10$ GK. The orange and dark blue, dashed lines show the normal and the log-normal approximations for the samples, respectively. }
\label{fig:lognorm_norm_rate}
\end{figure}
\subsection{Upper Limits}
Before this discussion is finished, it is important to note the impact of upper limits on reaction rates. An upper limit is reported from a study when the counts in the region of interest do not exceed background. As mentioned in the last section, I ignored the two low energy resonances in $^{21}$Ne$(p, \gamma)^{22}$Na, which only have reported upper limits, in order to simplify the discussion. However, defining a probability distribution for these upper limits is not a trivial issue. Ref.~\cite{LONGLAND_2010_1} made the first arguments for how this should be approached. Without proof, the Porter-Thomas distribution (i.e., a chi-squared distribution with one degree of freedom \cite{porter_1956}) describes the distribution of reduced widths:
\begin{equation}
\label{eq:porter_thomas}
f(\theta^2) = \frac{c}{\sqrt{\theta^2}} e^{-\theta^2/ (2 \langle \theta^2 \rangle)},
\end{equation}
with $c$ being a normalization constant and $\langle \theta^2 \rangle$ denoting the \textit{local mean value} of the reduced width. This value can be estimated from experiments, with the most comprehensive study being Ref.~\cite{Pogrebnyak_2013}.
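Sampling from this distribution is straightforward; a minimal sketch, assuming a placeholder value for $\langle \theta^2 \rangle$ and truncating at the reported upper limit in the spirit of Ref.~\cite{LONGLAND_2010_1}, might look as follows:
\begin{verbatim}
# Sketch of sampling an upper-limit reduced width from the
# Porter-Thomas (chi-squared, 1 d.o.f.) distribution.
import numpy as np

rng = np.random.default_rng(2)
mean_theta2 = 0.01   # assumed local mean reduced width <theta^2>
upper_limit = 0.05   # assumed experimental upper limit

# theta^2 / <theta^2> follows a chi-squared distribution, 1 d.o.f.
theta2 = mean_theta2 * rng.chisquare(df=1, size=10_000)

# Reject samples above the reported limit (truncation).
theta2 = theta2[theta2 < upper_limit]
\end{verbatim}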
Including the two previously neglected low energy resonances, which only have upper limits, back into the Monte Carlo calculation gives dramatically different behaviour at lower temperatures. Importantly, the log-normal distribution does not accurately represent the reaction rate when the rate is dominated by upper limits. This means that while the log-normal distribution is convenient and appropriate in many situations, the reaction rate cannot in general be said to be log-normally distributed. This can be seen in Fig.~\ref{fig:ridge_ne_upper_limits}: the rate normalized by the $med.$ of the samples is no longer on the order of one. As the rate becomes dominated by measured resonances at $T = 40$ MK, the distribution returns to an approximately log-normal form.
\begin{figure}
\centering
\includegraphics[width=.8\textwidth]{Chapter-3/figs/rate_ridge_21Ne_upper_limits.pdf}
\caption{Plot of the Monte Carlo samples for the $^{21}$Ne$(p, \gamma)^{22}$Na reaction rate normalized by the median of their corresponding log-normal distribution at various temperatures. With the upper limits included, the rate is no longer well described by a log-normal distribution at low temperatures, which can be seen from the normalized rate deviating strongly from one.}
\label{fig:ridge_ne_upper_limits}
\end{figure}
\section{Reaction Networks}
Accurate estimates of reaction rate uncertainties are, by themselves, of little importance. The ultimate goal of nuclear astrophysics is to relate theoretical calculations of stellar burning to observations. This requires that the rate of change of each nuclear species be tracked through the entirety of the stellar evolution. Considering just reactions between two particles, this can be done by solving a set of coupled differential equations of the form:
\begin{equation}
\label{eq:network_rate}
\frac{d N_{i}}{dt} = \sum_{j k} N_j N_k \langle \sigma v \rangle_{jk} - \sum_m N_i N_m \langle \sigma v \rangle_{im},
\end{equation}
where $N_i$ is the number of particles of species $i$.
This equation can be made more general by including any other production or destruction term, whether from beta decay, electron capture, or photodisintegration \cite{iliadis_book}.
Solving this set of coupled differential equations requires that each $\langle \sigma v \rangle$ be specified at every temperature considered. This is typically done by compiling a tabulated reaction rate library, which specifies a rate for each reaction at specific temperatures. Examples are REACLIB \cite{Cyburt_2010}, BRUSLIB \cite{Xu_2013}, and STARLIB \cite{sallaska_2013}. Of these libraries, STARLIB is, at this time, the only one to incorporate statistical uncertainties. This is done by assuming the rate at each temperature is log-normally distributed, so that each rate is summarized by its recommended rate, $med.$, and factor uncertainty, $f.u.$
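As a toy illustration of the structure of Eq.~\ref{eq:network_rate}, the following sketch evolves a four-species, two-reaction network ($A + B \rightarrow C$ and $A + C \rightarrow D$) with constant, made-up $\langle \sigma v \rangle$ values; a real network would instead interpolate temperature-dependent rates from a library such as STARLIB and track hundreds of species:
\begin{verbatim}
# Toy reaction network: A + B -> C and A + C -> D.
import numpy as np
from scipy.integrate import solve_ivp

sv1, sv2 = 1e-4, 5e-5  # hypothetical constant <sigma v> values

def rhs(t, N):
    A, B, C, D = N
    r1 = sv1 * A * B   # rate of A + B -> C
    r2 = sv2 * A * C   # rate of A + C -> D
    return [-r1 - r2, -r1, r1 - r2, r2]

# Networks are stiff, so a stiff-capable solver is used.
sol = solve_ivp(rhs, t_span=(0.0, 1e5), y0=[1.0, 0.5, 0.0, 0.0],
                method="LSODA")
print(sol.y[:, -1])  # final abundances of A, B, C, D
\end{verbatim}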
We now arrive at the final link in the chain: propagating the reaction rate uncertainties, i.e., those tabulated in STARLIB, to the predicted outputs of the network calculation. Again, Monte Carlo methods are the only effective way to do this, due to the thousands of rates involved and the complexity of the network. However, the computational cost of Monte Carlo procedures becomes most pronounced at this step, since it is currently computationally infeasible to couple a full reaction network to a stellar evolution code and run it several thousand times. One recourse is to use \textit{post-processing}, where a temperature and density profile extracted from a more complex stellar evolution code is fed into a reaction network that separately evolves the nuclear material. This network calculation is referred to as a single-zone model because it only tracks nucleosynthesis at a single location within the star and assumes that this location is representative of the star as a whole, which necessarily excludes effects like convection and rotation.
The code used for post-processing was originally developed by Nikos Prantzos and modified extensively by Christian Iliadis. It implements Gear's method to solve the coupled differential equations \cite{Longland_2014}, and is able to sample reaction rate uncertainties in the manner detailed in Ref.~\cite{Longland_2012}. This formalism draws a sample, $x(T)$, for the rates using a random variable $p \sim \mathcal{N}(0,1)$ to alter the median rate according to:
\begin{equation}
\label{eq:rate_sample}
x(T) = med. \times f.u.^p,
\end{equation}
where $med.$ and $f.u.$ are the log-normal parameters tabulated in STARLIB. Thus, before a network is solved, a random sample of $p$ is drawn for each reaction in the network. Because $f.u.$ is temperature dependent, this scheme varies each rate by the same \textit{relative} amount at every temperature. For example, consider a calculation with a varying temperature profile and a single rate, $\langle \sigma v \rangle_i$, with rate variation factor $p_i = 1$. At each temperature, the sampled rate is $e^{\mu_i}e^{\sigma_i(T)}$; that is, the median is multiplied by $f.u.$, giving a value at the upper end of the $68 \%$ coverage interval for every temperature. If instead $p_i = -1$, the sampled rate is $e^{\mu_i}/e^{\sigma_i(T)}$, giving the lower end of the $68 \%$ coverage interval. It should be noted that in general $p$ is temperature dependent; however, Ref.~\cite{Longland_2012} showed that this simple scheme performs nearly as well as more complicated parameterizations of $p$.
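A sketch of this sampling scheme, with hypothetical tabulated values, is shown below; one value of $p$ is drawn per reaction per network run and applied across the whole temperature grid:
\begin{verbatim}
# Sketch of the rate sampling scheme x(T) = med.(T) * f.u.(T)^p.
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical log-normal parameters on a temperature grid (GK).
T_grid = np.array([0.07, 0.1, 0.5, 1.0])
med    = np.array([1e-9, 1e-6, 1e-1, 1e2])  # recommended rates
fu     = np.array([2.0, 1.8, 1.3, 1.2])     # factor uncertainties

p = rng.normal()             # p ~ N(0, 1), one draw per reaction
rate_sample = med * fu ** p  # same relative variation at every T
\end{verbatim}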
Once the network is run, the problem becomes identifying which reactions impact the abundance of a given element. Since the samples are drawn from meaningful statistical distributions, the relationship between a rate's variation and the final abundance can be understood in terms of correlations. To be more specific: given the variations of each rate, which of them causes the largest corresponding variation in a selected nuclear species' final abundance? Unfortunately, there is no single way to measure statistical correlation. Two methods have been found to be most effective in the case of Monte Carlo network calculations: the Spearman rank-order correlation coefficient, $r_s$, and the mutual information, $MI$. The Spearman rank-order correlation coefficient converts the $n$ samples of the random variables $X$ and $Y$ to ranks, $rg$. These ranks are integers, where the smallest valued sample is assigned $1$ and the largest is assigned $n$; ranks are assigned separately to each sample's $x$ and $y$ values. If none of the ranks are tied, i.e., all $n$ samples are distinct, $r_s$ takes on the simple form:
\begin{equation}
\label{eq:spearman}
r_s = 1-\frac{6 \sum_i^n (rg(x_i) - rg(y_i))^2}{n(n^2-1)}.
\end{equation}
This quantity can vary from $-1$ to $1$ and measures any monotonic relationship between the variables $X$ and $Y$, or for this specific problem, a rate variation factor, $p_i$, and the final abundance of a selected nuclei. The use of this measure was first proposed in Ref.~\cite{Iliadis_2015}, and it is the method used for the results presented below.
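In practice, $r_s$ is computed with standard library routines; the following sketch uses a toy abundance model to show the idea (\texttt{scipy.stats.spearmanr} also handles tied ranks):
\begin{verbatim}
# Correlating rate variation factors with a toy final abundance.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
n_runs = 2000

p_i = rng.normal(size=n_runs)  # variation factors for one rate
# Toy model: the rate destroys the species, plus unrelated scatter.
final_X = 1e-6 * 1.5 ** (-p_i) * rng.lognormal(0.0, 0.05, n_runs)

r_s, _ = spearmanr(p_i, final_X)
print(f"r_s = {r_s:.2f}")  # strongly negative for this toy model
\end{verbatim}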
The mutual information is a recently proposed method for identifying the correlations in Monte Carlo network calculations \cite{Iliadis_mutual_information_2020}. While $r_s$ is a simple measure for quantifying correlation, its insensitivity to non-monotonic relationships is a serious deficiency for identifying network correlations, where a simple relationship between $p_i$ and a final abundance might not exist (for examples, see Ref.~\cite{Iliadis_2015}). The mutual information between two variables $X$ and $Y$ is given by \cite{Cover_2006}:
\begin{equation}
\label{eq:mutual_information}
MI = \sum_{i} \sum_{j} P(x_i, y_j) \log \bigg[ \frac{P(x_i, y_j)}{P(x_i)P(y_j)} \bigg],
\end{equation}
where $P(x_i)$ and $P(y_j)$ are the marginalized probabilities for the samples and $P(x_i, y_j)$ is the joint probability for the two variables. If $X$ and $Y$ are independent, then $P(X,Y) = P(X)P(Y)$ and $MI = 0$. In this way, the mutual information is sensitive to any dependence of $Y$ on $X$ and vice versa. However, it does come with its own issues. The most pressing is: given a finite set of samples, how do we know $P(X)$, $P(Y)$, and the even more troublesome $P(X,Y)$? The difference between $r_s$ and $MI$ in this regard is that $r_s$ is \textit{nonparametric}, i.e., it does not assume what distribution the observed samples were pulled from, while $MI$ is \textit{parametric}, i.e., it requires the assumption that the observed samples were pulled from a specific probability distribution. The estimation of these distributions, needed to calculate $MI$ reliably, is well outside the scope of this thesis, but more information can be found in Ref.~\cite{verdu_2019}.
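For completeness, a crude histogram-based estimator of Eq.~\ref{eq:mutual_information} is sketched below; the binning is an arbitrary assumption, and far more sophisticated estimators exist \cite{verdu_2019}:
\begin{verbatim}
# Crude histogram-based mutual information estimate.
import numpy as np

def mutual_information(x, y, bins=20):
    p_xy, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy /= p_xy.sum()                     # joint probability P(x, y)
    p_x = p_xy.sum(axis=1, keepdims=True)  # marginal P(x)
    p_y = p_xy.sum(axis=0, keepdims=True)  # marginal P(y)
    mask = p_xy > 0                        # skip empty bins (log 0)
    return np.sum(p_xy[mask] * np.log(p_xy[mask] / (p_x @ p_y)[mask]))
\end{verbatim}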
Regardless of the issues with either $r_s$ or $MI$, they are only a means to an end: they are used to sift through the large number of variables and the volume of data produced by the network calculations, in order to signal which reactions most strongly impact the final abundance of a given element.
Now, with a sampling scheme and a correlation measure in place, the true advantage of Monte Carlo reaction rates can be seen. The samples of $p_i$ for each rate can be examined for correlations with the final abundances of different isotopes; in turn, these correlations signify that a given reaction rate has a strong impact on the final abundances of interest. Since the reaction rate uncertainties come directly from experimental uncertainties, such correlations signal that the currently known experimental information for these important reactions is insufficient, and additional experiments can then be performed.
A demonstration of this method is given in the next section, which details work originally presented in Ref.~\cite{Longland_2018} that uses the Monte Carlo reaction rate formalism to reexamine $^{39}$K$(p, \gamma)$. For this study I performed all of the nucleosynthesis calculations, which show how the new rate dramatically impacts the spread in the predicted amount of K in globular clusters.
\section{The $^{39}$K$(p, \gamma)^{40}$Ca Rate}
The reevaluation of the $^{39}$K$(p, \gamma)^{40}$Ca rate was deemed necessary after the finding that it played an important role in the case of K enrichment in NGC 2419 \cite{dermigny_2017}. Prior to this reevaluation, the rate in STARLIB was based on an unpublished evaluation from 2014. However, it was found that this evaluation overlooked several measurements. By incorporating all available data, Longland et al. \cite{Longland_2018} found that the uncertainties were actually larger than previously thought.
The Monte-Carlo rate was incorporated into the STARLIB library to determine the astrophysical impact of the updated rate. A single-zone model was used for the Monte-Carlo reaction network. This model took initial mass fractions from Ref.~\cite{iliadis_2016}. For reference, the mass fraction is defined by:
\begin{equation}
\label{eq:mass_fraction}
X_i = \frac{N_i M_i}{N_A \rho},
\end{equation}
where $N_i$ is the number of nuclei of species $i$ per unit volume, $M_i$ is the atomic mass, $N_A$ is Avogadro's number, and $\rho$ is the density. Looking at the findings of Refs.~\cite{dermigny_2017, iliadis_2016} (see Fig.~\ref{fig:trajectory}), the observed abundances of all elements up to vanadium can be reproduced for hydrogen burning between $T = 100$ MK, $\rho = 10^8$ g/cm$^3$ and $T = 200$ MK, $\rho = 10^{-4}$ g/cm$^3$. A single representative burning environment with $T = 170$ MK and $\rho = 100$ g/cm$^3$ was selected. As a proxy for the timescale of the burning, the network was run until enough hydrogen was consumed such that it fell from its initial mass fraction, $X(^1 \textnormal{H}) = 0.75$, to $X(^1 \textnormal{H}) = 0.50$.
This network calculation was run $2000$ times with each run sampling every rate in the network, a total of $2373$ reactions up to $^{55}$Cr. All other parameters were held constant except for the rate variations. Two separate Monte-Carlo runs were performed with the first using the STARLIB rates and the second using the reevaluated rate. This was the only change between the two runs. After the $2000$ iterations for both sets of rates, $r_s$ was used to identify correlations between the $p_i$ and final mass fraction of $^{39}$K. The most strongly correlated reactions from both the STARLIB and reevaluated rates are shown in Fig.~\ref{fig:dot_plot_39K}. Each dot represents one run of the network. To highlight the importance of including all of the experimental information into the reaction rate calculation, the updated rate shows a markedly stronger correlation between $^{39}$K$(p, \gamma)$ and the final mass fraction of $^{39}$K. This correlation indicates that the nuclear uncertainties for the $^{39}$K$(p, \gamma)$ rate are large enough to hamper the predictions for a specific astrophysical environment.
Spectroscopic observations of globular clusters are only sensitive to elemental potassium, so the results from these Monte Carlo network calculations must be transformed into elemental potassium before they can be compared with observation.
In order to do this, all stable and long-lived isotopes of potassium must be considered: $^{39}$K, $^{41}$K, and $^{40}$K. Additionally, any unstable nuclei that decay into these potassium isotopes must be accounted for; these are $^{39}$Cl, $^{39}$Ar, $^{41}$Ar, $^{39}$Ca,
$^{41}$Ca, and $^{41}$Sc. All of these nuclei contribute to the observed ratio $[\textnormal{K}/ \textnormal{Fe}]$, which is shorthand for $[\textnormal{K}/ \textnormal{Fe}] = \log_{10} \big( \textnormal{K}/ \textnormal{Fe} \big)_{cluster} - \log_{10} \big( \textnormal{K}/ \textnormal{Fe} \big)_{sun}$. Thus, $[\textnormal{K}/ \textnormal{Fe}] = 1.0$ means that the amount of potassium relative to iron is $10$ times the solar value. The results of this calculation are shown in Fig.~\ref{fig:K_final_abundances}, where they are compared with the previous STARLIB rate and the rate found in REACLIB. The current rate and the previous STARLIB rate both have statistical uncertainties, so the samples from each network run are summarized by a kernel density estimate (KDE) \cite{kde}. It can be seen that the updated rate gives a larger spread in the predicted $[\textnormal{K}/ \textnormal{Fe}]$, further demonstrating the need for a more precise reaction rate.
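The bookkeeping described above amounts to summing isotopic number abundances and converting to bracket notation; a minimal sketch with placeholder numbers:
\begin{verbatim}
# Sketch of the [K/Fe] bookkeeping; all numbers are placeholders.
import numpy as np

# Final abundances from one network run, with progenitors such as
# 39Ca assumed to have decayed into their potassium daughters.
K_total  = 1.2e-7 + 3.0e-9 + 5.0e-10  # 39K + 41K + 40K
Fe_total = 3.0e-6

K_Fe_sun = 0.03  # assumed solar K/Fe ratio (placeholder)
K_Fe = np.log10(K_total / Fe_total) - np.log10(K_Fe_sun)
print(f"[K/Fe] = {K_Fe:.2f}")
\end{verbatim}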
\begin{figure}
\centering
\includegraphics[width=1.0\textwidth]{Chapter-3/figs/pi_comp.pdf}
\caption{These scatter plots show the samples from $2000$ network runs plotted as a function of the rate variation factor $p_i$ and the final mass fraction of $^{39}$K. The three most strongly correlated rates from the previous evaluation and the updated one are shown. It can be seen that the correlation is much stronger for the $^{39}$K$(p, \gamma)$ rate once all of the experimental information has been included in the rate calculation. }
\label{fig:dot_plot_39K}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.6\textwidth]{Chapter-3/figs/Dex_plot.pdf}
\caption{Predicted elemental abundances from the Monte-Carlo reaction network calculation. It can be seen that the updated rate is less peaked than the previous rate. The value from the commonly used REACLIB library is also shown. Note that REACLIB does not include uncertainties, and cannot properly account for the spread caused by the nuclear physics uncertainties.}
\label{fig:K_final_abundances}
\end{figure}
Finally, the reaction rate contribution plot for this rate, shown in Fig.~\ref{fig:K_contribution}, reveals the dominant resonances, and thus the source of the uncertainty at the relevant temperatures. In order for our predictions to improve, constraints must be put on the $337$-keV resonance.
\begin{figure}
\centering
\includegraphics[width=.6\textwidth]{Chapter-3/figs/39K_GraphContribution.pdf}
\caption{Contribution plot for the $^{39}$K$(p, \gamma)$ rate. The rate at $170$ MK is strongly determined by the properties of the $337$-keV resonance.}
\label{fig:K_contribution}
\end{figure}
\section{Summary}
This chapter has outlined the modern approach to reaction rate uncertainties. Using Monte Carlo methods, it was shown that nuclear uncertainties arising from particle partial widths, resonance energies, and resonance strengths can be propagated through the calculation of the reaction rate. Furthermore, the reaction rate library STARLIB allows these reaction rate uncertainties to be propagated through reaction network calculations. This means that at every step of the process, from the measurements in the lab to the predictions of stellar yields, uncertainties can be calculated and their impact properly assessed. The goals stated at the beginning of the chapter have thus been met: we can quantify what we know and plan experiments to improve our predictions.
\chapter{Magnetic Spectroscopy at Triangle Universities Nuclear Laboratory}
\label{chap:tunl}
\section{Introduction}
This chapter details the facilities and equipment located at Triangle Universities Nuclear Laboratory (TUNL). Emphasis will be placed on the Split-pole spectrograph and its focal plane detector, which was used for the transfer measurements presented later. I was involved with the recommissioning of the spectrograph, which began in 2014, and led the work on the construction of the focal plane detector.
\section{TUNL Tandem Lab}
TUNL has three primary experimental facilities: the Tandem lab, the High Intensity Gamma-ray Source (HIGS), and the Laboratory for Experimental Nuclear Astrophysics (LENA). The experimental work detailed in this document was performed at the Tandem lab.
\subsection{Tandem Accelerator}
The tandem accelerator is so called because it uses the \say{tandem} principle to create higher energy beams than a single-ended accelerator would at the same voltage. This is done by holding the central terminal at a high positive voltage, $V_{term}$, and injecting the accelerator with negatively charged ions. An acceleration tube provides a smooth field gradient toward the central terminal, ensuring that ions in a negative charge state, $q$, are accelerated in a controlled way up to an energy of $qV_{term}$. At the center of the high voltage terminal is a thin carbon foil, with those used at TUNL typically having a thickness of $\sim 2 \frac{\mu \textnormal{g}}{\textnormal{cm}^2}$. This carbon foil strips the negative ions of their electrons, and its thickness impacts the amount of energy lost by the beam passing through it. Too thick a foil will degrade the energy resolution of the beam and lessen the lifetime of the foils, while too thin a foil will not ensure the ions reach a charge-state equilibrium, resulting in a smaller intensity of fully stripped ions \cite{SHIMA_2001}. After passing through the stripper foil, the now positively charged ions are repelled by the positive terminal voltage, experience a second phase of acceleration, and finally leave the high energy end of the tandem. Thus, for these two phases of acceleration, a tandem accelerator with terminal voltage $V_{term}$, injected with an ion in a $-1$ charge state that is fully stripped in the terminal, will produce a beam with energy (in units of eV):
\begin{equation}
\label{eq:tandem_energy}
E_{beam} = (Z + 1)V_{term}.
\end{equation}
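As a quick numerical check of Eq.~\ref{eq:tandem_energy}, with an illustrative terminal voltage rather than a quoted TUNL setting:
\begin{verbatim}
# Beam energy for a fully stripped ion from a tandem accelerator.
V_term = 8.0e6  # terminal voltage in volts (hypothetical)
Z = 2           # e.g., a helium beam stripped to 2+ in the terminal

E_beam_eV = (Z + 1) * V_term
print(f"E_beam = {E_beam_eV / 1e6:.1f} MeV")  # 24.0 MeV here
\end{verbatim}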
The TUNL tandem is a High Voltage Engineering Corporation FN tandem with a maximum terminal voltage of 10 MV. The charging system consists of two Pelletron chains \cite{herb_1974}, which are made of metal pellets linked together by non-conductive nylon. The terminal is electrically isolated in a large cylindrical tank that is filled with a combination of CO$_2$, N$_2$, and SF$_6$.
\subsection{Ion Sources}
As emphasized above, a tandem accelerator must be injected with negatively charged ions. At TUNL two negative ion sources exist for hydrogen and helium, respectively. Using these two ion sources, beams of $^1$H, $d$, $^{3}$He, and $^{4}$He can be produced.
Hydrogen beams are created with the off-axis Direct-Extraction-Negative-Ion Source (DENIS). This ion source uses a duoplasmatron to extract negative ions from molecular gases \cite{duo}. The duoplasmatron uses a high voltage cathode to produce electrons. These electrons, in turn, break the molecular bonds of the H$_2$ gas, producing both positive and negative ions. A positively charged extraction electrode draws out the negative ions from the plasma, and accelerates them down the beam line.
Unlike hydrogen, helium gas exists in an atomic form. Uniquely among the noble gases, He$^{-}$ forms a metastable state with a binding energy of $0.077$ eV \cite{he_binding_energy}. This means a negative ion beam of He$^{-}$ can be produced. The TUNL helium-exchange source does this by first using a negatively charged extraction electrode to draw out positive ions from a plasma created by a duoplasmatron. These positive ions then pass through a sodium charge exchange canal: an oven evaporates metallic sodium into a gas and passes it through the canal, and the positive helium beam passes through this gas. Due to the low electron affinity of the gas, it is possible for the helium beam to pick up two additional electrons. Approximately $1 \%$ of the beam passing through the sodium canal will form He$^{-}$ \cite{charge_exchange_fraction}. The negatively charged beam is then accelerated using a positive potential towards the low energy side of the tandem. Beam currents of around $2$ $\mu$A are possible using this method.
\subsection{90-90 Beam Line}
The beam emerging from the tandem after acceleration has an energy that is neither well enough defined nor well enough known for accurate and precise measurements. This problem can be remedied by passing the beam through an analyzing magnet. TUNL has two sets of analyzing magnets: a general purpose dipole that can bend the beam between $20^{\circ} \textnormal{-} 70^{\circ}$, and a set of $90^{\circ}$ dipole magnets for use with experiments requiring high energy resolution. A schematic of the high resolution beam line is shown in Fig.~\ref{fig:the_lab}. To deliver the beam to the Split-pole Spectrograph (SPS), it is passed through the $0^{\circ}$ port of the $20 \textnormal{-} 70$ magnet, which is degaussed in order to prevent any deflection of the beam. A series of magnetic steerers and quadrupoles direct and focus the beam into the $90 \textnormal{-} 90$ system. The beam image is focused by the Q4C quadrupole through the entrance slits, which define the spot size of the beam before energy analysis. Before the first dipole, another quadrupole, Q5, is used to increase the horizontal size of the beam in order to increase the resolving power of the system, while a sextupole, S1, provides higher order aberration corrections \cite{WILKERSON_1983}. After these elements the beam passes through the first $90^{\circ}$ dipole, whose magnetic field is monitored by a precision NMR probe. The energy dispersed beam is then defined further by the center slits. Upon entering the second dipole, the beam is bent again by $90^{\circ}$ and then passed through another sextupole and quadrupole. Small differences between the field strengths of the two dipoles can be corrected by manually adjusting the current on the second dipole via a trim control in order to maximize beam current on the exit slits. This system and its operating principle are shown in Fig.~\ref{fig:90_90_system}. After all of these steps, the beam energy can be accurately determined using the NMR field reading and the known bending radius of the first dipole, which is $40$ inches \cite{9090_disc}. Further stability is provided by using either the center or exit slits to regulate the tandem terminal voltage. Beam current is measured on both of the horizontal slits; if the current on one slit starts to increase, the terminal voltage is automatically increased or decreased by a slit-current feedback system in order to balance the current reading between the slits \cite{WESTERFELDT_1984}. The spread in the beam energy is determined by the widths of the entrance, center, and exit slits. For slit widths of $\Delta x_{entrance} = 1$ mm, $\Delta x_{center} = 0.5$ mm, and $\Delta x_{exit} = 1$ mm, the beam spread, $\Delta E$, is given by $\Delta E \approx \frac{E}{5800}$. This relation scales linearly with the slit widths, so that a setting of $\Delta x_{entrance} = 2$ mm, $\Delta x_{center} = 1.0$ mm, and $\Delta x_{exit} = 2$ mm gives $\Delta E \approx \frac{E}{2900}$. For the $^{23}$Na$(^3 \textnormal{He}, d)^{24}$Mg experiment the slits were set to $2$ mm, $1$ mm, and $2$ mm, which, for a $21$ MeV beam, gives $\Delta E < 10$ keV.
\begin{figure}
\centering
\includegraphics[width=.85\textwidth]{Chapter-4/figs/TUNLModel-Transparancy_with_labels.pdf}
\caption{A $3$D model of the TUNL Tandem lab. The beamline used to deliver beam to the SPS is highlighted with major components labeled.}
\label{fig:the_lab}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.85\textwidth]{Chapter-4/figs/90_90_drawing.pdf}
\caption{A top-down cartoon of the focusing and energy analyzing elements of the $90 \textnormal{-} 90$ system. Trajectories of particles with different energies passing through the two dipoles have been shown in order to demonstrate the operational principle of the center slits. Note that these trajectories are for illustration only, they do not represent ion-optic calculations.}
\label{fig:90_90_system}
\end{figure}
\subsection{SPS Beamline and Target Chamber}
Once the beam is energy analyzed, it is directed to the $15^{\circ}$ SPS beam line via a $70 \textnormal{-} 70$ switching magnet. This beam line uses two sets of magnetic steerers that are controlled via feedback slits to stabilize the beam going through both the $70 \textnormal{-} 70$ magnet and another quadrupole magnet. A steerer is used to fine tune the beam through a $1$ mm collimator on the target ladder. With the $90 \textnormal{-} 90$ slits set as mentioned above, and with the focusing from the beamline quadrupole, nearly $100 \% $ of the energy analyzed beam can be passed through the collimator.
\section{Split-pole Spectrograph}
\label{sec:split_pole}
At the energies used to perform transfer reactions, there are many open reaction channels. As a consequence, any detection system will have to be able to identify the large variety of reaction products that are produced. Magnetic spectrographs separate reaction products based on:
\begin{equation}
\label{eq:magnetic_rigidity}
\rho B = \frac{p}{q},
\end{equation}
where $B$ is the magnetic field of the spectrograph, $\rho$ is the
bending radius, $q$ is the charge of the particle, and $p$ is the particle's momentum. The product $B \rho$ is called the magnetic rigidity. It can be seen from Eq.~\ref{eq:magnetic_rigidity}
that, in the case of a constant charge, a magnetic spectrograph transforms differences of momentum into a spatial separation.
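As a quick illustration of this relation in SI units, for an assumed field and bending radius (nonrelativistic kinematics for simplicity):
\begin{verbatim}
# Kinetic energy of a deuteron from its magnetic rigidity.
q = 1.602176634e-19  # elementary charge, C
m = 3.343583772e-27  # deuteron mass, kg
B = 0.9              # hypothetical field, T
rho = 0.8            # hypothetical bending radius, m

p = q * B * rho                   # momentum, from B*rho = p/q
E_MeV = p**2 / (2 * m) / q / 1e6  # nonrelativistic kinetic energy
print(f"E = {E_MeV:.1f} MeV")     # ~12.4 MeV for these values
\end{verbatim}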
Several quantities need to be defined in order to characterize the capabilities of a magnetic spectrograph. Considering that the separation of particles based on their magnetic rigidity is analogous to an optical lens, ion-optics provides the necessary tools to analyze a given spectrograph's performance. These optics can be expressed in a phase space that consists of the entrance angle $\theta_i$, the exit angle $\theta_f$, the entrance azimuthal angle $\phi_i$, the exit azimuthal angle $\phi_f$, the beam image in the horizontal direction $x_i$, the beam image in the vertical direction $y_i$, the image after the magnetic field $x_f$, the final vertical image $y_f$, and the momentum spread $\delta$, which is defined as:
\begin{equation}
\label{eq:7}
\delta = \frac{\Delta p}{p}.
\end{equation}
This quantity expresses small deviations in momentum, $\Delta p$, relative to a reference momentum, $p$. It should also be noted that $\delta_i = \delta_f$ due to the conservation of momentum. Furthermore, a coordinate system that has the beam direction as $+z$, beam left as $+x$, and up as $+y$ is assumed. For all of these variables the initial and final coordinates can be related by an optical transfer matrix. Of particular interest is the coordinate $x_f$, which dictates the final resolution of the spectrograph. Explicitly writing out all of the matrix elements to second order, we have:
\begin{equation}
\begin{split}
\label{eq:opt_matrix}
x_f = & (x_f|x_i)x_i + (x_f|\theta_i)\theta_i + (x_f|\delta)\delta \\
& +(x_f|x_i,x_i)x_i^2 + (x_f|\theta_i, \theta_i) \theta_i^2 + (x_f| \delta, \delta) \delta^2 + (x_f|y_i, y_i)y_i^2 + (x_f|\phi_i, \phi_i) \phi_i^2 \\
&+ (x_f|x_i, \theta_i)x_i \theta_i + (x_f|x_i, \delta) x_i \delta + (x_f | y_i, \phi_i) y_i \phi_i + (x_f|\theta_i, \delta) \theta_i \delta,
\end{split}
\end{equation}
where the matrix coefficients are equivalent to partial derivatives \cite{enge, opt_matrix}. Azimuthal symmetry eliminates all terms with odd powers of $\phi_i$ and $y_i$. First order focusing is achieved when $(x_f|\theta_i) \approx 0$, while second order double focusing requires $(x_f|\theta_i, \theta_i) \approx 0$.
The first order terms in this expansion have intuitive physical meaning. $(x_f|x_i)$ is the magnification, $M$, which relates the initial and final beam spot sizes. $(x_f|\delta)$ is the dispersion, $D$, which describes the shift in position corresponding to a unit shift in momentum. These two terms determine the first order resolving power, $\mathcal{R}_1 = \delta^{-1}$. A spectrograph is able to resolve particles with different momenta if the dispersion is greater than the peak width, $\Delta x_f$. If the beam spot has a finite size, the magnification of the spectrograph produces a final peak width according to $\Delta x_f = M \Delta x_i$, while the dispersion spreads out particles according to $\Delta x_f = D \delta$.
Setting these two terms equal, we have for the first order resolving power:
\begin{equation}
\label{eq:resolving_power}
\mathcal{R}_1 = \frac{D}{M \Delta x_i}.
\end{equation}
Higher order aberrations will change the right hand side of this expression, but the definition of $\mathcal{R}$ does not change.
The SPS was originally designed by Harold Enge \cite{splitpole}. As implied by the name, the design encloses two dipole magnets within a single field coil. Configuring the magnets in this way creates three distinct magnetic field regions: the field of the first dipole, an intermediate field between the two dipoles, and the field of the second dipole \cite{enge_optics}. These fields create the conditions for second-order double focusing in the horizontal direction and first-order focusing in the vertical direction \cite{enge}. The resolving power of the SPS is $\mathcal{R} = 4500$.
The SPS can accept particles up to $8$ msr, but resolution rapidly degrades for solid angles greater than $2$ msr because of the
$(x_f|\theta^3)$ term \cite{enge}. In terms of the vertical and horizontal acceptance, this translates to $\Delta \theta = \pm 70$ mrad and $\Delta \phi = \pm 40$ mrad. The solid angle on the TUNL SPS is defined using brass apertures located $d = 22.58$ cm away from the target ladder. These apertures can be switched via motor control, and they span a range of solid angles from $0.122$ msr to $5.53$ msr. In order to minimize the resolution loss, we limited our solid angle to $< 1$ msr.
Beam integration is a difficult problem for any spectrograph. A Faraday cup is necessary to properly integrate beam current, but the physical size of these devices can be prohibitive. In particular, the reaction products have to enter the spectrograph, which means that the larger the Faraday cup is, the fewer angles can be measured with the spectrograph. The dimensions of the target chamber are also limited in order to decrease the distance between the target ladder and the entrance aperture. For the TUNL Split-pole, this means that our beam integration is rather crude. The beam is stopped in a rectangular sheet of $1/16$-inch-thick tantalum, with dimensions of $3/8 \times 1/2$ inches. Even with these modest dimensions, the beam stop precludes measurements at spectrograph angles $< 5^{\circ}$. There are two effects that will produce inaccurate current integration from a beam stop such as this:
\begin{enumerate}
\item Electrons from the target showering the beam stop.
\item Electrons in the beam stop being boiled off due to the beam.
\end{enumerate}
Item 2 can be remedied by applying a positive potential to the beam stop in order to reabsorb the emitted electrons. For the TUNL SPS, a $300$ V battery provides this potential: the positive terminal of the battery is attached to the beam stop and the negative terminal is attached to the beam charge integrator (BCI). Though not implemented during the experiments for this thesis, it has been shown that rare-earth permanent magnets attached to the target ladder can minimize the effects of item 1 \cite{wrede_thesis}.
\section{Focal Plane Detector and Electronics}
Full details of the focal plane detector and construction can be
found in Ref.~\cite{marshall_2018}, and this section reproduces portions of that work.
As seen in Fig.~\ref{fig:enge_overview}, particles passing through the SPS are focused to a point. The focal points for particles with different magnetic rigidities form a dispersive image of the target on a plane. This focal plane is curved and lies at a $41.5^{\circ}$ angle to the exit of the magnet. A high resolution, position sensitive detector located at this plane is needed in order to determine the magnetic rigidity, and thus energy, of the reaction products. This detector must also be able to distinguish between the various species of particles that have magnetic rigidities similar to those of the particles of interest.
Efforts to construct a focal plane detector capable of providing high spatial resolution and particle identification started in the summer of 2014, when a non-functioning but completely fabricated detector was located at TUNL. The detector was an updated version of the one originally presented in Ref.~\cite{hale_thesis}. This design uses two position sensitive avalanche counters, a $\Delta E$ proportional counter, and a plastic scintillator, all integrated into one assembly, which is shown in Fig.~\ref{fig:det_cross}. Extensive work was required to return the detector to working order, and the first experiments to characterize it began in 2015.
\begin{figure}
\centering
\includegraphics[width=.75\textwidth]{Chapter-4/figs/Figure1.pdf}
\caption{Top-down cross section of the SPS. The two dipoles enclosed in a single field coil produce a magnetic field that spatially separates particles based on their momentum to charge ratio. A position sensitive detector located at the curved focal plane of the spectrograph measures the bending radius of the reaction products.}
\label{fig:enge_overview}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.75\textwidth]{Chapter-4/figs/ENGEWholeDetectorCrossSection.pdf}
\caption{Side view of the cross section of the focal plane detector. The red arrow represents the direction of incident particles and
the black ovals show the locations of the o-ring seals. The approximate locations of the anode wires throughout the detector are indicated by the white circles. The gas filled regions are indicated by light yellow shading. Though not indicated in the figure, the length of the detector is $71.12$ cm.}
\label{fig:det_cross}
\end{figure}
\subsection{Position Section}
Position measurements of particles leaving the high magnetic field region of the SPS are performed by two position sensitive avalanche counters. The positions are measured near the entrance of the detector and again before the particles are stopped in the total energy scintillator. The position-sensitive avalanche counters operate as follows, and are
represented pictorially in Fig.~\ref{fig:ion}. Five high voltage anode wires are located within each counter between two cathode foils made of aluminized Mylar. These counters sit inside the detector chassis, which is pressurized to 200 Torr with circulating isobutane. The pressurized environment is isolated from the high vacuum of the spectrograph with a $12.7\textnormal{-}\mu$m-thick Kapton entrance window. Charged particles that enter the detector will ionize the isobutane as they pass through the detector volume. If an ionization event occurs within the electric field of the counters, electrons will be rapidly accelerated towards the positively charged anode wires, setting off a series of secondary ionization events and thereby creating an electron avalanche \cite{knoll}. This avalanche is negatively charged and localized around the particle's position as it passes the anode. The cloud of negative charge induces a positive charge on both of the cathode foils. The foil closer to the entrance of the detector is electroetched \cite{etch}, as described in detail below. Etching creates electrically isolated strips that are connected together via a delay line. Thus, as charge is carried out of the detector from each strip, it is exposed to a different amount of delay, and the timing difference between the two sides of the detector gives a relative measurement of position. If the charge were only distributed over one strip, the position resolution would be limited to the strip width. However, distributing the charge over multiple strips allows an interpolation of the composite signal, thereby improving the spatial resolution to the sub-millimeter level. As the particle exits the position sections, it passes through the grounded cathode foil, which helps shape and isolate the electric field from the anodes.
Position sensitive avalanche counters are commonly designed to have
pickup strips parallel to the incident particle path \cite{msu,heavy,parikh, argonne_det}; however, the etched foils in the TUNL detector sit
perpendicular to the particle path. This type of setup has also been
used in the focal plane detector for the decommissioned Q3D spectrograph at the Maier-Leibnitz Laboratory
\cite{vert} and the decommissioned Q3D at Brookhaven National Laboratory \cite{BNLDetector}.
These designs have been shown to have excellent position resolution.
Additionally, if cathode foils are used, the number of wires required can be drastically reduced, thus easing maintenance of the
detector. However, these designs are not suited to heavy-ion reactions, where
the cathode foils would provide additional scattering surfaces that degrade mass resolution.
The effects of these foils on the energy loss of light particles were examined via GEANT4 simulations in Ref.~\cite{marshall_2018} and found to have a negligible impact on the position resolution.
The methods used to fabricate the position section assemblies are discussed below with particular attention paid towards the etched cathode planes, delay line, and timing electronics.
\begin{figure}
\centering
\includegraphics[width=.6\textwidth,angle=90]{Chapter-4/figs/ionization.pdf}
\caption{Cartoon of the principles of operation. When a charged particle enters the detector, ionization occurs on the fill gas, and an induced charge is created on the etched and grounded foils.}
\label{fig:ion}
\end{figure}
\subsubsection{Electroetching Technique}
The design of the position sections is critically reliant on having precisely etched cathode foils. These foils should have evenly spaced, electrically isolated
strips, which necessitates a process to remove the aluminum coating from the Mylar foils. Ref.~\cite{etch} found that electric discharge etching produces clean, uniform lines. Electroetching uses a stylus held at a fixed voltage; bringing the stylus into contact with the aluminum surface produces an electric discharge that removes the aluminum coating. It should be noted that chemical etching with sodium hydroxide has also been shown to work (for example, see Ref.~\cite{vert}), but difficulties arise with the precise application of the sodium hydroxide and the cleaning of the reaction products. Considering these complications, electroetching was chosen to create the cathode foils. Using the TUNL facilities, this technique was found to reliably produce etched foils in less than a day, which reduces the time and effort required for routine maintenance. Each strip is $2.54$ mm wide, with each etched line being $0.03$ mm wide. The strips are etched on $0.3\textnormal{-}\mu$m-thick single sided aluminized Mylar.
Our particular setup consists of a tungsten tipped stylus attached to a copper assembly pictured in Fig.~\ref{fig:etch}. The etching is performed using a milling machine programmed with G-code; a nylon covering rod isolates the copper rod from the spindle of the machine. During the machining process, it is of vital importance to keep good electrical contact between the Mylar and the stylus tip. To ensure this condition, several steps were taken. First, the stylus arm was attached to its base with a pivot, a design that allows the tip to follow the natural curvature of the Mylar. Second, the Mylar is carefully clamped to the milling table with two $5.08\, \textnormal{cm} \times 5.08\, \textnormal{cm}$ grounded metallic plates, which were found to provide proper grounding throughout the etching process. Finally, periodic sanding of the tungsten tip was found to be necessary to prevent aluminum buildup. The tip itself was held at $-15$ V during the process; this voltage was found to produce clean lines while reducing the possibility of damaging sparks. Ref.~\cite{etch} found that negative polarity produced cleaner lines when examined under an electron microscope.
\begin{figure}
\centering
\includegraphics[scale=.7]{Chapter-4/figs/MylarEtchingApparatus.pdf}
\caption{Drawing of the etching apparatus. The biased stylus tip is allowed to follow the curvature of the Mylar thanks to the pivoting copper arm. The plastic housing insulates the milling machine from the biasing potential. The dashed lines indicate threaded holes for screws.}
\label{fig:etch}
\end{figure}
\subsubsection{Delay Lines}
The delay line consists of 20 delay chips with 10 taps per chip. Each tap provides $5$ ns of delay, making the total delay across the line $1\, \mu$s.
Copper plated G-10 boards were machined to align the copper pickup strips with the etched cathode strips, creating the necessary electrical contact between the etched strips and the legs of the delay chips. The legs are attached via pin inserts on the back of the G-10 boards. The chips, Data Delay Devices 1507-50A \cite{chips}, have a 50 $\Omega$ impedance, which matches that of the signal cables.
Weldable BNC feedthroughs attached to NPT threads provide a vacuum-tight method for connection to the delay line signal cables. It must also be noted that the error in delay per tap is quite high at $\pm 1.5$ ns, which could lead to non-linearity in the delay-to-position conversion \cite{msu}.
Following the suggestions in Ref.~\cite{widths}, this effect is minimized by ensuring the ratio of the cathode strip width (2.54 mm) and the distance between the anode and the cathode (3.00 mm) is around $0.8$.
\subsubsection{Position Section Assembly}
The delay line, cathode {foil}, anode wires, and grounded {foil} are all housed in the position section assembly shown in Fig. \ref{fig:pos}. Four metallic screws bring the copper plated top into electrical contact with the detector body, which is grounded. Plastic screws ensure proper contact between the cathode {foil} and the delay line, while maintaining electrical isolation with the rest of the board.
Five gold plated tungsten wires $25 \, \mu$m in diameter are used for the anodes. The wire spacing is $4$ mm, and they are surrounded by the cathode foils. These foils are made of $0.3\textnormal{-}\mu $m-thick aluminized Mylar purchased from Goodfellow Cambridge Ltd., single-sided for the etched cathode foils and double-sided for the grounded cathodes. The Mylar sheets are secured to both the detector and position assemblies using double-sided tape. Because the tape is an insulator, care was taken to ensure good electrical connection between the detector and the grounding foils.
An accurate measurement of the charged particles' position requires a well localized electron avalanche. This requirement means that the position sections must be operated at a higher voltage than would be required of a proportional counter \cite{knoll}. In order to prevent sparking and allow voltages of ${\sim} 2000\ V$, insulating acrylic coating is applied to high risk areas and $1\ \textnormal{N}$ of tension is applied to the wires. The tension is necessary to keep the wires straight, which ensures the electric field is uniform and further reduces sparking. Isobutane was chosen as the fill gas, following the suggestions of Ref.~\cite{gas}.
\begin{figure}
\centering
\includegraphics[width=.6\textwidth]{Chapter-4/figs/PositionDrawing.pdf}
\caption{A model of the position section assemblies. From front to back we have: the copper plated front plate to which the etched cathode foil is taped, the copper
plated G-10 board with pickup strips and mounts for the delay line chips, the anode wire plane board, and the back board to which the grounded plane is taped.
{The expanded region shows the copper pickup strips that make contact with the cathode foil. The delay line is attached to the back via pin inserts that are
at the top of each strip.}}
\label{fig:pos}
\end{figure}
\subsubsection{Position Section Electronics}
Signal paths are visualized in Fig. \ref{fig:elec}.
Hereafter, signals will be referred to based on whether they exit the detector on the side corresponding to a high value of the magnetic rigidity (high energy) or a low one (low energy).
Inside the focal plane chamber, each of the four position signals is sent through a fast timing preamplifier. After preamplification, the signals are processed through an Ortec 863 quad Timing Filter Amplifier (TFA). The final shaping and noise rejection before our timing analysis is provided by a Constant Fraction Discriminator (CFD). Thresholds on each of the channels are adjusted to match the output levels of the TFA, which are on the order of $300$ mV. After the final signal shaping, the signals from the high energy end of the detector are used to start an Ortec 567 Time to Amplitude Converter (TAC), while the low energy signals are all subject to a $1\ \mu \textnormal{s}$ delay and used to stop the TAC. This delay ensures that the stop signal always occurs after the start signal for a real event. The output from the TAC is sent into a CAEN V785 peak sensing Analog-to-Digital Converter (ADC), so that it can be recorded and later analyzed.
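To make the delay-to-position conversion concrete, the following minimal Python sketch shows the arithmetic implied by this start/stop scheme. It is an illustration only: it assumes the induced charge propagates to both ends of the delay line, neglects fixed cable delays, and takes the TAC output as already calibrated to nanoseconds; all names are hypothetical rather than taken from the analysis code.
\begin{verbatim}
TAP_DELAY_NS = 5.0       # delay per tap, one tap per strip
N_TAPS = 200             # 20 chips x 10 taps
STOP_DELAY_NS = 1000.0   # fixed 1 us delay on the low-energy stop
STRIP_PITCH_MM = 2.54    # etched strip width

def position_mm(tac_time_ns):
    """The TAC measures (t_low + STOP_DELAY) - t_high.  For an
    avalanche at strip k, t_high = k*tau and t_low = (N_TAPS - k)*tau,
    so the strip index follows from the measured time."""
    dt = tac_time_ns - STOP_DELAY_NS
    k = (N_TAPS - dt / TAP_DELAY_NS) / 2.0
    return k * STRIP_PITCH_MM
\end{verbatim}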
\begin{figure}
\centering
\includegraphics[width=.95\textwidth]{Chapter-4/figs/electronics.pdf}
\caption{Diagram of the focal plane detector related electronics. Black lines show the flow of waveforms to a peak sensing ADC.}
\label{fig:elec}
\end{figure}
\subsection{$\Delta E $ Section}
\subsubsection{$\Delta E $ Assembly}
The $\Delta E$ section of the detector is a gas proportional counter, which consists of a single $12.7\textnormal{-} \mu$m-diameter anode wire and two grounded cathode planes.
The front cathode plane is the other side of the front position section's grounded cathode plane. The back plane is another strip of aluminized Mylar.
This Mylar plane is taped directly onto the detector body and checked for proper electrical contact. Due to the low breaking tension of the anode wire, it is held taut by hand and soldered onto NPT threaded feedthroughs. The wire is biased to $1000$ V to ensure that the charge collection is proportional to the energy loss of the particle.
\subsubsection{$\Delta E$ Electronics}
The $\Delta E$ section of the detector's signal is processed with an in-house charge sensitive preamplifier based on the Cremat CR-110 operational amplifier, which provides a $1\ \mu s$ shaping time \cite{cremat}. After the preamplifier, an Ortec $572$A amplifier is used to shape the signal before it is sent to the ADC.
\subsection{Residual Energy Section}
\subsubsection{Paddle Scintillator}
\label{sec:scint_paddle}
Particles are stopped, and residual energy deposited, in a Saint-Gobain BC-404 organic plastic scintillator. The BC-404 is sensitive to $\alpha$ and $\beta$ radiation, and is recommended for fast timing \cite{gobain}. The timing response makes it an ideal trigger for the current data acquisition system and planned $\gamma$-ray coincidence measurements. The dimensions are $71.76$ cm long by $5.08$ cm wide by $0.64$ cm thick. These dimensions are customized to cover the length of the detector and ensure all light particles will stop within the active volume. In order to maximize the amount of light collected along the entire length of the scintillator, it is wrapped in thin, reflective aluminum foil and Tyvek. Reference \cite{wrapping} demonstrated that Tyvek gives an increased light output compared to the aluminum wrapping; however, Tyvek was unable to create a seal on the high pressure section of the detector, so a single sheet of aluminum foil was used on the sealing surface.
\subsubsection{Optical Fibers}
Early iterations of the $E$ section used a light guide to couple the paddle scintillator
to the Photomultiplier Tube (PMT); however, this design added significant weight and length to
the detector. To avoid the rigid constraints of light guides, optical fibers were chosen to gather and transmit light to the PMT.
The fibers are $1$-mm-diameter Bicron BCF-91A, which shift the wavelength of the violet/blue scintillated light ($380-495\ \textnormal{nm}$) of the BC-404 into the green spectrum ($495-570\ \textnormal{nm}$) \cite{fibers}.
Following the suggestions of Ref. \cite{wrapping}, the optical fibers were spaced $5\ \textnormal{mm}$ apart
to maximize light collection. Eight $1$-mm-deep grooves were machined in the scintillator to hold the fibers.
The fibers were secured in place with BC-600 optical cement.
A light tight tube is used to bend the fibers to the PMT that sits on the top of the detector.
\subsubsection{Photomultiplier Tube}
Matching the light emitted by the optical fibers while maintaining a compact package were the main requirements for the PMT.
The Hamamatsu H6524 has a spectral response of $300-650\ \textnormal{nm}$, a peak sensitivity of $420\ \textnormal{nm}$, and
a quantum efficiency of $27\%$ \cite{hamamatsu}. These features provide the highest quantum efficiency available for the wavelengths of interest.
The 10-stage dynode structure provides a gain of $1.7 \times 10^6$ with an anode bias of $-1500$ V.
Although the detector is located outside the high magnetic field region of the SPS, a magnetic shield was incorporated into the tube assembly to prevent possible interference.
\subsection{{$E$ Electronics and Event Structure}}
The dynode signal from the PMT is split to provide both timing and energy information. {Energy signals are processed through
an Ortec $572$A amplifier and then recorded.} Timing signals go through a TFA and a CFD to generate an event count.
A count from the $E$ detector triggers the master gate for the data acquisition system, which is vetoed if the ADC buffer is full.
If a trigger is not vetoed, a $10\textnormal{-} \mu$s gate is generated, and the ADC records all
coincident signals. Using the $E$ signal to generate the ADC gate, as opposed to the position sections, avoids introducing a position dependent gate timing due to the delay lines.
Count rates are recorded for all detector signals, gates generated, and gates vetoed by the ADC busy signal. This setup allows us to easily diagnose electronic problems and adjust beam current to keep the acquisition dead time low ($<\!10\%$ with typical rates being $<\!5\%$).
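As a small illustration of how these scaler counts are used, the dead-time fraction follows from simple counting. The sketch below uses hypothetical names and assumes the generated and vetoed gate scalers together account for every gate request; it is not taken from the acquisition code.
\begin{verbatim}
def dead_time_fraction(gates_generated, gates_vetoed):
    """Fraction of gate requests lost while the ADC was busy; the
    live fraction is 1 minus this value.  Assumes the two scalers
    together account for every gate request."""
    requested = gates_generated + gates_vetoed
    return gates_vetoed / requested if requested else 0.0
\end{verbatim}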
\subsection{Improvement of the Delay Line}
The Data Delay Devices 1507-50A delay chips used in the initial design of the detector were found to be unsatisfactory for accurate work due to high differential non-linearity. It was determined that the likely cause was the high tap-to-tap variation of $5 \pm 1.5$ ns. A testing procedure was devised in which generated signals would be sent through each tap one at a time. These signals were processed through the same electronics as the real signals; thus, this testing procedure is only sensitive to issues in the delay line or the electronics. Fig.~\ref{fig:delay_chip_comp} shows residuals for the linear fit of these $200$ strips as a fraction of a strip. The purple points are the 1507-50A chips, and it can be seen that there is a large, systematic non-linearity in the measured delay time, which can result in a signal's location being erroneously displaced by half a strip. From these results, it was decided to purchase a new set of chips. The Allen Avionics 50P chips were selected due to having a $5.0 \pm 0.8$ ns tolerance \cite{allen_chips}. After installation, the delay line was tested again with the same tap-by-tap procedure. The results are shown as orange points in Fig.~\ref{fig:delay_chip_comp}, and show a significant improvement over the previous chips.
\begin{figure}
\centering
\includegraphics[width=.75\textwidth]{Chapter-4/figs/Front_comp.pdf}
\caption{Residuals from the linear fit of the strip number to the measured strip position. The residuals are expressed in terms of a fraction of a strip, i.e., the residuals have been divided by the slope of the fit. The Data Delay Devices 1507-50A chips are shown in purple, while their replacements, the Allen Avionics 50P chips, are shown in orange.}
\label{fig:delay_chip_comp}
\end{figure}
\subsection{Improvement of the Scintillator}
After a series of initial experiments, including the $^{23}$Na$(^{3} \textnormal{He}, d)^{24}$Mg experiment detailed in this document, an additional set of characterization runs was carried out. These runs used deuteron elastic scattering off carbon to measure the relative efficiency of the detector as a function of focal plane position. This was done by changing the magnetic field of the SPS to move the elastic scattering peak from one side of the detector to the other. The magnetic field ranged from $B = 0.86 \textnormal{-} 1.02$ T in steps of $\Delta B = 0.02$ T, or roughly $400$ ADC channels. A severe linear dependence between position and efficiency was found, with the low energy side being the most efficient. This issue was traced back to the low light collection efficiency of the optical fibers.
In order to boost the efficiency, it was decided to return to the previous design. A scintillator paddle of the same type and dimensions described in Section~\ref{sec:scint_paddle} was coupled with optical cement to a light guide that adapts to a $2$-inch-diameter PMT. Optical grease was used on the interface between the light guide and the photocathode. The PMT is a Thorn EMI Electron Tubes 9813B with $30 \%$ quantum efficiency, a spectral response of $290 \textnormal{-} 630$ nm, and a peak sensitivity of $\approx 360$ nm \cite{emi_tube}.
The maximum bias is $-3000$ V, but sufficient gain was found at an operational voltage of $-2500$ V. The light guide and PMT were wrapped in vinyl tape to prevent light leaks.
After installation, the efficiency experiments were repeated. The strength of the scintillator signal still exhibited a linear dependence, but there was no longer a loss in detection efficiency across the focal plane.
\subsection{Kinematic Corrections}
The momenta of the reaction products entering a spectrograph depend on the entrance angle $\theta_i$. This dependence will greatly degrade the resolution at the focal plane if no corrective action is taken \cite{enge, enge_optics, opt_matrix}.
This so-called kinematic broadening can be introduced into the optical matrix formalism from Section~\ref{sec:split_pole} by Taylor expanding $\delta$ around $\theta_i$ giving:
\begin{equation}
\label{eq:kin_taylor_expansion}
\delta(\theta_i) = \delta_0+\frac{\partial \delta}{\partial \theta_i}\theta_i = \delta_0 - K \theta_i ,
\end{equation}
where the kinematic factor $K$ is defined as the change in the momentum shift with a change in angle. The sign of $K$ is dependent upon both the direction the spectrograph is rotated and the reaction kinematics. For the TUNL SPS, the reaction angle is with respect to beam left; thus, $\Delta \theta_i$ is positive, and normal kinematics are used, meaning the outgoing particles have a lower momentum with increasing angle, leading to a negative sign on $K$.
In order to correct for this effect, the dependence of $x_f$ on $\theta_i$ must be removed. This can be done by displacing the detector in the $z$ direction. A change $\Delta z$ will introduce a dependence on $\theta_f$ into $x_f$. Two additional first order terms with $\theta_i$ will now be present in the optical matrix, which can be used to compensate for kinematic broadening since $z$ can be controlled by the experimenter. The expression for $x_f$ is now:
\begin{equation}
\begin{split}
x_f & = (x_f|x_i)x_i + (x_f|\delta) \delta_0-K(x_f|\delta)\theta_i + \Delta z (\theta_f|\theta_i)\theta_i + \\
& \Delta z (\theta_f|x_i) x_i + \Delta z (\theta_f|\delta) \delta_0 - K \Delta z (\theta_f|\delta) \theta_i.
\end{split}{}
\end{equation}
Setting the $\theta_i$ terms equal to zero and solving for $\Delta z$ gives:
\begin{equation}
\label{eq:10}
\Delta z = \frac{KDM}{1-KM(\theta_f|\delta)} \approx KDM.
\end{equation}
The linear approximation is valid when $K$ is relatively small, so that the denominator is close to unity. While $M$ and $D$ can be calculated theoretically, it was decided to find an empirical fit between $K$ and $\Delta z$ in order to ensure maximum resolution.
\begin{figure}
\centering
\includegraphics[]{Chapter-4/figs/3_peaks.pdf}
\caption{Example proton spectrum when the detector is off the focal plane with a 3-slit aperture. The data are from $^{12}$C$+$p elastic scattering at $\theta_{\textnormal{Lab}}=20^{\circ}$ and $E_{\textnormal{Lab}} = 12$ MeV.
The different peak intensities reflect the rapid variation of the cross section with the detection angle.}
\label{fig:peaks}
\end{figure}
\begin{figure}
\centering
\includegraphics[]{Chapter-4/figs/au_fit.pdf}
\caption{An example of measured peak centroids and the linear fit with respect to the $z$ position of the detector read in terms of a stepper motor voltage. Data are from the elastic scattering of protons off $^{197}\!$Au at $\theta_{\textnormal{Lab}} = 30^{\circ}$ and $E_{\textnormal{Lab}} = 12$ MeV.}
\label{fig:au}
\end{figure}
The empirical method fits the linear relationship between $\Delta z$ and $K$, both of which are known independently of the ion optics of the spectrometer. $\Delta z$ is just a relative change in the position of the focal plane detector. For this work, it was taken to be the distance from the exit of the magnet to the front face of the detector. $K$ can be determined for a given reaction using energy and momentum conservation, giving the formula \cite{enge}:
\begin{equation}
\label{eq:2}
K = \frac{(M_bM_eE_b/E_e)^{1/2} \sin \theta}{M_e+M_r-(M_bM_eE_b/E_e)^{1/2} \cos \theta} ,
\end{equation}
\noindent where the subscripts $e$, $b$, and $r$ refer to the ejectile, beam, and residual particles, respectively; $M$ denotes a particle's mass, while $E$ denotes its kinetic energy.
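Because Eqs.~\ref{eq:2} and \ref{eq:10} are evaluated repeatedly when planning a measurement, a short Python sketch of both is given below. All names are illustrative; masses and kinetic energies may be supplied in any consistent units, since only their ratio enters, the angle is in radians, and the dispersion $D$ and magnification $M$ must come from the spectrograph optics.
\begin{verbatim}
import numpy as np

def kinematic_factor(M_b, M_e, M_r, E_b, E_e, theta):
    """K from Eq. (eq:2); theta is the lab angle in radians.
    Masses and kinetic energies may be in any consistent units,
    since only their ratio enters.  Sign conventions follow the
    discussion in the text."""
    root = np.sqrt(M_b * M_e * E_b / E_e)
    return root * np.sin(theta) / (M_e + M_r - root * np.cos(theta))

def delta_z(K, D, M, theta_f_delta=0.0):
    """Detector displacement from Eq. (eq:10).  With the default
    (theta_f|delta) = 0, this reduces to the approximation K*D*M."""
    return K * D * M / (1.0 - K * M * theta_f_delta)
\end{verbatim}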
One possible method for finding an optimal $z$ for a given $K$ is described in Ref.~\cite{parikh}. Using this method, the optimal $z$ position is found by moving the detector through the focal chamber and minimizing the width of a chosen peak; however, this method does not give much feedback during the run, as peak width can be hard to determine without careful peak fitting.
Instead, it was decided to carry out a series of experiments using C and Au targets to probe several $K$ values. A beam of $10$-MeV protons was impinged on the targets. Elastic scattering of these protons at $30^{\circ}$ was measured using a three-slit entrance aperture. This aperture serves to discretize the acceptance solid angle into three narrow ranges of $\theta$. When the detector is off the focal plane, three particle groups will be observed as shown in Fig.~\ref{fig:peaks}. When the detector is on the focal plane, these groups should converge; thus, the detector is swept across the depth of the focal plane chamber and a linear fit of the accompanying peak positions is found, as shown in Fig.~\ref{fig:au}. The detector position is inferred based on a voltage reading from two motors, which displace it along the $z$ direction. There is considerable uncertainty on our fit between $K$ and $z$, but the relationship only needs to be approximately known to effectively compensate for kinematic broadening.
\section{Monitor Detector and Electronics}
Performing the $^{23}$Na$(^{3} \textnormal{He}, d)^{24}$Mg transfer reaction is complicated by the hygroscopic nature of the target material. Furthermore, as discussed in Section~\ref{sec:split_pole} electrons from the target could impact our integrated charge. For these reasons, it was decided to use a monitor detector to look for target degradation and to provide a relative normalization of the transfer cross sections.
The monitor detector was a $\Delta E/E$ telescope, consisting of two surface barrier silicon detectors. The $\Delta E$ detector has a thickness of $150$ $\mu$m, and the $E$ has a thickness of $2000$ $\mu$m. A stand was designed to hold both of these detectors as well as solid angle defining apertures. These apertures are made of brass and can be swapped out to change the solid angle. Two permanent magnets were attached at the front of the holder to deflect electrons away from the detectors.
The electronics for these detectors are shown in Fig.~\ref{fig:electronics_si}.
A Mesytec MSI-8 charge sensitive amplifier with built-in timing and shaping amplification was used. The timing signals from both detectors are sent into a CFD. In order to reduce background counts, a coincidence module is placed after the CFD signals, and requires a coincidence between the $\Delta E$ and $E$ detectors. If a coincidence is present, the ADC gate is opened and the output from the shaping amplifiers is recorded. A pulser is also fed through these electronics to verify the ADC dead time and to monitor for possible electronic noise.
\begin{figure}
\centering
\includegraphics[width=.95\textwidth]{Chapter-4/figs/electronics_si.pdf}
\caption{The electronics for the silicon telescope.}
\label{fig:electronics_si}
\end{figure}
\section{Summary}
This chapter has laid out the technical details of the equipment used to carry out the experimental work presented in this thesis. As shown, measuring transfer reactions with high resolution requires a delicate interplay between the ion source, accelerator, beamline optical elements, magnetic spectrograph, and detector systems.
\chapter{Bayesian Methods For Transfer Reactions}
\label{chap:bay_dwba}
\section{Introduction}
This chapter will describe the novel Bayesian methods developed to analyze the experimental data from transfer reactions. As shown in Chapter~\ref{chap:nuclear_unc}, it is critical to properly assess the nuclear uncertainties from experiments in order to know the limits of our astrophysical predictions. Transfer reactions, and in particular transfer reactions performed with magnetic spectrographs, come with their own unique uncertainties. The first part of this chapter introduces a Bayesian model for energy calibration that properly accounts for the statistical uncertainties present in both the dependent and independent variables. The second portion of this chapter is devoted to incorporating the uncertainties arising from optical model parameters into DWBA, thereby accounting for these effects when assigning $\ell$ values (the transferred angular momentum) and extracting spectroscopic factors. The energy calibration method was first presented in Ref.~\cite{marshall_2018}, but has since been updated. The Bayesian DWBA method was first presented in Ref.~\cite{Marshall_2020}. Both represent my efforts to develop some of the first analysis methods of their kind for transfer reaction data.
\section{Bayesian Statistics}
The problems that are addressed in this chapter are, in the most abstract sense, based around the concept of estimating parameters of a model from data that are subject to statistical uncertainties.
Bayesian statistics is a formulation of statistics that uses Bayes' theorem to update prior probability distributions of these parameters based on experimental data, $\mathbf{D}$, allowing experiments to improve our knowledge of these parameters. There are important distinctions between Bayesian methods and those of frequentist statistics, but a full discussion of this topic is well outside the scope of this thesis.
While Bayesian statistics requires additional assumptions, it is centered around Bayes' theorem, which itself is simply a logical expression relating conditional probabilities, independent of statistical interpretations. A conditional probability between two variables $X$ and $Y$ is the probability of $X$ occurring given that $Y$ has occurred, and is symbolized by $P(X|Y)$. It is defined as the joint probability of $X$ and $Y$, i.e., the probability of both $X$ and $Y$ having occurred, divided by the probability of $Y$. This definition is symbolized as:
\begin{equation}
\label{eq:cond_probs}
P(X|Y) = \frac{P(X , Y)}{P(Y)}.
\end{equation}
Bayes' theorem follows from the equality $P(X , Y) = P(Y , X)$. Thus, the conditional probability of $P(X|Y)$ can be written:
\begin{equation}
\label{eq:simple_bayes}
P(X|Y) = \frac{P(Y|X)P(X)}{P(Y)}.
\end{equation}
The goal for any quantitative experiment is to collect data, compare this data to a model, and finally make predictions using the parameters of the model. Bayesian statistics assumes that these model parameters, $\boldsymbol{\theta}$, can be represented by probability distributions. Thus, the data we observe in the lab, $\mathbf{D}$, are used to update these \textit{prior} probabilities for the model parameters through Eq.~\ref{eq:simple_bayes}. Reformulated into this framework, Bayes' theorem reads:
\begin{equation}
\label{eq:bayes_theorem}
P(\boldsymbol{\theta}|\mathbf{D}) = \frac{P(\mathbf{D}|\boldsymbol{\theta}) P(\boldsymbol{\theta})}
{P(\mathbf{D})},
\end{equation}
where $P(\boldsymbol{\theta})$ are the prior probability distributions of the model parameters,
$P(\mathbf{D}|\boldsymbol{\theta})$ is the likelihood function, $P(\mathbf{D})$ is the evidence, and $P(\boldsymbol{\theta}|\mathbf{D})$ is the posterior \cite{bayes}. Expressing these terms in a more informal way: the priors are what we believe about the model parameters before the data are measured, the likelihood is the probability of the observed data given a set of model parameters, the evidence is the probability of the observed data, and the posterior is what we know about the model parameters after the data have been observed. In Bayesian statistics the general goal is to use the right hand side of the equation to calculate the posterior.
As will be shown, this is no simple task, and the problem has only become tractable because of the increased amount of computation power available in the last $40$ or so years. In particular, the evidence is only calculable if it is expanded into the integral:
\begin{equation}
\label{eq:evidence_integral}
P(\mathbf{D}) = \int_{\boldsymbol{\theta}} P(\boldsymbol{D}|\boldsymbol{\theta})P(\boldsymbol{\theta}) d \boldsymbol{\theta},
\end{equation}
which follows from Eq.~\ref{eq:cond_probs}. The dimension of this integral equals the number of parameters being integrated over. Once the integral is carried out, it can be seen that the evidence is just a number; more specifically, it represents a normalization constant that ensures the posterior is a properly normalized probability distribution. Further complications arise in Bayesian models with multiple parameters. In this case, Bayes' theorem will yield the posterior joint probability distribution for all of the model parameters. Gaining posterior distributions for each parameter (i.e., knowing the uncertainties on each model parameter) independent of the other parameters requires that these other parameters be \textit{marginalized} over. For example, take a two-parameter model with parameters $a$ and $b$. Bayes' theorem for this expression is:
\begin{equation}
\label{eq:bayes_2_parameters}
P(a, b|\mathbf{D}) = \frac{P(\mathbf{D}|a, b)P(a, b)}{P(\mathbf{D})}.
\end{equation}
However, what is typically desired from a statistical analysis are the posteriors $P(a|\mathbf{D})$ and $P(b|\mathbf{D})$. The conditional probability $P(a, b| \mathbf{D})$ is equivalent to $P(a , b , \mathbf{D})/P(\mathbf{D})$, so marginalization over $a$ will yield $P(b|\mathbf{D}) = \int P(a, b|\mathbf{D}) da$. The mathematical difficulties the evidence and marginalization integrals present can frequently only be solved using numerical methods, as is the case for the two applications presented here.
\section{Markov Chain Monte Carlo}
\label{sec:bay_energy_cal}
Instead of solving the above integrals numerically, the goal will be to draw samples directly from the marginalized posterior distributions. This is related to the issues encountered when trying to propagate uncertainties through reaction rate calculations as discussed throughout Chapter~\ref{chap:nuclear_unc}, and the solution is similar in nature. The goal is to draw representative samples from the posterior, but in this case randomly selecting samples from the priors and evaluating the likelihood will be insufficient for determining the posterior. This problem differs fundamentally from the Monte-Carlo uncertainty propagation because we are not trying to calculate a function of random variables; instead, we are trying to calculate the product of two probability distributions, normalized by the evidence, which means the random draws need to sufficiently sample that integral as well.
In the most reductive terms, Markov chain Monte Carlo (MCMC) is a process to sample an arbitrary probability distribution. A Markov chain describes a system with a series of states $\dots x_{t-1}, x_{t}, x_{t+1} \dots$, where the transition from state $t$ to state $t+1$ depends only on the properties of the system at $t$, with no dependence on $x_{t-1}$ \cite{Mackay_2002, Cover_2006}. This abstract idea relates to the problem at hand, i.e., drawing samples from an arbitrary probability distribution, by trying to construct a Markov chain whose states are distributed according to the desired probability distribution, $\pi$, i.e., each $x_t \sim \pi$. The Monte Carlo portion of Markov chain Monte Carlo refers to the fact that the Markov chain is constructed using Monte-Carlo methods to generate the transitions from state $x_{t}$ to state $x_{t+1}$. A proof that this process converges to the desired probability distribution is well beyond the scope of this thesis, but details can be found in Ref.~\cite{Christian_2005}.
Many variants of MCMC exist, but perhaps the most commonly encountered is the Metropolis-Hastings algorithm \cite{Metropolis_1953, Hastings_1970}. Say we want to estimate a target distribution, $\pi$. If the Markov chain is at a point $x_{t}$, then a new point is proposed according to a proposal distribution, $q(x^{\prime}|x_{t})$. This new point is accepted with a probability $\alpha$. The question now is what form must the product of these two probabilities, $T = \alpha q $, take in order for the samples to converge to $\pi$? A sufficient but not necessary condition for the Markov chain to converge to $\pi$ is that these transitions, $T$, to and from a state satisfy \textit{detailed balance} \cite{Mackay_2002}:
\begin{equation}
\label{eq:detailed_balance}
T(x^{\prime}|x_{t}) \pi(x_{t}) = T(x_{t}|x^{\prime}) \pi(x^{\prime}).
\end{equation}
The total probability of a transition will be given by the product of the acceptance probability and the proposal distribution, $T(x^{\prime}|x_{t}) = \alpha(x^{\prime}|x_{t}) q(x^{\prime}|x_{t})$. It follows that the acceptance probability, whatever we choose, must satisfy:
\begin{equation}
\label{eq:prop_form}
\frac{\alpha(x^{\prime}|x_{t})}{\alpha(x_t|x^{\prime})} = \frac{\pi(x^{\prime})}{\pi(x_t)} \frac{q(x_t|x^{\prime})}{q(x^{\prime}|x_{t})}.
\end{equation}
The form chosen for $\alpha$ by Metropolis \textit{et al}. assumed that the proposal distributions were symmetric, $q(x_t|x^{\prime}) = q(x^{\prime}|x_{t})$, and enforced the condition that if a proposed state had a higher probability, then the proposed move should always be accepted \cite{Metropolis_1953}. Plugging these desired properties into Eq.~\ref{eq:prop_form} for the case that $\pi(x^{\prime}) > \pi(x_{t})$ gives: $\alpha(x^{\prime}|x_t)=1$, with detailed balance requiring $\alpha(x_t|x^{\prime}) = \frac{\pi(x_t)}{\pi(x^{\prime})}$. Therefore, the Metropolis acceptance probability is generally:
\begin{equation}
\label{eq:metrop_acceptance}
\alpha(x^{\prime}|x_t) = \textnormal{min} \bigg[ 1, \frac{\pi(x^{\prime})}{\pi(x_t)} \bigg],
\end{equation}
where \say{min} means that $\alpha = 1$ if $\frac{\pi(x^{\prime})}{\pi(x_t)} > 1$ and otherwise will be equal to $\frac{\pi(x^{\prime})}{\pi(x_t)}$. The proposal distributions can be included in this definition as well if they are selected to be non-symmetric (this detail being Hastings' contribution \cite{Hastings_1970}).
Once a proposal distribution has been selected, the algorithm becomes:
\begin{algorithm}[H]
\SetAlgoLined
\KwResult{$n$ samples drawn from probability distribution $P$ }
set the total number of steps $n$\;
set $t = 0$ \;
select initial parameters for $x_t$ \;
\While{$t < n$}{
propose a new set of parameter values $x^{\prime}$\;
\eIf{$P(x^{\prime})/P(x_t) \geq 1 $}{
accept the new coordinates: $x_{t+1} = x^{\prime}$\;
}{
draw a random number $r \in [0,1)$\;
\eIf{$P(x^{\prime})/P(x_t) \geq r$}{
accept the new coordinates: $x_{t+1} = x^{\prime}$\;}{
reject the new coordinates: $x_{t+1} = x_{t}$\;
}
}
$t = t+1$\;
}
\caption{Metropolis}
\end{algorithm}
\noindent This algorithm shows the power of MCMC: in a simple case, all a computer has to do is evaluate the probability distribution function and draw a random sample from $[0,1)$.
Of critical importance to the problems mentioned in Section 2 of this chapter is the fact that this method depends only on the \textit{ratio} of the probabilities. Since the Bayesian evidence is just a constant number, this ratio removes any dependence of the samples on knowing $P(\textbf{D})$. However, this is a double-edged sword because the samples will only be proportional to $P(\boldsymbol{\theta}|\textbf{D})$. So while we do not have to calculate $P(\textbf{D})$, we also learn very little about it. On a more positive note, the MCMC samples trivially yield marginal posteriors for each model parameter. If estimation of these parameters is the only concern, then MCMC is one of the most flexible, powerful, and widely used techniques \cite{mcmc_review}.
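To make the algorithm concrete, a random-walk Metropolis sampler fits in a few lines of Python. The sketch below uses illustrative names and a symmetric Gaussian proposal; it is not the sampler used in this work, and it exploits the point just made by working entirely with differences of log-probabilities, so the evidence never needs to be computed.
\begin{verbatim}
import numpy as np

def metropolis(log_prob, x0, n_steps, step_size=0.5, seed=None):
    """Random-walk Metropolis sampling of an unnormalized target.

    log_prob returns log P(x) up to an additive constant; that
    constant (the evidence) cancels in the acceptance ratio."""
    rng = np.random.default_rng(seed)
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    lp = log_prob(x)
    chain = np.empty((n_steps, x.size))
    for t in range(n_steps):
        proposal = x + step_size * rng.standard_normal(x.size)
        lp_prop = log_prob(proposal)
        # accept if P(x')/P(x_t) >= 1, else with probability P(x')/P(x_t)
        if lp_prop >= lp or rng.random() < np.exp(lp_prop - lp):
            x, lp = proposal, lp_prop
        chain[t] = x
    return chain

# example: sampling a standard normal distribution
# samples = metropolis(lambda x: -0.5 * np.sum(x**2), [0.0], 100000)
\end{verbatim}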
There are some necessary complications with MCMC that merit discussion now. The first is that the initial position of the chain $x_{t=0}$ will not necessarily be a valid sample from $P(x)$. Furthermore, since each MCMC sample necessarily depends on the previous step, it is required that the first part of the chain is discarded in order to remove any effects of the initialization. This process is called \textit{burn-in}, and the length of time it takes for MCMC to start drawing valid samples from $P(x)$ varies from problem to problem. A closely related issue is the concept of an \textit{effective sample size} ($ESS$). Again due to the correlated nature of the samples, if we draw $N$ samples, only a subset of these samples will be independent. The time it takes for an MCMC simulation to \say{forget} its history is the autocorrelation time, $\tau$, and the number of effective samples is $ESS = N/\tau$. These parameters all depend on $P(x)$ as well as $q(x^{\prime}|x_t)$. As an example, if the proposal distribution is a normal distribution centered around the current position of the chain, the variance of this distribution is a free parameter. If we have a $d$-dimensional problem, then there are $\approx d^2$ free variances and covariances. Each of these parameters will affect $\tau$ and, therefore, $ESS$.
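Both $\tau$ and $ESS$ can be estimated directly from the samples themselves. The following sketch computes a crude integrated autocorrelation time for a one-dimensional chain by truncating the autocorrelation sum at its first non-positive term; more careful windowing schemes exist, and this is only meant to illustrate the idea.
\begin{verbatim}
import numpy as np

def integrated_autocorr_time(chain):
    """Crude estimate of tau = 1 + 2*sum_k rho(k) for a 1-D chain,
    truncated at the first non-positive autocorrelation.  The
    effective sample size is then ESS = len(chain) / tau.  This is
    O(N^2); an FFT-based estimate is preferable for long chains."""
    x = np.asarray(chain, dtype=float)
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[x.size - 1:]
    acf = acf / acf[0]
    tau = 1.0
    for rho in acf[1:]:
        if rho <= 0.0:      # simple truncation window
            break
        tau += 2.0 * rho
    return tau

# ess = len(samples) / integrated_autocorr_time(samples)
\end{verbatim}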
\section{Bayesian Energy Calibration}
Focal plane detectors for magnetic spectrographs are typically energy calibrated using a simple, low-order polynomial fit. However, among the most critical nuclear inputs for thermonuclear reaction rates are the resonance energies. This can be seen in Eq.~\ref{eq:narrow_rate_with_resonance_strength}, where $E_r$ enters the rate exponentially. Thus, despite the relative simplicity of the energy calibration process, it plays a critical role in determining reaction rates. If the uncertainties from the calibration process are underestimated, the resulting reaction rate predictions will be severely overconfident. For a magnetic spectrograph, measured peaks on the focal plane represent energy levels in the residual nucleus. By using states of known energy, it is possible to calibrate these spectra, and thereby extract the energies of the other observed levels. Since the focal plane surface is curved, the relationship between $\rho$, the radius of curvature,
and ADC channel number, $x$, is not linear \cite{enge}. Using a polynomial calibration corrects for this curvature across
the focal plane, and takes the form:
\begin{equation}
\label{eq:polynomial_fit}
\rho = Ax^2 + Bx + C,
\end{equation}
for a second order fit. Energy values for calibration states are transformed to $\rho$ values using Eq.~\ref{eq:magnetic_rigidity}. Once a fit is found, $\rho$ can be predicted for any peak in the spectrum, and again with the use of Eq.~\ref{eq:magnetic_rigidity} an excitation energy can be predicted.
Both the peak centroids and energy levels used for calibration will contribute to the statistical uncertainty of the calculated energy levels. Figure \ref{fig:calibration_sketch} shows a sketch of the problem, with the goal being to fit a polynomial in the presence of both $x$ and $\rho$ uncertainties.
\begin{figure}
\centering
\includegraphics[width=.6\textwidth]{Chapter-5/figs/calibration_points_example.pdf}
\caption{Cartoon that demonstrates the issue with energy calibrating the focal plane. The relationship between $x$ and $\rho$ is nonlinear, and uncertainties are present in both variables.}
\label{fig:calibration_sketch}
\end{figure}
The process outlined above offers two distinct problems: propagation of uncertainties through the relativistic collision kinematics and a statistically rigorous regression of the polynomial fit. For this work, uncertainty propagation for the kinematics was performed by using a Monte Carlo method, see Chapter \ref{chap:nuclear_unc} Section 2. To calculate $\rho$, this involves treating previous experimental values for energy levels as normal distributions. Random samples are drawn from these distributions and then used to solve the kinematic equations.
After enough samples have been drawn, a histogram of the solutions to the kinematic equations is made, and from this information, estimates of the probability distribution function
for $\rho$ can be made, which was found to be well described by a normal distribution. Each $\rho$ can then be associated with a peak in the spectrum, giving each calibration point a known $\rho$ and peak centroid, $x$. These points can then be used to find the coefficients of the polynomial through regression.
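Schematically, this propagation takes the following form, where \texttt{rho\_from\_kinematics} is a hypothetical placeholder for the relativistic two-body kinematics combined with Eq.~\ref{eq:magnetic_rigidity} and is not reproduced here; all other names are illustrative as well.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng()

def rho_distribution(E_x, sigma_E, n_samples=10000):
    """Propagate a literature level energy E_x +/- sigma_E into a
    distribution of rho values.  rho_from_kinematics is a
    hypothetical placeholder for the kinematics solver."""
    energies = rng.normal(E_x, sigma_E, n_samples)
    rhos = np.array([rho_from_kinematics(E) for E in energies])
    # the result was found to be well described by a normal
    # distribution, so its mean and width summarize it
    return rhos.mean(), rhos.std()
\end{verbatim}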
The challenge associated with the polynomial regression in this case is that the uncertainties in $\boldsymbol{\rho}$ are comparable to those in the peak centroids $\mathbf{x}$. Fitting the polynomial needs to account for both of these uncertainties simultaneously. The uncertainties from both of these sources should be reflected in the posterior distributions for the polynomial coefficients, $(\theta_0,\ldots,\theta_N)$, where $N$
is the order of the polynomial.
Following Eq.~\ref{eq:bayes_theorem}, a Bayesian analysis of this problem requires that prior distributions and a likelihood function are assigned. Uninformative priors are assigned to every polynomial coefficient. These distributions take the form of broad normal distributions centered around zero:
\begin{equation}
\label{eq:coeff_priors}
\theta_j \sim \mathcal{N}(0, 100^2),
\end{equation}
where $\theta_j$ is the $j^\textnormal{th}$-order polynomial coefficient. This prior ensures that these parameters are allowed to vary over a wide range of values. In principle, the prior on the intercept could be made stricter since $\rho \geq 0$, but the choice of priors, provided a wide enough coverage in values, was found to make no appreciable difference in the results.
For the peak centroids, a simplification is made that assumes these values are model parameters, and they are therefore assigned informative priors. If $x_{i, obs}$ is the measured centroid in channel units for calibration peak $i$ with a measured standard deviation $\sigma_{i, obs}$, then this prior is given by:
\begin{equation}
\label{eq:centroid_prior}
x_i \sim \mathcal{N}(x_{i, obs}, \sigma^2_{i, obs}).
\end{equation}
An equivalent formulation of this statement could be made by assuming the measured values are subject to a likelihood function; while that is the more general prescription, it would be slightly more complicated in the present case and is mentioned here only for completeness.
The likelihood function makes the connection between the $\rho$ values that were selected for the calibration, and the polynomial function:
\begin{equation}
\label{eq:poly_eq_for_model}
f(\theta_j, x_i) = \sum_{j=0}^N \theta_j x_i^j.
\end{equation}
The residuals of the polynomial fit and the calibration $\rho$ values, $(f(\theta_j, x_i) - \rho_{i, cal})$, are assumed to be normally distributed with a standard deviation taken from the previously reported uncertainties, $\sigma_{i, cal}$. This gives:
\begin{equation}
\label{eq:calibration_likelihood}
\rho_{i, cal} \sim \mathcal{N}(f(\theta_j, x_i), \sigma^2_{i, cal}).
\end{equation}
Combining all of these terms, the full Bayesian model for the focal plane calibration becomes:
\begin{align}
\label{eq:calibration_bayesian_model}
& \textnormal{Priors:} \nonumber \\
& x_i \sim \mathcal{N}(x_{i, obs}, \sigma^2_{i, obs}) \nonumber \\
& \theta_j \sim \mathcal{N}(0, 100^2) \nonumber \\
& \textnormal{Function:} \\
& f(\theta_j, x_i) = \sum_{j=0}^N \theta_j x_i^j \nonumber \\
& \textnormal{Likelihood:} \nonumber \\
& \rho_{i, cal} \sim \mathcal{N}(f(\theta_j, x_i), \sigma^2_{i, cal}). \nonumber
\end{align}
To summarize this model, each calibration peak in the spectrum has a measured channel mean $x_{i, obs}$ and variance, $\sigma_{i, obs}^2$. These values are used to assign an informative prior to the peak value used in the model, $x_i$.
The likelihood function is evaluated at the calibration points, $\rho_{i, cal}$, which are found by using excitation energies in the literature and are converted to $\rho$ using the kinematics and experimental parameters of the experiment being analyzed.
Evaluation of the posterior distribution was performed using MCMC. The MCMC chain was initialized using values from a maximum likelihood estimate in order to decrease the burn in time.
The model was set up and evaluated using the \texttt{PyMC2} package \cite{pymc}.
Typical runs draw around $2 \times 10^5$ samples after $5 \times 10^4$ initial steps are discarded as burn-in. Thinning is also employed as needed, but convergence times can vary greatly between
different nuclei depending on how well the calibration energy levels are known. Additionally, efficient sampling of the posterior was found to be greatly helped by centering channel numbers
around their average value (i.e., for each of the $N$ data points, $x_{i, obs}$: $x_{i, obs}^{\textnormal{scaled}} = x_{i, obs} - \frac{1}{N}\sum_i^N x_{i, obs}$).
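For concreteness, the entire model of Eq.~\ref{eq:calibration_bayesian_model} can be written as a single log-posterior over the polynomial coefficients and the latent centroids, as in the following sketch. This is an illustrative reimplementation rather than the original \texttt{PyMC2} code; the channel centering described above is omitted for brevity, and any MCMC sampler, such as the \texttt{emcee} package discussed later in this chapter, could be applied to the function.
\begin{verbatim}
import numpy as np

def log_posterior(params, x_obs, sig_obs, rho_cal, sig_cal, order=2):
    """params = (theta_0, ..., theta_order, x_1, ..., x_N): the
    polynomial coefficients followed by the latent 'true' centroids."""
    theta = params[:order + 1]
    x = params[order + 1:]
    lp = -0.5 * np.sum((theta / 100.0) ** 2)           # theta_j ~ N(0, 100^2)
    lp += -0.5 * np.sum(((x - x_obs) / sig_obs) ** 2)  # x_i ~ N(x_obs, sig_obs^2)
    f = np.polyval(theta[::-1], x)                     # sum_j theta_j * x^j
    lp += -0.5 * np.sum(((rho_cal - f) / sig_cal) ** 2)
    return lp
\end{verbatim}
Initializing the chain near a least-squares solution, as described above, shortens the burn-in considerably.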
To deduce excitation energies from the calibration fit, a Monte Carlo procedure is again used. In this case, the uncertainty contributions come from both the chosen peak centroid and the coefficients of the polynomial fit. The samples for the coefficients were drawn from a kernel density estimate (KDE) constructed from the posterior samples to account for the non-normality of the sampled distributions. The resulting estimated energies, however, were found to be normally distributed.
There is no guarantee that a chosen set of calibration points will produce a fit that accurately predicts energies.
Frequently this problem arises from misidentifying peaks in the spectrum.
Thus, a goodness-of-fit measure is necessary to help select a valid set of calibration points.
A reduced-$\chi^2$ statistic is available in a Bayesian framework,
but it arises from a maximum likelihood approximation with normally distributed data uncertainties and uniformly distributed priors \cite{bayes}.
However, the calibration above is also dependent on the uncertainties in the peak centroids. When these uncertainties are accounted for, a $\chi^2$ function that is only sensitive to the calibration energy uncertainties could lead to the rejection of an otherwise satisfactory
calibration set. In order to integrate these variations into a maximum likelihood estimate, a quantity, which I will call $\delta^2$, is defined as:
\begin{equation}
\label{eq:delta_definition}
\delta^2 = \frac{1}{2K}\sum_{\alpha=1}^{K}\bigg[\frac{1}{N-\nu} \sum_{i=1}^{N}
\bigg(\frac{f(x_{\alpha i};\boldsymbol{\theta})-\mu_{\rho_i}}{\sigma_{\rho_i}}\bigg)^2
+ \frac{1}{M} \sum_{j=1}^{M} \bigg(\frac{x_{\alpha j}-\mu_{x_j}}{\sigma_{x_j}}\bigg)^2 \bigg],
\end{equation}
where $N$ is the number of measured $\rho$ values, $\nu$ is the number of fitting parameters, $M$ is the number of centroids,
and $K$ is the number of centroid samples drawn. The factor of $1/2$ accounts for each term approaching unity
when the fitted parameters describe the data well. This quantity is again based on a maximum likelihood approximation applied to normally distributed
uncertainties with uniform priors, but it serves as a useful approximation for the goodness-of-fit of the $\rho$ versus channel calibration.
This method was found to clearly distinguish misidentified peaks without giving false negatives arising from channel uncertainties.
It was found that $\delta^2 < 5$ usually indicates a fit free from misidentified states and worthy of further examination.
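For reference, Eq.~\ref{eq:delta_definition} transcribes directly into a short function. The sketch below assumes one centroid per calibration point ($M = N$) and uses illustrative names; it is not the analysis code itself.
\begin{verbatim}
import numpy as np

def delta_squared(theta, x_samples, mu_rho, sig_rho, mu_x, sig_x, nu):
    """x_samples has shape (K, N): K prior draws of the N calibration
    centroids; theta holds the polynomial coefficients in ascending
    order; nu is the number of fit parameters."""
    K, N = x_samples.shape
    total = 0.0
    for x_a in x_samples:
        f = np.polyval(theta[::-1], x_a)   # polynomial at sampled centroids
        rho_term = np.sum(((f - mu_rho) / sig_rho) ** 2) / (N - nu)
        x_term = np.sum(((x_a - mu_x) / sig_x) ** 2) / N
        total += rho_term + x_term
    return total / (2.0 * K)
\end{verbatim}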
The techniques outlined above define statistically sound procedures for uncertainty propagation and $\rho$ versus channel fitting for focal plane energy calibration.
These procedures have the advantage of not approximating the influence of the multiple sources of uncertainty, and creating a general framework which can be expanded as dictated by the experiment.
\subsection{$^{28}$Al Calibration}
The Bayesian energy calibration method presented above was originally conceived during the analysis of the $^{27}$Al$(d,p)$ experiment performed at TUNL to characterize the focal plane detector (see the work described in Chapter \ref{chap:tunl} Section 4). The experiment was done using DENIS to generate an approximately $ 1 \, \mu \textnormal{A}$ beam of $^2$H$^{-}$. This
beam was accelerated through the tandem to generate a $12$-MeV $^2$H$^{+}$ beam. Energy analysis was done using the high transmission settings of the $90$-$90$ system. The SPS solid angle was fixed to $0.25$ msr to minimize the effects of kinematic broadening. The detector was filled to $225$ Torr and the position sections were biased to $1800-2000$ V.
The beam was impinged on a target of ${\sim} 80$ $\mu$g/cm$^2$ $^{27}\!$Al evaporated onto a $15.2$ $\mu$g/cm$^2$ $^{\textnormal{nat}}$C foil.
Additionally, a $^{\textnormal{nat}}$C target similar to the Al target backing was used to identify contamination peaks arising from carbon and oxygen.
The spectrograph was positioned at three angles, $\theta_{\textnormal{Lab}}=15^{\circ}$, $25^{\circ}$, and $35^{\circ}$, and its field was set to $0.75$ T. Example spectra from the $\Delta E$/$E$ telescope and from the front position section gated on the proton group at $25^{\circ}$ are shown in Figs.~\ref{fig:2D} and \ref{fig:spectra}, respectively.
\begin{figure}
\centering
\centering
\includegraphics[width=.9\textwidth]{Chapter-5/figs/DE_E.pdf}
\caption{$\Delta E$/$E$ 2D spectrum from $^{27}\!$Al$(d,p)$ at $E_{\textnormal{Lab}}=12$ MeV and $\theta_{\textnormal{Lab}}=25^{\circ}$. The horizontal axis is the amount of energy deposited into the scintillator, while
the vertical axis is the energy lost in the $\Delta E$ proportional counter. {The two observed particle groups have been circled and labeled.}}
\label{fig:2D}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.9\textwidth]{Chapter-5/figs/front_AL.pdf}
\caption{Front position section gated on the proton group at $E_{\textnormal{Lab}}=12$ MeV and $\theta_{\textnormal{Lab}}=25^{\circ}$. Peaks used for the energy calibration are highlighted and labeled by their energies in keV.
All other labels are the deduced energies from this work, as reported in Section \ref{sec:al_results}. {Unlabeled peaks were unobserved at other angles due to lower statistics; thus, the reaction that produced them could not be identified with certainty and they are excluded from the reported energy values.}}
\label{fig:spectra}
\end{figure}
States from $^{28}\!$Al were identified and matched to the peaks in each spectrum. Level energies from Ref.~\cite{levels} were used both as calibration values and as comparisons for predicted energies.
Initially, a set of seven calibration states was chosen for each angle. A second-order polynomial was chosen due to the third-order term being consistent with zero and a marginally better value of $\delta^2$.
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{Chapter-5/figs/nndc_comp.pdf}
\caption{The residual plot of the Bayesian (blue) and {Bayesian with adjusted uncertainty (green)} calibrations at $\theta = 25^{\circ}$. {Calibration points are not included, so only predicted energies are shown.} The excitation energies of the points {with adjusted uncertainty} have been shifted to the right by 50 keV for visibility. The error bars represent the statistical uncertainty of the fit added in quadrature with the uncertainty reported in Ref. \protect\cite{levels}. It is clear that
the statistical uncertainties from the fit alone are inconsistent with previously reported values. However, general agreement is found when an {additional uncertainty is fit during the calibration.}}
\label{fig:nndc}
\end{figure*}
For the case of $^{28}$Al, most of the strongly populated states are known to sub-keV precision, which leads to small statistical uncertainties in the fit. These small uncertainties make the deduced values inconsistent with those reported in the Evaluated Nuclear Data Structure File (ENSDF) database \cite{levels}.
Thus, there is clearly an additional source of uncertainty present that is not accounted for in the Bayesian model of Eq.~\ref{eq:calibration_bayesian_model}. This could be due to systematic effects, but sources of systematic uncertainty from experimental parameters (i.e., reaction angle, beam energy, target effects)
have a minimal effect on the deduced excitation energies due to the calibration process.
For example, if the beam energy is off by $\sim\! 5$ keV, then all of the calibration points, assuming they are from the
same reaction, will be shifted by roughly the same amount. Therefore, the calibration's intercept will change, and the effect is canceled out in the predicted energies.
The same arguments hold for any systematic effect that is equal for all points.
Following these considerations, the main source of systematic uncertainty from these effects is the energy dependence of the straggling through the target.
This effect was estimated to be $0.2$ keV, which is too small to explain the disagreement of our results with ENSDF.
The other possible sources of systematic uncertainty arise from the detector response.
In the polynomial model outlined above, the detector response is assumed to be linear; however, deviations from this assumption will cause the model to incorrectly predict
energies. In order to account for these possible effects, an extension to the Bayesian framework was used. In this model, the uncertainty
for the peak centroids {(which we refer to as the \say{adjusted uncertainty})} is considered to be of the form:
\begin{equation}
\label{eq:1}
\Sigma_{i}^2 = \sigma_{i, obs}^2+\sigma^{\prime}{}^{2} ,
\end{equation}
where $\sigma_{i, obs}$ is the observed statistical uncertainty in a given peak, $\sigma^{\prime}$ is an uncertainty that is not directly measured, and $\Sigma_i$ is the new {adjusted uncertainty} for a given peak.
The purpose of $\sigma^{\prime}$ is to broaden the
normal distribution associated with each peak to a degree dictated by the available data. This broadening accounts
for systematic effects in our position measurement, but does not assume a cause or a fixed value. Rather, it is merely another
model parameter to be estimated during calibration.
In practice, this is done by extending the Bayesian model with a prior distribution for $\sigma^{\prime}$ and using the same MCMC method to infer its value during the polynomial regression.
The choice of prior for $\sigma^{\prime}$ is more nuanced than for the other parameters previously mentioned.
Recall that our data reflects the influence of an additional uncertainty that cannot be directly estimated.
Thus, the prior must encode a source of uncertainty that is larger than the observed statistical uncertainties,
but not large enough to affect the calibration process.
These considerations lead to the adoption of a Half-Cauchy distribution for the precision, $\tau^{\prime} \equiv 1/\sigma^{\prime}{}^2$,
which is a simple transformation of the standard deviation that was found to improve MCMC convergence.
The Half-Cauchy distribution, written $\textnormal{HalfCauchy}(\alpha,\beta)$,
is parameterized by $\alpha$, the location parameter, and $\beta$, the scale parameter. The Half-Cauchy distribution
has been found to give good numerical behavior close to $0$, and it avoids the hard limits of the Uniform distribution \cite{gelman2006}.
For this model:
\begin{equation}
\label{eq:additional_prior_unc}
\tau^{\prime} \sim \textnormal{HalfCauchy}(0,1).
\end{equation}
Additional analysis showed no noticeable dependence on $\beta$. With this addition, our model becomes:
\begin{align}
\label{eq:calibration_bayesian_model_adjusted}
& \textnormal{Priors:} \nonumber \\
& \tau^{\prime} \sim \textnormal{HalfCauchy}(0,1) \nonumber \\
& x_i \sim \mathcal{N}(x_{i, obs}, \Sigma^2_{i}) \nonumber \\
& \theta_j \sim \mathcal{N}(0, 100^2) \nonumber \\
& \textnormal{Function:} \\
& \sigma^{\prime}{}^2 = 1/\tau^{\prime} \nonumber \\
& \Sigma_i^2 = \sigma_{i, obs}^2+\sigma^{\prime}{}^{2} \nonumber \\
& f(\theta_j, x_i) = \sum_{j=0}^N \theta_j x_i^j \nonumber \\
& \textnormal{Likelihood:} \nonumber \\
& \rho_{i, cal} \sim \mathcal{N}(f(\theta_j, x_i), \sigma^2_{i, cal}). \nonumber
\end{align}
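In the log-posterior sketch shown earlier, this extension amounts to a single additional sampled parameter. A minimal modification is given below; it is again illustrative, with $\ln \tau^{\prime}$ sampled so that $\tau^{\prime}$ stays positive and the Jacobian of that transform included, and normalization constants dropped.
\begin{verbatim}
import numpy as np

def log_posterior_adjusted(params, x_obs, sig_obs, rho_cal, sig_cal,
                           order=2):
    theta = params[:order + 1]
    x = params[order + 1:-1]
    log_tau = params[-1]                  # sample ln(tau') so tau' > 0
    tau = np.exp(log_tau)
    sigma_prime2 = 1.0 / tau              # sigma'^2 = 1 / tau'
    Sigma2 = sig_obs ** 2 + sigma_prime2  # adjusted centroid variances
    # HalfCauchy(0, 1) prior on tau', plus the log-transform Jacobian
    lp = -np.log(1.0 + tau ** 2) + log_tau
    lp += -0.5 * np.sum((theta / 100.0) ** 2)
    # normal priors on x_i now have parameter-dependent widths, so the
    # log-determinant term must be kept
    lp += -0.5 * np.sum((x - x_obs) ** 2 / Sigma2 + np.log(Sigma2))
    f = np.polyval(theta[::-1], x)
    lp += -0.5 * np.sum(((rho_cal - f) / sig_cal) ** 2)
    return lp
\end{verbatim}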
The comparison of the ENSDF values versus the original fit and the adjusted uncertainty fit
at $\theta_{Lab} = 25^{\circ}$ is shown in Figure \ref{fig:nndc}. Better agreement was found, indicating
the method produces reasonable estimates for the total uncertainty.
\subsection{Results}
\label{sec:al_results}
\begin{table}[]
\centering
\setlength{\tabcolsep}{12pt}
\caption{ Measured $^{28}$Al energy levels (keV)}
\begin{tabular}{c c l}
\hline
This Work & &ENSDF \cite{levels} \\
\hline
\hline
36(5) & & 30.6382(7) \\
1372(5) & & 1372.917(20) \\
1619(5) & & 1621.60(4)* \\
2135(5) & & 2138.910(10) \\
2200(5) & & 2201.43(3) \\
2480(5) & & 2486.20(6) \\
2576(5) & & 2581.81(22) \\
3105(5) & & 3105(1) \\
3289(5) & & 3296.34(4) \\
3341(5) & & 3347.19(4) \\
3583(5) & & 3591.457(9) \\
3941(5) & & 3935.603(18) \\
4020(5) & & 4033(3) \\
4244(5) & & 4244.49(10) \\
4456(5) & & 4461.97(10) \\
4510(9) & & 4516.94(18) \\
\hline
\multicolumn{3}{r}{* An average of two states at 1620 and 1622.} \\
\end{tabular}
\label{tab:energies}
\end{table}
Using the Bayesian framework with the addition of $\Sigma_i$, a calibration was produced for each angle.
The same calibration peaks were used for each angle, and they represent strongly populated states that are well resolved in the spectra and not obscured by contamination peaks that shift with angle. The locations of these states at $\theta_{\textnormal{Lab}}=25^{\circ}$ are shown in Fig.~\ref{fig:spectra}.
The reported values listed in Table \ref{tab:energies} are weighted averages of the energies over all angles, with
the requirement that any candidate state be observed at more than one angle. A total of 16 states (excluding the 7 calibration states) were measured in this way.
Finally, an estimate of the detector resolution can be found from the slope of the $\rho$ calibration.
A slope of $0.036 \frac{\textnormal{mm}}{\textnormal{channel}}$ was consistently found, with typical full width at half maximum (FWHM) values of $10$-$20$ channels. These values give resolutions between $0.36$-$0.72$ mm.
The separation of the ground and first excited states implies an energy resolution of $\approx 15$ keV.
\subsection{Additional Notes On Energy Calibration}
The method and results described were first presented in Ref.~\cite{marshall_2018}; since that time, several improvements and insights have been made. The first is that the \texttt{PyMC2} package was abandoned in favor of the \texttt{emcee} package \cite{emcee}. The specifics of this sampler will be discussed in the next part of this chapter. \texttt{PyMC2} is a fairly simple implementation of the Metropolis-Hastings algorithm, and therefore struggled severely even in this fairly simple setting. The chains suffered from high autocorrelation, which required that the deduced energies be constructed from KDE representations of the samples. This KDE estimation was, by construction, insensitive to correlations between the polynomial parameters, which means that the deduced excitation energies did not account for these correlations. For the above results, this matters little because a majority of the states are inside the fitting region. These \textit{interpolated} values are less sensitive to the polynomial correlations; however, in the case of \textit{extrapolated} values, these correlations are critical. By implementing \texttt{emcee}, which easily sampled the model, the autocorrelation times decreased, allowing many more independent samples to be drawn. Independent samples can be used directly in the calculation of the excitation energies, and they naturally account for the correlation between model parameters.
The additional uncertainty, $\sigma^{\prime}$, was motivated above, but a few additional notes should be made. The description presented above focuses on the case of $^{28}$Al, but spectrograph analyses in general have often reached for additional sources of uncertainty. For example, Wrede \cite{wrede_thesis} describes a \textit{reproducibility uncertainty}, which adjusts the uncertainty based on the disagreement between the predicted values for calibration states and their input values. Hale et al. \cite{hale_2004} describe a procedure of comparing the predicted energies for other states in the spectrum in order to estimate an additional uncertainty. These adjusted uncertainties and the additional parameter in the Bayesian model are both a response to the inadequacy of a simple polynomial fit for spectrograph data. Deviations from a polynomial fit are made worse because the uncertainties resulting from the fit are often smaller than the calibration uncertainties, which becomes a major issue when these precise values clearly conflict with the observations. This problem will likely grow worse with time, as the precision of the energies used for calibration improves, further decreasing the statistical uncertainties from the fit. Even if it were the case that the polynomial fit exactly described $\rho$ as a function of detector position, trouble can still arise due to systematic effects in the calibration data. The Bayesian approach offers a clear advantage in this scenario because $\Sigma_i$ adds a small amount of robustness to the fitting procedure, which means that outliers will have a diminished impact on the fit results (for a larger discussion of robustness in the context of Bayesian analysis, see Chapter 15 of Ref.~\cite{Kruschke_2010}).
The final note is on the process of taking a weighted average between the measurements at different angles. This procedure can also produce unrealistically small uncertainties, an effect that grows more severe as more angles are measured. Again, if underlying systematic effects are thought to be present, a weighted average may not be a valid approach. A demonstration of this effect will be presented in Chapter \ref{chap:sodium}.
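For reference, the standard inverse-variance weighted average used above can be sketched as follows; the numbers are illustrative, and the point is that the combined uncertainty shrinks with the number of angles regardless of any shared systematic effect:
\begin{verbatim}
import numpy as np

def weighted_average(values, sigmas):
    """Inverse-variance weighted mean and its purely statistical error."""
    w = 1.0 / np.asarray(sigmas) ** 2
    return np.sum(w * values) / np.sum(w), 1.0 / np.sqrt(np.sum(w))

# Four angles, each with a 5 keV uncertainty, combine to 2.5 keV even if
# a common systematic (e.g., a calibration shift) affects all four alike.
print(weighted_average([1372.0, 1373.0, 1371.0, 1372.0], [5.0] * 4))
\end{verbatim}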
\section{Bayesian DWBA}
\label{sec:bay_dwba}
This portion of the thesis details work originally published in Ref.~\cite{Marshall_2020} and draws heavily from that paper. All of the calculations and methods were performed and developed by the author.
\subsection{Introduction to the Problem}
Chapter \ref{chap:reactions} discussed the need for a reaction theory in order to extract single-particle quantities from transfer reaction data. By using DWBA, it becomes possible to determine both the transferred angular momentum, $\ell$, and the
spectroscopic factor of the populated single particle or hole state \cite{satchler}.
This structure information can in turn be used to answer questions in nuclear astrophysics \cite{transfers_in_astro},
and to test the shell model on isotopes located far from stability \cite{Wimmer_2018}.
Despite the wide use of these methods, quantifying the uncertainties associated with
both the optical potentials and the reaction model has been a long-standing issue. Previous studies have used statistical methods to
determine the uncertainty on the potential parameters \cite{varner}, but little work has been done to propagate
these uncertainties through DWBA calculations in a statistically meaningful way. To date, most spectroscopic factors are reported
with either no uncertainty, an uncertainty assumed equal to that of the data normalization, or a constant $25 \%$ determined from historical studies \cite{endt_cs}.
Over the last few years, these issues have led to a renewed focus on the impact of the optical model parameters on transfer reactions. A series of studies have focused on the nature and magnitude of these effects \cite{Lovell_2015, lovell_opt_unc, king_d_p}. The first steps have also been taken towards quantifying these uncertainties using Bayesian statistics \cite{lovell_mcmc, king_dwba}. These studies focus on the broad effects of optical potentials, but it is worthwhile to establish a Bayesian framework in which the results of a single experiment can be analyzed. This section details my work to establish such a framework, and to examine the possible implications on future experiments. This framework is critical to the analysis of the $^{23}$Na$ (^{3}\textnormal{He}, d)$ reaction in Chapter \ref{chap:sodium}.
The methods developed and presented here were first applied to the analysis of the proton pickup reaction $^{70} \textnormal{Zn} (d, ^{3} \textnormal{He}) ^{69} \textnormal{Cu}$,
which was originally reported in Ref. \cite{pierre_paper}.
This data set possesses many of the features typical of a transfer measurement study: the use of a high resolution magnetic spectrograph to resolve the excited states of interest, elastic data for the entrance channel collected with the same target and beam, experimental uncertainties coming from counting statistics,
and limited angular coverage in both the elastic scattering and transfer differential cross sections. The previous analysis assigned $\ell$ values and extracted spectroscopic factors for
the first eight excited states of $^{69} \textnormal{Cu}$. As such, this data set allows a proof of concept for the Bayesian analysis, since its methods closely parallel the experiments carried out at TUNL, but the data reduction has already been performed. This reanalysis aims to determine the uncertainties associated
with these quantities using Bayesian statistics, and in doing so provide a method that can be built upon in order to analyze transfer reactions of interest to astrophysics.
\subsection{Zero-range Approximation}
The Bayesian model that will be presented in the next part of this chapter will require many millions of likelihood evaluations in order to estimate the posterior distributions. Thus, it is critical to reduce the computational cost of the transfer calculations. In light of this requirement, the zero-range approximation has been adopted for these calculations \cite{satchler}. The approximation, in the specific case of the pick-up reaction $A(d, ^{3} \! \textnormal{He})B$, takes the prior form of the transfer potential, given by:
\begin{equation}
\label{eq:prior_pickup}
\mathcal{V}_{prior} = V_{p+d} + \mathcal{U}_{d+B} - \mathcal{U}_{d+A}
\end{equation}
and sets the remnant term, $\mathcal{U}_{d + B} - \mathcal{U}_{d+A}$, to zero. Experimental observation has justified considering the remnant term negligible \cite{first_dwba}. The projectile is then assumed to be absorbed and emitted from the same point, so that the light-particle matrix element reduces to:
\begin{equation}
\label{eq:1}
\bra{d} V_{pd} \ket{^3\textnormal{He}} \sim D_0 \delta(\mathbf{r_p}) ,
\end{equation}
where $\ket{^3\textnormal{He}}$ and $\ket{d}$ are the internal wave functions of the ejectile and projectile, respectively, $D_0$ is
the volume integral of the interaction strength, $V_{pd}$ is the binding potential of the proton to the deuteron,
and $\mathbf{r_p}$ is the coordinate of the proton relative to the deuteron. The use of this approximation gives the further benefit of a direct
comparison to the original analysis of $^{70}$Zn$(d, ^3 \textnormal{He}) ^{69}$Cu that used the zero-range code DWUCK4 for the extraction of $C^2S_{p+B}$ \cite{dwuck4}. It should be noted that Ref.~\cite{pierre_paper} also performed finite-range calculations, which solve the above matrix element by using the expansion from Eq.~\ref{eq:overlap_expansion} for $^3$He, but the computational costs are prohibitively expensive in the present analysis. The value of $D_0$ is calculated
theoretically, with the historical value for proton pick-up and stripping reactions being $D_0 = -172.8$ MeV fm$^{3/2}$ \cite{bassel}. Comparing the different models in Ref.~\cite{all_norms_3He}, an approximately $15 \%$ spread in the values of $D_0^2$ is observed. This is in line with the findings of Ref.~\cite{bertone}, which also noted an approximate $15 \%$ spread in the product $(C^2S_{p+d}) D_0^2$. The above value is adopted here with its associated uncertainty; however, \textit{ab initio} methods, such as those in Ref.~\cite{brida_ab_initio}, now offer more precise determinations of the $\braket{d|^3 \textnormal{He}}$ overlap. If $D_0$ is deduced using these methods, then this additional source of uncertainty will effectively be eliminated.
\subsection{Bayesian Considerations}
A Bayesian approach to this problem will, again, require that prior probabilities be assigned to every optical model potential parameter and to the spectroscopic factor itself. These prior probabilities will then be updated through the likelihood function using the experimentally measured cross sections for the elastic and transfer channels.
One of the primary goals of this work is to give probabilities for each allowed transfer. The motivation for such a method is that states of astrophysical interest are frequently weakly populated and/or obscured by contaminants. In such a scenario, the angular distribution will often not be informative enough to give a unique $\ell$ assignment. As an example, Fig.~\ref{fig:hale_23na_8945} shows the data for the $8945$-keV state in $^{23}$Na populated via $^{22}$Ne$(^3\textnormal{He}, d)$ in Ref.~\cite{hale_2001}, as reproduced in Ref.~\cite{kelly_2017}. It is clear that a unique $\ell$ value cannot be assigned from such a distribution, but how much information about $\ell$ can we learn from such data? Ideally, these data could be used to give probabilities for each $\ell$ value, and these probabilities could be taken into account when calculating other quantities, such as the reaction rate.
\begin{figure}
\centering
\includegraphics[width=.5\textwidth]{Chapter-5/figs/Hale_angular_dist.png}
\caption{Angular distribution from the $8945$-keV state in $^{23}$Na populated via $^{22}$Ne$(^3\textnormal{He}, d)$ from Ref.~\cite{hale_2001}, as reproduced in Ref.~\cite{kelly_2017}. The state was weakly populated and obscured at several angles by a contamination peak. A unique $\ell$ value ($L$ in the figure) could not be determined.}
\label{fig:hale_23na_8945}
\end{figure}
The problem of determining the $\ell$ value for a transfer requires a reformulation of Bayes' theorem. To be specific, this problem belongs to a subcategory of Bayesian inference called model selection. Computing the probability for a
model, $M_j$, can be done by restating Bayes' theorem:
\begin{equation}
\label{eq:m_theorem}
P(M_j|\mathbf{D}) = \frac{P(\mathbf{D}|M_j) P(M_j)}
{\sum_i P(\mathbf{D}|M_i)P(M_i)}.
\end{equation}
The above expression is built on the same logical foundation as Eq.~\ref{eq:bayes_theorem}, but has been adapted to compute
posterior distributions for $M_j$, which means a comparison can now be made between different models. For each $M_j$, there is a set of model parameters $\boldsymbol{\theta}_j$, which have been marginalized over. Formally, we have:
\begin{equation}
\label{eq:marg}
P(\mathbf{D}|M_j) = \int P(\mathbf{D}|M_j, \boldsymbol{\theta}_j) P(\boldsymbol{\theta}_j|M_j) d\boldsymbol{\theta}_j.
\end{equation}
Examining Eq.~\ref{eq:marg}, it can be seen that $P(\mathbf{D}|M_j)$ is equivalent to the evidence integral from Eq.~\ref{eq:bayes_theorem}. Thus, in order to evaluate how probable different angular
momentum transfers are, the evidence integral must be calculated.
Once the evidence integral is calculated, there are several metrics to interpret model posterior probabilities. For simplicity, we will now refer to the evidence integral as $Z_j$, which corresponds to the model $M_j$, and additionally assume the model priors, $P(M_j)$, are equal for each model. This assumption means before the transfer data are analyzed, all $\ell$ transfers are considered equally likely. The most commonly used criterion for Bayesian model selection is called the Bayes Factor, which is defined by:
\begin{equation}
\label{eq:bayes_factor}
B_{ji} = \frac{Z_j}{Z_i},
\end{equation}
in the case of equal model priors.
If this ratio is greater than $1$, the data support the selection of model $j$, while values less than $1$
support model $i$. Judging the level of significance for a value of $B_{ji}$ is open to interpretation, but a useful
heuristic was given by Jeffreys \cite{Jeffreys61}. For the cases where model $j$ is favored over $i$, we have the following
levels of evidence: $3 > B_{ji} > 1$ is anecdotal, $10 > B_{ji} > 3$ is substantial, $30 > B_{ji} > 10$ is strong, $100 > B_{ji} > 30$ is very strong, and $ B_{ji} > 100$ is decisive.
It is also possible to calculate explicit probabilities for each model. Again assuming each of the models is equally likely, the probability of a given model can be expressed as:
\begin{equation}
\label{eq:model_prob}
P(M_j|\mathbf{D}) = \frac{Z_j}{\sum_i Z_i}.
\end{equation}
Through Eq.~\ref{eq:model_prob}, probabilities can be calculated for each physically allowed angular momentum transfer, $\ell_j$.
Using these definitions, Bayesian inference can be carried out after prior probabilities are assigned for each optical model parameter and a likelihood function for the data is chosen.
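These relations are straightforward to evaluate numerically. A minimal sketch, working with log-evidences for numerical stability; the values below are placeholders, not results of this work:
\begin{verbatim}
import numpy as np
from scipy.special import logsumexp

# Placeholder log-evidences for l = 0, 1, 2, 3 transfers.
log_z = np.array([3.9, 10.7, 10.0, 13.4])

# Model probabilities with equal model priors:
# P(M_j|D) = Z_j / sum_i Z_i, computed in log space.
probs = np.exp(log_z - logsumexp(log_z))

# Bayes factors of the most probable model against each alternative.
best = np.argmax(log_z)
bayes_factors = np.exp(log_z[best] - log_z)

for ell, (p, b) in enumerate(zip(probs, bayes_factors)):
    print(f"l = {ell}: P = {p:.3f}, B = {b:.2f}")
\end{verbatim}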
\subsection{Ambiguities in Potential Parameters}
\label{sec:amb_pots}
Any analysis involving potentials of the form in Eq.~\ref{eq:ws_pot} will suffer from so-called continuous and discrete ambiguities.
Both of these ambiguities arise because a single differential cross section at a single energy cannot uniquely determine the potential parameters.
The continuous ambiguity describes strong correlations between certain model parameters \cite{hodgson1971, vernotte_optical}. A well-known example is the relation between the real volume depth, $V$, and the corresponding radius, $r_0$.
The relation has an approximate analytical form given by $Vr_0^n = \textnormal{const}$, where the exponent $n \approx 1.14$ and the constant can vary depending on the reaction or chosen optical model \cite{vernotte_optical}.
The continuous ambiguity can be remedied in part by a global analysis of the potential parameters
across a wide range of mass numbers and reaction energies, as noted in the comprehensive analysis of
proton and neutron scattering in Ref.~\cite{varner} and for $^3$He and $t$ scattering in Ref.~\cite{pang_global}.
Since the present analysis will be limited to a single elastic scattering data set, the model must be prepared to deal
with these parameter correlations.
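To make the continuous ambiguity concrete, the relation $Vr_0^n = \textnormal{const}$ can be used to generate a family of nearly equivalent depth-radius pairs. A sketch with a hypothetical starting point, purely for illustration:
\begin{verbatim}
import numpy as np

n = 1.14                 # approximate exponent from the literature
V0, r0 = 100.0, 1.20     # hypothetical depth (MeV) and radius (fm)
const = V0 * r0 ** n

# Radii across a plausible range map onto depths that produce nearly
# indistinguishable elastic scattering fits.
for r in np.linspace(1.05, 1.35, 7):
    print(f"r0 = {r:.2f} fm  ->  V = {const / r ** n:6.2f} MeV")
\end{verbatim}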
The discrete ambiguity arises in optical model analysis due to the identical
phase shifts that are produced by different values of $V$ \cite{drisko_1963}.
This multi-modal behavior is perhaps the more problematic of the two ambiguities
since parameter correlation can be handled with standard statistical methods. However, interpretation
of uncertainties in a multi-modal problem requires care beyond standard credibility intervals.
The discrete families of parameters can be readily identified by the volume integral of the real potential:
\begin{equation}
\label{eq:j_int}
J = \frac{4 \pi}{A_{P}A_{T}} \int_0^{\infty} Vf(r; r_0, a_0) r^2 dr ,
\end{equation}
where the mass numbers of the projectile and target, $A_P$ and $A_T$, respectively, ensure
that $J$ is roughly constant for a family of potential parameters at a single energy.
Microscopic structure models such as the folding model can also be used to calculate $J$,
and the theoretical value can be used to identify the physical potential family \cite{daehnick_global}.
Trusting the efficacy of this method, the approach for this work is to adopt potential depths from global fits and to keep the prior distributions concentrated around these starting depths.
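For reference, Eq.~\ref{eq:j_int} is simple to evaluate numerically for a Woods-Saxon form factor. A sketch, assuming the common convention $R = r_0 A_T^{1/3}$ (other conventions exist) and illustrative parameter values:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def volume_integral(V, r0, a0, A_P, A_T):
    """Volume integral per nucleon pair for a Woods-Saxon form factor,
    f(r) = 1 / (1 + exp((r - R)/a0)) with R = r0 * A_T**(1/3)."""
    R = r0 * A_T ** (1.0 / 3.0)
    integrand = lambda r: V * r ** 2 / (1.0 + np.exp((r - R) / a0))
    # The integrand is negligible well beyond the nuclear surface.
    integral, _ = quad(integrand, 0.0, R + 20.0 * a0)
    return 4.0 * np.pi * integral / (A_P * A_T)

# Illustrative: a deuteron potential on a mass-70 target.
print(volume_integral(V=86.76, r0=1.17, a0=0.75, A_P=2, A_T=70),
      "MeV fm^3")
\end{verbatim}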
\subsection{Global Potential Selection}
The {initial} potentials used for the analysis of $^{70}$Zn$(d, ^3\textnormal{He}) ^{69}$Cu before inference can be found in Table~\ref{tab:opt_parms}.
In order to facilitate comparison with Ref.~\cite{pierre_paper}, the same global potentials have been used. These potentials are the Daehnick-F global $d$ optical model \cite{daehnick_global}
and the Becchetti and Greenlees global $^3$He model of Ref.~\cite{b_g_3he}. It is also worth noting that elastic scattering with an
unpolarized beam does not provide a constraint on the parameters of a spin-orbit potential, so all spin-orbit terms have been held fixed
in the current work \cite{hodgson1994, daehnick_global, thompson_nunes_2009}.
The bound state geometric parameters are assigned their most commonly used value of $r_0=1.25$ fm and $a_0=0.65$ fm,
with the volume potential depth adjusted to reproduce the binding energy of the final state \cite{perey_params,hodgson1971,bjork_params}.
The bound state spin-orbit volume depth was fixed at a value of $V_{so} = 8.66$ MeV in order to
approximately correspond to the condition $\lambda = 25$, where $\lambda \sim \frac{180 V_{so}}{V}$,
evaluated with the value of $V$ for the ground state.
\begin{table*}[ht]
\centering
\begin{threeparttable}[e]
\caption{\label{tab:opt_parms}Optical potential parameters used in this work before inference.}
\setlength{\tabcolsep}{4pt}
\begin{tabular}{ccccccccccccccc}
\toprule[1.0pt]\addlinespace[0.3mm] Interaction & $V$ & $r_{0}$ & $a_{0}$ & $W$ & $W_{s}$ & $r_{i}$ & $a_{i}$ & $r_{c}$ & $V_{so}$ \\
& (MeV) & (fm) & (fm) & (MeV) & (MeV) & (fm) & (fm) & (fm) & (MeV)\\ \hline\hline\addlinespace[0.6mm]
$d$ $+$ $^{70}$Zn\tnote{a} & $86.76$ & $1.17$ & $0.75$ & $0.90$ & $11.93$ & $1.32$ & $0.81$ & $1.30$ & $6.34$ & \\
\hspace{0.15cm} $^{3}$He $+ ^{69}$Cu \tnote{b} & $156.5$ & $1.20$ & $0.72$ & $42.2$ & &$1.40$ & $0.86$ & $1.25$ & \\
\hspace{0.1cm}$p$ $+$ $^{69}$Cu & \tnote{c} & 1.25 & 0.65 & & & & & 1.25 & 8.66 & \\[0.2ex]
\bottomrule[1.0pt]
\end{tabular}
\begin{tablenotes}
\item[a] Global potential of Ref. \cite{daehnick_global}.
\item[b] Global potential of Ref. \cite{b_g_3he}.
\item[c] Adjusted to reproduce binding energy of the final state.
\end{tablenotes}
\end{threeparttable}
\end{table*}
\subsection{Bayesian Model}
\label{sec:model}
Following the above discussion and considerations, it is time to define the Bayesian model for this problem. The model will perform fits of each excited state simultaneously with the elastic scattering data. Again, using a Bayesian approach means each
parameter, whether from the optical model potentials or otherwise, has to be assigned a prior probability distribution.
Additionally, likelihood functions will need to be assigned for the data in both the elastic and transfer channels.
For this problem, only three distributions are used: normal, half-normal, and uniform. A half-normal distribution is equivalent to a normal distribution with $\mu=0$ and restricted to the interval $[0, \infty)$; thus, it only has one free parameter, which is the variance.
It is written as $\textnormal{HalfNorm}(\sigma^2)$. The uniform distribution will be given by its lower and upper limits, written
as $\textnormal{Uniform}(\textnormal{Lower}, \textnormal{Upper})$. This distribution gives equal probability to every value between the lower and upper limits.
The majority of parameters come from the optical model potentials. The elastic scattering data from $^{70} \textnormal{Zn} (d,d)$ should be able
to inform the posteriors for the entrance channel parameters, $\boldsymbol{\mathcal{U}}_{\textnormal{Entrance}}$. However, the ambiguities discussed in Section~\ref{sec:amb_pots},
combined with the lack of data at angles higher than $\theta_{c.\!m.}= 50^{\circ}$, mean that the priors for the entrance channel must be weakly informative. In order to accomplish this, their radius and diffuseness parameters are focused around a reasonable range for both the real and imaginary potentials. If it is assumed that physical values for these parameters tend to lie within $r = 1.0-1.5$ fm and $a=0.52-0.78$ fm, then the priors can be constructed to favor these values. This is accomplished by assigning normal distributions with means $\mu_r = 1.25$ fm and $\mu_a=0.65$ fm and variances $\sigma^2_r = (0.20 \, \mu_r)^2$ and $\sigma^2_a = (0.20 \, \mu_a)^2$. These priors have $68 \% $ credibility intervals that are equivalent to $r = 1.0-1.5$ fm and $a=0.52-0.78$ fm, and importantly do not exclude values that lie outside of these ranges. This means that if the data are sufficiently informative, they can pull the values away from these ranges, but in the absence of strong evidence, the priors will bias the parameters toward their expected physical values. The depths of the potentials were also assigned standard deviations of $20 \% $ of their global depths, which favors the mode assigned by the global analysis and was found to be sufficiently restrictive to eliminate the discrete ambiguity. These conditions are summarized in the prior:
\begin{equation}
\label{eq:entrance_prior}
\boldsymbol{\mathcal{U}}_{\textnormal{Entrance}} \sim \mathcal{N}(\mu_{\textnormal{central}, k}, \{0.20 \, \mu_{\textnormal{central}, k}\}^2),
\end{equation}
where the symbol \say{central} refers to the global values for the depths and the central physical values of $r=1.25$ fm and $a=0.65$ fm defined above, and the index $k$ runs over the depth, radius and diffuseness parameters for the real and imaginary parts of the potential.
The exit channel, as opposed to the entrance channel, does not have elastic scattering data to constrain it directly. This means that informative priors based on a global analysis must be used, while also considering a reasonable range of values. Normal priors are used, again to avoid sharp boundaries on the values, with the global values of Table~\ref{tab:opt_parms} as the location parameters, and the scale parameter set to $\sigma^2 = (0.10 \, \mu)^2$. Values are thus focused around those of the global model, but are allowed a moderate amount of variation.
The prior choice can be stated:
\begin{equation}
\label{eq:exit_prior}
\boldsymbol{\mathcal{U}}_{\textnormal{Exit}} \sim \mathcal{N}(\mu_{\textnormal{global}, k}, \{0.10 \, \mu_{\textnormal{global}, k}\}^2),
\end{equation}
with the \say{global} label referring to the values of Table~\ref{tab:opt_parms} and $k$ labeling each of the potential parameters for the exit channel.
At this point, it is worth emphasizing that the potential priors for both the entrance and exit potentials are essentially arbitrary. The $20 \%$ and $10 \%$ variations for the parameters are meant to make this computation tractable, since it is impossible with the limited amount of data to uniquely determine the parameters, as discussed in Section~\ref{sec:amb_pots}. The influence of this choice on the entrance channel is limited since there are data to inform the parameters. However, the choice of $10 \%$ for the exit channel will influence our final calculated uncertainties. Lower or higher amounts of variation could be considered for these parameters, but a choice has to be made in order to account for their impact on DWBA calculations. I have also chosen to exclude variations in the spin-orbit and bound state potentials. However, the possible impact of the bound state potentials will be discussed later in this chapter.
Since this model treats $C^2S$ as another parameter to be estimated, a prior must be specified. It has been assigned the mildly informative prior:
\begin{equation}
\label{eq:cs_prior}
C^2S \sim \textnormal{HalfNorm}(n_{nucleon}^2),
\end{equation}
where $n_{nucleon}$ is the number of nucleons occupying the orbital that is involved in the transfer. The half-normal distribution ensures that $C^2S \geq 0$, while the scale parameter comes from the sum rules of Macfarlane and French \cite{sum_rules}. These rules have been found experimentally to be a robust constraint \cite{sum_rule_test}. However, it is likely that this prior is more conservative than necessary, since it is not expected that a single state will contain the entirety of the strength for a given shell. These considerations are just meant to provide a rough estimate to help construct the prior for $C^2S$.
The use of the zero-range approximation for the transfer channels also comes with an additional uncertainty from the strength parameter, $D_0$, as discussed above.
Our model explicitly accounts for this $15 \%$ uncertainty by using a parameter $\delta D_0^2$, which is assigned a normal and informative prior:
\begin{equation}
\label{eq:d0}
\delta D_0^2 \sim \mathcal{N}(1.0, 0.15^2).
\end{equation}
Two additional parameters are also introduced that are not a part of DWBA, but are instead meant to account for deficiencies in the reaction theory. The first is a normalization parameter, $\eta$, which allows for the adjustment of the theoretical predictions for both the elastic and transfer cross sections based on any observed normalization difference between the elastic channel data and optical model calculations. This can, in principle, be seen as treating the absolute scale of the data as arbitrary, which prevents biasing the potential parameters towards
unphysical values if a systematic difference is present. The posterior for this parameter will only be directly informed by the elastic data of the entrance channel, but will directly influence the posterior for $C^2S$. Since $\eta$ is multiplicative in nature, we do not want to bias it towards values less than or greater than $1$. This is done by introducing a
parameter, $g$, which is uniformly distributed according to:
\begin{equation}
\label{eq:g_uni}
g \sim \textnormal{Uniform}(-1, 1).
\end{equation}
$\eta$ is then defined as:
\begin{equation}
\label{eq:eta}
\eta = 10^{g}.
\end{equation}
Collecting all of these factors, the DWBA predictions can now be written at each angle $i$ as:
\begin{equation}
\label{eq:cs_full}
\frac{d \sigma}{d \Omega}^{\prime}_{\textnormal{DWBA}, i} = \eta \times \delta D_0^2 \times C^2S \times \frac{d \sigma}{d \Omega}_{\textnormal{DWBA}, i}.
\end{equation}
The second additional parameter comes from the consideration that DWBA provides only an approximation to the true transfer cross section. If only the measured experimental uncertainties from the transfer channel are considered, then any deviation from DWBA will significantly influence the posteriors for the potential parameters. This is remedied by introducing an additional theoretical uncertainty, $\sigma_{\textnormal{theory}, i}$, where the index $i$ references the angle at which the differential cross section is evaluated. Because cross sections vary over many orders of magnitude as a function of angle and are manifestly positive, uncertainties are best treated as percentages. Thus, this additional uncertainty is defined as a percentage uncertainty on the theoretical cross section, based on a single unknown parameter, $f$. The total uncertainty at an angle is:
\begin{equation}
\label{eq:unc}
\sigma_i^{\prime 2} = \sigma_{\textnormal{Transfer}, i}^2 + \bigg(f\frac{d \sigma}{d \Omega}^{\prime}_{\textnormal{DWBA}, i}\bigg)^2.
\end{equation}
Here, $\frac{d \sigma}{d \Omega}^{\prime}_{\textnormal{DWBA}, i}$ is as defined in Eq.~\ref{eq:cs_full}, $\sigma_{\textnormal{Transfer}, i}^2$ is the experimental statistical uncertainty, and the adjusted uncertainty, $\sigma_i^{\prime 2}$,
assumes that the experimental and theoretical uncertainties are independent. Since $f$ is some fractional amount of the predicted cross section, it is assigned the weakly informative prior:
\begin{equation}
\label{eq:f}
f \sim \textnormal{HalfNorm}(1),
\end{equation}
so that it is biased towards values less than $1$.
Finally, the likelihood functions for the experimental data must also be specified. The analysis of each excited state will
require two likelihood functions, one each for the elastic and transfer data. These likelihood functions use the normal distribution, and
take the form:
\begin{equation}
\label{eq:likelihood}
\frac{d \sigma}{d \Omega}_{\textnormal{Exp}, i} \sim \mathcal{N}\bigg(\frac{d \sigma}{d \Omega}_{\textnormal{Theory}, i}, \sigma_{\textnormal{Exp}, i}^2\bigg),
\end{equation}
where $i$ again refers to a specific angle. The above expression assumes that the residuals between the experimental cross section and the ones calculated from theory are distributed normally.
Taking into account all of the considerations and definitions listed above, the full Bayesian model can be written.
Experimental elastic scattering data are identified by the label \say{Elastic}, and the transfer data are labeled \say{Transfer}. The theoretical differential cross sections calculated with FRESCO are written $\frac{d \sigma}{d \Omega}_{\textnormal{Optical}, j}$ for elastic scattering and $\frac{d \sigma}{d \Omega}_{\textnormal{DWBA}, i}$ for the transfer reaction. The indices $i$ and $j$ refer to the transfer and elastic angles, respectively. The model is, thus:
\begin{align}
\label{eq:model}
& \textnormal{Priors:} \nonumber \\
& \boldsymbol{\mathcal{U}}_{\textnormal{Entrance}} \sim \mathcal{N}(\mu_{\textnormal{central}, k}, \{0.20 \, \mu_{\textnormal{central}, k}\}^2) \nonumber \\
& \boldsymbol{\mathcal{U}}_{\textnormal{Exit}} \sim \mathcal{N}(\mu_{\textnormal{global}, k}, \{0.10 \, \mu_{\textnormal{global}, k}\}^2) \nonumber \\
& f \sim \textnormal{HalfNorm}(1) \nonumber \\
& \delta D_0^2 \sim \mathcal{N}(1.0, 0.15^2) \nonumber \\
& C^2S \sim \textnormal{HalfNorm}(n_{nucleon}^2) \nonumber \\
& g \sim \textnormal{Uniform}(-1, 1) \nonumber \\
& \textnormal{Functions:} \\
& \eta = 10^{g} \nonumber \\
& \frac{d \sigma}{d \Omega}^{\prime}_{\textnormal{Optical}, j} = \eta \times \frac{d \sigma}{d \Omega}_{\textnormal{Optical}, j} \nonumber \\
& \frac{d \sigma}{d \Omega}^{\prime}_{\textnormal{DWBA}, i} = \eta \times \delta D_0^2 \times C^2S \times \frac{d \sigma}{d \Omega}_{\textnormal{DWBA}, i} \nonumber \\
& \sigma_i^{\prime 2} = \sigma_{\textnormal{Transfer}, i}^2 + \bigg(f\frac{d \sigma}{d \Omega}^{\prime}_{\textnormal{DWBA}, i}\bigg)^2 \nonumber \\
& \textnormal{Likelihoods:} \nonumber \\
& \frac{d \sigma}{d \Omega}_{\textnormal{Transfer}, i} \sim \mathcal{N}\bigg(\frac{d \sigma}{d \Omega}^{\prime}_{\textnormal{DWBA}, i}, \sigma_i^{\prime \, 2}\bigg) , \nonumber \\
& \frac{d \sigma}{d \Omega}_{\textnormal{Elastic}, j} \sim \mathcal{N}\bigg(\frac{d \sigma}{d \Omega}^{\prime}_{\textnormal{Optical}, j}, \sigma_{\textnormal{Elastic}, j}^2\bigg) , \nonumber
\end{align}
where the $k$ index runs over each of the potential parameters.
It should also be noted that the applicability of DWBA requires that the reaction is dominated
by a direct reaction mechanism occurring at the nuclear surface. Thus, transfer data must be collected at intermediate laboratory energies to suppress the contributions of isolated resonances, and at low angles to ensure a surface-dominated reaction. Failure to adhere to these principles could introduce additional uncertainties into the extraction of $C^2S$. Practically, this work follows the suggestion of Ref.~\cite{thompson_nunes_2009} and only fits the transfer data up to the first observed minimum in the data.
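To make the structure of Eq.~\ref{eq:model} concrete, a minimal sketch of the corresponding log-posterior, as it might be written for \texttt{emcee}, is given below. The function \texttt{dwba\_cross\_sections} is a hypothetical stand-in for a call to a reaction code such as FRESCO, and the parameter packing and global values are likewise illustrative, not the implementation used in this work:
\begin{verbatim}
import numpy as np
from scipy.stats import halfnorm, norm, uniform

# Hypothetical global potential values (depths, radii, diffusenesses);
# the ordering and lengths are illustrative only.
MU_ENTRANCE = np.array([86.76, 1.17, 0.75, 0.90, 11.93, 1.32, 0.81])
MU_EXIT = np.array([156.5, 1.20, 0.72, 42.2, 1.40, 0.86])

def dwba_cross_sections(entrance, exit_, ang_el, ang_tr):
    """Hypothetical stand-in for a FRESCO (or wrapper) call returning the
    elastic and transfer differential cross sections at the data angles."""
    return np.ones_like(ang_el), np.ones_like(ang_tr)

def log_posterior(theta, data_el, sig_el, data_tr, sig_tr,
                  ang_el, ang_tr, n_nucleon):
    n_ent, n_ext = len(MU_ENTRANCE), len(MU_EXIT)
    entrance = theta[:n_ent]
    exit_ = theta[n_ent:n_ent + n_ext]
    f, d_d0_sq, c2s, g = theta[n_ent + n_ext:]

    # Priors of Eq. (model); note scipy's scale is sigma, not sigma^2.
    lp = norm.logpdf(entrance, MU_ENTRANCE, 0.20 * MU_ENTRANCE).sum()
    lp += norm.logpdf(exit_, MU_EXIT, 0.10 * MU_EXIT).sum()
    lp += halfnorm.logpdf(f, scale=1.0)
    lp += norm.logpdf(d_d0_sq, 1.0, 0.15)
    lp += halfnorm.logpdf(c2s, scale=n_nucleon)
    lp += uniform.logpdf(g, loc=-1.0, scale=2.0)
    if not np.isfinite(lp):
        return -np.inf

    # Deterministic functions of Eq. (model).
    eta = 10.0 ** g
    xs_el, xs_tr = dwba_cross_sections(entrance, exit_, ang_el, ang_tr)
    xs_el = eta * xs_el
    xs_tr = eta * d_d0_sq * c2s * xs_tr
    sig_tr_adj = np.sqrt(sig_tr ** 2 + (f * xs_tr) ** 2)

    # Likelihoods of Eq. (model).
    return (lp + norm.logpdf(data_el, xs_el, sig_el).sum()
               + norm.logpdf(data_tr, xs_tr, sig_tr_adj).sum())
\end{verbatim}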
\subsection{Posterior and Evidence Estimation}
It is clear from Eq.~\ref{eq:model} that our Bayesian model lives in a high dimensional space, which
presents a difficult challenge for all MCMC algorithms as discussed before. In particular, traditional Metropolis-Hastings samplers require tuning of the step proposals for each dimension. The problem of tuning the proposals is avoided with the Affine Invariant Ensemble sampler of Goodman and Weare \cite{ensemble_mcmc}.
This method uses an ensemble of random walkers to
sample the posterior, and has been designed to perform well with linearly correlated
parameters. We use the Python package \texttt{emcee} to implement the algorithm \cite{emcee}.
Using \texttt{emcee} with the so-called stretch move requires only a single parameter, $a$, to be specified \cite{emcee}.
The posteriors for each state are estimated using an ensemble of $400$ walkers which take $> 4000$ steps.
Burn-in periods were found to take approximately $1000$ steps.
Final parameter estimates are taken from the final $2000$ steps, which are then thinned by $50$ in order to
give $1.6 \times 10^4$ samples. The autocorrelation in the samples before thinning was estimated to be roughly $400$ steps. $2000$ steps would then contain 5 autocorrelation lengths, with each length yielding one independent sample per walker. This means $\approx 2000$ independent samples are drawn from the posterior, ensuring that the statistical fluctuations of the sampling are negligible compared to the uncertainties in the posteriors. Thinning was only used to reduce the number of samples and thereby ease subsequent calculations such as the credibility intervals for the differential cross sections.
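In \texttt{emcee}, this post-processing amounts to something like the following; the log-probability here is a self-contained toy so the sketch runs on its own, whereas in the analysis it would be the DWBA posterior:
\begin{verbatim}
import numpy as np
import emcee

ndim, nwalkers, nsteps = 3, 400, 4000

# Toy log-probability (a standard normal) in place of the DWBA posterior.
log_prob = lambda x: -0.5 * np.sum(x ** 2)

p0 = 1e-2 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, nsteps, progress=False)

# Integrated autocorrelation time, used to judge burn-in and thinning.
print(sampler.get_autocorr_time(tol=0))

# Discard burn-in, thin, and flatten across walkers:
# (4000 - 2000) / 50 * 400 = 16000 samples, as quoted in the text.
flat_samples = sampler.get_chain(discard=2000, thin=50, flat=True)
print(flat_samples.shape)
\end{verbatim}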
MCMC methods draw samples directly from the posterior distribution which allows parameter estimation, but
they do not allow a straightforward estimation of the evidence integral.
The model selection necessary to assign $\ell$ values requires the calculation of
Eq.~\ref{eq:model_prob}. Monte Carlo integration techniques solve the issue of calculating $Z$, but essentially reverse the previous issue by placing a diminished focus on the calculation of the posterior distributions. Thus, separate calculations have to be carried out for the two tasks of parameter estimation (spectroscopic factors)
and model selection ($\ell$ assignment). The evidence calculations presented here are carried out using the nested sampling
procedure introduced by Skilling \cite{skilling2006, skilling2004}, as
implemented in the \texttt{dynesty} Python package \cite{speagle2019dynesty}.
For this work, all nested sampling runs used $250$ live points bounded with multiple ellipsoids and updates performed through slice sampling.
The stopping criterion was set at $\Delta Z_i < 0.01$. Since nested sampling is subject to statistical uncertainties in $\ln Z$, it is necessary to propagate these uncertainties through to both $B_{ij}$ and the probabilities for each $\ell$ transfer defined by Eq.~\ref{eq:model_prob}. This was done by drawing $10^6$ random samples from the Gaussian distributions for each $\ln Z_i$, and then applying either Eq.~\ref{eq:model_prob} or Eq.~\ref{eq:bayes_factor} to each sample, yielding a set of samples for each quantity. From these samples, the $68 \%$ credibility intervals are reported, which are constructed from the 16th, 50th, and 84th percentiles.
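A hedged sketch of this procedure with \texttt{dynesty}, pairing the sampler settings just described with the Monte Carlo propagation of the $\ln Z$ uncertainties; the likelihood and the $\ln Z$ means and sigmas below are toy placeholders:
\begin{verbatim}
import numpy as np
from dynesty import NestedSampler
from scipy.special import logsumexp

# Toy problem so the sketch runs; in the analysis the likelihood wraps
# the DWBA calculation for a given l transfer.
ndim = 2
loglike = lambda x: -0.5 * np.sum(x ** 2)
prior_transform = lambda u: 10.0 * u - 5.0  # unit cube -> [-5, 5]

sampler = NestedSampler(loglike, prior_transform, ndim,
                        nlive=250, bound='multi', sample='slice')
sampler.run_nested(dlogz=0.01, print_progress=False)
lnz, lnz_err = sampler.results.logz[-1], sampler.results.logzerr[-1]

# Propagate the ln Z uncertainties: draw Gaussian samples for each model
# (placeholder means/sigmas) and apply the model-probability formula
# sample by sample.
lnz_mean = np.array([3.86, 10.66, 9.96, 13.43])
lnz_sig = np.array([0.33, 0.36, 0.36, 0.35])
draws = np.random.normal(lnz_mean, lnz_sig, size=(10**6, 4))
probs = np.exp(draws - logsumexp(draws, axis=1, keepdims=True))
print(np.percentile(probs, [16, 50, 84], axis=0))
\end{verbatim}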
\section{Analysis of $^{70} \textnormal{Zn}(d, ^{3} \textnormal{He})^{69} \textnormal{Cu}$}
\label{sec:results}
The Bayesian model detailed above allows the extraction of spectroscopic factors and the assignment of $\ell$ values
to transfer reaction data, while taking into account uncertainties associated with the optical potentials.
In order to test these methods, a reanalysis of the $^{70}$Zn$(d, ^{3}$He$)^{69}$Cu reaction data originally presented in Ref.~\cite{pierre_paper} was performed. For reference, data were collected by impinging a $27$-MeV deuteron beam onto
a thin target of enriched $^{70}$Zn. The reaction products were measured with a magnetic spectrograph. The original study should be referred to for complete experimental details. This reaction and the measured data set have two important conditions that simplify our study. First, since $^{70} \textnormal{Zn}$ has a $0^{+}$ ground state, only a unique $\ell$ transfer is allowed for a given final state. Second, only 8 low lying bound states were observed, meaning no additional
theoretical model is needed for treating transfers to the continuum. The results of the MCMC calculations
are summarized in Table~\ref{tab:cs}. Comparisons are made to the original values of the zero-range and finite-range calculations of the previous work.
Plots of the DWBA cross sections generated from the MCMC calculations are shown in Fig.~\ref{fig:states}. The purple and blue bands show the $68 \%$ and $95 \%$ credibility bands, respectively. Using samples directly from the Markov chain means that these credibility bands accurately account for all of the correlations present between the parameters. Each of these states will now be discussed in detail, with additional calculation details provided for the ground state in order to demonstrate
the use of our Bayesian method.
\begin{table}
\centering
\begin{threeparttable}
\setlength{\tabcolsep}{4pt}
\caption{\label{tab:cs} Summary of the spectroscopic factors derived in this work. {Comparisons to the zero-range (ZR) and finite-range (FR) calculations of} Ref.~\cite{pierre_paper} are made. {All calculations use the same bound state parameters}.}
\begin{tabular}{cccccc}
\addlinespace[0.5mm]
\\ \hline \hline
\\ [-1.0 ex]
$E_x$(MeV) & $\ell$ & $J^{\pi}$ \tnote{a} & $C^2S(ZR)$ \cite{pierre_paper} & $C^2S(FR)$ \cite{pierre_paper} & $C^2S$(This work) \\ [-.5ex]
\\ \hline
\\ [-1.5ex]
$0.0$ & $1$ & $3/2^{-}$ & $1.40(15)$ & $1.50(17)$ & $2.06^{+0.87}_{-0.68}$ \\ [0.8ex]
$1.11$ & $1$ & $1/2^{-}$ & - & $0.35(11)$ & $0.48^{+0.52}_{-0.25}$ \\ [0.8ex]
$1.23$ & $3$ & $(5/2^{-})$ & $0.80(11)$& $0.70(10)$ & $1.10^{+0.81}_{-0.48}$ \\ [0.8ex]
$1.71$ & $3$ & $7/2^{-}$ & $2.00(11)$ & $2.50(14)$ & $2.37^{+1.36}_{-0.84}$ \\ [0.8ex]
$1.87$ & $3$ & $7/2^{-}$ & $0.40(10)$ & $0.50(10)$ & $1.07^{+0.93}_{-0.51}$ \\ [0.8ex]
$3.35$ & $3$ & $(7/2^{-})$ & $1.60(10)$ & $2.40(15)$ & $2.67^{+1.83}_{-1.06}$ \\ [0.8ex]
\multirow{2}{*}{$3.70$} & $2$ & $(3/2^{+})$ & $1.90(25)$ & $1.50(20)$ & $1.74^{+1.05}_{-0.62}$ \\ [0.8ex]
& $3$ & $(7/2^{-})$ & - & - & $2.90^{+2.75}_{-1.43}$ \\ [0.8ex]
$3.94$ & $0$ & $1/2^{+}$ & $0.70(6)$ & $0.70(10)$ & $1.03^{+0.71}_{-0.44}$ \\ [-1.5ex]
\\ \hline \hline
\end{tabular}
\begin{tablenotes}
\item[a] These assignments are discussed in depth in Section \ref{sec:gs_sec} through Section \ref{sec:394_sec}.
\end{tablenotes}
\end{threeparttable}
\end{table}
\afterpage{
\clearpage
\null
\hspace{0pt}
\vfill
\captionof{figure}{The DWBA calculations for the states of $^{69}$Cu. The $68 \%$ and $95 \%$ credibility intervals are shown in purple and blue, respectively. Only data points up to the first minimum were considered, and they are shown in orange. For the $3.70$ MeV state, the $68 \%$ bands are shown for the two most likely $\ell$
transfers.}
\label{fig:states}
\vfill
\newpage
\clearpage
\begin{figure}
\ContinuedFloat\centering
\captionsetup[subfigure]{labelformat=empty}
\vspace{-1\baselineskip}
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=\textwidth]{Chapter-5/figs/gs_fit_new_deg.png}
\caption{\label{fig:gs_fit}}
\end{subfigure}
\vspace{-1\baselineskip}
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=\textwidth]{Chapter-5/figs/111_fit_deg.png}
\vspace{-1\baselineskip}
\caption{\label{fig:111_fit}}
\end{subfigure}
\vspace{-1\baselineskip}
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=\textwidth]{Chapter-5/figs/123_fit_deg.png}
\vspace{-1\baselineskip}
\caption{\label{fig:123_fit}}
\end{subfigure}
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=\textwidth]{Chapter-5/figs/171_fit_deg.png}
\vspace{-1\baselineskip}
\caption{\label{fig:171_fit}}
\end{subfigure}
\vspace{-1\baselineskip}
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=\textwidth]{Chapter-5/figs/187_fit_deg.png}
\vspace{-1\baselineskip}
\caption{\label{fig:187_fit}}
\end{subfigure}
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=\textwidth]{Chapter-5/figs/335_fit_deg.png}
\vspace{-1\baselineskip}
\caption{\label{fig:335_fit}}
\end{subfigure}
\end{figure}
\begin{figure}[t]
\ContinuedFloat\centering
\captionsetup[subfigure]{labelformat=empty}
\begin{subfigure}{0.45\textwidth}
\includegraphics[width=\textwidth]{Chapter-5/figs/370_fit_deg.png}
\vspace{-1\baselineskip}
\caption{\label{fig:370_fit}}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\includegraphics[width=\textwidth]{Chapter-5/figs/394_fit_deg.png}
\vspace{-1\baselineskip}
\caption{\label{fig:394_fit}}
\end{subfigure}
\vspace{-1.8\baselineskip}
\end{figure}
\clearpage
}
\newpage
\subsection{The Ground State}
\label{sec:gs_sec}
The MCMC calculations for the ground state were carried out using $8000$
steps and $400$ walkers in the ensemble. As an example, the
trace plot for the value of $C^2S$ as a function of step is provided in Fig.~\ref{fig:steps}.
Parameter values were estimated by using the last $2000$ steps and thinning by $50$.
\begin{figure}
\centering
\includegraphics[width=.6\textwidth]{Chapter-5/figs/ground_state_trace.png}
\caption{\label{fig:steps} Trace of the MCMC walkers as a function of step and $\ln(C^2S)$. Only the last 2000 steps were used for the posteriors.}
\end{figure}
As noted before, all of the MCMC calculations simultaneously fit the elastic scattering and transfer data. This means that the posterior distributions shown in Fig.~\ref{fig:gs_corner} are functions of both the elastic and ground state transfer data. The impact of the choice of potential parameters and the scale parameter $\eta$ on the elastic fit is quite dramatic. If the global values in Table~\ref{tab:opt_parms} were adopted without adjusting any parameters, the agreement between theory and experiment would be quite poor, as shown by the dashed black line in Fig.~\ref{fig:elastic_mcmc}. It should also be noted that the experimental uncertainties for these points are roughly $10 \%$. On the other hand, the purple and blue bands in Fig.~\ref{fig:elastic_mcmc} show the fit obtained using the Bayesian model, which quite clearly provides a better description of the data. A significant difference is found between the normalization of the data and the optical model prediction, with $\eta$ deviating from unity by $\simeq 23 \%$.
\begin{figure}
\centering
\includegraphics[width=.6\textwidth]{Chapter-5/figs/elastic_comp_deg.png}
\caption{\label{fig:elastic_mcmc} Bayesian fit of the elastic data calculated simultaneously with the $0.00$ MeV state. The $68 \%$ and $95 \%$ credibility intervals are shown in purple and blue respectively, while the black dashed curve was calculated using the global values from Table~\ref{tab:opt_parms}.}
\end{figure}
If the model behaves as expected, the parameter correlations should display the continuous ambiguity discussed in Sec.~\ref{sec:amb_pots}. The pair-wise correlation plots in Fig.~\ref{fig:gs_corner} show the posterior samples from the entrance (top) and exit (bottom) channel potentials and how they relate to those of $g$, $C^2S$, $\delta D_0$, and $f$. The intra-potential correlations are quite striking for the entrance channel. All of the real potential parameters, $V, r_0,$ and $a_0$, show strong correlations with one another, with slightly weaker correlations between $V$, $r_0$, $r_i$, and $W_s$. A strong relationship also exists between $a_i$ and $W_s$, which is another known continuous ambiguity \cite{perey_perey}. The situation is much different for the exit potentials, where almost no intra-potential correlations
are seen. This result is expected since there are no elastic scattering data to constrain these parameters and because the Bayesian model parameter $f$ limits the amount of information that can be drawn from the transfer channel data. However, there is a surprisingly strong relationship between the exit channel imaginary radius and $C^2S$. A similar relationship can be seen with the entrance channel imaginary radius, but the effects on $C^2S$ are dramatically less.
The results of the fit for the ground state are shown in Fig.~\ref{fig:gs_fit}.
The circular orange data points were the only data considered in the fit in order to
not bias our deduced spectroscopic factor as discussed in Section~\ref{sec:model}.
The ground state of $^{69}$Cu is known to have a spin-parity of $\frac{3}{2}^-$, so the
transfer was calculated assuming a $2p_{3/2}$ state.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Chapter-5/figs/ground_state_corner_entrance_initial.pdf}
\includegraphics[width=\textwidth]{Chapter-5/figs/ground_state_corner_exit.pdf}
\caption{\label{fig:gs_corner} The pair-wise correlation plots for the ground state transfer. The top plot shows the entrance potential parameters, while the bottom shows the exit channel parameters. Both channels are compared to the model parameters as defined in Eq.~\ref{eq:model}. The $68 \%$ credibility intervals are listed at the top of each column with the dashed lines showing their relationship to the $1$-D parameter distributions. The plots were generated using the Python package \texttt{corner} \cite{Foreman-Mackey2016}.}
\end{figure*}
\subsection{The 1.11-MeV State}
The $1.11$-MeV state was only seen at four angles. Furthermore, only the first two data points lie within the
first minimum. The $J^{\pi} = \frac{1}{2}^-$ assignment is based on the observed angular distributions of Ref.~\cite{69cu_orig} and the analyzing power measurement of Ref.~\cite{fay_69cu}. In order to check that the data analyzed in the current work are consistent with these conclusions, the evidence integrals were calculated for $\ell = 0, 1, 2$, and $3$ transfers using all the data points.
The data support an $\ell = 1$ transfer, but do not rule out an $\ell = 3$ transfer. For this case, the median Bayes factor defined in Eq.~\ref{eq:bayes_factor} is $B_{13} = 6.32$ (i.e., the fiftieth percentile of $Z_1/Z_3$), indicating that there is substantial evidence in favor of $\ell = 1$. Since the data are consistent with the $\ell$ assignments of Refs.~\cite{69cu_orig, fay_69cu}, the MCMC calculations were carried out assuming a $2p_{1/2}$ state. The results of this calculation are plotted in Fig.~\ref{fig:111_fit}.
\subsection{The 1.23-MeV State}
The state located at $1.23$ MeV is clearly associated with an $\ell = 3$
transfer. The previous analysis assumed a firm $J^{\pi} = \frac{5}{2}^-$; however,
the literature does not provide direct evidence for this. The analyzing power of Ref.~\cite{fay_69cu} was inconclusive, and the authors suggested the presence of a doublet based on the observed width of the peak in the spectrum. The $(d, ^3\textnormal{He})$ experiment of Ref.~\cite{69cu_orig} also suggested a doublet and noted the high spectroscopic factor obtained ($C^2S = 1.5$) if a $\frac{5}{2}^-$ assignment was assumed. Other studies have also assigned a firm $J^{\pi} = \frac{5}{2}^-$ \cite{franchoo_mono, coul_ex, beta_decay}, but it is unclear if these results are actually independent determinations, or if they follow Table II of Ref.~\cite{69cu_orig}. Therefore, the ENSDF evaluation is adopted \cite{a_69_ensdf}, which recommends $J^{\pi} = (\frac{5}{2}^-, \frac{7}{2}^-)$; only the $C^2S$ value for a $1f_{5/2}$ transfer is presented, with the fit shown in Fig.~\ref{fig:123_fit}.
\subsection{The 1.71 and 1.87-MeV States}
From the parity constraints of Refs.~\cite{69cu_orig, fay_69cu} and the $\gamma$-ray anisotropies observed in Ref.~\cite{deep_inelas}, a firm $J^{\pi} = \frac{7}{2}^-$ assignment has been made for the $1.71$-MeV state. The results from the DWBA fit for a $1f_{7/2}$ state are shown in Fig.~\ref{fig:171_fit}. The arguments for the $1.71$-MeV state also apply to the state at $1.87$ MeV. A firm
$J^{\pi} = \frac{7}{2}^-$ was assumed, and a fit for a $1f_{7/2}$ state is shown in Fig.~\ref{fig:187_fit}.
\subsection{The 3.35-MeV State}
The state at $3.35$ MeV was reportedly seen in Ref.~\cite{69cu_orig}, but no information was presented other than its possible existence. The previous analysis found an $\ell = 3$ nature to the angular distribution and made a tentative assignment of $J^{\pi} = (\frac{7}{2}^-)$. The current methods support this conclusion, as shown in Table~\ref{tab:probs}: $B_{3 \ell} > 10$ for all other $\ell$ transfers, indicating strong evidence for the $\ell = 3$ transfer. The probability that the final state was populated with an $\ell=3$ transfer is $P(\ell=3) = 91^{+3}_{-4} \%$. However, DWBA is still unable to discriminate between $J^{\pi} = (\frac{5}{2}^-, \frac{7}{2}^-)$. The fit assuming a $1f_{7/2}$ state is shown in Fig.~\ref{fig:335_fit}.
\begin{table*}[]
\centering
\caption{\label{tab:probs} Results of the model comparison calculations for the $3.35$ and $3.70$ MeV states. For each $\ell$ value, the table lists the $\log{Z}$ value calculated
with nested sampling, the median Bayes factor relative to the most likely transfer ($\ell=3$), and the probability of the transfer.}
\begin{tabular}{ccccc}
\\ [0.5ex] \hline \hline
\\ [-2.0ex]
& $\ell$ & $\log{Z}_{\ell}$ & $B_{3 \ell}$ & $P(\ell)$ \\ [0.5ex] \hline
\\ [-2.0ex]
\multirow{4}{*}{$E_x = 3.35 $ MeV} & 0 & 3.856(330) & $> 10^4$ & $< .01 \%$ \\ [1.0ex]
& 1 & 10.662(359) & $15.94$ & $6^{+3}_{-2} \%$ \\ [1.0ex]
& 2 & 9.961(363) & $32.14$ & $3^{+2}_{-1} \%$ \\ [1.0ex]
& 3 & 13.431(349) & 1.0 & $91^{+3}_{-4} \%$ \\
\\ [-5.0ex]
\\ \hline
\\ [-2.0ex]
\multirow{4}{*}{$E_x = 3.70 $ MeV} & 0 & 10.393(365) & $> 10^3$ & $ < 0.02 \%$ \\ [1.0ex]
& 1 & 14.947(351) & $45.98$ & $2^{+1}_{-1} \%$ \\ [1.0ex]
& 2 & 16.640(346) & $8.47$ & $10^{+5}_{-4} \%$ \\ [1.0ex]
& 3 & 18.776(336) & 1.0 & $88^{+4}_{-6} \%$ \\ [0.5ex] \hline \hline
\end{tabular}
\end{table*}
\subsection{The 3.70-MeV State}
The state at $3.70$ MeV was also seen for the first time in Ref.~\cite{pierre_paper}. However, the Bayesian method indicates an ambiguous $\ell$ assignment. As can
be seen in Fig.~\ref{fig:370_fit}, the measured angular distribution
is relatively flat and does not appear to differ
from other states with $\ell = 3$. An assignment of $\ell = 2$ was
nevertheless made in the previous analysis. Comparing the evidence integral for each case, it is found
that the data effectively rule out $\ell = 0$ and $1$, while supporting an
$\ell = 2$ or $3$ assignment. Looking at Table~\ref{tab:probs}, a Bayes factor of $B_{32} = 8.47$ is found for
$\ell = 3$ over $\ell=2$, which suggests substantial evidence in favor of the $\ell = 3$ assignment. Using Eq.~\ref{eq:model_prob}, the $68 \%$ credibility intervals for the probabilities are $P(\ell=3) = 88^{+4}_{-6} \%$ and $P(\ell=2) = 10^{+5}_{-4} \%$, with the uncertainties coming from the statistical uncertainties of the nested sampling evidence estimation. The KDEs for
the two dominant transfers are shown in Fig.~\ref{fig:l_comp} \cite{kde}. The fits for both $\ell = 2$ and $3$ are shown in Fig.~\ref{fig:370_fit}.
\begin{figure}
\centering
\includegraphics[width=.6\textwidth]{Chapter-5/figs/l_comp.png}
\caption{\label{fig:l_comp} The KDE representations of the probabilities of the $\ell=2$ and $3$ transfers for the $3.70$-MeV state.}
\end{figure}
\subsection{The 3.94-MeV State}
\label{sec:394_sec}
The $3.94$-MeV state was also observed for the first time in the previous study. The suggested $\ell = 0$ assignment was found to be supported by the data. The second most likely transfer was found to be $\ell = 1$. In this case, $B_{01} = 72.24$, indicating very strong evidence in favor of the $\ell = 0$ assignment. The transfer to a $2s_{1/2}$ state is shown in Fig.~\ref{fig:394_fit}.
\subsection{Spectroscopic Factors}
\label{sec:spec_factor_dis_cu}
The results of the previous sections merit closer examination, especially with regard to the spectroscopic factors. Comparing these results with
those previously obtained in Table~\ref{tab:cs}, two things are clear: our median values tend to be larger than those of Ref.~\cite{pierre_paper}, and
the uncertainties are much larger. To the first point, a majority of the shift comes from the lower value of $W_s$ used in the previous analysis.
Though not stated in Ref.~\cite{pierre_paper}, the surface potential was given a value of $W_s \approx 7.5$ MeV, which has the effect of lowering the value of $C^2S$.
The values of this work are on average higher due to the Bayesian analysis favoring $W_s = 11.93$ MeV and the inclusion of $\eta$, but these effects are somewhat offset by the posterior values of $r_i$ and $a_i$ being lower than their global values. To the second point, when all of the sources of uncertainty are included in the analysis, highly asymmetric and data-driven uncertainties on $C^2S$ ranging from $35\%$ to $108 \%$ are found. This is a substantial increase relative to the common assumption that the extraction of spectroscopic factors comes with an approximately $25 \%$ uncertainty \cite{endt_cs}. This may still be the case when the data are sufficiently informative, but the results of a single experiment should be viewed more conservatively. In particular, low angular coverage in the entrance channel elastic scattering data, the absence of any elastic scattering data in the exit channel, and transfer angular distributions with just a few points all play a role in the final uncertainty that can be reported for $C^2S$.
To gain a clearer picture of the role each potential plays in the final uncertainty, the calculations for the ground state were repeated for the following cases:
\begin{enumerate}
\item Uncertainty in just the entrance channel potential parameters.
\item Uncertainty in both the entrance and exit channel potential parameters.
\item Uncertainty in the entrance, exit, and bound state potential parameters.
\end{enumerate}
Case one has the lowest uncertainty with $C^2S = 1.88^{+0.44}_{-0.37}$ ($\approx \! 24 \%$).
Case two is the same model used for all of the states in Section~\ref{sec:results}. This gives $C^2S = 2.06^{+0.87}_{-0.68}$ ($\approx \! 42 \%$).
Case three first requires that priors for the radius and diffuseness parameters of the bound state potential be specified. Analogously to the exit channel, which also lacks data to directly constrain these parameters, they are assigned $V_{\textnormal{Bound}} \sim \mathcal{N}(\mu_{\textnormal{central}, k}, \{ 0.10 \, \mu_{\textnormal{central}, k}\}^2)$. Again, $k$ is an index that runs over the radius and diffuseness parameters, and \say{central} refers to $r = 1.25$ fm and $a=0.65$ fm. This case has the largest final uncertainty with $C^2S = 2.04^{+1.15}_{-0.85}$ ($\approx \! 56 \%$).
The comparison between the final distributions for the spectroscopic factor obtained for just the entrance channel; the entrance and exit channels; and all of the potentials, including the bound state, is shown in Fig.~\ref{fig:ridge}. This demonstrates the strong dependence of $C^2S$ on each of these potentials.
These results point toward ways to improve the precision of $C^2S$. Examination of the correlations in the posterior samples in Fig.~\ref{fig:gs_corner} shows that the imaginary radius in the exit channel is the parameter responsible for much of the uncertainty in $C^2S$. The samples for the exit channel also show little intra-potential correlation between the parameters. This is expected since the only data that could inform these parameters are in the transfer channel. If elastic data for the exit channel were available, then the proper parameter correlations could be inferred, thereby reducing the uncertainty in the extracted spectroscopic factors. This could bring the uncertainty closer to the roughly $24 \%$ seen in the case where just the entrance potential is considered.
Bound state parameter dependence could also have a significant impact on astrophysical applications. In these applications, the extraction of $C^2S$ is an intermediate step toward the calculation of quantities relevant to astrophysics, such as particle partial widths and direct capture
cross sections. It was noted in Ref.~\cite{bertone} that it is essential to use the same bound state parameters for both the extraction of $C^2S$ and the calculation of the direct capture cross section or partial width. This procedure was found to significantly reduce the final uncertainties on these quantities. If the bound state parameters are included in a Bayesian model to extract $C^2S$, then it becomes possible to calculate these quantities not only using the same bound state parameters, but using fully correlated, statistically meaningful samples informed directly by the transfer reaction measurement. The preliminary steps towards investigating these claims are presented in Chapter \ref{chap:sodium}.
\begin{figure}
\centering
\includegraphics[width=.6\textwidth]{Chapter-5/figs/ridge.png}
\caption{\label{fig:ridge} Ridge line plot comparing the KDE distributions for the ground state $C^2S$ when there is variation in the entrance potential; the entrance and exit potentials; and the entrance, exit, and bound state potentials. The corresponding percentage uncertainties are $24 \%$, $42 \%$, and $56 \%$, respectively.}
\end{figure}
\subsection{Nuclear Structure of $^{69} \textnormal{Cu}$}
These results also have implications for the structure of $^{69} \textnormal{Cu}$. The occupancy of orbitals tends to be higher than
expected for both the open $pf$ orbitals and the closed $1f_{7/2}$ proton shell.
In order to propagate the uncertainties from each $C^2S$, the MCMC samples are used to
construct a KDE for each state. From these densities, $10^5$ samples are drawn to estimate the occupancy:
\begin{equation}
\label{eq:occ}
n = \sum_{i}^N C^2S_i,
\end{equation}
where $i$ refers to each of the $N$ states considered in the sum. Similarly,
the energy of the $1f_{7/2}$ shell can be determined from:
\begin{equation}
\label{eq:energy}
E(1f_{7/2}) = \frac{\sum_{i}^N C^2S_i(1f_{7/2}) E_i(1f_{7/2})}{n_{1f_{7/2}}}.
\end{equation}
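For illustration, a minimal Python sketch of this propagation is given below; the $C^2S$ samples and excitation energies are hypothetical placeholders standing in for the actual MCMC output, and the sketch simply draws from a KDE built for each state and evaluates Eq.~\ref{eq:occ} and Eq.~\ref{eq:energy} sample by sample.
\begin{verbatim}
# Sketch of the occupancy and shell-energy propagation; the C2S samples
# and excitation energies below are placeholders, not measured values.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
c2s_samples = [rng.normal(mu, 0.3 * mu, 5000) for mu in (2.0, 1.1, 0.6)]
energies = np.array([2.1, 2.6, 3.2])   # hypothetical E_i in MeV

# Build a KDE for each state and draw 1e5 samples from it.
draws = np.array([gaussian_kde(s).resample(100_000)[0]
                  for s in c2s_samples])

n = draws.sum(axis=0)                                  # Eq. (eq:occ)
e_shell = (energies[:, None] * draws).sum(axis=0) / n  # Eq. (eq:energy)
lo, med, hi = np.percentile(n, [16, 50, 84])
print(f"n = {med:.2f} (+{hi - med:.2f} / -{med - lo:.2f})")
\end{verbatim}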
The occupancy above the closed shell was found to be $n_{pf} = 3.90^{+1.03}_{-1.28}$,
which is consistent with, but systematically higher than, the value of $2.55(23)$ from the finite range calculations of the previous analysis \cite{pierre_paper}.
For the $1f_{7/2}$ shell, there are two scenarios, depending on the identity of the state at $3.70$ MeV.
If the state does not belong to the $f$ shell, then $n_{1f_{7/2}} = 6.64^{+2.47}_{-1.79}$ and $E(1f_{7/2}) = 2.43^{+0.23}_{-0.25}$ MeV; if it does, $n_{1f_{7/2}} = 10.03^{+3.63}_{-2.66}$ and $E(1f_{7/2}) = 2.86^{+0.23}_{-0.26}$ MeV.
Looking at the median value for $n_{pf}$, it would be expected that $n_{1f_{7/2}} = 6.10$. This may point to the $\ell=2$ assignment of the $3.70$-MeV state
being the correct one, but it must be recognized that there are still large uncertainties on all of these quantities.
Furthermore, since the optical model parameters are shared by these states, values derived through combinations of states are susceptible to significant systematic shifts.
In light of this fact, these credibility intervals should be viewed as approximations. Perhaps more important is that, if the $3.70$-MeV state belongs to the $1f_{7/2}$ shell, then the full strength of this shell has been observed. The shell model calculations in Ref.~\cite{pierre_paper} predict a much higher energy than $E(1f_{7/2}) = 2.86$ MeV due to the presence of more states at higher excitation energies. A future experiment with a higher incident
beam energy that would be capable of populating these predicted higher lying states could help clarify these discrepancies.
\subsection{Comparison to Other Bayesian Studies}
It is also worthwhile to compare these methods with those of several recent publications, which have also applied Bayesian methods to optical potentials \cite{lovell_mcmc, king_dwba}.
These papers differ from our approach in a few key ways: the data come from multiple experiments, exit and entrance channels are fitted separately, transfers are calculated using finite range effects, and the prior distributions are much wider than ours ($100 \%$ of the initial global values).
In a fully Bayesian framework, fitting the data in the entrance and exit channels separately or simultaneously is equivalent as long as the same model is used \cite{bayes}.
While the priors of this work are narrower, they could likely be made broader if there was elastic scattering data over a wider range of angles.
Full finite-range calculations could be important to include in future studies, but, as seen in Table~\ref{tab:cs}, for this reaction the average difference is roughly $16 \%$, well within the uncertainty arising from the optical potentials.
Including these effects will require a more efficient way to evaluate the likelihood function. Specifically, a finite-range calculation takes roughly $50$ times longer than a calculation using the zero-range approximation. For this work, $2 \times 10^6$ likelihood evaluations took approximately $22$ hours, meaning the finite-range calculation would take over $1000$ hours.
Beyond those differences, these results differ from those of
Ref.~\cite{king_dwba} in one important aspect. Here, I confirmed the strong correlations between optical model parameters that are expected from historical studies \cite{hodgson1971}, and
treated them in a statistically meaningful way. It should be stressed that the Bayesian model does
not assume these correlations; rather, they appear to be a consequence of the Woods-Saxon potential form factor.
On the other hand, in their comparison of frequentist and Bayesian
methods, the authors of Ref.~\cite{king_dwba} do not observe such
correlations, with the exception of the $V_0$ and $r_0$ anti-correlation, and ascribe their finding to non-Gaussian
posterior distributions, which would be poorly described by the
frequentist model. The origin of this
disagreement is unclear, and further investigation is needed.
\section{Summary}
In this chapter, Bayesian analysis was introduced along with the basics of numerically approximating the posterior distribution. This formalism was then applied to the two primary methods for extracting information from transfer reactions: energy calibration of the focal plane and DWBA. The Bayesian approach to the focal plane calibration allowed a fit that considered both calibration energy uncertainties and those from the experimentally determined peak centroids. An additional uncertainty could also be easily integrated into the model, which improves upon previous techniques that estimated this uncertainty after the calibration.
The Bayesian model for DWBA signifies a major advancement in the analysis of transfer reactions. Although the full implications of this method are still not known, it was shown in this chapter that Bayesian methods allow a more detailed analysis of transfer data. In particular, optical model uncertainties can be more fully understood with the Bayesian approach, and, for the first time, explicit probabilities can be assigned to each allowed $\ell$ value.
\chapter{$^{23}$N\lowercase{a}$(^{3}$H\lowercase{e}, $\lowercase{d})^{24}$M\lowercase{g}}
\label{chap:sodium}
\section{Previous Studies and Motivation for This Work}
The NeNa cycle (see Fig.~\ref{fig:ne_na_cycle}) has been the subject of experimental investigation for 40 years. Early direct measurements focused on constraining explosive nucleosynthesis happening in classical novae and were carried out for both the $(p, \gamma)$ and $(p, \alpha)$ reaction channels \cite{Zyskind_1981, goerres_1989}. The study of Goerres \textit{et al.} (Ref.~\cite{goerres_1989}) was one of the first to directly search for the resonance that corresponded to the $E_x \approx 11827$-keV state in $^{24}$Mg, which had been observed in the indirect measurements of Refs.~\cite{moss_1976, vermeer_1988}. However, they were only able to establish an upper limit of $\omega \gamma_{(p, \gamma)} \leq 5 \times 10^{-6}$ eV. A short time later, Ref.~\cite{1996_Eid} was the first to calculate a direct capture component of the reaction rate, and identify its importance in the $(p, \gamma)$ rate.
The current understanding of these rates is in large part due to the study of Ref.~\cite{hale_2004}. In that study, a $(^{3}$He$,d)$ transfer reaction was performed, and a state at $E_x = 11831$ keV was observed, thereby placing constraints on the energy and strength of the $138$-keV resonance. However, the angular distribution for this state was inconclusive, leaving a large amount of uncertainty in the $(p, \gamma)$ and $(p, \alpha)$ rates. Additionally, the authors of that work presented a detailed evaluation of the literature to formulate the basis for the current $(p, \gamma)$ and $(p, \alpha)$ rates. Since that time, several direct searches have been performed with the intent of measuring the $138$-keV resonance. Ref.~\cite{Rowland_2004} set a new upper limit of $\omega \gamma_{(p, \gamma)} \leq 1.5 \times 10^{-7}$ eV. Subsequently, Ref.~\cite{Cesaratto_2013} used a high intensity proton beam of $\approx 1$ mA to give a further reduced upper limit of $\omega \gamma_{(p, \gamma)} \leq 5.17 \times 10^{-9}$ eV, which ruled out this resonance's importance for the $(p, \alpha)$ channel. Finally, the first direct detection of the $138$-keV resonance with a statistical significance above $2 \sigma$ came in Ref.~\cite{BOELTZIG_2019}. That study reports $\omega \gamma_{(p, \gamma)} = 1.46^{+0.58}_{-0.53} \times 10^{-9}$ eV.
At the present time, the $^{23}$Na$(p, \gamma)$ reaction rate has a greatly reduced uncertainty, now on the order of $30 \%$ at the temperatures of relevance to globular clusters, because of the intense study of the $138$-keV resonance using direct measurements. However, much of the rate is still dependent on the results and evaluation presented in Ref.~\cite{hale_2004}. The purpose of the current study is to use the $^{23}$Na$(^{3}\textnormal{He}, d)^{24}$Mg reaction to investigate the results and conclusions of the foundational work of Ref.~\cite{hale_2004}. Of particular interest is placing constraints on the spin and parity of the $138$-keV resonance using the model selection capabilities of the Bayesian DWBA method presented in Section \ref{sec:bay_dwba}.
The current rates in STARLIB have been updated with the measurement of Ref.~\cite{BOELTZIG_2019}. \texttt{RateMC} was used to generate the rate contribution plot for the $(p, \gamma)$ rate shown in Fig.~\ref{fig:contribution_luna_p_g} and for the $(p, \alpha)$ rate shown in Fig.~\ref{fig:contribution_luna_p_a}. As can be seen in the figure for the $(p, \gamma)$ rate, at the temperatures relevant to globular cluster nucleosynthesis, $70 \text{-} 80$ MK, the $138$-keV resonance dominates, with lesser contributions coming from the direct capture rate and the $240$-keV resonance. For the $(p, \alpha)$ rate, the dominant contributions are from subthreshold states and the $170$-keV resonance. Looking at Fig.~\ref{fig:level_scheme}, the relative locations of some of these resonances with respect to the excitation energy of $^{24}$Mg can be seen. Thus, the goal of the current experiment is to study the excited states of $^{24}$Mg in the region $11 \lessapprox E_x \lessapprox 12$ MeV. Using the SPS at TUNL, it is possible to extract excitation energies, angular distributions, and spectroscopic factors. From the spectroscopic factors, proton partial widths can be extracted for these states. It can often be assumed that $\omega \gamma_{(p, \gamma)} \approx \omega \Gamma_{p}$ for resonances below $500$ keV, where $\Gamma \approx \Gamma_{\gamma}$. However, as can be seen in Fig.~\ref{fig:level_scheme}, $^{24}$Mg is $\alpha$ unbound by several MeV near the proton threshold. For the current experiment, we cannot assume the proton partial width is directly proportional to the resonance strength, and as a consequence additional nuclear input will be needed to determine $\omega \gamma_{(p, \gamma)}$ with precision.
\begin{figure}
\centering
\includegraphics[width=.6\textwidth]{Chapter-6/figs/GraphContribution_LUNA.pdf}
\caption{Rate contribution plot for $^{23}$Na$(p, \gamma)$ based on the current STARLIB rate updated with the resonance strengths of Ref.~\cite{BOELTZIG_2019}. \textit{A-Rate 1} is the direct capture rate, while the dashed curve is the summed contribution of all other resonances.}
\label{fig:contribution_luna_p_g}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.6\textwidth]{Chapter-6/figs/GraphContribution_p_a_luna.pdf}
\caption{Rate contribution plot for $^{23}$Na$(p, \alpha)$ based on the current STARLIB rate updated with the resonance strengths of Ref.~\cite{BOELTZIG_2019}. Subthreshold resonances dominate at low temperatures in the absence of a direct capture rate. The dashed curve is the summed contribution of all other resonances.}
\label{fig:contribution_luna_p_a}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.8\textwidth]{Chapter-6/figs/Level_Scheme.pdf}
\caption{Level scheme of $^{24}$Mg with the relative locations of some of the astrophysically relevant states shown.}
\label{fig:level_scheme}
\end{figure}
This chapter is roughly divided into five parts: the production of the transmission targets needed for this experiment, the details of the transfer experiment, the energy calibration of the focal plane, the Bayesian DWBA analysis, and finally the calculation of the reaction rate and its impact on globular cluster nucleosynthesis.
\section{Targets}
\label{sec:targets}
Several attempts were made to produce transmission targets for this experiment. These targets need to be thin enough for the outgoing deuterons to leave the target and be detected with high resolution. However, they must also be thick enough to give reasonable count rates, so that weakly populated states can be detected with decent statistics. This section details my attempts to produce such targets over a roughly two year period.
It was decided early on to focus efforts on producing NaBr targets, based on the observations in Refs.~\cite{hale_2004, hale_thesis} that these targets are fairly stable under bombardment, are reasonably resistant to oxygen contamination, and leave the astrophysical region of interest free from contamination arising from $^{79, 81}$Br.
All targets were produced by thermal evaporation. The principle of this process is to place the material to be deposited as a thin film into a resistive boat that has a comparatively higher evaporation point. This boat is clamped between two water-cooled copper electrodes. Several inches above the boat, a substrate holder is loaded with target backings mounted on target frames. An example of this setup is shown in Fig.~\ref{fig:nabr_setup}. A bell jar is placed over the substrates, electrodes, and boat. The bell jar is then brought down to high vacuum. Once under vacuum, current is gradually applied to the electrodes, thereby heating the boat. After the material reaches its evaporation point, the gaseous material leaves the boat as a result of thermal energy. In this way, the material from the boat slowly condenses on the cooler substrate, creating a thin layer of the target material. A quartz crystal monitor, which is also water cooled, tracks the rate and total deposition of the material. Once the desired thickness has been reached, the current is reduced to zero, and the bell jar is brought back up to atmospheric pressure and removed. For the NaBr evaporation, the NaBr was a reagent grade crystalline powder, the boat was made of tantalum, and the substrates were carbon foils of natural isotopic abundance floated onto target frames. Although several batches of carbon foils were used during the course of the target making process, they were all purchased from The Arizona Carbon Foil Co., Inc., with thicknesses varying from $15 \text{-} 25$ $\mu$g$/$cm$^2$ \cite{acfmetals}. The LENA evaporator, pictured in Fig.~\ref{fig:evaporator}, was used for all target production. This evaporator is devoted to target fabrication for low background experiments, making it an attractive alternative to the more general-use TUNL evaporator. In particular, the bell jar is brought down to rough vacuum with a scroll pump, and to high vacuums of $1 \text{-} 5 \times 10^{-7}$ Torr by a cryopump, both of which reduce the potential for contamination coming from pump oils. The specific evaporation setup for NaBr is shown in Fig.~\ref{fig:nabr_setup}.
\begin{figure}
\centering
\includegraphics[width=.6\textwidth]{Chapter-6/figs/evap.png}
\caption{The LENA evaporator. Note that the bell jar is coated with a layer of tantalum from the target development of Ref.~\cite{HUNT_2019}.}
\label{fig:evaporator}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.6\textwidth]{Chapter-6/figs/NaBr_setup.png}
\caption{Evaporation setup for NaBr. The shutter covers the boat as the electrode current is increased. This prevents outgassing material from being deposited on the targets until it is removed.}
\label{fig:nabr_setup}
\end{figure}
\subsection{Rutherford Backscattering Spectroscopy}
The failure of several initial $^{23}$Na$(^3\textnormal{He}, d)$ experiments necessitated a better understanding of the NaBr targets. Several different methods were explored to characterize the targets, none of which yielded a satisfactory way to accurately measure the target thickness. However, enough information was available to reach the following conclusions about the targets:
\begin{enumerate}
\item They contained $^{23}$Na.
\item Exposure to atmosphere can be tolerated for short periods of time.
\item After this time, the targets degrade heavily from oxidation.
\end{enumerate}
All of these conclusions are based on a series of Rutherford backscattering spectroscopy (RBS) experiments that were carried out at TUNL. These experiments utilized the $52^{\circ}$ beamline, which is equipped with a general purpose scattering chamber. A $2$-MeV beam of $^4$He$^{2+}$ was accelerated down the beamline and impinged on the NaBr targets. Currents were typically on the order of $100$ nA (i.e., $50$ pnA). The backscattered $\alpha$ particles were detected by a $100$-$\mu$m-thick silicon surface barrier detector positioned at $165^{\circ}$ relative to the beam direction. Charge integration of the beam was carried out via a Faraday cup located downstream of the target chamber. The silicon detector electronics were identical to those of the monitor detector setup shown in Fig.~\ref{fig:electronics_si}, with the exception of the second detector and coincidence circuit. Energy calibration for this detector was carried out using the peaks from a gold target of known thickness.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{Chapter-6/figs/Na_Br_rbs.pdf}
\caption{An example spectrum from the first series of RBS measurements performed to characterize the NaBr targets. The simulated curves were produced using SIMNRA. The low energy tails in the data cannot be described by a single layer of NaBr.}
\label{fig:nabr_rbs}
\end{figure}
A detailed analysis of this data set was attempted, but proved unsuccessful. Theoretical cross sections were calculated using SIMNRA \cite{Mayer_1999}. It was quickly found that the low energy tails on the O, Na, and Br peaks could not be described by assuming a single layer of these materials. An example spectrum is shown in Fig.~\ref{fig:nabr_rbs} along with the simulated cross section from SIMNRA. As can be seen in the figure, thick layers of material produce an increase in cross section at lower energies. The observed low energy tails can only be explained by introducing many layers with varying composition. This situation means that the observed RBS data for this target are nearly useless for predictive purposes due to the abundance of free parameters, i.e., the number of layers, the thicknesses of the layers, and the relative composition of the elements in the layers.
Explaining the above results proved difficult. In time, it was gradually understood that the low energy tails were a result of the material becoming heavily oxidized. Definitive proof came during a later run, in which RBS was performed on a pair of NaI targets. One target was prepared the day of the run and immediately transferred from the evaporator to the target chamber, while the other had been evaporated over a week beforehand and exposed to atmosphere. The change in target material aimed to test whether the low energy tails were a characteristic of the NaBr targets or a more general phenomenon. As can be seen in Fig.~\ref{fig:na_I_rbs}, the exposure to atmosphere is linked to the low-energy tails seen in the RBS data.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{Chapter-6/figs/Na_I_rbs.pdf}
\caption{(top) The RBS spectrum obtained for a NaI target exposed to atmosphere for a week, and (bottom) that of a NaI target evaporated the day of the RBS run and immediately transferred to a scattering chamber at high vacuum. Oxidation creates significant low energy tails in the spectrum, and prohibits accurate determination of the target composition and thickness.}
\label{fig:na_I_rbs}
\end{figure}
Practically, this result highlights the problem with relying on RBS for an accurate determination of target thickness for the $^{23}$Na$(^3$He$,d)$ experiment. Even if the targets do not degrade during the course of the transfer experiment, the process of transferring them between the SPS target chamber and the chamber on the $52^{\circ}$ beamline can dramatically change their properties. If RBS were carried out before the transfer measurement, the resolution of the experiment would suffer because of the oxidation. If, instead, the RBS were carried out after the experiment, the resulting oxidation would render the analysis of the RBS data inconclusive.
These results ultimately led to the conclusion that an absolute cross section scale would be hard to obtain both because of the poorly known target thickness and the previously mentioned issues with charge integration of the SPS beamstop (see Section~\ref{sec:split_pole}).
\section{Experiment Details}
Data taking for this experiment took place over a five-day period from October $24 \text{-} 28$, 2018. The run was limited to five days by the sodium oven on the helium source. The oven can run for approximately $140$ hours before it has to be brought up to atmosphere and loaded with more sodium. Because of this constraint, the experiment focused on taking data at the maximum number of angles below $25^{\circ}$. These angles cover the region of the angular distribution that is best described by DWBA. While higher angles can provide useful information, they suffer from lower count rates, meaning more time would be needed per angle. The lower statistics, combined with the expected inadequacies of DWBA, offered little reason to sacrifice the limited run time towards these measurements. The initial plan was to measure from $3^{\circ} \text{-} 21^{\circ}$ in steps of $2^{\circ}$. Higher angles would only be attempted once these angles had accumulated sufficient statistics to constrain the $11831$-keV state.
Taking into consideration the findings presented in Section \ref{sec:targets}, the NaBr targets used for the $^{23}$Na$(^3 \textnormal{He}, d)$ experiment were evaporated the morning of the run. $^{\textnormal{nat}}$C foils $22$ $\mu$g$/$cm$^2$ thick were used as the target backing. These were floated several days before evaporation to allow them time to dry. The evaporation took place over a period of roughly $55$ minutes, with the rate of deposition fluctuating between $10 \text{-} 30$ ng/(cm$^2$ s). Evaporation was halted after the thickness monitor indicated a thickness of $\approx 70$ $\mu$g/cm$^2$. Six targets were produced in total. After the evaporation was complete, the bell jar was gradually brought up to atmosphere, and the targets were placed into a container to transfer them to the target chamber of the SPS. This container was brought down to rough vacuum to reduce exposure to air. At the SPS target chamber, three of the targets were mounted onto the SPS target ladder. The remaining positions on the ladder consisted of a $1$ mm collimator for beam tuning, a $^{\textnormal{nat}}$C target identical to the backing of the NaBr targets for background measurements, and thermally evaporated $^{27}$Al on a $^{\textnormal{nat}}$C backing to use for an initial energy calibration. After quickly mounting these targets onto the target ladder, the ladder itself was mounted in the target chamber and brought down to high vacuum.
The nominal energy selected for the experiment was $21$ MeV, following Ref.~\cite{hale_2004}. The tandem was brought up to a voltage of approximately $6.7$ MV. The $90 \text{-} 90$ NMR was set to $\approx 568.28$ mT. The beam energy of $E_{^3\textnormal{He}} = 21.249(1)$ MeV was determined from the bending radius and the NMR setting. The uncertainty was estimated from the fluctuations in the NMR reading during the course of the experiment, with the average value being $\Bar{B} = 568.290(1)$ mT. The magnet settings were chosen based on a scaled set of values derived from an earlier deuteron beam development run, which had been used to maximize transmission through the collimator of the SPS using the high resolution setting of the $90 \text{-} 90$ system.
A beam of $2$ $\mu$A $^3$He$^{-}$ was extracted from the helium source at the start of the run. In order to minimize the amount of $^{3}$He needed, a gas recycling system was used. Details of this system can be found in Ref.~\cite{combs_2017}. New $^3$He has to be introduced into this system every $\approx 48$ hours in order to maintain beam current. The amount of beam that made it to the target varied, but was typically around $100 \text{-} 200$ nA of $^3$He$^{2+}$. The high resolution setting of the $90 \text{-} 90$ system meant that close to $100 \%$ of this beam could be passed through the $1$ mm collimator. It was noticed during the course of data taking that the beam would drift vertically on the target. In order to mitigate the potential impact of this drift, the beam was retuned through the collimator every few hours. The entirety of the run was plagued by source instability. The power supplies for the attachment and extractor electrodes would current limit every few seconds because of a periodic discharge coming from the extractor. Each discharge was accompanied by a drop in the beam current, and as a result we were severely limited in the amount of quality data that could be collected.
A diagram of the SPS target chamber is shown in Fig.~\ref{fig:target_chamber}. At each angle, the field of the spectrograph was set between $1.13 \text{-} 1.14$ T in order to keep the states of interest on the focal plane. The solid angle of the spectrograph was fixed throughout the experiment at $\Omega_{\textnormal{SPS}} = 1$ msr. The monitor telescope was positioned at $45^{\circ}$. The planned angles between $3^{\circ} \text{-} 21^{\circ}$ in steps of $2^{\circ}$ were measured, in addition to $26^{\circ}$. The source instability severely limited the statistics collected at $3^{\circ}$ and $26^{\circ}$. The last $8$-hour shift was devoted to elastic scattering measurements for the DWBA analysis. These measurements were performed by changing the field of the SPS to $0.75 \text{-} 0.80$ T and measuring angles between $15^{\circ} \text{-} 55^{\circ}$ in $5^{\circ}$ steps, with a final angle at $59^{\circ}$. An attempt was made to measure at $60^{\circ}$, but the sliding seal on the target chamber started to lose vacuum around $59.5^{\circ}$.
\begin{figure}
\centering
\includegraphics[width=.6\textwidth]{Chapter-6/figs/target_chamber.pdf}
\caption{Top view drawing of the SPS target chamber.}
\label{fig:target_chamber}
\end{figure}
\section{Focal Plane Peak Fitting}
\label{sec:peak_fitting_fp}
The deuteron group was clearly resolved in the focal plane $\Delta E / E$ spectra, as can be seen in Fig.~\ref{fig:11_deg_e_de}. This group was gated on at each angle to produce the deuteron spectra in the focal plane. An example of the spectra produced with the above gate is shown in Fig.~\ref{fig:11_deg_fp_spectra_raw}.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{Chapter-6/figs/E_dE.pdf}
\caption{Histogram of the $\Delta E$ versus $E$ spectrum. This example is from $\theta_{lab} = 11^{\circ}$.}
\label{fig:11_deg_e_de}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{Chapter-6/figs/23Na_11deg.pdf}
\caption{Focal plane position spectra gated on the deuteron group from Fig.~\ref{fig:11_deg_e_de}.}
\label{fig:11_deg_fp_spectra_raw}
\end{figure}
Peaks in the gated focal plane spectra were fit using Gaussian functions with a linear background. The Gaussian function takes the form:
\begin{equation}
\label{eq:gaussian_peak_fit}
f(x ; A, w, c) = \frac{A}{\sqrt{2 \pi w^2}} e^{-\frac{(x-c)^2}{2 w^2}},
\end{equation}
where $A$ is the area, $w$ is the width, and $c$ is the centroid. Some of the peaks on the high energy side of the detector at higher angles showed a mild low energy tail. For computational ease, these peaks were fit with a log-normal shaped function. An example of this behaviour for the $8864$-keV peak at $\theta_{lab} = 11^{\circ}$ is shown in Fig.~\ref{fig:8864_fit}. Compared to a Gaussian fit, the log-normal fit resulted in an approximately $1$ channel change in the peak centroid and a $2 \%$ change in area. Peak fitting with Bayesian statistics based on a Poisson likelihood was also explored. It was found that these results were consistent with the frequentist approach based on a $\chi^2$ function; however, the Bayesian method had larger uncertainties on average for the peak parameters. The additional computational overhead of the Bayesian method, combined with the high number of states in the spectrum at each angle, led to a frequentist method being adopted. Parameter uncertainties adjusted by a factor $\sqrt{\chi^2/dof}$, where $dof$ is the number of degrees of freedom of the fit, were adopted in the case that $\chi^2/dof > 1$. This correction was marginally more conservative than the Bayesian method, but had the benefit of being computationally less expensive. All of the fits using the frequentist method were performed using the program \texttt{fityk} \cite{Wojdyr_2010}. Spectra were fit for the $10$ angles between $3^{\circ} \text{-} 21^{\circ}$. Centroids, areas, FWHMs, and their associated uncertainties were tabulated for each peak in the spectrum.
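A minimal Python sketch of this fitting procedure is given below. It fits Eq.~\ref{eq:gaussian_peak_fit} plus a linear background to a synthetic spectrum slice and applies the $\sqrt{\chi^2/dof}$ inflation; the data are placeholders, \texttt{scipy} stands in for \texttt{fityk}, and the peak parameters are purely illustrative.
\begin{verbatim}
# Sketch of a Gaussian-plus-linear-background fit with the sqrt(chi2/dof)
# uncertainty inflation described above; the spectrum is synthetic.
import numpy as np
from scipy.optimize import curve_fit

def model(x, A, w, c, a, b):
    gauss = A / np.sqrt(2 * np.pi * w**2) * np.exp(-(x - c)**2 / (2 * w**2))
    return gauss + a * x + b

x = np.arange(100.0, 140.0)
y = np.random.default_rng(1).poisson(model(x, 400, 2.5, 120, 0.0, 5.0)) * 1.0
sig = np.sqrt(np.maximum(y, 1.0))           # Poisson uncertainties

popt, pcov = curve_fit(model, x, y, p0=[300, 2, 120, 0, 5],
                       sigma=sig, absolute_sigma=True)
chi2_dof = np.sum(((y - model(x, *popt)) / sig)**2) / (x.size - len(popt))
perr = np.sqrt(np.diag(pcov))
if chi2_dof > 1:
    perr *= np.sqrt(chi2_dof)               # conservative inflation
print(f"centroid = {popt[2]:.2f} +/- {perr[2]:.2f}, chi2/dof = {chi2_dof:.2f}")
\end{verbatim}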
\begin{figure}
\centering
\includegraphics[width=.8\textwidth]{Chapter-6/figs/lognormal_fit_example.pdf}
\caption{Example of the low energy tail seen for high energy particles. This fit is from $\theta_{lab} = 11^{\circ}$. The log-normal fit provides a mildly improved description of this tail.}
\label{fig:8864_fit}
\end{figure}
\section{Updates to Energy Levels Above $11$ MeV}
\label{sec:energy_level_update}
The current ENSDF evaluation, Ref.~\cite{firestone_2007}, is incomplete because it omits the measurements of Ref.~\cite{goerres_1989} and Ref.~\cite{hale_2004}, which are important in the astrophysical region of interest of $ 11 \lessapprox E_x \lessapprox 12$ MeV. This evaluation also has inaccuracies for many of the states above the proton threshold. It was found that \textit{deduced} $E_{\gamma}$ values from Ref.~\cite{schmalbrock_1983} were incorrectly used in the evaluation. Excitation energies can be derived by performing a least squares fit to all of the observed $E_{\gamma}$ values that decay from or feed into an excited state \cite{Firestone_1991}. However, deduced $E_{\gamma}$ values are inferred from excitation energies, meaning they cannot be considered independent measurements. Additionally, many of the levels were calculated using calibration points from the spectrograph measurements of Ref.~\cite{moss_1976} and Ref.~\cite{zwieglinski_1978}. This error is long-standing, and is present in every compilation and evaluation since 1978 \cite{ENDT_1978}. Both of these inaccuracies lead to overly precise recommended energies. Finally, the evaluation is outdated because many of the level energies have been determined through $^{20}\textnormal{Ne}(\alpha, \gamma)^{24}\textnormal{Mg}$ and $^{23}\textnormal{Na}(p, \gamma)^{24}\textnormal{Mg}$, and therefore
need to be updated based on the 2016 mass evaluation \cite{Wang_2017}. The measured energies of Vermeer \textit{et al.} \cite{vermeer_1988} also present challenges that are discussed in Appendix \ref{chap:rant_on_energies}, but have been adopted as is. Note that the measurements of Hale \textit{et al.} have been excluded from the compiled values. This decision will be discussed in depth in Section \ref{sec:hale_discussion}, but for now it is worth emphasizing that these compiled values are intended only for accurately energy calibrating the current experiment and for calculating the astrophysical reaction rate.
The compiled energies are presented in Table~\ref{tab:energy_comp}. Note that in the case of Ref.~\cite{endt_1990}, resonant capture was used to excite $^{24}$Mg, but the excitation energies were deduced from gamma ray energies, making these values independent of the reaction $Q$-value. For the measurements that report the lab frame resonance energies, the excitation energies are deduced from:
\begin{equation}
\label{eq:lab_to_ex}
E_x = Q + E_{P} \frac{M_T}{M_T + M_{P}},
\end{equation}
where $E_P$ is the projectile energy measured in the laboratory frame, and $M_{P}$ and $M_T$ are the \textit{nuclear} masses for the projectile and target nuclei, respectively. I have used the \textit{atomic} masses from Ref.~\cite{Wang_2017} assuming the difference is negligible compared to the statistical uncertainty in $E_{P}$. $Q$ is the $Q$-value for either the $(p,\gamma)$ or $(\alpha, \gamma)$ reaction. The column in Table \ref{tab:energy_comp} from Ref.~\cite{endt_eval_1990} shows energies deduced from a weighted average of several $(p, \gamma)$ measurements, and that paper should be referred to for additional details. For the present work, the suggested value of these weighted averages is treated as a single measurement that is updated according to Eq.~\ref{eq:lab_to_ex}. The weighted averages presented in the last column were calculated from:
\begin{equation}
\bar{x} = \frac{\sum_i^N w_i x_i}{\sum_j^N w_j},
\end{equation}
with uncertainty given by:
\begin{equation}
\bar{\sigma} = \frac{1}{\sqrt{\sum_j^N w_j}},
\end{equation}
where the weight is $w_i = 1/\sigma_i^2$, $\sigma_i$ is the uncertainty of measurement $i$, $x_i$ is the reported value of measurement $i$, and $N$ is the total number of measurements. In order to reduce the effects of potential outliers, the lowest measured uncertainty was used instead of $\bar{\sigma}$ in the case of $N \leq 3$.
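A minimal sketch of this averaging, including the conservative substitution of the smallest individual uncertainty when $N \leq 3$, is given below; the input values are placeholders rather than entries from the table.
\begin{verbatim}
# Sketch of the weighted average defined above; inputs are placeholders.
import numpy as np

def weighted_average(x, sigma):
    x, sigma = np.asarray(x, float), np.asarray(sigma, float)
    w = 1.0 / sigma**2
    xbar = np.sum(w * x) / np.sum(w)        # weighted mean
    sbar = 1.0 / np.sqrt(np.sum(w))         # its uncertainty
    if x.size <= 3:                         # guard against outliers
        sbar = sigma.min()
    return xbar, sbar

print(weighted_average([11391.0, 11390.0, 11394.0], [7.0, 4.0, 3.0]))
\end{verbatim}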
\begin{lscapenum}
\begin{table}[]
\centering
\setlength\tabcolsep{3pt}
\renewcommand{\arraystretch}{1.2}
\caption{ \label{tab:energy_comp} Previously measured energies. An * indicates that the listed energy was used as a calibration point in the listed experiment. These values, therefore, have been excluded from the weighted average. Excitation energies derived from resonance energies have been updated based on the 2016 mass evaluation \cite{Wang_2017}. Note the $\dagger$ on the $12259.6$-keV state. This value was taken from Ref.~\cite{endt_eval_1990}, and is actually the unweighted average of a pair of states with updated energies of $12259.4(4)$ keV and $12259.8(4)$ keV, respectively.}
\begin{tabular}{llllllllll}
\toprule
\toprule
$(p, p^{\prime})$ & $(p, p^{\prime})$ & $(^{16} \textnormal{O}, \alpha)$ & $(p,\gamma)$ & $(p,\gamma)$ & $(\alpha,\gamma)$ & $(\alpha,\gamma)$ & $(\alpha,\gamma)$ & $(\alpha,\gamma)$ & Weighted Average \\
\cite{moss_1976} & \cite{zwieglinski_1978} & \cite{vermeer_1988} & \cite{endt_1990} & \cite{endt_eval_1990} & \cite{fiffield_1978} & \cite{schmalbrock_1983} & \cite{goerres_1989} & \cite{smulders_1965} & \\ \hline
11389(3)* & 11391(7) & 11390(4) & & & & 11394(3) & & 11393(5) & 11392.6(21) \\
11456(3) & 11452(7) & 11455(4) & 11452.8(4) & & & 11456(3) & & & 11452.9(4) \\
11521(3)* & 11520(7) & 11519(4) & & & & 11522(2) & & 11523(5) & 11521.5(16) \\
11694(3)* & 11694(7)* & 11694(4) & & & & 11699(2) & & 11694(5) & 11698(2) \\
11727(3)* & 11727(7) & 11727(4) & & & & 11731(2) & & 11728(5) & 11729.8(16) \\
11828(3) & & 11827(4) & & & & & & & 11827(3) \\
11862(3)* & 11860(7) & 11860(4) & & & 11861(5) & 11868(3) & 11859.4(20) & 11862(5) & 11861.6(15) \\
11935(3)* & & 11930(4) & & 11933.05(19) & & & 11933.2(10) & & 11933.0(10) \\
11967(3)* & 11965(7) & 11963(4) & & 11966.6(5) & 11967(5) & 11974(3) & 11966.7(10) & 11968(5) & 11966.7(5) \\
11989(3)* & 11990(7) & 11985(4) & 11988.0(3) & 11988.47(6) & & & 11988.7(10) & & 11988.45(6) \\
12015(3)* & 12016(7)* & & & 12017.1(6) & 12016(5) & & 12016.5(10) & & 12016.9(5) \\
12050(3)* & 12050(7) & & & 12051.3(4) & 12050(5) & & & & 12051.3(4) \\
12121(3)* & 12124(7) & & & 12119(1) & 12121(5) & & & & 12119(1) \\
12181(3)* & & & & 12183.3(1) & & & & & 12183.3(1) \\
12258(3)* & 12261(7) & & & 12259.6(4)$^{\dagger}$ & 12258(5) & & & & 12259.6(4) \\
12342(3)* & & & & 12341.0(4) & & & & & 12341.0(4) \\
12402(3) & 12402(7) & & & 12405.3(3) & 12405(5) & & & & 12405.3(3) \\
12528(3)* & & & & 12528.4(6) & & & & & 12528.4(6) \\
12577(3)* & 12578(7) & & & & 12578(5) & & & & 12578(5) \\
12669(3) & & & 12669.9(2) & 12670.0(4) & & & & & 12669.9(4) \\
12736(3) & 12739(7) & & & 12739.0(7) & 12740(5) & & & & 12738.9(7) \\
& & & & 12817.77(19) & & & & & 12817.77(19) \\
12849(3) & 12850(7) & & & 12852.2(5) & & & & & 12852.1(5) \\
12921(3)* & & & & 12921.6(4) & 12923(5) & & & & 12921.6(4) \\
12963(3) & & & & 12963.9(5) & & & & & 12963.9(5) \\
\bottomrule
\bottomrule
\end{tabular}
\end{table}
\end{lscapenum}
\restoregeometry
\section{Energy Calibration}
\label{sec:energy_cal_na}
Using the position measurements of the focal plane detector, excitation energies were extracted using a third-order polynomial fit for the bending radius of the SPS in terms of the ADC channels, $x$:
\begin{equation}
\label{eq:energy_fit}
\rho = Ax^3 + Bx^2 + Cx + D.
\end{equation}
This was done using the updated version of the Bayesian method presented in Section \ref{sec:bay_energy_cal} that uses \texttt{emcee} to sample the posterior.
Calibration states were methodically selected by gradually extrapolating from bound states across the focal plane. This was done by first performing a linear fit using the two states at $8654$ and $8864$ keV. By using these states as a starting point, it was possible to iteratively extend the fit by identifying new states to use as calibration points, incorporating these new points, increasing the order of the polynomial as needed, and repeating this process until the majority of the focal plane was calibrated. In a few regions, the most intensely populated peaks were excluded due to the possibility of closely spaced levels that differed in energy by more than a few keV, which could introduce an additional source of error into the calibration. The chosen calibration states at $\theta_{lab} = 11^{\circ}$ are shown in Fig.~\ref{fig:na_cal_spec}. The validity of this internal calibration in the astrophysical region of interest between $11$ and $12$ MeV was checked at $\theta_{lab} = 11^{\circ}$ against a separate external calibration using the $^{27}$Al$(^3$He$, d)^{28}$Si reaction. The aluminum states were selected based on the spectrum shown in Ref.~\cite{champagne_1986}. The two calibrations showed an energy offset of $\approx 7$ keV arising from the difference in the thicknesses of the Al and NaBr targets. Once the energy offset was corrected for, the two methods showed excellent agreement, and the internal calibration was subsequently adopted at each angle.
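For illustration, a minimal sketch of the calibration polynomial of Eq.~\ref{eq:energy_fit} is shown below using an ordinary least squares fit; the channel and $\rho$ values are placeholders, and in the actual analysis the polynomial coefficients were sampled with \texttt{emcee} as described in Section \ref{sec:bay_energy_cal}.
\begin{verbatim}
# Sketch of the cubic rho(x) calibration; all values are placeholders.
import numpy as np

channels = np.array([210.0, 480.0, 905.0, 1310.0, 1720.0])  # peak centroids
rho = np.array([83.1, 81.4, 78.9, 76.6, 74.4])  # rho (cm) from kinematics

coeffs = np.polyfit(channels, rho, deg=3)  # [A, B, C, D] of Eq. (energy_fit)
print(np.polyval(coeffs, 1100.0))          # rho of an unidentified peak
\end{verbatim}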
\begin{figure}
\centering
\includegraphics[width=\textwidth]{Chapter-6/figs/23Na_11deg_cal.pdf}
\caption{Calibration states (orange) for $\theta_{lab} = 11^{\circ}$. The energy values for the states below the proton threshold are taken from Ref.~\cite{firestone_2007}, while the rest are from Table \ref{tab:energy_comp}.}
\label{fig:na_cal_spec}
\end{figure}
Once initial calibrations for each angle were found, it was necessary to distinguish the peaks associated with excited states of $^{24}$Mg from those arising from contaminants. Known contamination peaks arise from $^{12}$C$(^3 \textnormal{He}, d)$, $^{13}$C$(^3 \textnormal{He}, d)$, $^{14}$N$(^3 \textnormal{He}, d)$, and $^{16}$O$(^3 \textnormal{He}, d)$. This process would require tracking $> 50$ peaks across $10$ angles. However, once an energy calibration is obtained, these peaks can readily be sorted. If a state belongs to $^{24}$Mg, its predicted energy will be consistent from angle to angle; however, if a state arises from a reaction other than $^{23}$Na$(^3 \textnormal{He}, d)$, its kinematic shift will cause a dramatically changing energy prediction from angle to angle. Thus, states that belong to $^{24}$Mg will tend to cluster around some energy. In order to ease the identification of states, a clustering algorithm was used to look for this behaviour in the peaks at each angle. For this problem, the \texttt{DBSCAN} algorithm was chosen \cite{Ester_1996}. \texttt{DBSCAN} is well suited to the problem because it does not require prior knowledge of the number of clusters, and one of its free parameters sets the minimum number of points needed in a region to construct a cluster, i.e., it mimics the requirement for a state to be observed at a minimum number of angles before it can be considered to be coming from $^{24}$Mg. The minimum number was set to $min=3$ in this case. The statistical energy variation between angles also needs to be accounted for. Membership in a cluster is controlled by the free parameter $eps$, which was set to $10$ keV. The implementation of \texttt{DBSCAN} in \texttt{scikit-learn} was used \cite{scikit-learn}. An example of the clustering is shown in Fig.~\ref{fig:clustering_energy}. In total, $54$ levels were found, with $49$ peaks classified as contaminants. It is still necessary to check the results of the method by hand, but it effectively transforms the problem of sorting the nearly $600$ peaks over all angles into checking the consistency of the $54$ states and $49$ contaminants.
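A minimal sketch of this clustering step is shown below, with hypothetical peak energies standing in for the predictions at each angle; the \texttt{eps} and \texttt{min\_samples} values match those quoted above.
\begin{verbatim}
# Sketch of state sorting with DBSCAN: peaks stable across angles cluster
# together, while kinematically shifting contaminants are labeled -1.
import numpy as np
from sklearn.cluster import DBSCAN

energies = np.array([11452.1, 11453.5, 11452.8,   # hypothetical 24Mg state
                     11390.2, 11388.9, 11391.0,   # another state
                     11200.0, 11345.0, 11510.0])  # drifting contaminant

labels = DBSCAN(eps=10.0, min_samples=3).fit_predict(energies.reshape(-1, 1))
print(labels)  # [0 0 0 1 1 1 -1 -1 -1]
\end{verbatim}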
\begin{figure}
\centering
\includegraphics[width=.8\textwidth]{Chapter-6/figs/Cluster_Find_States_for_thesis.pdf}
\caption{A subset of $14$ clusters found using \texttt{DBSCAN}. Colors and point shapes indicate which cluster the peak belongs to. Connecting lines are to guide the eye.}
\label{fig:clustering_energy}
\end{figure}
The energies presented in Table \ref{tab:my_excitation_energies} are the weighted average of the energies deduced at each angle. Fig.~\ref{fig:calibrated_na_energies} shows the location of the peaks in the astrophysical region of interest at $11^{\circ}$. Only states that were seen at three or more angles are reported. The additional uncertainty estimated by our Bayesian method, see Eq.~\ref{eq:calibration_bayesian_model}, also introduces a further complication into the weighted averaging between the angles. Since this uncertainty is estimated directly from the data, it will be influenced by systematic effects. These systematic effects introduce correlations between the deduced energies and uncertainties at each angle, which can become significant because of the high number of angles measured in this experiment. A clear indication of correlation was the observation that the deduced energies of our calibration points from the fit tend to agree with their input values at each angle, but a simple weighted average of these points yields a disagreement at a high level of significance. In order to account for possible correlations, the uncertainties on the weighted average were estimated using the methods of Ref.~\cite{Schmelling_1995}. This correction is done by calculating the $\chi^2$ value of the data with respect to the weighted average, $\bar{x}$, which is given by:
\begin{equation}
\label{eq:chi_sq}
\chi^2 = \sum_i^N \frac{(x_i - \bar{x})^2}{\sigma_i^2}.
\end{equation}
Since the expected value of $\chi^2$ is $N - 1$, the idea is to adjust the uncertainty of the weighted average, $\bar{\sigma}$, based on the deviation from $N-1$. For the case of positive correlations, $\chi^2 < N - 1$, and, therefore, $\bar{\sigma}$ will need to be adjusted by:
\begin{equation}
\label{eq:positive_corr_chi}
\sigma_{adj} = \sqrt{(N-\chi^2) \bar{\sigma}^2}.
\end{equation}
A separate estimate can also be made if the scatter in the data is not well described by the weighted average. In this case, $\chi^2 > N - 1$, which gives the adjustment:
\begin{equation}
\label{eq:negative_corr_chi}
\sigma_{adj} = \sqrt{\frac{\chi^2}{N-1} \bar{\sigma}^2}.
\end{equation}
To be conservative, the larger of these two values is adopted.
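A minimal sketch of this adjustment is given below, with placeholder inputs; it evaluates Eq.~\ref{eq:chi_sq} about the weighted mean and adopts the larger of the two corrections.
\begin{verbatim}
# Sketch of the chi2-based adjustment of Ref. [Schmelling 1995] described
# above; inputs are placeholders, not measured energies.
import numpy as np

def adjusted_average(x, sigma):
    x, sigma = np.asarray(x, float), np.asarray(sigma, float)
    w = 1.0 / sigma**2
    xbar = np.sum(w * x) / np.sum(w)
    sbar = 1.0 / np.sqrt(np.sum(w))
    N = x.size
    chi2 = np.sum((x - xbar)**2 / sigma**2)      # Eq. (eq:chi_sq)
    s_pos = np.sqrt(max(N - chi2, 0.0)) * sbar   # Eq. (positive_corr_chi)
    s_neg = np.sqrt(chi2 / (N - 1)) * sbar       # Eq. (negative_corr_chi)
    return xbar, max(s_pos, s_neg)               # conservative choice

print(adjusted_average([11823.2, 11822.5, 11824.0], [1.5, 1.8, 2.0]))
\end{verbatim}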
\begin{figure}
\centering
\includegraphics[width=\textwidth]{Chapter-6/figs/23Na_11deg_final_energies.pdf}
\caption{$\theta_{lab} = 11^{\circ}$ spectrum that has been zoomed in on the astrophysical region of interest. All of the peaks from $^{24}$Mg have been identified with the final weighted average energy value in keV.}
\label{fig:calibrated_na_energies}
\end{figure}
\begin{lscapenum}
\begin{table}
\caption{\label{tab:my_excitation_energies} $^{24}$Mg excitation energies from this work compared to those of Ref.~\cite{firestone_2007}. Because of the high number of states in certain regions, a unique identification of the observed state could not always be made. States used for the energy calibration are reported in italics, marked with $*$, and listed without uncertainties, but they do represent the mean values obtained after calibration.}
\begin{tabular}{lll|lll|lll}
\toprule
\toprule
This Work & ENSDF \cite{firestone_2007} & $J^{\pi}$ & This Work & ENSDF \cite{firestone_2007} & $J^{\pi}$ & This Work & ENSDF \cite{firestone_2007} & $J^{\pi}$ \\ \hline \vspace{-2mm}
& & & & & & & & \\
7364(14) & 7349.00(3) & $2^+$ & 10660.1(21) & 10659.58(13)& $1,2^+$ & 12050(3) & 12049.1(20) & $4^+$ \\
7555(13) & 7555.04(15) & $1^-$ & & 10660.03(4)& $(4^+)$ & 12121.5(17) & 12119.5(18) & $(2,3,4)^+$ \\
7752(10) & 7747.51(9) & $1^+$ & 10713.9(12) & 10711.74(17)& $1^+$ & & 12128.3(10) & \\
8362(4) & 8357.98(13) & $3^-$ & 10732.5(16) & 10730.79(11)& $2^+$ & 12182.3(22) & 12181.23(23) & $(1,2)^+$ \\
8441(4) & 8437.31(15) & $1^-$ & 10824.3(13) & 10820.7(4)& $3^+,4^+$ & \textit{12260}* & 12257.5(7) & $(3^-)$ \\
& 8439.36(4) & $4^+$ & \textit{10918}* & 10916.96(17)& $2^+$ & & 12257.69(21) & $2^+$ \\
\textit{8654}* & 8654.53(15) & $2^+$ & 11011(3) & 11010.5(14)& $3$ & 12342(3) & 12339.00(24) & $2^+, 3, 4^+$ \\
\textit{8864}* & 8864.29(9) & $2^-$ & & 11015.8(7)& $2^+$ & & 12344(3) & \\
9002.9(24) & 9003.34(9) & $2^+$ & 11201(5) & 11186.82(24) & & 12406.0(22) & 12403.3(7) & $2^+$ \\
9145.0(16) & 9145.99(15) & $1^-$ & & 11208.4(16) & & 12530.5(24) & 12526.5(8) & $1,2^+$ \\
9292.6(12) & 9284.22(14) & $2^+$ & 11317(3) & 11314.4(15) & $(3)^+$ & 12576(3) & 12577(3) & $2^+$ \\
& 9299.77(24) & & 11387.7(14) & 11389.8(11) & $1^-$ & \textit{12670}* & 12669.9(2) & $2$ \\
\textit{9460}* & 9457.81(4) & $(3)^+$ & & 11394(4) & & 12738(3) & 12737.1(9) & $2^+$ \\
9520(3) & 9516.28(4) & $(4^+)$ & 11453.2(21) & 11452.51(13) & $2^+$ & 12819(4) & 12815.9(6) & $1^+,2^+$ \\
& 9527.8(21) & $(6^+)$ & & 11457(3) & $0^+$ & 12854(3) & 12850.3(8) & \\
9837.2(25) & 9828.11(11) & $1^+$ & 11520.3(23) & 11518.2(6) & $2^+$ & 12924(4) & 12919.7(7) & $2^+, 3, 4^+$ \\
9977(4) & 9967.19(22) & $1^+$ & 11688.7(14) & 11695.6(6) & $4^+$ & 12965(4) & 12961.9(8) & $1^+, 2^+$ \\
10021(3) & 10027.97(9) & $5^-$ & 11823(3) & 11827(4) & \\
10055(3) & 10058.54(16) & $(1,2)^+$ & & 11862(5) & \\
10163.2(19) & 10161(3) & $0^+$ & 11857(3) & 11864.9(13) & $1^-$ \\
10328.1(18) & 10333.29(13) & & 11935(3) & 11931.2(6) & \\
\textit{10358}* & 10360.51(13) & $2^+$ & 11989.3(14) & 11987.72(10) & $2^+$ \\
10572.7(21) & 10576.02(7) & $4^+$ & 12014(3) & 12015.2(8) & $3^-$ \\
\bottomrule
\bottomrule
\end{tabular}
\end{table}
\end{lscapenum}
\restoregeometry
\subsection{The Energies Reported by Hale \textit{et al}.}
\label{sec:hale_discussion}
Our energies are in fair agreement with those previously reported. However, in the astrophysical region of interest, there is significant disagreement between the present values and those of Ref.~\cite{hale_2004}. Of particular concern is the state corresponding to the $138$-keV resonance, whose mean value falls $\approx 9$ keV below what is reported in Ref.~\cite{hale_2004}. Furthermore, the previous measurement was also performed at TUNL using the SPS.
Studying the information reported in Ref.~\cite{hale_2004}, it was noticed that energies were only reported for a region of $\approx 400$ keV. Outside of this region, the identities of twelve states were assumed for the calibration. Of these twelve states, the most interior, i.e., the last states before the interpolated region, were those identified as $11330$ keV and $12184$ keV. Comparing the spectrum from this work with that shown in Fig.~3 of Ref.~\cite{hale_2004}, the state labeled $11330$ keV in their spectrum corresponds to the state that our calibration identified as $11317(3)$ keV. Ref.~\cite{firestone_2007} lists two states around this energy range, one with $E_x = 11314.4(15)$ keV and the other with $E_x = 11330.2(10)$ keV. Neither of these states has a definite spin-parity in the current evaluation, but the compilation of Ref.~\cite{endt_eval_1990}, which was used as a reference in the previous study, identified the lower energy state as $(3,4)^+$ and the higher as $(2^+ \text{-} 4^+)$. These assignments seem to be in tension with the $(p, p^{\prime})$ angular distribution of Ref.~\cite{zwieglinski_1978}, which assigns the lower lying state $\ell=3$, giving $J^{\pi} = 3^{-}$. However, Ref.~\cite{Warburton_1981} reports $\log ft = 5.19(14)$ for $^{24}$Al$(\beta^+)$ (ground state $J^{\pi} = 4^+$), which, based on the empirical rules derived in Ref.~\cite{Raman_1973}, requires an allowed decay giving $(3, 4, 5)^+$ for this state. In light of these discrepancies, it is hard to reach a firm conclusion about the identity of the state populated in this work and in Ref.~\cite{hale_2004}.
One method to resolve the disagreement is to recalibrate our data using the calibration states of the previous study. This cannot be considered a one-to-one comparison because of the Bayesian method used to calibrate the focal plane, but it should show the impact of misidentifying the state around $11320$ keV. To be specific, I consider three sets of energies:
\begin{enumerate}
\item The results of this work from Section \ref{sec:energy_cal_na} (Set $\#1$).
\item The peak centroids of this work energy calibrated using the calibration states of Hale \textit{et al.} (Set $\#2$).
\item The energies reported in Ref.~\cite{hale_2004} (Set $\#3$).
\end{enumerate}
Table \ref{tab:hale_comp_energies} reports these three sets of energies. As can be seen, using the same calibration for our data (Set $\#2$) produces results consistent with the previous study (Set $\#3$).
The above discussion presents the evidence that led to the decision to exclude the excitation energies of Ref.~\cite{hale_2004} from the recommended energies of the current work. There is reasonable cause to do this at the current time, but further experiments are needed to firmly resolve this issue.
\begin{table}
\centering
\setlength{\tabcolsep}{8pt}
\caption{ \label{tab:hale_comp_energies} Comparison of the $^{24}$Mg excitation energies measured in this work (Set $\# 1$), the excitation energies derived from our data if the calibration of Hale \textit{et al.} is used (Set $\# 2$), and finally the energies Hale \textit{et al.} reported in Ref.~\cite{hale_2004} (Set $\# 3$). These results indicate that the state close to $11320$ keV was previously misidentified, and, as a result, led to systematically higher excitation energies.}
\begin{threeparttable}
\begin{tabular}{lllllllll}
\toprule
\toprule
Set $\# 1$ & Set $\# 2$ & Set $\# 3$ \\ \hline
$11688.7(14)$ & $11695(3)$ & $11698.6(13)$ \\
$11823(3)$ & $11828(3)$ & $11831.7(18)$ \\
$11857(3)$ & $11860.1(19)$ & $11862.7(12)$ \\
$11935(3)$ & $11937.5(17)$ & $11936.5(12)$ \\
& & $11965.3(12)^{\dagger}$ \\
$11989.3(14)$ & $11991.2(17)$ & $11992.9(12)$ \\
$12014(3)$ & $12016.2(16)$ & $12019.0(12)$ \\
$12050(3)$ & $12051.4(17)$ & $12051.8(12)$ \\
\bottomrule
\bottomrule
\end{tabular}
\begin{tablenotes}
\item[$\dagger$] Ref.~\cite{hale_2004} reports this state, which appears as an unresolved peak in their spectrum. The current study does not find a corresponding peak in the same region.
\end{tablenotes}
\end{threeparttable}
\end{table}
\subsection{Suggested Energies for Astrophysically Relevant States}
The recommended energies, based on the measurements of this work, the compilation of Section \ref{sec:energy_level_update}, and the removal of Ref.~\cite{hale_2004} from consideration, are presented in Table~\ref{tab:recommened_energies}. Note that states not measured in this work are listed for completeness. All values come from a weighted average, except for the $11695$-keV state. This state shows extreme tension between the two most precise measurements, which are those of this work and those of Ref.~\cite{schmalbrock_1983}. To reflect this disagreement in the uncertainty, the \textit{expected value method} of Ref.~\cite{BIRCH_2014} was used.
\begin{table}[]
\centering
\setlength{\tabcolsep}{8pt}
\caption{ \label{tab:recommened_energies} The recommended resonance energies for astrophysically relevant states. These energies were derived from the energies of this work and the compiled energies of Section \ref{sec:energy_level_update}, using the $Q$-value calculated from Ref.~\cite{Wang_2017}.}
\begin{tabular}{ll}
\toprule
\toprule
$E_x$ (keV) & $E_r$ (keV) \\ \hline
$11389.6(12)$ & $-303.09(12)$ \\
$11452.9(4) $ & $-239.8(4)$ \\
$11521.1(14)$ & $-171.6(14)$ \\
$11695(5) $ & $2(5)$ \\
$11729.8(16)$ & $37.1(16)$ \\
$11825(3) $ & $132(3)$ \\
$11860.8(14)$ & $168.1(14)$ \\
$11933.06(19)$ & $240.37(19)$ \\
$11966.7(5) $ & $274.2(8)$ \\
$11988.45(6)$ & $295.76(6)$ \\
$12016.8(5) $ & $324.1(5)$ \\
$12051.3(4) $ & $358.6(4)$ \\
$12119.5(9) $ & $426.8(9)$ \\
$12183.3(1) $ & $490.6(1)$ \\
$12341.0(4) $ & $648.3(4)$ \\
$12405.3(3) $ & $712.6(3)$ \\
$12528.5(6) $ & $835.8(6)$ \\
$12576(3) $ & $883(3) $ \\
$12669.9(4) $ & $977.2(4)$ \\
$12738.8(7) $ & $1046.1(7)$ \\
$12817.77(19)$ & $1125.08(19)$ \\
$12852.2(5) $ & $1159.5(5) $ \\
$12921.6(4) $ & $1228.9(4) $ \\
$12963.9(5) $ & $ 1271.2(5) $ \\
\bottomrule
\bottomrule
\end{tabular}
\end{table}
\section{Background Subtraction for the Region of Interest}
\label{sec:background_subtraction}
Energy calibration revealed that the $11823$-keV and $11857$-keV states were obscured at multiple angles by the peak corresponding to the $7276$-keV state in $^{15}$O. This state arises from $^{14}$N contamination in the target, and its overlap with the $11857$-keV state can be seen in Fig.~\ref{fig:calibrated_na_energies}. Due to the unknown spin and parity of the $11823$-keV state, this contaminant peak presents a major challenge because it overlaps most strongly at the lower angles, which, in turn, have the most impact on the inferred $\ell$ assignment. In order to accurately subtract the contribution from the contaminant peak, it was decided to analyze these three states simultaneously using a Bayesian model. Because the spin and parity of the $11857$-keV state ($1^{-}$) and the $7276$-keV state (${7/2}^+$) of $^{15}$O are known, simultaneous analysis of this region can guarantee that the extracted yields are consistent with all known information.
Since the three states are not clearly resolved, it is necessary to constrain the shape, area, and location of the contaminant peak. The data at $\theta_{lab} = 13^{\circ}$ and $15^{\circ}$ are the only spectra which show a clear signal from the nitrogen contamination. Thus, it was decided to use the average of the peak parameters at these two angles to model the contribution of the nitrogen peak at the other angles. The centroid location can, in principle, be predicted from the energy calibration. However, the internal calibration for the sodium states does not accurately account for energy loss in the target. It was necessary to systematically shift the predicted peak locations by $\approx 4$ channels in order to match the observations at $13^{\circ}$ and $15^{\circ}$. Additionally, at several angles there were sufficient statistics in the background runs on the $^{\textnormal{nat}}$C target to check this correction. The $4$ channel offset was found to give excellent agreement with these observations. This agreement also indicates that the nitrogen is primarily located in the carbon backing.
The predicted area of the $^{15}$O state for the angles at which it was not directly observed was calculated using DWBA. Determining this quantity absolutely is infeasible without knowing the amount of nitrogen in the target and the spectroscopic factor of the state. However, the ratio of the DWBA cross sections gives a scaling factor that can convert the observed yields at $13^{\circ}$ and $15^{\circ}$ to predicted yields at the angles of interest. Furthermore, a ratio reduces the dependence on the optical potential chosen for the calculations. Thus, the predicted yield at an angle $\theta_i$ is given by:
\begin{equation}
\label{eq:predicted_yield_N}
\frac{dY}{d \Omega}(\theta_i) = \bigg[ \frac{d \sigma}{d \Omega}\underset{\textnormal{DWBA}}{(\theta_i)} / \frac{d \sigma}{d \Omega}\underset{\textnormal{DWBA}}{(\theta_{13, 15})} \bigg] \frac{dY}{d \Omega}(\theta_{13, 15}),
\end{equation}
where $\theta_{13,15}$ denotes either $13^{\circ}$ or $15^{\circ}$. The results of Ref.~\cite{bertone} indicate that the $7276$-keV state of $^{15}$O is characteristic of an $\ell = 2$ transfer. A zero-range DWBA calculation was carried out for a $1d_{5/2}$ state using the Becchetti and Greenlees $^{3}$He global optical potential, Ref.~\cite{b_g_3he}, and the Daehnick $d$ optical potential \cite{daehnick_global}. Another calculation was carried out for the potentials listed in Ref.~\cite{bertone}. As expected, these two results were in excellent agreement, with a deviation of $< 1 \%$ for angles $< 20^{\circ}$. The global calculations were adopted, and the yields from both $13^{\circ}$ and $15^{\circ}$ were scaled for angles $5^{\circ} \text{-} 11^{\circ}$. The average of these two predictions was adopted. These yields were then transformed back into the lab frame, and Eq.~\ref{eq:number_of_reactions} and Eq.~\ref{eq:number_of_beam} were used to calculate the expected number of counts in the focal plane spectrum.
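As a concrete illustration, the scaling of Eq.~\ref{eq:predicted_yield_N} reduces to a one-line ratio; the sketch below uses placeholder yields and cross sections rather than the calculated values.
\begin{verbatim}
# Sketch of the DWBA yield scaling in Eq. (predicted_yield_N);
# numbers are placeholders, not the calculated cross sections.
def predicted_yield(y_ref, dwba_ref, dwba_target):
    """Scale the observed 15O yield at a reference angle to a target angle."""
    return (dwba_target / dwba_ref) * y_ref

print(predicted_yield(y_ref=120.0, dwba_ref=0.85, dwba_target=1.40))
\end{verbatim}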
A Bayesian model was constructed to account for all of the above considerations. The priors for the parameters of the contaminant peak are given by the predictions of the energy calibration, the observed FWHM at $13^{\circ}$ and $15^{\circ}$, and the DWBA calculations for the areas. The DWBA areas were assigned an uncertainty of $30 \%$. The priors for the states of interest were constructed based on the rough locations of the peaks in the doublet and the widths of nearby states, with flat priors for the areas. A linear background was also considered, and its priors were broad normal distributions. The likelihood function was selected to be a Poisson distribution for each bin:
\begin{equation}
\label{eq:poisson_like}
\textnormal{Poisson}(\mathbf{C}, \mathbf{D}) = \prod_i^N \frac{D_i^{C_i}}{C_i!}e^{-D_i},
\end{equation}
where $C_i$ is the number of counts in bin $i$, $D_i$ is the total number of predicted counts in bin $i$, and $N$ is the total number of bins. The model becomes:
\begin{align}
\label{eq:background_bayesian_model}
& \textnormal{Contaminant Priors:} \nonumber \\
& c^{\prime} \sim \mathcal{N}(c_{cal}, \sigma^2_{cal}) \nonumber \\
& w^{\prime} \sim \mathcal{N}(3.878, \{0.794\}^2) \nonumber \\
& A^{\prime} \sim \mathcal{N}(A_{\textnormal{DWBA}}, \{ 0.30A_{\textnormal{DWBA}} \}^2) \nonumber \\
& \textnormal{Other Priors:} \nonumber \\
& c_{j} \sim \mathcal{N}(c^{obs}_{j}, 5.0^2) \nonumber \\
& w_j \sim \mathcal{N}(w_{obs}, 1) \nonumber \\
& A_j \sim \textnormal{Uniform}(0, 10^4) \\
& a \sim \mathcal{N}(1, 100^2) \nonumber \\
& b \sim \mathcal{N}(100, 100^2) \nonumber \\
& \textnormal{Function:} \nonumber \\
& B_{i} = a x_{i} + b \nonumber \\
& f(x_i ; A, w, c) = \frac{A}{\sqrt{2 \pi w^2}} e^{-\frac{(x_i-c)^2}{2 w^2}} \nonumber \\
& S_i = f(x_i; A^{\prime}, w^{\prime}, c^{\prime}) + \sum_j f(x_i; A_j, w_j, c_j) \nonumber \\
& D_{i} = S_i + B_i \nonumber \\
& \textnormal{Likelihood:} \nonumber \\
& \textnormal{Poisson}(\mathbf{C}, \mathbf{D}), \nonumber
\end{align}
where the primed variables correspond to the contaminant peak, the bin number is denoted $i$, the $x$ position of the bin is $x_i$, $cal$ denotes the predictions of the energy calibration, \say{DWBA} denotes the areas derived from the $\textnormal{DWBA}$ scaling, $obs$ refers to the observed widths of nearby peaks and the rough position of the doublet peaks, $j$ is the index that runs over the two peaks, $a$ is the slope of the linear background, $b$ is its intercept, $B_i$ is the total number of background counts in bin $i$, and $S_i$ is the total number of signal counts in bin $i$ (contaminant peak plus $^{24}$Mg peaks). This model was applied to each angle individually, and the credibility intervals for the resulting peaks are shown in Fig.~\ref{fig:background_subtraction_roi}. As a check, the posterior distributions for the number of counts in the contaminant peak were used to recalculate the expected yields, which were then compared to the DWBA cross section. This comparison is shown in Fig.~\ref{fig:dwba_14N}, and it shows that the posterior values are consistent with the theoretical angular distribution that generated their priors. Once the background subtraction yielded the parameters for the peaks, the centroids were plugged into the energy calibrations and the yields were calculated. At $5^{\circ}$, the $11823$-keV state had a posterior area that was consistent with zero. This angle was treated as an upper limit and was excluded from the weighted average for the energy and from subsequent DWBA calculations.
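A minimal sketch of the per-angle log-posterior implied by Eq.~\ref{eq:background_bayesian_model} is given below; the parameter packing and the prior container are illustrative choices rather than the exact implementation used here.
\begin{verbatim}
import numpy as np
from scipy.special import gammaln

def gauss(x, A, w, c):
    # Gaussian peak with area A, width w, and centroid c.
    return A / np.sqrt(2*np.pi*w**2) * np.exp(-(x - c)**2 / (2*w**2))

def log_posterior(params, x, counts, normal_priors):
    """One contaminant peak, two peaks of interest, and a linear
    background with a Poisson likelihood (Eq. poisson_like).
    normal_priors is a list of (mu, sd) pairs for the parameters with
    normal priors, in the order used below."""
    Ap, wp, cp, A1, w1, c1, A2, w2, c2, a, b = params
    if not (0.0 <= A1 <= 1e4 and 0.0 <= A2 <= 1e4):
        return -np.inf   # flat priors on the areas of interest
    lp = 0.0
    for val, (mu, sd) in zip((Ap, wp, cp, w1, c1, w2, c2, a, b),
                             normal_priors):
        lp += -0.5 * ((val - mu) / sd)**2
    D = (gauss(x, Ap, wp, cp) + gauss(x, A1, w1, c1)
         + gauss(x, A2, w2, c2) + a*x + b)
    if np.any(D <= 0.0):
        return -np.inf   # Poisson rate must be positive
    return lp + np.sum(counts*np.log(D) - D - gammaln(counts + 1.0))
\end{verbatim}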
\begin{figure}
\centering
\includegraphics[width=.75\textwidth]{Chapter-6/figs/peaks_angles.png}
\caption{Results of the Bayesian peak fitting. The bin numbers have been shifted to approximately align the $11823$ and $11857$ keV peaks. This procedure shows the clear kinematic shift of the $^{15}$O$_6$ peak, and the corresponding change in the shape of the doublet. Dark bands are $68 \%$ credibility intervals, and the light bands are $95 \%$ intervals. }
\label{fig:background_subtraction_roi}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.8\textwidth]{Chapter-6/figs/14N_DWBA.pdf}
\caption{DWBA prediction for the $7276$ keV state in $^{15}$O using the global potentials described above. The results of the posterior values for the yields are the pink triangles, while the measured yields are shown in orange. This shows that the background subtraction is consistent with the known information about this state.}
\label{fig:dwba_14N}
\end{figure}
\section{Yield Determination}
Extraction of $C^2S$ for a state requires that the absolute scale of the differential cross section be known. Uncertainties associated with the target thickness and beam integration in the current experiment make the determination of an absolute scale subject to uncontrolled systematic effects. Beam and target effects can be removed by normalizing the data from the focal plane to the $^{3}$He elastic scattering measured by the monitor detector positioned at $45^{\circ}$. An absolute scale can then be established by inferring an overall normalization from the comparison of the relative elastic scattering angular distribution collected in the focal plane to the optical model predictions. Similar approaches can be found in Refs.~\cite{hale_2001, hale_2004, vernotte_optical}.
To clarify the point of the relative measurement, let us assume that two detectors measure the reaction products of a single reaction on a single target. In this case, detector $1$ is sensitive to the differential cross section $\frac{d \sigma}{d \Omega}( \theta_1 )$ at laboratory angle $\theta_1$, while detector $2$ measures $\frac{d \sigma}{d \Omega}( \theta_2 )$ at laboratory angle $\theta_2$. The ratio of these two measurements will, therefore, be insensitive to properties shared by the cross section at both angles. These include the target stoichiometry, target thickness, and charge collection. Thus, if a relative measurement is made, the ratio will be free from the influence of these effects, at the cost of no longer being directly comparable to theoretical calculations of the cross section. Considering the above, all that needs to be calculated in the case of a relative measurement is the differential reaction yield for each detector:
\begin{equation}
\label{eq:reaction_yield}
\frac{dY}{d \Omega} = \frac{N_R}{N_B \Omega},
\end{equation}
where $N_R$ is the number of reactions, $N_B$ is the number of beam particles, and $\Omega$ is the detector solid angle. Returning to the $^{23}$Na$(^3 \textnormal{He}, d)$ data, $N_R$ is proportional to the area of a peak on the focal plane or the monitor detector. This area needs to be corrected for the livetime of the DAQ:
\begin{equation}
\label{eq:number_of_reactions}
N_R = \frac{A}{t_{live}}.
\end{equation}
$N_B$ is derived from the BCI reading, a scaler value that counts the pulses produced by the beam current integrator. The BCI full scale for this experiment was set to $10^{-10}$ C$/$pulse. Combining these terms:
\begin{equation}
\label{eq:number_of_beam}
N_B = \frac{BCI \times full\text{-}scale}{eq},
\end{equation}
where $q$ is the charge state of the beam and $e$ is the elementary charge. Finally, the geometric solid angles of the detectors are $\Omega_{FP} = 1.00(4)$ msr and $\Omega_{Si} = 4.23(4)$ msr for the focal plane and monitor detector, respectively.
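In code, the bookkeeping of Eqs.~\ref{eq:reaction_yield}--\ref{eq:number_of_beam} is only a few lines; the charge state $q = 2$ below is an assumed value used purely for illustration.
\begin{verbatim}
E_CHARGE = 1.602176634e-19   # elementary charge in C

def n_reactions(area, t_live):
    # Eq. (number_of_reactions): livetime-corrected peak area.
    return area / t_live

def n_beam(bci_pulses, full_scale=1e-10, q=2):
    # Eq. (number_of_beam): BCI pulses -> number of beam particles.
    return bci_pulses * full_scale / (E_CHARGE * q)

def diff_yield(area, t_live, bci_pulses, omega, q=2):
    # Eq. (reaction_yield); omega in msr gives dY/dOmega in msr^-1.
    return n_reactions(area, t_live) / (n_beam(bci_pulses, q=q) * omega)
\end{verbatim}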
\subsection{Monitor Detector Analysis}
Extraction of the areas for peaks in the focal plane was already discussed in Section \ref{sec:peak_fitting_fp}. However, the relative method also requires a detailed analysis of the monitor detector spectrum, with the goal of extracting the counts from $^{3}$He elastically scattered off $^{23}$Na. Fig.~\ref{fig:e_de_si} shows the spectrum obtained with the monitor detector positioned at $45^{\circ}$, as it was throughout the experiment. Although the $^3$He band overlaps mildly with the $^4$He band, it was still possible to gate effectively on the elastic scattering states.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{Chapter-6/figs/E_dE_Si.pdf}
\caption{2D $E/ \Delta E$ for the monitor detector. This spectrum was taken at $45^{\circ}$. The orange contour is drawn around the elastic scattering peaks. It can be seen that the intensity of the Br peaks caused significant pile up in the $\Delta E$ detector in addition to channeling.}
\label{fig:e_de_si}
\end{figure}
Peak fitting was performed on the gated $E$ spectrum. An energy calibration was not performed owing to the simplicity of the region of interest. In particular, the only states that can be easily identified and used for calibration are the elastic scattering peaks, which are also the only peaks of interest. An energy calibration would also allow the creation of a $\Delta E + E$ spectrum that could be used to improve resolution, but the resolution of the gated $E$ spectrum was sufficient to make this step unnecessary. The monitor detector yields were analyzed for each individual DAQ run file. These run files were started and stopped approximately every hour during data taking. The data were examined in this granular fashion in order to look for systematic target degradation, which, if present, would also need to be accounted for in the analysis of the focal plane spectrum. Gaussian fits with linear backgrounds were used to extract the areas of the carbon and sodium peaks. An example fit for one of the run files is shown in Fig.~\ref{fig:si_spectrum}. The extracted counts for the sodium peak were highly sensitive to the selected fitting region. This sensitivity was caused by the tail of the bromine peak: changes in the fit region could produce changes of $\approx 10 \%$ in the extracted area. To reduce variations between spectra, the same region was used in every fit.
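A sketch of the per-run fit is shown below; the fit window limits and starting values are placeholders, and the key point is that the same window is reused for every run.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def peak_plus_line(x, A, w, c, a, b):
    # Gaussian with area A on a linear background.
    return (A/np.sqrt(2*np.pi*w**2) * np.exp(-(x - c)**2/(2*w**2))
            + a*x + b)

def fit_sodium_peak(channels, counts, lo, hi, p0):
    """Fit the sodium peak over the fixed region [lo, hi) to suppress
    the ~10% sensitivity of the area to the choice of window."""
    sel = (channels >= lo) & (channels < hi)
    popt, pcov = curve_fit(
        peak_plus_line, channels[sel], counts[sel], p0=p0,
        sigma=np.sqrt(np.maximum(counts[sel], 1.0)))
    return popt[0], np.sqrt(pcov[0, 0])   # area and its uncertainty
\end{verbatim}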
\begin{figure}
\centering
\includegraphics[width=\textwidth]{Chapter-6/figs/si_spectrum.pdf}
\caption{Monitor detector spectrum for a single run. An example of the Gaussian fit used to deduce the area of the sodium peak is shown.}
\label{fig:si_spectrum}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{Chapter-6/figs/Carbon_Na.pdf}
\caption{Yield fluctuations for the carbon and sodium peaks. All of these runs used the same NaBr target. These runs constitute the data for angles $9^{\circ} \text{-} 15^{\circ}$.}
\label{fig:yield fluctuations}
\end{figure}
It can be seen in Fig.~\ref{fig:yield fluctuations} that the sodium peak experienced far more variance than the carbon peak. The statistical uncertainties for all of these points amount to less than $2 \%$ because of the forward angle of the detector. Several causes for this behavior were explored, such as a changing beam angle, DAQ issues, and target effects. The only satisfactory explanation comes from target effects. This conclusion is based on the relatively low variation in the carbon yield, the fact that the majority of the carbon yield comes from the target backing, and the gradual reduction in the variation of the sodium yield. A plausible explanation in light of these facts is that the evaporation of the NaBr targets produced a highly nonuniform layer of material. This nonuniformity increased the magnitude of the fluctuations relative to those seen in the carbon backing. Once the target had been exposed to beam for some period of time, the nonuniform layer was sputtered off, and the fluctuations in the yields decreased.
The fluctuations in the monitor detector yields were accounted for by taking a simple average of the observations at each angle. These fluctuations are systematic in origin, but since the measurements scatter significantly from run to run, the run-to-run variation was treated as an additional statistical uncertainty when the averages were used to normalize the focal plane data. This approach gives significant additional contributions to the total uncertainties at $5^{\circ}$ and $9^{\circ}$, where the scatter amounts to $24 \%$ and $14 \%$, respectively. The average yields and their associated uncertainties are shown in Table \ref{tab:silicon_table}. Close inspection of these data supports the hypothesis that the targets initially had a nonuniform layer, with the highest percentage uncertainties corresponding to the first angle measured with each target (note that $3^{\circ}$ was measured after $17^{\circ}$).
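The averaging itself is straightforward; whether the scatter enters as the standard deviation of the runs or the standard error of their mean is not spelled out above, so the standard error used here is an assumption of this sketch.
\begin{verbatim}
import numpy as np

def average_monitor_yield(run_yields):
    """Average the per-run monitor yields at one angle and fold the
    run-to-run scatter into the uncertainty."""
    y = np.asarray(run_yields, dtype=float)
    return y.mean(), y.std(ddof=1) / np.sqrt(len(y))
\end{verbatim}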
\begin{table}
\centering
\setlength\tabcolsep{6pt}
\def1.2{1.2}
\caption{\label{tab:silicon_table} Average yield values for the monitor detector at each SPS angle. The monitor detector was fixed at an angle of $\theta_{mon} = 45^{\circ}$.}
\begin{tabular}{lcl}
\toprule
\toprule
$\theta_{SPS}$ & NaBr Target & Average Yield (msr$^{-1}$) \\ \hline
$3^{\circ}$ & $\#3$ & $2.84(12) \times 10^{-9}$ \\
$5^{\circ}$ & $\#1$ & $1.9(4)\times 10^{-9}$ \\
$7^{\circ}$ & $\#1$ & $2.18(12)\times 10^{-9}$ \\
$9^{\circ}$ & $\#2$ & $2.6(4)\times 10^{-9}$ \\
$11^{\circ}$ & $\#2$ & $2.75(22)\times 10^{-9}$ \\
$13^{\circ}$ & $\#2$ & $3.70(6)\times 10^{-9}$ \\
$15^{\circ}$ & $\#2$ & $2.54(5) \times 10^{-9}$ \\
$17^{\circ}$ & $\#3$ & $2.79(18) \times 10^{-9}$ \\
$19^{\circ}$ & $\#3$ & $2.57(15) \times 10^{-9}$ \\
$21^{\circ}$ & $\#3$ & $2.40(7) \times 10^{-9}$ \\
\bottomrule
\bottomrule
\end{tabular}
\end{table}
\subsection{Final Form of the Data}
\label{sec:scale_of_na_data}
With the extraction of the monitor detector yields, it is now possible to express all of the angular distributions in terms of the ratio:
\begin{equation}
\label{eq:yield_ratio}
\frac{dY}{d \Omega}(\theta_i) = \bigg[ \frac{d Y}{d \Omega}_{FP}(\theta_i) \bigg/ \frac{d Y}{d \Omega}_{mon} \bigg],
\end{equation}
for the yield of a peak measured by the spectrograph at center-of-mass angle $\theta_i$. The dependence of these quantities on the solid angle means they must be converted to the center-of-mass frame before the ratio is taken. Each ratio comes with an uncertainty given by:
\begin{equation}
\label{eq:yield_ratio_unc}
\sigma_{Y} = \sqrt{\sigma_{FP}^2 + \sigma_{mon}^2},
\end{equation}
where $\sigma_{FP}$ is the statistical uncertainty of the yield measured on the focal plane, while $\sigma_{mon}$ is the statistical uncertainty of the monitor detector yield, which includes both the counting uncertainty and the run-to-run scatter from the averaging.
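A short sketch of Eqs.~\ref{eq:yield_ratio} and \ref{eq:yield_ratio_unc}, with the $\sigma$ interpreted as fractional uncertainties (an assumption of this sketch):
\begin{verbatim}
import numpy as np

def relative_yield(y_fp, sig_fp, y_mon, sig_mon):
    """Focal-plane yield normalized to the monitor yield, both already
    converted to the center-of-mass frame; fractional uncertainties
    are added in quadrature."""
    ratio = y_fp / y_mon
    return ratio, ratio * np.hypot(sig_fp/y_fp, sig_mon/y_mon)
\end{verbatim}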
\section{Bayesian DWBA Analysis}
All of the measured angular distributions are expressed in terms of a ratio to the elastic scattering measured in the monitor detector at $\theta_{mon} = 45^{\circ}$. In order to extract spectroscopic factors, an absolute normalization must be obtained for these data. Following the Bayesian method of Section \ref{sec:bay_dwba}, this is done by estimating the posterior distribution for $\eta$, which normalizes the measured elastic scattering data for the entrance channel to the predictions of the chosen optical potential. Since this analysis simultaneously extracts $\eta$ as well as $C^2S$, the uncertainty in the normalization is naturally reflected in the uncertainty of $C^2S$.
As will be shown, the present elastic scattering data set differs substantially from the $^{70}$Zn$(d, d)$ data set. It is therefore necessary to modify the Bayesian model to account for these differences.
\subsection{Global Potential Selection}
The first attempts to fit the elastic scattering data used the optical model from the lab report of Becchetti and Greenlees \cite{b_g_3he}. The imaginary depth of this potential for a beam of $E_{^{3}\textnormal{He}}=21.249$ MeV on $^{23}$Na is $36$ MeV. This is nearly twice as deep as the values reported in the more recent works of Trost \textit{et al.} \cite{TROST_1980}, Pang \textit{et al.} \cite{pang_global}, and Liang \textit{et al.} \cite{Liang_2009}. Although these works prefer a surface potential, the work of Vernotte \textit{et al.} \cite{vernotte_optical} is parameterized by a volume depth and also favours depths around $20$ MeV. The overly deep imaginary well is a major issue for the Bayesian analysis method because the priors are centered around the global values (see Section \ref{sec:model}). It was observed that the data preferred a lower depth, thereby causing a bi-modal posterior, with one mode centered around the global depth and the other resulting from the influence of the data. Based on these results, a decision was made to use the Liang potential (Ref.~\cite{Liang_2009}) because of its applicability in the present mass and energy range. It should be noted that the potential as presented in Ref.~\cite{Liang_2009} has an imaginary spin-orbit part, but there is little evidence from their data set to support its inclusion in the optical model. Therefore, the imaginary portion of the spin-orbit term has been excluded from this analysis. Finally, these results argue against the future use of the Becchetti and Greenlees $^3$He and $t$ optical potential.
All of the potentials used in the following analysis are listed in Table~\ref{tab:opt_parms_na}. The deuteron potential is the non-relativistic \say{L} potential from Ref.~\cite{daehnick_global}. Since the region of interest is $11$-$12$ MeV in excitation energy, the outgoing deuterons will have an energy of $E_d \approx E_{^3\textnormal{He}} + Q_{(^3\textnormal{He}, d)} - E_x$. The potential was thus calculated for deuteron scattering at $15.5$ MeV. The bound state spin-orbit depth was set to roughly satisfy $\lambda = 25$, with $\lambda \approx 180 V_{so} / V$ for the values of $V$ required to reproduce the binding energies of the states of interest.
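The two back-of-envelope relations above amount to the following; the $Q$-value is taken from mass tables and is an assumption of this sketch rather than a number quoted in the text.
\begin{verbatim}
E_3HE = 21.249    # MeV, 3He beam energy
Q_HE3_D = 6.2     # MeV, approximate Q-value of 23Na(3He,d)24Mg (assumed)
E_X = 11.9        # MeV, representative excitation energy

E_d = E_3HE + Q_HE3_D - E_X   # outgoing deuteron energy, ~15.5 MeV

def vso_from_lambda(V, lam=25.0):
    # lambda ~ 180 * V_so / V  =>  V_so = lam * V / 180
    return lam * V / 180.0
\end{verbatim}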
\begin{table*}[ht]
\centering
\begin{threeparttable}[e]
\caption{\label{tab:opt_parms_na}Optical potential parameters used in this work before inference.}
\setlength{\tabcolsep}{4pt}
\begin{tabular}{cccccccccccc}
\toprule[1.0pt]\addlinespace[0.3mm] Interaction & $V$ & $r_{0}$ & $a_{0}$ & $W$ & $W_{s}$ & $r_{i}$ & $a_{i}$ & $r_{c}$ & $V_{so}$ & $r_{so}$ & $a_{so}$ \\
& (MeV) & (fm) & (fm) & (MeV) & (MeV) & (fm) & (fm) & (fm) & (MeV) & (fm) & (fm)\\ \hline\hline\addlinespace[0.6mm]
\hspace{0.15cm} $^{3}$He $+ ^{23}$Na \tnote{a} & $117.31$ & $1.18$ & $0.67$ & & $19.87$ &$1.20$ & $0.65$ & $1.29$ & $2.08$ & $0.74$ & $0.78$ \\
$d$ $+$ $^{24}$Mg\tnote{b} & $88.1$ & $1.17$ & $0.74$ & $0.30$ & $12.30$ & $1.32$ & $0.73$ & $1.30$ & $6.88$ & $1.07$ & $0.66$ \\
\hspace{0.1cm}$p$ $+$ $^{23}$Na & \tnote{c} & $1.25$ & $0.65$ & & & & & $1.25$ & $6.24$ & $1.25$ & $0.65$ \\[0.2ex]
\bottomrule[1.0pt]
\end{tabular}
\begin{tablenotes}
\item[a] Global potential of Ref. \cite{Liang_2009}.
\item[b] Global potential of Ref. \cite{daehnick_global}.
\item[c] Adjusted to reproduce binding energy of the final state.
\end{tablenotes}
\end{threeparttable}
\end{table*}
\subsection{Elastic Scattering}
As stated previously, elastic scattering yields were measured for $15^{\circ} \text{-} 55^{\circ}$ in $5^{\circ}$ steps and finally at $59^{\circ}$, for a total of $10$ angles. The yields at each angle were normalized to those measured by the monitor detector. A further normalization to the Rutherford cross section was applied to the elastic scattering data to ease the comparison to the optical model calculations.
Low angle elastic scattering cross sections in normal kinematics can be collected to almost arbitrary statistical precision, with the present data having statistical uncertainty
of approximately $2 \text{-} 7 \%$. It is likely that in this case the residuals between these data and the optical model predictions are dominated by theoretical and experimental systematic uncertainties. The Bayesian model can thus be modified to consider an additional unobserved uncertainty in the elastic channel:
\begin{equation}
\label{eq:elastic_unc}
\sigma_{elastic, i}^{\prime 2} = \sigma_{elastic, i}^2 + \bigg(f_{elastic} \frac{d \sigma}{d \Omega}_{optical, i} \bigg)^2,
\end{equation}
where the experimentally measured uncertainties, $\sigma_{elastic, i}$, at angle $i$ have been added in quadrature with an additional uncertainty coming from the predicted optical model cross section. This prescription is precisely the same procedure that is used for the additional transfer cross section uncertainty from Eq.~\ref{eq:unc}. However, the elastic data are the only meaningful constraints for the optical model parameters. With only $10$ data points, a weakly informative prior on $f_{elastic}$ would remove the predictive power of these data. Thus, an informative prior must be chosen. For this work, I chose the form:
\begin{equation}
\label{eq:elastic_f}
f_{elastic} \sim \textnormal{HalfNorm}(0.10^2).
\end{equation}
This quantifies the expectation that the data will have residuals of about $10 \%$ with respect to the theoretical prediction. The above prior was found to provide the best compromise between relying on the experimental uncertainties alone, which led to unphysical optical model parameters, and permitting solutions above $f_{elastic} = 50 \%$, where the data become non-predictive.
Once the above parameter was included, the data could be reliably fit. However, it became clear that the discrete ambiguity was a serious issue in this analysis. Recall that, in Section \ref{sec:bay_dwba}, biasing the entrance channel potential priors towards their expected physical values was sufficient to remove any other mode from the posterior. For the present data, the potential priors did little to alleviate this issue. The nested sampling algorithm in \texttt{dynesty} was used to explore both of these modes, so that methods to meaningfully distinguish them could be investigated. Nested sampling, as opposed to MCMC, can explore multi-modal distributions with ease \cite{speagle2019dynesty}, but, as previously mentioned, it is not suited to precise posterior estimation. A run was carried out with $1000$ live points, and it required over $5 \times 10^6$ likelihood calls. The pair correlation plot of these samples is shown in Fig.~\ref{fig:corner_discrete}, where the impact of the discrete ambiguity can be seen.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{Chapter-6/figs/corner_nested.pdf}
\caption{Pair correlation plot of the posterior samples for the nested sampling run. The discrete ambiguity is prominent in the $^{3}$He $+ ^{23}$Na data.}
\label{fig:corner_discrete}
\end{figure}
Two different options were explored to differentiate the modes. The first was a simple selection of the modes based on the continuous ambiguity, $Vr_0^n = c$. Fig.~\ref{fig:corner_discrete} shows that the correlation between $V$ and $r_0$ can cleanly resolve the two modes, while the other correlations have significant overlap between them. In this approach, the constant, $c$, is calculated for each mode, with the exponent, $n=1.14$, taken from Ref.~\cite{vernotte_optical}. It was found that the correlation in the samples was well described by this relation as shown in Fig.~\ref{fig:vr_constant_samples}. The second method that was investigated utilized the volume integral from Eq.~\ref{eq:j_int}. Ref.~\cite{varner} gives an analytical form of this integral:
\begin{equation}
\label{eq:j_analytical}
J_R = \frac{1}{A_P A_T} \frac{4 \pi}{3} R_0^3 V \bigg[ 1 + \bigg( \frac{\pi a_0}{R_0}\bigg)^2 \bigg],
\end{equation}
where $R_0 = r_0 A_T^{1/3}$. Calculating $J_R$ for the samples in each mode resulted in two clearly resolved peaks, as shown in Fig.~\ref{fig:j_r_hist}.
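Evaluating Eq.~\ref{eq:j_analytical} over the posterior samples takes one vectorized function; $A_P = 3$ for the $^3$He projectile.
\begin{verbatim}
import numpy as np

def j_r(V, r0, a0, A_T=23, A_P=3):
    """Volume integral per nucleon pair of the real Woods-Saxon well,
    Eq. (j_analytical). Accepts scalars or posterior-sample arrays."""
    R0 = r0 * A_T**(1.0/3.0)
    return (4.0*np.pi / (3.0*A_P*A_T) * R0**3 * V
            * (1.0 + (np.pi*a0/R0)**2))
\end{verbatim}
A histogram of \texttt{j\_r(V, r0, a0)} over the nested sampling samples reproduces the two separated peaks of Fig.~\ref{fig:j_r_hist}.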
\begin{figure}
\centering
\includegraphics[width=.8\textwidth]{Chapter-6/figs/Discrete_Amg_samples_vr.pdf}
\caption{The discrete ambiguity as seen in the $V$ versus $r_0$ correlations between the samples of the nested sampling calculation. The colored lines show the description of the correlation based on the analytic form $Vr_0^n = c$. The value of $c$ provides a way to distinguish these modes. }
\label{fig:vr_constant_samples}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.8\textwidth]{Chapter-6/figs/j_r_hist.pdf}
\caption{Values from the volume integral of the real potential as calculated using Eq.~\ref{eq:j_analytical} and the samples from the nested sampling calculation. The discrete ambiguity causes two well separated peaks to appear.}
\label{fig:j_r_hist}
\end{figure}
It was decided to use the first method and exclude the other modes via a uniform distribution based on the relationship between $V$ and $r_0$. This approach has the advantages of being relatively simple and involving only two variables. The method based on $J_R$ has the advantage that the global values predict the location of the peak well, but its dependence on $a_0$ makes its possible effect on the posterior less clear. Integrating the $Vr_0^n$ relation into the Bayesian method requires that a probability distribution be specified. A uniform distribution that covers $\pm 30 \%$ around the $c$ value of the physical mode was chosen. I have intentionally avoided the word prior because this condition clearly does not represent a belief about the parameter $c$ before inference. Rather, it is a \textit{constraint} enforced on the posterior to limit the inference to the physical mode \cite{Wu_2019}. It should be emphasized that the posterior distributions of all the parameters will be conditioned on $c$, i.e., $P(\theta|D, c)$. The constraint is written:
\begin{equation}
\label{eq:c_constraint}
c \sim \textnormal{Uniform}(c_0(1-0.30), c_0(1+0.30)),
\end{equation}
where $c_0$ is a value roughly centered on the physical (lower) mode; in this case $c_0 = 132.9$. As long as the distribution in Eq.~\ref{eq:c_constraint} covers all of the physical mode and excludes the unphysical ones, the value of $c_0$ and the width of the distribution should be understood to be arbitrary.
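In a sampler, the constraint is simply an extra term added to the log-posterior, as in the sketch below.
\begin{verbatim}
import numpy as np

N_EXP = 1.14     # exponent of the continuous ambiguity
C0 = 132.9       # constant of the physical mode

def log_constraint(V, r0):
    """Eq. (c_constraint): zero inside +-30% of c0, -inf outside, so
    that only samples in the physical mode survive."""
    c = V * r0**N_EXP
    return 0.0 if C0*0.70 <= c <= C0*1.30 else -np.inf
\end{verbatim}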
\subsection{Transfer Considerations}
A majority of the states of astrophysical interest lie above the proton threshold and are therefore unbound. Calculation of the overlap functions, as discussed in Section \ref{sec:overlaps}, is done by using a single particle potential with its Woods-Saxon depth adjusted to reproduce the binding energy. For unbound states, an analogous procedure would be to adjust the well depth to produce a resonance centered around $E_r$. FRESCO does not currently support a search routine that varies $V$ to create a resonance condition, meaning that $V$ would have to be varied by hand until a phase shift of $\pi/2$ is observed. Such a calculation is obviously time-consuming. An alternative is the weak binding approximation. This approach assumes that the wave function of resonance scattering resembles the wave function of a loosely bound particle, typically with a binding energy on the order of $E_{bind} = 1$ keV. Studies have shown that this approximation performs well for states within $\approx 500$ keV of the particle threshold, reproducing the unbound calculations to within $1 \%$ \cite{Kankainen_2016, KAHL_2019}. There are indications that the validity of this approximation depends on the $\ell$ value. The reasoning is that states with higher $\ell$ values more closely resemble bound states, due to the influence of the centrifugal barrier, and are therefore better described by the approximation \cite{Poxon_Pearson_2020}. For this work, DWBA calculations for states above the proton threshold were carried out with the weak binding approximation.
Further complications arise from the non-zero ground-state spin of $^{23}$Na ($J^{\pi} = 3/2^+$). In this case, angular distributions can be characterized by a mixture of $\ell$ transfers. Although in principle every allowed $\ell$ transfer can contribute, it is practically difficult to unambiguously determine all but the lowest two $\ell$ contributions because of the rapidly decreasing cross section with increasing $\ell$ \cite{hodgson1971}.
Ignoring the light particle spectroscopic factor, the relationship between the experimentally measured differential cross section and the DWBA prediction becomes:
\begin{equation}
\label{eq:mixed_l}
\frac{d \sigma}{d \Omega}_{exp} = C^2S \bigg[ \alpha \frac{d \sigma}{d \Omega}_{\textnormal{DWBA}, \ell_1} + (1-\alpha) \frac{d \sigma}{d \Omega}_{\textnormal{DWBA}, \ell_2} \bigg],
\end{equation}
where $\alpha$ is defined such that $C^2S_{\ell_1} = C^2S \alpha$
and $C^2S_{\ell_2} = C^2S (1 - \alpha)$. Note that the values for $\ell$ must still obey parity conservation, meaning the most probable combinations for $(^3$He$, d)$ are $\ell = 0 + 2$ and $\ell = 1 + 3$. Incorporating multiple $\ell$ transfers into the Bayesian framework requires assigning a prior to $\alpha$. The above definitions make it clear that $\alpha = [0, 1]$; therefore, an obvious choice is:
\begin{equation}
\label{eq:alpha_prior}
\alpha \sim \textnormal{Uniform}(0, 1).
\end{equation}
\subsection{Bayesian Model for $^{23}$Na$(^{3}\textnormal{He}, d)^{24}$Mg}
\label{sec:spec_factors}
Before explicitly defining the Bayesian model for the DWBA analysis, the points made above should be reiterated for clarity.
\begin{enumerate}
\item The measured elastic scattering uncertainties have been added in quadrature with an inferred theoretical uncertainty.
\item The $^3$He optical model used has a severe discrete ambiguity. A constraint based on the continuous ambiguity has been added to the model to select the physical mode.
\item Due to the non-zero ground-state spin of $^{23}$Na, the transfer cross section can have contributions from multiple $\ell$ values.
\item Cross sections decrease rapidly with increasing $\ell$, which means the relative contributions can only be reliably determined for two distinct $\ell$ values.
\item The relative contributions are weighted according to a parameter $\alpha$ that is uniformly distributed from $0$ to $1$.
\end{enumerate}
Folding these additional parameters and considerations into the Bayesian model of Section \ref{sec:bay_dwba} gives:
\begin{align}
\label{eq:dwba_model_na}
& \textnormal{Parameters:} \nonumber \\
& n = 1.14 \nonumber \\
& c_0 = 132.9 \nonumber \\
& \textnormal{Priors:} \nonumber \\
& \boldsymbol{\mathcal{U}}_{\textnormal{Entrance}} \sim \mathcal{N}(\mu_{\textnormal{central}, k}, \{0.20 \, \mu_{\textnormal{central}, k}\}^2) \nonumber \\
& \boldsymbol{\mathcal{U}}_{\textnormal{Exit}} \sim \mathcal{N}(\mu_{\textnormal{global}, k}, \{0.10 \, \mu_{\textnormal{global}, k}\}^2) \nonumber \\
& f \sim \textnormal{HalfNorm}(1) \nonumber \\
& f_{elastic} \sim \textnormal{HalfNorm}(0.10^2) \nonumber \\
& \delta D_0^2 \sim \mathcal{N}(1.0, 0.15^2) \nonumber \\
& C^2S \sim \textnormal{HalfNorm}(1.0^2) \nonumber \\
& g \sim \textnormal{Uniform}(-10, 10) \nonumber \\
& \textnormal{Functions:} \\
& \eta = 10^{g} \nonumber \\
& c = \mathcal{U}_{\textnormal{Entrance}, (k=0)} \big(\mathcal{U}_{\textnormal{Entrance}, (k=1)}\big)^n \nonumber \\
& \frac{d \sigma}{d \Omega}^{\prime}_{\textnormal{Optical}, j} = \eta \times \frac{d \sigma}{d \Omega}_{\textnormal{Optical}, j} \nonumber \\
& \frac{d \sigma}{d \Omega}^{\prime}_{\textnormal{DWBA}, i} = \eta \times \delta D_0^2 \times C^2S \times \frac{d \sigma}{d \Omega}_{\textnormal{DWBA}, i} \nonumber \\
& \sigma_i^{\prime 2} = \sigma_{\textnormal{Transfer}, i}^2 + \bigg(f\frac{d \sigma}{d \Omega}^{\prime}_{\textnormal{DWBA}, i}\bigg)^2 \nonumber \\
& \sigma_{elastic, j}^{\prime 2} = \sigma_{elastic, j}^2 + \bigg(f_{elastic} \frac{d \sigma}{d \Omega}_{optical, j} \bigg)^2 \nonumber \\
& \textnormal{Likelihoods:} \nonumber \\
& \frac{d \sigma}{d \Omega}_{\textnormal{Transfer}, i} \sim \mathcal{N}\bigg(\frac{d \sigma}{d \Omega}^{\prime}_{\textnormal{DWBA}, i}, \sigma_i^{\prime \, 2} \bigg) , \nonumber \\
& \frac{d \sigma}{d \Omega}_{\textnormal{Elastic}, j} \sim \mathcal{N}\bigg(\frac{d \sigma}{d \Omega}^{\prime}_{\textnormal{Optical}, j}, \sigma_{elastic, j}^{\prime \, 2} \bigg) , \nonumber \\
& \textnormal{Constraint:} \nonumber \\
& c \sim \textnormal{Uniform}(c_0(1-0.30), c_0(1+0.30)), \nonumber
\end{align}
where the index $k$ runs over the optical model potential parameters, $i$ and $j$ denote the transfer and elastic scattering angles, respectively, and $\mathcal{U}_{\textnormal{Entrance}, (k=0,1)}$ are the real potential depth and radius for the entrance channel. Note that the prior for $C^2S$ has also changed from Eq.~\ref{eq:model}. Since the astrophysical states of interest are highly excited states in $^{24}$Mg, the majority of the strength for the shell they occupy has been exhausted. The expectation is that $C^2S \ll 1$, and the width of the prior can safely be reduced from $n_{nucleons}$ to $1.0$. In the case of a mixed $\ell$ transfer, the model has the additional terms:
\begin{align}
\label{eq:model_mixed_l}
& \textnormal{Prior:} \nonumber \\
& \alpha \sim \textnormal{Uniform}(0, 1) \nonumber \\
& \textnormal{Function:} \\
& \frac{d \sigma}{d \Omega}^{\prime}_{\textnormal{DWBA}, i} = \eta \times \delta D_0^2 \times C^2S \times \bigg[ \alpha \frac{d \sigma}{d \Omega}_{\textnormal{DWBA}, \ell_1} + (1-\alpha) \frac{d \sigma}{d \Omega}_{\textnormal{DWBA}, \ell_2} \bigg] \nonumber,
\end{align}
where the definition for $\frac{d \sigma}{d \Omega}^{\prime}_{\textnormal{DWBA}, i}$ is understood to replace all other occurrences of that variable in Eq.~\ref{eq:dwba_model_na}. Note that the individual cross sections, $\frac{d \sigma}{d \Omega}_{\textnormal{DWBA}, \ell_1}$ and $\frac{d \sigma}{d \Omega}_{\textnormal{DWBA}, \ell_2}$, are calculated using the same optical potential.
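The mixed-$\ell$ replacement of Eq.~\ref{eq:model_mixed_l} is a one-line combination of the two fixed DWBA angular distributions:
\begin{verbatim}
import numpy as np

def mixed_l_xs(eta, dD0sq, c2s, alpha, xs_l1, xs_l2):
    """Normalized theoretical transfer cross section for a mixed-l
    transition; xs_l1 and xs_l2 are the DWBA angular distributions
    (arrays over the measured angles) computed with the same optical
    potential."""
    return eta * dD0sq * c2s * (alpha*np.asarray(xs_l1)
                                + (1.0 - alpha)*np.asarray(xs_l2))
\end{verbatim}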
\subsection{Results}
The above Bayesian model was applied to the eleven states observed in the astrophysical region of interest. For each state, \texttt{emcee} was run with $400$ walkers taking $8000$ steps, giving a total of $3.2 \times 10^6$ samples. The first $6000$ steps were discarded as burn-in, and the last $2000$ steps were thinned by $50$, leaving $16000$ final samples. These $16000$ samples were used to estimate the posterior distributions for $C^2S$ and to construct the differential cross sections shown in Fig.~\ref{fig:mcmc_cs_na}. An example of the simultaneous fit obtained for the elastic scattering data is shown in Fig.~\ref{fig:elastic_fit_na}. All of the data are plotted in terms of their relative values (Section \ref{sec:scale_of_na_data}). The normalization was found to be $\eta = 0.075^{+0.007}_{-0.006}$, which shows that the absolute scale of the data, despite the influence of the optical model parameters, can be established with a $9 \%$ uncertainty.
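The sampling and post-processing steps just described look like the following in \texttt{emcee}; the log-probability is a placeholder standing in for Eq.~\ref{eq:dwba_model_na}.
\begin{verbatim}
import emcee
import numpy as np

def log_prob(theta):
    # Placeholder posterior; the real one implements Eq. (dwba_model_na).
    return -0.5 * np.sum(theta**2)

ndim, nwalkers, nsteps = 5, 400, 8000
p0 = np.random.default_rng(0).normal(size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, nsteps)

# Discard 6000 steps of burn-in and thin the final 2000 by 50:
# 400 walkers * 40 retained steps = 16000 posterior samples.
flat_samples = sampler.get_chain(discard=6000, thin=50, flat=True)
\end{verbatim}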
The values obtained for $(2J_f+1)C^2S$ in this work are listed in Table \ref{tab: na_c2s_table}. There is general agreement between our values and those of Ref.~\cite{hale_2004}, which provides further evidence that the absolute scale of the data is well established. However, for the three $2^+$ states that show a mixture of $\ell = 0 + 2$, the current values are consistently lower. In these cases, the Bayesian method demonstrates that considerable uncertainty is introduced when a mixed $\ell$ transfer is present. The origin of this effect merits a deeper discussion, which I will now present.
The posterior distributions for $(2J_f+1)C^2S$ from states with unique $\ell$ transfers were found to be well described by log-normal distributions. Estimates of these distributions can be made by deriving the log-normal parameters $\mu$ and $\sigma$ from the samples. An example of the agreement of the posterior samples with a log-normal distribution is shown in Fig.~\ref{fig:11390_log_normal}. The $med.$ and $f.u.$ quantities calculated from these parameters are listed in Table \ref{tab: na_c2s_table}. It can be seen that states with a unique $\ell$ transfer show factor uncertainties of $f.u. \approx 1.30$, i.e., a $30 \%$ uncertainty. On the other hand, states that show a mixed $\ell$ transition vary from $f.u. = 1.40 \text{-} 2.00$. However, it can be seen in Fig.~\ref{fig:11453_log_normal} that while the individual $\ell$ components have a high factor uncertainty or deviate strongly from a log-normal distribution, their sum shares the same properties as the states with a single $\ell$ transfer. In other words, the overall spectroscopic factor still has a $30 \%$ uncertainty. This situation is analogous to the one encountered in Section \ref{sec:mc_rates}: the mean grows linearly with each component of the sum, while the uncertainty grows roughly as the square root. This fact requires, without appealing to the current Bayesian methods, that the individual $\ell$ components have a greater percentage uncertainty than their sum. Since previous studies, like those of Ref.~\cite{hale_2004}, assume a constant uncertainty in the extraction of spectroscopic factors, each $\ell$ component is assigned the same percentage uncertainty. The above discussion highlights that this assumption cannot be true, regardless of the statistical method. The influence of optical model parameters limits the precision of the total normalization of the cross section, thereby giving an upper limit on the precision that can be expected from the components. These results indicate that applying a standard $\chi^2$ fit to a mixed $\ell$ transfer might not accurately extract the individual spectroscopic factors if optical model uncertainties are ignored.
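The $med.$ and $f.u.$ columns of Table \ref{tab: na_c2s_table} follow from a moment fit in log space, as in this sketch:
\begin{verbatim}
import numpy as np

def lognormal_summary(samples):
    """Log-normal parameters by moments in log space:
    median = exp(mu), factor uncertainty f.u. = exp(sigma)."""
    logs = np.log(samples)
    mu, sigma = logs.mean(), logs.std(ddof=1)
    return np.exp(mu), np.exp(sigma)
\end{verbatim}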
I will now discuss the results and the relevant previously reported information for each of these states.
\begin{figure}
\centering
\includegraphics[width=.8\textwidth]{Chapter-6/figs/elastic_ci_plot.png}
\caption{The credibility intervals obtained for the elastic scattering fit. The dark and light purple bands show the $68 \%$ and $95 \%$ credibility intervals, respectively. The measured error bars are smaller than the points, while the adjusted uncertainty of Eq.~\ref{eq:elastic_unc} that is inferred from the data is not shown.}
\label{fig:elastic_fit_na}
\end{figure}
\afterpage{
\clearpage
\null
\hspace{0pt}
\vfill
\captionof{figure}{The DWBA calculations for the states of $^{24}$Mg. The $68 \%$ and $95 \%$ credibility intervals are shown in purple and light purple, respectively. Only data points up to the first minimum were considered, and they are shown in dark brown. For the $11825$-keV state, the $68 \%$ bands are shown for the $\ell=0\text{-}3$
transfers.}
\label{fig:mcmc_cs_na}
\vfill
\newpage
\clearpage
\begin{figure}
\ContinuedFloat\centering
\captionsetup[subfigure]{labelformat=empty}
\vspace{-1\baselineskip}
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=\textwidth]{Chapter-6/figs/11390_fit.png}
\caption{\label{fig:11390_fit}}
\end{subfigure}
\vspace{-1\baselineskip}
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=\textwidth]{Chapter-6/figs/11453_fit.png}
\vspace{-1\baselineskip}
\caption{\label{fig:11453_fit}}
\end{subfigure}
\vspace{-1\baselineskip}
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=\textwidth]{Chapter-6/figs/11521_fit.png}
\vspace{-1\baselineskip}
\caption{\label{fig:11521_fit}}
\end{subfigure}
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=\textwidth]{Chapter-6/figs/11695_fit.png}
\vspace{-1\baselineskip}
\caption{\label{fig:11695_fit}}
\end{subfigure}
\vspace{-1\baselineskip}
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=\textwidth]{Chapter-6/figs/11825_fit.png}
\vspace{-1\baselineskip}
\caption{\label{fig:11824_fit}}
\end{subfigure}
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=\textwidth]{Chapter-6/figs/11860_fit.png}
\vspace{-1\baselineskip}
\caption{\label{fig:11860_fit}}
\end{subfigure}
\end{figure}
\begin{figure}\ContinuedFloat
\centering
\captionsetup[subfigure]{labelformat=empty}
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=\textwidth]{Chapter-6/figs/11933_fit.png}
\vspace{-1\baselineskip}
\caption{\label{fig:11933_fit}}
\end{subfigure}
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=\textwidth]{Chapter-6/figs/11988_fit.png}
\vspace{-1\baselineskip}
\caption{\label{fig:11988_fit}}
\end{subfigure}
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=\textwidth]{Chapter-6/figs/12017_fit.png}
\vspace{-1\baselineskip}
\caption{\label{fig:12017_fit}}
\end{subfigure}
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=\textwidth]{Chapter-6/figs/12051_fit.png}
\vspace{-1\baselineskip}
\caption{\label{fig:12051_fit}}
\end{subfigure}
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=\textwidth]{Chapter-6/figs/12183_fit.png}
\vspace{-1\baselineskip}
\caption{\label{fig:12183_fit}}
\end{subfigure}
\vspace{-1.8\baselineskip}
\end{figure}
\clearpage
}
\newpage
\begin{figure}
\centering
\includegraphics[width=.8\textwidth]{Chapter-6/figs/11390_log_norm.pdf}
\caption{The posterior samples of $(2J_f+1)C^2S$ for the $11390$-keV state. The dark blue dashed line shows the corresponding log-normal distribution.}
\label{fig:11390_log_normal}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{Chapter-6/figs/11453_sf_log_norm_big_fig.pdf}
\caption{Plots of the three posterior distributions associated with the $11453$-keV spectroscopic factor. The top two plots show the individual $\ell$ $(2J_f+1)C^2S$ samples and their corresponding log-normal distributions. The bottom shows the sum of the two components and its log-normal distribution. It can be seen that the sum has the same factor uncertainty as transfers described by a single $\ell$ value.}
\label{fig:11453_log_normal}
\end{figure}
\begin{table}
\centering
\setlength{\tabcolsep}{5pt}
\caption{ \label{tab: na_c2s_table} The values of $(2J_f+1)C^2S$ derived in this work compared to those of Ref.~\cite{hale_2004}. All values from this work give the $68 \%$ credibility interval from the posterior estimation. Additionally, the parameters of the corresponding log-normal distribution are listed. All spin parity information, except that of the $11825$-keV state, is taken from Ref.~\cite{firestone_2007} and updated based on the current observations.}
\begin{threeparttable}
\begin{tabular}{lllllll}
\toprule
\toprule
$E_x$ (keV) & $J^{\pi}$ & $\ell$ & $(2J_f+1)C^2S$ & $med.$ & $f.u.$& Ref.~\cite{hale_2004} \\ \hline \vspace{-2mm}
\\\vspace{2mm}
$11390$ & $1^-$ & $1$ &$0.066^{+0.021}_{-0.015}$ & $0.067$ & $1.30$ & $0.06$ \\ \vspace{2mm}
$11453$ & $2^+$ & $0+2$ & $0.14^{+0.05}_{-0.04}$ + $0.05^{+0.03}_{-0.02}$ & $0.14$ + $0.048$ & $1.39$ + $2.00$ & $0.24$ + $0.16^{\dagger}$ \\ \vspace{2mm}
$11521$ & $2^+$ & $0+2$ & $0.05^{+0.03}_{-0.02}$ + $0.057^{+0.024}_{-0.018}$ & $0.055$ + $0.056$ & $1.61$ + $1.51$ & $0.10^{\ddagger}$ \\ \vspace{2mm}
$11695$ & $4^+$ & $2$ & $0.085^{+0.025}_{-0.018}$ & $0.086$ & $1.29$ & $0.11$ \\ \vspace{2mm}
{$11825$} & & $0$ & $0.023^{+0.012}_{-0.007}$ & $0.024$ & $1.52$ & $0.039$ \\ \vspace{2mm}
& & $1$ & $0.010^{+0.004}_{-0.003}$ & $0.010$ & $1.40$ & $0.009$ \\ \vspace{2mm}
& & $2$ & $0.014^{+0.005}_{-0.003}$ & $0.014$ & $1.36$ & $0.015$ \\ \vspace{2mm}
& & $3$ & $0.025^{+0.009}_{-0.006}$ & $0.025$ & $1.36$ & $0.024$ \\ \vspace{2mm}
$11860$ & $1^-$ & $1$ & $0.022^{+0.007}_{-0.005}$ & $0.022$ & $1.32$ & $0.026$ \\ \vspace{2mm}
$11933$ & $(2 \text{-} 4)^+$ & $2$ & $0.23^{+0.07}_{-0.05}$ & $0.24$ & $1.30$ & $0.25$ \\ \vspace{2mm}
$11988$ & $2^+$ & $0+2$ & $0.26^{+0.10}_{-0.07}$ + $0.24^{+0.10}_{-0.07}$ & $0.26$ + $0.24$ & $1.40$ + $1.45$ & $0.42$ + $0.33$ \\ \vspace{2mm}
$12017$ & $3^-$ & $1$ & $0.20^{+0.06}_{-0.04}$ & $0.20$ & $1.30$ & $0.13$ \\ \vspace{2mm}
$12051$ & $4^+$ & $2$ & $0.13^{+0.04}_{-0.03}$ & $0.14$ & $1.30$ & $0.13$ \\ \vspace{2mm}
$12183$ & $(1,2^+)$ & $2$ & $0.12^{+0.04}_{-0.03}$ & $0.12$ & $1.34$ & $0.13$ \\
\bottomrule
\bottomrule
\end{tabular}
\begin{tablenotes}
\item[$\dagger$] Ref.~\cite{hale_2004} assumed a doublet. The $(2J_f+1)C^2S$ values were taken from these two states.
\item[$\ddagger$] Ref.~\cite{hale_2004} assumed a doublet, with a portion of the strength assigned to a negative parity state.
\end{tablenotes}
\end{threeparttable}
\end{table}
\subsubsection{The $11391$-keV State; $-303$-keV Resonance}
This state has been reported in several studies and is known to have a spin parity of $J^{\pi}=1^-$. Our measurements confirm the $\ell=1$ nature of the angular distribution, making it a candidate for a subthreshold $p$-wave resonance. A higher lying state with unknown spin-parity has been reported in Ref.~\cite{vermeer_1988} at $E_x = 11394(4)$ keV. The current evaluation states that the
$^{25}\textnormal{Mg}(^{3}\textnormal{He}, ^{4}\!\textnormal{He})^{24}\textnormal{Mg}$ measurement of Ref.~\cite{El_Bedewi_1975} also observes this
higher state at $11397(10)$ keV, but their angular distribution gives an $\ell=1$ character, indicating it would be compatible with the lower $J^{\pi}=1^-$ state. Ref.~\cite{hale_2004} finds a similar peak in their spectrum, but considered it a doublet because of the ambiguous shape of the angular distribution, which was caused primarily by the behaviour of the data above $20^{\circ}$. Considering the excellent agreement between our data and an $\ell=1$ transfer, only the state at $11389.6(12)$ keV with $J^{\pi}=1^-$ was considered to be populated. The present calculation assumes a $2p_{3/2}$ transfer and is shown in Fig.~\ref{fig:11390_fit}.
\subsubsection{The $11453$-keV State; $-240$-keV Resonance}
Two states lie in the region around $11.45$ MeV, with the lower assigned $J^{\pi}=2^+$ and the upper $J^{\pi}=0^+$. The only study that reports a definitive observation of the $0^+$, $11460(5)$ keV state is the $(\alpha,\alpha_0)$ measurement of Ref.~\cite{goldberg_54}. The current study and that of Ref.~\cite{hale_2004} indicate that there is a state around $E_x = 11452$ keV that shows a mixed $\ell = 0 + 2$ angular distribution. Since the ground-state spin of $^{23}$Na is non-zero, this angular distribution can be produced by a single $2^+$ state, and the $\ell=2$ component cannot be unambiguously identified with the higher lying $0^+$ state. The $(p,p^{\prime})$ measurement of Ref.~\cite{zwieglinski_1978} notes a state at $11452(7)$ keV with $\ell=2$. The excellent agreement between our excitation energy and the gamma ray measurement of Ref.~\cite{endt_1990} leads us to assume the full strength of the observed peak comes from the $2^+$ state. The calculation shown in Fig.~\ref{fig:11453_fit} assumes transfers with quantum numbers $2s_{1/2}$ and $1d_{5/2}$.
\subsubsection{The $11521$-keV State; $-172$-keV Resonance}
Another subthreshold $2^+$ state lies at $11521.1(14)$ keV. A state with unknown spin-parity was observed at $11528(4)$ keV in Ref.~\cite{vermeer_1988}, but it has not been seen in other studies. Based on the measured $\Gamma_{\gamma}/\Gamma \approx 1$, and assuming the efficacy of the methods in Ref.~\cite{vermeer_1988}, there is a high likelihood that this unknown state has an unnatural parity. The present angular distribution, Fig.~\ref{fig:11521_fit}, is indicative of a mixed $\ell = 0 + 2$ assignment. Thus, the observation is associated with the $2^+$ state at $11521.1(14)$ keV, and transfers were calculated using $2s_{1/2}$ and $1d_{5/2}$.
\subsubsection{The $11695$-keV State; $2$-keV Resonance}
For our measurement this state was partially obscured by a contaminant peak from the ground state of $^{17}$F coming from $^{16}$O$(^{3} \textnormal{He}, d)^{17}$F for $\theta_{Lab} < 9^{\circ}$. Previous measurements have established a firm $4^+$ assignment, and our angular distribution is consistent with an $\ell =2$ transfer. The fit for a $1d_{5/2}$ transfer is shown in Fig.~\ref{fig:11695_fit}.
\subsubsection{The $11825$-keV State; $132$-keV Resonance}
As discussed in Section \ref{sec:background_subtraction}, this state is obscured at several angles by the fifth excited state of $^{15}$O. The previous constraints on its spin parity come from the comparison of the extracted spectroscopic factors for each $\ell$ value in Ref.~\cite{hale_2004} and the upper limits established in Ref.~\cite{Rowland_2004} and subsequently Ref.~\cite{Cesaratto_2013}.
This DWBA analysis finds an angular distribution consistent with Ref.~\cite{hale_2004}, which, it should be noted, experienced similar problems with the nitrogen contamination; however, with the Bayesian model comparison methods presented in Section \ref{sec:bay_dwba}, constraints can be set based purely on the angular distribution. All of the considered $\ell$ transfers are shown in Fig.~\ref{fig:11824_fit}, and were calculated assuming $2s_{1/2}$, $2p_{3/2}$, $1d_{5/2}$, and $1f_{7/2}$ transfers, respectively. The results of the nested sampling calculations, which give the relative probabilities of each transfer, are presented in Table~\ref{tab:probs} and shown in Fig.~\ref{fig:l_comp_probs_na}. One key difference between this calculation and that of the $3.70$-MeV state in Section \ref{sec:bay_dwba} is that the adopted values were taken to be the mean instead of the median. Since the statistical errors of the nested sampling are normally distributed in $\ln Z$, the resulting probabilities are distributed log-normally. The choice of the mean instead of the median then amounts to selecting the arithmetic mean instead of the geometric mean, which ensures $\sum_{\ell} P(\ell) = 1$.
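The mean-versus-median distinction can be checked directly by propagating the normally distributed $\ln Z$ errors; the values below are those of Table~\ref{tab:probs}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

lnZ = np.array([44.226, 45.990, 47.762, 48.093])   # l = 0, 1, 2, 3
dlnZ = np.array([0.294, 0.289, 0.323, 0.293])

# Each draw of the ln(Z) values yields one set of normalized
# probabilities; the probabilities are therefore log-normal.
draws = rng.normal(lnZ, dlnZ, size=(100_000, 4))
probs = np.exp(draws - draws.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)

# Arithmetic means sum to one by construction; the medians do not.
print(probs.mean(axis=0))   # approx. [0.01, 0.07, 0.39, 0.53]
\end{verbatim}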
\begin{table*}[]
\centering
\setlength{\tabcolsep}{12pt}
\caption{\label{tab:probs} Results of the model comparison calculations for the $11825$ keV state. For each $\ell$ value, I list the $\log{Z}$ value calculated with nested sampling, the median Bayes factor when compared to the most likely transfer $\ell=3$, and the mean probability of each transfer.}
\begin{tabular}{llll}
\toprule
\toprule
$\ell$ & $\log{Z}_{\ell}$ & $B_{3 \ell}$ & $P(\ell)$ \\ \hline \vspace{-2mm}
\\
$0$ & $44.226(294)$ & $47.79$ & $1 \%$ \\
$1$ & $45.990(289)$ & $8.20$ & $7 \%$ \\
$2$ & $47.762(323)$ & $1.39$ & $39 \%$ \\
$3$ & $48.093(293)$ & $1.00$ & $53 \%$ \\
\bottomrule
\bottomrule
\end{tabular}
\end{table*}
\begin{figure}
\centering
\includegraphics[width=.6\textwidth]{Chapter-6/figs/l_comp.png}
\caption{The distributions from the nested sampling algorithm for the most likely $\ell$ values for the $11825$-keV state.}
\label{fig:l_comp_probs_na}
\end{figure}
\subsubsection{The $11860$-keV State; $168$-keV Resonance}
There are two states within a few keV of one another reported in this region. One is known to have $J^{\pi}=1^-$ and has been populated in nearly all of the experiments listed in Table~\ref{tab:energy_comp}. The other state is reported to decay to the $6^+$, $8114$-keV state, with a $\gamma$-ray angular distribution that favors an assignment of $8^+$ \cite{branford_1972}. The later polarization measurements of Ref.~\cite{wender_1978} support the $8^+$ assignment. For our experiment, the tentative $8^+$ state is likely to have a negligible contribution to the observed peak, and the angular distribution in Fig.~\ref{fig:11860_fit} is consistent with a pure $\ell=1$ transfer. The calculation assumed a $2p_{3/2}$ transfer.
\subsubsection{The $11933$-keV State; $240$-keV Resonance}
The $11933$-keV state does not have a suggested spin assignment in the current ENSDF evaluation \cite{firestone_2007}. The compilation of Ref.~\cite{endt_eval_1990} lists a tentative $(2-4)^+$. The $\ell=2$ angular distribution from the $(^4 \textnormal{He}, ^3\textnormal{He})$ measurement of Ref.~\cite{El_Bedewi_1975} suggests $(0\text{-}4)^+$. The $0^+$ and $1^+$ assignments are ruled out by the $\gamma$-decay of this state to the $J^{\pi} = 2^+$, $1368$-keV and $J^{\pi} = 4^+$, $4122$-keV states observed in Ref.~\cite{Berkes_1964}. The results of this work indicate an $\ell=2$ transfer. Schmalbrock \textit{et al.} suggest that this state could be the analogue of a $T=1$ state with spin $3^+$ in $^{24}$Na \cite{schmalbrock_1983}. Based on these observations, and the satisfactory description of the angular distribution with $\ell=2$, a $1d_{5/2}$ transfer was calculated, and is shown in Fig.~\ref{fig:11933_fit}.
\subsubsection{The $11988$-keV State; $295$-keV Resonance}
As can be seen in Table \ref{tab:energy_comp}, the $11988$-keV State has been observed in multiple experiments, including the high precision $\gamma$-ray measurement of Ref.~\cite{endt_1990}. A spin parity of $2^{+}$ has been assigned based on the inelastic measurement of Ref.~\cite{zwieglinski_1978}. The current fit is shown in Fig.~\ref{fig:11988_fit} and assumes a mixed $\ell = 0+2$ transition with $2s_{1/2}$ and $1d_{5/2}$.
\subsubsection{The $12017$-keV State; $324$-keV Resonance}
The $12017$-keV state is known to have $J^{\pi}=3^-$, which was established from the angular distributions of Ref.~\cite{Kuperus_1963, Fisher_1963} and confirmed by the inelastic scattering of Ref.~\cite{zwieglinski_1978}. Our angular distribution is consistent with an $\ell=1$ transfer, which rules out the $j = 1/2$ possibility, giving a unique single particle state with $2p_{3/2}$. The fit is shown in Fig.~\ref{fig:12017_fit}.
\subsubsection{The $12051$-keV State; $359$-keV Resonance}
The angular distributions of Ref.~\cite{Fisher_1963} established $J^{\pi}=4^+$ for the $12051$-keV state, which was later confirmed by the inelastic scattering of Ref.~\cite{zwieglinski_1978}. The angular distribution of the present work is well described by a transfer of $1d_{5/2}$, which is shown in Fig.~\ref{fig:12051_fit}.
\subsubsection{The $12183$-keV State; $491$-keV Resonance}
Ref.~\cite{MEYER_1972} observed that the $12183$-keV state $\gamma$-decays to $0^+$, $2^+$, and $1^{+}$ states, which permits values of $(1,2^{+})$. The angular distribution of Ref.~\cite{hale_2004} permits either $\ell = 0$ or $\ell = 0+2$ transfers, which requires the parity of this state be positive. The current work finds an angular distribution consistent with a pure $\ell=2$ transfer. The calculation of the $1d_{5/2}$ transfer is shown in Fig.~\ref{fig:12183_fit}.
\section{Proton Partial Widths}
The spectroscopic factors extracted in Section \ref{sec:spec_factors} are only an intermediate step in the calculation of the $^{23}$Na$(p, \gamma)$ reaction rate. As discussed in Section \ref{sec:calc_partial_widths}, $C^2S$ can be thought of as a scale factor that converts single-particle quantities to physical ones. From the proton spectroscopic factors of this work, proton partial widths can be calculated using Eq.~\ref{eq:proton_partial_width}. Additionally, in the case of a mixed $\ell$ transfer, the total proton width is calculated using:
\begin{equation}
\label{eq:mixed_l_proton_width}
\Gamma_p = \sum_{\ell} \Gamma_{p, \ell}.
\end{equation}
However, in the current case the $\ell=2$ single particle widths, $\Gamma_{sp}$, are typically two orders of magnitude lower than the $\ell=0$ ones, making them negligible in the calculations presented below.
\subsection{Bound State Uncertainties}
Before presenting the results of this work, it is important to discuss the uncertainties that could impact the determination of $\Gamma_p$. One of the largest is the set of bound state parameters used to define the overlap function. Since the overlap function is extremely sensitive to the choice of Woods-Saxon radius and diffuseness parameters, the extracted spectroscopic factor can vary considerably. This dependence has been discussed extensively in the literature; for a review, see Ref.~\cite{2014_Tribble}. Section \ref{sec:spec_factor_dis_cu} confirmed this strong dependence in a Bayesian framework. If the uncertainties of $C^2S$ were independent of those of $\Gamma_{sp}$, then single-particle transfer reaction experiments that determine spectroscopic factors would be unable to determine $\Gamma_p$ with the precision needed for astrophysics applications.
Ref.~\cite{bertone} noted an important consideration for the calculation of $\Gamma_p$ from $C^2S$ and $\Gamma_{sp}$. If these quantities are calculated using the \textit{same} bound state potential parameters, the variation in $C^2S$ is anticorrelated with that of $\Gamma_{sp}$. Thus, the product of these two quantities, i.e., $\Gamma_{p}$, has a reduced dependence on the chosen bound state potentials. Using the same bound state parameters for both quantities, Refs.~\cite{hale_2001, hale_2004} found variations in $\Gamma_p$ of $\approx 5 \%$. With the Bayesian methods of this study, it is interesting to investigate whether this anticorrelation still holds in the presence of optical model uncertainties, using bound state MCMC samples that are inherently correlated with all the other model parameters.
Modifications were made to the code \texttt{BIND} so that it could be run on a set of tens of thousands of bound state samples to produce a set of $\Gamma_{sp}$ samples. Due to the numerical instability of the algorithm for low energy resonances, the potential impact of the weak binding approximation, and the difficulties posed by mixed $\ell$ transitions, the state selected for this calculation needed to have $100 \lessapprox E_r \lessapprox 500$ keV, $\ell \geq 2$, and a known spin parity. The only such state is at $E_x = 12051$ keV ($E_r = 359$ keV). A new MCMC calculation was carried out using the same model as Eq.~\ref{eq:dwba_model_na} with additional parameters for the bound state $r_0$ and $a_0$. These were given the priors:
\begin{align}
\label{eq:bound_state_priors}
& r_0 \sim \mathcal{N}(1.25, 0.125^2) \\
& a_0 \sim \mathcal{N}(0.65, 0.065^2). \nonumber
\end{align}
The sampler was again run with $400$ walkers taking $8000$ steps. The final $2000$ steps were thinned by $50$, giving $16000$ posterior samples. These samples were then fed into \texttt{BIND} to produce $16000$ samples of $\Gamma_{sp}$. Since these samples all come directly from the MCMC calculation, they naturally account for the variations in the optical model parameters as well as $C^2S$. It is first worth establishing the influence of the bound state parameters on the uncertainty of $C^2S$. The resulting distribution was again well described by a log-normal, with a factor uncertainty of $f.u.=1.50$, increased from $f.u.=1.30$ in the case of fixed bound state parameters. The pair correlation plot for $(2J_f+1)C^2S$ versus $\Gamma_{sp}$ is shown in Fig.~\ref{fig:corner_g_sp_sf}. The resulting distribution gives $(2J_f+1)\Gamma_{p} = 0.083^{+0.025}_{-0.018}$ eV, while the value calculated using fixed bound state parameters gives $(2J_f+1)\Gamma_{p} = 0.082^{+0.025}_{-0.018}$ eV. The cancellation between the variation in $\Gamma_{sp}$ and $C^2S$ is nearly exact in this case, with the resulting uncertainty being $30 \%$ in both calculations. This relation still requires further study using Bayesian methods, particularly regarding the influence of the bound state quantum numbers $n$ and $j$, which cannot be determined from the transfer data; however, for the present work the potential influence of the bound state parameters on $\Gamma_p$ is considered negligible.
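Because the anticorrelation lives in the joint samples, $\Gamma_p$ should be formed sample by sample before summarizing; schematically $\Gamma_p = C^2S \, \Gamma_{sp}$:
\begin{verbatim}
import numpy as np

def gamma_p_quantiles(c2s_samples, gamma_sp_samples):
    """Proton width from the paired MCMC samples, which preserves the
    anticorrelation of Fig. (corner_g_sp_sf). Schematically
    Gamma_p = C2S * Gamma_sp."""
    gp = np.asarray(c2s_samples) * np.asarray(gamma_sp_samples)
    lo, med, hi = np.percentile(gp, [16, 50, 84])
    return med, hi - med, med - lo
\end{verbatim}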
\begin{figure}
\centering
\includegraphics[width=.8\textwidth]{Chapter-6/figs/sf_gsp_corner.pdf}
\caption{Pair correlation plot for the MCMC posterior samples of $\Gamma_{sp}$ and $(2J_f+1)C^2S$. A strong anticorrelation exists when the same bound state parameters are used to calculate both quantities.}
\label{fig:corner_g_sp_sf}
\end{figure}
\subsection{Subthreshold Resonances}
Three of the observed states lie close enough to the proton threshold to be astrophysically relevant. As mentioned in Section \ref{sec:calc_partial_widths}, $P_{\ell}$ and therefore $\Gamma_{sp}$ cannot be calculated for subthreshold states. Instead, these resonances will be integrated by \texttt{RatesMC} using $\theta^2 = C^2S \theta_{sp}^2$. $\theta_{sp}^2$ can be calculated using the fits provided in either Ref.~\cite{ILIADIS_1997} or Ref.~\cite{BARKER_1998}; I have adopted the fit of Ref.~\cite{ILIADIS_1997}. It should be noted that this fit was derived using the bound state parameters $r_0 = 1.26$ fm and $a_0 = 0.69$ fm, which differ from those used in this work. The impact of this difference was investigated by using higher lying states for which values of $\theta_{sp}^2$ could also be calculated using \texttt{BIND}. The maximum observed deviation was $10 \%$, which is in reasonable agreement with the expected accuracy of the fit as stated in Ref.~\cite{ILIADIS_1997}. The values of $\theta^2$ for this work are shown in Table \ref{tab:subthresh_resonance_table}. Besides the $68 \%$ credibility intervals, the mean and standard deviation of the samples are also reported. These values correspond to Eq.~\ref{eq:lognormal_mean} and the square root of Eq.~\ref{eq:lognormal_variance}, respectively, and are the required inputs for \texttt{RatesMC}, which derives the log-normal parameters from the mean and standard deviation.
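For reference, the conversion from a set of approximately log-normal samples to the mean and standard deviation required by \texttt{RatesMC} can be sketched as follows, using the standard log-normal moment formulas (i.e., Eqs.~\ref{eq:lognormal_mean} and \ref{eq:lognormal_variance}); the sample array shown is a toy stand-in.
\begin{verbatim}
import numpy as np

def lognormal_mean_std(samples):
    """Summarize approximately log-normal samples for a RatesMC input."""
    mu, sigma = np.mean(np.log(samples)), np.std(np.log(samples))
    mean = np.exp(mu + 0.5 * sigma**2)
    std = np.sqrt((np.exp(sigma**2) - 1.0) * np.exp(2.0 * mu + sigma**2))
    return mean, std

# Toy samples of (2J_f+1)theta^2, for illustration only.
theta2 = np.random.default_rng(1).lognormal(np.log(0.05), 0.28, 10000)
print(lognormal_mean_std(theta2))
\end{verbatim}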
\begin{table*}
\centering
\setlength{\tabcolsep}{10pt}
\caption{\label{tab:subthresh_resonance_table} Reduced width calculations for the observed subthreshold resonances. All $\theta_{sp}^2$ values were calculated using the fit of Ref.~\cite{ILIADIS_1997} and should be considered to have a $10 \%$ systematic uncertainty. The $68 \%$ credibility intervals of the samples are presented in the fifth column, while the last column gives their mean and standard deviation, which are the inputs required by \texttt{RatesMC}.}
\begin{tabular}{llllll}
\toprule
\toprule
$E_x$(keV) & $E_r$(keV) & $J^{\pi}$ & $\theta_{sp}^2$ & $(2J_f + 1) \theta^2 $ & $E[x]$ \\ \hline
\\ [-1.5ex]
$11389.6(12) $ & $-303.1(12)$ & $1^-$ & $0.738$ & $0.049^{+0.016}_{-0.011}$ & $0.051(14)$ \\ [0.8ex]
$11452.9(4) $ & $-239.8(4)$ & $2^+$ & $0.654$ & $0.09^{+0.03}_{-0.03}$ & $0.10(3)$ \\ [0.8ex]
$11521.1(14) $ & $-171.6(14)$ & $2^+$ & $0.639$ & $0.035^{+0.018}_{-0.013}$ & $0.038(18)$ \\ [0.8ex]
\bottomrule
\bottomrule
\end{tabular}
\end{table*}
\subsection{Resonances Above Threshold}
Eight resonances were observed above the proton threshold and below $500$ keV. Except for the $E_r = 2$ keV resonance, all of the $\Gamma_{sp}$ values were calculated using \texttt{BIND}. The \texttt{BIND} calculations were carried out with the Woods-Saxon potential parameters $r_0 = 1.25$ fm, $a_0 = 0.65$ fm, $r_c = 1.25$ fm, $V_{so} = 6.24$ MeV, and a channel radius of $1.25$ fm. The low resonance energy of $E_r = 2$ keV presented numerical challenges for \texttt{BIND}, so this width was calculated using the fit of Ref.~\cite{ILIADIS_1997}. The shift in energy from $5$ keV in Ref.~\cite{hale_2004} to $2$ keV in this work is also an excellent example of the extreme energy dependence of the partial widths. If this resonance has an energy of $5$ keV, the single particle width is on the order of $\Gamma_{sp} \approx 10^{-59}$ eV, while the updated energy gives $\Gamma_{sp} \approx 10^{-97}$ eV.
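The scale of this sensitivity can be illustrated with the bare Gamow factor $\exp(-2\pi\eta)$ alone. The sketch below neglects the centrifugal barrier and the nuclear interior and is in no way a substitute for \texttt{BIND}, yet it already reproduces the $\approx 38$ orders of magnitude lost between $5$ and $2$ keV.
\begin{verbatim}
# Rough illustration of the Coulomb-barrier energy dependence for p + 23Na.
import math

ALPHA = 1.0 / 137.036            # fine-structure constant
MU_C2 = 938.272 * 23.0 / 24.0    # reduced mass of p + 23Na in MeV

def gamow_factor(E_MeV, Z1=1, Z2=11):
    v_over_c = math.sqrt(2.0 * E_MeV / MU_C2)   # non-relativistic velocity
    eta = Z1 * Z2 * ALPHA / v_over_c            # Sommerfeld parameter
    return math.exp(-2.0 * math.pi * eta)

for E in (0.005, 0.002):                        # 5 keV and 2 keV
    print(f"E_r = {1e3 * E:.0f} keV: "
          f"exp(-2 pi eta) ~ 1e{math.log10(gamow_factor(E)):.0f}")
\end{verbatim}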
\begin{table*}
\centering
\setlength{\tabcolsep}{8pt}
\caption{ \label{tab:gamma_p_table} Proton partial widths derived from this work. The values of $\Gamma_{sp}$ from \texttt{BIND} are listed for reference. $(2J_f+1)\Gamma_p$ values are given in terms of their $68 \%$ credibility intervals. Finally, the mean and standard deviation are listed for use in \texttt{RatesMC} calculations.}
\begin{threeparttable}
\begin{tabular}{llllll}
\toprule
\toprule
$E_x$(keV) & $E_r$(keV) & $J^{\pi}$ & $\Gamma_{sp}$(eV) & $(2J_f + 1) \Gamma_p $(eV) & $E[x]$ \\ \hline
\\ [-1.5ex]
$11695(5) $ & $2(5)$ & $4^+$ & $2.737 \times 10^{-97}$ $^{\dagger}$ & $2.3^{+0.7}_{-0.5} \times 10^{-98}$ & $2.4(6) \times 10^{-98}$ \\ [0.8ex]
$11825(3) $ & $132(3)$ & $\ell=0$ & $9.785 \times 10^{-04}$ & $2.3^{+1.2}_{-0.7} \times 10^{-5}$ & $2.6(12) \times 10^{-5}$ \\ [0.8ex]
& & $\ell=1$ & $2.072 \times 10^{-04}$ & $2.0^{+0.8}_{-0.6} \times 10^{-6}$ & $2.2(8) \times 10^{-6}$ \\ [0.8ex]
& & $\ell=2$ & $4.425 \times 10^{-06}$ & $6.0^{+2.1}_{-1.5} \times 10^{-8}$ & $6.3(20) \times 10^{-8}$ \\ [0.8ex]
& & $\ell=3$ & $5.492 \times 10^{-08}$ & $1.4^{+0.5}_{-0.3} \times 10^{-9}$ & $1.4(4) \times 10^{-9}$ \\ [0.8ex]
$11860.8(14) $ & $168.1(14)$ & $1^-$ & $5.894 \times 10^{-3}$ & $1.3^{+0.4}_{-0.3} \times 10^{-4}$ & $1.4(4) \times 10^{-4}$ \\ [0.8ex]
$11933.06(19) $ & $240.37(19)$ & $(2 \text{-} 4)^+$ & $1.034 \times 10^{-2}$ & $2.4^{+0.7}_{-0.5} \times 10^{-3}$ & $2.5(7) \times 10^{-3}$ \\ [0.8ex]
$11988.45(6) $ & $295.76(6)$ & $2^+$ & $15.39$ & $4.0^{+1.5}_{-1.1}$ & $4.2(15)$ \\ [0.8ex]
$12016.8(5) $ & $324.1(5)$ & $3^-$ & $8.550$ & $1.7^{+0.5}_{-0.4}$ & $1.8(5)$ \\ [0.8ex]
$12051.3(4) $ & $358.6(4)$ & $4^+$ & $6.141 \times 10^{-1}$ & $8.2^{+2.5}_{-1.8} \times 10^{-2}$ & $8.6(24) \times 10^{-2}$ \\ [0.8ex]
$12183.3(1) $ & $490.6(1)$ & $(1,2)^+$ & $9.318$ & $1.1^{+0.4}_{-0.3}$ & $1.2(4)$ \\ [0.8ex]
\bottomrule
\bottomrule
\end{tabular}
\begin{tablenotes}
\item[$\dagger$] Calculated using $\theta_{sp}^2$ from the fit of Ref.~\cite{ILIADIS_1997} to avoid the numerical instability of \texttt{BIND} at $2$ keV. An additional $10 \%$ systematic uncertainty should be considered.
\end{tablenotes}
\end{threeparttable}
\end{table*}
\subsection{Discussion}
The literature for $\omega \gamma$ and $\Gamma_p$ values is extensive. Ref.~\cite{hale_2004} compiled the measurements and corrected them for stopping powers and target stoichiometry. Using the compiled values and the recent measurement of Ref.~\cite{BOELTZIG_2019}, a comparison can be made between the current work and previous measurements. The unknown spin and parity of many of these states makes direct comparison to $\omega \gamma$ subject to large uncertainties, so as an alternative $(2J_f + 1) \Gamma_p$ has been deduced from $\omega \gamma$ where possible. This of course requires knowledge of $\Gamma_{\gamma}/\Gamma$, which is only known for a select few of the observed resonances. I will now detail the information used for these resonances.
\subsubsection{$132$-keV Resonance}
The $132$-keV resonance was measured directly for the first time at LUNA, as reported in Ref.~\cite{BOELTZIG_2019}. The value from that work is $\omega \gamma = 1.46^{+0.58}_{-0.53} \times 10^{-9}$ eV. Using $\Gamma_{\gamma}/\Gamma = 0.95(4)$ from Ref.~\cite{vermeer_1988} implies $(2J_f+1)\Gamma_p = 1.23^{+0.49}_{-0.45} \times 10^{-8}$ eV. The upper limit reported in Ref.~\cite{Cesaratto_2013} can also be used for comparison and yields $(2J_f+1)\Gamma_p \leq 4.35 \times 10^{-8}$ eV. The closest value from this work is the $\ell = 2$ transfer, which gives $(2J_f+1)\Gamma_p = 6.3^{+2.1}_{-1.5} \times 10^{-8}$ eV. The disagreement between our value and that of LUNA is stark, and a significant amount of tension exists with the upper limit of Ref.~\cite{Cesaratto_2013}.
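For transparency, the strength-to-width conversion used in these comparisons can be sketched as below. It assumes the standard narrow-resonance definition $\omega\gamma = \frac{2J+1}{(2j_p+1)(2J_T+1)} \Gamma_p \Gamma_{\gamma}/\Gamma$ with $j_p = 1/2$ and $J_T = 3/2$ for p $+$ $^{23}$Na, i.e., a statistical factor of 8, which is consistent with the numbers quoted here.
\begin{verbatim}
# Sketch of the conversion from a (p,gamma) resonance strength to a
# proton partial width, assuming a statistical factor of
# (2*j_p + 1)(2*J_T + 1) = 8 for p + 23Na.
def width_from_strength(wg_eV, gg_over_g):
    """(2J_f+1)*Gamma_p in eV from a resonance strength omega*gamma."""
    return 8.0 * wg_eV / gg_over_g

print(width_from_strength(1.46e-9, 0.95))   # 132 keV: ~1.23e-8 eV
\end{verbatim}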
\subsubsection{$168$-keV Resonance}
The $168$-keV resonance has a proton width of $(2J_f+1)\Gamma_p = 1.83(39) \times 10^{-4}$ eV as discussed in Ref.~\cite{hale_2004}. This value is in good agreement with that of the current work, $(2J_f+1)\Gamma_p = 1.3^{+0.4}_{-0.3} \times 10^{-4}$ eV.
\subsubsection{$240$-keV Resonance}
Using the resonance strength measured in Ref.~\cite{BOELTZIG_2019}, $\omega \gamma = 4.82(82) \times 10^{-4}$ eV, together with $\Gamma_{\gamma}/\Gamma > 0.7$ from Ref.~\cite{vermeer_1988}, $(2J_f+1)\Gamma_p$ has a lower limit of $3.86(66) \times 10^{-3}$ eV (obtained in the limit $\Gamma_{\gamma}/\Gamma \to 1$), which is in mild tension with the transfer value of $2.5(7) \times 10^{-3}$ eV.
\subsubsection{$295$-keV Resonance}
Ref.~\cite{BOELTZIG_2019} measured $\omega \gamma = 1.08(19) \times 10^{-1}$ eV, while Ref.~\cite{vermeer_1988} gives $\Gamma_{\gamma}/\Gamma = 0.70(9)$. In this case, $(2J_f+1)\Gamma_p = 1.2(2)$ eV. This is in significant disagreement with the value from this work, $(2J_f+1)\Gamma_p = 4.0^{+1.5}_{-1.1}$ eV.
\subsubsection{$490$-keV Resonance}
The $490$-keV resonance is considered a standard resonance for the $^{23}$Na$(p, \gamma)$ reaction, with a strength of $9.13(125) \times 10^{-2}$ eV \cite{PAINE_1979}. Unfortunately, $\Gamma_{\gamma}/\Gamma$ is not known. However, an upper limit for $\omega \gamma_{(p, \alpha)}$ has been set at $\leq 0.011$ eV \cite{hale_2004}. The ratio of the two resonance strengths sets an upper limit on $\Gamma_{\alpha}/\Gamma_{\gamma}$:
\begin{equation}
\label{eq:resonance_strength_ratio}
\frac{\omega \gamma_{(p, \alpha)}}{\omega \gamma_{(p, \gamma)}} = \frac{\Gamma_{\alpha}}{\Gamma_{\gamma}}.
\end{equation}
Plugging in the values gives $\Gamma_{\alpha}/\Gamma_{\gamma} \leq 0.12$. Assuming $\Gamma_p \ll \Gamma_{\gamma}$, so that $\Gamma \approx \Gamma_{\gamma} + \Gamma_{\alpha}$, we have $\Gamma_{\gamma}/\Gamma = (1 + \Gamma_{\alpha}/\Gamma_{\gamma})^{-1} \geq 0.89$. The current value of $(2J_f+1)\Gamma_p = 1.1^{+0.4}_{-0.3}$ eV can be compared to the upper limit implied by the standard resonance, $(2J_f+1)\Gamma_p = 0.821(112)$ eV. If we assume the $\alpha$ channel is completely negligible, $(2J_f+1)\Gamma_p = 0.730(100)$ eV. The standard resonance value appears to be consistent with the current work.
\subsubsection{Final Remarks}
The above comparisons make it clear that the level of agreement between the current experiment and previous measurements is inconsistent. Of particular concern are the $132$-keV and $295$-keV resonances, for which the disagreement is at a high level of significance. However, the measurement of Ref.~\cite{BOELTZIG_2019} at LUNA used the $295$-keV resonance as a reference during the data collection on the $132$-keV resonance, which could induce a correlation between those resonance strengths when compared to this work. Furthermore, the updated resonance energy of $132$ keV compared to the previously assumed $138$ keV could move the beam off the plateau of the target yield curve, but the magnitude of this effect is difficult to estimate. However, the measurement of Ref.~\cite{Cesaratto_2013} has an upper limit that is consistent with the LUNA value and is in tension with the current work. That upper limit also assumed the $138$-keV resonance energy, but appeared to use a much thicker target ($\approx 30$ keV) than the LUNA measurement ($\approx 15$ keV), making it less sensitive to the resonance energy shift. All of this discussion presupposes that the proton state has $\ell = 2$ and that our observed angular distribution arises completely from a direct reaction mechanism. If the spin is one of the other possible values, the current results will differ by over an order of magnitude, which could indicate the observed yields have significant contributions from a compound reaction mechanism.
\section{The $^{23}$Na$(p, \gamma)$ and $^{23}$Na$(p, \alpha)$ Reaction Rates}
It was briefly mentioned in the previous section that there exists a formidable amount of data relevant to the $^{23}$Na$(p, \gamma)$ reaction rate. The compiled values of Ref.~\cite{hale_2004} make up the majority of the current STARLIB rate. A detailed reanalysis of this rate is likely needed, but is well beyond the scope of the current work. For now, two issues of relevance to globular cluster nucleosynthesis will be investigated:
\begin{enumerate}
\item The impact of our recommended energies.
\item The potential impacts of the observed discrepancy in the proton partial widths for the $132$-keV resonance.
\end{enumerate}
\subsection{Energy Update}
The resonance energies presented in Table \ref{tab:recommened_energies} were substituted into the \texttt{RatesMC} input files provided by STARLIB for the $(p, \gamma)$ and $(p, \alpha)$ rates. Particle partial widths were scaled as needed to reflect the new energies. The resulting rates, normalized to their medians, are shown in Fig.~\ref{fig:compare_p_g_energy_update}. The blue lines show the $68 \%$ coverage of the rates as determined by LUNA \cite{BOELTZIG_2019}. The influence of the new energy for the $132$-keV resonance on the $(p, \gamma)$ rate can be clearly seen. Recall that the resonance energy enters the rate exponentially; in this case the $5$-keV shift in energy is responsible for the rate increasing by a factor of $2.5$ for temperatures of $70 \text{-} 80$ MK. The impact of the new energies on the $(p, \alpha)$ rate is more modest. While there is a factor of $1.25$ increase, the new rate is well within the uncertainty of the previous rate.
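A back-of-the-envelope estimate shows why the shift matters so much: if a narrow resonance contributes to the rate in proportion to $\exp(-E_r/kT)$, then lowering $E_r$ by $5$ keV multiplies its contribution by $\exp(5\,\textnormal{keV}/kT)$. The simple sketch below reproduces most of the factor of $2.5$ seen in the full Monte Carlo calculation at these temperatures.
\begin{verbatim}
# Narrow-resonance estimate of the boost from a 5 keV downward energy shift.
import math

K_B = 8.617333e-8   # Boltzmann constant in keV/K
for T in (70e6, 75e6, 80e6):
    print(f"T = {T / 1e6:.0f} MK: factor = {math.exp(5.0 / (K_B * T)):.2f}")
\end{verbatim}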
\begin{figure}
\centering
\includegraphics[width=.45\textwidth]{Chapter-6/figs/GraphContour_energy_update_p_g.png}\includegraphics[width=.45\textwidth]{Chapter-6/figs/GraphContour_energy_update_p_a.png}
\caption{(left) The $(p, \gamma)$ reaction rate ratio. The rate has been normalized to its median, and the contours show the relative uncertainty as a function of temperature. The solid blue line is the recommended rate of Ref.~\cite{BOELTZIG_2019}, and the dashed blue lines show its $68 \%$ coverage. (right) The reaction rate ratio plot for the $(p, \alpha)$ rate. Again the blue line shows the previous rate.}
\label{fig:compare_p_g_energy_update}
\end{figure}
In order to assess the impact of the increase of the $(p, \gamma)$ rate, a Monte Carlo network calculation was carried out by integrating the updated rates into STARLIB. Remaining relatively agnostic towards the potential source of Na enrichment in globular clusters, a simple single zone calculation was carried out with $T = 75$ MK, $\rho = 10$ g/cm$^3$, and an initial composition taken from Ref.~\cite{iliadis_2016}. The initial mass fraction of H was $X_{\textnormal{H}} = 0.75$, and the network was run until this mass fraction fell to $10 \%$ of its initial value. Two Monte Carlo runs were performed, one for the LUNA rates and one for the rates of this work, with $10^4$ iterations for each set of rates. The Spearman rank-order coefficient was used to identify correlations between the rate variation factor, $p_i$, and the final $^{23}$Na abundance. The four most influential reaction rates are shown in Fig.~\ref{fig:na_dot_plot}; note the dramatic increase in correlation for the updated $^{23}$Na$(p, \gamma)$ rate, which is accompanied by a mild weakening of the correlations with $^{23}$Na$(p, \alpha)$ and $^{20}$Ne$(p, \gamma)$. Figure \ref{fig:na_abundance_comp} shows the distribution of the final mass fractions from the $10^4$ network runs. The peak of the histogram for the updated rates is shifted downward, with a spread in $^{23}$Na comparable to that of the samples using the LUNA rates. Both of these figures show that the increase in the $(p, \gamma)$ rate found in this work leads to an overall reduction in the final abundance of sodium. It is worth emphasizing that the majority of this change comes just from the $5$-keV shift in the $132$-keV resonance energy.
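The sensitivity measure itself is simple to reproduce. The sketch below shows the mechanics with toy stand-in arrays, using SciPy's \texttt{spearmanr} in place of whatever implementation the network post-processing actually used.
\begin{verbatim}
# Sketch of the rank-correlation sensitivity measure; p_i and x_na are
# toy stand-ins for the rate variation factors and final 23Na mass
# fractions of the 10^4 network runs.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
p_i = rng.lognormal(0.0, 0.3, 10_000)               # rate variation factors
x_na = 1e-4 / p_i * rng.lognormal(0.0, 0.1, 10_000) # toy anticorrelation

rho, _ = spearmanr(p_i, x_na)
print(f"Spearman rho = {rho:.2f}")  # strongly negative: a faster (p,gamma)
                                    # rate destroys more sodium
\end{verbatim}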
\begin{figure}
\centering
\includegraphics[width=\textwidth]{Chapter-6/figs/Na_dot_plot.png}
\caption{Pair correlation plots for the rate variation factor and the final mass fraction of sodium. The top row shows the four most influential reactions when using the rates of Ref.~\cite{BOELTZIG_2019}, while the bottom row uses the updated energies of this work. The new energy of the $132$-keV resonance in $^{23}$Na$(p, \gamma)$ is shown to dramatically increase the dependence of the final sodium abundance on the $^{23}$Na$(p, \gamma)$ rate.}
\label{fig:na_dot_plot}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.6\textwidth]{Chapter-6/figs/Abund_comp_Na.pdf}
\caption{Histograms of the samples from the Monte Carlo reaction network runs for both the rates of Ref.~\cite{BOELTZIG_2019} and those of this work. While the spread of the two histograms is similar, the updated rate causes sodium to be destroyed more quickly.}
\label{fig:na_abundance_comp}
\end{figure}
\subsection{Transfer Measurement as the Only Constraint on the $132$-keV Resonance}
The large discrepancy between the proton partial widths derived in this work and those inferred from Ref.~\cite{BOELTZIG_2019} is concerning, but ultimately the dependence of the current proton partial widths on the assumptions of DWBA does not provide a strong argument to discount the findings of the direct studies. However, it is instructive to further examine the results of this work in order to demonstrate how assigning probabilities to $\ell$ values impacts the reaction rate. If it were the case that the only available constraints were the proton partial widths derived in this work, then the probability of each transfer derived from the Bayesian model comparison calculation would have to be accounted for in the Monte Carlo reaction rate calculation. Fortunately, \texttt{RatesMC} can sample these probabilities as described in Ref.~\cite{2014_Mohr}. The result of plugging the mean probabilities given in Table \ref{tab:probs} into the rate calculation is shown in Fig.~\ref{fig:compare_p_g_l_update}. This figure shows the orders-of-magnitude uncertainty introduced by the distinct partial widths. The $\approx 1\%$ chance of an $\ell = 0$ transfer gives a potential increase of a factor of $1000$ in the reaction rate in the tails of the distribution. Note that the high asymmetry is caused by only considering $\ell = 0 \text{-} 3$ transfers. This calculation demonstrates the vital role of direct measurements in nuclear astrophysics. The uncertainties inherent to transfer reactions can compound rapidly if a clear $\ell$ value cannot be determined for a single influential resonance.
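The mechanics of this sampling can be sketched as follows. The $\ell$ probabilities and factor uncertainties below are placeholders (the actual values are those of Table \ref{tab:probs} and the posterior samples), while the medians are taken from the $132$-keV rows of Table \ref{tab:gamma_p_table}.
\begin{verbatim}
# Sketch: draw an l value from the model probabilities, then draw
# (2J_f+1)Gamma_p from the corresponding log-normal.  probs and fu are
# hypothetical placeholder values.
import numpy as np

rng = np.random.default_rng(3)
probs = {0: 0.01, 1: 0.04, 2: 0.80, 3: 0.15}            # hypothetical
medians = {0: 2.3e-5, 1: 2.0e-6, 2: 6.0e-8, 3: 1.4e-9}  # eV, from the table
fu = {0: 1.5, 1: 1.4, 2: 1.4, 3: 1.4}                   # hypothetical f.u.

ells = rng.choice(list(probs), p=list(probs.values()), size=100_000)
gp = np.array([medians[l] * rng.lognormal(0.0, np.log(fu[l])) for l in ells])
print(np.percentile(gp, [2.5, 50, 97.5]))  # tails span orders of magnitude
\end{verbatim}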
\begin{figure}
\centering
\includegraphics[width=.6\textwidth]{Chapter-6/figs/GraphContour_ell_value_shift.png}
\caption{The $(p, \gamma)$ reaction rate ratio using just the constraints of the current work. The rate has been normalized to its median, and the contours show the relative uncertainty as a function of temperature. The solid blue line is the recommended rate of Ref.~\cite{BOELTZIG_2019}, and the dashed blue lines show its $68 \%$ coverage. The highly asymmetric distribution around $70$ MK is caused by the improbable but allowed $\ell=0,1$ transfers.}
\label{fig:compare_p_g_l_update}
\end{figure}
\section{Conclusions}
Utilizing the high resolution capabilities of the TUNL SPS, astrophysically important excited states in $^{24}$Mg were observed. Careful calibration and compilation of previous results gave a significantly lower resonance energy for the $132$-keV resonance. This resonance makes the single largest contribution to the $(p, \gamma)$ reaction rate at temperatures important for globular cluster nucleosynthesis. Angular distributions were analyzed using the Bayesian DWBA methods of Chapter \ref{chap:bay_dwba}, and spectroscopic factors were extracted. This chapter has presented an analysis that is the first of its kind, in which Bayesian methods were used to accurately determine uncertainties at every step of the transfer reaction analysis. The astrophysical impact of these uncertainties was briefly investigated. While there is still a need for an updated rate evaluation, the results of this experiment indicate that significant uncertainties are still present in both the $^{23}$Na$(p, \gamma)$ and $^{23}$Na$(p, \alpha)$ reaction rates, and as a result our knowledge of the Na-O anticorrelation in globular clusters is still limited by the nuclear physics.
\chapter{Summary and Conclusions}
\label{chap:potassium}
The Na-O and K-Mg elemental abundance anomalies in globular clusters remain open questions. The astrophysical site that produces these signatures is still unknown, indicating that our current theories of stellar evolution are incomplete. Chapter 1 provided a brief introduction to these phenomena, focusing on the history of the astronomical observations. The continued improvement in observational techniques has revealed that these anomalies are a result of globular clusters containing multiple stellar populations, with the oldest population of stars enriching approximately $30 \%$ of the newest generation by some unknown mechanism. While the burning sites responsible for the Na-O and K-Mg anomalies are most likely different, our current understanding of globular cluster nucleosynthesis is hampered by our imprecise knowledge of both the sodium and potassium destroying reactions, $^{23}$Na$(p, \gamma)$ and $^{39}$K$(p, \gamma)$, respectively.
Chapter 2 outlined how stellar burning processes are linked to nuclear properties that can be measured in the laboratory. At the low temperatures thought to characterize the pollution sites of globular clusters, thermonuclear reaction rates are dominated by resonant reactions. The Coulomb barrier makes direct study of these reactions difficult at best and, as is the case for the lowest lying resonances, frequently impossible. By connecting resonance parameters to nuclear structure, it becomes possible to constrain these rates using transfer reactions. In particular, transfer reactions can constrain the energy, spin parity, and particle partial widths of astrophysically relevant resonances.
Thermonuclear reaction rates depend on experimentally measured quantities, and Chapter 3 reviewed the propagation of experimental uncertainties through reaction rate calculations using Monte Carlo techniques. By collecting a network of reaction rates and their associated uncertainties, the impact of individual rates can be assessed. This procedure was demonstrated with the results of a reevaluation of the $^{39}$K$(p, \gamma)$ rate. By including all known experimental data, this rate was shown to have uncertainties that are too large to make the precise predictions required to study the K-Mg abundance anomaly. Future experimental studies are required to better determine this rate.
Chapter 4 provided an overview of the TUNL tandem lab. The author's efforts in recommissioning the Split-pole spectrograph and the accompanying focal plane detector were discussed. This work provided the opportunity to study transfer reactions with excellent energy resolution, a necessity for nuclear astrophysics. Chapter 5 detailed two Bayesian techniques that accurately quantify the uncertainties from spectrograph experiments. The energy calibration method accounts for deviations of the data from the fit function, which has long been an issue for spectrograph measurements. These additional uncertainties are critical when deducing excitation energies that will be used to calculate resonance energies for astrophysically important states. Bayesian DWBA makes it possible to extract spectroscopic factors and assign $\ell$ values while taking into account the uncertainties arising from the phenomenological optical potentials used by the theory.
Finally, all of the above techniques were combined to indirectly study the $^{23}$Na$(p, \gamma)$ reaction using $^{23}$Na$(^3\textnormal{He}, d)$. The single most important resonance in $^{23}$Na$(p, \gamma)$ for globular cluster nucleosynthesis has previously been determined to lie at $138$ keV. The excellent energy resolution of the Split-pole spectrograph allowed the state corresponding to this resonance to be observed at multiple angles. A thorough energy calibration was carried out using updated values and the Bayesian method of Chapter 5. These considerations resulted in a new suggested resonance energy of $E_r = 132(3)$ keV, which is over $5$ keV lower than previously suggested. Bayesian DWBA was carried out for 13 states close to the proton threshold. Spectroscopic factors were generally found to be in agreement with Ref.~\cite{hale_2004}. Probabilities were assigned to each allowed $\ell$ transfer for the $132(3)$-keV resonance. Proton partial widths were derived, and while decent agreement was found for some states, the $132$-keV resonance was found to be, at best, a factor of three different from the value implied by Ref.~\cite{BOELTZIG_2019}. The reason for this discrepancy is unclear at this time. The reaction rates for both $^{23}$Na$(p, \gamma)$ and $^{23}$Na$(p, \alpha)$ were updated with the energies of this work. Modest changes in the $^{23}$Na$(p, \alpha)$ rate were observed, while a factor of $2.5$ increase in the median rate of the $^{23}$Na$(p, \gamma)$ reaction was determined. This increase is due almost solely to the new recommended energy of the $132$-keV resonance. Finally, it was shown that the constraints from the transfer reaction measurement alone are insufficient to precisely determine the rate because of the ambiguous $\ell$ assignment for the $132$-keV resonance. These large uncertainties underscore the need for accurate direct measurements, while also indicating a need for a better understanding of the uncertainties in indirect studies.
Several directions exist for future work. The Bayesian DWBA methods of Chapter \ref{chap:bay_dwba} are still in their infancy. The assumptions on the prior distributions are at this time motivated primarily by global studies and by the need to keep the problem computationally tractable. However, deriving these global values using a Bayesian method would allow future transfer measurements to more accurately account for parameter uncertainties. The discrepancy between the direct measurements and the partial widths extracted in Chapter \ref{chap:sodium} is of particular concern. Systematic studies that investigate the reliability of estimating partial widths from transfer reactions are needed. The disagreement between the excitation energies of this work and the study of Ref.~\cite{hale_2004} requires further study using independent methods. An attractive option is a particle-$\gamma$ coincidence study to provide precise energies with systematic effects different from those of the spectrograph measurements. If the resonance energy deduced in this work is found to be reliable, then further direct studies are likely needed to verify the results of measurements that assumed the previous, higher resonance energy. Finally, the $^{39}$K$(^{3}\textnormal{He}, d)$ reaction has been successfully carried out at TUNL, and analysis of this experiment will provide useful constraints on the rate as discussed in Chapter 3.
\chapter*{#1}
\markboth{#1}{#1}}
\usepackage{amsmath,amssymb,amsfonts}
\usepackage{dcolumn}
\usepackage{bm}
\usepackage{cancel}
\usepackage{verbatim}
\usepackage{ifthen}
\usepackage{url}
\usepackage{sectsty}
\usepackage{balance}
\usepackage{graphicx}
\usepackage{lastpage}
\usepackage[format=plain,justification=RaggedRight,singlelinecheck=false,font=small,labelfont=bf,labelsep=space]{caption}
\usepackage{fancyhdr}
\usepackage{dirtytalk}
\usepackage{threeparttable}
\usepackage{siunitx}
\usepackage{booktabs}
\usepackage{etoolbox}
\usepackage{braket}
\usepackage{subcaption}
\usepackage{multirow}
\usepackage{comment}
\usepackage[ruled,vlined]{algorithm2e}
\pagestyle{fancy}
\usepackage{abbrevs}
\robustify{\DateMark}
\usepackage{units}
\usepackage[sharp]{easylist}
\usepackage[table]{xcolor}
\usepackage{array,booktabs}
\usepackage{colortbl}
\newcolumntype{L}{@{}>{\kern\tabcolsep}l<{\kern\tabcolsep}}
\usepackage{titlesec}
\setcounter{secnumdepth}{4}
\titleformat{\paragraph}
{\normalfont\normalsize\bfseries}{\theparagraph}{1em}{}
\titlespacing*{\paragraph}
{0pt}{3.25ex plus 1ex minus .2ex}{1.5ex plus .2ex}
\usepackage[section]{placeins}
\dispositionformat{\bfseries}
\headingformat{\large\MakeUppercase}
\frenchspacing
\include{optional}
\setlength{\headheight}{14pt}
\committeesize{4}
\chair{Richard Longland}
\memberI{Arthur Champagne}
\memberII{Paul Huffman}
\memberIII{Gail McLaughlin}
\student{Caleb A.}{Marshall}
\program{Physics}
\thesistitle{Investigating Globular Cluster Elemental Abundance Anomalies Using $(^3\textnormal{He},d)$ Proton Transfer Reactions}
\newcommand{\uv}[1]{\ensuremath{\mathbf{\hat{#1}}}}
\newcommand{\ensuremath{\mathbf{\Omega}}}{\ensuremath{\mathbf{\Omega}}}
\newcommand{\eref}[1]{Eq.~\ref{#1}}
\newcommand{\fref}[1]{Fig.~\ref{#1}}
\newcommand{\tref}[1]{Table~\ref{#1}}
\newcommand{\nabla}{\nabla}
\renewcommand{\exp}[1]{e^{#1}}
\newcommand{\Conv}{\mathop{\scalebox{1.5}{\raisebox{-0.2ex}{$\ast$}}}}%
\usepackage{color}
\newcommand{\NEW}[1]{#1}
\newcommand{\COMMENT}[1]{\textcolor{green}{#1}}
\newcommand{\NOTER}[1]{\textcolor{orange}{#1}}
\newcommand{\NOTEC}[1]{\textcolor{blue}{#1}}
\newcommand{\NOTEK}[1]{\textcolor{magenta}{#1}}
\newcommand{\ensuremath{{\mu}\text{m}}}{\ensuremath{{\mu}\text{m}}}
\graphicspath{{./Chapter-1/figs/}{./Chapter-2/figs/}{./Chapter-3/figs/}{./Chapter-4/figs/}{./Chapter-5/figs/}{./Chapter-6/figs/}}
\usepackage{calc}
\newlength{\chaptercapitalheight}
\settoheight{\chaptercapitalheight}{D}
\newlength{\chapterfootskip}
\setlength{\chapterfootskip}{\chaptercapitalheight}
\addtolength{\chapterfootskip}{2\baselineskip}
\addtolength{\chapterfootskip}{0.5ex}
\renewcommand{\listfigurename}{LIST OF FIGURES}
\renewcommand{\listtablename}{LIST OF TABLES}
\renewcommand{\bibname}{BIBLIOGRAPHY}
\newlength\graphht
\newcommand\calculategraphicstargetheight[1]{%
\setlength\graphht{\textheight
-\parskip
-\abovecaptionskip -\belowcaptionskip
-(12pt * #1)
-\chapterfootskip
}}
\usepackage{pdflscape}
\usepackage{tikz}
\fancypagestyle{lscapedplain}{%
\fancyhf{}
\fancyfoot{%
\tikz[remember picture,overlay]
\node[outer sep=1cm,above,rotate=90] at (current page.east) {\thepage};}
\renewcommand{\headrulewidth}{0pt}
\renewcommand{\footrulewidth}{0pt}
}
\begin{document}
\pagestyle{plain}
\frontmatter
\include{front}
\mainmatter
\pagestyle{plain}
\include{Chapter-1/Chapter-1}
\include{Chapter-2/Chapter-2}
\include{Chapter-3/Chapter-3}
\include{Chapter-4/Chapter-4}
\include{Chapter-5/Chapter-5}
\include{Chapter-6/Chapter-6}
\include{Chapter-7/Chapter-7}
\begin{spacing}{1}
\setlength\bibitemsep{11pt}
\phantomsection
\addcontentsline{toc}{chapter}{{\uppercase{\bibname}}}
\titleformat{\chapter}[display]{\bf\filcenter
}{\chaptertitlename\ \thechapter}{11pt}{\bf\filcenter}
\titlespacing*{\chapter}{0pt}{0.0in-9pt}{22pt}
\printbibliography[heading=myheading]
\end{spacing}
\restoregeometry
\section{Introduction}
The Cluster Variation Method (CVM) was introduced by Kikuchi
\cite{Kik51} in 1951, as an approximation technique for the
equilibrium statistical mechanics of lattice (Ising--like) models,
generalizing the Bethe--Peierls \cite{Bet35,Pei36} and
Kramers--Wannier \cite{KraWan1,KraWan2} approximations, an account of
which can be found in several textbooks \cite{PliBer,LavBel}. Apart
from rederiving these methods, Kikuchi proposed a combinatorial
derivation of what today we can call the cube (respectively triangle,
tetrahedron) approximation of the CVM for the Ising model on the
simple cubic (respectively triangular, face centered cubic) lattice.
After the first proposal, many reformulations and applications, mainly
to the computation of phase diagram of lattice models in statistical
physics and material science, appeared, and have been reviewed in
\cite{PTPS}. The main line of activity has dealt with homogeneous,
translation--invariant lattice models with classical, discrete degrees
of freedom, but several other directions have been followed, including
for instance models with continuous degrees of freedom
\cite{KikuchiCont}, free surfaces \cite{MoranLopez,BuzPel}, models of
polymers \cite{Aguilera,LiseMaritan} and quantum models
\cite{MoritaQuant,Danani}. Out of equilibrium properties have also
been studied, in the framework of the path probability method
\cite{Ishii,Ducastelle,WadaKaburagi}, which is the dynamical version
of the CVM. Although the CVM predicts mean--field--like critical
behaviour, the problem of extracting critical behaviour from sequences
of CVM approximations has also been considered by means of different
approaches \cite{CVPAM1,CVPAM2,CVPAM3,CVPAM4,CAM}.
A line of research which is particularly relevant to the present
discussion has considered heterogeneous and random models. Much work
has been devoted in the 1980s to applications of the CVM to models with
quenched random interactions (see e.g.\ \cite{SeinoKatsura} and refs.\
therein), mainly aiming at the phase diagram, and related equilibrium
properties, of Ising--like models of spin glasses in the average
case. The most common approach was based on the distribution of the
effective fields, and population dynamics algorithms were developed
and studied for the corresponding integral equations. All this effort
was, however, limited to the replica--symmetric level. Approaches taking
into account the first step of replica symmetry breaking have been
developed only recently \cite{SPScience}, at the level of the
Bethe--Peierls approximation, in its cavity method formulation, for
models on random graphs in both the single instance and average
case. These approaches have been particularly successful in their
application to combinatorial optimization problems, like
satisfiability \cite{SPSAT} and graph coloring \cite{SPCOL}. Another
interesting approach going in a similar direction has been proposed
recently \cite{Jort}, which relies on the analysis of the time
evolution of message--passing algorithms for the Bethe--Peierls
approximation.
Prompted by the interest in optimization and, more generally,
inference problems, a lot of work on the CVM has been done in recent
years also by researchers working on probabilistic graphical models
\cite{Smy97}, since the relation between the Bethe--Peierls
approximation and the belief propagation method \cite{Pearl} was
recognized \cite{Yed01}. The interaction between the two communities
of researchers working on statistical physics and optimization and
inference algorithms then led to the discovery of several new
algorithms for the CVM variational problem, and to a deeper
understanding of the method itself. There have been applications in
the fields of image restoration
\cite{TanMor,Tan02,Tanetal03,Tanetal04}, computer vision
\cite{FrePasCar}, interference in two--dimensional channels
\cite{Noam}, decoding of error--correcting codes
\cite{Gallager,McEliece,KabSaaLDPCC}, diagnosis \cite{Diagnosis},
unwrapping of phase images \cite{Unwrapping}, bioinformatics
\cite{BurgeKarlin,BioSeqAn,Krogh}, language processing
\cite{Huang,Manning}.
The purpose of the present paper is to give a short account of recent
advances on methodological aspects, and therefore applications will
not be considered in detail. It is not meant to be exhaustive and the
material included reflects in some way the interests of the
author. The plan of the paper is as follows. In \Sref{SMM-PGM} the
basic definitions for statistical mechanics and probabilistic
graphical models are given, and notation is established. In
\Sref{Fundamentals} the CVM is introduced in its modern formulation,
and in \Sref{RegionBased} it is compared with related approximation
techniques. Its properties are then discussed, with particular
emphasis on exact results, in \Sref{Exact}. Finally, the use of the CVM
as an approximation and the algorithms which can be used to solve the
CVM variational problem are illustrated in \Sref{Approx}. Conclusions
are drawn in \Sref{Conclusions}.
\section{Statistical mechanical models and probabilistic graphical
models}
\label{SMM-PGM}
We are interested in dealing with models with discrete degrees of
freedom which will be denoted by $\bi{s} = \{ s_1, s_2, \ldots s_N
\}$. For instance, variables $s_i$ could take values in the set
$\{ 0,1 \}$ (binary variables), $\{ -1, +1 \}$ (Ising spins), or $\{ 1,
2, \ldots q \}$ (Potts variables).
Statistical mechanical models are defined through an energy function,
usually called Hamiltonian, $H = H(\bi{s})$, and the corresponding
probability distribution at thermal equilibrium is the Boltzmann
distribution
\begin{equation}
p(\bi{s}) = \frac{1}{Z} \exp\left[ - H(\bi{s}) \right],
\end{equation}
where the inverse temperature $\beta = (k_B T)^{-1}$ has been absorbed
into the Hamiltonian and
\begin{equation}
Z \equiv \exp(-F) = \sum_{\bi{s}} \exp\left[ - H(\bi{s}) \right]
\end{equation}
is the partition function, with $F$ the free energy.
The Hamiltonian is typically a sum of terms, each involving a small
number of variables. A useful representation is given by the {\it
factor graph} \cite{Kschischang}. A factor graph is a bipartite graph
made of variable nodes $i, j, \ldots$, one for each variable, and {\it
function nodes} $a, b, \ldots$, one for each term of the
Hamiltonian. An edge joins a variable node $i$ and a function node $a$
if and only if $i \in a$, that is, the variable $s_i$ appears in $H_a$,
the term of the Hamiltonian associated with $a$. The Hamiltonian can
then be written as
\begin{equation}
H = \sum_a H_a(\bi{s_a}), \qquad \bi{s_a} = \{ s_i, i \in a \}.
\label{HsumHa}
\end{equation}
A simple example of a factor graph is reported in
\Fref{FactorGraph}, and the corresponding Hamiltonian is written as
\begin{equation}
\fl H(s_1,s_2,s_3,s_4,s_5,s_6) = H_a(s_1,s_2) + H_b(s_2,s_3,s_4) +
H_c(s_3,s_4,s_5,s_6).
\end{equation}
\begin{figure}
\begin{center}
\pspicture(-3,-1)(10,3)
\scalebox{0.7}{
\pscircle(0,1){.3}
\pscircle(4,1){.3}
\pscircle(8,0){.3}
\pscircle(8,2){.3}
\pscircle(12,0){.3}
\pscircle(12,2){.3}
\rput(0,1){1}
\rput(4,1){2}
\rput(8,0){4}
\rput(8,2){3}
\rput(12,0){6}
\rput(12,2){5}
\psframe(1.7,.7)(2.3,1.3)
\psframe(5.7,.7)(6.3,1.3)
\psframe(9.7,.7)(10.3,1.3)
\rput(2,1){$a$}
\rput(6,1){$b$}
\rput(10,1){$c$}
\psline(.3,1)(1.7,1)
\psline(2.3,1)(3.7,1)
\psline(4.3,1)(5.7,1)
\psline(6.3,1.15)(7.73,1.87)
\psline(6.3,.85)(7.73,.13)
\psline(8.27,1.87)(9.7,1.15)
\psline(8.27,.13)(9.7,.85)
\psline(10.3,1.15)(11.73,1.87)
\psline(10.3,.85)(11.73,.13)
}
\endpspicture
\end{center}
\caption{\label{FactorGraph}An example of a factor graph: variable and
function nodes are denoted by circles and squares, respectively}
\end{figure}
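As a concrete illustration, one possible encoding of this factor graph is sketched below; the local terms $H_a$, $H_b$, $H_c$ are placeholders, chosen only to make the sketch runnable. Each function node stores the variable nodes it touches together with its term.
\begin{verbatim}
# Sketch: a factor graph as a map from function nodes to (variables, term).
import itertools, math

H_terms = {
    "a": ((1, 2),       lambda s: -s[1] * s[2]),
    "b": ((2, 3, 4),    lambda s: -s[2] * s[3] * s[4]),
    "c": ((3, 4, 5, 6), lambda s: -s[3] * s[4] - s[5] * s[6]),
}

def H(s):   # s: dict mapping variable node -> spin value
    return sum(term(s) for _, term in H_terms.values())

# Brute-force partition function over the 2^6 Ising configurations.
Z = sum(math.exp(-H(dict(zip(range(1, 7), cfg))))
        for cfg in itertools.product((-1, 1), repeat=6))
print(Z)
\end{verbatim}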
The factor graph representation is particularly useful for models with
non--pairwise interactions. If the Hamiltonian contains only
1--variable and 2--variable terms, as in the Ising model
\begin{equation}
H = - \sum_i h_i s_i - \sum_{(i,j)} J_{ij} s_i s_j,
\label{Ising}
\end{equation}
then it is customary to draw a simpler graph, where only variable
nodes appear, and edges are drawn between pairs of interacting spins
$(i,j)$. In physical models the interaction strength $J_{ij}$ can
depend on the distance between spins, and interaction is often
restricted to nearest neighbours (NNs), which are denoted by $\langle
i,j \rangle$.
In combinatorial optimization problems, the Hamiltonian plays the role
of a cost function, and one is interested in the low--temperature
limit $T \to 0$, where only minimal energy states (ground states) have
a non--vanishing probability.
Probabilistic graphical models \cite{Smy97,Lauritzen} are usually
defined in a slightly different way. In the case of {\it Markov random
fields}, also called {\it Markov networks}, the joint distribution over
all variables is given by
\begin{equation}
p(\bi{s}) = \frac{1}{Z} \prod_a \psi_a(\bi{s_a}),
\end{equation}
where $\psi_a$ is called {\it potential} (potentials involving only one
variable are often called {\it evidences}) and
\begin{equation}
Z = \sum_{\bi{s}} \prod_a \psi_a(\bi{s_a}).
\end{equation}
Of course, a statistical mechanical model described by the Hamiltonian
(\ref{HsumHa}) corresponds to a probabilistic graphical model with
potentials $\psi_a = \exp(-H_a)$. On the other hand, {\it Bayesian
networks}, which we will not consider here in detail, are defined in
terms of directed graphs and conditional probabilities. It must be
noted, however, that a Bayesian network can always be mapped onto a
Markov network \cite{Smy97}.
\section{Fundamentals of the Cluster Variation Method}
\label{Fundamentals}
The original proposal by Kikuchi \cite{Kik51} was based on an
approximation for the number of configurations of a lattice model with
assigned local expectation values. The formalism was rather involved
to deal with in the general case, and since then many reformulations
have followed. A first important step was taken by Barker \cite{Bar53}, who
derived a computationally useful expression for the entropy
approximation. This was then rewritten as a cumulant expansion by
Morita \cite{Mor57,Mor72}, and Schlijper \cite{Sch83} noticed that
this expansion could be written in terms of a M\"obius
inversion. A clear and simple formulation was then eventually set up
by An \cite{An88}, and this is the one we shall follow below.
The CVM can be derived from the
variational principle of equilibrium statistical mechanics, where the
free energy is given by
\begin{equation}
F = - \ln Z = \min_p {\cal F}(p) = \min_p \sum_{\bi{s}}
\left[ p(\bi{s}) H(\bi{s}) + p(\bi{s}) \ln p(\bi{s}) \right]
\label{VarPrin}
\end{equation}
subject to the normalization constraint
\begin{equation}
\sum_{\bi{s}} p(\bi{s}) = 1.
\end{equation}
It is easily verified that the minimum is obtained for the Boltzmann
distribution
\begin{equation}
\hat p(\bi{s}) = \frac{1}{Z} \exp[- H(\bi{s})] = {\rm arg} \,{\rm min}
\, {\cal F}
\end{equation}
and that the variational free energy can be written in the form of a
Kullback--Leibler distance
\begin{equation}
{\cal F}(p) = F + \sum_{\bi{s}} p(\bi{s}) \ln \frac{p(\bi{s})}{\hat
p(\bi{s})}.
\end{equation}
The basic idea underlying the CVM is to treat exactly the first term
(energy) of the variational free energy ${\cal F}(p)$ in
\Eref{VarPrin} and to approximate the second one (entropy) by means of
a truncated cumulant expansion.
We first define a {\it cluster} $\alpha$ as a subset of the factor
graph such that if a factor node belongs to $\alpha$, then all the
variable nodes $i \in a$ also belong to $\alpha$ (while the converse
need not be true, otherwise the only legitimate clusters would
be the connected components of the factor graph). Given a cluster we
can define its energy
\begin{equation}
H_\alpha(\bi{s_\alpha}) = \sum_{a \in \alpha} H_a(\bi{s_a}),
\end{equation}
probability distribution
\begin{equation}
p_\alpha(\bi{s_\alpha}) = \sum_{\bi{s} \setminus \bi{s_\alpha}} p(\bi{s})
\end{equation}
and entropy
\begin{equation}
S_\alpha = - \sum_{\bi{s_\alpha}} p_\alpha(\bi{s_\alpha}) \ln
p_\alpha(\bi{s_\alpha}).
\end{equation}
Then the entropy cumulants are defined by
\begin{equation}
S_\alpha = \sum_{\beta \subseteq \alpha} \tilde S_\beta,
\end{equation}
which can be solved with respect to the cumulants by means of a
M\"obius inversion, which yields
\begin{equation}
\tilde S_\beta = \sum_{\alpha \subseteq \beta}
(-1)^{n_\alpha - n_\beta} S_\alpha,
\end{equation}
where $n_\alpha$ denotes the number of variables in cluster
$\alpha$. The variational free energy can then be written as
\begin{equation}
{\cal F}(p) = \sum_{\bi{s}} p(\bi{s}) H(\bi{s}) - \sum_\beta \tilde
S_\beta,
\end{equation}
where the second summation is over all possible clusters.
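As a concrete illustration of the inversion, the following sketch computes $\tilde S_\beta$ for a small cluster, assuming a function returning the marginal entropies; for independent binary variables every cumulant beyond single sites vanishes, as expected.
\begin{verbatim}
# Sketch of the Moebius inversion for entropy cumulants.
import math
from itertools import combinations

def cumulant(beta, S):
    """S~_beta = sum over alpha subseteq beta of
    (-1)^(n_beta - n_alpha) S(alpha)."""
    total = 0.0
    for k in range(1, len(beta) + 1):
        for alpha in combinations(sorted(beta), k):
            total += (-1) ** (len(beta) - k) * S(frozenset(alpha))
    return total

# For independent binary variables S(alpha) = |alpha| ln 2, so every
# cumulant beyond single sites vanishes.
S_indep = lambda alpha: len(alpha) * math.log(2)
print(cumulant((1,), S_indep))       # ln 2
print(cumulant((1, 2), S_indep))     # ~0
print(cumulant((1, 2, 3), S_indep))  # ~0
\end{verbatim}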
This free energy expression is still exact, and here the approximation
enters. A set $R$ of clusters, made of maximal clusters and all their
subclusters, is selected, and the cumulant expansion of the entropy is
truncated retaining only terms corresponding to clusters in $R$. In
order to treat the energy term exactly it is necessary that each
function node is contained in at least one maximal cluster. One gets
\begin{equation}
\sum_\beta \tilde S_\beta \simeq \sum_{\beta \in R} \tilde S_\beta =
\sum_{\alpha \in R} a_\alpha S_\alpha,
\label{CVMapprox}
\end{equation}
where the coefficients $a_\alpha$, sometimes called M\"obius numbers,
satisfy \cite{An88}
\begin{equation}
\sum_{\beta \subseteq \alpha \in R} a_\alpha = 1 \qquad
\forall \beta \in R.
\label{MobiusNumbers}
\end{equation}
The above condition means that every subcluster must be counted
exactly once in the entropy expansion, and it also allows the energy
term to be rewritten as a sum of cluster energies, yielding the approximate
variational free energy
\begin{equation}
{\cal F}(\{p_\alpha, \alpha \in R\}) = \sum_{\alpha \in R} a_\alpha
{\cal F}_\alpha(p_\alpha),
\label{CVMFree}
\end{equation}
where the cluster free energies are given by
\begin{equation}
{\cal F}_\alpha(p_\alpha) = \sum_{\bi{s_\alpha}} \left[
p_\alpha(\bi{s_\alpha)} H_\alpha(\bi{s_\alpha}) +
p_\alpha(\bi{s_\alpha)} \ln p_\alpha(\bi{s_\alpha)} \right].
\label{ClusterFree}
\end{equation}
The CVM then amounts to the minimization of the above variational free
energy with respect to the cluster probability distributions, subject
to the normalization
\begin{equation}
\sum_{\bi{s_\alpha}} p_\alpha(\bi{s_\alpha}) = 1 \qquad \forall \alpha
\in R
\end{equation}
and compatibility constraints
\begin{equation}
p_\beta(\bi{s_\beta)} = \sum_{\bi{s_{\alpha \setminus \beta}}}
p_\alpha(\bi{s_\alpha}) \qquad
\forall \beta \subset \alpha \in R.
\label{CompConstr}
\end{equation}
It is of great importance to observe that the above constraint set is
approximate, in the sense that there are sets of cluster probability
distributions that satisfy these constraints and nevertheless cannot
be obtained as marginals of a joint probability distribution. An
explicit example will be given in \Sref{Exact}.
The simplest example is the pair approximation for a model with
pairwise interactions, like the Ising model (\ref{Ising}). The maximal
clusters are the pairs of interacting variables, and the other
clusters appearing in $R$ are the variable nodes. The pairs have
M\"obius number 1, while for the variable nodes $a_i = 1 - d_i$, where
$d_i$ is the {\it degree} of node $i$, that is, in the factor graph
representation, the number of function nodes it belongs to.
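The coefficients can be computed directly from the single counting condition (\ref{MobiusNumbers}) by processing clusters from the largest to the smallest; the sketch below verifies the pair approximation rule $a_i = 1 - d_i$ on a square of four sites.
\begin{verbatim}
# Moebius numbers from the single counting condition: every cluster in R
# must be counted exactly once, so a_alpha = 1 minus the sum of a_gamma
# over all strict superclusters gamma of alpha in R.
def moebius_numbers(R):
    """R: iterable of clusters, each given as a set of variable nodes."""
    R = sorted(map(frozenset, R), key=len, reverse=True)
    a = {}
    for alpha in R:
        a[alpha] = 1 - sum(a[g] for g in a if alpha < g)
    return a

# Pair approximation on a square (4-cycle): pairs get a = 1 and each
# site gets a = 1 - d_i = -1, in agreement with the rule above.
pairs = [{1, 2}, {2, 3}, {3, 4}, {4, 1}]
sites = [{1}, {2}, {3}, {4}]
for cluster, a in moebius_numbers(pairs + sites).items():
    print(sorted(cluster), a)
\end{verbatim}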
The quality of the approximation (\ref{CVMapprox}) depends on the value
of the neglected cumulants. In the applications to lattice systems it
is typically assumed that, since cumulants are related to
correlations, they vanish quickly for clusters larger than the
correlation length of the model. In \Fref{Cumulants} the first
cumulants, relative to the site (single variable) entropy, are shown
for the homogeneous ($J_{ij} = J$), zero field ($h_i = 0$), square
lattice Ising model, in the square approximation of the CVM.
\begin{figure}
\begin{center}
\includegraphics*[scale=.5]{Cumulants.eps}
\end{center}
\caption{\label{Cumulants}Cumulants for the square lattice Ising model}
\end{figure}
It can be seen that the cumulants peak at the (approximate) critical
point and decrease as the cluster size increases. This property is,
however, not completely general; it may depend on the interaction range. It
has been shown \cite{KappenWiegerinck} that this does not hold for
finite instances of the Sherrington--Kirkpatrick spin--glass model,
which is a fully connected model.
The meaning of cumulants as a measure of correlation can be easily
understood by considering a pair of weakly correlated variables and
writing their joint distribution as
\begin{equation}
p_{12}(s_1,s_2) = p_1(s_1) p_2(s_2) \left[ 1 +
\varepsilon \, q(s_1,s_2) \right], \qquad \varepsilon \ll 1.
\end{equation}
The corresponding cumulant is then
\begin{equation}
\tilde S_{12} = S_{12} - S_1 - S_2 = - \langle \ln \left[ 1 +
\varepsilon \, q(s_1,s_2) \right] \rangle = \Or(\varepsilon).
\end{equation}
\section{Region--based free energy approximations}
\label{RegionBased}
The idea of {\it region--based free energy approximations}, put
forward by Yedidia \cite{Yed04}, is quite useful to elucidate some of
the characteristics of the method, and its relations to other
techniques. A region--based free energy approximation is formally
similar to the CVM, and can be defined through equations (\ref{CVMFree})
and (\ref{ClusterFree}), but the requirements on the coefficients
$a_\alpha$ are weaker. The single counting condition is imposed only
on variable and function nodes, instead of all subclusters:
\begin{eqnarray}
\sum_{\alpha \in R, a \in \alpha} a_\alpha = 1 \qquad \forall a, \\
\sum_{\alpha \in R, i \in \alpha} a_\alpha = 1 \qquad \forall i.
\end{eqnarray}
Interesting particular cases are obtained if $R$ contains only two
types of regions, {\it large regions} and {\it small regions}. The
{\it junction graph} method \cite{Yed04,AjiMc} is obtained if they
form a directed graph, with edges from large to small regions, such
that:
\begin{enumerate}
\item every edge connects a large region with a small region which is
a subset of the former;
\item the subgraph of the regions containing a given node is a
connected tree.
\end{enumerate}
On the other hand, the {\it Bethe--Peierls approximation}, in its most general
formulation, is obtained by taking function nodes (with the associated
variable nodes) as large regions and variable nodes as small
regions. This reduces to the usual statistical physics formulation in
the case of pairwise interactions.
The CVM is a special region--based free energy approximation, with the
property that $R$ is closed under intersection. Indeed, one could
define $R$ for the CVM as the set made of the maximal clusters and all
the clusters which can be obtained by taking all the possible
intersections of (any number of) maximal clusters.
It is easy to verify that the Bethe--Peierls approximation is a
special case of CVM only if no function node shares more than one
variable node with another function node. If this is not the case, one
should be careful when applying the Bethe--Peierls
approximation. Consider a model with the factor graph depicted in
\Fref{BetheNotCVM}, where $s_i = \pm 1$ ($i = 1, 2, 3, 4$), $H = H_a +
H_b$ and
\begin{eqnarray}
H_a(s_1,s_2,s_3) = - h_0 s_1 - \frac{h}{2} (s_2 + s_3) - J s_1 s_2 s_3,
\\
H_b(s_2,s_3,s_4) = - h_0 s_4 - \frac{h}{2} (s_2 + s_3) - J s_2 s_3 s_4.
\end{eqnarray}
\begin{figure}
\begin{center}
\pspicture(-1,-2)(7,2)
\scalebox{0.7}{
\pscircle(0,0){.3}
\pscircle(4,1){.3}
\pscircle(4,-1){.3}
\pscircle(8,0){.3}
\psframe(1.7,-.3)(2.3,.3)
\psframe(5.7,-.3)(6.3,.3)
\rput(0,0){1}
\rput(4,1){2}
\rput(4,-1){3}
\rput(8,0){4}
\rput(2,0){$a$}
\rput(6,0){$b$}
\psline(.3,0)(1.7,0)
\psline(2.3,.15)(3.73,.87)
\psline(2.3,-.15)(3.73,-.87)
\psline(4.23,.87)(5.7,.15)
\psline(4.23,-.87)(5.7,-.15)
\psline(6.3,0)(7.7,0)
}
\endpspicture
\end{center}
\caption{\label{BetheNotCVM}Factor graph of a model for which the
Bethe--Peierls approximation is not a special case of the CVM}
\end{figure}
The CVM, with function nodes as maximal clusters, is exact (notice
that it coincides with the junction graph method), and the corresponding exact
cumulant expansion for the entropy is
\begin{equation}
S = S_a + S_b - S_{23},
\end{equation}
while the Bethe--Peierls entropy is
\begin{equation}
S_{\rm BP} = S_a + S_b - S_2 - S_3.
\end{equation}
The two entropies differ by the cumulant $\tilde S_{23} = S_{23} - S_2
- S_3$, and hence correlations between variable nodes 2 and 3 cannot
be captured by the Bethe--Peierls approximation. In
\Fref{BetheFailure} it is clearly illustrated how the Bethe--Peierls
approximation can fail. At large enough $J$ the exact entropy is
larger (by roughly $\ln 2$) than the Bethe--Peierls one.
\begin{figure}
\begin{center}
\includegraphics*[scale=.38]{BetheFailure.eps}
\end{center}
\caption{\label{BetheFailure}Entropy of the Bethe--Peierls approximation vs the
exact one for a model for which the Bethe--Peierls approximation is not a
special case of the CVM}
\end{figure}
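This failure is easy to reproduce by brute-force enumeration. The sketch below uses assumed parameter values ($h_0 = 1$, $h = 0$, $J = 5$; the values used for the figure are not restated here) and confirms that the exact entropy expansion reproduces the true entropy while the Bethe--Peierls expression misses roughly $\ln 2$.
\begin{verbatim}
# Brute-force comparison of the exact (CVM) entropy expansion with the
# Bethe-Peierls one for the four-spin model above.  Parameter values
# are assumed for illustration.
import itertools, math

J, h0, h = 5.0, 1.0, 0.0

def H(s1, s2, s3, s4):
    Ha = -h0 * s1 - 0.5 * h * (s2 + s3) - J * s1 * s2 * s3
    Hb = -h0 * s4 - 0.5 * h * (s2 + s3) - J * s2 * s3 * s4
    return Ha + Hb

states = list(itertools.product((-1, 1), repeat=4))
weights = [math.exp(-H(*s)) for s in states]
Z = sum(weights)
p = {s: w / Z for s, w in zip(states, weights)}

def S(indices):
    """Entropy of the marginal over the given variable indices (0-based)."""
    marg = {}
    for s, ps in p.items():
        key = tuple(s[i] for i in indices)
        marg[key] = marg.get(key, 0.0) + ps
    return -sum(q * math.log(q) for q in marg.values() if q > 0.0)

S_exact = S((0, 1, 2, 3))
S_cvm = S((0, 1, 2)) + S((1, 2, 3)) - S((1, 2))         # S_a + S_b - S_23
S_bp = S((0, 1, 2)) + S((1, 2, 3)) - S((1,)) - S((2,))  # Bethe-Peierls
print(S_exact, S_cvm, S_bp)   # S_exact = S_cvm ~ ln 2, S_bp ~ 0
\end{verbatim}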
\section{Exactly solvable cases}
\label{Exact}
The CVM is known to be exact in several cases, due to the topology of
the underlying graph, or to the special form of the Hamiltonian. In
the present section we shall first consider cases in which the CVM is
exact due to the graph topology, then proceed to the issue of
realizability and consider cases where the form of the Hamiltonian
makes an exact solution feasible with the CVM.
\subsection{Tree-like graphs}
It is well known that the CVM is exact for models defined on
tree--like graphs. This statement can be made more precise by
referring to the concept of {\it junction tree} \cite{LauSpi,Jensen},
which we shall actually use in its generalized form given by Yedidia,
Freeman and Weiss \cite{Yed04}. A junction tree is a tree--like
junction graph. The corresponding large regions are often called {\it
cliques}, and the small regions {\it separators}. With reference to
\Fref{BetheNotCVM} it is easy to check that the CVM, as described in
the previous section, corresponds to a junction tree with cliques
$(a123)$ and $(b234)$ and separator $(23)$, while the junction graph
corresponding to the Bethe--Peierls approximation is not a tree.
For a model defined on a junction tree the joint probability
distribution factors \cite{Yed04,Cowell} according to
\begin{equation}
p(\bi{s}) = \frac{\displaystyle\prod_{\alpha \in R_L}
p_{\alpha}(\bi{s_\alpha})}
{\displaystyle\prod_{\beta \in R_S} p_\beta^{d_\beta-1}(\bi{s_\beta})},
\end{equation}
where $R_L$ and $R_S$ denote the sets of large and small regions,
respectively, and $d_\beta$ is the degree of the small region $\beta$
in the junction tree. Notice that no normalization is needed.
The above factorization of the probability leads to the exact cumulant
expansion
\begin{equation}
S = \sum_{\alpha \in R_L} S_\alpha - \sum_{\beta \in R_S} (d_\beta-1)
S_\beta,
\end{equation}
and therefore the CVM with $R = R_L \cup R_S$ is exact.
As a first example, consider a particular subset of the square
lattice, the strip depicted in \Fref{Strip}, with open boundary
conditions in the horizontal direction, and define on it a model with
pairwise interactions (we do not use the factor graph representation
here).
\begin{figure}
\centertexdraw{
\drawdim cm \linewd 0.02
\arrowheadsize l:0.3 w:0.15
\arrowheadtype t:V
\move(0 0) \lvec(7 0)
\move(0 1) \lvec(7 1)
\move(0 2) \lvec(7 2)
\move(0 3) \lvec(7 3)
\move(0 4) \lvec(7 4)
\move(1 0) \lvec(1 4)
\move(2 0) \lvec(2 4)
\move(3 0) \lvec(3 4)
\move(4 0) \lvec(4 4)
\move(5 0) \lvec(5 4)
\move(6 0) \lvec(6 4)
\textref h:R v:C \htext(-.1 4) {1}
\textref h:R v:C \htext(-.1 3) {2}
\textref h:R v:C \htext(-.1 1.5) {$\vdots$}
\textref h:R v:C \htext(-.1 0) {$N$}
\move(2.5 2) \lellip rx:.75 ry:3
\move(3.5 2) \lellip rx:.75 ry:3
\textref h:C v:C \htext(3.5 -1.5) {$L$}
\move(3 -1.5) \avec(0 -1.5) \move(4 -1.5) \avec(7 -1.5)
\move(9 0) \lvec(9 4)
\move(10 0) \lvec(10 4)
\move(9 0) \lvec(10 0)
\move(9 1) \lvec(10 1)
\move(9 2) \lvec(10 2)
\move(9 3) \lvec(10 3)
\move(9 4) \lvec(10 4)
\move(9 2) \lellip rx:.2 ry:2.5
\textref h:C v:B \htext(9 4.75) {$\bi{s}$}
\textref h:C v:B \htext(10 4.75) {$\bi{s^\prime}$}
\move(10 2) \lellip rx:.2 ry:2.5
\textref h:C v:C \htext(9.5 -1) {II}
\move(12 0) \lvec(12 4)
\move(11.9 0) \lvec(12.1 0)
\move(11.9 1) \lvec(12.1 1)
\move(11.9 2) \lvec(12.1 2)
\move(11.9 3) \lvec(12.1 3)
\move(11.9 4) \lvec(12.1 4)
\textref h:C v:C \htext(12 -1) {I}
}
\caption{\label{Strip}A one--dimensional strip and the clusters used
to solve a pairwise model on it}
\end{figure}
According to the junction tree property, the joint probability factors
as follows:
\begin{equation}
p(\bi{s}) = \frac{\displaystyle\prod_{\alpha \in {\rm II}}
p_\alpha(\bi{s_\alpha})}
{\displaystyle\prod_{\beta \in {\rm I}} p_\beta(\bi{s_\beta})},
\end{equation}
where I and II denote the sets of chains (except boundary ones) and
ladders, respectively, shown in \Fref{Strip}. As a consequence, the
cumulant expansion
\begin{equation}
S = \sum_{\alpha \in {\rm II}} S_\alpha -
\sum_{\beta \in {\rm I}} S_\beta
\end{equation}
of the entropy is also exact, and the cluster variation method with $R
= {\rm II} \cup {\rm I}$ is exact. For strip width $N = 1$ we obtain
the well--known statement that the Bethe--Peierls approximation is
exact for a one--dimensional chain. Rigorous proofs of this statement
have been given by Brascamp \cite{Bra71} and Percus \cite{Per77}. More
generally, Schlijper has shown \cite{Sch84} that the equilibrium
probability of a $d$--dimensional statistical mechanical model with
finite range interactions is completely determined by its restrictions
(marginals) to $(d-1)$--dimensional slices of width at least equal to
the interaction range.
In the infinite length limit $L \to \infty$ translational invariance
is recovered,
\begin{equation}
\fl \frac{{\cal F}}{L} = \sum_{\bi{s},\bi{s^\prime}} \left[ p_{\rm
II}(\bi{s},\bi{s^\prime}) H_{\rm II}(\bi{s},\bi{s^\prime}) + p_{\rm
II}(\bi{s},\bi{s^\prime}) \ln p_{\rm II}(\bi{s},\bi{s^\prime})\right]
- \sum_{\bi{s}} p_{\rm I}(\bi{s}) \ln p_{\rm I}(\bi{s}),
\end{equation}
and solving for $p_{\rm II}$ we obtain the transfer matrix formalism:
\begin{eqnarray}
\frac{{\cal F}}{L} = - \ln \max_{p_{\rm I}} \left\{
\sum_{\bi{s},\bi{s^\prime}}
p_{\rm I}^{1/2}(\bi{s}) \exp\left[ -
H_{\rm II}(\bi{s},\bi{s^\prime}) \right]
p_{\rm I}^{1/2}(\bi{s^\prime}) \right\} \\
\sum_{\bi{s}} p_{\rm I}(\bi{s}) = 1.
\end{eqnarray}
The natural iteration method (see \sref{VarAlg}) in this case reduces to
the power method for finding the largest eigenvalue of the transfer
matrix.
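To make the correspondence concrete, the following Python sketch (an
illustration with arbitrarily chosen couplings, not taken from the
references) applies the power method to the $2 \times 2$ transfer
matrix of a single Ising chain, that is a strip of width $N = 1$; for
width $N$ the same scheme would act on a $2^N \times 2^N$ matrix. At
the fixed point, $p_{\rm I}$ is the square of the normalized leading
eigenvector.
\begin{verbatim}
import numpy as np

# Transfer matrix for an Ising chain (strip width N = 1):
# M(s, s') = exp(J s s' + h (s + s') / 2), with s, s' = +1, -1.
# The variational maximum over p_I equals the largest eigenvalue
# of M, so that F / L = -ln(lambda_max).
J, h = 0.5, 0.1
s = np.array([1.0, -1.0])
M = np.exp(J * np.outer(s, s) + 0.5 * h * (s[:, None] + s[None, :]))

# Power method: the natural iteration method reduces to this loop.
v = np.ones(2)
for _ in range(200):
    v = M @ v
    v /= np.linalg.norm(v)
lam = v @ M @ v                # Rayleigh quotient at the fixed point
print("F/L   =", -np.log(lam)) # p_I(s) = v(s)**2 at the fixed point

# Cross-check against direct diagonalization of M:
print("exact =", -np.log(np.linalg.eigvalsh(M).max()))
\end{verbatim}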
As a second example, consider a tree, like the one depicted in
\Fref{BetheLattice}, and a model with pairwise interactions defined on
it.
\begin{figure}
\centertexdraw{
\drawdim cm \linewd 0.02
\arrowheadsize l:0.3 w:0.15
\arrowheadtype t:V
\move(0 0) \lvec(.866 .5)
\move(.866 .5) \lvec(1.732 0)
\move(1.732 0) \lvec(1.732 -.5)
\move(1.732 0) \lvec(2.165 .25)
\move(.866 .5) \lvec(.866 1.5)
\move(.866 1.5) \lvec(1.299 1.75)
\move(.866 1.5) \lvec(.433 1.75)
\move(0 0) \lvec(-.866 .5)
\move(-.866 .5) \lvec(-1.732 0)
\move(-1.732 0) \lvec(-1.732 -.5)
\move(-1.732 0) \lvec(-2.165 .25)
\move(-.866 .5) \lvec(-.866 1.5)
\move(-.866 1.5) \lvec(-1.299 1.75)
\move(-.866 1.5) \lvec(-.433 1.75)
\move(0 0) \lvec(0 -1)
\move(0 -1) \lvec(.866 -1.5)
\move(.866 -1.5) \lvec(.866 -2)
\move(.866 -1.5) \lvec(1.299 -1.25)
\move(0 -1) \lvec(-.866 -1.5)
\move(-.866 -1.5) \lvec(-.866 -2)
\move(-.866 -1.5) \lvec(-1.299 -1.25)
\textref h:C v:B \htext(0 .1) {0}
\textref h:L v:B \htext(.966 .5) {1}
\textref h:R v:B \htext(-.966 .5) {2}
\textref h:L v:B \htext(.1 -1) {$d_0 = 3$}
}
\caption{\label{BetheLattice}A small portion of a tree}
\end{figure}
In this case the probability factors according to
\begin{equation}
p(\bi{s}) = \frac{\displaystyle\prod_{\langle i j \rangle} p_{ij}(s_i,s_j)}
{\displaystyle\prod_{i} p_i^{d_i-1}(s_i)},
\end{equation}
where $\langle i j \rangle$ denotes a pair of adjacent nodes. The
cumulant expansion of the entropy is therefore
\begin{equation}
S = \sum_{\langle i j \rangle} S_{ij} - \sum_{i} (d_i - 1) S_i,
\end{equation}
and the pair approximation of the CVM (coinciding with Bethe--Peierls and
junction graph) is exact. Recently this property has been exploited to
study models on finite connectivity random graphs, which strictly
speaking are not tree--like: loops are present, but in the
thermodynamic limit their typical length scales like $\ln N$ \cite{Bollobas}.
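The factorization can be checked directly on a small tree. The
following Python sketch (a toy numerical check, with arbitrary
coupling and field values) enumerates a three--spin chain, for which
$d_2 = 2$ and $d_1 = d_3 = 1$, and verifies that the exact Boltzmann
distribution coincides with $p_{12}\,p_{23}/p_2$.
\begin{verbatim}
import itertools
import numpy as np

# Exact check of p = prod_<ij> p_ij / prod_i p_i^(d_i - 1)
# on the smallest nontrivial tree: the chain 1 - 2 - 3.
J, h = 0.7, 0.3
conf = list(itertools.product([-1, 1], repeat=3))
w = np.array([np.exp(J * (s1 * s2 + s2 * s3) + h * (s1 + s2 + s3))
              for s1, s2, s3 in conf])
p = w / w.sum()                      # exact joint distribution

def marg(idx):
    """Marginal distribution over the variables listed in idx."""
    out = {}
    for c, pc in zip(conf, p):
        key = tuple(c[i] for i in idx)
        out[key] = out.get(key, 0.0) + pc
    return out

p12, p23, p2 = marg([0, 1]), marg([1, 2]), marg([1])
for (s1, s2, s3), pc in zip(conf, p):
    assert np.isclose(pc, p12[(s1, s2)] * p23[(s2, s3)] / p2[(s2,)])
print("tree factorization verified")
\end{verbatim}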
As a final example, consider the so--called (square) cactus lattice
(the interior of a Husimi tree), depicted in \Fref{Cactus}.
\begin{figure}
\centertexdraw{
\drawdim cm \linewd 0.02
\arrowheadsize l:0.3 w:0.15
\arrowheadtype t:V
\move(-1.75 -1.5) \lvec(-.25 -1.5)
\move(1.75 -1.5) \lvec(.25 -1.5)
\move(-1.75 -.5) \lvec(1.75 -.5)
\move(-1.75 .5) \lvec(1.75 .5)
\move(-1.75 1.5) \lvec(-.25 1.5)
\move(1.75 1.5) \lvec(.25 1.5)
\move(-1.5 -1.75) \lvec(-1.5 -.25)
\move(-1.5 1.75) \lvec(-1.5 .25)
\move(-.5 -1.75) \lvec(-.5 1.75)
\move(.5 -1.75) \lvec(.5 1.75)
\move(1.5 -1.75) \lvec(1.5 -.25)
\move(1.5 1.75) \lvec(1.5 .25)
}
\caption{\label{Cactus}A small portion of a square cactus lattice}
\end{figure}
Here the probability factors according to
\begin{equation}
p(\bi{s}) = \frac{\displaystyle\prod_{\opensquare}
p_{\opensquare}(\bi{s_{\opensquare}})}{\displaystyle\prod_i p_i(s_i)},
\end{equation}
the entropy cumulant expansion takes the form
\begin{equation}
S = \sum_{\opensquare} S_{\opensquare} - \sum_{i} S_i,
\end{equation}
and the CVM with $R$ made of square plaquettes and sites is
exact. Again, this coincides with the junction graph method and, if
function nodes are associated with square plaquettes (so that the
corresponding factor graph is tree--like), with Bethe--Peierls.
\subsection{Realizability}
We have seen that when the probability factors in a suitable way, the
CVM can be used to find an exact solution. By analogy, we could ask
whether, as in the mean field approximation, CVM approximations can
yield an estimate of the joint probability distribution as a function
of the cluster distributions, in a factorized form. In the general
case, the answer is negative. One cannot, using a trial factorized
form like
\begin{equation}
\prod_\alpha [ p_\alpha(\bi{s_\alpha}) ]^{a_\alpha}
\label{CVMproduct}
\end{equation}
(which would lead to a free energy like that in Equations
(\ref{CVMFree}) and (\ref{ClusterFree})), obtain a joint probability
distribution which marginalizes down to the cluster probability
distributions used as a starting point. As a consequence, we have no
guarantee that the CVM free energy is an upper bound to the exact free
energy. Moreover, in sufficiently frustrated problems, the cluster
probability distributions cannot even be regarded as marginals of a
joint probability distribution \cite{Sch88}.
It can be easily checked that \Eref{CVMproduct} is not, in the general
case, a probability distribution. It is not normalized and therefore
its marginals do not coincide with the $p_\alpha$'s used to build
it. At best, one can show that
\begin{equation}
\prod_\alpha [ p_\alpha(\bi{s_\alpha}) ]^{a_\alpha} \propto
\exp[-H(\bi{s})],
\label{FactorProp}
\end{equation}
but the normalization constant is unknown. This has been proven in
\cite{WaiJaaWil} at the Bethe--Peierls level, and the proof can be
easily generalized to any CVM approximation.
Let us now focus on a very simple example. Consider three Ising
variables, $s_i = \pm 1$, $i = 1, 2, 3$, with the following node and
pair probabilities:
\begin{eqnarray}
p_i(s_i) = 1/2 \qquad i = 1, 2, 3 \\
p_{ij}(s_i,s_j) = \frac{1 + c s_i
s_j}{4}, \qquad -1 \le c \le 1, \qquad i < j.
\end{eqnarray}
A joint $p(s_1,s_2,s_3)$ marginalizing to the above probabilities
exists only for $-1/3 \le c \le 1$, which shows clearly that the constraint
set \Eref{CompConstr} is approximate, and in particular it can be too
loose. For instance, in \cite{PelPre} it has been shown that due to
this problem the Bethe--Peierls approximation for the triangular Ising
antiferromagnet predicts, at low temperature, unphysical results for
the correlations and a negative entropy.
Moreover, the joint probability $p(s_1,s_2,s_3)$ is given by the
CVM--like factorized form
\begin{equation}
\frac{p_{12}(s_1,s_2) p_{13}(s_1,s_3) p_{23}(s_2,s_3)}{p_1(s_1)
p_2(s_2) p_3(s_3)}
\end{equation}
only if $c = 0$, that is if the variables are completely uncorrelated.
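This can be checked numerically. Since the prescribed marginals are
symmetric under permutations of the three variables, a compatible
joint, if one exists, can be symmetrized, leaving a single free
parameter $t$ in front of the three--spin term. The following Python
sketch (an illustration only) scans $t$ and confirms that a compatible
joint exists precisely for $-1/3 \le c \le 1$.
\begin{verbatim}
import numpy as np

# A permutation-symmetric candidate joint for three Ising spins:
#   p(s) = (1 + c (s1 s2 + s1 s3 + s2 s3) + t s1 s2 s3) / 8.
# It marginalizes to the given p_i and p_ij for any t; a compatible
# joint exists iff some t makes all eight entries nonnegative.
def joint_exists(c, t_grid=np.linspace(-2, 2, 4001)):
    for t in t_grid:
        ok = all(1 + c * (s1*s2 + s1*s3 + s2*s3) + t * s1*s2*s3 >= -1e-12
                 for s1 in (-1, 1) for s2 in (-1, 1) for s3 in (-1, 1))
        if ok:
            return True
    return False

for c in (-0.50, -0.34, -1/3, 0.00, 0.50, 1.00):
    print(f"c = {c:+.3f}: joint exists -> {joint_exists(c)}")
\end{verbatim}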
As a more interesting case, we shall consider in the next subsection
the square lattice Ising model. In this case it has been shown
\cite{Disorder1,Disorder2} that requiring realizability yields an
exactly solvable case.
\subsection{Disorder points}
For a homogeneous (translation--invariant) model defined on a
square lattice, the square approximation of the CVM, that is
the approximation obtained by taking the elementary square plaquettes
as maximal clusters, entails the following approximate entropy
expansion:
\begin{equation}
S \simeq \sum_{\opensquare} S_{\opensquare} - \sum_{\langle i j \rangle}
S_{ij} + \sum_i S_i.
\label{SquareEntropy}
\end{equation}
The corresponding factorization
\begin{equation}
\prod_{\opensquare}
p_{\opensquare}(\bi{s_{\opensquare}}) \prod_{\langle i j \rangle}
p_{ij}^{-1}(s_i,s_j) \prod_i p_i(s_i)
\label{pDisorder}
\end{equation}
for the probability does not, in general, give an approximation to the
exact equilibrium distribution. Indeed, it does not marginalize to the
cluster distributions and is not even normalized.
One could, however, try to impose that the joint probability given by
the above factorization marginalizes to the cluster
distributions. It turns out that it is sufficient to impose
such a condition on the probability distribution of a $3 \times 3$
square, like the one depicted in \Fref{Square3x3}. It is easy to check
that for an Ising model the CVM--like function
\begin{equation}
\fl
\frac{
p_{4}(s_1,s_2,s_5,s_4)
p_{4}(s_2,s_3,s_6,s_5)
p_{4}(s_4,s_5,s_8,s_7)
p_{4}(s_5,s_6,s_9,s_8)
p_{1}(s_5)}
{p_{2}(s_2,s_5) p_{2}(s_5,s_8)
p_{2}(s_4,s_5) p_{2}(s_5,s_6)}
\end{equation}
marginalizes to the square, pair and site distributions ($p_4$, $p_2$
and $p_1$ respectively) only if odd expectation values vanish and
\begin{equation}
\langle s_i s_k \rangle_{\langle \langle i k \rangle \rangle} =
\langle s_i s_j \rangle_{\langle i j \rangle}^2,
\end{equation}
where the l.h.s.\ is the next nearest neighbour correlation, while the
r.h.s.\ is the square of the nearest neighbour correlation.
\begin{figure}
\centertexdraw{
\drawdim cm \linewd 0.02
\arrowheadsize l:0.3 w:0.15
\arrowheadtype t:V
\move(0 0) \lvec(4 0)
\move(0 2) \lvec(4 2)
\move(0 4) \lvec(4 4)
\move(0 0) \lvec(0 4)
\move(2 0) \lvec(2 4)
\move(4 0) \lvec(4 4)
\textref h:L v:B
\htext(.1 .1) {$s_1$}
\htext(2.1 .1) {$s_2$}
\htext(4.1 .1) {$s_3$}
\htext(.1 2.1) {$s_4$}
\htext(2.1 2.1) {$s_5$}
\htext(4.1 2.1) {$s_6$}
\htext(.1 4.1) {$s_7$}
\htext(2.1 4.1) {$s_8$}
\htext(4.1 4.1) {$s_9$}
}
\caption{\label{Square3x3}A $3 \times 3$ square on the square lattice}
\end{figure}
Leaving aside the trivial non--interacting case, the above condition
is satisfied by an Ising model with nearest neighbour, next nearest
neighbour and plaquette interactions, described by the Hamiltonian
\begin{equation}
H = - J_1 \sum_{\langle i j \rangle} s_i s_j
- J_2 \sum_{\langle \langle i j \rangle \rangle} s_i s_j
- J_4 \sum_{\opensquare} s_i s_j s_k s_l,
\end{equation}
if the couplings satisfy the {\it disorder} condition (see
\cite{Disorder1} and refs.\ therein)
\begin{equation}
\cosh (2 J_1) = \frac{e^{4J_2+2J_4}+e^{-4J_2+2J_4}+2 e^{-2J_2}}
{2\left(e^{2J_2}+e^{2J_4}\right)}.
\end{equation}
This defines a variety in the parameter space, lying in the disordered
phase of the model, and in particular in the region where nearest
neighbour and next nearest neighbour interactions compete. In this case
the square approximation of the CVM yields the exact solution,
including the exact free energy density
\begin{equation}
f = - \ln \left[ \exp(-J_4)+\exp(J_4 - 2J_2) \right],
\end{equation}
and the nearest neighbour correlation
\begin{equation}
g = \langle s_i s_j \rangle_{\langle i j \rangle} =
\frac{\exp(-4J_2) - \cosh(2 J_1)}{\sinh(2J_1)}.
\end{equation}
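As a concrete illustration, the following Python sketch (parameter
values are chosen arbitrarily; the competing regime requires $J_2 <
0$) solves the disorder condition for $J_1$ at given $J_2$ and $J_4$
and evaluates the expressions for $f$ and $g$ quoted above.
\begin{verbatim}
import numpy as np

# Given J2 and J4, solve the disorder condition for J1 > 0 and
# evaluate the exact free energy density f and NN correlation g.
def disorder_line(J2, J4):
    rhs = (np.exp(4*J2 + 2*J4) + np.exp(-4*J2 + 2*J4)
           + 2*np.exp(-2*J2)) / (2*(np.exp(2*J2) + np.exp(2*J4)))
    if rhs < 1.0:
        raise ValueError("no real J1 satisfies the disorder condition")
    J1 = 0.5 * np.arccosh(rhs)
    f = -np.log(np.exp(-J4) + np.exp(J4 - 2*J2))
    g = (np.exp(-4*J2) - np.cosh(2*J1)) / np.sinh(2*J1)
    return J1, f, g

J1, f, g = disorder_line(J2=-0.3, J4=0.1)
print(f"J1 = {J1:.4f}  f = {f:.4f}  g = {g:.4f}")
\end{verbatim}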
Higher order correlations can be derived from the joint probability
\Eref{pDisorder}, for example the two--body correlation function
$\Gamma(x,y) = \langle s(x_0,y_0) s(x_0+x,y_0+y) \rangle$ (where spin
variables have been identified by their coordinates on the lattice),
which simply reduces to a power of the nearest neighbour correlation:
$\Gamma(x,y) = g^{|x|+|y|}$. For this reason a line of disorder points
is often referred to as a one--dimensional line. Another example is
the plaquette correlation:
\begin{equation}
q = \langle s_i s_j s_k s_l \rangle_{\opensquare} =
\frac{e^{4J_4}\left(1-e^{8J_2}\right) +
4 e^{2J_2}\left(e^{2J_4}-e^{2J_2}\right)}
{e^{4J_4}\left(1-e^{8J_2}\right) +
4 e^{2J_2}\left(e^{2J_4}+e^{2J_2}\right)}.
\end{equation}
Finally, since all the pair correlations are given simply as powers of
the nearest--neighbour correlation we can easily calculate the
momentum space correlation function, or structure factor. We first
write $\Gamma(x,y) =
\exp\left(-\displaystyle\frac{|x|+|y|}{\xi}\right)$, where $\xi =
-(\ln g)^{-1}$. After a Fourier transform one finds $S(p_x,p_y) =
S_1(p_x) S_1(p_y)$, where
\begin{equation}
S_1(p) = \frac{\sinh(1/\xi)}{\cosh(1/\xi) - \cos p}.
\end{equation}
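The factorized structure factor is easy to verify numerically. The
following Python sketch (with an illustrative value of $g$) compares
the lattice Fourier sum of $\Gamma(x,0) = g^{|x|}$, truncated where
the exponential decay makes the tail negligible, with the closed form
for $S_1(p)$.
\begin{verbatim}
import numpy as np

# Check S_1(p) = sinh(1/xi) / (cosh(1/xi) - cos p), xi = -1/ln(g),
# against the direct lattice Fourier sum of g**|x|.
g = 0.4
xi = -1.0 / np.log(g)
x = np.arange(-200, 201)           # g**200 is utterly negligible
for p in (0.3, 1.0, 2.5):
    S_sum = np.sum(g ** np.abs(x) * np.cos(p * x))
    S_formula = np.sinh(1/xi) / (np.cosh(1/xi) - np.cos(p))
    print(f"p = {p}: sum = {S_sum:.12f}  formula = {S_formula:.12f}")
\end{verbatim}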
It can be verified that the structure factors calculated by Sanchez
\cite{Sanchez} and (except for a misprint) Cirillo and coworkers
\cite{Cirillo} reduce to the above expression on the disorder line.
\subsection{Wako--Sait\^o--Mu\~noz--Eaton model of protein folding}
There is at least one other case in which the probability factors at the
level of square plaquettes, and the CVM yields the exact solution. It
is the Wako--Sait\^o--Mu\~noz--Eaton model of protein folding
\cite{WakSat1,WakSat2,MunEat1,MunEat2,MunEat3,BruPel1,BruPel2,PelJSTAT}. Here
we will not delve into the details of the model, giving only its
Hamiltonian in the form
\begin{equation}
H = \sum_{i=1}^L \sum_{j=i}^L h_{i,j} x_{i,j}, \qquad x_{i,j} =
\prod_{k=i}^j x_k, \qquad x_k = 0, 1.
\end{equation}
It is a one--dimensional model with arbitrary range multivariable
interactions, but the particular form of these interactions makes an
exact solution possible. A crucial step in this solution was the
mapping to a two--dimensional model \cite{BruPel1}, where the
statistical variables are the $x_{i,j}$'s (see \Fref{MunozEaton} for
an illustration). In terms of these variables the Hamiltonian is
local, and the only price one has to pay is to take into account the
constraints
\begin{equation}
x_{i,j} = x_{i+1,j} x_{i,j-1},
\end{equation}
which can be viewed as local interactions.
\begin{figure}
\begin{center}
\psset{unit=.7cm}
\pspicture(-2,-11)(11,2)
\psline(-1,-.5)(11,-.5)
\psline(.5,-11)(.5,1)
\rput(1,0){1}
\rput(2,0){2}
\rput(3,0){3}
\rput(4,0){4}
\rput(5,0){5}
\rput(6,0){6}
\rput(7,0){7}
\rput(8,0){8}
\rput(9,0){9}
\rput(10,0){10}
\rput(5.5,.5){$j$}
\rput(0,-10){10}
\rput(0,-9){9}
\rput(0,-8){8}
\rput(0,-7){7}
\rput(0,-6){6}
\rput(0,-5){5}
\rput(0,-4){4}
\rput(0,-3){3}
\rput(0,-2){2}
\rput(0,-1){1}
\rput(-.5,-5.5){$i$}
\rput(1,-1){$\circ$}
\rput(2,-1){$\circ$}
\rput(3,-1){$\circ$}
\rput(4,-1){$\circ$}
\rput(5,-1){$\circ$}
\rput(6,-1){$\circ$}
\rput(7,-1){$\circ$}
\rput(8,-1){$\circ$}
\rput(9,-1){$\circ$}
\rput(10,-1){$\circ$}
\rput(2,-2){$\bullet$}
\rput(3,-2){$\bullet$}
\rput(4,-2){$\bullet$}
\rput(5,-2){$\bullet$}
\rput(6,-2){$\circ$}
\rput(7,-2){$\circ$}
\rput(8,-2){$\circ$}
\rput(9,-2){$\circ$}
\rput(10,-2){$\circ$}
\rput(3,-3){$\bullet$}
\rput(4,-3){$\bullet$}
\rput(5,-3){$\bullet$}
\rput(6,-3){$\circ$}
\rput(7,-3){$\circ$}
\rput(8,-3){$\circ$}
\rput(9,-3){$\circ$}
\rput(10,-3){$\circ$}
\rput(4,-4){$\bullet$}
\rput(5,-4){$\bullet$}
\rput(6,-4){$\circ$}
\rput(7,-4){$\circ$}
\rput(8,-4){$\circ$}
\rput(9,-4){$\circ$}
\rput(10,-4){$\circ$}
\rput(5,-5){$\bullet$}
\rput(6,-5){$\circ$}
\rput(7,-5){$\circ$}
\rput(8,-5){$\circ$}
\rput(9,-5){$\circ$}
\rput(10,-5){$\circ$}
\rput(6,-6){$\circ$}
\rput(7,-6){$\circ$}
\rput(8,-6){$\circ$}
\rput(9,-6){$\circ$}
\rput(10,-6){$\circ$}
\rput(7,-7){$\circ$}
\rput(8,-7){$\circ$}
\rput(9,-7){$\circ$}
\rput(10,-7){$\circ$}
\rput(8,-8){$\bullet$}
\rput(9,-8){$\bullet$}
\rput(10,-8){$\bullet$}
\rput(9,-9){$\bullet$}
\rput(10,-9){$\bullet$}
\rput(10,-10){$\bullet$}
\endpspicture
\end{center}
\caption{\label{MunozEaton}A typical configuration of the
Mu\~noz--Eaton model. An empty (resp.\ filled) circle at row $i$ and
column $j$ represents the variable $x_{i,j}$ taking value 0 (resp.\
1).}
\end{figure}
In order to derive the factorization of the probability
\cite{PelJSTAT}, we need first to exploit the locality of
interactions, which allows us to write
\begin{equation}
p(\{x_{i,j}\}) = \frac{p^{(1,2)} p^{(2,3)} \cdots p^{(L-1,L)}}{p^{(2)}
\cdots p^{(L-1)} },
\label{ME-TMfactoring}
\end{equation}
where $p^{(j)}$ denotes the probability of the $j$th row in
\Fref{MunozEaton} and $p^{(j,j+1)}$ denotes the joint probability of
rows $j$ and $j+1$.
As a second step, consider the effect of the constraints. This is best
understood looking at the following example:
\begin{eqnarray}
p^{(j)}(0, \cdots 0_i, 1_{i+1}, \cdots 1) &=& p^{(j)}_{i,i+1}(0,1)
\nonumber \\
&=& \frac{p^{(j)}_{1,2}(0,0) \cdots p^{(j)}_{i,i+1}(0,1) \cdots
p^{(j)}_{j-1,j}(1,1)}{p^{(j)}_2(0) \cdots p^{(j)}_i(0)
p^{(j)}_{i+1}(1) \cdots p^{(j)}_{j-1}(1)}.
\end{eqnarray}
The CVM--like factorization is possible since every factor in the
numerator, except $p^{(j)}_{i,i+1}(0,1)$, cancels with a factor in the
denominator. A similar result can be obtained for the joint
probability of two adjacent rows, and substituting into
\eref{ME-TMfactoring} one eventually gets
\begin{equation}
p(\{x_{i,j}\}) = \prod_{\alpha \in R} p_\alpha(x_\alpha)^{a_\alpha},
\end{equation}
where $R = \{$square plaquettes, corners (on the diagonal), and their
subclusters$\}$ and $a_\alpha$ is the CVM M\"obius number for cluster
$\alpha$.
\section{Cluster Variation Method as an approximation}
\label{Approx}
In most applications the CVM does not yield exact results, and
hence it is worth investigating its properties as an
approximation.
An important issue is the choice of maximal clusters, and in
particular the existence of sequences of approximations (that is,
sequences of choices of maximal clusters) with some property of
convergence to the exact results. This has long been studied in the
literature on applications to translation--invariant lattice
systems, and will be the subject of subsection \ref{Asymptotic}. In
particular, rigorous results concerning sequences of approximations
which converge to the exact solution have been derived by Schlijper
\cite{Sch83,Sch84,Sch85}, providing a sound theoretical basis for the
earlier investigations by Kikuchi and Brush \cite{KikBru}.
Another important issue is related to the practical determination of
the minima of the CVM variational free energy. In the variational
formulation of statistical mechanics the free energy is convex, but
this property here is lost due to the presence of negative $a_\alpha$
coefficients in the entropy expansion. More precisely, it has been
shown \cite{PakAna} that the CVM variational free energy is convex if
\begin{equation}
\forall S \subseteq R \qquad \sum_{\alpha \in R_S} a_\alpha \ge 0
\qquad R_S = \{ \alpha \in R | \exists \beta \subseteq \alpha,
\beta \in S \}.
\end{equation}
Similar conditions have been obtained by McEliece and Yildirim
\cite{McEYil} and Heskes, Albers and Kappen \cite{HAK}.
If this is not the case, multiple minima can appear, and their
determination can be nontrivial. Several algorithms have been
developed to deal with this problem, falling mainly into two classes:
message--passing algorithms, which will be discussed in subsection
\ref{MessPassAlg}, and variational, provably convergent algorithms,
which will be discussed in subsection \ref{VarAlg}.
\subsection{Asymptotic behaviour}
\label{Asymptotic}
The first studies on the asymptotic behaviour of sequences of CVM
approximations are due to Schlijper \cite{Sch83,Sch84,Sch85}. He
showed that it is possible to build sequences of CVM approximations
(that is, sequences of sets of maximal clusters) such that the
corresponding sequence of free energies converge to the exact one, for
a translation--invariant model in the thermodynamic limit. The most
interesting result, related to the transfer matrix idea, is that for a
$d$--dimensional model the maximal clusters considered have to
increase in $d-1$ dimensions only.
These results provided a theoretical justification for the series of
approximations developed by Kikuchi and Brush \cite{KikBru}, who
introduced the $B_{2L}$ series of approximations for
translation--invariant models on the two--dimensional square lattice,
based on zig--zag maximal clusters, as shown in \Fref{KikBruFig}, and
applied it to the zero field Ising model. Based solely on the results
from the $B_2$ (which is equivalent to the plaquette approximation)
and $B_4$ approximations, Kikuchi and Brush postulated a linear
behaviour for the estimate of the critical temperature as a function
of $(2L+1)^{-1}$.
\begin{figure}[h]
\centertexdraw{
\drawdim cm \linewd 0.02
\arrowheadsize l:0.3 w:0.15
\arrowheadtype t:V
\move(-2 -2)
\rlvec(1 -1) \rlvec(1 1)
\rlvec(1 -1) \rlvec(1 1)
\rlvec(1 -1) \rlvec(1 1)
\rlvec(1 -1) \rlvec(1 1)
\textref h:C v:B
\htext(-2 -1.9) {$1$}
\htext(0 -1.9) {$3$}
\htext(3 -1.9) {$\ldots$}
\htext(6 -1.9) {$2L+1$}
\textref h:C v:T
\htext(-1 -3.1) {$2$}
\htext(2 -3.1) {$\ldots$}
\htext(5 -3.1) {$2L$}
}
\caption{Maximal cluster for the $B_{2L}$ approximation}
\label{KikBruFig}
\end{figure}
In \Fref{B2L-Tc} we have reported the inverse critical temperature as
a function of $(2L+1)^{-1}$ for $L = 1$ to 6. The extrapolated inverse
critical temperature is $\beta_c \simeq 0.4397$, to be compared with
the exactly known $\beta_c = \frac{1}{2} \ln(1 + \sqrt{2}) \simeq
0.4407$.
\begin{figure}
\begin{center}
\includegraphics*[scale=.5]{betacvsL.eps}
\end{center}
\caption{\label{B2L-Tc}Inverse critical temperature of the $B_{2L}$
approximation series}
\end{figure}
It is not our purpose here to make a complete finite size scaling
analysis, in the spirit of the coherent anomaly method (see below), of
the CVM approximation series. We limit ourselves to showing the finite
size behaviour of the critical magnetization. More precisely, we have
computed the magnetization of the zero field Ising model on the square
lattice at the exactly known inverse critical temperature, again for
$L = 1$ to 6. \Fref{FracBetaNu} shows that the critical magnetization
vanishes as $(2L+1)^{-\beta/\nu}$, and the fit gives a very good
estimate for the exponent, consistent with the exact result $\beta/\nu
= 1/8$.
\begin{figure}
\begin{center}
\includegraphics*[trim = 0 0 0 50, scale=.5]{B2L-FSS.eps}
\end{center}
\caption{\label{FracBetaNu}Critical magnetization of the $B_{2L}$
approximation series at the exact critical temperature}
\end{figure}
As a further illustration of the asymptotic properties of the $B_{2L}$
series we report in \Fref{TrAFEntropy} the zero temperature entropy
(actually the difference between the extrapolated entropy density and
the $B_{2L}$ estimate) of the Ising triangular antiferromagnet as a
function of $1/L$ \cite{PelPre}. It is clearly seen that
asymptotically $s_L = s_0 - a L^{-\psi}$, and the fit yields the
numerical results $s_0 \approx 0.323126$ (the exact value being
$s \approx 0.323066$) and $\psi \approx 1.7512$ (remarkably close to
$7/4$).
\begin{figure}
\begin{center}
\includegraphics*[scale=.5]{TrAFEntropy.eps}
\end{center}
\caption{\label{TrAFEntropy}Zero temperature entropy of the triangular
Ising antiferromagnet in the $B_{2L}$ approximation series}
\end{figure}
An attempt to extract non--classical critical behaviour from high
precision low and high temperature results from CVM was made by the
present author \cite{CVPAM1,CVPAM2,CVPAM3,CVPAM4}, using Pad\'e and
Adler approximants. This work has led to the development of an 18 ($3
\times 3 \times 2$) site cluster approximation for the simple cubic
lattice Ising model \cite{CVPAM4}, which is probably the largest
cluster ever considered. The results obtained for the Ising model are
still compatible with the most recent estimates \cite{PelVic},
although of lower precision.
The possibility of extracting non--classical critical behaviour from
CVM results by means of the coherent anomaly method, which applies
finite size scaling ideas to series of mean--field--like
approximations, has also been considered. A review of these results
can be found in \cite{CAM}.
\subsection{Message--passing algorithms}
\label{MessPassAlg}
In order to describe this class of algorithms it is useful to start
with the Bethe--Peierls approximation (pair approximation of the CVM)
free energy for the Ising model \Eref{Ising}:
\begin{eqnarray}
\fl {\cal F} = - \sum_i h_i \sum_{s_i} s_i p_i(s_i)
- \sum_{\langle i j \rangle} J_{ij}
\sum_{s_i,s_j} s_i s_j p_{ij}(s_i,s_j) + \nonumber \\
\lo + \sum_{\langle i j \rangle} \sum_{s_i,s_j} p_{ij}(s_i,s_j) \ln
p_{ij}(s_i,s_j)
- \sum_i (d_i-1) \sum_{s_i} p_i(s_i) \ln p_i(s_i) \nonumber \\
\lo + \sum_i \lambda_i \left( \sum_{s_i} p_i(s_i) - 1 \right)
+ \sum_{\langle i j \rangle} \lambda_{ij} \left( \sum_{s_i,s_j}
p_{ij}(s_i,s_j) - 1 \right)
+ \nonumber \\
\lo + \sum_{\langle i j \rangle} \left[ \nu_{i,j} \left(
\sum_{s_i} s_i p_i(s_i) - \sum_{s_i,s_j} s_i p_{ij}(s_i,s_j) \right) +
\right. \nonumber \\
\lo + \left. \nu_{j,i} \left(
\sum_{s_j} s_j p_j(s_j) - \sum_{s_i,s_j} s_j p_{ij}(s_i,s_j) \right)
\right].
\label{BetheIsing}
\end{eqnarray}
One can easily recognize the energy terms, the pair entropy, the site
entropy (with a M\"obius number $-(d_i-1)$, where $d_i$ is the degree
of node $i$), and the Lagrange terms corresponding to the
normalization and pair--site compatibility constraints. Observe that,
due to the presence of normalization constraints, it is enough to impose
the consistency between the spin expectation values given by the site
and pair probabilities.
A physical way of deriving message--passing algorithms for the
determination of the stationary points of the above
free energy is through the introduction of the effective field
representation. Consider the interaction $J_{ik}$ and assume that,
whenever this is not taken into account exactly, its effect on
variable $s_i$ can be replaced by an effective field $h_{i,k}$. This
can be made rigorous by observing that the stationarity conditions
\begin{eqnarray}
\frac{\partial {\cal F}}{\partial p_i(s_i)} = 0 \nonumber \\
\frac{\partial {\cal F}}{\partial p_{ij}(s_i,s_j)} = 0
\label{Stationarity}
\end{eqnarray}
can be solved by writing the probabilities as
\begin{eqnarray}
\fl p_i(s_i) &=& \exp\left[ F_i + \left( h_i +
\sum_{k \, {\rm NN} \, i} h_{i,k} \right) s_i \right]
\\
\fl p_{ij}(s_i,s_j) &=& \exp\left[ F_{ij} +
\left( h_i + \sum_{k \, {\rm NN} \, i}^{k \ne j} h_{i,k} \right) s_i
+ \left( h_j + \sum_{k \, {\rm NN} \, j}^{k \ne i} h_{j,k} \right) s_j +
J_{ij} s_i s_j \right],
\label{p-vs-heff}
\end{eqnarray}
where the effective fields, and the site and pair free energies $F_i$
and $F_{ij}$, are related to the Lagrange multipliers through
\begin{eqnarray}
\lambda_i &=& (d_i - 1)(1 + F_i) \nonumber \\
\lambda_{ij} &=& - 1 - F_{ij} \nonumber \\
\nu_{i,j} &=& h_i + \sum_{k \, {\rm NN} \, i}^{k \ne j} h_{i,k}.
\end{eqnarray}
$F_i$ and $F_{ij}$ are determined by the normalization, but first of
all the effective fields must be computed by imposing the
corresponding compatibility constraints, which can be cast into the
form
\begin{equation}
h_{i,j} = {\rm tanh}^{-1} \left [
{\rm tanh}\left(h_j +
\sum_{k \, {\rm NN} \, j}^{k \ne i} h_{j,k}\right)
{\rm tanh} J_{ij} \right].
\label{heff-iter}
\end{equation}
This is a set of coupled nonlinear equations which is often solved by
iteration, that is an initial guess is made for the $h_{i,j}$'s,
plugged into the r.h.s.\ of \Eref{heff-iter} which returns a new
estimate, and the procedure is then repeated until a fixed point is
(hopefully) reached. The values of the effective fields at the fixed
point can then be used to compute the probabilities according to
\Eref{p-vs-heff}.
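As an illustration of this iterative scheme, the following Python
sketch (a toy implementation on an open chain, where the graph is a
tree and convergence is guaranteed; couplings and fields are
arbitrary) sweeps \Eref{heff-iter} sequentially and then evaluates the
local magnetizations implied by \Eref{p-vs-heff}.
\begin{verbatim}
import numpy as np

# Effective fields h_eff[(i, j)] = h_{i,j} on an open Ising chain:
# the field on spin i that replaces the bond between i and its NN j.
rng = np.random.default_rng(0)
N, J = 20, 0.5
h = rng.choice([-1.0, 1.0], size=N)           # random bimodal fields
nbrs = {i: [j for j in (i - 1, i + 1) if 0 <= j < N] for i in range(N)}
h_eff = {(i, j): 0.0 for i in range(N) for j in nbrs[i]}

for sweep in range(500):
    delta = 0.0
    for (i, j) in h_eff:
        # cavity field on j, excluding the contribution of bond (i, j)
        cav = h[j] + sum(h_eff[j, k] for k in nbrs[j] if k != i)
        new = np.arctanh(np.tanh(cav) * np.tanh(J))
        delta += (new - h_eff[i, j]) ** 2
        h_eff[i, j] = new
    if delta < 1e-24:
        break

# Site magnetizations from the fixed-point effective fields:
m = [np.tanh(h[i] + sum(h_eff[i, k] for k in nbrs[i])) for i in range(N)]
print(f"converged after {sweep + 1} sweeps, m[0] = {m[0]:.6f}")
\end{verbatim}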
Equations like \Eref{heff-iter}, and their generalizations at the CVM
level, have been intensively used in the 1980s to study the average behaviour
of models with quenched random interactions, like Ising spin glass
models. This line of work was initiated by Morita \cite{Mor79}, who
derived an integral equation for the probability distribution of the
effective field, given the probability distributions of the
interactions and fields. In the general case this integral equation takes
the form
\begin{eqnarray}
\fl p_{i,j}(h_{i,j}) = \int \delta \left( h_{i,j} -
{\rm tanh}^{-1} \left [
{\rm tanh}\left(h_j +
\sum_{k \, {\rm NN} \, j}^{k \ne i} h_{j,k}\right)
{\rm tanh} J_{ij} \right] \right) \times \nonumber \\
\lo \times P_j(h_j) dh_j P_{ij}(J_{ij}) dJ_{ij}
\prod_{k \, {\rm NN} \, j}^{k \ne i} p_{j,k} (h_{j,k}) dh_{j,k},
\label{IntegralEquation}
\end{eqnarray}
with simplifications occurring if the probability distributions can be
assumed to be site--independent, which is the most studied
case. Typical calculations concerned: the determination of elements of
the phase diagrams of Ising spin glass models, through the calculation
of the instability loci of the paramagnetic solution; results in the
zero temperature limit, where solutions with a discrete support are
found; iterative numerical solutions of the integral equation. A
review of this line of research until 1986 can be found in
\cite{Kat86}. It is important to notice that the results obtained by
this approach are equivalent to those by the replica method, at the
replica symmetric level.
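For the site--independent case, one standard numerical route is a
population dynamics scheme, sketched below in Python (a minimal
illustration; the choices of a lattice of uniform degree $d = 3$,
bimodal couplings and zero external field are arbitrary). The
distribution $p(h_{\rm eff})$ is represented by a pool of samples, and
\Eref{IntegralEquation} is iterated by repeatedly recomputing a
randomly chosen member of the pool.
\begin{verbatim}
import numpy as np

# Population dynamics for the effective-field distribution:
# draw d - 1 incoming fields and one coupling, recompute one field.
rng = np.random.default_rng(1)
pool_size, d, J, h0 = 10_000, 3, 1.0, 0.0
pool = rng.normal(size=pool_size)       # initial guess for p(h_eff)

for _ in range(200_000):
    incoming = pool[rng.integers(pool_size, size=d - 1)]
    Jij = J * rng.choice([-1.0, 1.0])   # bimodal couplings, p = 1/2
    cavity = h0 + incoming.sum()
    pool[rng.integers(pool_size)] = np.arctanh(np.tanh(cavity)
                                               * np.tanh(Jij))

print("mean |h_eff| =", np.abs(pool).mean())
\end{verbatim}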
The effective field approach is reminiscent of the message--passing
procedure at the heart of the belief propagation (BP) algorithm, and
indeed the messages
appearing in this algorithm are related, in the Ising case, to the
effective fields by $m_{\langle i j \rangle \to i}(s_i) =
\exp(h_{i,j} s_i)$, where $m_{\langle i j \rangle \to i}(s_i)$ denotes
a message going from the NN pair $\langle i j \rangle$ to node $i$.
In order to derive the BP algorithm consider the Bethe--Peierls
approximation for a model with variable nodes $i$ and factor nodes
$a$. The variables $s_i$ need not be limited to two states
and the Hamiltonian is written in the general form \Eref{HsumHa}.
The CVM free energy, with the normalization and compatibility
constraints, can then be written as
\begin{eqnarray}
\fl {\cal F} = - \sum_a \sum_{\bi{s_a}} H_a(\bi{s_a}) p_a(\bi{s_a})
+ \nonumber \\ \lo
+ \sum_a \sum_{\bi{s_a}} p_a(\bi{s_a}) \ln p_a(\bi{s_a})
- \sum_i (d_i-1) \sum_{s_i} p_i(s_i) \ln p_i(s_i) + \nonumber \\
\lo + \sum_i \lambda_i \left( \sum_{s_i} p_i(s_i) - 1 \right)
+ \sum_a \lambda_a \left(
\sum_{\bi{s_a}} p_a(\bi{s_a}) - 1 \right)
+ \nonumber \\
\lo + \sum_a \sum_{i \in a}
\sum_{s_i} \mu_{a,i}(s_i) \left( p_i(s_i) - \sum_{\bi{s_{a \setminus i}}}
p_a(\bi{s_a}) \right),
\label{BetheFree}
\end{eqnarray}
where $\bi{s_{a \setminus i}}$ denotes the set of variables entering
factor node $a$, except $i$.
The stationarity conditions
\begin{eqnarray}
\frac{\partial {\cal F}}{\partial p_i(s_i)} = 0 \nonumber \\
\frac{\partial {\cal F}}{\partial p_a(\bi{s_a})} = 0
\end{eqnarray}
can then be solved, in
analogy with \Eref{p-vs-heff}, by
\begin{eqnarray}
p_i(s_i) &=& \frac{1}{Z_i} \prod_{i \in a}
m_{a \to i}(s_i) \nonumber \\
p_a(\bi{s_a}) &=& \frac{1}{Z_a} \psi_a(\bi{s_a})
\prod_{k \in a} \prod_{k \in b}^{b \ne a} m_{b \to k}(s_k).
\label{p-vs-mess}
\end{eqnarray}
In the particular case of an Ising model with pairwise interactions,
the previously mentioned relationship between messages and effective
fields is evident from the above equation.
Now $Z_i$ and $Z_a$ take care of normalization, and the messages
$m_{a \to i}(s_i)$ are related to the Lagrange multipliers by
\begin{equation}
\mu_{a,k}(s_k) = \sum_{k \in b}^{b \ne a} \ln m_{b \to k}(s_k).
\end{equation}
Notice that the messages can be regarded as exponentials of a new set
of Lagrange multipliers, with the constraints rewritten as in the
following free energy
\begin{eqnarray}
\fl {\cal F} = - \sum_a \sum_{\bi{s_a}} H_a(\bi{s_a}) p_a(\bi{s_a})
+ \nonumber \\ \lo
+ \sum_a \sum_{\bi{s_a}} p_a(\bi{s_a}) \ln p_a(\bi{s_a})
- \sum_i (d_i-1) \sum_{s_i} p_i(s_i) \ln p_i(s_i) + \nonumber \\
\lo + \sum_i \lambda_i \left( \sum_{s_i} p_i(s_i) - 1 \right)
+ \sum_a \lambda_a \left(
\sum_{\bi{s_a}} p_a(\bi{s_a}) - 1 \right)
+ \nonumber \\
\lo + \sum_a \sum_{i \in a}
\sum_{s_i} \ln m_{a \to i}(s_i) \left( (d_i - 1) p_i(s_i) -
\sum_{i \in b}^{b \ne a} \sum_{\bi{s_{b \setminus i}}}
p_b(\bi{s_b}) \right).
\label{BetheFreeRot}
\end{eqnarray}
Again, imposing compatibility between variable nodes and factor nodes,
one gets a set of coupled equations for the messages which, leaving
apart normalization, read
\begin{equation}
m_{a \to i}(s_i) \propto \sum_{\bi{s_{a \setminus i}}} \psi_a(\bi{s_a})
\prod_{k \in a}^{k \ne i} \prod_{k \in b}^{b \ne a} m_{b \to k}(s_k).
\label{BP-mess-upd}
\end{equation}
The above equations, and their iterative solution, are the core of the
BP algorithm. Also, their structure justifies the name ``Sum-Product''
\cite{Kschischang}, which is often given them in the literature on
probabilistic graphical models, and the corresponding term
``Max-Product'' for their zero temperature limit.
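A compact implementation of \Eref{BP-mess-upd} is sketched below in
Python (a toy example with arbitrary couplings, not a production
code): factors carry tables $\psi_a = \exp(-H_a)$, messages are
updated in parallel with damping (see the discussion below), and the
test instance is a frustrated three--spin triangle.
\begin{verbatim}
import itertools
import numpy as np

# Sum-product updates on a tiny factor graph: a triangle of Ising
# spins with one antiferromagnetic bond (a frustrated loop).
q = 2                                   # states per variable
s = np.array([1.0, -1.0])
pair_psi = lambda J: np.exp(J * np.outer(s, s))
factors = [((0, 1), pair_psi(1.0)), ((1, 2), pair_psi(1.0)),
           ((0, 2), pair_psi(-1.0))]
msg = {(a, i): np.ones(q) / q
       for a, (vs, _) in enumerate(factors) for i in vs}

def var_to_factor(i, a):
    """Product of messages into variable i from all factors b != a."""
    out = np.ones(q)
    for b, (vs, _) in enumerate(factors):
        if b != a and i in vs:
            out *= msg[b, i]
    return out

damping = 0.5
for sweep in range(200):
    new_msg = {}
    for a, (vs, psi) in enumerate(factors):
        for i in vs:
            new = np.zeros(q)
            for states in itertools.product(range(q), repeat=len(vs)):
                w = psi[states]
                for k, sk in zip(vs, states):
                    if k != i:
                        w *= var_to_factor(k, a)[sk]
                new[states[vs.index(i)]] += w
            new_msg[a, i] = (damping * msg[a, i]
                             + (1 - damping) * new / new.sum())
    msg = new_msg                      # parallel update with damping

def belief(i):
    b = np.prod([msg[a, i] for a, (vs, _) in enumerate(factors)
                 if i in vs], axis=0)
    return b / b.sum()

print("p_0 =", belief(0))              # [0.5, 0.5] by flip symmetry
\end{verbatim}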
There are several issues which must be considered when discussing the
properties of an iterative algorithm based on \Eref{BP-mess-upd}. First
of all, one could ask whether messages have to be updated sequentially
or in parallel. This degree of freedom does not affect the fixed
points of the algorithm, but it affects the dynamics. This issue has
been considered in some depth by Kfir and Kanter \cite{KfirKanter} in
the context of the decoding of error--correcting codes. In that case
they showed that the sequential update converges twice as fast as
the parallel update.
Convergence is however not guaranteed if the underlying graph is not
tree--like, that is if the pair approximation of the CVM is not
exact. This issue has been investigated theoretically by Tatikonda and
Jordan \cite{TatiJor}, Mooij and Kappen \cite{MooKap}, Ihler et al.\
\cite{Ihler}, who derived sufficient conditions for convergence, and
by Heskes \cite{Heskes2004}, who derived sufficient conditions for the
uniqueness of the fixed point. In practice it is typically observed
that the BP algorithm converges if the frustration due to competitive
interactions, like those characteristic of spin--glass or constraint
satisfaction models, is not too large. In some cases, the trick of
damping, or inertia, can help extend the convergence domain. The
trick consists in replacing the updated message with a weighted
(possibly geometric) average of the old message and the new one
given by \Eref{BP-mess-upd}. The convergence domain of the BP
algorithm has been determined for several problems, like
satisfiability \cite{SPSAT}, graph colouring \cite{SPCOL}, error
correcting codes \cite{KabSaaLDPCC} and spin glasses
\cite{SG-BP-conv}. Within its convergence domain, the BP algorithm is
indeed very fast, and this is its real strength. See the next
subsection for some performance tests and a comparison with provably
convergent algorithms.
Once a fixed point has been obtained it is worth asking whether this
corresponds to a minimum of the free energy or not. This has been
partially solved by Heskes \cite{Heskes}, who has shown that stable
fixed points of the belief propagation algorithm are minima of the CVM
pair approximation free energy, but the converse is not necessarily
true. Actually, examples can be found of minima of the free energy
which correspond to unstable fixed points of the belief propagation
algorithm.
An important advancement in this topic is the {\em generalized belief
propagation} (GBP) algorithm by Yedidia and coworkers
\cite{Yed01}. The fixed points of the GBP algorithm for a certain
choice of clusters correspond to stationary points of the CVM free
energy at the approximation level determined by the same choice of
clusters or, more generally, of a region graph free energy. Actually,
for a given choice of clusters, different GBP algorithms can be
devised. Here only the so--called {\em parent to child} GBP algorithm
\cite{Yed04} will be considered. Other choices are described in
\cite{Yed04}.
In order to better understand this algorithm, notice a few
characteristics of the belief propagation algorithm. First of all,
looking at the probabilities \Eref{p-vs-mess} one can say that a
variable node receives messages from all the factor nodes it belongs
to, while a factor node $a$ receives messages from all the other
factor nodes to which its variable nodes $i \in a$ belong. In
addition, the constraint corresponding to the message $m_{a \to
i}(s_i)$ (see \Eref{BetheFreeRot}) can be written as
\begin{equation}
\sum_{\bi{s_{a \setminus i}}} p_a(\bi{s_a}) =
\sum_{i \in b} \sum_{\bi{s_{b \setminus i}}} p_b(\bi{s_b})
- (d_i - 1) p_i(s_i).
\end{equation}
The parent to child GBP algorithm generalizes these characteristics in
a rather straightforward way. First of all, messages $m_{\alpha \to
\beta}(\bi{s_\beta})$ ($\beta \subset \alpha$) are introduced from
regions (parent regions) to subregions (child regions). Then, the
probability of a region takes into account messages coming from outer
regions to itself and its subregions. Finally, exploiting the property
\Eref{MobiusNumbers} of the M\"obius numbers, the constraint
corresponding to $m_{\alpha \to \beta}(\bi{s_\beta})$ is written in the
form
\begin{equation}
\sum_{\alpha \subseteq \gamma \in R} a_\gamma \sum_{\bi{s_{\gamma \setminus
\beta}}} p_\gamma(\bi{s_\gamma}) = \sum_{\beta \subseteq \gamma \in R}
a_\gamma \sum_{\bi{s_{\gamma \setminus \beta}}}
p_\gamma(\bi{s_\gamma}).
\end{equation}
It can be shown \cite{Yed04} that this new set of constraints is
equivalent to the original one.
To make this more rigorous, consider the free energy given by
Equations (\ref{CVMFree}) and (\ref{ClusterFree}), with the above
compatibility constraints (with Lagrange multipliers $\ln m_{\alpha
\to \beta}(\bi{s_\beta})$) and the usual normalization constraints
(with multipliers $\lambda_\alpha$).
One obtains
\begin{eqnarray}
\fl {\cal F} = \sum_{\gamma \in R} a_\gamma \sum_{\bi{s_\gamma}} \left[
p_\gamma(\bi{s_\gamma}) H_\gamma(\bi{s_\gamma}) + p_\gamma(\bi{s_\gamma}) \ln
p_\gamma(\bi{s_\gamma}) \right] + \sum_{\gamma \in R} \lambda_\gamma
\left[ \sum_{\bi{s_\gamma}} p_\gamma(\bi{s_\gamma}) - 1 \right] +
\nonumber \\
\fl + \sum_{\beta \subset \alpha \in R} \sum_{\bi{s_\beta}}
\ln m_{\alpha \to \beta}(\bi{s_\beta}) \left[
\sum_{\alpha \subseteq \gamma \in R} a_\gamma \sum_{\bi{s_{\gamma \setminus
\beta}}} p_\gamma(\bi{s_\gamma}) - \sum_{\beta \subseteq \gamma \in R}
a_\gamma \sum_{\bi{s_{\gamma \setminus \beta}}}
p_\gamma(\bi{s_\gamma}) \right],
\end{eqnarray}
where it is not necessary to impose all the possible $\alpha \to \beta$
compatibility constraints, but it is enough to impose those which satisfy
$a_\alpha \ne 0$, $a_\beta \ne 0$ and $\beta$ is a direct subregion of
$\alpha$, that is, there is no region $\gamma$ with $a_\gamma \ne 0$
such that $\beta \subset \gamma \subset \alpha$. Notice also that the
Lagrange term corresponding to the $\alpha \to \beta$ constraint can
be written as
\begin{equation}
- \ln m_{\alpha \to \beta}(\bi{s_\beta}) \sum_{\beta \subseteq \gamma
\in R}^{\alpha \nsubseteq \gamma} a_\gamma \sum_{\bi{s_{\gamma
\setminus \beta}}} p_\gamma(\bi{s_\gamma}).
\end{equation}
The stationarity conditions
\begin{equation}
\frac{\partial {\cal F}}{\partial p_\gamma(\bi{s_\gamma})} = 0
\end{equation}
can then be solved, leaving apart normalization, by
\begin{equation}
p_\gamma(\bi{s_\gamma}) \propto \exp\left[ - H_\gamma(\bi{s_\gamma})
\right] \prod_{\beta \subseteq \gamma} \prod_{\beta \subset \alpha
\in R}^{\alpha \nsubseteq \gamma} m_{\alpha \to
\beta}(\bi{s_\beta}),
\label{GBP-p-vs-mess}
\end{equation}
where $\bi{s_\beta}$ denotes the restriction of $\bi{s_\gamma}$ to
subregion $\beta$.
Finally, message update rules can be derived again by the compatibility
constraints, though some care is needed, since in the general case
these constraints are not immediately solved with respect to the
(updated) messages, as occurs in the derivation of
\Eref{BP-mess-upd}. Here one obtains a coupled set of equations in the
updated messages, which can be solved starting from the constraints
involving the smallest clusters.
An example can be helpful here. Consider a model defined on a regular
square lattice, with periodic boundary conditions, and the CVM square
approximation, that is the approximation obtained by taking the
elementary square plaquettes as maximal clusters. The entropy
expansion contains only terms for square plaquettes (with M\"obius
numbers 1), NN pairs (M\"obius numbers -1) and single nodes (M\"obius
numbers 1), as in \Eref{SquareEntropy}. A minimal set of compatibility
constraints includes node--pair and pair--square constraints, and one
has therefore to deal with square--to--pair and pair--to--node
messages, which will be denoted by $m_{ij,kl}(s_i,s_j)$ and
$m_{i,j}(s_i)$ respectively. With reference to the portion of the
lattice depicted in \Fref{SquareLatticePortion} the probabilities,
according to \Eref{GBP-p-vs-mess}, can be written as
\begin{eqnarray}
\fl p_i(s_i) \propto \exp[-H_i(s_i)] \, m_{i,a}(s_i) \, m_{i,j}(s_i)
\, m_{i,l}(s_i) \, m_{i,h}(s_i), \nonumber \\
\fl p_{ij}(s_i,s_j) \propto \exp[-H_{ij}(s_i,s_j)] \, m_{i,a}(s_i) \,
m_{i,l}(s_i) \, m_{i,h}(s_i) \times \nonumber \\
\lo \times m_{j,b}(s_j) \, m_{j,c}(s_j) \, m_{j,k}(s_j) \,
m_{ij,ab}(s_i,s_j) \,
m_{ij,lk}(s_i,s_j), \nonumber \\
\fl p_{ijkl}(s_i,s_j,s_k,s_l) \propto \exp[-H_{ijkl}(s_i,s_j,s_k,s_l)]
\, m_{i,a}(s_i) \, m_{i,h}(s_i) \, m_{j,b}(s_j) \, m_{j,c}(s_j) \times
\nonumber \\
\lo \times m_{k,d}(s_k) \, m_{k,e}(s_k) \, m_{l,f}(s_l) \,
m_{l,g}(s_l) \times \nonumber \\
\lo \times m_{ij,ab}(s_i,s_j) \,
m_{jk,cd}(s_j,s_k) \, m_{kl,ef}(s_k,s_l) \, m_{li,gh}(s_l,s_i).
\end{eqnarray}
\begin{figure}
\centertexdraw{
\drawdim cm \linewd 0.02
\arrowheadsize l:0.3 w:0.15
\arrowheadtype t:V
\move(0 2) \lvec(6 2)
\move(0 4) \lvec(6 4)
\move(2 0) \lvec(2 6)
\move(4 0) \lvec(4 6)
\textref h:L v:B \htext(2.1 2.1) {$i$}
\textref h:R v:B \htext(3.9 2.1) {$j$}
\textref h:L v:T \htext(2.1 3.9) {$l$}
\textref h:R v:T \htext(3.9 3.9) {$k$}
\textref h:C v:T \htext(2 -0.1) {$a$}
\textref h:C v:T \htext(4 -0.1) {$b$}
\textref h:L v:C \htext(6.1 2) {$c$}
\textref h:L v:C \htext(6.1 4) {$d$}
\textref h:C v:B \htext(4 6.1) {$e$}
\textref h:C v:B \htext(2 6.1) {$f$}
\textref h:R v:C \htext(-0.1 4) {$g$}
\textref h:R v:C \htext(-0.1 2) {$h$}
}
\caption{\label{SquareLatticePortion}A portion of the square lattice}
\end{figure}
Imposing node--pair and pair--square constraints one gets equations
like
\begin{eqnarray}
\fl \exp[-H_i(s_i)] \, m_{i,j}(s_i)
\propto \sum_{s_j} \exp[-H_{ij}(s_i,s_j)] \times \nonumber \\
\lo \times m_{j,b}(s_j) \, m_{j,c}(s_j) \,
m_{j,k}(s_j) \, m_{ij,ab}(s_i,s_j) \, m_{ij,lk}(s_i,s_j), \nonumber \\
\fl \exp[-H_{ij}(s_i,s_j)] \, m_{i,l}(s_i) \, m_{j,k}(s_j) \,
m_{ij,lk}(s_i,s_j) \propto \sum_{s_k,s_l}
\exp[-H_{ijkl}(s_i,s_j,s_k,s_l)] \times \nonumber \\
\lo \times m_{k,d}(s_k) \,
m_{k,e}(s_k) \, m_{l,f}(s_l) \, m_{l,g}(s_l) \times \nonumber \\
\lo \times m_{jk,cd}(s_j,s_k) \, m_{kl,ef}(s_k,s_l) \, m_{li,gh}(s_l,s_i).
\end{eqnarray}
The above equations can be viewed as a set of equations in the updated
messages at iteration $t+1$, appearing in the l.h.s., given the
messages at iteration $t$, appearing in the r.h.s. It is clear that
one has first to calculate the updated pair--to--site messages
according to the first equation, and then the updated square--to--pair
messages according to the second one, using in the l.h.s.\ the updated
pair--to--site messages just obtained.
GBP (possibly with damping) typically exhibits better convergence
properties (and greater accuracy) than BP, but the empirical rule that
a sufficient amount of frustration can prevent convergence is valid
also for GBP. It is therefore fundamental to look for provably
convergent algorithms, which will be discussed in the next
subsection. A variation of the BP algorithm, the conditioned
probability (CP) algorithm, with improved convergence properties, has
recently been introduced \cite{MP-Prop}. The extension of this
algorithm beyond the BP level is however not straightforward.
We conclude the present subsection by mentioning that
techniques like the Thouless--Anderson--Palmer equations, or the
cavity method, both widely used in the statistical physics of spin
glasses, are strictly related to the Bethe--Peierls approximation.
The Thouless--Anderson--Palmer \cite{TAP} equations can be derived
from the Bethe--Peierls free energy for the Ising model, through the
so-called Plefka expansion \cite{Plefka}. One has first to write the
free energy as a function of magnetizations and nearest--neighbour
correlations through the parameterization
\begin{equation}
p_i(s_i) = \frac{1 + s_i m_i}{2} \qquad
p_{ij}(s_i,s_j) = \frac{1 + s_i m_i + s_j m_j + s_i s_j c_{ij}}{4},
\end{equation}
then to solve analytically the stationarity conditions with respect to
the $c_{ij}$'s and finally to expand to second order in the inverse
temperature.
Finally, the cavity method \cite{MezPar86,MezPar87,MezPar01} is
particularly important since it allows to deal with replica symmetry
breaking. The cavity method, though historically derived in a
different way, can be regarded as an alternative choice of messages
and effective fields in the Bethe--Peierls approximation. With
reference to \Eref{p-vs-mess}, introduce messages $m_{k \to a}(s_k)$
from variable nodes to factor nodes according to
\begin{equation}
m_{k \to a}(s_k) = \prod_{k \in b}^{b \ne a} m_{b \to k}(s_k).
\end{equation}
Then the probabilities \Eref{p-vs-mess} become
\begin{eqnarray}
p_i(s_i) &=& \frac{1}{Z_i} \prod_{i \in a}
m_{a \to i}(s_i) \nonumber \\
p_a(\bi{s_a}) &=& \frac{1}{Z_a} \psi_a(\bi{s_a})
\prod_{k \in a} m_{k \to a}(s_k),
\end{eqnarray}
and the message update equations (\ref{BP-mess-upd}) become
\begin{equation}
m_{a \to i}(s_i) \propto \sum_{\bi{s_{a \setminus i}}} \psi_a(\bi{s_a})
\prod_{k \in a}^{k \ne i} m_{k \to a}(s_k).
\end{equation}
The effective fields corresponding to the factor--to--variable
messages $m_{a \to i}(s_i)$ are usually called cavity biases, while
those corresponding to the variable--to--factor messages $m_{i \to
a}(s_i)$ are called cavity fields. In the Ising example above a factor
node is just a pair of NNs and cavity biases reduce to effective
fields $h_{i,j}$, while cavity fields take the form
$\displaystyle{\sum_{k \, {\rm NN} \, i}^{k \ne j} h_{i,k}}$.
The cavity method admits an extension to cases where one step of
replica symmetry breaking occurs \cite{MezPar01,MezPar03}. In such a
case one assumes that there exist many states characterized by
different values of the cavity biases and fields, and introduces the
probability distributions of cavity biases and fields over the
states. From the above message update rules one can then derive
integral equations, similar to \Eref{IntegralEquation}, for the
distributions. These integral equations can in principle be solved by
iterative population dynamics algorithms, but most often one restricts
attention to the zero temperature case, where these distributions have a
discrete support.
The zero temperature case is particularly relevant for hard
combinatorial optimization problems, where 1--step replica symmetry
breaking corresponds to clustering of solutions. Clustering means that
the space of solutions becomes disconnected, made of subspaces which
cannot be reached from one another by means of local moves, and hence
all local algorithms, like BP or GBP, are bound to fail. The cavity
method has been used to solve these kinds of problems in the framework
of the survey propagation algorithm \cite{SPScience}, which has been
shown to be a very powerful tool for constraint satisfaction problems
like satisfiability \cite{SPSAT} and colouring \cite{SPCOL} defined on
finite connectivity random graphs. These graphs are locally tree--like
and therefore all the analysis can be carried out at the
Bethe--Peierls level. A sort of generalized survey propagation capable
of dealing with short loops would really be welcome, but it seems that
realizability issues are crucial here and replica symmetry breaking
can only be introduced when CVM gives an exact solution.
A different approach, still aimed at generalizing the BP algorithm to
situations where replica symmetry breaking occurs, has been suggested
by van Mourik \cite{Jort}, and is based on the analysis of the time
evolution of the BP algorithm.
\subsection{Variational algorithms}
\label{VarAlg}
In the present subsection we discuss algorithms which update
probabilities instead of messages. At every iteration a new estimate
of probabilities, and hence of the free energy, is obtained. These
algorithms are typically provably convergent, and the proof is based
on showing that the free energy decreases at each iteration. This is
of course not possible with BP and GBP algorithms, where the
probabilities and the free energy can be evaluated only at the fixed
point. The price one has to pay is that in variational algorithms one
has to solve the compatibility constraints at every iteration, and
therefore these are double loop algorithms, where the outer loop is
used to update probabilities and the inner loop is used to solve the
constraints.
The natural iteration method (NIM) \cite{Kik74,Kik76} is the oldest
algorithm specifically designed to minimize the CVM variational free
energy. It was originally introduced \cite{Kik74} in the context of
homogeneous models, for the pair and tetrahedron (for the fcc lattice)
approximations. In such cases the compatibility constraints are
trivial. Later \cite{Kik76} it was generalized to cases where the
compatibility constraints cannot be solved trivially. An improved
version of the algorithm, with tunable convergence properties,
appeared in \cite{KiKoKa} and its application is described in some
detail also in \cite{3CVM}, where higher order approximations are
considered.
The algorithm is based on a double loop scheme, where the inner loop
is used to solve the compatibility constraints, so that at each
iteration of the outer loop a set of cluster probabilities which
satisfy the constraints is obtained.
Proofs of convergence, based on showing that the free energy decreases
at every outer loop iteration, exist in many cases, but it has also
been shown that there are non--convergent cases, like the
four--dimensional Ising model \cite{Pretti} in the hypercube
approximation.
We do not discuss this algorithm in detail, since it is rather slow
and better alternatives have been developed recently.
A first step in this direction was the {\it concave--convex procedure}
(CCCP) by Yuille \cite{Yuille}, who started from the observation that
the non--convergence problems of message--passing algorithms arise
from concave terms in the variational free energy, that is from the
entropy of clusters with negative M\"obius numbers. His idea was then
to split the CVM free energy into a convex and a concave part,
\begin{equation}
{\cal F}(\{p_\alpha\}) = {\cal F}_{\rm vex}(\{p_\alpha\}) +
{\cal F}_{\rm cave}(\{p_\alpha\}),
\label{CCCPsplit}
\end{equation}
and to write the update equations to be iterated to a fixed point
as
\begin{equation}
\nabla {\cal F}_{\rm vex}(\{p_\alpha^{(t+1)}\}) = -
\nabla {\cal F}_{\rm cave}(\{p_\alpha^{(t)}\}),
\label{CCCPiter}
\end{equation}
where $p_\alpha^{(t)}$ and $p_\alpha^{(t+1)}$ are successive
iterates. In order to solve the compatibility constraints, at each
iteration of \Eref{CCCPiter}, the Lagrange multipliers enforcing the
constraints are determined by another iterative algorithm where one
solves for one multiplier at a time, and it can be shown that the free
energy decreases at each outer loop iteration. Therefore we have another double
loop algorithm, which is provably convergent, faster than NIM (as we
shall see below), and allows some freedom in the splitting
between convex and concave parts.
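The logic of the update \Eref{CCCPiter} can be seen on a scalar toy
problem (a caricature only, not a CVM free energy), sketched below in
Python: for the double well $f(x) = x^4/4 - x^2$ with the splitting
$f_{\rm vex} = x^4/4$ and $f_{\rm cave} = -x^2$, each update solves
$f'_{\rm vex}(x_{t+1}) = -f'_{\rm cave}(x_t)$ in closed form, and the
objective decreases monotonically.
\begin{verbatim}
import numpy as np

# Concave-convex procedure on f(x) = x**4 / 4 - x**2:
# f_vex'(x_new) = -f_cave'(x_old)  =>  x_new**3 = 2 * x_old.
f = lambda x: x**4 / 4 - x**2
x = 0.1
for t in range(30):
    x_new = np.cbrt(2 * x)
    assert f(x_new) <= f(x) + 1e-12   # monotone decrease, as promised
    x = x_new
print(f"fixed point x = {x:.6f}, exact minimum sqrt(2) = {np.sqrt(2):.6f}")
\end{verbatim}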
A more general and elegant formalism, which will be described in the
following, has however been put forward by Heskes, Albers and Kappen
(HAK) \cite{HAK}. Their basic idea is to consider a sequence of convex
variational free energies such that the sequence of the corresponding
minima tends to the minimum of the CVM free energy. More precisely, if
the CVM free energy ${\cal F}(\{ p_\alpha, \alpha \in R \})$ is
denoted for simplicity by ${\cal F}(p)$, they consider a function
${\cal F}_{\rm conv}(p,p')$, convex in $p$, with the properties
\begin{eqnarray}
{\cal F}_{\rm conv}(p,p') \ge {\cal F}(p), \nonumber \\
{\cal F}_{\rm conv}(p,p) = {\cal F}(p).
\end{eqnarray}
The algorithm is then defined by the update rule for the probabilities
\begin{equation}
p^{(t+1)} = {\rm arg}\min_{p} {\cal F}_{\rm conv}(p,p^{(t)}),
\label{HAKouter}
\end{equation}
and it is easily proved that the free energy decreases at each
iteration and that a minimum of the CVM free energy is recovered at
the fixed point.
A lot of freedom is left in the definition of ${\cal F}_{\rm conv}$,
and strategies of varying complexity and speed can be obtained. NIM
(when convergent) and CCCP can also be recovered as special cases.
The general framework is based on the following three properties.
\begin{enumerate}
\item If $\beta \subset \alpha$, then
\begin{equation}
- S_\alpha + S_\beta = \sum_{\bi{s_\alpha}} p_\alpha(\bi{s_\alpha}) \ln
p_\alpha(\bi{s_\alpha}) - \sum_{\bi{s_\beta}} p_\beta(\bi{s_\beta})
\ln p_\beta(\bi{s_\beta})
\end{equation}
is convex over the constraint set, i.e.\ it is a convex function of
$p_\alpha$ and $p_\beta$ if these satisfy the compatibility constraint
\Eref{CompConstr}.
\item The linear bound
\begin{equation}
S_\beta = - \sum_{\bi{s_\beta}} p_\beta(\bi{s_\beta}) \ln
p_\beta(\bi{s_\beta}) \le - \sum_{\bi{s_\beta}}
p_\beta(\bi{s_\beta}) \ln p'_\beta(\bi{s_\beta}) = S'_\beta
\end{equation}
holds, with equality only for $p'_\beta = p_\beta$.
\item If $\gamma \subset \beta$, and $p_\beta$ and $p_\gamma$
($p'_\beta$ and $p'_\gamma$) satisfy the compatibility constraints,
the bound
\begin{eqnarray}
\fl S_\beta - S_\gamma = - \sum_{\bi{s_\beta}} p_\beta(\bi{s_\beta}) \ln
p_\beta(\bi{s_\beta}) + \sum_{\bi{s_\gamma}} p_\gamma(\bi{s_\gamma})
\ln p_\gamma(\bi{s_\gamma}) \le \nonumber \\
\lo \le - \sum_{\bi{s_\beta}}
p_\beta(\bi{s_\beta}) \ln p'_\beta(\bi{s_\beta}) +
\sum_{\bi{s_\gamma}} p_\gamma(\bi{s_\gamma}) \ln
p'_\gamma(\bi{s_\gamma}) = S'_\beta - S'_\gamma
\end{eqnarray}
holds, and it is tighter than the previous bound. A tighter bound
typically entails faster convergence.
\end{enumerate}
In order to give an example, consider again the CVM square
approximation for a model on a regular square lattice with periodic
boundary conditions and focus on the entropy part of the free energy,
which according to the entropy expansion
\Eref{SquareEntropy} has the form
\begin{equation}
\fl - \sum_{\opensquare} S_{\opensquare} + \sum_{\langle i j \rangle}
S_{ij} - \sum_i S_i = \sum_{\opensquare} p_{\opensquare} \ln
p_{\opensquare} - \sum_{\langle i j \rangle} p_{ij} \ln p_{ij} +
\sum_i p_i \ln p_i.
\end{equation}
This contains both convex (from square and site entropy) and concave
terms (from pair entropy). Notice that the number of plaquettes is
the same as the number of sites, while there are two pairs (e.g.\
horizontal and vertical) per site. This implies that the free energy
is not convex over the constraint set.
Several bounding schemes are possible to define ${\cal F}_{\rm
conv}$. For instance, one can obtain a function which is just convex
over the constraint set by applying property (iii) to the site terms
and half the pair terms, with the result
\begin{equation}
\fl - \sum_{\opensquare} S_{\opensquare} + \sum_{\langle i j \rangle}
S_{ij} - \sum_i S_i \le - \sum_{\opensquare} S_{\opensquare} +
\frac{1}{2} \sum_{\langle i j \rangle} S_{ij} +
\frac{1}{2} \sum_{\langle i j \rangle} S'_{ij} - \sum_i S'_i.
\label{JustConvex}
\end{equation}
In the following the HAK algorithm will always be used with this
bounding scheme.
The NIM can be obtained if, starting from the above expression, one
applies property (ii) to the not yet bounded pair terms, with the
result
\begin{equation}
\fl - \sum_{\opensquare} S_{\opensquare} + \sum_{\langle i j \rangle}
S_{ij} - \sum_i S_i \le - \sum_{\opensquare} S_{\opensquare} +
\sum_{\langle i j \rangle} S'_{ij} - \sum_i S'_i.
\end{equation}
This is clearly a looser bound than the previous one, and hence it
leads to a (much) slower algorithm. In the general case, the NIM
(which of course was formulated in a different way) can be obtained by
bounding all entropy terms except those corresponding to the maximal
clusters. This choice does not always lead to a convex bound (though
in most practically relevant cases this happens) and hence convergence
is not always guaranteed.
The CCCP recipe corresponds to bounding every concave ($a_\beta < 0$)
term by
\begin{equation}
- a_\beta S_\beta \le - S_\beta + (1 - a_\beta) S'_\beta,
\end{equation}
using property (ii). In the present case this gives
\begin{equation}
\fl - \sum_{\opensquare} S_{\opensquare} + \sum_{\langle i j \rangle}
S_{ij} - \sum_i S_i \le - \sum_{\opensquare} S_{\opensquare} -
\sum_{\langle i j \rangle} S_{ij} +
2 \sum_{\langle i j \rangle} S'_{ij} - \sum_i S_i,
\end{equation}
which is convex independently of the constraints, and hence the bound
is again looser than \Eref{JustConvex}.
In all cases one is left with a double loop algorithm, the outer loop
being defined by the update rule for probabilities, and the inner loop
being used for the minimization involved in \Eref{HAKouter}. This
minimization is simpler than the original problem, since the function
to be minimized is convex. In each of the above schemes a particular
technique was proposed for the convex minimization in the inner loop;
these will not be covered in detail here.
A point which is important to notice here is that the bounding
operation gives a new free energy which is structurally different from
a CVM free energy. It must be minimized with respect to $p$ at fixed
$p'$ and, viewed as a function of $p$, it contains an entropy
expansion with coefficients $\tilde a_\beta$ which do not satisfy
anymore the M\"obius relation (\ref{MobiusNumbers}) (for instance, in
the ``just convex over the constraint set'' scheme, we have
$a_{\opensquare} = 1$, $a_{ij} = -1/2$ and $a_i = 0$). This means that
a message--passing algorithm like parent--to--child GBP, which relies
on the M\"obius property, cannot be applied. In \cite{HAK} a different
message--passing algorithm, which can still be viewed as a GBP
algorithm, is suggested.
Observe also that there are entropy--like terms $S'_\beta$ which are
actually linear in $p_\beta$ and must therefore be absorbed in the
energy terms.
The main reason for investigating these double loop, provably
convergent algorithms is the non--convergence of BP and GBP in
frustrated cases. Since BP and GBP, when they converge, are the
fastest algorithms for the determination of the minima of the CVM free
energy, it is worth making some performance tests to evaluate the
speed of the various algorithms. The CPU times reported below refer to
an Intel Pentium 4 processor at 3.06 GHz, using g77 under GNU/Linux.
Consider first a chain of $N$ Ising spins, with ferromagnetic
interactions $J>0$ and random bimodal fields $h_i$ independently drawn
from the distribution
\begin{equation}
p(h_i) = \frac{1}{2} \delta(h_i - h_0) + \frac{1}{2} \delta(h_i + h_0).
\end{equation}
The boundary conditions are open, and the model is exactly solved by
the CVM pair approximation. The various algorithms described are run
from a disordered, uncorrelated state and stopped when the distance
between two successive iterations, defined as the sum of the squared
variations of the messages (or the probabilities, or the Lagrange
multipliers, depending on the algorithm and the loop -- outer or inner
-- considered), falls below a fixed threshold. \Fref{CPU1d} reports the CPU times obtained with
several algorithms, for the case $J = 0.1$, $h_0 = 1$. The HAK
algorithm is not reported since it reduces to BP due to the convexity
of the free energy. It is seen that the CPU time grows linearly with
$N$ for all algorithms except NIM, in which case it goes like
$N^3$. Despite the common linear behaviour, there are
order--of--magnitude differences between the various algorithms. While
BP and CP converge in 4 and 9 seconds respectively for $N = 10^6$, CCCP takes
15 seconds for $N = 10^4$. For NIM, finally, the fixed point is
reached in 12 seconds for $N = 10^2$.
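For concreteness, a minimal Python sketch of the benchmark setup and
of the stopping rule is given below; the function names, the random
number generator and the threshold value are illustrative choices,
not those used in the original tests.
\begin{verbatim}
import numpy as np

def chain_instance(N, J=0.1, h0=1.0, seed=0):
    # Ferromagnetic chain with bimodal random fields:
    # h_i = +h0 or -h0 with equal probability.
    rng = np.random.default_rng(seed)
    fields = h0 * rng.choice([-1.0, 1.0], size=N)
    couplings = np.full(N - 1, J)
    return couplings, fields

def converged(new, old, tol=1e-12):
    # Sum of squared variations between successive iterations
    # (messages, probabilities or Lagrange multipliers).
    return np.sum((np.asarray(new) - np.asarray(old)) ** 2) < tol
\end{verbatim}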
\begin{figure}
\begin{center}
\includegraphics*[scale=.5]{CPUTime-1d.eps}
\end{center}
\caption{\label{CPU1d}CPU times (seconds) for the 1d Ising chain with
random fields}
\end{figure}
As a further test, consider, again at the level of the pair
approximation, the two--dimensional Edwards--Anderson spin glass
model, defined by the Hamiltonian \Eref{Ising} with $h_i = 0$ and
random bimodal interactions $J_{ij}$ independently drawn from the
distribution
\begin{equation}
p(J_{ij}) = (1-p) \delta(J_{ij} - J) + p \, \delta(J_{ij} + J).
\end{equation}
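Such disorder realizations can be drawn, for instance, as in the
following illustrative Python helper (names and defaults are
hypothetical):
\begin{verbatim}
import numpy as np

def ea_couplings(n_bonds, J=0.2, p=0.5, seed=0):
    # Bimodal +/-J couplings: antiferromagnetic (-J) with
    # probability p, ferromagnetic (+J) with probability 1-p.
    rng = np.random.default_rng(seed)
    return J * rng.choice([1.0, -1.0], p=[1.0 - p, p], size=n_bonds)
\end{verbatim}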
Here the frustration effects are even more important and the
non--convergence problem of BP becomes evident. As a rule of
thumb, when the temperature, measured by $J^{-1}$, is small enough and
$p$ (the fraction of antiferromagnetic bonds) is large enough, the BP
algorithm stops converging. The condition for the instability of the
BP fixed point has been computed, in the average case, for Ising spin
glass models with pairwise interactions \cite{SG-BP-conv}. In order to
compare algorithm performances, \Fref{CPU2dP} reports CPU times vs $L$
for $N = L^2$ lattices with periodic boundary conditions, $J = 0.2$
and $p = 1/2$, that is, well into the paramagnetic phase of the
model. The initial guess is a ferromagnetic state with $m_i = 0.9,
\forall i$. It is seen that the CPU times scale roughly as $N^{1.1}$
for all the algorithms considered except NIM, which goes like
$N^{1.8}$. Again the algorithms with roughly linear behaviour are separated by
orders of magnitude. For $L = 320$ BP converges in 6 seconds, HAK in
370 seconds and CCCP in 2460 seconds.
CP has not been considered in this and the following tests, although
empirically its behaviour is seen to be rather close to that of the
HAK algorithm. Its performance is however severely limited as soon
as one considers variables with more than two states, due to a sum over
the configurations of the neighbourhood of a NN pair.
\begin{figure}
\begin{center}
\includegraphics*[scale=.5]{CPUTime-2d-P.eps}
\end{center}
\caption{\label{CPU2dP}CPU times (seconds) for the 2d
Edwards--Anderson model in the paramagnetic phase}
\end{figure}
A similar comparison can be made in the ferromagnetic phase, setting
$J = 0.5$ and $p = 0.1$. Here the CPU times for the BP algorithm
exhibit large fluctuations for different realizations of the disorder,
and the data reported are obtained by averaging over 30 such
realizations. Now all algorithms exhibit comparable scaling
properties, with CPU times growing like $N^{1.5}$--$N^{1.7}$. As far
as absolute values are concerned, for $L = 50$ convergence is reached
in 4, 44, 680 and 1535 seconds by BP, HAK, CCCP and NIM
respectively.
\begin{figure}
\begin{center}
\includegraphics*[scale=.5]{CPUTime-2d-F.eps}
\end{center}
\caption{\label{CPU2dF}CPU times (seconds) for the 2d
Edwards--Anderson model in the ferromagnetic phase}
\end{figure}
A similar scaling analysis was not possible in the glassy phase
(which is unphysically predicted by the pair approximation), due to
the non--convergence of BP and the overly large fluctuations of the
convergence times of the other algorithms.
As a general remark we observe that BP is the fastest algorithm
available whenever it converges. Among the provably convergent
algorithms, the fastest one turns out to be HAK, at least with the
``just convex over the constraint set'' bounding scheme \cite{HAK} used
here.
\section{Conclusions}
\label{Conclusions}
Some aspects of the cluster variation method have been briefly reviewed.
The emphasis was on recent developments, not yet covered by the 1994
special issue of Progress of Theoretical Physics Supplement \cite{PTPS}, and
the focus was on the methodological aspects rather than on the
applications.
The discussion has been based on what can be considered the modern
formulation of the CVM, due to An \cite{An88}, which rests on a
truncation of the cumulant expansion of the entropy in the variational
principle of equilibrium statistical mechanics.
The advancements in this last decade were often due to the interaction
between two communities of researchers, working on statistical physics
and, in a broad sense, probabilistic graphical models for inference
and optimization problems. The interest of both communities currently
lies in heterogeneous problems, while in the past the CVM was
most often applied to translation--invariant lattice models (on this
topic, the only new advancements discussed here have been the attempts
to extract information about critical behaviour from CVM results). The
more general point of view that has to be adopted in studying
heterogeneous problems has been crucial to achieve many of the results
discussed.
The formal properties of the CVM have been better understood by
comparing it with other region--based approximations, like the
junction graph method or the most general formulation of the
Bethe--Peierls approximation (the lowest order of the CVM), which can
also treat non--pairwise interactions. Studying realizability, that is,
the possibility of reconstructing a global probability distribution
from the marginals predicted by the CVM, has led to the discovery of
non--tree--like models for which the CVM gives the exact solution.
A very important step was made by understanding that belief
propagation, a message--passing algorithm widely used in the
literature on probabilistic graphical models, has fixed points which
correspond to stationary points of the Bethe--Peierls
approximation. Belief propagation can thus be regarded as a
powerful algorithm to solve the CVM variational problem, that is to
find minima of the approximate free energy, at the Bethe--Peierls
level. This opened the way to the formulation of generalized belief
propagation algorithms, whose fixed points correspond to stationary
points of the CVM free energy, at higher levels of approximation.
Belief propagation and generalized belief propagation are certainly
the fastest available algorithms for the minimization of the CVM free
energy, but they often fail to converge. Typically this happens when
the problems under consideration are sufficiently frustrated. In order
to overcome this difficulty double loop, provably convergent
algorithms have been devised, for which the free energy can be shown
to decrease at each iteration. These are similar in spirit to the old
natural iteration method by Kikuchi, but orders of magnitude faster,
though not as fast as BP and GBP.
When the frustration due to competitive interactions or constraints is
very strong, like in spin--glass models in the glassy phase or in
constraint satisfaction problems in the hard regime, even double loop
algorithms become useless, since we are faced with the problem of
replica symmetry breaking, corresponding to clustering of
solutions. Very important advancements have been made in recent years
by extending the belief propagation algorithm into this domain. These
results are in a sense at the border of the CVM, since they are at
present confined to the lowest order of the CVM approximation, that is
the Bethe--Peierls approximation.
It will be of particular importance, in view of the applications to
hard optimization problems with non--tree--like structure, to
understand how to generalize these results to higher order
approximations.
\ack
I warmly thank Pierpaolo Bruscolini, Carla Buzano and Marco Pretti,
with whom I have had the opportunity to collaborate and to exchange
ideas about the CVM, Marco Zamparo for a fruitful discussion about
\Eref{FactorProp}, Riccardo Zecchina for many discussions about the
survey propagation algorithm, and the organizers of the Lavin workshop
``Optimization and inference in machine learning and physics'' where I
had the opportunity to discuss an early version of this work.
\section*{References}